EFF Legal Intern Haley Amster contributed to this post.
Over the past year, the use of online proctoring apps has skyrocketed. But while companies have seen upwards of a 500% increase in their usage, legitimate concerns about their invasiveness, potential bias, and efficacy are also on the rise. These concerns even led to a U.S. Senate inquiry letter requesting detailed information from three of the top proctoring companies—Proctorio, ProctorU, and ExamSoft—which combined have proctored at least 30 million tests over the course of the pandemic. Unfortunately, the companies mostly dismissed the senators’ concerns, in some cases stretching the truth about how the proctoring apps work, and in other cases downplaying the damage this software inflicts on vulnerable students.
In one instance, though, these criticisms seem to have been effective: ProctorU announced in May that it will no longer sell fully-automated proctoring services. This is a good step toward eliminating some of the issues that have concerned EFF about ProctorU and other proctoring apps. The artificial intelligence these tools use to detect academic dishonesty has been roundly attacked for its bias and accessibility impacts, and for the clear evidence that it leads to significant false positives, particularly for vulnerable students. While this is not a complete solution to the problems that online proctoring creates—the surveillance is, after all, the product—we hope other online proctoring companies will also seriously consider the danger that these automated systems present.

The AI Shell Game
This reckoning has been a long time coming. For years, online proctoring companies have played fast and loose when talking about their ability to “automatically” detect cheating. On the one hand, they’ve advertised their ability to “flag cheating” with artificial intelligence: ProctorU has claimed to offer “fully automated online proctoring”; Proctorio has touted the automated “suspicion ratings” it assigns test takers; and ExamSoft has claimed to use “Advanced A.I. software” to “detect abnormal student behavior that may signal academic dishonesty.” On the other hand, they’ve all been quick to downplay their use of automation, claiming that they don’t make any final decisions—educators do—and pointing out that their more expensive options include live proctors during exams or video review by a company employee afterward, if you really want top-tier service.
Nowhere was this doublespeak more apparent than in their recent responses to the Senate inquiry. ProctorU “primarily uses human proctoring – live, trained proctors – to assist test-takers throughout a test and monitor the test environment,” the company claimed. Despite this, it has offered an array of automated features for years, such as its entry-level “Record+” which (until now) didn’t rely on human proctors. Proctorio’s “most popular product offering, Automated Proctoring...records raw evidence of potentially-suspicious activity that may indicate breaches in exam integrity.” But don’t worry: “exam administrators have the ability and obligation to independently analyze the data and determine whether an exam integrity violation has occurred and whether or how to respond to it. Our software does not make inaccurate determinations about violations of exam integrity because our software does not make any determinations about breaches of exam integrity.” According to Proctorio’s FAQ, “Proctorio’s software does not perform any type of algorithmic decision making, such as determining if a breach of exam integrity has occurred. All decisions regarding exam integrity are left up to the exam administrator or institution” [emphasis Proctorio’s].
But this blame-shifting has always rung false. Companies can’t both advertise the efficacy of their cheating-detection tools when it suits them, and dodge critics by claiming that the schools are to blame for any problems.
And now, we’ve got receipts: in a telling statistic released by ProctorU in its announcement of the end of its AI-only service, “research by the company has found that only about 10 percent of faculty members review the video” for students who are flagged by the automated tools. (A separate University of Iowa audit they mention found similar results—only 14 percent of faculty members were analyzing the results they received from Proctorio.) This is critical data for understanding why the blame-shifting argument must be seen for what it is: nonsense. “[I]t’s unreasonable and unfair if faculty members” are punishing students based on the automated results without also looking at the videos, says a ProctorU spokesperson—but that’s clearly what has been happening, perhaps the majority of the time, resulting in students being punished based on entirely false, automated allegations. This is just one of the many reasons why proctoring companies must admit that their products are flawed, and schools must offer students due process and routes for appeal when these tools flag them, regardless of what software is used to make the allegations.
We are glad to see that ProctorU is ending AI-only proctoring, but it’s disappointing that it took years of offering an automated service—and causing massive distress to students—before doing so. We’ve also yet to see how ProctorU will limit the other harms that the tools cause, from facial recognition bias to data privacy leaks. But this is a good—and important—way for ProctorU to walk the talk after it admitted to the Senate that “humans are simply better than machines alone at identifying intentional misconduct.”

Human Review Leaves Unanswered Questions
Human proctoring isn’t perfect either. It has been criticized for its invasiveness, and for creating an uncomfortable power dynamic where students are surveilled by a stranger in their own homes. And simply requiring human review doesn’t mean students won’t be falsely accused: ExamSoft told the Senate that it relies primarily on human proctors, claiming that video is “reviewed by the proctoring partner’s virtual proctors—trained human invigilators [exam reviewers]—who also flag anomalies,” and that “discrepancies in the findings are reviewed by a second human reviewer,” after which a report is provided to the institution for “final review and determination.”
But that’s the same ExamSoft that proctored the California Bar Exam, in which over one-third of examinees were flagged (over 3,000). After further review, 98% of those flagged were cleared of misconduct, and only 47 test-takers were implicated. Why, if ExamSoft’s human reviewers carefully examined each potential flag, do the results in this case indicate that nearly all of their flags were still false? If the California Bar hadn’t carefully reviewed these allegations, the already-troubling situation, which included significant technical issues such as crashes and problems logging into the site, last-minute updates to instructions, and lengthy tech support wait times, would have been much worse. (Last month, a state auditor’s report revealed that the California State Bar violated state policy when it awarded ExamSoft a new five-year, $4 million contract without evaluating whether it would receive the best value for the money. One has to wonder what, exactly, ExamSoft is offering that’s worth $4 million given this high false-positive rate.)
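The arithmetic behind those numbers is worth spelling out. Here is a minimal back-of-the-envelope sketch, treating the reported “over 3,000” flags as exactly 3,000, so the percentages are approximate:

```python
# Back-of-the-envelope precision of the California Bar Exam flags,
# using the publicly reported figures cited above. "flagged" is a
# lower bound ("over 3,000"), so these percentages are approximate.

flagged = 3000       # examinees flagged by ExamSoft (at least this many)
implicated = 47      # flags that survived the Bar's human review

precision = implicated / flagged
dismissed = 1 - precision

print(f"Share of flags that held up:  {precision:.1%}")
print(f"Share of flags dismissed:     {dismissed:.1%}")
```

Even with a generous reading of the numbers, fewer than one flag in fifty survived careful human review.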
Unfortunately, additional human review may simply result in teachers and administrators ignoring even more potential false flags, as they further trust the companies to make the decisions for them. We must carefully scrutinize the danger to students whenever schools outsource academic responsibilities to third-party tools, algorithmic or otherwise.
It’s well past time for online proctoring companies to be honest with their users. Each company should release statistics on how many videos are reviewed by humans, at schools or in-house, as well as how many flags are dismissed in each portion of review. This aggregate data would be a first step to understanding the impact of these tools. And the Senate and the Federal Trade Commission should follow up on the claims these companies made in their responses to the senators’ inquiry, which are full of weasel words, misleading descriptions, and other inconsistencies. We’ve outlined our concerns per company below.

ExamSoft
- ExamSoft claimed in its response to the Senate that it doesn’t monitor students’ physical environments. But it does keep a recording of your webcam (audio and visual) the entire time you’re being proctored. This recording, with integrated artificial intelligence software, detects, among other things, “student activity” and “background noise.” That sure sounds like environmental monitoring to us.
- ExamSoft omitted from its Senate letter that there have been data security issues, including at least one breach.
- ExamSoft continues to use automated flagging, and conspicuously did not mention disabilities that can lead students to be flagged for cheating, such as stimming. Automated flagging has already caused problems for exam-takers with diabetes, who have faced restrictions on their food availability and insulin use, and have essentially been told that a behavior flag is unavoidable.
- The company also claimed that its facial recognition system still allows an exam-taker to proceed with examinations even when there is an issue with identity verification—but users report significant issues with the system recognizing them, causing delays and other problems with their exams.
ProctorU
- ProctorU claimed in its response to the Senate that it “prioritizes providing unbiased services,” and that its “experienced and trained proctors can distinguish between behavior related to ‘disabilities, muscle conditions, or other traits’” and “unusual behavior that may be an attempt to circumvent test rules.” The company does not explain the training proctors receive to make these determinations, or how users can ensure that they are treated fairly when they have concerns about accommodations.
- ProctorU also claims to have received fewer than fifteen complaints related to issues with their facial recognition technology, and claims that it has found no evidence of bias in the facial comparison process it uses to authenticate test-taker identity. This is, to put it mildly, very unlikely.
- ProctorU is currently being sued for violating the Illinois Biometric Information Privacy Act (BIPA), after a data breach affected nearly 500,000 users. The company failed to mention this breach in its response, and while it claims its video files are only kept for up to two years, the lawsuit contends that biometric data from the breach dated back to 2012. There is simply no reason to hold onto biometric data for two years, let alone eight.
Proctorio
- Aware of face recognition’s well-documented bias, Proctorio has gone out of its way to claim that it doesn’t use it. While this is good news for privacy, it doesn’t negate concerns about bias. The company still uses automation to determine whether a face is in view during exams—what it calls facial detection. Facial detection may not compare an exam-taker to previous pictures for identification, but it still requires the software to match a face in view against an algorithmic model of what a face looks like at various angles. A software researcher has shown that the facial detection model the company uses “fails to recognize Black faces more than 50 percent of the time.” Separately, Proctorio is facing a lawsuit for misusing the Digital Millennium Copyright Act (DMCA) to take down posts by another security researcher who used snippets of the software’s code in critical commentary online. The company must be more open to criticisms of its automation, and more transparent about its flaws.
- In its response to the Senate, the company claimed that it has “not verified a single instance in which test monitoring was less accurate for a student based on any religious dress, like headscarves they may be wearing, skin tone, gender, hairstyle, or other physical characteristics.” Tell that to the schools that have canceled their contracts due to bias and accessibility issues.
- Lastly, Proctorio continues to promote its automated flagging tools, while dismissing complaints of false positives by shifting the blame onto schools. As with other online proctoring companies, Proctorio should release statistics on how many videos are reviewed by humans, at schools or in-house, as well as how many flags are dismissed as a result.
Just before the long weekend at the end of May, Amazon announced the release of their Sidewalk mesh network. There are many misconceptions about what it is and what it does, so this article will untangle some of the confusion.

It Isn’t Internet Sharing
Much of the press about Amazon Sidewalk has said that it will force you to share your internet or WiFi network. It won’t. It’s a network to connect home automation devices like smart light switches together in more flexible ways. Amazon is opening the network up to partners, the first of which is the Tile tracker.
Sidewalk can use the internet for some features, but generally won’t. When it does, Amazon is limiting its rate to 80 kilobits per second, or 10 kilobytes per second, only about 40% faster than the 56k modems we used in the old days. It is also capped at 500 MB per month, an allowance that would be used up by roughly 14 hours of continuous transfer at the full 80 kbps rate. To be clear: it isn’t going to interfere with your streaming, video calls, or anything else. The average web page is over two megabytes in size, which would take more than three minutes to download at that speed.

What is Sidewalk, Then?
Sidewalk is primarily a mesh network for home automation devices, like Alexa’s smart device features, Google Home, and Apple HomeKit. This mesh network can provide coverage where your home network is flaky. To build the ecosystem, people incorporate their devices into this mesh network.
The first partner company to integrate with Sidewalk is Tile, maker of tracker tags. Sidewalk allows you to use a Tile tag at a distance further than typical Bluetooth range. Sidewalk uses Bluetooth, WiFi, and 900MHz radio to connect the mesh network together. There will be other partner companies; an important thing to understand about the Amazon Sidewalk mesh is that it’s not just Amazon. Other companies will make devices that operate as entities in the network, either as a device like a smart light switch, or as a hub like the Echo and Ring devices.

What is a Mesh Network, Anyway?
Suppose you want to send a birthday card to Alice, I live next door to you, and you know I work with Alice. Rather than sending the card through the postal system, you might give me the card to take to Alice. When I get to work, I run into Bob who sits next to Alice, so I give the card to Bob, who gives it to Alice.
That’s a mesh network. A web of people delivers the message in an ad hoc manner, and saves you postage. Notably, mesh networks work without explicit infrastructure or servers.

How does Amazon Sidewalk Use a Mesh?
Suppose you put an Alexa-controlled light in your bedroom, but the WiFi there is flaky. If you use Alexa to turn the light on or off, sometimes the command doesn’t get through. Let’s also suppose that in that bedroom, the WiFi from your neighbor’s house is stronger than your WiFi. Well, what if when your WiFi doesn’t process your command, your Alexa uses your neighbor’s WiFi instead? That’s what Amazon Sidewalk does, with a very simple mesh, from your Alexa to your neighbor’s WiFi to your light.
Let’s expand on that example. Suppose that you’re out on a walk in your neighborhood and realize you didn’t turn your lamp off. You press a button on your smartphone to turn the lamp off. Your phone passes that message to a nearby house, perhaps the one across the street, which hands that message to another house, and it ends up at your lamp, in much the same way as your birthday card made its way to Alice.
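The hop-by-hop relaying in these examples can be sketched in a few lines of code. To be clear, this is a toy illustration of generic mesh relaying, not Sidewalk’s actual routing protocol (which Amazon has not published); the node names are invented:

```python
from collections import deque

# Toy mesh: each node knows only its immediate neighbors, and a
# message hops node-to-node until it reaches its destination.
# This is NOT Sidewalk's real algorithm, just the generic idea.
neighbors = {
    "phone": ["house_across_street"],
    "house_across_street": ["phone", "neighbor_echo"],
    "neighbor_echo": ["house_across_street", "your_lamp"],
    "your_lamp": ["neighbor_echo"],
}

def relay(source, destination):
    """Breadth-first search: return the chain of hops, or None."""
    queue = deque([[source]])
    visited = {source}
    while queue:
        path = queue.popleft()
        node = path[-1]
        if node == destination:
            return path
        for nxt in neighbors.get(node, []):
            if nxt not in visited:
                visited.add(nxt)
                queue.append(path + [nxt])
    return None  # no mesh route; a real system might fall back to the internet

print(relay("phone", "your_lamp"))
# ['phone', 'house_across_street', 'neighbor_echo', 'your_lamp']
```

The message reaches the lamp through two intermediaries, just as the birthday card reached Alice through two people, and no central server is involved.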
In some situations, Sidewalk won’t be able to route the message via the mesh. Instead, it has to send the message to the internet, and then back from the internet to the mesh network near the destination.
The Sidewalk documents we have seen do not give details of the mesh routing algorithms, such as how messages are routed via the mesh and when or why they enter or leave the internet. So we don’t know how that works. We do know that when Sidewalk tries to send messages without involving the internet, the messages are expected to be small and relatively infrequent, because the bandwidth throttle and total data cap are someone’s “nobody should need anywhere close to this” limits. We don’t know how hard it tries, nor how often it succeeds.

How Is Sidewalk’s Privacy and Security?
Amazon describes the privacy and security of Sidewalk in a privacy and security whitepaper. Amazon also has an overview, a blog post about their goals, an IoT Integration site, and developer documentation for the SDK.
While it does not describe the details of the Sidewalk protocols, its description of the cryptographic security and privacy measures is promising. So is the sketch of the routing. It appears to have some good security and privacy protections. Of course, the proof is in the details and ultimate implementation. Amazon has a reasonable track record of designing, building, and updating security and privacy in AWS and related technologies. It’s in their interest to severely limit what participants in the mesh network learn about other participants, and thus whatever leaks researchers find are likely to be bugs.

What’s the Bad News?
We have a number of concerns about Sidewalk.

Amazon botched the announcement
Most of the articles about Sidewalk focused on the network sharing, without explaining that this is a community mesh network of home automation and related technologies. Even more recent articles, which at least have stopped talking about internet sharing, are instead talking about wireless (WiFi) sharing. It’s been difficult to understand what Sidewalk is and is not. At the end of our investigation, we don’t know that we’ve gotten it right, either. Amazon needs to do a much better job telling us what their new systems do.
To be fair, this is hard! Mesh networking is not widely used for wireless communications because the technology is difficult to implement. But that is all the more reason for Amazon to spend more time describing what Sidewalk is.

There are many missing details
Amazon has published some good overviews, white papers, and even some API descriptions, yet there is much that we still don’t know about Sidewalk. For example, we don’t know the details of the security and privacy measures. Likewise, we don’t know what the mesh routing algorithms are. Thus, there’s no independent analysis of Sidewalk.
Moreover, while we like the sketch of Sidewalk’s security, there will be inevitable transfers of information to Amazon, such as IDs of devices on the new network. We don’t know if there are other information transfers to participating devices, or things Amazon can infer.

It’s a V1 system, so it’s going to have bugs
Even though the initial descriptions of privacy and security show that care went into designing Sidewalk, it’s a version-one system, so there will be bugs in the protocol and the software. There will also be bugs yet to be written in Sidewalk-compatible devices and software made by Amazon and its partners. Being an early adopter of any new technology has the benefit of being early, as well as the risks of being early.

No abuse mitigations
While Sidewalk has been designed for security and privacy, it has not been designed to mitigate abuse. This is a glaring hole.
Amazon’s whitepaper for Sidewalk describes a use case of a lost pet, and the first Sidewalk partner is the Tile tracker. While we all empathize with someone whose pet is missing, and we’ve all wondered where we left our keys, any system that allows one to track a pet allows one to be a stalker. Sidewalk thus creates new opportunities for people to stalk family members, former romantic partners, friends, neighbors, co-workers, and others: just drop a tracker in their handbag or car, and you can track them. This has been our main criticism of Sidewalk (and, to be fair, Tile says they are working on solutions), and it has also been our criticism of Apple’s AirTags. Sidewalk amplifies the existing risk of a surreptitious tracker by giving it the extended reach of every Echo or Ring camera that participates in the Sidewalk network. If Sidewalk systems don’t have proper controls on them, then estranged spouses, ex-roommates, and nosy neighbors can use them to spy from anywhere in the world.
We also are concerned about how Amazon might connect its new Sidewalk technology to one of its most controversial products: Ring home doorbell surveillance cameras. For example, if Ring cameras are tied together through Sidewalk technology, they can form neighborhood-wide video surveillance systems.
While Amazon’s whitepapers indicate that the security and privacy are pretty good, Amazon is silent on these kinds of abuse scenarios. Indeed, their pet use case is a proxy for abuse. We are concerned that we don’t know what we don’t know about the overall ecosystem.

Opt-out rather than opt-in
Perhaps the most important principle in respectful design is user consent. People must be free to autonomously choose whether or not to use a technology, and whether or not another entity may process their personal information. Opt-in systems have far lower participation than opt-out systems, because most people either are not aware of the system and its settings, or don’t take the time to change the settings.
Thus, defaults matter. By making Sidewalk opt-out instead of opt-in, Amazon is ginning up a wider reach of its network, at the cost of genuine user control of their own technologies.
In Sidewalk’s case, there might be a relatively low infosec cost to a person being pushed into the system until they opt out. The major risk is the effect of bugs in the system. It’s low risk, but not no risk.
If Amazon had made its new system opt-in, we might not be writing about it at all. It would have traded slower growth for fewer complaints.

How Do I Turn Sidewalk Off?
If you’ve decided after reading this that you don’t want to use Sidewalk, it’s easy to turn off.
Amazon has a page with instructions on how to turn Sidewalk off. If you do not use Alexa, Echo, or Ring, you won’t be using Sidewalk at all, so you don’t have to worry about turning it off.

Lack of Abuse Mitigations and Opt-Out by Design Are Sidewalk’s Biggest Flaws
Amazon’s Sidewalk system is a mesh network that uses their Echo devices and Ring cameras to improve the reach and reliability of their home automation systems and partner systems like Tile’s tracker. It is not an internet sharing system as some have reported. Its design appears to be privacy-friendly and to have good security. It is a brand-new system, so there will be bugs in it.
The major problem is a lack of mitigations to stop people from using it in abusive ways, such as tracking another person. It is also troubling that Amazon foisted the system on its users, placing on them the burden of opting out, rather than respecting its users’ autonomy and giving the opportunity to opt-in.
The Judiciary Committee of the U.S. House of Representatives recently released a comprehensive series of bills designed to curb the excesses of Big Tech. One of them, the Platform Competition and Opportunity Act, addresses one of the biggest, most obvious problems among the largest tech companies: that they use their deep pockets to buy up services and companies which might have one day competed with them.
We’ve said before that increased scrutiny of mergers and acquisitions is the first step in addressing the lack of competition for Big Tech. Restraining internet giants’ power to squash new competitors can help new services and platforms arise, including ones that are not based on a surveillance business model. It would also encourage those giants to innovate and offer better services, rather than relying on being the only game in town.
Big Tech’s acquisitiveness is well known and has been on the rise. Analysis of Apple’s finances, for example, revealed that over the last six years, the company bought a new company every three to four weeks. Not only do these sales keep startups from ever competing with incumbent powers, they also bring more data under the control of companies that already have too much information on us. This is especially true when one of the draws of a startup’s service was that it provided an alternative to Big Tech’s offering, as we saw when Google bought Fitbit.
The acquisition practices of the largest tech firms have distorted the marketplace. The prospect of a merger or acquisition is now seen as a primary driving force in securing initial investment to launch a startup; in other words, how attractive a company looks as a Big Tech acquisition is now arguably the primary reason a startup gets funded. This makes sense: the venture capital firms that fund startups are ultimately interested in making money, and if the main source of profit in the technology sector comes from merging with Big Tech, rather than competing with it, the investment dollars will flow that way.
The Platform Competition and Opportunity Act requires platforms of a certain size—or those owned by people or companies of a certain size—to prove that each proposed acquisition isn’t anticompetitive. In today’s marketplace, that means Apple, Google, Facebook, Amazon, and Microsoft. These companies would have to show that they’re not trying to buy a service that competed with a similar feature of their platforms. In other words, Facebook, home to Facebook Messenger, would not have been allowed to buy WhatsApp under this law. Platforms of this size would also be prevented from buying a service which is either a competitor or is in the process of growing to be a competitor. In other words, Facebook’s acquisition of Instagram would have drawn more scrutiny under this framework.
Stricter rules for mergers and acquisitions are a common-sense way to keep the big players from growing even bigger. The tech marketplace is top-heavy and concentrated, and the Platform Competition and Opportunity Act will prevent further imbalance in the marketplace.
The ACCESS Act is one of the most exciting pieces of federal tech legislation this session. Today’s tech giants grew by taking advantage of the openness of the early Internet, but have designed their own platforms to be increasingly inhospitable for both user freedom and competition. The ACCESS Act would force these platforms to start to open up, breaking down the high walls they use to lock users in and keep competitors down. It would advance the goals of competition and interoperability, which will make the internet a more diverse, more user-friendly place to be.
We’ve praised the ACCESS Act as “a step towards a more interoperable future.” However, the bill currently before Congress is just a first step, and it’s far from perfect. While we strongly agree with the authors’ intent, some important changes would make sure that the ACCESS Act delivers on its promise.
One of the biggest concerns among proponents of interoperability is that a poorly thought-out mandate could end up harming privacy. Interoperability implies more data sharing, and this, skeptics argue, increases the risk of large-scale abuse. We addressed this supposed paradox head-on in a recent whitepaper, where we explained that interoperability can enhance privacy by giving users more choice and making it easier to switch away from services that are built on surveillance.
Requiring large platforms to share more data does create very real risks. In order to mitigate those risks, new rules for interoperability must be grounded in two principles: user consent and data minimization. First, users should have absolute control over whether or not to share their data: they should be able to decide when to start sharing, and then to rescind that permission at any time. Second, the law must ensure that data which is shared between companies in order to enable interoperability—which may include extremely sensitive data, like private messages—is not used for secondary, unexpected purposes. Relatedly, the law must make sure that “interoperability” is not used as a blanket excuse to share data that users wouldn’t otherwise approve of.
The ACCESS Act already has consent requirements for some kinds of data sharing, and it includes a “non-commercialization” clause that prevents both platforms and their competitors from using data for purposes not directly related to interoperability. These are a good start. However, the authors should amend the bill to make it clear that every kind of data sharing is subject to user consent, that they can withdraw that consent at any time, and that the purpose of “interoperability” is limited to things that users actually want.
Which brings us to our next suggestion...

Define “Interoperability”
The law should say what interoperability is, and what it isn’t. In the original, Senate-introduced version of the bill from 2019, large platforms were required to support “interoperable communications with a user of a competing communications provider.” This rather narrow definition would have limited the scope of the bill to strictly inter-user communications, such as sharing content on social media or sending direct messages to friends.
The new version of the bill is more vague, and doesn’t pin “interoperability” to a particular use case. The term isn’t defined, and the scope of the activities implicated in the newer bill is much broader. This leaves it more open to interpretation.
Such vagueness could be dangerous. Advertisers and data brokers have recently worked to co-opt the rhetoric of interoperability, arguing that Google, Apple, and other developers of user-side software must keep giving them access to sensitive user data in order to promote competition. But as we’ve said before, competition is not an end in itself—we don’t want the ACCESS Act to help more companies compete to exploit your data. Instead, the authors should define interoperability in a way that includes user-empowering interoperability, but explicitly excludes use cases like surveillance advertising.

Let the People Sue
Time and again, we’ve seen well-intentioned consumer protection laws fail to be effective because of a lack of meaningful enforcement. The easiest way to fix that is to give enforcement power to those who would be most affected by the law: the users. That’s why the ACCESS Act needs a private right of action.
In the House draft of the bill, the FTC would be in charge of enforcing the law. This is a lot of responsibility to vest in an agency that’s already overtaxed. Even if the FTC enforces the law in good faith, it may not have the resources to go toe-to-toe with the biggest corporations in the world. And this kind of regulatory enforcement could open the door to regulatory capture, in which giant corporations successfully lobby to fill enforcement agencies with personnel who’ll serve their interests.
The way to make sure that the bill’s policy turns into practice is to give those who might be harmed – users – the right to sue. Users whose privacy and security are compromised because of interfaces opened by the ACCESS Act should be able to take those responsible to court, whether it’s the large platforms or their would-be competitors who break the law.
As we wrote: “Put simply: the ACCESS Act needs a private right of action so that those of us stuck inside dominant platforms, or pounding on the door to innovate alongside or in competition with them, are empowered to protect ourselves.”

Bring back delegability
One of the best ideas from the original version of the ACCESS Act was “delegability.” A delegability mandate would require large platforms to open up client-side interfaces so that users, hobbyist developers, and small companies could create tools that work on top of the platforms’ existing infrastructure. Users would then be free to “delegate” some of their interactions with the large platforms to trusted agents who could help make those platforms serve users’ needs. This type of “follow-on innovation” has been a hallmark of new tech platforms in the past, but it’s been sorely lacking in the ecosystem around today’s tech giants, who assert tight control over how people use their services.
Unfortunately, the version of the ACCESS Act recently introduced in the House has dropped the delegability requirement entirely. This is a major omission, and it severely limits the kinds of interoperability that the bill would create. The authors should look to the older version of the bill and re-incorporate one of the most important innovations that 2019’s ACCESS Act produced.
Government standards as safe harbors, not mandates
The ACCESS Act would establish a multi-stakeholder technical committee which would make recommendations to the FTC about the technical standards that large platforms need to implement to allow interoperability. Many consumer advocates may be tempted to see this as the best way to force big companies to do what the Act tells them. Advocates and lawmakers are (rightly) skeptical of giving Facebook and friends any kind of leeway when it comes to complying with the law.
However, forcing big platforms to use new, committee-designed technical standards may do more harm than good. It will ensure that the standards take a long time to create, and an even longer time to modify. It could mean that platforms that are forced to use those standards must lobby for government approval before changing anything at all, which could prevent them from adding new, user-positive features. It could also mean that the interfaces created in the first round of regulation—reflecting the tech platforms as they exist today—are unable to keep up as the internet evolves, and that they fail to serve their purpose as time goes on. And such clunky bureaucracy may give the tech giants ammunition to argue that the ACCESS Act is a needless, costly tax on innovation.
It’s not necessarily bad to have the government design, or bless, a set of technical standards that implement ACCESS’ requirements. However, the platforms subject to the law should also have the freedom to implement the requirements in other ways. The key will be strong enforcement: regulators (or competitors, through a private right of action) should aggressively scrutinize the interfaces that big platforms design, and the law should impose strict penalties when the platforms build interfaces that are inadequate or anti-competitive. If the platforms want to avoid such scrutiny, they should have the choice to implement the government’s standards instead.
About That Standardization Process
At EFF, we’re no strangers to the ways that standardization processes can be captured by monopolists, and so we’ve paid close attention to the portions of the ACCESS Act that define new technical standards for interoperability. We have three suggestions:
- Fix the technical committee definition. The current draft of the bill calls for each committee to have two or more reps from the dominant company; two or more reps from smaller, competing companies; two or more digital rights/academic reps; and one rep from the National Institute of Standards and Technology. This may sound like a reasonable balance of interests, but it would in theory allow a committee consisting of 100 Facebook engineers, 100 Facebook lawyers, two engineers from a small startup, two academics, and a NIST technologist. Congress should tighten the definition of the technical committee, capping the number of reps from the dominant companies and fixing the ratio of dominant-company reps to the other groups represented on the committee.
- Subject the committee work to public scrutiny and feedback. The work of the technical committee—including access to its mailing lists and meetings, as well as discussion drafts and other technical work—should be a matter of public record. All committee votes should be public. The committee’s final work should be subject to public notice and commentary, and the FTC should ask the committee to revise its designs based on public feedback where appropriate.
- Publish the committee’s final work. The current draft of the ACCESS Act limits access to the committee’s API documentation to “competing businesses or potential competing businesses.” That’s not acceptable. We have long fought for the principle that regulations should be in the public domain, and that includes the ACCESS Act’s API standards. These must be free of any encumbrance, including copyright (and para-copyrights such as anti-circumvention), trade secrecy, or patents, and available for anyone to re-implement. Where necessary, the committee should follow the standardization best practice of requiring participants to covenant not to enforce their patents against those who implement the API.
Ultimately, it’s unlikely that every one of these pieces of policy will make it into the bill. That’s okay—even an imperfect bill can still be a step forward for competition. But these improvements would make sure the new law delivers on its promise, leading to a more competitive internet where everyone has a chance for technological self-determination.
During a March hearing in the House Committee on Energy and Commerce, lawmakers expressed concern over some of the worst content that’s online, including extremist content, falsehoods about COVID-19, and election disinformation. What lawmakers don’t notice is that a lot of the people posting that offensive junk get stopped, again and again, thanks to Section 230.
But it’s people spreading just this type of content that often file lawsuits trying to force their content back online. These unsuccessful lawsuits show that Section 230 has repeatedly stopped disinformation specialists from disseminating their harmful content.
Section 230 stands for the simple idea that you’re responsible for your own speech online—not the speech of others. It also makes clear that online operators, from the biggest platforms to the smallest niche websites, have the right to curate the speech that appears on their site.
Users dedicated to spreading lies or hateful content are a tiny minority, but weakening Section 230 will make their job easier. When content moderation doesn’t go their way—and it usually doesn’t—they’re willing to sue. As the cases below show, Section 230 is rightfully used to quickly dismiss their lawsuits. If lawmakers weaken Section 230, these meritless suits will linger in court longer, costing online services more and making them leery of moderating the speech of known litigious users. That result could make it easier for these users to spread lies online.
Section 230 Protects Moderators Who Remove Hateful Content
James Domen describes himself as a “former homosexual” who now identifies as heterosexual. He created videos that describe being LGBTQ as a harmful choice, and shared them on Vimeo, a video-sharing website. In one video, he described the “homosexual lifestyle” this way: “It’ll ruin your life. It’s devastating. It’ll destroy your life.”
In at least five videos, Domen also condemned a California bill that would have expanded a ban on “sexual orientation change efforts,” or SOCE. Medical and professional groups have for decades widely recognized that efforts to change sexual orientation in various ways, sometimes called “conversion therapy,” are harmful.
Vimeo removed Domen’s videos. In a letter to Domen’s attorney, Vimeo explained that SOCE-related videos “disseminate irrational and stereotypical messages that may be harmful to people in the LGBT community,” because it treated homosexuality as “a mental disease or disorder” that “can and should be treated.” Vimeo bans “hateful and discriminatory” content, and company officials told Domen directly that, in their view, his videos fell into that category.
Forcing a website to publish Domen’s anti-LGBTQ content might serve Domen’s interests, but only at the expense of many other users of the platform. No website should have to face a lengthy and expensive lawsuit over such claims. Because of Section 230, they don’t.
Some lawmakers have proposed carving civil rights claims out of Section 230. But that could have the unintended side effect of allowing lawsuits like Domen’s to continue—making tech companies more skittish about removing anti-LGBTQ content.
Section 230 Protects Moderators Who Remove COVID-19 Falsehoods
Marshall Daniels hosts a YouTube channel in which he has stated that Judaism is “a complete lie” which was “made up for political gain.” Daniels, who broadcasts as “Young Pharaoh,” has also called Black Lives Matter “an undercover LGBTQ Marxism psyop that is funded by George Soros.”
In April 2020, Daniels live-streamed a video claiming that vaccines contain “rat brains,” that HIV is a “biologically engineered, terroristic weapon,” and that Anthony Fauci “has been murdering motherfuckers and causing medical illnesses since the 1980s.”
In May 2020, Daniels live-streamed a video called “George Floyd, Riots & Anonymous Exposed as Deep State Psyop for NOW.” In that video, he claimed that nationwide protests over George Floyd’s murder were “the result of an operation to cause civil unrest, unleash chaos, and turn the public against [President Trump].” According to YouTube, he also stated the COVID-19 pandemic and Floyd’s murder “were covert operations orchestrated by the Freemasons,” and accused Hillary Clinton and her aide John Podesta of torturing children. Near the video’s end, Daniels stated: “If I catch you talking shit about Trump, I might whoop your ass fast.”
YouTube removed both videos, saying that they violated its policy on harassment and bullying.
Daniels sued YouTube, demanding account reinstatement and damages. He claimed that YouTube amounted to a state actor, and had thus violated his First Amendment rights. (The argument that courts should treat social media companies as the government has no basis in the law, as the Ninth Circuit reaffirmed last year.)
In March, a court dismissed most of Daniels’ claims under Section 230. That law protects online services—both large and small—from getting sued for refusing to publish content they don’t want to publish.
Again, Internet freedom was protected by Section 230. No web host should be forced to carry false and threatening content, or QAnon-based conspiracy theories, like those created by Daniels. Section 230 protects moderators who kick out such content.
Section 230 Protects Moderators Who Remove Election Disinformation
The Federal Agency of News LLC, or FAN, is a Russian corporation that purports to be a news service. FAN was founded in the same building as Russia’s Internet Research Agency, or IRA; the IRA became the subject of a criminal indictment in February 2018 for its efforts to meddle in the 2016 U.S. election.
The founder and first General Director of FAN was Aleksandra Yurievna Krylova, who is wanted by the FBI for conspiracy to defraud the U.S. Later in 2018, the FBI unsealed a criminal complaint against FAN’s chief accountant, Elena Khusyaynova. In that complaint, the FBI said that Federal Agency of News was not so different from the IRA. Both were allegedly part of “Project Lakhta,” a Russian operation to interfere with political and electoral systems both in Russia “and other countries, including the United States.”
Facebook shut down more than 270 Russian-language accounts and pages in April 2018, including FAN’s account. Company CEO Mark Zuckerberg said the pages “were controlled by the IRA,” which had “repeatedly acted deceptively and tried to manipulate people in the U.S., Europe, and Russia.” The IRA used a “network of hundreds of fake accounts to spread divisive content and interfere in the U.S. presidential election.” Facebook’s Chief Security Officer stated that the IRA had spent about $100,000 on Facebook ads in the United States.
At this point, one might think that anyone with alleged connections to the Internet Research Agency, including FAN, would lie low. But that’s not what happened. Instead, FAN’s new owner, Evgeniy Zubarev, hired U.S. lawyers and filed a lawsuit against Facebook, claiming that his civil rights had been violated. He demanded that FAN’s account be reinstated, and that FAN be paid damages.
Weakening Section 230 will give frivolous lawsuits like the ones above a major boost. Small companies, with no margin for extra legal costs, will be under more pressure to capitulate to bogus demands over their content moderation.
Section 230 protects basic principles, whether you run a blog with a comment section, an email list with 100 users, or a platform serving millions. You have the right to moderate. You have the right to speak your own mind, and serve other users, without following the dictates of a government commission—and without fear of a bankrupting lawsuit.
Innovation, experimentation and real competition are the best paths forward to a better internet. More lawsuits over everyday content moderation won’t get us there.
In March 2016, “smart” doorbell camera maker Ring was a growing company attempting to market its wireless smart security camera when it received an email from an officer in the Los Angeles Police Department (LAPD) Gang and Narcotics Division, who was interested in purchasing a slew of devices.
The Los Angeles detective wanted 20 cameras, consisting of 10 doorbell cameras and 10 “stick up” cameras, which retailed for nearly $3,000. Ring, headquartered in nearby Santa Monica, first offered a discount but quickly sweetened the deal: “I’d be happy to send you those units free of charge,” a Ring employee told the officer, according to emails released in response to California Public Records Act (CPRA) requests filed by EFF and NBC’s Clark Fouraker. These emails are also the subject of a detailed new report from the Los Angeles Times.
A few months later, in July 2016, Ring was working with an LAPD officer to distribute a discount code that would allow officers to purchase Ring cameras for $50 off. As a growing number of people used his discount code, Ring offered the officer more and more free equipment.
These were not isolated incidents: officers received free equipment either for an investigation or as a reward for their “hard work” helping to promote the sale of Ring through discount codes. Across the LAPD—from the gang division in Downtown to community policing units in East Los Angeles and Brentwood—Ring offered, or officers requested, thousands of dollars’ worth of free products in exchange for officers’ promotion of Ring products to fellow officers and the larger community, seemingly in violation of department prohibitions on both accepting gifts from vendors and endorsing products.
In another incident, the LAPD asked Ring for cameras to aid in an investigation involving a slew of church break-ins. Ring offered to send the police a number of cameras free of charge, but not without recognizing a marketing opportunity: “If the church sees value in the devices, perhaps it's something that they can talk about with their members. Let's talk more about this on the phone, but for now, I'll get those devices sent out ASAP.”
The LAPD released over 3,000 pages of emails from 2016 between Ring representatives and LAPD personnel in response to the CPRA requests. The records show that leading up to Ring’s official launch of partnerships with police departments—which now number almost 150 in California and over 2,000 across the country—Ring worked steadily with Los Angeles police officers to provide free or discounted cameras for official and personal use, and in return, the LAPD worked to encourage the spread of Ring’s products throughout the community. The emails show officers were ready to tout the Ring camera as a device they used themselves, one they “love,” “completely believe in,” and “support.”
For over a year, EFF has been sounding the alarm about Ring and its police partnerships, which have in effect created neighborhood-wide surveillance networks without public input or debate. As part of these partnerships, Ring controls when and how police speak about Ring—with the company often requiring final say over statements and admonishing police departments who stray from the script.
Racial justice and civil liberties advocates have continually pointed out how Ring enables racial profiling. Rather than making people feel safer in their own homes, Ring cameras can often have the reverse effect. By having a supposed crime-fighting tool alert a user every time a person approaches their home, the user can easily get the impression that their home is under siege. This paranoia can turn public neighborhoods filled with innocent pedestrians and workers into de facto police states where Ring owners can report “suspicious” people to their neighbors via Ring’s Neighbors social media platform, or the police. In a recent investigation, VICE found that a vast majority of people labeled “suspicious” were people of color. Ring, with its motion detection alerts, gives residents a digitally aided way of enforcing who does and does not belong in their neighborhood based on their own biases and prejudices.
Ring also has serious implications for First Amendment activities. Earlier this year, EFF reported that LAPD requested footage from Ring cameras related to protests in Los Angeles following the police murder of George Floyd.
These emails further add to these concerns, as they point to a scheme in which public servants have used their positions for private gain and contributed to an environment of fear and suspicion in communities already deeply divided.
When confronted by police encouraging residents to mount security cameras, people should not have to decide whether their local police are operating out of a real concern over safety—or whether they are motivated by the prospect of receiving free equipment.
EFF has submitted a letter raising these concerns and calling on the California Attorney General to initiate a public integrity investigation into the relationship between Ring and the LAPD. The public has a right to know whether officers in their communities have received or are receiving benefits from Ring, and whether those benefits have influenced when and if police have encouraged communities to buy and use Ring cameras. Although the incidents recorded in these emails occurred primarily in 2016, Ring’s police partnerships and influence have only spread in the intervening years. It’s time for the California Department of Justice to step in and use its authority to investigate if and when Ring wielded inappropriate influence over California’s police and sheriff’s departments.
Emails between the LAPD and Ring:
EFF’s Letter to the California Department of Justice on the relationship between the LAPD and Ring:
San Francisco – Nearly two dozen rights groups, including the Electronic Frontier Foundation (EFF), have joined together to tell PayPal and its subsidiary Venmo to shape up their policies on account freezes and closures, as their opaque practices are interfering with payment systems connected to many First Amendment-protected activities.
“Companies like PayPal and Venmo have hundreds of millions of users. Access to their services can directly impact an individual, company, or nonprofit’s ability to survive and thrive in our digital world,” said EFF International Director of Freedom of Expression Jillian York. “But while companies like Facebook and YouTube have faced substantial scrutiny for their history of account closures, financial companies like PayPal have often flown under the radar. Now, the human rights community is sending a clear message that it’s time to change.”
The coalition sent a letter to PayPal and Venmo today, voicing particular concern about account closures that seem to have been used to pressure or single out websites that host controversial—but legal—content. PayPal shut down the account of online bookseller Smashwords over concern about erotic fiction, and also refused to process payments to the whistleblower website Wikileaks. Last year, Venmo was sued for targeting payments associated with Islam or Arab nationalities or ethnicity, and there are also numerous examples of sex worker advocates facing account closures.
Today’s letter calls on PayPal and Venmo to provide more transparency and accountability around their policies and practices for account freezes and closures, including publishing regular transparency reports, providing meaningful notice to users, and offering a timely and meaningful appeals process. These recommendations are in alignment with the Santa Clara Principles on Transparency and Accountability in Content Moderation, a set of principles developed by free expression advocates and scholars to help companies center human rights when moderating user-generated content and accounts.
“More transparency into financial censorship helps civil liberties and human rights advocates see patterns of abuse,” said EFF Chief Program Officer Rainey Reitman. “It’s vital that PayPal and Venmo follow in the steps of other companies and begin publishing annual transparency reports.”
The signers of today’s letter include 7amleh - The Arab Center for the Advancement of Social Media, Access, ACLU of Northern California, American Civil Liberties Union, Article 19, the Center for Democracy and Technology, Center for LGBTQ Economic Advancement & Research (CLEAR), Demand Progress Education Fund, European Legal Support Center (ELSC), Fight for the Future, Freedom of the Press Foundation, Global Voices, Masaar-Technology and Law Community, Mnemonic, New America’s Open Technology Institute, PDX Privacy, the Tor Project, Taraaz, Ranking Digital Rights, Restore the Fourth Minnesota, and SMEX.
For the full letter to PayPal and Venmo:
Unconstitutional Florida Law Barring Platforms from Suspending Politicians Should be Blocked, EFF Tells Court
Tallahassee, Florida—The Electronic Frontier Foundation (EFF) and Protect Democracy urged a federal judge to strike down Florida’s law banning Facebook, Twitter, and other platforms from suspending political candidates’ accounts, saying it unconstitutionally interferes with the First Amendment rights of the companies and their users, and forces companies to give politicians’ speech preferential treatment that other users are denied.
EFF has long criticized large online platforms’ content moderation practices as opaque, inconsistent, and unfair because they often remove legitimate speech and disproportionately harm marginalized populations that struggle to be heard. These are serious problems that have real world consequences, but they don’t justify a law that violates the free speech rights of internet users who don’t happen to be Florida politicians and the private online services on which they rely, EFF said in a brief filed today in U.S. District Court for the Northern District of Florida.
“The First Amendment prevents the government from forcing private publishers to publish the government’s preferred speech, and from forcing them to favor politicians over other speakers. This is a fundamental principle of our democracy,” said EFF Civil Liberties Director David Greene.
The Supreme Court in 1974 unanimously rejected a Florida law requiring newspapers to print candidates’ replies to editorials criticizing them. Government interference with decisions by private entities to edit and curate content is anathema to free speech, the court said.
“The same principle applies here to S.B. 7072,” said Greene.
Florida Governor Ron DeSantis signed the law, set to take effect July 1, to punish social media companies for their speech moderation practices. It follows Facebook’s and Twitter’s bans on former President Donald Trump’s accounts and complaints by lawmakers of both parties that platforms have too much control over what can be said on the internet.
The law gives preferential treatment to political candidates, preventing platforms at any point before an election from canceling their accounts. This gives candidates free rein to violate any platform’s rules with impunity, even when it causes abuse or harassment, or when the speech is unprotected by the First Amendment. Their posts cannot be de-prioritized or annotated; all other users receive no such privilege. The law also limits platforms’ ability to moderate content by entities and individuals with large numbers of followers or readers.
S.B. 7072 does mandate that platforms notify users about takedowns, use clear moderation standards, and take other steps to be more transparent. These are laudable provisions. But the overall framework of the law is unconstitutional. Instead, platforms could address unfair content moderation practices through voluntarily adopting a human rights framework for speech curation such as the Santa Clara Principles.
“Internet users should demand transparency, consistency, and due process in platforms’ removal process,” said EFF Senior Staff Attorney Aaron Mackey. “These voluntary practices can help ensure content moderation comports with human rights principles to free expression without violating the First Amendment, as S.B. 7072 does.”
For the full amicus brief:
San Francisco – On Tuesday, June 15, at 5:30 pm PT, the Electronic Frontier Foundation (EFF) will testify against the San Francisco Police Department (SFPD) at the city’s Sunshine Ordinance Task Force hearing. EFF has registered a complaint against the SFPD for withholding records about a controversial investigation and the use of facial recognition.
In September of last year, SFPD arrested a man suspected of illegally discharging a gun, and a report in the San Francisco Chronicle raised concerns that the arrest came after running the man’s photo through a facial-recognition database. If the SFPD was involved in using facial recognition, that could potentially be a violation of San Francisco’s Community Control Over Police Surveillance (CCOPS) ordinance.
EFF filed a public records request with the SFPD about the investigation and the arrest, but the department released only previously available public statements. EFF appealed to the Sunshine Ordinance Task Force, after which point SFPD produced many more relevant documents. EFF filed a complaint with the task force about SFPD’s original, misleading record release.
At Tuesday’s hearing, EFF Investigative Researcher Beryl Lipton will ask the task force to uphold EFF’s complaint about the SFPD, arguing that San Francisco’s transparency policies won’t work well unless public agencies are held to account when trying to skirt their responsibilities.
San Francisco Sunshine Ordinance Task Force hearing
EFF Investigative Researcher
Tuesday, June 15
LISTEN/CALL IN LINE:
Meeting ID: 100 327 123#
For more information on the hearing:
When it comes to online services, there are a few very large companies whose gravitational effects can alter the entire tech universe. Their size, power, and diverse levers of control mean that there is no single solution that will put right that which they’ve thrown out of balance. One thing is clear—having such large companies with control over so much of our data is not working for users, not working for privacy or freedom of expression, and it’s blocking the normal flow of competition. These giants need to be prevented from using their tremendous power to just buy up competitors, so that they have to actually compete, and so that new competitors are not incentivized to just be acquired. Above all, these giants need to be pushed to make it easy for users to leave, or to use other tools to interact with their data without leaving entirely.
In recognition of this reality, the House Judiciary Committee has released a number of proposed laws which would rein in the largest players in the tech space in order to make a healthier, more competitive internet ecosystem. We’ll have more in-depth analysis of all of them in the coming weeks, but our initial thoughts focus on the proposal which would make using a service on your own terms, or moving between services, much easier: the ACCESS Act.
The “Augmenting Compatibility and Competition by Enabling Service Switching Act”—or ACCESS Act—helps accomplish a goal we’ve long promoted as central to breaking the hold large tech companies have on our data and our business: interoperability.
Today too many tech companies are “roach motels” where our data enters but can never leave, or return to our control. They run services where we only get the features that serve their shareholders’ interests, not our needs. This stymies other innovators, especially those who could move beyond today’s surveillance business models. The ACCESS Act creates a solid framework for change.
Privacy and Agency: Making Interoperability Work for Users
These services have vast troves of information about our lives. The ACCESS Act checks abuse of that data by enforcing transparency and consent. The bill mandates that platforms of a certain size and type make it possible for a user to leave that service and go to a new one, taking some or even all their data with them, while still maintaining the ability to socialize with the friends, customers, colleagues and communities who are still using the service. Under the bill, a user can request the data for themselves or, with affirmative consent, have it moved for them.
Interoperability means more data sharing, which can create new risks: we don't want more companies competing to exploit our data. But as we’ve written, careful safeguards on new data flows can ensure that users have the first and final word on what happens to their information. The guiding principle should be knowing and clear consent.
First, sensitive data should only be moved at the direction of the users it pertains to, and companies shouldn’t be able to use interoperability to expand their nonconsensual surveillance. That’s why the bill includes a requirement for affirmative consent before a user’s data can be ported. It also forbids any secondary use or sharing of the data that does get shared—a crucial corollary that will ensure data can’t be collected for one purpose, then sold or used for something else.
Furthermore, the bill requires covered platforms not to make changes to their interoperability interfaces without approval from the Federal Trade Commission (FTC), except in emergencies. That’s designed to prevent Facebook or other large platforms from making sudden changes that pull the rug out from under competitors. But there are times that the FTC cannot act quickly enough to approve changes. In the event of a security vulnerability or similar privacy or security emergency, the ACCESS Act would allow platforms to address the problem without prior FTC approval.
The bill is not perfect. It lacks some clarity about how much control users will have over ongoing data flows between platforms and their competitors, and it should make it 100% clear that “interoperability” can’t be construed to mean “surveillance advertising.” It also depends on an FTC that has enough staff to promote, rather than stymie, innovation in interoperable interfaces. To make sure the bill’s text turns into action, it should also have a private right of action. Private rights of action allow users themselves to sue a company that fails to abide by the law. This means that users themselves can hold companies accountable in the courts, instead of relying on the often overstretched, under-resourced FTC. It’s not that the FTC should not have oversight power, but that the bill would be strengthened by adding another form of oversight.
Put simply: the ACCESS Act needs a private right of action so that those of us stuck inside dominant platforms, or pounding on the door to innovate alongside or in competition with them, are empowered to protect ourselves.
The bill introduced today is a huge step in bringing much-needed competition to online services. While we believe there are things missing, we are glad to see so many problems being addressed.
The California legislature has been handed what might be their easiest job this year, and they are refusing to do it.
Californians far and wide have spent the pandemic either tethered to their high-speed broadband connections (if they’re lucky), or desperately trying to find ways to make their internet ends meet. School children are using the wifi in parking lots, shared from fast food restaurants. Mobile broadband isn’t cutting it, as anyone who’s been outside of a major city and tried to make a video call on their phone can tell you. Experts everywhere insist we need a bold plan that gives communities, organizations, and nonprofits the ability and the funds to build fiber infrastructure that will serve those individuals who aren’t on the radar of the big telecommunications companies.
Take 60 Seconds to Call Your Representatives Today
Luckily, the California legislature has, sitting on their desks, $7 billion to spend on this public broadband infrastructure. This includes $4 billion to construct a statewide, open-access middle-mile network using California’s highway and utility rights of way. It's a plan that would give California—the world’s fifth largest economy, which is heavily dependent on high-speed internet—one of the largest public broadband fiber networks in the country.
This plan needs only a simple majority to pass. But while Californians are mostly captive to the big telecom and cable companies for whatever high-speed investment they’ve decided will be most profitable, the legislature is captive in a different way: Comcast, AT&T, and other telcos are traditionally some of the biggest lobbyists in the country, and their influence is particularly strong in California. We must convince the legislature to pass Governor Newsom’s plan for a long-term, future-proof investment in our communities. One thousand Californians have already reached out to their representatives to demand that they take action. We need everyone—you, your friends, your family, and anyone else you know in California—to double that number. Speak up today before the legislature decides to sit this one out. Inaction could force California to lose federal dollars for the project. Every day we don’t move forward is another day lost. The state should be breaking ground as soon as possible for what will undoubtedly be a years-long infrastructure project.
TAKE 60 SECONDS TO CALL YOUR REPRESENTATIVES TODAY
If you're unable to call, please send an email. If you can, do both—the future of California's high-speed internet depends on it.
In Privacy Without Monopoly: Data Protection and Interoperability, we took a thorough look at the privacy implications of various kinds of interoperability. We examined the potential privacy risks of interoperability mandates, such as those contemplated by 2020’s ACCESS Act (USA), the Digital Services Act and Digital Markets Act (EU), and the recommendations presented in the Competition and Markets Authority report on online markets and digital advertising (UK).
We also looked at the privacy implications of “competitive compatibility” (comcom, AKA adversarial interoperability), where new services are able to interoperate with existing incumbents without their permission, by using reverse-engineering, bots, scraping, and other improvised techniques common to unsanctioned innovation.
Our analysis concluded that while interoperability created new privacy risks (for example, that a new firm might misappropriate user data under cover of helping users move from a dominant service to a new rival), these risks can largely be mitigated with thoughtful regulation and strong enforcement. More importantly, interoperability also had new privacy benefits, both because it made it easier to leave a service with unsuitable privacy policies, and because this created real costs for dominant firms that did not respect their users’ privacy: namely, an easy way for those users to make their displeasure known by leaving the service.
Critics of interoperability (including the dominant firms targeted by interoperability proposals) emphasize the fact that weakening a tech platform’s ability to control its users weakens its power to defend its users.
They’re not wrong, but they’re not complete either. It’s fine for companies to defend their users’ privacy—we should accept nothing less—but the standards for defending user-privacy shouldn’t be set by corporate fiat in a remote boardroom, they should come from democratically accountable law and regulation.
The United States lags in this regard: Americans whose privacy is violated have to rely on patchy (and often absent) state privacy laws. The country needs—and deserves—a strong federal privacy law with a private right of action.
That’s something Europeans actually have. The General Data Protection Regulation (GDPR), a powerful, far-reaching, and comprehensive (if flawed and sometimes frustrating) privacy law came into effect in 2018.
The European Commission’s pending Digital Services Act (DSA) and Digital Markets Act (DMA) both contemplate some degree of interoperability, prompting two questions:
- Does the GDPR mean that the EU doesn’t need interoperability in order to protect Europeans’ privacy? And
- Does the GDPR mean that interoperability is impossible, because there is no way to satisfy data protection requirements while permitting third-party access to an online service?
We think the answers are “no” and “no,” respectively. Below, we explain why.
Does the GDPR mean that the EU doesn’t need interoperability in order to protect Europeans’ privacy?
Increased interoperability can help to address user lock-in and ultimately create opportunities for services to offer better data protection.
The European Data Protection Supervisor has weighed in on the relation between the GDPR and the Digital Markets Act (DMA), and they affirmed that interoperability can advance the GDPR’s goals.
Note that the GDPR doesn’t directly mandate interoperability, but rather “data portability,” the ability to take your data from one online service to another. In this regard, the GDPR represents the first two steps of a three-step process for full technological self-determination:
- The right to access your data, and
- The right to take your data somewhere else.
The GDPR’s data portability framework is an important start! Lawmakers correctly identified the potential of data portability to help promote competition of platform services and to reduce the risk of user lock-in by reducing switching costs for users.
The law is clear on the duty of platforms to provide data in a structured, commonly used, and machine-readable format, and on users’ right to transmit that data without hindrance from one data controller to another. Where technically feasible, users also have the right to ask the data controller to transmit the data directly to another controller.
Recital 68 of the GDPR explains that data controllers should be encouraged to develop interoperable formats that enable data portability. The WP29, a former official European data protection advisory body, explained that this could be implemented by making application programme interfaces (APIs) available.
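As a rough illustration (a hypothetical service, not any real platform’s API), the “structured, commonly used and machine-readable format” duty can be satisfied by something as simple as a JSON export that another controller can re-import without reverse-engineering:

```python
import json

def export_user_data(user_id: str, store: dict) -> str:
    """Bundle everything held about one user into a portable JSON document."""
    portable = {
        "user_id": user_id,
        "posts": store.get("posts", []),
        "contacts": store.get("contacts", []),
    }
    return json.dumps(portable, indent=2)

# Hypothetical data held by the exporting service
store = {"posts": [{"text": "hello"}], "contacts": ["bob"]}
blob = export_user_data("alice", store)

# Another controller can parse the same structure directly.
print(json.loads(blob)["posts"][0]["text"])
```

The point is not the specific fields, which are invented here, but that a documented, commonly used format removes the technical excuse for locking data in.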
However, the GDPR’s data portability limits and interoperability shortcomings have become more obvious since it came into effect. These shortcomings are exacerbated by lax enforcement. Data portability rights are insufficient to get Europeans the technological self-determination the GDPR seeks to achieve.
The limits the GDPR places on which data you have the right to export, and when you can demand that export, have not served their purpose. They have left users with a right to data portability, but few options about where to port that data to.
Missing from the GDPR is step three:
3. The right to interoperate with the service you just left.
The DMA proposal is a legislative way of filling in that missing third step, creating a “real time data portability” obligation, which is a step toward real interop, of the sort that will allow you to leave a service but remain in contact with the users who stayed behind. An interop mandate breathes life into the moribund idea of data portability.
Does the GDPR mean that interoperability is impossible, because there is no way to satisfy data protection requirements while permitting third-party access to an online service?
The GDPR is very far-reaching, and European officials are still coming to grips with its implications. It’s conceivable that the Commission could propose a regulation that cannot be reconciled with EU data protection rules. We saw this in 2019, when the EU Parliament adopted the Copyright Directive without removing the controversial and ill-conceived Article 13 (now Article 17). Article 17’s proponents confidently asserted that it would result in mandatory copyright filters for all major online platforms, not realizing that those filters cannot be reconciled with the GDPR.
But we don’t think that’s what’s going on here. Interoperability—both the narrow interop contemplated in the DMA, and more ambitious forms of interop beyond the conservative approach the Commission is taking—is fully compatible with European data protection, both in terms of what Europeans legitimately expect and what the GDPR guarantees.
Indeed, the existence of the GDPR solves the thorniest problem involved in interop and privacy. By establishing the rules for how providers must treat different types of data and when and how consent must be obtained and from whom during the construction and operation of an interoperable service, the GDPR moves hard calls out of the corporate boardroom and into a democratic and accountable realm.
Facebook often asserts that its duty to other users means that it has to block you from bringing some of “your” data with you if you want to leave for a rival service. There is definitely some material on Facebook that is not yours, like private conversations between two or more other people. Even if you could figure out how to access those conversations, we want Facebook to take steps to block your access and prevent you from taking that data elsewhere.
But what about when Facebook asserts that its privacy duties mean it can’t let you bring the replies to your private messages, the comments on your public posts, or the entries in your address book with you to a rival service? These are less clear-cut than the case of other peoples’ private conversations, but blocking you from accessing this data also helps Facebook lock you onto its platform, which is also one of the most surveilled environments in the history of data-collection.
There’s something genuinely perverse about deferring these decisions to the reigning world champions of digital surveillance, especially because an unfavorable ruling about which data you can legitimately take with you when you leave Facebook might leave you stuck on Facebook, without a ready means to address any privacy concerns you have about Facebook’s policies.
This is where the GDPR comes in. Rather than asking whether Facebook thinks you have the right to take certain data with you or to continue accessing that data from a rival platform, the GDPR lets us ask the law which kinds of data connections are legitimate, and when consent from other implicated users is warranted. Regulation can make good, accountable decisions about whether a survey app deserves access to all of the “likes” by all of its users’ friends (Facebook decided it did, and the data ended up in the hands of Cambridge Analytica), or whether a user should be able to download a portable list of their friends to help switch to another service (which Facebook continues to prevent).
The point of an interoperability mandate—either the modest version in the DMA or a more robust version that allows full interop—is to allow alternatives to high-surveillance environments like Facebook to thrive by reducing switching costs. There’s a hard collective action problem of getting all your friends to leave Facebook at the same time as you. If people can leave Facebook but stay in touch with their Facebook friends, they don’t need to wait for everyone else in their social circle to feel the same way. They can leave today.
In a world where platforms—giants, startups, co-ops, nonprofits, tinkerers’ hobbies—all treat the GDPR as the baseline for data-processing, services can differentiate themselves by going beyond the GDPR, sparking a race to the top for user privacy.
Consent, Minimization and Security
We can divide all the data that can be passed from a dominant platform to a new, interoperable rival into several categories. There is data that should not be passed. For example, a private conversation between two or more parties who do not want to leave the service and who have no connection to the new service. There is data that should be passed after a simple request from the user. For example, your own photos that you uploaded, with your own annotations; your own private and public messages, etc. Then there is data generated by others about you, such as ratings. Finally, there is someone else’s personal information contained in a reply to a message you posted.
The last category is tricky, and it turns on the GDPR’s very fulcrum: consent. The GDPR’s rules on data portability clarify that exporting data needs to respect the rights and freedom of others. Thus, although there is no ban on porting data that does not belong to the requesting user, data from other users shouldn’t be passed on without their explicit consent, or under another GDPR legal basis, and without further safeguards.
That poses a unique challenge for allowing users to take their data with them to other platforms, when that data implicates other users—but it also promises a unique benefit to those other users.
If the data you take with you to another platform implicates other users, the GDPR requires that they consent to it. The GDPR’s rules for this are complex, but also flexible.
For example, say, in the future, that Facebook obtains consent from users to allow their friends to take the comments, annotations, and messages they send to those friends with them to new services. If you quit Facebook and take your data (including your friends’ contributions to it) to a new service, the service doesn’t have to bother all your friends to get their consent again—under the WP29 Guidelines, so long as the new service uses the data in a way that is consistent with the uses Facebook obtained consent for in the first place, that consent carries over.
But even though the new service doesn’t have to obtain consent from your friends, it does have to notify them within 30 days, so your friends will always know where their data ended up.
And the new platform has all the same GDPR obligations that Facebook has: they must only process data when they have a “lawful basis” to do so; they must practice data minimization; they must maintain the confidentiality and security of the data; and they must be accountable for its use.
None of that prevents a new service from asking your friends for consent when you bring their data along with you from Facebook. A new service might decide to do this just to be sure that they are satisfying the “lawfulness” obligations under the GDPR.
One way to obtain that consent is to incorporate it into Facebook’s own consent “onboarding”—the consent Facebook obtains when each user creates their account. To comply with the GDPR, Facebook already has to obtain consent for a broad range of data-processing activities. If Facebook were legally required to permit interoperability, it could amend its onboarding process to include consent for the additional uses involved in interop.
Of course, the GDPR does not permit far-reaching, speculative consent. There will be cases where no amount of onboarding consent can satisfy either the GDPR or the legitimate privacy expectations of users. In these cases, Facebook can serve as a “consent conduit,” through which consent to allow their friends to take data with mixed ownership claims with them to a rival platform can be sought, obtained, or declined.
Such a system would mean that some people who leave Facebook would have to abandon some of the data they’d hope to take with them—their friends’ contact details, say, or the replies to a thread they started—and it would also mean that users who stayed behind would face a certain amount of administrative burden when their friends tried to leave the service. Facebook might dislike this on the grounds that it “degraded the user experience,” but on the other hand, a flurry of notices from friends and family who are leaving Facebook behind might spur the users who stayed to reconsider that decision and leave as well.
For users pondering whether to allow their friends to take their blended data with them onto a new platform, the GDPR presents a vital assurance: because the GDPR does not permit companies to seek speculative, blanket consent for future activities for new purposes that you haven’t already consented to, and because the companies your friends take your data to have no way of contacting you, they generally cannot lawfully make any further use of that data (except through one of the other narrow bases permitted by the GDPR, for example, to fulfil a “legitimate interest”). Your friends can still access it, but neither they, nor the services they’ve fled to, can process your data beyond the scope of the initial consent to move it to the new context. Once the data and you are separated, there is no way for third parties to obtain the consent they’d need to lawfully repurpose it for new products or services.
Beyond consent, the GDPR binds online services to two other vital obligations: “data minimization” and “data security.” These two requirements act as a further backstop to users whose data travels with their friends to a new platform.
Data minimization means that any user data that lands on a new platform has to be strictly necessary for its users’ purposes (whether or not there might be some commercial reason to retain it). That means that if a Facebook rival imports your comments to its new user’s posts, any irrelevant data that Facebook transmits along with that data (say, your location when you left the comment, or which link brought you to the post), must be discarded. This provides a second layer of protection for users whose friends migrate to new services: not only is their consent required before their blended data travels to the new service, but that service must not retain or process any extraneous information that seeps in along the way.
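A minimal sketch of what this data-minimization obligation could look like inside an import pipeline (all field names here are hypothetical, not any real platform’s schema): the importing service declares the fields it actually needs and discards everything else on arrival.

```python
# Fields strictly necessary to display a migrated comment on the new service.
NECESSARY_FIELDS = {"author", "text", "timestamp"}

def minimize(record: dict) -> dict:
    """Keep only fields needed for the declared purpose; drop the rest."""
    return {k: v for k, v in record.items() if k in NECESSARY_FIELDS}

# A hypothetical exported record, carrying extraneous data along with it.
exported = {
    "author": "alice",
    "text": "Nice photo!",
    "timestamp": "2021-05-01T12:00:00Z",
    "geo": "52.52,13.40",        # extraneous: location when commenting
    "referrer": "example.com",   # extraneous: which link led to the post
}

print(minimize(exported))  # only author, text, and timestamp survive
```

Filtering at the point of ingestion, rather than after storage, means the extraneous data never lands on the new service’s servers at all.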
The GDPR’s security guarantee, meanwhile, guards against improper handling of the data you consent to let your friends take with them to new services. That means that the data in transit has to be encrypted, and likewise the data at rest, on the rival service’s servers. And no matter that the new service is a startup, it has a regulated, affirmative duty to practice good security across the board, with real liability if it commits a material omission that leads to a breach.
Without interoperability, the monopolistic high-surveillance platforms are likely to enjoy long term, sturdy dominance. The collective action problem represented by getting all the people on Facebook whose company you enjoy to leave at the same time you do means that anyone who leaves Facebook incurs a high switching cost.
Interoperability allows users to depart Facebook for rival platforms, including those that both honor the GDPR and go beyond its requirements. These smaller firms will have less political and economic influence than the monopolists whose dominance they erode, and when they do go wrong, their errors will be less consequential because they impact fewer users.
Without interoperability, privacy’s best hope is to gentle Facebook, rendering it biddable and forcing it to abandon its deeply held beliefs in enrichment through nonconsensual surveillance—and to do all of this without the threat of an effective competitor that Facebook users can flee to no matter how badly it treats them.
Interoperability without privacy safeguards is a potential disaster, provoking a competition to see who can extract the most data from users while offering the least benefit in return. Every legislative and regulatory interoperability proposal in the US, the UK, and the EU contains some kind of privacy consideration, but the EU alone has a region-wide, strong privacy regulation that creates a consistent standard for data-protection no matter what measure is being contemplated. Having both components, an interoperability requirement and a comprehensive privacy regulation, is the best way to ensure interoperability leads to competition in desirable activities, not privacy invasions.
The Dartmouth Geisel School of Medicine has ended its months-long dragnet investigation into supposed student cheating, dropping all charges against students and clearing all transcripts of any violations. This affirms what EFF, The Foundation for Individual Rights in Education (FIRE), students, and many others have been saying all along: when educators actively seek out technical evidence of students cheating, whether those are through logs, proctoring apps, or other automated or computer-generated techniques, they must also seek out technical expertise, follow due process, and offer concrete routes of appeal.
The investigation at Dartmouth began when the administration conducted a flawed review of an entire year’s worth of student log data from Canvas, the online learning platform that contains class lectures and other substantive information. After a technical review, EFF determined that the logs easily could have been generated by the automated syncing of course material to devices logged into Canvas but not being used during an exam. It’s simply impossible to know from the logs alone if a student intentionally accessed any of the files, or if the pings exist due to automatic refresh processes that are commonplace in most websites and online services. In this case, many of the logs related to Canvas content that wasn’t even relevant to the tests being taken, raising serious questions about Dartmouth’s allegations.
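To illustrate why server-side logs alone cannot distinguish a deliberate click from background activity, here is a minimal sketch (hypothetical names, not Canvas’s actual logging system): an idle, logged-in client that periodically re-syncs course material produces log entries identical to those generated by intentional access.

```python
import time
from dataclasses import dataclass, field

@dataclass
class AccessLog:
    """Server-side record of resource requests, as a learning platform might keep."""
    entries: list = field(default_factory=list)

    def record(self, user: str, resource: str) -> None:
        # The server only sees the HTTP request; it cannot tell whether a
        # human clicked a link or a background process fired the request.
        self.entries.append({"user": user, "resource": resource, "ts": time.time()})

def background_sync(log: AccessLog, user: str, course_files: list) -> None:
    """An idle, logged-in client automatically re-fetches course material."""
    for resource in course_files:
        log.record(user, resource)  # identical log entry to a deliberate view

log = AccessLog()
# The student never touches this device during the exam, but a logged-in
# session keeps syncing, including files unrelated to the test.
background_sync(log, "student1", ["lecture3.pdf", "unrelated_notes.pdf"])
print(len(log.entries))  # 2 entries, indistinguishable from intentional access
```

Any investigation built on such logs therefore needs corroborating evidence; the entries themselves carry no signal of intent.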
It’s unclear how many other schools have combed through Canvas logs for evidence of cheating, but the Dartmouth debacle provides clear evidence that its logging system is not meant to be used—and should not be used—as evidence in such investigations.
Along with FIRE, EFF sent a letter to Dartmouth in March laying out our concerns, including the fact that Canvas' own documentation explicitly states that the data in these logs is not intended to be used "in isolation for auditing or other high-stakes analysis involving examining single users or small samples." According to the latest email sent to the student body from the Dean of the School of Medicine, the allegations have been dropped “upon further review and based on new information received from our learning management system provider.” While Instructure, the company behind Canvas, has not responded to numerous requests we’ve sent asking them to comment on Dartmouth’s use of these logs, we are heartened to hear that it is taking misuses of its system seriously. We urge the company to take a more public stand against these sorts of investigations.
Fighting Disciplinary Technologies
Schools are not the only places where technology is being (mis)used to surveil, punish, or falsely accuse those without recourse. “Disciplinary technologies” are showing up more and more in the areas of our lives where power imbalances are common—in workplaces, in relationships, and in homes. Dartmouth is an example of the way these technologies can exacerbate already existing power dynamics, giving those in power an excuse not to take due process seriously. Students were not only falsely accused, but they were given little to no recourse to defend themselves from what the school saw as incontrovertible evidence against them. It was only after multiple experts demanded the school take a closer look at the evidence that they began to backtrack. What’s worse, only those students who had technical experts available to them had their charges quickly dropped, while those who lacked resources or connections were left with their futures in the balance, raising questions of inequity and preferential treatment.
While we’re pleased that these allegations have been dropped for all students—and pleased that, according to the dean, the school will be reviewing a proposal for open-book exams, which would eliminate the harms caused by online proctoring software—the distress this caused cannot be overstated. Several students expected their careers would be destroyed for cheating when they had not done so; others were told to admit guilt simply because it would be easier on them. Many students complained, some anonymously for fear of reprisal, of the toll these allegations were taking on their mental health. In the midst of the investigation, the school released a dangerous update to its social media policy that silenced students who were speaking out, which appears to still be an active policy. All of this could have been avoided.
We’re working at EFF to craft solutions to the problems created by disciplinary technologies and other tools that put machines in power over ordinary people, and to protect the free speech of those speaking out against their use. It will take technologists, consumers, activists, and changes in the law to course correct—but we believe the fight can be won, and today’s decision at Dartmouth gets us one step closer.
This blog post was written by Kenny Gutierrez, EFF Bridge Fellow.
Recently proposed modifications to the federal Health Insurance Portability and Accountability Act (HIPAA) would weaken the privacy of your most personal and intimate health data. The Office of Civil Rights (OCR), which is part of the U.S. Department of Health and Human Services (HHS), proposes loosening our health privacy protections to address misunderstandings by health professionals about currently permissible disclosures.
EFF recently filed objections to the proposed modifications. The most troubling change would expand the sharing of your health data without your permission, by enlarging the definition of “health care operations” to include “case management” and “care coordination,” which is particularly troubling since these broad terms are not defined. Additionally, the modifications seek to lower the standard of disclosure for emergencies. They also will require covered entities to disclose personal health information (PHI) to uncovered health mobile applications upon patient request. Individually, the changes are troublesome enough. When combined, the impact on the release of PHI, with and without consent, is a threat to patient health and privacy.
Trust in Healthcare is Crucial
The proposed modifications would undermine the trust that patients need in order to disclose sensitive and intimate medical information to health professionals. If patients no longer feel their doctors will protect their PHI, they will not disclose it or even seek treatment. For example, since there is pervasive prejudice and stigma surrounding addiction, an opiate-dependent patient will probably be less likely to seek treatment, or fully disclose the severity of their condition, if they fear their diagnosis could be shared without their consent. Consequently, the HHS proposal will hinder care coordination and case management. That would increase the cost of healthcare, because of decreased preventative care in the short term and increased treatment in the long term, which is significantly more expensive. Untreated mental illness costs the nation more than $100 billion annually. Currently, only 2.5 million of the 21.2 million people suffering from mental illness seek treatment.
The current HIPAA privacy rule is flexible enough, counter to the misguided assertions of some health care professionals. It protects patient privacy while allowing disclosure, without patient consent, in critical instances such as for treatment, in an emergency, and when a patient is a threat to themselves or public safety.
So, why does HHS seek to modify an already flexible rule? Two congressional hearings, in 2013 and 2015, revealed that there is significant misunderstanding of HIPAA and permissive disclosures amongst medical professionals. As a result, HIPAA is misperceived as rigidly anti-disclosure, and mistakenly framed as a “regulatory barrier” or “burden.” Many of the proposed modifications double down on this misunderstanding with privacy deregulation, rather than directly addressing some professionals’ confusion with improved training, education, and guidance.
The HHS Proposals Would Reduce Our Health Privacy
Modifications to HIPAA will cause more problems than solutions. Here is a brief overview of the most troubling modifications:
- The proposed rule would massively expand a covered entity’s (CE) use and disclosure of personal health information (PHI) without patient consent. Specifically, it allows unconsented use and disclosure for “care coordination” and “case management,” without adequately defining these vague and overbroad terms. This expanded exception would swallow the consent requirement for many uses and disclosure decisions. Consequently, Big Data (such as corporate data brokers) would obtain and sell this PHI. That could lead to discrimination in insurance policies, housing, employment, and other critical areas because of pre-existing medical conditions, such as substance abuse, mental health illness, or severe disabilities that carry a stigma.
- HHS seeks to lower the standard of unconsented disclosure from “professional judgment” to “good faith belief.” This would undermine patient trust. Currently, a covered entity may disclose some PHI based on their “professional judgment” that it is in the individual’s best interest. The modification would lower this standard to a “good faith belief,” and apparently shift the burden to the injured individual to prove their doctor’s lack of good faith. Professional judgment is properly narrower: it is objective and grounded in expert standards. “Good faith” is both broader and subjective.
- Currently, to disclose PHI in an emergency, the standard for disclosure is “imminent” harm, which invokes a level of certainty that harm is surely impending. HHS proposes instead just “reasonably foreseeable” harm, which is too broad and permissive. This could lead to a doctor disclosing your PHI because you have a sugar-filled diet, you’re a smoker, or you have unprotected sex. Harm in such cases would not be “imminent,” but it could be “reasonably foreseeable.”
Weaker HIPAA Rules for Phone Health Apps Would Hand Our Data to Brokers
The proposed modifications will likely result in more intimate, sensitive, and highly valuable information being sent to entities not covered by HIPAA, including data brokers.
Most Americans have personal health applications on their phones for health goals, such as weight management, stress management, and smoking cessation. However, these apps are not covered by HIPAA privacy protections.
A 2014 Federal Trade Commission study revealed that 12 personal health apps and devices transmitted information to 76 different third parties, and some of the data could be linked back to specific users. In addition, 18 third parties received device-specific identifiers, and 22 received other key health information.
Worse, depending on where the PHI is stored, other apps may grant themselves access to your PHI through their own separate permissions. Such permissions have serious consequences because many apps can access data on one’s device that is unrelated to what the app is supposed to do. In a study of 99 apps, researchers found that free apps included more unnecessary permissions than paid apps.
During the pandemic, we have learned once again the importance of trust in the health care system. Ignoring CDC guidelines, many people have not worn masks or practiced social distancing, which has fueled the spread of the virus. These are symptoms of public distrust of health care professionals. Trust is critical in prevention, diagnosis, and treatment.
The proposed HHS changes to HIPAA’s health privacy rules would undoubtedly lead to increased disclosures of PHI without patient consent, undermining the necessary trust the health care system requires. That’s why EFF opposes these changes and will keep fighting for your health privacy.
Imagine this: a limited liability company (LLC) is formed, for the sole purpose of acquiring patents, including what are likely to be low-quality patents of suspect validity. Patents in hand, the LLC starts approaching high-tech companies and demanding licensing fees. If they don’t get paid, the company will use contingency-fee lawyers and a litigation finance firm to make sure the licensing campaign doesn’t have much in the way of up-front costs. This helps give them leverage to extract settlements from companies that don’t want to pay to defend the matter in court, even if a court might ultimately invalidate the patent if it reached the issue.
That sounds an awful lot like a patent troll: the kind of entity EFF criticizes because it uses flimsy patents to squeeze money from operating companies rather than making products of its own. Unfortunately, this description also applies to a company that has just been formed by a consortium of 15 large research universities.
This patent commercialization company has been secretly under discussion since 2018. In September 2020, it quietly went public, when the University of California Regents authorized making UC Berkeley and UCLA two of its founding members. In January, the DOJ said it wouldn’t challenge the program on antitrust grounds.
It’s good news when universities share technology with the private sector, and when startup companies get formed based on university research. That’s part of why so much university research is publicly funded. But there’s not much evidence that university patenting helps technology reach the public, and there’s a growing body of evidence that patents hinder it. Patents in this context are legal tools that allow someone to monopolize publicly-funded research and capture its promise for a private end.
While larger tech companies can absorb the cost of either litigating or paying off the patent assertion entity, smaller innovators will face a much larger burden, proportionately. That means that the existence of this licensing entity could harm innovation and competition. When taxpayers fund research, the fruits of the research should be available for all.
With 15 universities now forming a consortium to license electronics and software patents, it’s going to be a mess for innovators and lead to worse, more expensive products.
Low-Quality Patents By The Bundle
Despite the explosion in university patenting and the growth of technology transfer offices (essentially university patent offices), the great majority of universities lose money on their patents. A 2013 Brookings Institution study showed that 84% of universities didn’t make enough money from their patents to cover the related legal costs and the staffing of their tech transfer office. Just a tiny slice of universities earn the majority of patent-licensing revenue, often from a few blockbuster pharmaceutical or biotech inventions. As many as 95% of university patents do not get licensed at all.
This new university patent licensing company won’t be getting any of the small number of impressive revenue-producing patents. The proposal sent to the UC Board of Regents explains that the LLC’s goal will be to get payment for patents that “have not been successfully licensed via a bilateral ‘one patent, one license’ transaction.” The universities’ proposal is to start by licensing in three areas: autonomous vehicles, “Internet of Things,” and Big Data.
In other words, they’ll be demanding licensing fees over lots and lots of software patents. By and large, software patents are the lowest quality patents, and their rise has coincided with the rise of large-scale patent trolling.
The university LLC won’t engage in the type of patent licensing that most actual university spinoffs would want: typically, an exclusive license over patents that gives the licensee a product or service no one else has. Rather, “the LLC will focus on non-exclusive sublicenses.” In other words, they’ll use the threat of litigation to attempt to get all competitors in a particular industry to pay for the same patents.
This is the same model pursued by the notorious Intellectual Ventures, a large patent troll company that convinced 61 different universities to contribute at least 470 different patents to its patent pool in an attempt to earn money from patents.
What about the Public Interest?
The lawyers and bureaucrats promoting the UC patent licensing scheme know how bad this looks. Their plan is to use patents as weapons, not tools for innovation—exactly the method used by patent trolls. In the “Pros and Cons” section of the memo sent to the UC Regents, the biggest “Con” is that the University of California “may incur negative publicity, e.g., allegations may arise that the LLC’s activities are tantamount to a patent troll.” That’s why the memo seeks to reassure the Regents that “it is... the expectation that no enforcement action will be undertaken against startups or small business firms.” This apparently nonbinding “expectation” is small comfort.
The goal of the patent-based LLC doesn’t seem to be to share knowledge. If the universities wanted to do that, they could do it right now. They could do it for free, or do it for a contracted payment—no patents required.
The real goal seems to be finding alleged infringers, accusing them, and raising money. The targets will know that they’re not being offered an opportunity—they’ll be under attack. That’s why the lawyers working with UC have promised the Regents that when it comes time to launch lawsuits against one of the “pre-determined targets,” they will steer clear of small businesses.
The university LLC isn’t going to license their best patents. Rather, the UC Regents memo admits that they’re planning to license the worst of them—technologies that have not been successfully licensed via a “one patent, one license” transaction by either UCLA or UC Berkeley.
To be clear, universities aren’t patent trolls. Universities are centers for teaching, research, and community. But that broader social mission is exactly why universities shouldn’t go off and form a patent-holding company that is designed to operate similarly to a patent troll.
Patents aren’t needed to share knowledge, and dealing with them has been a net loss for U.S. universities. Universities need to re-think their tech transfer offices more broadly. In the meantime, the UC Regents should withdraw from this licensing deal as soon as possible. Other universities should consider doing the same. The people who will benefit the most from this aren’t the public or even the universities, but the lawyers. For the public interest and innovation, having the nation’s best universities supply a patent-trolling operation is a disaster in the making.
The fifteen members of the University Technology Licensing Program are expected to be:
- Brown University
- California Institute of Technology (Caltech)
- Columbia University
- Cornell University
- Harvard University
- Northwestern University
- Princeton University
- State University of New York at Binghamton
- University of California, Berkeley
- University of California, Los Angeles
- University of Illinois
- University of Michigan
- University of Pennsylvania
- University of Southern California
- Yale University
As the world stays home to slow the spread of COVID-19, communities are rapidly transitioning to digital meeting spaces. This highlights a trend EFF has tracked for years: discussions in virtual spaces shape and reflect societal freedoms, and censorship online replicates repression offline. As most of us spend increasing amounts of time in digital spaces, the impact of censorship on individuals around the world is acute.
Tracking Global Online Censorship is a new project to record and combat international speech restrictions, especially where censorship policies are exported from Europe and the United States to the rest of the world. Headed by EFF Director for International Freedom of Expression Jillian York, the project will seek accountability for powerful online censors—in particular, social media platforms such as Facebook and Google—and hold them to just, inclusive standards of expressive discourse, transparency, and due process in a way that protects marginalized voices, dissent, and disparate communities.
“Social media companies make mistakes at scale that catch a range of vital expression in their content moderation net. And as companies grapple with moderating new types of content during a pandemic, these error rates will have new, dangerous consequences,” said Jillian York. “Misapplication of content moderation systems results in the systemic silencing of marginalized communities. It is vital that we protect the free flow of information online and ensure that platforms provide users with transparency and a path to remedy.”
Support for Tracking Global Online Censorship is provided by the Swedish Postcode Foundation (Svenska Postkodstiftelsen). Established in 2003, the Swedish Postcode Foundation receives part of the Swedish Postcode Lottery’s surplus, which it then uses to provide financial support to non-governmental organizations creating positive changes through concrete efforts. The Foundation’s goal is to create a better world through projects that challenge, inspire, and promote change.
“Social media is a huge part of our daily life and a primary source of information. Social media companies enjoy an unprecedented power and control and the lack of transparency that these companies exercise does not run parallel to the vision that these same companies were established for. It is time to question, create awareness, and change this. We are therefore proud to support the Electronic Frontier Foundation in their work to do so,” says Marie Dahllöf, Secretary General of the Swedish Postcode Foundation.
We are at a pivotal moment for free expression. A dizzying array of actors have recognized the current challenges posed by intermediary corporations in an increasingly global world, but a large number of solutions seek to restrict—rather than promote—the free exchange of ideas. At the same time, as COVID-19 results in greater isolation, online expression has become more important than ever, and the impact of censorship greater. The Tracking Global Online Censorship project will draw attention to the myriad issues surrounding online speech, develop new and existing coalitions to strengthen the effort, and offer policy solutions that protect freedom of expression. In the long term, our hope is for corporations to stop chilling expression, to promote free access to time-sensitive news, foster engagement, and to usher in a new era of online expression in which marginalized communities will be more strongly represented within democratic society.
This week, EFF joined with several prominent right-to-repair groups to file an amicus brief in the United States District Court for the District of Massachusetts defending the state’s recent right-to-repair law. This law, which gives users and independent repair shops access to critical information about the cars they drive and service, passed by ballot initiative with an overwhelming 74.9% majority.
Almost immediately, automakers asked to delay the law. In November, the Alliance for Automotive Innovation, a group that includes Honda, Ford, General Motors, Toyota, and other major carmakers, sued the state over the law. The suit claims that allowing people to have access to the information generated by their own cars poses serious security risks.
This argument is nonsense, and we have no problem joining our fellow repair advocates—iFixit, The Repair Association, US PIRG, SecuRepairs.org, and Founder/Director of the Brooklyn Law Incubator and Policy Clinic Professor Jonathan Askin—in saying so.
Access Is Not a Threat
The Massachusetts law requires vehicles with a telematics platform—software that collects and transmits diagnostic information about your car—to install an open data platform. The Alliance for Automotive Innovation argues that the law makes it “impossible” to comply with both the state’s data access rules and federal standards.
Nonsense. Companies in many industries must balance data access and cybersecurity rules, including for electronic health records, credit reporting, and telephone call records. In all cases, regulators have recognized the importance of giving consumers access to their own information as well as the need to protect even the most sensitive information.
In fact, in cases such as the Equifax breach, consumer access to information was key to fighting fraud, the main consequence of the data breach. Locking consumers out of accessing their own information does nothing to decrease cybersecurity risks.
Secrecy Is Not Security
Automakers are also arguing that restricting access to telematics data is necessary if carmakers are to protect against malicious intrusions.
Cybersecurity experts strongly disagree. “Security through obscurity”—systems that rely primarily on secrecy of certain information to prevent illicit access or use—simply does not work. It offers no real deterrent to would-be thieves, and it can give engineers a false sense of safety that can stop them from putting real protections in place.
Furthermore, there is no evidence that expanding access to telematics data would change much about the security of information. In fact, independent repair shops aren't any more or less likely than authorized shops to leak or misuse data, according to a recent report from the Federal Trade Commission. This should not be accepted as an excuse for carmakers to further restrict competition in the repair market.
The Right to Repair Enhances Consumer Protection and Competition
Throughout the debate over the Massachusetts ballot initiative, the automotive industry has resorted to scare tactics to stop this law. But the people of Massachusetts didn’t fall for the industry’s version of reality, and we urge the court not to either.
The right to repair gives consumers more control over the things they own. It also supports a healthier marketplace, as it allows smaller and independent repair shops to offer their services—participation that clearly lowers prices and raises quality.
Time and time again, people have made it clear that they want the right to repair their cars. They’ve made that clear at the ballot box, as in Massachusetts, as well as in statehouses across the country.
That’s why EFF continues to stand behind the right to repair: If you bought it, you own it. It’s your right to fix it yourself or to take it to the repair shop of your choosing. Manufacturers want the benefits that come with locking consumers into a relationship with their companies long after a sale. But their efforts to stop the right to repair stand against a healthy marketplace, consumer protection, and common sense.
EFF and FSFP to Court: When Flawed Electronic Voting Systems Disenfranchise Voters, They Should Be Able to Challenge That with Access to the Courts
Atlanta, Georgia—The Electronic Frontier Foundation (EFF) and Free Speech for People (FSFP) urged a federal appeals court today to hold that a group of Georgia voters and the organization that supports them have standing to sue the Georgia Secretary of State over the implementation of defective voting systems they say deprives them of their right to vote and have their votes counted.
EFF and FSFP filed an amicus brief siding with the plaintiffs in Curling v. Raffensperger to defend Americans’ right to challenge in court any policy or action that disenfranchises voters.
The voters in the Curling lawsuit, originally filed in 2017, are seeking to block, or otherwise require protective measures for, Georgia’s new electronic voting system, which has been found to have flaws that could block or deter some voters from exercising their right to vote and cause some votes to not be counted.
After reviewing a tremendous amount of evidence and testimony, a federal judge found that problems with the system’s scanners violate Georgians’ fundamental right to vote, and flaws in electronic pollbooks impose a severe burden on the rights of voters. The court ordered the state to take specific steps to fix the problems.
Lawyers for the Georgia Secretary of State’s office are appealing the orders and seeking to have the case thrown out. They argue that the voters lack standing to sue because they can’t show that they would be personally and individually harmed by the voting system and are merely speculating about potential harms to their voting rights. The Secretary of State went so far as to say the plaintiffs are, at best, “bystanders” making merely a general grievance about alleged harms that is not sufficient for standing.
In a brief filed in the U.S. Court of Appeals for the Eleventh Circuit, EFF and FSFP said the Supreme Court has long recognized that the right to vote is personal and individual. Directly depriving people of the right to vote is a concrete and particularized injury, and the Curling plaintiffs showed how the flaws both blocked voters from voting and prevented scanned ballots from being counted.
“The plaintiffs in this case are seeking to vindicate their own rights to vote and have their votes counted,” said EFF Executive Director Cindy Cohn. “The fact that many other people would be harmed in a similar way doesn’t change that or negate the fact that the state’s choice of voting system can disenfranchise the voters in this case.”
EFF urged the court to look at the Ninth Circuit Court decision in EFF’s landmark case Jewel v. NSA challenging dragnet government surveillance of Americans’ phone records. The court found that AT&T customers alleging the NSA’s spying program violated their individual rights to privacy had standing to sue the government, despite the fact that the program also impacted millions of other people, reversing a lower court’s ruling that their assertions of harm were just generalized grievances.
“When government or state policies cause personal harm, that meets the standard for standing even if many others are subject to the same harms,” said Houston Davidson, EFF public interest legal fellow.
EFF and FSFP also pushed back on Georgia’s attempt to minimize the problems with its systems as mere “glitches,” akin to a snowstorm or traffic jam on Election Day that do not require deep court review. They noted that the problems proven in the case are not minor but serious, preventable, and fundamental flaws that place unacceptable burdens on voters and jeopardize the accurate counting of their votes. Passing these problems off as minor, with unserious consequences for voters, can make it easier for Georgia to convince the court that it should not intervene.
Don’t fall for it, EFF and FSFP urged the court.
“It’s outrageous and wrong for Georgia to try to dismiss the flaws in the voting system as ‘glitches,’” said Davidson. “Technology problems in Georgia’s electronic pollbooks, scanners, and overall security are systematic and predictable. These issues jeopardize voters’ rights to cast their votes and have them counted. We hope the court recognizes the flaws for what they are and confirms the plaintiffs’ rights to hold the state accountable.”
For the EFF/FSFP brief:
For more on election security:
“Black lives matter on the streets. Black lives matter on the internet.” A year ago, EFF’s Executive Director, Cindy Cohn, shared these words in EFF's statement about the police killings of Breonna Taylor and George Floyd. Cindy spoke for all of us in committing EFF to redouble its efforts to support the movement for Black lives. She promised we would continue providing guides and resources for protesters and journalists on the front lines; support our allies as they navigate the complexities of technology and the law; and resist surveillance and other high-tech abuses while protecting the rights to organize, assemble, and speak securely and freely.
Like many of you, the anniversary of George Floyd's murder has inspired us to reflect on these commitments and the work of so many courageous people who stood up to demand justice. Our world has been irrevocably changed. While there is still an immeasurably long way to go toward becoming a truly just society, EFF is inspired by this leaderful movement and humbled as we reflect on the ways in which we have been able to support its critical work.
EFF believes that people engaged in the Black-led movement against police violence deserve to hold those in power accountable and inspire others through the act of protest, without fear of police surveillance of our faces, bodies, electronic devices, and other digital assets. So, as protests began to spread throughout the nation, we worked quickly to publish a guide to cell phone surveillance at protests, including steps protesters can take to protect themselves.
We also worked with the National Lawyers Guild (NLG) to develop a guide to observing visible and invisible surveillance at protests—in video and blog form. The published guide and accompanying training materials were made available to participants in the NLG’s Legal Observer program. The 25-minute videos—available in English and Spanish—explain how protesters and legal observers can identify various police surveillance technologies, like body-worn cameras, drones, and automated license plate readers. Knowing what technologies the police use at a protest can help defense attorneys understand what types of evidence the police agencies may hold, find exculpatory evidence, and potentially provide avenues for discovery in litigation to enforce police accountability.
We also significantly updated our Surveillance Self-Defense guide to attending protests. We elaborated on our guidance on documenting protests, in order to minimize the risk of exposing other protesters to harmful action by law enforcement or vigilantes; gave practical tips for maintaining anonymity and physical safety in transit to and at protests; and recommended options for anonymizing images and scrubbing metadata. Documenting police brutality during protest is necessary. Our aim is to provide options to mitigate risk when fighting for a better world.
Protecting the Right to Record the Police
Using our phones to record on-duty police action is a powerful way to expose and end police brutality and racism. In the words of Darnella Frazier: "My video didn't save George Floyd, but it put his murderer away and off the streets." Many have followed in her courageous footsteps. For example, Caron Nazario used his phone to film excessive police force against him during a traffic stop. Likewise, countless protesters against police abuse have used their phones to document police abuse against other protesters. As demonstrations heated up last spring, EFF published advice on how to safely and legally record police.
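Photos and video shared from a protest can carry GPS coordinates and device details in their embedded metadata. As an illustrative sketch only (this is not an EFF tool, and a maintained scrubbing utility is the safer choice in practice), EXIF data in a JPEG lives in the APP1 segment, which can be dropped by walking the file's marker structure:

```python
import struct

def strip_exif(jpeg_bytes: bytes) -> bytes:
    """Return a copy of a JPEG with its APP1 (EXIF/GPS) segments removed.

    Sketch for illustration: real photos should be scrubbed with a
    maintained tool, since metadata can also hide in other segments.
    """
    if jpeg_bytes[:2] != b"\xff\xd8":
        raise ValueError("not a JPEG (missing SOI marker)")
    out = bytearray(b"\xff\xd8")
    i = 2
    while i + 4 <= len(jpeg_bytes) and jpeg_bytes[i] == 0xFF:
        marker = jpeg_bytes[i + 1]
        if marker == 0xDA:  # Start of Scan: compressed image data follows
            out += jpeg_bytes[i:]
            return bytes(out)
        # Segment length is big-endian and includes its own two bytes
        (length,) = struct.unpack(">H", jpeg_bytes[i + 2:i + 4])
        if marker != 0xE1:  # keep every segment except APP1 (EXIF)
            out += jpeg_bytes[i:i + 2 + length]
        i += 2 + length
    return bytes(out)
```

Because the APP1 segment sits before the compressed image data, the pixel content passes through untouched; only the metadata segment is dropped.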
EFF also has filed many amicus briefs in support of your right to record on-duty police. Earlier this year, one of these cases expanded First Amendment protection of this vital tool for social change. Unfortunately, another court proceeded to dodge the issue by hiding under "qualified immunity," which is one reason EFF calls on Congress to repeal this dangerous doctrine. Fortunately, six federal appellate courts have squarely vindicated your right to film police. We'll keep fighting until every court does so.
Revealing Police Surveillance of Protesters
As we learned after Occupy Wall Street, the #NoDAPL movement, and the 2014-2015 Movement for Black Lives uprisings, sometimes it takes years to learn about all the police surveillance measures used against protest movements. EFF has helped expose the local, state, federal, and private surveillance that the government unleashed on activists, organizers, and protestors during last summer’s Black-led protests against police violence.
In July 2020, public records requests we sent to the semi-public Union Square Business Improvement District (USBID) in San Francisco revealed that the USBID collaborated with the San Francisco Police Department (SFPD) to spy on protesters. Specifically, they gave the SFPD a large “data dump” of footage (USBID’s phrase). They also granted police live access to their cameras for a week in order to surveil protests.
In February 2021, public records we obtained from the Los Angeles Police Department (LAPD) revealed that LAPD detectives had requested footage of protests from residents’ Ring surveillance doorbell cameras. The requests, from detective squads allegedly investigating illegal activity in proximity to the First Amendment-protected protest, sought an undisclosed number of hours of footage. The LAPD’s use of Ring doorbell cameras for political surveillance, and the SFPD’s use of USBID cameras for the same purpose, demonstrate how police are increasingly reliant on non-city and privately-owned, highly-networked security cameras, thus blurring the lines between private and public surveillance.
Enforcing Legal Limits on Police Spying
In October 2020, EFF and the ACLU of Northern California filed a lawsuit against the City of San Francisco regarding its illegal video surveillance of protestors against police violence and racism, revealed through our public records requests discussed above. SFPD's real-time monitoring of dissidents violated the City's Surveillance Technology Ordinance, enacted in 2019, which bars city agencies like the SFPD from acquiring, borrowing, or using surveillance technology, unless they first obtain approval from the Board of Supervisors following a public process with ample opportunity for community members to make their voices heard.
The lawsuit was filed on behalf of three activists of color who participated in and organized protests against police violence in May and June of 2020. They seek a court order requiring San Francisco and its police to stop using surveillance technologies in violation of the Ordinance.
Helping Communities Say “No” to Surveillance Technology
Around the country, EFF is working with local activists to ban government use of face recognition technology—a particularly pernicious form of biometric surveillance. Since 2019, when San Francisco became the first city to adopt such a ban, more than a dozen communities across the country have followed San Francisco's lead. In each city, residents stood up to say “no,” and their elected representatives answered that call. In the weeks and months following the nationwide protests against police violence, we continued to work closely with our fellow Electronic Frontier Alliance members, local ACLU chapters, and other dedicated organizers to support new bans on government face surveillance across the United States, including in Boston, MA, Portland, OR, Minneapolis, MN, and Kings County, WA.
Last year’s protests for police accountability made a big difference in New York City, where we actively supported the work of local advocates for three years to pass a surveillance transparency ordinance. The city’s long-overdue POST Act was passed as part of a three-bill package that many had considered longshots before the protests. But amid calls to defund the police, many of the bill’s detractors, including New York City Mayor Bill de Blasio, came to see the measure as appropriate and balanced.
EFF also aided our allies in St. Louis and Baltimore, who put the brakes on a panopticon-like aerial surveillance system, developed by a vendor ominously named Persistent Surveillance Systems. The spy plane program first invaded the privacy of Baltimore residents in the wake of the in-custody killing of Freddie Gray by police. EFF submitted a friend-of-the-court brief in a federal civil rights lawsuit, filed by ACLU, challenging Baltimore's aerial surveillance program. We were joined by the Brennan Center for Justice, the Electronic Privacy Information Center, FreedomWorks, the National Association of Criminal Defense Lawyers, and the Rutherford Institute. In St. Louis, EFF and local advocates—including the ACLU of Missouri and Electronic Frontier Alliance member Privacy Watch STL—worked to educate lawmakers and their constituents about the dangers and unconstitutionality of a bill that would have forced the City to enter into a contract to replicate the Baltimore spying program over St. Louis.
Protesters compelled companies around the country to reconcile their relationship to a deadly system of policing with their press releases in support of Black lives. Some companies heeded the calls from activists to stop their sale of face recognition technology to police departments. In June 2020, IBM, Microsoft, and Amazon paused these sales. Amazon said its pause would continue until such time as the government could "place stronger regulations to govern the ethical use of facial recognition."
This was, in many ways, an admission of guilt: companies recognized how harmful face recognition is in the hands of police departments. One year later, the regulatory landscape at the federal level has hardly moved. Following increased pressure by a coalition of civil rights and racial justice organizations, Amazon recently announced it was indefinitely extending its moratorium on selling Rekognition, its face recognition product, to police.
These are significant victories for activists, but the fight is not over. With companies like Clearview AI continuing to sell their face surveillance products to police, we still need a federal ban on government use of face recognition.
The Fight Is Far From Over
Throughout the last year of historic protests for Black lives, it has been more apparent than ever that longstanding EFF concerns, such as law enforcement surveillance and freedom of expression, are part of our nation’s long-needed reckoning with racial injustice.
EFF will continue to stand with our neighbors, communities mourning the victims of police homicide, and the Black-led movement against police violence. We stand with the protesters demanding true and lasting justice. We stand with the journalists facing arrest and other forms of violence for exposing these atrocities. And we will stand with all those using their cameras, phones, and other digital tools to lift up the voices of the survivors, those we’ve lost, and all who demand a truly safe and just future.
Related Cases: Williams v. San Francisco
Civil Society Groups Seek More Time to Review, Comment on Rushed Global Treaty for Intrusive Cross Border Police Powers
Electronic Frontier Foundation (EFF), European Digital Rights (EDRi), and 40 other civil society organizations urged the Council of Europe’s Parliamentary Assembly and Committee of Ministers to allow more time for them to provide much-needed analysis and feedback on the flawed cross border police surveillance treaty its cybercrime committee rushed to approve without adequate privacy safeguards.
Digital and human rights groups were largely sidelined during the drafting process of the Second Additional Protocol to the Budapest Convention, an international treaty that will establish global procedures for law enforcement in one country to access personal user data from technology companies in other countries. In 2017, as work on the police powers treaty began, the CoE Cybercrime Committee (T-CY)—which oversees the Budapest Convention—adopted internal rules that narrowed the range of participants allowed in the drafting of this new Protocol.
The process has been largely opaque, led by public safety and law enforcement officials, and the T-CY’s periodic consultations with civil society and the public have been criticized for their lack of detail, their short response timelines, and the lack of insight into countries’ deliberations on these issues. The T-CY rushed approval of the text on May 28th, signing off on provisions that place few limitations on, and provide little oversight of, police access to sensitive user data held by Internet companies around the world.
The Protocol now heads to the Council of Europe Parliamentary Assembly (PACE) Committee on Legal Affairs and Human Rights, which can recommend further amendments. We hope the PACE will hear civil society’s privacy concerns and issue an opinion addressing the lack of adequate data protection safeguards.
In a letter, dated March 31st, to PACE President Rik Daems and Chair of the Committee of Ministers Péter Szijjártó, digital and human rights groups said the treaty will likely be used extensively, with far-reaching implications on the security and privacy of people everywhere. It is imperative that fundamental rights guaranteed in the European Convention on Human Rights and in other agreements are not sidestepped in favor of law enforcement access to user data that is free of judicial oversight and strong privacy protections. The CoE’s plan is to finalize the Protocol's adoption by November and begin accepting signatures from countries sometime before 2022.
“We know that the Council of Europe has set high standards for its consultative process and has a strong commitment to stakeholder engagement,” EFF and its allies said in the letter. “The importance of meaningful outreach is all the more important given the global reach of the draft Protocol, and the anticipated inclusion of many signatory parties who are not bound by the Council’s central human rights and data protection instruments.”
In 2018 EFF, along with 93 civil society organizations from across the globe, asked the T-CY to invite civil society as experts in the drafting plenary meetings, as is customary in other Council of Europe Committee sessions. The goal was for the experts to listen to Member States’ opinions and build on those discussions. But we could not work towards this goal because we were not invited to observe the drafting process. While EFF has participated in every public consultation of the T-CY process since our 2018 coalition letter, the level of participation allowed has failed to comply with meaningful multi-stakeholder principles of transparency, inclusion, and accountability. As Tamir Israel (CIPPIC) and Katitza Rodriguez (EFF) pointed out in their analysis of the Protocol:
With limited incorporation of civil society input, it is perhaps no surprise that the final Protocol places law enforcement concerns first while human rights protections and privacy safeguards remain largely an afterthought. Instead of attempting to elevate global privacy protections, the Protocol’s central safeguards are left largely optional in an attempt to accommodate countries that lack adequate protections. As a result, the Protocol encourages global standards to harmonize at the lowest common denominator, weakening everyone’s right to privacy and free expression.
The full text of the letter:
Re: Ensuring Meaningful Consultation in Cybercrime Negotiations
We, the undersigned individuals and organizations, write to ask for a meaningful opportunity to give the final draft text of the proposed second additional protocol to Convention 185, the Budapest Cybercrime Convention, the full and detailed consideration which it deserves. We specifically ask that you provide external stakeholders further opportunity to comment on the significant changes introduced to the text on the eve of the final consultation round ending on 6th May, 2021.
The Second Additional Protocol aims to standardise cross-border access by law enforcement authorities to electronic personal data. While competing initiatives are also underway at the United Nations and the OECD, the draft Protocol has the potential to become the global standard for such cross-border access, not least because of the large number of states which have already ratified the principal Convention. In these circumstances, it is imperative that the Protocol should lay down adequate standards for the protection of fundamental rights.
Furthermore, the initiative comes at a time when even routine criminal investigations increasingly include cross-border investigative elements and, in consequence, the protocol is likely to be used widely. The protocol therefore assumes great significance in setting international standards, and is likely to be used extensively, with far-reaching implications for privacy and human rights around the world. It is important that its terms are carefully considered and ensure a proportionate balance between the objective of securing or recovering data for the purposes of law enforcement and the protection of fundamental rights guaranteed in the European Convention on Human Rights and in other relevant national and international instruments.
In light of the importance of this initiative, many of us have been following this process closely and have participated actively, including at the Octopus Conference in Strasbourg in November, 2019 and the most recent and final consultation round which ended on 6th May, 2021.
Although many of us were able to engage meaningfully with the text as it stood in past consultation rounds, it is significant that these earlier iterations of the text were incomplete and lacked provisions to protect the privacy of personal data. In the event, the complete text of the draft Protocol was not publicly available before 12th April, 2021. The complete draft text introduces a number of significant alterations, most notably the inclusion of Article 14, which added for the first time proposed minimum standards for privacy and data protection. While external stakeholders were previously notified that these provisions were under active consideration and would be published in due course, the publication of the revised draft on 12th April offered the first opportunity to examine these provisions and consider other elements of the Protocol in the full light of these promised protections.
We were particularly pleased to see the addition of Article 14, and welcome its important underlying intent—to balance law enforcement objectives with fundamental rights. However, the manner in which this is done is, of necessity, complex and intricate, and, even on a cursory preliminary examination, it is apparent that there are elements of the article which require careful and thoughtful scrutiny, in the light of which they might be capable of improvement.
As a number of stakeholders have noted, the latest (and final) consultation window was too short. It is essential that adequate time is afforded to allow a meaningful analysis of this provision and that all interested parties be given a proper chance to comment. We believe that such continued engagement can serve only to improve the text.
The introduction of Article 14 is particularly detailed and transformative in its impact on the entirety of the draft Protocol. Keeping in mind the multiple national systems potentially impacted by the draft Protocol, providing meaningful feedback on this long anticipated set of safeguards within the comment window has proven extremely difficult for civil society groups, data protection authorities and a wide range of other concerned experts.
Complicating our analysis further are gaps in the Explanatory Report accompanying the draft Protocol. We acknowledge that the Explanatory Report might continue to evolve, even after the Protocol itself is finalised, but the absence of elaboration on a pivotal provision such as Article 14 poses challenges to our understanding of its implications and our resulting ability meaningfully to engage in this important treaty process.
We know that the Council of Europe has set high standards for its consultative process and has a strong commitment to stakeholder engagement. The importance of meaningful outreach is all the more important given the global reach of the draft Protocol, and the anticipated inclusion of many signatory parties who are not bound by the Council’s central human rights and data protection instruments. Misalignments between Article 14 and existing legal frameworks on data protection such as Convention 108/108+ similarly demand careful scrutiny so that their implications are fully understood.
In these circumstances, we anticipate that the Council will wish to accord the highest priority to ensuring that fundamental rights are adequately safeguarded and that the consultation process is sufficiently robust to instill public confidence in the Protocol across the myriad jurisdictions which are to consider its adoption. The Council will, of course, appreciate that these objectives cannot be achieved without meaningful stakeholder input.
We are anxious to assist the Council in this process. In that regard, constructive stakeholder engagement requires a proper opportunity fully to assess the draft protocol in its entirety, including the many and extensive changes introduced in April 2021. We anticipate that the Council will share this concern, and to that end we respectfully suggest that the proposed text (inclusive of a completed explanatory report) be widely disseminated and that a minimum period of 45 days be set aside for interested stakeholders to submit comments.
We do realise that the T-CY Committee had hoped for an imminent conclusion to the drafting process. That said, adding a few months to a treaty process that has already spanned several years of internal drafting is both necessary and proportionate, particularly when the benefits of doing so will include improved public accountability and legitimacy, a more effective framework for balancing law enforcement objectives with fundamental rights, and a finalised text that reflects the considered input of civil society.
We very much look forward to continuing our engagement with the Council both on this and on future matters.
With best regards,
- Electronic Frontier Foundation (international)
- European Digital Rights (European Union)
- The Council of Bars and Law Societies of Europe (CCBE) (European Union)
- Access Now (International)
- ARTICLE19 (Global)
- ARTICLE19 Brazil and South America
- Association for Progressive Communications (APC)
- Association of Technology, Education, Development, Research and Communication - TEDIC (Paraguay)
- Asociación Colombiana de Usuarios de Internet (Colombia)
- Asociación por los Derechos Civiles (ADC) (Argentina)
- British Columbia Civil Liberties Association (Canada)
- Chaos Computer Club e.V. (Germany)
- Content Development & Intellectual Property (CODE-IP) Trust (Kenya)
- net (Sweden)
- Derechos Digitales (Latinoamérica)
- Digitale Gesellschaft (Germany)
- Digital Rights Ireland (Ireland)
- Danilo Doneda, Director of Cedis/IDP and member of the National Council for Data Protection and Privacy (Brazil)
- Electronic Frontier Finland (Finland)
- works (Austria)
- Fundación Acceso (Centroamérica)
- Fundacion Karisma (Colombia)
- Fundación Huaira (Ecuador)
- Fundación InternetBolivia.org (Bolivia)
- Hiperderecho (Peru)
- Homo Digitalis (Greece)
- Human Rights Watch (international)
- Instituto Panameño de Derecho y Nuevas Tecnologías - IPANDETEC (Central America)
- Instituto Beta: Internet e Democracia - IBIDEM (Brazil)
- Institute for Technology and Society - ITS Rio (Brazil)
- International Civil Liberties Monitoring Group (ICLMG)
- Iuridicium Remedium z.s. (Czech Republic)
- IT-Pol Denmark (Denmark)
- Douwe Korff, Emeritus Professor of International Law, London Metropolitan University
- Laboratório de Políticas Públicas e Internet - LAPIN (Brazil)
- Laura Schertel Mendes, Professor, Brasilia University and Director of Cedis/IDP (Brazil)
- Open Net Korea (Korea)
- OpenMedia (Canada)
- Privacy International (international)
- R3D: Red en Defensa de los Derechos Digitales (México)
- Samuelson-Glushko Canadian Internet Policy & Public Interest Clinic - CIPPIC (Canada)
- Usuarios Digitales (Ecuador)
- org (Netherlands)
- Xnet (Spain)
 See, for example, Access Now, comments on the draft 2nd Additional Protocol to the Budapest Convention on Cybercrime, available at: https://rm.coe.int/0900001680a25783; EDPB, contribution to the 6th round of consultations on the draft Second Additional Protocol to the Council of Europe Budapest Convention on Cybercrime, available at: https://edpb.europa.eu/system/files/2021-05/edpb_contribution052021_6throundconsultations_budapestconvention_en.pdf.
 Alessandra Pierucci, Correspondence to Ms. Chloé Berthélémy, dated 17 May 2021; Consultative Committee of the Convention for the Protection of Individuals with Regard to Automated Processing of Personal Data, Directorate General Human Rights and Rule of Law, Opinion on Draft Second Additional Protocol, May 7, 2021, https://rm.coe.int/opinion-of-the-committee-of-convention-108-on-the-draft-second-additio/1680a26489; EDPB, see footnote 1; Joint Civil Society letter, 2 May: available at https://edri.org/wp-content/uploads/2021/05/20210420_LetterCoECyberCrimeProtocol_6thRound.pdf.