EFF: Updates

EFF's Deeplinks Blog: Noteworthy news from around the internet

Chile’s New “Who Defends Your Data?” Report Shows ISPs’ Race to Champion User Privacy

Thu, 05/27/2021 - 12:53pm

Derechos Digitales’ fourth ¿Quién Defiende Tus Datos? (Who Defends Your Data?) report on Chilean ISPs' data privacy practices launched today, showing that companies must keep improving their commitments to user rights if they want to hold their leading positions. Although Claro (América Móvil) remains at the forefront, as in 2019's report, Movistar (Telefónica) and GTD have made progress in all the evaluated categories. WOM lost points and ended in a tie with Entel for second place, while VTR lagged behind.

Over the last four years, certain transparency practices that once seemed unusual in Latin America have become increasingly common. In Chile, they have even become the default. This year, all of the companies evaluated except VTR received credit for adopting three important industry-accepted best practices: publishing law enforcement guidelines, which provide a glimpse into the process and standards companies use to analyze government requests for user data; disclosing personal data processing practices in contracts and policies; and releasing transparency reports.

The publishing of transparency reports has also become more common. These reports are critical for understanding how a company manages user data and handles government data requests. VTR is the only company that has not updated its transparency report recently; its latest dates to May 2019. Since the last edition, GTD published its first transparency report and law enforcement guidelines. Similarly, for the first time Movistar has released specific guidelines for authorities requesting access to users' data in Chile, and it received credit for denying legally controversial government requests for users' data.

Most of the companies also have policies stating their right to notify users when there is no secrecy obligation in place or its term has expired. But as in the previous edition, earning a full star in this category requires more than that: companies have to clearly set up a notification procedure or make concrete efforts to put one in place. Derechos Digitales also urged providers to engage in legislative discussions regarding Chile's cybercrime bill, in favor of stronger safeguards for user notification. Claro has upheld the right to notification within the country's data protection law reform and has raised concerns about attempts to increase the data retention period for communications metadata in the cybercrime bill.

Responding to concerns over governments' use of location data in the context of the COVID-19 pandemic, the new report also sheds light on whether ISPs have made public commitments not to disclose user location data without a prior judicial order unless that data is anonymized and aggregated. While the pandemic has changed society in many ways, it has not reduced the need for privacy when it comes to sensitive personal data. Companies' policies should also push back against sensitive personal data requests that seek to target groups rather than individuals. In addition, the study aimed to spot which providers went public about their anonymized and aggregated location data-sharing agreements with private and public institutions. Movistar is the only company that has disclosed such agreements.

Together, the six researched companies account for 88.3% of fixed Internet users and 99.2% of mobile connections in Chile.

This year's report rates providers on five criteria: data protection policies, law enforcement guidelines, defending users in courts or Congress, transparency reports, and user notification. The full report is available in Spanish, and here we highlight the main findings.

Main results

Data Protection Policies and ARCO Rights

Compared to the 2019 edition, Movistar and GTD improved their marks on data protection policies. Companies should not only publish those policies but also commit to user-centric data protection principles inspired by the bill reforming the data protection law, currently under discussion in the Chilean Congress. GTD has overcome its poor score from 2019 and earned a full star in this category this year. Movistar received a partial score for failing to commit to the complete set of principles. On the upside, the ISP has created a dedicated page to inform users about their ARCO rights (access, rectification, cancellation, and opposition). The report also credits WOM, Claro, and Entel for providing a specific point of contact for users to exercise these rights. WOM went further, making it easier for users to unsubscribe from the provider's targeted ads database.

Transparency Reports and Law Enforcement Guidelines

Both transparency reports and law enforcement guidelines have become an industry norm among Chile's main ISPs. All featured companies have published them, although VTR has failed to disclose an updated transparency report since the 2019 study. Among many advances since the last edition, GTD disclosed its first transparency report, covering government data requests during 2019. The company earned a partial score in this category for not releasing new statistical data about 2020's requests.

As for law enforcement guidelines, not all companies clearly state the need for a judicial order before handing over different kinds of communications metadata to authorities. Claro, Entel, and GTD have the most explicit commitments in this sense. VTR requests a judicial order before carrying out interception measures or handing over call records to authorities, but it does not mention this requirement for other metadata, such as IP addresses. Movistar's guidelines are detailed about the types of user data the government can ask for, but they refer to judicial authorization only when addressing the interception of communications.

Finally, WOM's 2021 guidelines explicitly require a warrant before handing over phone and tower traffic data, as well as geolocation data. As the report points out, in early 2020 WOM was featured in the news as the only ISP to have complied with a direct and massive location data request made by prosecutors, a claim the company denied. We've written about this case as an example of worrisome reverse searches, which target all users in a particular area instead of specific individuals. Directly related to this concern, this year's report underscores Claro's and Entel's commitments to comply only with individualized personal data requests.

Pushing for User Notification about Data Requests

Claro remains in the lead when it comes to user notification. Beyond stating in its policy that it has a right to notify users when notification is not prohibited by law (as all the other companies except Movistar do), Claro also describes the user notice procedure for data requests in civil, labor, and family judicial cases. Derechos Digitales points out that the ISP has also explored with the Public Prosecutor's Office ways to implement such notification in criminal cases once the secrecy obligation has expired. WOM's transparency report mentions similar efforts, urging authorities to collaborate in providing information to ISPs about the status of investigations and legal cases so that providers know when a secrecy obligation is no longer in effect. As the company says:

“Achieving advances in this area would allow the various stakeholders to continue to comply with their legal duties and at the same time make progress in terms of transparency and safeguarding users' rights.”

Having Users' Backs in the Face of Disproportionate Data Requests and Legislative Proposals

Companies can also stand with their users by challenging disproportionate data requests or defending users' privacy in Congress. WOM and Claro have specific sections on their websites listing some of their work on this front (see, respectively, the tabs "protocolo de entrega de información a la autoridad" and "relación con la autoridad"). Such reports include Claro's meetings with Chilean senators who take part in the commission discussing the cybercrime bill. The ISP reports having emphasized concerns about the expansion of the mandatory retention period for metadata, as well as suggesting that the reform of the country's data protection law should explicitly authorize telecom operators to notify users about surveillance measures.

Entel and Movistar received equally high scores in this category. Entel, in particular, has kept up its fight against a disproportionate request made by Chile's telecommunications regulator (Subtel) for subscriber data. In 2018, the regulator asked for personal information on the totality of Entel's customer base in order to share it with private research companies carrying out satisfaction surveys. Other Chilean ISPs received the same request, but only Entel challenged the legal grounds of Subtel's authority to make such a demand. The case, first reported for this category in the last edition, had a new development in late 2019, when the Supreme Court confirmed the sanctions against Entel for not delivering the data but reduced the company's fine. Civil society groups Derechos Digitales, Fundación Datos Protegidos, and Fundación Abriendo Datos have recently released a statement stressing how Subtel's request conflicts with data protection principles, particularly purpose limitation, proportionality, and data security.

Movistar's credit in this category also relates to a Subtel request for subscriber data, this one in 2019. The ISP denied the demand, pointing out a legal tension between the agency’s oversight authority to request customer personal data without user consent and privacy safeguards provided by Chile’s Constitution and data protection law that set limits on personal data-sharing.

***

Since its first edition in 2017, Chile's report has shown solid and continuous progress, fostering competition among ISPs toward stronger standards and commitments in favor of users' privacy and transparency. Derechos Digitales' work is part of a series of reports across Latin America and Spain adapted from EFF's Who Has Your Back? report, which for nearly a decade has evaluated the practices of major global tech companies.

European Court on Human Rights Bought Spy Agencies’ Spin on Mass Surveillance

Wed, 05/26/2021 - 7:15pm

The European Court of Human Rights (ECHR) Grand Chamber this week affirmed what we've long known: that the United Kingdom's mass surveillance regime, which involved the indiscriminate and suspicionless interception of people's communications, violated basic human rights to privacy and free expression. We applaud the Strasbourg-based Grand Chamber, the highest judicial body of the Council of Europe, for the ruling and for its strong stance demanding new safeguards to prevent privacy abuses, beyond those required by a lower court in 2018.

Yet the landmark decision, while powerful in declaring unlawful UK mass interception powers that failed to protect journalists and lacked legal safeguards to ensure British spy agency GCHQ wasn't abusing its power, imprudently bought into spy agency propaganda that suspicionless interception powers must be granted to ensure national security. The Grand Chamber rejected the argument that mass surveillance is an inherently disproportionate measure, believing instead that any potential privacy abuses can be mitigated by "minimization and targeting" within the mass spying process. We know this doesn't work. The Grand Chamber refused to insist that governments stop bulk interception, a mistake recognized by ECHR Judge Paulo Pinto de Albuquerque, who said in a dissenting opinion:

For good or ill, and I believe for ill more than for good, with the present judgment the Strasbourg Court has just opened the gates for an electronic “Big Brother” in Europe.

The case at issue, Big Brother Watch and Others v. The United Kingdom, was brought in the wake of disclosures by whistleblower Edward Snowden, who confirmed that the NSA and GCHQ were routinely spying on hundreds of millions of innocent people around the globe. A group of more than 15 human rights organizations filed a complaint against portions of the UK's mass surveillance regime before the ECHR. In a decision in 2018, the court rejected the UK’s spying programs for violating the right to privacy and freedom of expression, but it failed to say that the UK's indiscriminate and suspicionless interception regime was inherently incompatible with the European Convention on Human Rights. EFF filed a Declaration as part of this proceeding. The court, however, acknowledged the lack of robust safeguards needed to provide adequate guarantees against abuse. The Grand Chamber’s decision this week came in an appeal to the 2018 ruling. 

The new ruling goes beyond the initial 2018 decision by requiring prior independent authorization for the mass interception of communications, which must include meaningful "end-to-end safeguards." The Grand Chamber emphasized that there is considerable potential for mass interception powers to be abused, adversely affecting people's rights. It warns that these powers should be subject to ongoing assessments of their necessity and proportionality at every stage of the process, to independent authorization at the outset, and to ex post facto oversight robust enough to keep the "interference" with people's rights to only what is "necessary" in a democratic society. Under powers granted to UK security services in 2000, interception needed only the authorization of the Secretary of State (Home Office). The Grand Chamber ruled that, lacking adequate safeguards like independent oversight, UK surveillance law did not meet the required "quality of law" standard and was incapable of keeping the "interference" to what was necessary.

In its ruling, the Grand Chamber assessed the quality of the UK's bulk interception law and developed an eight-part test that a legal framework authorizing bulk interception must meet. The legal framework must clearly define:

the grounds on which bulk interception may be authorized;
the circumstances in which an individual's communications may be intercepted;
the procedure to be followed for granting authorization;
the procedures to be followed for selecting, examining, and using intercept material;
the precautions to be taken when communicating the material to other parties;
the limits on the duration of interception, the storage of intercept material, and the circumstances in which such material must be erased and destroyed;
the procedures and modalities for supervision by an independent authority of compliance with the above safeguards, and its powers to address non-compliance; and
the procedures for independent ex post facto review of such compliance, and the powers vested in the competent body to address instances of non-compliance.

These are welcome safeguards against abuse. But the opinion doesn't contain all good news. We are disappointed that the Grand Chamber found that the UK's practice of requesting intercepted material from foreign governments and intelligence agencies, rather than intercepting and collecting it directly, did not violate the rights to privacy and free expression. Our friends at ARTICLE19 and others argued otherwise, and their position reflects our views: only truly targeted surveillance constitutes a legitimate restriction on free expression and privacy, and any surveillance measure should be authorized only by a competent judicial authority that is independent and impartial.

Back on the bright side, we were happy that the Grand Chamber once again rejected the UK government's contention (akin to the U.S. government's) that privacy invasions only occur once a human being looks at intercepted communications. The Grand Chamber confirmed that the legally significant "interference" with privacy begins as soon as communications are first intercepted—becoming more and more severe as they are stored and later used by government agents. The steps include interception and initial retention of communications data; application of specific selectors to the retained data; the examination of selected data by analysts; and the subsequent retention of data and use of the "final product," including the sharing of data with third parties. The Grand Chamber correctly applied its analysis to every step along the way, something U.S. courts have yet to do.

The Grand Chamber also found that the government had neglected to subject its targeting practices to enough authorization procedures. Bulk communications may be analyzed (by machines or by people) using “selectors”—that is, search terms such as account names or device addresses—and the government apparently did not specify how these selectors would be chosen or what kinds of selectors it might use in the course of surveillance procedures. It required analysts performing searches on people’s communications to document why they searched for terms connected to particular people’s identities, but did not have anyone else (other than an individual analyst) decide whether those search terms were OK.

The Grand Chamber ruled that acquiring communications metadata through mass interception powers is just as intrusive as intercepting communications content. It considers that the interception, retention, and searching of communications data should be analyzed taking into account the same safeguards as those applicable to the content of communications. However, the Grand Chamber decided that while the interception of communications data and content will normally be authorized at the same time, once obtained the two may be treated differently. The Court explained: 

In view of the different character of related communications data and the different ways in which they are used by the intelligence services, as long as the aforementioned safeguards are in place, the Court is of the opinion that the legal provisions governing their treatment may not necessarily have to be identical in every respect to those governing the treatment of content.

On concerns raised about the impact of surveillance on journalists and their sources, the Grand Chamber agreed that the UK was substantially deficient in not having proactive independent oversight of surveillance of journalists’ communications, whereby “a judge or other independent and impartial decision-making body” would have applied a higher level of scrutiny to this surveillance.

Overall, the Grand Chamber decision, although it includes some good safeguards, falls below the standards of the Court of Justice of the European Union, the EU's highest court on matters of European Union law. For instance, the Luxembourg court's judgment in Schrems v. Data Protection Commissioner (Schrems I) made clear that legal frameworks granting public authorities access to data on a generalized basis compromise "the essence of the fundamental right to private life," as guaranteed by Article 7 of the European Union Charter of Fundamental Rights. In other words, any law that compromises the essence of the right to private life can never be proportionate or necessary.

While we would like more, this decision still puts the Grand Chamber way ahead of U.S. courts deciding cases challenging bulk surveillance. Courts in the U.S. have tied themselves in knots trying to accommodate the U.S. government's overbroad secrecy claims and the demands of U.S. standing doctrine. In Europe, the UK did not claim that the case could not be decided due to secrecy. More importantly, the Grand Chamber was able to reach a decision on the merits without endangering the national security of the UK.

U.S. courts should take heed: the sky will not fall if you allow full consideration of the legality of mass surveillance in regular courts, rather than the truncated, rubber-stamp review currently done in secret by the Foreign Intelligence Surveillance Court (FISC). Americans, just like Europeans, deserve to communicate without being subject to bulk surveillance. While it contains a serious flaw, the Grand Chamber ruling demonstrates that the legality of mass surveillance programs can and should be subject to thoughtful, balanced, and public scrutiny by an impartial body, independent from the executive branch, that isn't just taking the government's word for it but applying laws that guarantee privacy, freedom of expression, and other human rights.

Related Cases: Jewel v. NSA

Amid Systemic Censorship of Palestinian Voices, Facebook Owes Users Transparency

Tue, 05/25/2021 - 3:10pm

Over the past few weeks, as protests in—and in solidarity with—Palestine have grown, so too have violations of the freedom of expression of Palestinians and their allies by major social media companies. From posts incorrectly flagged by Facebook as incitement to violence, to financial censorship of relief payments made on Venmo, to the removal of Instagram Stories (which also heavily affected activists in Colombia, Canada, and Brazil), Palestinians are experiencing an unprecedented level of censorship at a time when digital communications are absolutely critical.

The vitality of social media during a time like this cannot be overstated. Journalistic coverage from the ground is minimal, owing to a number of factors, including restrictions on movement by Israeli authorities, while, as the New York Times reported, misinformation is rife and has been repeated by otherwise reliable media sources. Israeli officials have even been caught spreading misinformation on social media.

Palestinian digital rights organization 7amleh has spent the past few weeks documenting content removals, and a coalition of more than twenty organizations, including EFF, has reached out to social media companies, including Facebook and Twitter. The demands include that the companies immediately stop censoring, and reinstate, the accounts and content of Palestinian voices; open an investigation into the takedowns; and transparently and publicly share the results of those investigations.

A brief history

Palestinians face a number of obstacles when it comes to online expression. Depending on where they reside, they may be subject to differing legal regimes, and they face censorship from both Israeli and Palestinian authorities. Most Silicon Valley tech companies have offices in Israel (but not Palestine), while some—such as Facebook—have struck particular deals with the Israeli government to address incitement. While incitement to violence is indeed against the company's community standards, groups like 7amleh say that this agreement results in inconsistent application of the rules, with incitement against Palestinians often allowed to remain on the platform.

Additionally, the presence of Hamas—which is the democratically-elected government of Gaza, but is also listed as a terrorist organization by the United States and the European Union—complicates things for Palestinians, as any mention of the group (including, at times, something as simple as the group’s flag flying in the background of an image) can result in content removals.

And it isn’t just Hamas—last week, Buzzfeed documented an instance where references to Jerusalem’s Al Aqsa mosque, one of the holiest sites in Islam, were removed because “Al Aqsa” is also contained within another designated group, Al Aqsa Martyrs’ Brigade. Although Facebook apologized for the error, this kind of mistake has become all too common, particularly as reliance on automated moderation has increased amidst the pandemic.

“Dangerous Individuals and Organizations”

Facebook’s Community Standard on Dangerous Individuals and Organizations gained a fair bit of attention a few weeks back when the Facebook Oversight Board affirmed that President Trump violated the standard with several of his January 6 posts. But the standard is also regularly used as justification for the widespread removal of content by Facebook pertaining to Palestine, as well as other countries like Lebanon. And it isn’t just Facebook—last Fall, Zoom came under scrutiny for banning an academic event at San Francisco State University (SFSU) at which Palestinian figure Leila Khaled, alleged to belong to another US-listed terrorist organization, was to speak.

SFSU fell victim to censorship again in April of this year, when its Arab and Muslim Ethnicities and Diasporas (AMED) Studies Program discovered that its Facebook event "Whose Narratives? What Free Speech for Palestine?," scheduled for April 23, had been taken down for violating Facebook Community Standards. Shortly thereafter, the program's entire page, "AMED STUDIES at SFSU," was deleted, along with years of archival material on classes, syllabi, webinars, and vital discussions not only on Palestine but on Black, Indigenous, Asian, and Latinx liberation, gender and sexual justice, and a range of Jewish voices and perspectives, including opposition to Zionism. Although no specific violation was noted, Facebook has since confirmed that the post and the page were removed for violating the Dangerous Individuals and Organizations standard. This was in addition to cancellations by other platforms including Google, Zoom, and Eventbrite.

Given the frequency and the high-profile contexts in which Facebook’s Dangerous Individuals and Organizations Standard is applied, the company should take extra care to make sure the standard reflects freedom of expression and other human rights values. But to the contrary, the standard is a mess of vagueness and overall lack of clarity—a point that the Oversight Board has emphasized.

Facebook has said that the purpose of this community standard is to “prevent and disrupt real-world harm.” In the Trump ruling, the Oversight Board found that President Trump’s January 6 posts readily violated the Standard. “The user praised and supported people involved in a continuing riot where people died, lawmakers were put at serious risk of harm, and a key democratic process was disrupted. Moreover, at the time when these restrictions were extended on January 7, the situation was fluid and serious safety concerns remained.”

But in two previous decisions, the Oversight Board criticized the standard. In a decision overturning Facebook’s removal of a post featuring a quotation misattributed to Joseph Goebbels, the Oversight Board admonished Facebook for not including all aspects of its policy on dangerous individuals and organizations in the community standard.

Facebook apparently has self-designated lists of individuals and organizations subject to the policy that it does not share with users, and it treats any quoting of such persons as an "expression of support" unless the user provides additional context to make their benign intent explicit, a condition also not disclosed to users. Facebook's lists evidently include US-designated foreign terrorist organizations, but they seem to go beyond that list.

As the Oversight Board concluded, “this results in speech being suppressed which poses no risk of harm” and found that the standard fell short of international human rights standards: “the policy lacks clear examples that explain the application of ‘support,’ ‘praise’ and ‘representation,’ making it difficult for users to understand this Community Standard. This adds to concerns around legality and may create a perception of arbitrary enforcement among users.” Moreover, “the policy fails to explain how it ascertains a user’s intent, making it hard for users to foresee how and when the policy will apply and conduct themselves accordingly.”

The Oversight Board recommended that Facebook explain and provide examples of the application of key terms used in the policy, including the meanings of “praise,” “support,” and “representation.” The Board also recommended that the community standard provide clearer guidance to users on making their intent apparent when discussing such groups, and that a public list of “dangerous” organizations and individuals be provided to users.

The United Nations Special Rapporteur on Freedom of Expression also expressed concern that the standard, and specifically the language of “praise” and “support,” was “excessively vague.”

Recommendations

Policies such as Facebook’s that restrict references to designated terrorist organizations may be well-intentioned, but in their blunt application, they can have serious consequences for documentation of crimes—including war crimes—as well as vital expression, including counterspeech, satire, and artistic expression, as we’ve previously documented. While companies, including Facebook, have regularly claimed that they are required to remove such content by law, it is unclear to what extent this is true. The legal obligations are murky at best. Regardless, Facebook should be transparent about the composition of its "Dangerous Individuals and Organizations" list so that users can make informed decisions about what they post.

But while some content may require removal under certain jurisdictions, it is clear that other decisions are made on the basis of internal policies and external pressure—and are often not in the best interest of the individuals that they claim to serve. This is why it is vital that companies include vulnerable communities—in this case, Palestinians—in policy conversations.

Finally, transparency and appropriate notice to users would go a long way toward mitigating the harm of such takedowns—as would ensuring that every user has the opportunity to appeal content decisions in every circumstance. The Santa Clara Principles on Transparency and Accountability in Content Moderation offer a baseline for companies.

Activists Mobilize to Fight Censorship and Save Open Science

Mon, 05/24/2021 - 4:16pm

Major publishers want to censor research-sharing resource Sci-Hub from the internet, but archivists are quickly responding to make that impossible. 

More than half of academic publishing is controlled by only five publishers. This position is built on the premise that users should pay for access to scientific research, to compensate publishers for their investment in editing, curating, and publishing it. In reality, research is typically submitted and evaluated by scholars without compensation from the publisher. What this model actually does is profit from restricting access to articles behind burdensome paywalls. One project in particular, Sci-Hub, has threatened to break down this barrier by sharing articles without restriction. As a result, publishers are going to every corner of the map to destroy the project and wipe it from the internet. Continuing the long tradition of internet hacktivism, however, redditors are mobilizing to create an uncensorable backup of Sci-Hub.

Paywalls: More Inequity and Less Progress

It's an open secret at this point that the paywall model used by major publishers, where one must pay to read published articles, is at odds with the way science works, which is one reason researchers regularly undermine it by sharing PDFs of their work directly. The primary functions paywalls serve now are to drive up contract prices with universities and to ensure that current research is available only to the most affluent or well-connected. The cost of access has gotten so out of control that even $35 billion institutions like Harvard have warned that contract costs are becoming untenable. If this is the case for Harvard, it's hard to see how smaller entities can manage these costs, particularly those in the global south. As a result, crucial and potentially life-saving knowledge is locked away from those who need it most. That's why the fight for open access is a fight for human rights.

Indeed, the past year has shown us the incredible power of open access after publishers made COVID-19 research immediately available at no cost. This temporary move towards open access helped support the unprecedented global public health effort that spurred the rapid development of vaccines, treatments, and better informed public health policies. This kind of support for scientific progress should not be reserved for a global crisis; instead, it should be the standard across all areas of research.

Sci-Hub and the Fight for Access

Sci-Hub is a crucial piece of the movement toward open access. The project was started over 10 years ago by a researcher in Kazakhstan, Alexandra Elbakyan, with the goal "to remove all barriers in the way of science." The result has been a growing library of millions of articles made freely accessible, running only on donations. Within six years it became the largest Open Access academic resource in the world, and it has only grown since, bringing cutting-edge research to rich and poor countries alike.

But that invaluable resource has come at a cost. Since its inception, Sci-Hub has faced numerous legal challenges and investigations. Some of these challenges have led to dangerously broad court orders. One such challenge is being addressed in India, where publishers Elsevier, Wiley, and the American Chemical Society have asked courts to block access to the site. The courts have been hesitant, however, as the site has clear public importance, and local experts have argued that Sci-Hub is the only way for many in the country to access research. In any event, one truth cannot be avoided: researchers want to share their work, not make publishers rich.

Archivists Rush to Defend Sci-Hub

With these challenges ongoing, Sci-Hub's Twitter account was permanently suspended under the site's "counterfeit policy." Given the timing of this suspension, Elbakyan and other academic activists believe it was directly related to the legal action in India. A few months later, Elbakyan shared on her personal Twitter account that Apple had granted the FBI access to her account data after a request in early 2019.

Responding to these attacks last week, redditors on the archivist subreddit r/DataHoarder have (once again) rallied to support the site. In a post two weeks ago, users appealed to the legacy of reddit co-founder Aaron Swartz and called for anyone with hard drive space and a VPN to defend "free science" by downloading and seeding 850 torrents containing Sci-Hub's 77 TB library. The ultimate goal of these activists is to use these torrents, containing 85 million scientific articles, to make a fully decentralized and uncensorable iteration of Sci-Hub.

This project should sound utopian to anyone who values access to scientific knowledge, a goal publishers and the DOJ have taken great pains to obstruct with legal obstacles. A fully decentralized, uncensorable, and globally accessible database for scientific work is a potential engine for greater research equity. The only potential losers with such a resource are the old gatekeepers who rely on an artificial scarcity of scientific knowledge, and increasingly on tools of surveillance, to extract exorbitant profit margins from the labor of scientists.

It’s Time to Fight for Open Access

Journal publishers must do their part to make research immediately available to all, freely and without privacy-invasive practices. There is no need for a valuable resource such as Sci-Hub to live in the shadows of copyright litigation. While we hope publishers make this change willingly, there are other common-sense initiatives that could help. For example, there are federal bills like the Fair Access to Science and Technology Research Act (FASTR), and state bills such as California's A.B. 2192, which can require government-funded research to be made freely available. The principle behind these bills is simple: if the public funded the research, the public shouldn't have to pay again to access it.

In addition to supporting legislation, students and academics can also advocate for Open Access on campus. Colleges can not only provide a financial incentive by breaking contracts with publishers but also support researchers in the process of making their own work Open Access. The UC system, for example, has required all research from its 10 campuses to be made open access since 2013, a policy more public institutions can and should adopt. Even talking about open access with peers on campus can stir interest in local organizing, and when it does, our EFA local organizing toolkit and organizing team (organizing@eff.org) can help support these local efforts.

We need to lift these artificial restraints on science imposed by major publishers and take advantage of 21st-century technology. Initiatives taken by archivist activists, such as those supporting Sci-Hub, shouldn't be caught in a game of cat and mouse but supported by policies and business models that allow such projects to thrive and promote equity.

EFF Sues Police Standards Agency to Obtain Use of Force Training Materials

Fri, 05/21/2021 - 4:51pm
Police Group Abusing Copyright Law to Withhold Documents, Violate Public Records Act

Woodland, California—The Electronic Frontier Foundation (EFF) sued the California Commission on Peace Officer Standards and Training (POST) to obtain materials showing how police are trained in the use of force, after the organization cited third-party copyright interests to illegally withhold them from the public.

The lawsuit, filed under California’s Public Records Act (PRA), seeks a court order forcing POST to make public unredacted copies of outlines for a number of police training courses, including training on use of force. As the country struggles to process the many painful-to-watch examples of extensive and deadly use of force by police, Californians have a right to know what officers are being trained to do, and how they are being trained. The complaint was filed yesterday in the Superior Court of California, Yolo County.

California lawmakers recognized the need for more transparency in law enforcement by passing SB 978, which took effect last year. The law requires POST and local law enforcement agencies to publish, in a conspicuous space on their websites, training manuals and other materials about policies and practices.

“POST is unlawfully hiding this material,” said EFF Staff Attorney Cara Gagliano. “SB 978 is clear—police must allow the public to see its training manuals. Doing so helps educate the community about what to expect and how to behave during police encounters, and helps to hold police accountable when they don’t comply with their training.”

As part of a 2020 review of POST’s compliance with the law, EFF discovered that the use of force training materials were not on its website. EFF requested the documents under the PRA and was sent copies of documents listing use of force training providers and certification dates. The only substantive documents it received were heavily redacted copies of the course outlines, with just the subject headings visible.

POST said it would not make public the material because the California Peace Officers Association (CPOA), which created the training manuals, had made a copyright claim over the materials and requested they not be published on a public website. POST agreed, citing compliance with federal copyright law.

But SB 978 mandates that POST must publish training manuals if the materials would be available to the public under the PRA, which does not contain any exception for copyrighted material. What’s more, the PRA says state agencies can’t allow “other parties” to control whether information subject to the law can be disclosed.

“Copyright law is not a valid excuse for POST to evade its obligation under the law to make training materials public,” said Gagliano. “Police and the organizations that create their training manuals are not above the law."

For the complaint:
https://www.eff.org/document/eff-v-post-complaint

For more on digital rights and the Black-led movement against police violence:
https://www.eff.org/issues/digital-rights-and-black-led-movement-against-police-violence

Contact: Cara Gagliano, Staff Attorney, cara@eff.org

Washington State Has Sued a Patent Troll For Violating Consumer Protection Laws

Fri, 05/21/2021 - 12:12pm

Landmark Technology, a patent troll that has spent 20 years threatening and suing small businesses over bogus patents, and received EFF’s Stupid Patent of the Month award in 2019, has been sued by the State of Washington.

Washington Attorney General Bob Ferguson has filed a lawsuit claiming that Landmark Technology has violated the state’s Patent Troll Protection Act, which bans “bad faith” assertions of patent infringement. Following a widespread campaign of patent demand letters, more than 30 states passed some kind of law placing limits on bad-faith patent assertions.

These laws face an uphill battle to be enforced. First of all, the Constitution places important limits on the government’s ability to penalize the act of seeking legal redress. Second, the Federal Circuit has specifically held that a high bar of bad faith must be established for laws that would penalize patent assertion.

Washington's case against Landmark could be a major test of state anti-troll laws, and of whether state anti-trolling and consumer protection laws can deter some of the worst patent troll behavior.

The lawsuit is filed against “Landmark Technology A,” a recently created LLC that appears to be largely identical to the now-defunct “Landmark Technology.” The new company asserts the same patent against the same type of targets. The patent’s inventor is Landmark Technology owner Lawrence Lockwood.

Over 1,000 Demand Letters

Landmark threatens and sues small businesses over U.S. Patent No. 7,010,508, which was issued to Lockwood in 2006 and claims rights to “automated multimedia data processing network for processing business and financial transactions between entities from remote sites.”

The Washington case reveals just how widespread Landmark’s threats are. From January 2019 to July 2020, Landmark sent identical demand letters to 1,176 small businesses all across the country. Those letters threaten to sue unless Landmark gets paid a $65,000 licensing fee. 

Landmark essentially insists that if you use a website for e-commerce, you infringe this patent. In recent years, it’s filed suit against candy companies, an educational toy maker, an organic farm, and a Seattle bottle maker, just to name a few. 

Or as the Washington State Attorney General put it:

[T]he company broadly and aggressively misuses the patent claims, targeting virtually any small business with a website, seemingly at random. Landmark claims that common, near-ubiquitous business webpages infringe on its patent rights — such as small business home pages, customer login pages, new customer registration and product-ordering pages.

“Landmark extorts small businesses, demanding payment for webpages that are essential for running a business,” Washington Attorney General Ferguson said. “It backs them into a corner — pay up now, or get buried in legal fees. I’m putting patent trolls on notice: Bully businesses with unreasonable patent assertions, and you’ll see us in court.”

According to the AG’s press release, four Washington companies settled for between $15,000 and $20,000 each to avoid litigation costs. The lawsuit seeks restitution for those companies.

The patents created by Landmark owner Lawrence Lockwood have been used in well over 150 lawsuits filed by Landmark Technology and Landmark Technology A, as well as in at least 40 cases filed by his earlier company PanIP, which sued dozens of early e-commerce websites by 2003. Given what we now know about the more than 1,000 letters sent just in 2019 and 2020, the litigation record seems like just the tip of the iceberg.

The U.S. Patent and Trademark Office found in a 2014 review that the ’508 patent was likely to be invalid because it didn’t actually explain how to do the things it claimed. However, that case settled before the patent could be invalidated.

The USPTO is an office that labors under industry capture. Its fees are paid by patent owners, and in practice it works for patent owners far too often—not users or small business owners. While review processes like inter partes review (IPR) are useful in restoring some balance to the system, it’s critical that the worst abusers of the patent system be treated as a serious consumer protection problem. It’s certainly worthwhile for states to experiment and try to find ways to deter abuse, within the bounds of due process.

Patent owners who demand licensing fees from hundreds or thousands of individuals based on a patent that clearly should be found invalid, for broadly used web technology, are essentially engaging in widespread extortion, as AG Ferguson states. When patent owners won’t let users set up even a basic, out-of-the-box website without facing a demand letter, it’s not just an economic problem—it’s a threat to free expression.

Fighting Disciplinary Technologies

Thu, 05/20/2021 - 4:49pm

An expanding category of software, apps, and devices is normalizing cradle-to-grave surveillance in more and more aspects of everyday life. At EFF we call them “disciplinary technologies.” They typically show up in the areas of life where surveillance is most accepted and where power imbalances are the norm: in our workplaces, our schools, and in our homes.

At work, employee-monitoring “bossware” puts workers’ privacy and security at risk with invasive time-tracking and “productivity” features that go far beyond what is necessary and proportionate to manage a workforce. At school, programs like remote proctoring and social media monitoring follow students home and into other parts of their online lives. And at home, stalkerware, parental monitoring “kidware” apps, home monitoring systems, and other consumer tech monitor and control intimate partners, household members, and even neighbors. In all of these settings, subjects and victims often do not know they are being surveilled, or are coerced into it by bosses, administrators, partners, or others with power over them.

Disciplinary technologies are often marketed for benign purposes: monitoring performance, confirming compliance with policy and expectations, or ensuring safety. But in practice, these technologies are non-consensual violations of a subject’s autonomy and privacy, usually with only a vague connection to their stated goals (and with no evidence they could ever actually achieve them). Together, they capture different aspects of the same broader trend: the appearance of off-the-shelf technology that makes it easier than ever for regular people to track, control, and punish others without their consent.

The application of disciplinary technologies does not meet standards for informed, voluntary, meaningful consent. In workplaces and schools, subjects might face firing, suspension, or other severe punishment if they refuse to use or install certain software—and a choice between invasive monitoring and losing one’s job or education is not a choice at all. Whether the surveillance is happening on a workplace- or school-owned device versus a personal one is immaterial to how we think of disciplinary technology: privacy is a human right, and egregious surveillance violates it regardless of whose device or network it’s happening on.

And even when its victims might have enough power to say no, disciplinary technology seeks a way to bypass consent. Too often, monitoring software is deliberately designed to fool the end-user into thinking they are not being watched, and to thwart them if they take steps to remove it. Nowhere is this more true than with stalkerware and kidware—which, more often than not, are the exact same apps used in different ways.

There is nothing new about disciplinary technology. Use of monitoring software in workplaces and educational technology in schools, for example, has been on the rise for years. But the pandemic has turbo-charged the use of disciplinary technology on the premise that, if in-person monitoring is not possible, ever-more invasive remote surveillance must take its place. This group of technologies and the norms it reinforces are becoming more and more mainstream, and we must address them as a whole.

To determine the extent to which certain software, apps, and devices fit under this umbrella, we look at a few key areas:

The surveillance is the point. Disciplinary technologies share similar goals. The privacy invasions from disciplinary tech are not accidents or externalities: the ability to monitor others without consent, catch them in the act, and punish them is a selling point of the system. In particular, disciplinary technologies tend to create targets and opportunities to punish them where none existed before.

This distinction is particularly salient in schools. Some educational technology, while inviting in third parties and collecting student data in the background, still serves clear classroom or educational purposes. But when the stated goal is affirmative surveillance of students—via face recognition, keylogging, location tracking, device monitoring, social media monitoring, and more—we look at that as a disciplinary technology.

Consumer and enterprise audiences. Disciplinary technologies are typically marketed to and used by consumers and enterprise entities in a private capacity, rather than the police, the military, or other groups we traditionally associate with state-mandated surveillance or punishment. This is not to say that law enforcement and the state do not use technology for the sole purpose of monitoring and discipline, or that they always use it for acceptable purposes. What disciplinary technologies do is extend that misuse.

With the wider promotion and acceptance of these intrusive tools, ordinary citizens and the private institutions they rely on increasingly deputize themselves to enforce norms and punish deviations. Our workplaces, schools, homes, and neighborhoods are filled with cameras and microphones. Our personal devices are locked down to prevent us from countermanding the instructions that others have inserted into them. Citizens are urged to become police, in a digital world increasingly outfitted for the needs of a future police state.

Discriminatory impact. Disciplinary technologies disproportionately hurt marginalized groups. In the workplace, the most dystopian surveillance is used on the workers with the least power. In schools, programs like remote proctoring disadvantage disabled students, Black and brown students, and students without access to a stable internet connection or a dedicated room for test-taking. Now, as schools receive COVID relief funding, surveillance vendors are pushing expensive tools that will disproportionately discriminate against the students already most likely to be hardest hit by the pandemic. And in the home, it is most often (but certainly not exclusively) women, children, and the elderly who are subject to the most abusive non-consensual surveillance and monitoring.

And in the end, it's not clear that disciplinary technologies even work for their advertised uses. Bossware does not conclusively improve business outcomes, and instead negatively affects employees' job satisfaction and commitment. Similarly, test proctoring software fails to accurately detect or prevent cheating, instead producing rampant false positives and overflagging. And there's little to no independent evidence that school surveillance is an effective safety measure, but plenty of evidence that monitoring students and children decreases perceptions of safety, equity, and support; negatively affects academic outcomes; and can have a chilling effect on development that disproportionately affects minoritized groups and young women. If the goal is simply to use surveillance to give authority figures even more power, then disciplinary technology could be said to "work"—but at great expense to its unwilling targets, and to society as a whole.

The Way Forward

Fighting just one disciplinary technology at a time will not work. Each use case is another head of the same Hydra that reflects the same impulses and surveillance trends. If we narrowly fight stalkerware apps but leave kidware and bossware in place, the fundamental technology will still be available to those who wish to abuse it with impunity. And fighting student surveillance alone is untenable when scholarly bossware can still leak into school and academic environments.

The typical rallying cries around user choice, transparency, and strict privacy and security standards are not complete remedies when the surveillance is the consumer selling point. Fixing the spread of disciplinary technology needs stronger medicine. We need to combat the growing belief, funded by disciplinary technology’s makers, that spying on your colleagues, students, friends, family, and neighbors through subterfuge, coercion, and force is somehow acceptable behavior for a person or organization. We need to show how flimsy disciplinary technologies’ promises are; how damaging its implementations can be; and how, for every supposedly reasonable scenario its glossy advertising depicts, the reality is that misuse is the rule, not the exception.

We're working at EFF to craft solutions to the problems of disciplinary technology, from demanding that anti-virus companies and app stores recognize spyware more explicitly, to pushing companies to design for abuse cases, to exposing the misuse of surveillance technology in our schools and in our streets. Tools that put machines in power over ordinary people are a sickening reversal of how technology should work. It will take technologists, consumers, activists, and the law to put it right.

#ParoNacionalColombia and Digital Security Considerations for Police Brutality Protests

Wed, 05/19/2021 - 10:08pm

In the wake of Colombia's tax reform proposal, which came as more Colombians fell into poverty as a result of the pandemic, demonstrations spread across the country in late April, reviving the social unrest and socio-economic demands that brought people to the streets in 2019. The government's attempt to reduce public outcry by withdrawing the tax proposal and drafting a new text did not work. Protests continue online and offline. Violent repression on the ground by police, and the military presence in Colombian cities, have raised concerns among national and international groups, from civil organizations across the globe to human rights bodies calling on the government to respect people's constitutional rights to assemble and to allow free expression on the internet and in the streets. Media have reported on government crackdowns against the protestors, including physical violence, missing persons, and deaths; the seizing of phones and other equipment used to document protests and police action; and internet disruptions and content restrictions or takedowns by online platforms.

As the turmoil and demonstrations continue, we’ve put together some useful resources from EFF and allies we hope can help those attending protests and using technology and the Internet to speak up, report, and organize. Please note that the authors of this post come from primarily U.S.- and Brazil-based experiences. The post is by no means comprehensive. We urge readers to be aware that protest circumstances change quickly; digital security risks, and their mitigation, can vary depending on your location and other contexts. 

This post has two sections covering resources for navigating protests and resources for navigating networks.

Resources for Navigating Protests

Resources for Navigating Network Issues

Resources for Navigating Protests

To attend protests safely, demonstrators need to consider many factors and threats: these range from protecting themselves from harassment and their own devices’ location tracking capabilities, to balancing the need to use technologies for documenting law enforcement brutality and disseminating information. Another consideration is using encryption to protect data and messages from unintended readers. Some resources that may be helpful are:

For Protestors (Colombia)
For Bringing Devices to Protests
For Using Videos and Photos to Document Police Brutality, Protect Protesters’ Faces, and Scrub Metadata

Resources for Navigating Network Issues

What happens if the Internet is really slow, down altogether, or there’s some other problem keeping people from connecting online? What if social media networks remove or block content from being widely seen, and each platform has a different policy for addressing content issues? We’ve included some resources for understanding hindrances to sending messages and posts or connecting online. 

  • For Network and Platform Blockages (Colombia)
  • For Network Censorship
  • For Selecting a Circumvention Tool

If circumvention (not anonymity) is your primary goal for accessing and sending material online, the following resources might be helpful. Keep in mind that Internet Service Providers (ISPs) are still able to see that you are using one of these tools (e.g. that you’re on a Virtual Private Network (VPN) or that you’re using Tor), but not where you’re browsing, nor the content of what you are accessing. 

VPNs

A few diagrams showing the difference between default connectivity to an ISP using a VPN and using Tor are included below (from the Understanding and Circumventing Network Censorship SSD guide).

Your computer tries to connect to https://eff.org, which is at a listed IP address (the numbered sequence beside the server associated with EFF’s website). The request for that website is made and passed along to various devices, such as your home network router and your ISP, before reaching the intended IP address of https://eff.org. The website successfully loads for your computer.

In this diagram, the computer uses a VPN, which encrypts its traffic and connects to eff.org. The network router and ISP might see that the computer is using a VPN, but the data is encrypted. The ISP routes the connection to the VPN server in another country. This VPN then connects to the eff.org website.

Tor 

Digital security guide on using Tor Browser, which uses the volunteer-run Tor network, from Surveillance Self-Defense (EFF): How to: Use Tor on macOS (English), How to: Use Tor for Windows (English), How to: Use Tor for Linux (English), Cómo utilizar Tor en macOS (Español), Cómo Usar Tor en Windows (Español), Como usar Tor en Linux (Español)

The computer uses Tor to connect to eff.org. Tor routes the connection through several “relays,” which can be run by different individuals or organizations all over the world. The final “exit relay” connects to eff.org. The ISP can see that you’re using Tor, but cannot easily see what site you are visiting. The owner of eff.org, similarly, can tell that someone using Tor has connected to its site, but does not know where that user is coming from.
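To make the routing difference concrete, here is a minimal Python sketch (not taken from any EFF guide) that fetches the same page once directly and once through Tor’s local SOCKS proxy. It assumes Tor is already running on its default SOCKS port 9050 and that the requests and PySocks packages are installed; the URL and port are illustrative assumptions, not recommendations for any particular setup.

    import requests

    URL = "https://eff.org"

    # Direct request: the ISP can see both that you connected and where.
    direct = requests.get(URL, timeout=30)
    print("direct:", direct.status_code)

    # Through Tor: traffic enters the local SOCKS proxy on port 9050 (assumed default).
    # The ISP can see that Tor is in use, but not the destination or the content.
    tor_proxies = {
        "http": "socks5h://127.0.0.1:9050",   # socks5h resolves DNS through Tor as well
        "https": "socks5h://127.0.0.1:9050",
    }
    via_tor = requests.get(URL, proxies=tor_proxies, timeout=60)
    print("via tor:", via_tor.status_code)

In the direct case the destination is visible to every hop in the first diagram; in the Tor case, only the connection to the Tor network is.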

For Peer-to-Peer Resources

Peer-to-Peer alternatives can be helpful during a shutdown or during network disruptions and include tools like the Briar App, as well as other creative uses such as Hong Kong protesters’ use of AirDrop on iOS devices.

For Platform Censorship and Content Takedowns

If your content is taken down from services like social media platforms, this guide may be helpful for understanding what might have happened, and making an appeal (Silenced Online): How to Appeal (English)

For Identifying Disinformation

Verifying the authenticity of information (like determining if the poster is part of a bot campaign, or if the information itself is part of a propaganda campaign) is tremendously difficult. Data & Society’s reports on the topic (English), and Derechos Digitales’ thread (Español) on what to pay attention to and how to check information might be helpful as a starting point. 

Need More Help?

For those on the ground who need digital security assistance, Access Now has a 24/7 Helpline for human rights defenders and folks at risk, which is available in English, Spanish, French, German, Portuguese, Russian, Tagalog, Arabic, and Italian. You can contact their helpline at https://www.accessnow.org/help/

Thanks to former EFF fellow Ana Maria Acosta for her contributions to this piece.

Community Control of Police Spy Tech

Wed, 05/19/2021 - 3:34pm

All too often, police and other government agencies unleash invasive surveillance technologies on the streets of our communities, based on the unilateral and secret decisions of agency executives, after hearing from no one except corporate sales agents. This spy tech causes false arrests, disparately burdens BIPOC and immigrants, invades our privacy, and deters our free speech.

Many communities have found Community Control of Police Surveillance (CCOPS) laws to be an effective step on the path to systemic change. CCOPS laws empower the people of a community, through their legislators, to decide whether or not city agencies may acquire or use surveillance technology. Communities can say “no,” full stop. That will often be the best answer, given the threats posed by many of these technologies, such as face surveillance or predictive policing. If the community chooses to say “yes,” CCOPS laws require the adoption of use policies that secure civil rights and civil liberties, and ongoing transparency over how these technologies are used.

The CCOPS movement began in 2014 with the development of a model local surveillance ordinance and the launch of a statewide surveillance campaign by the ACLU affiliates in California. By 2016, a broad coalition including EFF, the ACLU of Northern California, CAIR San Francisco-Bay Area, Electronic Frontier Alliance (EFA) member Oakland Privacy, and many others passed the first ordinance of its kind in Santa Clara County, California. EFF has worked to enact these laws across the country. So far, 18 communities have done so. A map of where they are is embedded below.

[Embedded interactive map (served from google.com): communities that have enacted CCOPS laws.]

 

These CCOPS laws generally share some common features. If an agency wants to acquire or use surveillance technology (broadly defined), it must publish an impact statement and a proposed use policy. The public must be notified and given an opportunity to comment. The agency cannot use or acquire this spy tech unless the city council grants permission and approves the use policy. The city council can require improvements to the use policy. If a surveillance technology is approved, the agency must publish annual reports regarding its use of the technology and its compliance with the approved policies. There are also important differences among these CCOPS laws. This post will identify the best features of the first 18 CCOPS laws, to show authors of the next round how best to protect their communities. Specifically:

  • The city council must not approve a proposed surveillance technology unless it finds that the benefits outweigh the costs, and that the use policy will effectively protect human rights.
  • The city council needs a reviewing body, with expertise regarding surveillance technology, to advise it in making these decisions.
  • Members of the public need ample time, after notice of a proposed surveillance technology, to make their voices heard.
  • The city council must review not just the spy tech proposed by agencies after the CCOPS ordinance is enacted, but also any spy tech previously adopted by agencies. If the council does not approve it, use must cease.
  • The city council must annually review its approvals, and decide whether to modify or withdraw these approvals.
  • Any emergency exemption from ordinary democratic control must be written narrowly, to ensure the exception will not swallow the rule.
  • Members of the public must have a private right of action so they can go to court to enforce both the CCOPS ordinance and any resulting use policies.

Authors of CCOPS legislation would benefit by reviewing the model bill from the ACLU. Also informative are the recent reports on enacted CCOPS laws from Berkeley Law’s Samuelson Clinic and from EFA member Surveillance Technology Oversight Project, as well as Oakland Privacy and the ACLU of Northern California’s toolkit for fighting local surveillance.

Strict Standard of Approval

There is a risk that legislative bodies become mere rubber stamps, providing a veneer of democracy over bureaucratic theater. As with any legislation, the power, or the fault, lies in the details.

Oakland’s ordinance accomplishes this by making it clear that legislative approval should not be the default. It is not the city council’s responsibility, or the community’s, to find a way for agency leaders to live out their sci-fi dreams. Lawmakers must not approve the acquisition or use of a surveillance technology unless, after careful deliberation and community consultation, they find that the benefits outweigh the costs, that the proposal effectively safeguards privacy and civil rights, and that no alternative could accomplish the agency’s goals with lesser costs—economically or to civil liberties.

A Reviewing Body to Assist the City Council

Many elected officials do not have the technological proficiency to make these decisions unassisted.  So the best CCOPS ordinances designate a reviewing body responsible for providing council members the guidance needed to ask the right questions and get the necessary answers. A reviewing body builds upon the core CCOPS safeguards: public notice and comment, and council approval. Agencies that want surveillance technology must first seek a recommendation from the reviewing body, which acts as the city’s informed voice on technology and its privacy and civil rights impacts.

When Oakland passed its ordinance, the city already had a model to draw from. Out of the battle between police and local advocates who had organized to stop the Port of Oakland’s Domain Awareness Center, the city had created a successful Privacy Advisory Commission (PAC). So Oakland’s CCOPS law tasked the PAC with advising the city council on surveillance proposals.

While Oakland’s PAC is made up exclusively of volunteer community members with a demonstrated interest in privacy rights, San Francisco took a different approach. That city already had a forum for city leadership to coordinate and collaborate on technology solutions. Its fifteen-member Committee on Information Technology (COIT) is composed of thirteen department heads—including the President of the Board of Supervisors—and two members of the public.

There is no clear rule of thumb on which model of CCOPS reviewing body is best. Some communities may worry that appointed city leaders will be reluctant to turn down a request from an allied city agency, rather than centering residents’ civil rights and personal freedoms. Other communities may value the perspective and attention that paid officials can devote to carefully reviewing every proposed surveillance technology and privacy policy before it is submitted to the local legislative body. Like the lawmaking body itself, these reviewing bodies should hold proceedings that are open to the public and noticed far enough in advance to invite public engagement before the body issues its recommendation to adopt, modify, or deny a proposed policy.

Public Notice and Opportunity to Speak Up

Public notice and engagement are essential. For that participation to be informed and well-considered, residents must first know what is being proposed, and must have enough time to consult with unbiased experts or otherwise educate themselves about the capabilities and potential unintended consequences of a given technology, and to organize their neighbors to speak out. For example, Davis, California, requires a 30-day interval between publication of a proposed privacy policy and impact report and the city council’s subsequent hearing on the proposed surveillance technology.

New York City’s Public Oversight of Surveillance (POST) Act is high on transparency, but wanting on democratic power. On the positive side, it provides residents with a full 45 days to submit comments to the NYPD commissioner. Other cities would do well to provide such meaningful notice. However, due to structural limits on city council control of the NYPD, the POST Act does not accomplish some of the most critical duties of this model of surveillance ordinance—placing the power and responsibility to hear and address public concerns with the local legislative body, and empowering that body to prohibit harmful surveillance technology.

Regular Review of Technology Already in Use

The movement against surveillance equipment is often a response to the concerning ways that invasive surveillance has already harmed our communities. Thus, it is critical that any CCOPS ordinance apply not just to proposals for new surveillance tech, but also to the continued use of existing surveillance tech.

For example, city agencies in Davis that possessed or used surveillance technology when that city’s CCOPS ordinance went into effect had a four-month deadline to submit a proposed privacy policy. If the city council did not approve it within four regular meetings, then the agency had to stop using it. Existing technology must be subject to at least the same level of scrutiny as newer technology. Indeed, the bar should arguably be higher for existing technologies, considering the likely existence of a greater body of data showing their capabilities or lack thereof, and any prior harm to the community.

Moving forward, CCOPS ordinances must also require that each agency using surveillance technology issue reports about it on at least an annual basis. This allows the city council and public to monitor the use and deployment of approved surveillance technologies. Likewise, CCOPS ordinances must require the city council, at least annually, to revisit its decision to approve a surveillance technology. This is an opportunity to modify the use policies, or end the program altogether, when it becomes clear that the adopted protections have not been sufficient to protect rights and liberties.

In Yellow Springs, Ohio, village agencies must facilitate public engagement by submitting annual reports to the village council and making them publicly available on their websites. Within 60 days, the village council must hold a public hearing about the report, with an opportunity for public comment. Then the village council must determine whether each surveillance technology has met its standards for approval. If not, the village council must discontinue the technology or modify the privacy policy to resolve the failures.

Emergency Exceptions

Many CCOPS ordinances allow police to use surveillance technology without prior democratic approval, in an emergency. Such exceptions can easily swallow the rule, and so they must be tightly drafted.

First, the term “emergency” must be defined narrowly, to cover only imminent danger of death or serious bodily injury to a person. This is the approach, for example, in San Francisco. Unfortunately, some cities extend this exemption to also cover property damage. But police facing large protests can always make ill-considered claims that property is at risk.

Second, the city manager alone must have the power to allow agencies to make emergency use of surveillance technology, as in Berkeley. Suspension of democratic control over surveillance technology is a momentous decision, and thus should come only from the top.

Third, emergency use of surveillance technology must have tight time limits. This means days, not weeks or months. Further, the legislative body must be quickly notified, so it can independently and promptly assess the departure from legislative control. Yellow Springs has the best schedule: emergency use must end after four days, and notification must occur within ten days.

Fourth, CCOPS ordinances must strictly limit retention and sharing of personal information collected by surveillance technology on an emergency basis. Such technology can quickly collect massive quantities of personal information, which then can be stolen, abused by staff, or shared with ICE. Thus, Oakland’s staff may not retain such data, unless it is related to the emergency or is relevant to an ongoing investigation. Likewise, San Francisco’s staff cannot share such data, except based on a court’s finding that the data is evidence of a crime, or as otherwise required by law.

Enforcement

It is not enough to enact an ordinance that requires democratic control over surveillance technology. It is also necessary to enforce it. The best way is to empower community members to file their own enforcement lawsuits. These are often called a private right of action. EFF has filed such surveillance regulation enforcement litigation, as have other advocates like Oakland Privacy and the ACLU of Northern California.

The best private rights of action broadly define who can sue. In Boston, for example, “Any violation of this ordinance constitutes an injury and any person may institute proceedings.” It is a mistake to limit enforcement just to a person who can show they have been surveilled. With many surveillance tools capturing information in covert dragnets, it can be exceedingly difficult to identify such people, or prove that you have been personally impacted, despite a brazen violation of the ordinance. In real and immutable ways, the entire community is harmed by unauthorized surveillance technology, including through the chilling of protest in public spaces.

Some ordinances require a would-be plaintiff, before suing, to notify the government of the ordinance violation, and allow the government to avoid a suit by ending the violation. But this incentivizes city agencies to ignore the ordinance, and wait to see whether anyone threatens suit. Oakland’s ordinance properly eschews this kind of notice-and-cure clause.

Private enforcement requires a full arsenal of remedies. First, a judge must have the power to order a city to comply with the ordinance. Second, there should be damages for a person who was unlawfully subjected to surveillance technology. Oakland provides this remedy. Third, a prevailing plaintiff should have their reasonable attorney fees paid by the law-breaking agency. This ensures access to the courts for everyone, and not just wealthy people who can afford to hire a lawyer. Davis properly allows full recovery of all reasonable fees. Unfortunately, some cities cap fee-shifting at far less than the actual cost of litigating an enforcement suit.

Other enforcement tools are also important. Evidence collected in violation of the ordinance must be excluded from court proceedings, as in Somerville, Massachusetts. Also, employees who violate the ordinance should be subject to workplace discipline, as in Lawrence, Massachusetts.

Next Steps

The movement to ensure community control of government surveillance technology is gaining steam. If we can do it in cities across the country, large and small, we can do it in your hometown, too. The CCOPS laws already on the books have much to teach us about how to write the CCOPS laws of the future.

Please join us in the fight to ensure that police cannot decide by themselves to deploy dangerous and invasive spy tech onto our streets. Communities, through their legislative leaders, must have the power to decide—and often they should say “no.”

 

Related Cases: Williams v. San Francisco

Help Bring Dark Patterns To Light

Wed, 05/19/2021 - 12:00pm

On social media, shopping sites, and even children’s apps, companies are using deceptive user experience design techniques to trick us into giving away our data, sharing our phone numbers and contact lists, and submitting to fees and subscriptions. Every day, we’re exploited for profit through “dark patterns”: design tactics used in websites and apps to manipulate you into doing things you probably would not do otherwise.

So today, we’re joining Consumer Reports, Access Now, PEN America, and Harry Brignull (founder of DarkPatterns.org), in announcing the Dark Patterns Tip Line. It’s an online platform hosted by Consumer Reports that allows people to submit and highlight deceptive design patterns they see in everyday products and services.

Your submissions will help privacy advocates, policymakers, and agency enforcers hold companies accountable for their dishonest and harmful practices. Especially misleading designs will be featured on the site.

Dark patterns can be deceptive in a variety of ways. For example, a website may trick visitors into submitting to unwanted follow-up emails by making the email opt-out checkbox on a checkout page harder to see: for instance, by using a smaller font or placing the opt-out in an inconspicuous place in the user flow. Consider this example from Carfax:

The screenshot was posted by Reddit user u/dbilbey to the Asshole Design subreddit in September 2020.

Another example: Grubhub hid a 15% service fee under the misleadingly vague  “taxes and fees” line of its receipt. 

The screenshot was taken directly from the Grubhub iOS app in September 2020.

You can find many more samples of dark patterns on the “sightings” page of the Dark Patterns Tip Line.

The process for submitting a dark pattern to the Tip Line is simple. Just enter the name and type of company responsible, a short description of the deceptive design, and where you encountered it. You can also include a screenshot of the design. Submitting to the Dark Patterns Tip Line requires you to agree to the Consumer Reports User Agreement and Privacy Policy. The Dark Patterns Tip Line site has some special limitations on Consumer Reports’ use of your email, and the site doesn’t use cookies or web tracking. You can opt out of some of the permissions granted in the Consumer Reports Privacy Policy here.

A sample submission to the Dark Patterns Tip Line.

Please share the Tip Line with people you think may be interested in submitting, such as community organizations, friends, family, and colleagues. For this initial period, the Dark Patterns Tip Line is collecting submissions until June 9th.

Help us shine a light on these deceptive designs, and fight to end them, by submitting any dark patterns you’ve come across to the Dark Patterns Tip Line.

Coalition Launches ‘Dark Patterns’ Tip Line to Expose Deceptive Technology Design

Wed, 05/19/2021 - 11:42am
EFF Joins Groups Fighting Exploitative Data-Gathering in Apps and on the Web

San Francisco – The Electronic Frontier Foundation (EFF) has joined Consumer Reports, Access Now, PEN America, and DarkPatterns.org in launching the “Dark Patterns Tip Line”—a project for the public to submit examples of deceptive design patterns they see in technology products and services.

“Dark patterns” design tactics are used to trick people into doing all kinds of things they don’t mean to, from signing up for a mailing list to submitting to recurring billing. Examples seen by users every day include hard-to-close windows urging you to enter your email address on a news site, email opt-outs on shopping sites in difficult-to-find locations in difficult-to-read text, and pre-checked boxes allowing ongoing charges.

“Your submissions to the Dark Patterns Tip Line will help provide a clearer picture of people’s struggles with deceptive interfaces. We hope to collect and document harms from dark patterns and demonstrate the ways companies are trying to manipulate all of us with their apps and websites,” said EFF Designer Shirin Mori. “Then we can offer people tips to spot dark patterns and fight back.”

If you see a dark pattern, head to Darkpatternstipline.org, hosted by Consumer Reports. Then, click “submit a pattern,” and enter the name and type of company responsible, a short description of the misleading design, and where you found it. You can also include a screen shot. Submitting to the Dark Patterns Tip Line requires you to agree to the Consumer Reports’ user agreement and privacy policy. The Dark Patterns Tip Line site has some special limitations on Consumer Reports’ use of your email, and the site doesn’t use cookies or web tracking.

“If we want to stop dark patterns on the internet and beyond, we first have to assess what’s out there, and then use these examples to influence policymakers and lawmakers,” said Mori. “We hope the Dark Patterns Tip Line will help us move towards more fair, equitable, and accessible technology products and services for everyone.”

For the Dark Patterns Tip Line, hosted by Consumer Reports:
https://darkpatternstipline.org

Contact: Shirin Mori, Designer, mori@eff.org

Lawsuit Against Snapchat Rightfully Goes Forward Based on “Speed Filter,” Not User Speech

Tue, 05/18/2021 - 2:53pm

The U.S. Court of Appeals for the Ninth Circuit has allowed a civil lawsuit to move forward against Snapchat, a smartphone social media app, brought by the parents of three teenage boys who died tragically in a car accident after reaching a maximum speed of 123 miles per hour. We agree with the court’s ruling, which confirmed that internet intermediaries are not immune from liability when the harm does not flow from the speech of other users.

The parents argue that Snapchat was negligently designed because it incentivized users to drive at dangerous speeds by offering a “speed filter” that could be used during the taking of photos and videos. The parents allege that many users believed that the app would reward them if they drove 100 miles per hour or faster. One of the boys had posted a “snap” with the “speed filter” minutes before the crash.

The Ninth Circuit rightly held in Lemmon v. Snap, Inc. that Section 230 does not protect Snapchat from the parents’ lawsuit. Section 230 is a critical federal law that protects user speech by providing internet intermediaries with partial immunity against civil claims for hosting user-generated content (see 47 U.S.C. § 230(c)(1)). Thus, for example, if a review site publishes a review that contains a statement that defames someone else, the reviewer may be properly sued for writing and uploading the defamatory content, but not the review site for hosting it.

EFF has been a staunch supporter of Section 230 since it was enacted in 1996, recognizing that the law has facilitated free speech and innovation online for 25 years. By partially shielding internet intermediaries from potential liability for what their users say and do on their platforms, Section 230 creates the legal breathing room for entrepreneurs to create a multitude of diverse spaces and services online. By contrast, with greater legal exposure, companies are incentivized in the opposite direction—to take down more user speech or to cease operations altogether.

However, this case against Snapchat shows that Section 230 does not—and was never meant to—shield internet intermediaries (such as social media platforms) from liability in all cases. Section 230 already has several exceptions, including for when online platforms host user speech that violates federal criminal law or intellectual property law.

In this case, the court explained that Section 230 does not protect companies when a claim is premised on harm that flows from the company’s own speech or actions, independent from the speech of other users. As the Ninth Circuit explained, the parents are aiming to hold Snapchat liable for creating a defective product with a feature that inspired users, including their children, to drive too fast. Nothing in the claim tries to hold Snapchat liable for publishing the “speed filter” post by one of the boys before they died in the crash. Nor would the parents “be permitted under § 230(c)(1) to fault Snap for publishing other Snapchat-user content (e.g., snaps of friends speeding dangerously) that may have incentivized the boys to engage in dangerous behavior.”

Thus, the court repeatedly emphasizes in the opinion that the parents’ claim “stand[s] independently of the content that Snapchat’s users create with the Speed Filter,” and internet intermediaries may lose Section 230 immunity for offering defective tools, “so long as plaintiffs’ claims do not blame them for the content that third parties generate with those tools.”

The Ninth Circuit also noted that the Lemmon case is distinguishable from other cases where the plaintiffs tried to creatively plead around Section 230 by arguing that the design of the website or app was the problem, when in fact the plaintiffs’ harm flowed from other users’ content—such as online content related to sex trafficking, illegal drug sales, and harassment. In these cases, the courts rightly granted the companies immunity under Section 230.

By emphasizing this distinction, we believe the decision does not create a troublesome incentive to censor user speech in order to avoid potential liability. 

One thing to keep in mind is that the Ninth Circuit’s decision not to apply Section 230 here does not automatically mean that Snapchat will be held liable for negligent product design. As we saw in a seminal Section 230 case, the website Roommates.com was denied Section 230 immunity by the Ninth Circuit, but later defeated a housing discrimination claim. The Lemmon case now goes back down to the district court, which will allow the case to proceed to a consideration of the merits.

EFF tells California Court that Forensic Software Source Code Must Be Disclosed to the Defendant

Fri, 05/14/2021 - 7:58pm

Last week, EFF filed an amicus brief in State v. Alvin Davis in California, in support of Mr. Davis's right to inspect the source code of STRMix, the forensic DNA software used at his trial. This is the most recent in a string of cases in which EFF has argued that a defendant has the right to examine DNA analysis software. Earlier this year, the courts in two of those cases, United States v. Ellis and State v. Pickett, agreed with EFF that the defendants were entitled to the source code of TrueAllele, one of STRMix's main competitors. 

Criminal defendants must be allowed to examine how DNA matching software used against them works to make sure that the software's result is reliable. Access to the source code cannot be replaced by testimony regarding how the program should work, since there could be coding errors. This is especially true for the newest generation of forensic DNA software, like STRMix and TrueAllele, which are fraught with reliability and accuracy concerns. In fact, a prior examination of STRMix led to the discovery that there were programming errors that could have created false results in 60 cases in Queensland, Australia.

That same worry is present in this case. Although the crime itself is harrowing, the evidence is anything but conclusive. An elderly woman was sexually assaulted and murdered in her home and two witnesses described seeing a black man in his 50s on the property on the day of the murder. Dozens of people had passed through the victim's home in the few months leading up to the murder, including Mr. Davis and another individual. Mr. Davis is an African American man who was in his 70s at the time of the murder and suffers from Parkinson’s disease. Another individual who met the witnesses’ description had a history of sex crimes including sexual assault with a foreign object.

DNA samples were taken from dozens of locations and items at the crime scene. Mr. Davis’s DNA was not found on many of those, including a cane that was allegedly used to sexually assault the victim. Traditional DNA software was not able to match Mr. Davis to the DNA sample from a shoelace that was likely used to tie up the victim—but STRMix did, and the prosecution relied heavily on the latter before the jury. The first trial against Mr. Davis, who is now in a wheelchair due to Parkinson’s, ended with a hung jury. He was convicted after a second trial and sentenced to life without parole. 

Hopefully the California court will follow the rulings in Ellis and Pickett, and recognize that there is no justice in convictions based on secret evidence.

Related Cases: United States v. Ellis; California v. Johnson

President Biden Revokes Unconstitutional Executive Order Retaliating Against Online Platforms

Fri, 05/14/2021 - 5:04pm

President Joe Biden on Friday rescinded a dangerous and unconstitutional Executive Order issued by President Trump that threatened internet users’ ability to obtain truthful information online and retaliated against services that fact-checked the former president. The Executive Order called on multiple federal agencies to punish private online social media services for content moderation decisions that President Trump did not like.

Biden’s rescission of the Executive Order comes after a coalition of organizations challenging the order in court called on the president to abandon the order last month. In a letter from Rock The Vote, Voto Latino, Common Cause, Free Press, Decoding Democracy, and the Center for Democracy & Technology, the organizations demanded the Executive Order’s rescission because “it is a drastic assault on free speech designed to punish online platforms that fact-checked President Trump.”

The organizations filed lawsuits to strike down the Executive Order last year, with Rock The Vote, Voto Latino, Common Cause, Free Press, and Decoding Democracy’s challenge currently on appeal in the U.S. Court of Appeals for the Ninth Circuit. The Center for Democracy & Technology’s appeal is currently pending in the U.S. Court of Appeal for the D.C. Circuit.

Cooley LLP, Protect Democracy, and EFF represent the plaintiffs in Rock The Vote v. Biden. We applaud Biden’s revocation of the “Executive Order on Preventing Online Censorship,” and are reviewing his rescission of the order and conferring with our clients to determine what impact it has on the pending legal challenge in the Ninth Circuit.

Trump issued the unconstitutional Executive Order in retaliation for Twitter fact-checking May 2020 tweets that spread false information about mail-in voting. The Executive Order, issued two days later, sought to undermine a key law protecting internet users’ speech, 47 U.S.C. § 230 (“Section 230”), and to punish online platforms, including by directing federal agencies to review and potentially stop advertising on social media and by kickstarting a federal rulemaking to reinterpret Section 230.

Related Cases: Rock the Vote v. Trump

Victory! California City Drops Lawsuit Accusing Journalists of Violating Computer Crime Law

Fri, 05/14/2021 - 3:59pm

The City of Fullerton, California has abandoned a lawsuit against two bloggers and a local website. The suit dangerously sought to expand California’s computer crime law in a way that threatened investigative reporting and everyday internet use.

The city’s lawsuit against the bloggers and the website Friends For Fullerton’s Future alleged, in part, that the bloggers violated the California Comprehensive Computer Data Access and Fraud Act because they improperly accessed non-public government records on the city’s file-sharing service that it used to disclose public records. But the settlement agreement between the city and bloggers shows those allegations lacked merit and badly misrepresented the city’s online security practices. It also vindicates the bloggers, who the city targeted for doing basic journalism.

The city’s poor approach to online security was apparent from the start. The city used Dropbox to create a shareable folder, which it called the “Outbox,” that was publicly accessible to anyone who had the link. And evidence in the lawsuit showed that city officials did not enable any of Dropbox’s built-in security features, such as requiring passwords or limiting access to particular individuals, before making the Outbox link publicly accessible.

Then the city widely shared the Outbox URL with members of the public, including the bloggers, when disclosing public records and for other city business. And because there were no restrictions or other controls on the Outbox folder, anyone with the link could access all the files and subfolders it contained, including files city officials claimed should not have been publicly accessible.

The crux of the city’s lawsuit alleged that the bloggers, Joshua Ferguson and David Curlee, accessed some Outbox subfolders and files “without permission,” in violation of California’s computer crime law, because the individuals did not follow officials’ directions to only access particular folders or files in the Outbox.

The city’s interpretation was a disturbing effort to stretch California’s criminal law, known as Section 502, to punish the journalists. That’s why EFF, along with the ACLU and ACLU of Southern California, filed a friend-of-the-court brief in support of the journalists and website. The Reporters Committee for Freedom of the Press also filed a brief in support of the bloggers. And an appellate court was scheduled to hear arguments in the case next week.

The city’s interpretation ignored that officials had made the entire Outbox public, such that anyone with the link would be able to access everything in it, just as anyone is able to peruse any publicly accessible website. This configuration is the opposite of what the city should have done if it wanted to prevent access to sensitive information. Moreover, the city’s theory flouted open-access norms of the internet.

The city’s interpretation also sought to turn officials’ written directions to access only certain files into a violation of Section 502, a dangerous proposition that would give government officials broad discretion to criminalize internet access they do not like. The interpretation also threatened to chill investigative journalism by criminalizing reporting on government records obtained by mistake or otherwise without officials’ permission, a theory of liability that the Supreme Court has repeatedly rejected on First Amendment grounds.

In the settlement, the city abandoned its Section 502 claims and admitted that its allegations did not accurately reflect its security practices for the Outbox folder. The settlement states “[t]he City acted on its belief that access controls were in place” when it filed its lawsuit and “that its primary goal was to retrieve confidential documents for the protection of city employees, residents and those doing business with the City.”

But a statement the city included in the settlement states:

However, due to errors by former employees of the City in configuring the account and lax password controls, some of the files and folders were in fact accessible and able to be downloaded and/or accessed without circumventing access controls.

The statement continues:

Based on the City’s additional investigation and through discussions with Mr. Ferguson and Mr. Curlee, the City now agrees that documents were not stolen or taken illegally from the shared file account as the City previously believed and asserted. The City retracts any and all assertions that Friends for Fullerton’s Future, Mr. Ferguson and/or Mr. Curlee acted illegally in accessing the documents.

The settlement also requires the city to pay Ferguson and Curlee $60,000 each as well as $230,000 for their attorney’s fees and costs.

EFF is thrilled that the city has walked away from its effort to penalize Ferguson, Curlee, and the blog for engaging in good journalism. And we congratulate the pair, the blog, and their attorney, Kelly Aviles, on being vindicated.

Of course, it would have been better if the city had never filed the lawsuit in the first place. The suit resulted in two rounds of appeals, including one reversing a prior restraint issued against the blog, and it threatened a dangerous expansion of Section 502. The statute, like the federal Computer Fraud and Abuse Act, is notoriously vague and can be misused to target individuals for their online activities.

Governor Newsom’s Budget Proposes Historic Investment in Public Fiber Broadband

Fri, 05/14/2021 - 1:22pm

This morning, California Governor Gavin Newsom announced his plans for the state’s multi-billion dollar surplus and federal recovery dollars, including a massive, welcome $7 billion investment in public broadband infrastructure. It's a plan that would give California one of the largest public broadband fiber networks in the country. The proposal now heads to the legislature to be ratified by June 15 by a simple majority. Here are the details:

The Plan: California Builds Fiber Broadband Highway; Locals Build the Onramps

Internet infrastructure shares many commonalities with public roads. Surface streets that crisscross downtowns and residential areas connect to highways via on-ramps. Those highways are a high-speed, high-capacity system that connects cities to one another over long distances.

In broadband, that highway function—connecting distant communities—is called “the middle mile,” while those local roads, which connect with every home and business, are called “the last mile.”

Governor Newsom’s plan is for the State of California to build all that middle-mile infrastructure—high-speed links that will bring future-proof capacity to the state’s small, remote, rural communities, putting them on par with the state’s large and prosperous cities.

Laying fiber infrastructure like this brings terabits of broadband capacity to unserved and underserved communities in rural areas.  Simultaneously, this plan dramatically lowers the cost to the communities themselves, who are in charge of developing their own, locally appropriate last mile plans.

To make local efforts economically viable, the Governor’s budget envisions a long-term financing program, accessible by any municipality, cooperative, or local non-profit engaged in building local fiber infrastructure that connects to the state’s open access network.

Long-term financing and fiber go hand in hand. Fiber is future-proof, capable of meeting public broadband demand for decades to come. That long-term value is an uncomfortable fit with the short-term expectations of Big ISP market investors, whose focus on immediate returns has held back much-needed American investment in adequate digital infrastructure, fit for the 21st century.

The California plan, which leverages $500 million to access multiple billions of dollars in low-cost loans with 30- to 40-year repayment schedules, is exactly the patient money that fiber infrastructure needs: a visionary bet on the state’s long-term future. The fiber itself will be useful for decades after that debt is retired, giving rural communities broadband access to cover all their projected needs into the 22nd century.

As we’ve noted in the past, national private ISPs have proven themselves unwilling to tackle the rural fiber challenge, even when they stand to make hundreds of millions of dollars by doing so. Their preference for fast profits over long-term investments is so strong that they would rather go bankrupt than deploy fiber in rural areas. The same is true for low-income access even in the most densely populated cities, a problem the Governor’s plan will enable local solutions to resolve.

The State Government Will Help Communities Prepare for the Fiber Future

A crucial aspect of the plan is the creation of technical assistance teams tasked with helping communities plan their fiber rollouts. These teams are also charged with helping communities design sustainable models that will deliver affordable broadband to all.

When the U.S. embarked upon a national electrification program in the early 20th century, government agencies didn't simply announce the program and retire to the sidelines while local communities worked out the details for themselves. Instead, the government formed myriad partnerships with local communities to help plan out their electrical grids, create financial plans, and train local operators so they could keep their new electrical grids humming. Governor Newsom’s budget updates this proven strategy for local fiber broadband networks.

EFF strongly supports this measure. Running a gigabit fiber network is technically challenging, but with guidance and technical support, it is well within the capacity of every community. State assistance in designing 21st century infrastructure plans combined with a state rollout of middle mile fiber networks is a powerful mixture of local empowerment and economic development.

We Have to Rally in Sacramento, Like Right Now

The cable lobby has long viewed fiber broadband as an existential threat to its high-speed broadband monopolies. A technical analysis by EFF’s engineering team found that fiber optics as a transmission medium vastly surpasses, simply as a matter of physics, anything coaxial cable will be capable of doing. That’s why more than 1 billion fiber lines are being laid across advanced Asian nations from South Korea to China.

If Californians want cheap, symmetrical (fast uploads as well as downloads) gigabit (and beyond) internet at their homes and businesses, we must get our state legislature to pass this infrastructure plan next month.

Otherwise, most of us will remain trapped in a cable monopoly market paying 200% to 300% above competitive rates for our sluggish broadband service. Worse yet, when the next generation of applications and services requiring symmetrical gigabit, 10 gigabit, and even 100 gigabit speeds are developed in the coming years, Californians will be frozen out of them altogether.

At that point, the "digital divide" will be joined by a "speed chasm" in broadband access. We risk a major drag on our state's economic development. We can avoid that risk!  All we need is a long-term, future-proof investment in our communities and a law stating the obvious: all Californians deserve 21st century internet access.

How A Camera Patent Was Used to Sue Non-Profits, Cities, and Public Schools

Fri, 05/14/2021 - 1:21pm
Stupid Patent of the Month

Patent trolls are everyone’s problem. A study from 2019 showed that 32% of patent troll lawsuits are directed at small and medium-sized businesses. We told the stories of some of those small businesses in our Saved by Alice project.

But some patent trolls go even further. Hawk Technology LLC doesn’t just sue small businesses (although it does do that)—it has sued school districts, municipal stadiums, and non-profit hospitals. Hawk Tech has filed more than 200 federal lawsuits over the last nine years, mostly against small entities. Even after the expiration of its primary patent, RE43,462, in 2014, Hawk continued filing lawsuits on it right up until 2020. That’s possible because patent owners are allowed to seek up to six years of past damages for infringement.

One might have hoped that six years after the expiration of this patent, we might have seen the end of this aggressive patent troll. Nope. The U.S. Patent and Trademark Office has granted Hawk Tech another patent, U.S. Patent No. 10,499,091. It’s just as bad as the earlier one, and starting last summer, Hawk Tech has started to litigate.

Camera Plus Generic Terms

The ‘091 patent’s first claim simply claims a video surveillance system, then adds a bunch of computer terms. Those terms include things like “receiving video images at a personal computer,” “digitizing” images that aren’t already digital, “displaying” images in a separate window, “converting” video to some resolution level, “storing” on a storage device, and “providing a communications link.” These terms are utterly generic.

Claim 2 just describes allowing live and remote viewing and recording at the same time—basic streaming, in other words. Claim 3 adds the equally unimpressive idea of watching the recording later. The additional claims are no more impressive, as they basically insist that it was inventive in 2002 to livestream over the Internet—nearly a decade after the first concert to have a video livestream. Most laughably, claim 5 specifies a particular bit rate of Internet connection—as if that would make this non-invention patentable.

In order to be invalidated in court, however, the ‘091 patent would have to be considered by a judge. And Hawk Tech’s lawsuits get dismissed long before that stage—often in just a few months. That’s because the company reportedly settles cases at the bottom level of patent troll demands, typically for $5,000 or even less. That’s significantly less than a patent attorney would request even for a retainer to start work, and a tiny fraction of the $2 million (or sometimes much more) it can cost to defend a patent lawsuit through trial.

The patent monetization industry includes the kind of folks that can be counted on to sue a ventilator company in the middle of a pandemic. Even in that context, Hawk Tech has taken some remarkable steps.

Hawk Tech has sued a municipal stadium that hosts an Alabama college football team; a suburban Kentucky transit system with just 27 routes; non-profit thrift stores and colleges; and a Mississippi public school district that serves an area with a very high (46%) rate of child poverty. That last lawsuit is one of at least three different public school districts that Hawk Tech has sued.  These defendants would be hard pressed to mount a legal defense that could easily cost hundreds of thousands of dollars.

One type of company you won’t see on the long list of defendants is a company that actually makes camera systems. Instead, Hawk Tech finds those companies’ customers and goes after them. For instance, Hawk Tech drew up an infringement claim chart against Seon, a maker of bus camera and GPS systems; then used that chart to sue not Seon, but the Transit Authority of Northern Kentucky (TANK), based on a Seon pamphlet that pointed to TANK as a “case study.” Instead of suing camera company Eagle Eye, Hawk Tech sued the city of Mobile, Alabama, likely after seeing a promotional video made by Eagle Eye on how the city’s stadium used its camera systems.

The problem of what to do about patent trolls that demand nuisance-level settlements is a tough one. What may be a “nuisance” settlement in the eyes of large law firms can still be harmful to a charity or a public school serving impoverished students.

That’s why EFF has advocated for strong fee-shifting rules in patent cases. Parties who bring lawsuits based on bogus patents won’t be chastened until they are penalized by courts. We also have supported reforms like the 2013 Innovation Act, which would have allowed customer-based lawsuits like the Hawk Tech cases to be stayed in situations when the manufacturer of the allegedly infringing device steps in to litigate.

Right now, there are two different parties seeking to invalidate Hawk Tech’s ‘091 patent and collect legal fees. One is Nevada-based DTiQ, a camera company whose customers, including a Las Vegas sandwich shop, have been sued by Hawk Tech. Another is Castle Retail, a company that owns three supermarkets in Memphis. Let’s hope one of those cases gets to a judgment before Hawk Tech fires off another round of bogus lawsuits against small companies—or public schools.

How Your DNA—or Someone Else’s—Can Send You to Jail

Fri, 05/14/2021 - 12:47pm

Although DNA is individual to you—a “fingerprint” of your genetic code—DNA samples don’t always tell a complete story. The DNA samples used in criminal prosecutions are generally of low quality, making them particularly complicated to analyze. They are not very concentrated, not very complete, or are a mixture of multiple individuals’ DNA—and often, all of these conditions are true. If a DNA sample is like a fingerprint, analyzing mixed DNA samples in criminal prosecutions can often be like attempting to isolate a single person’s print from the doorknob of a public building after hundreds of people have touched it. Despite the challenges in analyzing these DNA samples, prosecutors frequently introduce those analyses in trials, using tools that have not been reviewed and jargon that can mislead the jury—giving a false sense of scientific certainty to a very uncertain process. This is why it is essential that any DNA analysis tool’s source code be made available for evaluation. It is critical to determine whether the software is reliable enough to be used in the legal system, and what weight its results should be given.

A Breakdown of DNA Data

To understand why DNA software analyses can be so misleading, it helps to know a tiny bit about how it works. To start, DNA sequences are commonly called genes. A more generic way to refer to a specific location in the gene sequence is a “locus” (plural “loci”). The variants of a given gene or of the DNA found at a particular locus are called “alleles.” To oversimplify, if a gene is like a highway, the numbered exits are loci, and alleles are the specific towns at each exit.

[P]rosecutors frequently introduce those analyses in trials, using tools that have not been reviewed and jargon that can mislead the jury—giving a false sense of scientific certainty to a very uncertain process.

Forensic DNA analysis typically focuses on around 13 to 20 loci and the allele present at each locus, making up a person’s DNA profile. By looking at a sufficient number of loci, whose alleles are distributed among the population, a kind of fingerprint can be established. Put another way, knowing the specific towns and exits a driver drove past can also help you figure out which highway they drove on.
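To make the profile idea concrete, here is a toy Python sketch (not any real forensic tool) that represents single-source DNA profiles as a mapping from loci to allele pairs and checks whether two profiles agree at every shared locus. The locus names resemble common forensic loci, but all values are invented for illustration.

    # Toy representation: each locus maps to the pair of alleles observed there.
    # Locus names resemble common forensic loci; the allele values are made up.
    suspect_profile = {
        "D3S1358": (15, 17),
        "vWA":     (16, 16),
        "FGA":     (21, 24),
        "TH01":    (6, 9.3),
    }

    evidence_profile = {
        "D3S1358": (15, 17),
        "vWA":     (16, 16),
        "FGA":     (21, 24),
        "TH01":    (6, 9.3),
    }

    def profiles_match(a, b):
        """Profiles 'match' only if every shared locus has the same allele pair."""
        shared = set(a) & set(b)
        return bool(shared) and all(sorted(a[locus]) == sorted(b[locus]) for locus in shared)

    print(profiles_match(suspect_profile, evidence_profile))  # True

Real casework involves more loci and careful statistics, but the comparison is only this simple when the sample comes from a single person and is of high quality.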

To figure out the alleles present in a DNA sample, a scientist chops the DNA into different alleles, then uses an electric charge to draw it through a gel in a method called electrophoresis. Different alleles will travel at different rates, and the scientist can measure how far each one traveled and look up which allele corresponds to that length. The DNA is also stained with a dye, so that the more of it there is, the darker that blob will be on the gel.

Analysts infer what alleles are present based on how far they traveled through the gel, and deduce what amounts are present based on how dark the band is—which can work well in an untainted, high quality sample. Generally, the higher the concentration of cells from an individual and the less contaminated the sample by any other person’s DNA, the more accurate and reliable the generated DNA profile.

The Difficulty of Analyzing DNA Mixtures

Our DNA is found in all of our cells. The more cells we shed, the higher the concentration of our DNA that can be found, which generally also means more accurate DNA testing. However, our DNA can also be transferred from one object to another. So it’s possible for your DNA to be found on items you’ve never had contact with or at locations you’ve never been. For example, if you’re sitting in a doctor’s waiting room and scratch your face, your DNA may be found on the magazines on the table next to you that you never flipped through. Your DNA left on a jacket you lent a friend can transfer onto items they brush by or at locations they travel to.

Given the ease with which DNA is deposited, it is no surprise that DNA samples from crime scenes are often a mixture of DNA from multiple individuals, or “donors.” Investigators gather DNA samples by swiping a cotton swab at the location where the perpetrator may have deposited their DNA, such as a firearm, a container of contraband, or the body of a victim. In many cases where the perpetrator’s bodily fluids are not involved, the DNA sample may contain only a small amount of the perpetrator’s DNA, which could be less than a few cells, and is likely to also contain the DNA of others. This makes identifying whether a person’s DNA is present in a complex DNA mixture a very difficult problem. It’s like having to figure out whether someone drove on a specific interstate when all you have is an incomplete and possibly inaccurate list of towns and exits they passed, all of which could have been from any one of the roads they used. You don’t know the number of roads they drove on, and can only guess at which towns and exits were connected.

Running these DNA mixture samples through electrophoresis creates much noisier results, which often contain errors that indicate additional alleles at a locus or miss alleles that are present. Human analysts then decide which alleles appear dark enough in the gel to count and which are light enough to ignore. Traditional DNA analysis, at least, worked in this binary way: an allele either counted or did not count as part of a specific DNA donor profile.
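As a minimal sketch of that binary calling, under invented numbers: each detected allele at a locus has a measured peak intensity, and anything below an analytic threshold is simply dropped. The values and the threshold here are illustrative assumptions, not real laboratory parameters.

    # Invented peak intensities for alleles detected at a single locus.
    detected_peaks = {15: 1200, 16: 950, 17: 80, 19: 45}   # allele -> signal intensity

    ANALYTIC_THRESHOLD = 100   # illustrative cutoff, not a validated lab value

    # Traditional "binary" calling: an allele either counts or it does not.
    called_alleles = [allele for allele, intensity in detected_peaks.items()
                      if intensity >= ANALYTIC_THRESHOLD]

    print(called_alleles)   # [15, 16] -- alleles 17 and 19 are discarded as noise

Everything below the cutoff disappears from the analysis, even though, in a low-quality mixture, a faint peak may belong to a real contributor and a strong one may be an artifact.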

Probabilistic Genotyping Software and Their Problems 

Enter probabilistic genotyping software. The proprietors of these programs—the two biggest players are STRMix and TrueAllele—claim that their products, using statistical modeling, can determine the likelihood that a DNA profile or combinations of DNA profiles contributed to a DNA mixture, instead of the binary approach. Prosecutors often describe the analysis from these programs this way: It is X times more likely that defendant, rather than a random person, contributed to this DNA mixture sample.

However, these tools, like any statistical model, can be constructed poorly. Which assumptions are incorporated into them, and how, can cause the results to vary. They can be analogized to the election forecast models from FiveThirtyEight, The Economist, and The New York Times. They all use statistical modeling, but the final numbers differ because of the myriad design differences from each publisher. Probabilistic genotyping software is the same: every program uses statistical modeling, but the output probability is affected by how that model is built. Like the different election models, different probabilistic DNA programs take diverging approaches to which factors are considered, counteracted, or ignored, and at what thresholds. Additionally, input from human analysts, such as the hypothetical number of people who contributed to the DNA mixture, also changes the calculation. If this is less rigorous than you expected, that’s exactly the point—and the problem. In our highway analogy, this is like a software program that purports to tell you how likely it is that you drove on a specific road based on a list of towns and exits you passed. Not only is the result affected by the completeness and accuracy of the list, but the map the software uses, and the data available to it, matter tremendously as well.
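To illustrate why those modeling choices matter, here is a deliberately oversimplified likelihood-ratio sketch for a single locus of a two-person mixture. It compares “defendant plus one unknown” against “two unknowns” using made-up allele frequencies, a bare Hardy-Weinberg heterozygote formula, and only one of the possible genotype pairings; it ignores peak heights, allele dropout and drop-in, relatedness, and everything else real programs must model. Every number in it is an assumption, and changing any assumption changes the output.

    # Made-up population frequencies for the alleles observed at one locus.
    allele_freq = {15: 0.10, 16: 0.25, 17: 0.05, 18: 0.30}

    observed_mixture = {15, 16, 17, 18}   # alleles seen in the evidence sample
    defendant = {15, 17}                  # defendant's alleles at this locus

    def prob_random_person_has(alleles):
        """Crude chance a random person carries exactly this heterozygous pair (2pq)."""
        a, b = sorted(alleles)
        return 2 * allele_freq[a] * allele_freq[b]

    # H1: defendant plus one unknown; the unknown must account for the remaining alleles.
    p_h1 = prob_random_person_has(observed_mixture - defendant)

    # H2: two unknowns; this considers just one of several possible pairings.
    p_h2 = prob_random_person_has({15, 17}) * prob_random_person_has({16, 18})

    likelihood_ratio = p_h1 / p_h2
    print(f"Likelihood ratio at this locus: {likelihood_ratio:,.0f}")   # 100

Tweak the assumed allele frequencies, sum over all the pairings the second hypothesis should really include, or change the assumed number of contributors, and the reported ratio moves, sometimes by orders of magnitude, which is exactly the sensitivity described below.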

Because of these complex variables, a probability result is always specific to how the program was designed, the conditions at the lab, and any additional or discretionary input used during the analysis. In practice, different DNA analysis programs have produced substantially different probabilities for whether a defendant’s DNA appeared in the same DNA sample, with discrepancies sometimes reaching the millions-fold.

And yet it is impossible to determine which result, or which piece of software, is the most accurate, because there is no objective truth against which those numbers can be compared. We simply cannot know the true probability that a person contributed to a DNA mixture. In controlled testing, we know whether a person’s DNA was part of a mixture or not, but there is no way to establish whether it was 100 times, or a million times, more likely that the donor’s DNA rather than an unknown person’s contributed to it. And while there is no reason to assume that the tool that outputs the highest statistical likelihood is the most accurate, the software’s designers may nevertheless be incentivized to build their products in ways that output larger numbers, because “1 quintillion” sounds more convincing than “10,000”—especially when there is no way to objectively evaluate accuracy.

DNA Software Review is Essential

Because of these issues, it is critical to examine the source code of any DNA analysis software used in the legal system. We need to know exactly how these statistical models are built, and looking at the source code is the only way to uncover non-obvious coding errors. Yet the companies behind these programs have fought against releasing their source code—even when it would be examined only by the defendant’s legal team and sealed under a court order. In the rare instances where the code has been reviewed, researchers have found programming errors with the potential to implicate innocent people.

Forensic DNA analyses have the whiff of science—but without source code review, it’s impossible to know whether or not they pass the smell test. Despite the opacity of their design and the impossibility of measuring their accuracy, these programs have become widely used in the legal system. EFF has challenged—and continues to challenge—the failure to disclose the source code of these programs. The continued use of these tools, the accuracy of which cannot be ensured, threatens the administration of justice and the reliability of verdicts in criminal prosecutions.

Related Cases: California v. Johnson

FAQ: DarkSide Ransomware Group and Colonial Pipeline

Thu, 05/13/2021 - 3:09pm

With the attack on Colonial Pipeline by a ransomware group causing panic buying and gasoline shortages on the US East Coast, many are left with more questions than answers about what exactly is going on. We have put together a short FAQ covering the most common technical questions, in an effort to shed light on what we already know.

What is Ransomware?

Ransomware is a portmanteau of “ransom” (holding stolen property to extort money for its return or release) and “malware” (malicious software installed on a machine). The principle is simple: the malware encrypts the victim’s files so they can no longer be used, and the attacker demands payment from the victim before decrypting them.

Most often, ransomware exploits a vulnerability to infect a system or network, then encrypts files to deny the owner access to them. The key needed to decrypt the files is held by a third party—the extortionist—who then communicates instructions to the victim (usually through a piece of text left on the desktop or other obvious means) on how to pay in exchange for the decryption key or program.

Most modern ransomware uses a combination of public-key encryption and symmetric encryption to lock victims out of their files. Since the encryption and decryption keys are separate in public-key encryption, the extortionist can guarantee that the decryption key is never (not even briefly, during execution of the ransomware code) transmitted to the victim before payment.
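
As a rough sketch of that hybrid scheme, and only of the cryptographic principle, the example below uses the Python cryptography library: the data is encrypted with a fresh symmetric key, and that symmetric key is then wrapped with a public key whose private half stays with the key's owner. Nothing here is any group's actual code, and all names are illustrative; the point is simply why nothing left on the victim's machine can undo the encryption.

```python
# Minimal sketch of hybrid (public-key + symmetric) encryption, the principle
# described above. Educational illustration only.
from cryptography.fernet import Fernet
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import rsa, padding

OAEP = padding.OAEP(mgf=padding.MGF1(algorithm=hashes.SHA256()),
                    algorithm=hashes.SHA256(), label=None)

# The key pair belongs to the extortionist; the private key never
# touches the victim's machine.
private_key = rsa.generate_private_key(public_exponent=65537, key_size=2048)
public_key = private_key.public_key()

# On the victim's machine: a fresh symmetric key encrypts the data...
sym_key = Fernet.generate_key()
locked_file = Fernet(sym_key).encrypt(b"the victim's file contents")

# ...and only the public key is used to wrap that symmetric key, so nothing
# present on the victim's machine can recover it.
wrapped_key = public_key.encrypt(sym_key, OAEP)

# Only the holder of the private key can unwrap the symmetric key and, with
# it, restore the original data. That is what the victim is asked to pay for.
recovered = Fernet(private_key.decrypt(wrapped_key, OAEP)).decrypt(locked_file)
assert recovered == b"the victim's file contents"
```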

Extortionists in ransomware attacks are mainly motivated by the prospect of payment, whereas other forms of cyberattack are more often carried out by hackers with political or personal motivations.

What is the Ransomware Industry?

Although ransomware has existed since the late 1980s, its use has expanded dramatically in recent years, partly due to the effectiveness of cryptocurrencies in facilitating payments to anonymous, remote recipients. An extortionist can demand payment in bitcoin in exchange for decryption keys rather than relying on older, far more regulated financial channels. This has driven the growth of a $1.4 billion ransomware industry in the US, based solely on locking users and companies out of their files. Average payments to extortionists are increasing as well: a report by Coveware shows 31% growth in the average payment between Q2 and Q3 of 2020.

The WannaCry attack in 2017 was one of the largest ransomware incidents to date. Using a leaked NSA exploit dubbed “EternalBlue,” WannaCry spread to more than 200,000 machines across the world, displaying a message with a bitcoin address and demanding payment from operators of unpatched Windows systems. The attack is estimated to have cost hundreds of millions to billions of dollars. Investigations of the WannaCry code by a number of information security firms and the FBI pointed to the hacking group behind the attack having connections to the North Korean state apparatus.

What is DarkSide?

The FBI revealed on Monday that the hacking group DarkSide is behind the latest ransomware attack on Colonial Pipeline. DarkSide is a relatively new ransomware group, first appearing in Russian-language hacking forums in August 2020. They have positioned themselves as a new type of ransomware-as-a-service business, attempting to cultivate “trust” and a sense of reliability between themselves and their victims. To encourage payment, DarkSide has worked to build a reputation that victims who deliver the ransom reliably receive a decryption key for their files. In this vein, the group has established a polished website called DarkSide Leaks, aimed at reaching journalists and establishing a public face. They say they target only well-funded individuals and corporations able to pay the ransoms asked for, and they maintain a code of conduct claiming not to target hospitals, schools, or non-profits. They have also attempted to burnish their image with token donations to charity: DarkSide, which reportedly asks for ransoms typically ranging between $200,000 and $2,000,000, produced receipts showing a total of $20,000 in donations to the charities Children International and The Water Project. The charities refused to accept the money.

DarkSide claims that they are not affiliated with any government and that their motives are purely financial—a claim the cybersecurity firm Flashpoint has assessed as most likely true. However, DarkSide code analyzed by the firm Cybereason has been shown to check the system’s language settings as a very first step and halt the attack if the result is a language “associated with former Soviet Bloc nations.” This has fueled speculation in the US that Russia may be affording the group special protection, or at least turning a blind eye to their misdeeds.

The result has been profitable for the cyber-extortion group. In mid-April, the group obtained $11 million from a high-profile victim. Bloomberg reports that Colonial Pipeline paid $5 million to the group.

What exactly happened last Friday?

Colonial Pipeline has operated continuously since the early 1960s, delivering 45% of the US East Coast’s gasoline, as well as diesel and jet fuel. On Friday, May 7th, it shut down 5,500 miles of its pipeline infrastructure in response to a cyber-extortion attempt; the pipeline restarted on May 12th. Though the incident is still under investigation, the FBI confirmed on Monday what was already suspected: DarkSide was behind the attack.

In an apparent response to—though not an admission of involvement in—the attack, DarkSide released a statement on their website stating that they would introduce “moderation” to “avoid social consequences in the future.”

Why did they target Colonial Pipeline?

If patterns are any indication, DarkSide chose Colonial as a “big game” target because of the firm’s deep pockets (Colonial is worth about $8 billion). Still, many suspect that DarkSide is now feeling a dawning sense of dread as the knock-on effects of the attack play out: panic buying, gas shortages, the involvement of federal investigators, and an executive order from President Biden intended to bolster America’s cyberdefenses. With the attack escalated to the level of an international incident, DarkSide may see the independence and latitude they are reported to enjoy dissipate under geopolitical pressure.

What can I do to defend myself against ransomware?

Frequently backing up your data to an external hard drive or a cloud storage provider will ensure you can restore it later. If you already have a backup, do not plug the external hard drive into your computer after it has been infected: the ransomware will likely target any newly recognized device. You may need to reinstall your operating system, replace your hard drive, or take the machine to a specialist to ensure complete removal of the infection.
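
As a minimal sketch of that advice, the Python snippet below copies a folder into a timestamped directory on a separate drive. The paths are placeholders for illustration; choose locations that suit your own setup, and keep the destination drive disconnected except while backing up.

```python
# Minimal, illustrative backup script: copy a folder into a timestamped
# directory on a separate drive. Paths below are placeholders.
import shutil
from datetime import datetime
from pathlib import Path

SOURCE = Path.home() / "Documents"                  # what to protect
DEST_ROOT = Path("/mnt/external-drive/backups")     # e.g. an external disk

def make_backup() -> Path:
    stamp = datetime.now().strftime("%Y%m%d-%H%M%S")
    target = DEST_ROOT / f"documents-{stamp}"
    shutil.copytree(SOURCE, target)                 # raises if SOURCE is missing
    return target

if __name__ == "__main__":
    print("Backup written to", make_backup())
```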

You can also follow our guide to keeping your data safe. The Cybersecurity and Infrastructure Security Agency (CISA) has also published a detailed guide on protecting yourself from ransomware. Note that it is much easier to defend against malware than to remove it once you are infected, so it is always advisable to take proactive steps.

EFF to Ninth Circuit: Don’t Block California’s Pathbreaking Net Neutrality Law

Thu, 05/13/2021 - 1:13pm

Partnering with the ACLU and numerous other public interest advocates, businesses, and educators, EFF has filed an amicus brief urging the Ninth Circuit Court of Appeals to uphold a district court’s decision not to block enforcement of SB 822, a law that ensures that all Californians have fair access to all internet content and services.

For those who haven’t been following this issue: after the Federal Communications Commission rolled back net neutrality protections in 2017, California stepped up and passed a bill that does what the FCC wouldn’t: bar ISPs from blocking and throttling internet content and from imposing paid prioritization schemes. The major ISPs promptly ran to court, claiming that California’s law is preempted (meaning the FCC’s choice to abdicate binds everyone else) and asking the court to halt enforcement until the question was resolved. On February 23, 2021, Judge John Mendez said no, making it fairly clear that he did not think the ISPs’ challenge would succeed on the merits. As expected, the parties then headed to the Ninth Circuit.

Our brief supporting the district court’s decision explains some of the stakes of SB 822, particularly for communities that are already at a disadvantage. Without legal protections, low-income Californians who rely on mobile devices for internet access and can’t pay extra for content may face limits on access that is critical for distance learning, maintaining small businesses, and staying connected. Schools and libraries are also justifiably concerned that, without net neutrality protections, paid prioritization schemes will degrade access to the material students and the public need in order to learn. SB 822 addresses this by ensuring that large ISPs do not take advantage of their stranglehold on Californians’ internet access to slow or otherwise manipulate internet traffic.

The large ISPs also have a vested interest in shaping internet use to favor their own subsidiaries and business partners, at the expense of diverse voices and innovation. Absent meaningful competition, ISPs can leverage their last-mile monopolies over customers’ home connections to bypass competition for a range of online services. That would mean less choice, lower quality, and higher prices for users—and new barriers to entry for innovators.

We hope the court recognizes how important SB 822 is, and upholds Judge Mendez’s ruling.
