EFF: Updates

EFF's Deeplinks Blog: Noteworthy news from around the internet

Seven Times Journalists Were Censored: 2017 in Review

Sat, 12/30/2017 - 8:21pm

Social media platforms have developed into incredibly useful resources for professional and citizen journalists, and have allowed people to learn about and read stories that may never have been published in traditional media. Sharing on just one of a few large platforms like Facebook, Twitter, and YouTube may mean the difference between a story being read by a few hundred versus tens of thousands of people.

Unfortunately, these same platforms have taken on the role of censor. They have created moderation policies intended to encourage civil speech on their platforms, but simply put: they are not very good at it. These policies are applied unevenly, often without an appeal process, sometimes rely on artificial intelligence to flag content, and usually offer no transparency into the decision-making process. The result is the censorship and blocking of content of all types.

Globally, these content takedown processes often ignore the important evidentiary and journalistic roles content can play in countries where sharing certain information has consequences far beyond those in the U.S. We recommend that any intermediary takedown practice include due process and be transparent, as laid out in the Manila Principles. And, as these examples demonstrate, social media platforms often make censorship decisions without due process, without transparency, and with end results that would make most people scratch their heads.

We’re regularly documenting censorship and content takedowns like these on Onlinecensorship.org, a platform that records the who, what, and why of content takedowns on social media sites. Onlinecensorship.org is a project of the Electronic Frontier Foundation (EFF) and Visualizing Impact.

While there are hundreds, possibly thousands, of examples, here are seven of the most egregious instances of social media platforms censoring journalism in 2017.

1. Human Rights Abuses in Syria and Myanmar Removed from YouTube and Facebook

Social media platforms can contain video or photographic evidence that can be used to build human rights abuse cases, especially in situations where the videos or photos aren’t safe on a hard drive due to potential loss or retaliation, or in instances where larger organizations have been blocked. But first-hand accounts like these are at constant risk on platforms like YouTube and Facebook. YouTube in particular has implemented artificial intelligence systems to identify and remove violent content that may be extremist propaganda or disturbing to viewers, and, according to a report in The Intercept, removed documentation of the civil war in Syria in the process. Facebook, meanwhile, removed photos and images of abuses by the Myanmar government against the Rohingya ethnic minority.

2. A Buzzfeed Journalist’s Account is Locked on Twitter for a Seven-Year-Old Tweet

In November, Katie Notopoulos, a journalist for Buzzfeed, had her Twitter account locked after a seven-year-old tweet was reported by several people all at once. She was “mass-reported,” or subjected to a campaign in which many people reported her at the same time, for a 2011 tweet that read “Kill All White People.” Her account remained locked until the offending tweet was removed. Twitter’s inconsistent content policies allow for this sort of targeted harassment, while making it difficult to know what is and what is not “acceptable” on the platform.

3. Ukrainian News Site Liga is Banned from Facebook

In December, Facebook banned all links to and publications from the independent Ukrainian news website Liga.net. It has since restored the links and posts, and is completing an internal investigation. According to Liga, Facebook told them they were banned because of "nudity." A Facebook representative told us that the site was blocked because it had "triggered a malicious ad rule." Under murky moderation policies like these, organizations can be banned and then given conflicting answers about why it happened and what they can do about it. A single platform with this lack of transparency should not be able to flip a switch and cut off a majority of the traffic to an entire domain without offering a concrete explanation to affected users.

4. At Request of Indian Government, Twitter Suspends Accounts and Deletes Tweets Sympathetic to Kashmiri Independence

In August, the Indian government asked Twitter to suspend over two dozen Twitter accounts and remove over 100 tweets—some belonging to journalists and activists—that talked about the conflict in Kashmir, or showed sympathy for Kashmiri independence movements. The Indian government claimed the tweets violated Section 69A of India's Information Technology Act, which allows the government to block online content when it believes the content threatens the security, sovereignty, integrity, or defense of the country. 

The Indian government reported the tweets and Twitter accounts, and Twitter contacted the users explaining they would be censored. There were no individual explanations given for why these tweets or accounts were chosen, beyond highlighting the conflict in Kashmir. 

5. Panama Papers Co-Author Blocked from Facebook for Sharing Documents Critical of Maltese Government

Pulitzer Prize-winning journalist Matthew Caruana Galizia was locked out of his Facebook account after sharing four posts that Facebook deleted for violating the social network’s community standards. The four posts contained allegations against Malta’s prime minister, his chief of staff, and his minister of energy. The posts included images of documents from the Panama Papers leak, a collection of 11.5 million files put together by the International Consortium of Investigative Journalists, of which he is a member.

It’s unclear what community standard Facebook applied to delete the photos and lock the account, although it seems that it was due to the materials containing private information about individuals. Facebook has since announced that material that would otherwise violate its standards would be allowed if it was found to be “newsworthy, significant, or important to the public interest.” However, the expectation that Facebook moderators should decide what is newsworthy or important is part of the problem: the platform itself, through an undisclosed process, continues to be the gatekeeper for journalistic content. 

6. San Diego City Beat’s Article on Sexual Harassment Removed from Facebook 

Alex Zaragoza, a writer for San Diego City Beat, had links to her article removed from Facebook because, according to the company, it was an “attack.” The article, entitled “Dear dudes, you’re all trash,” critiqued men for their surprise and obliviousness in light of multiple high-profile sexual harassment scandals.

Presumably, the post ran afoul of Facebook’s policy against “hate speech,” which includes attacks against a group on the basis of gender. But as ProPublica noted this summer, those standards aren’t applied evenly: “White men” are a protected group, for example, but “black children” aren’t. 

If Facebook is going to continue to encourage publishers to publish their stories on the platform first, it needs to consider the effect its rules have on journalistic content. It has made efforts in the past to modify its standards for historically significant content. For example, after much controversy, it decided to allow users to share the iconic Vietnam War photo of the ‘Napalm Girl’, recognizing “the history and global importance of this image in documenting a particular moment in time.” It should consider doing the same for contemporary newsworthy content (especially content that expresses valuable critique and dissent from minority voices) that would otherwise run afoul of its rules.

7. Snapchat and Medium Censor Qatari Media At Request of Saudi Arabia

The Kingdom of Saudi Arabia is one of the world’s most prolific censors. American companies—including Facebook and Google—have at times in the past voluntarily complied with content restriction demands from Saudi Arabia, though we know little about their context. 

In June, Medium complied with requests from the Saudi government to restrict access to content from two publications: Qatar-backed Al Araby Al Jadeed (“The New Arab”) and The New Khaliji News. In the interest of transparency, the company sent both requests to Lumen, a database that has collected and analyzed millions of takedown requests since 2001.

In September, Snap disappointed free expression advocates by joining the list of companies willing to team up with Saudi Arabia against Qatar and its media outlets. The social media giant pulled the Al Jazeera Discover Publisher Channel from Saudi Arabia. A company spokesperson told Reuters: “We make an effort to comply with local laws in the countries where we operate.”

This article is part of our Year In Review series. Read other articles about the fight for digital rights in 2017.


Time to Rethink Copyright Safe Harbors? 2017 in Review

Sat, 12/30/2017 - 7:35pm

Platform safe harbors have been in the crosshairs of copyright industry lobbyists throughout 2017. All year EFF has observed them advancing their plans around the world to weaken or eliminate the legal protections that have enabled the operation of platforms as diverse as YouTube, the Internet Archive, Reddit, Medium, and many thousands more. Copyright safe harbor rules empower these platforms by ensuring that they are free to host user-uploaded content, without manually vetting it (or, worse, automatically filtering it) for possible copyright infringements. Without that legal protection, it would be impossible for such platforms to operate as they do today.

In May this year we heard recording industry representatives call copyright safe harbor a greater threat to their industry than piracy. But that alarming claim isn't based in reality. In most countries that have copyright safe harbors, a platform becomes responsible for removing a user's copyright-infringing content once it receives notice of the infringement in the manner prescribed by law. In the United States, that law is section 512 of the Digital Millennium Copyright Act (DMCA), and in Europe it is Article 14 of the E-Commerce Directive.

But Big Content isn't satisfied with such laws, because they place responsibility on copyright holders to request the removal of infringing content, and because the availability of free, user-uploaded content supposedly depresses the value of mainstream, paid entertainment. The content industry thinks a filtered, regulated Internet that suppresses user-uploaded content will deliver them higher revenues, and they describe the absence of these imaginary monopoly rents as a "value gap."

In Europe

Content industry lobbyists' worldwide efforts to eliminate or weaken copyright safe harbors have been most focused in Europe, where they have the ear of the European Commission, the body that introduces the first drafts of new European laws. Last year, the Commission produced a draft Directive on Copyright in the Digital Single Market that sought to reinterpret the E-Commerce Directive in such a way as to distinguish between Internet intermediaries such as cloud services and ISPs, which would remain entitled to the copyright safe harbor, and platforms that store and optimize the presentation of large volumes of user-uploaded content, which would not.

Supporting this purported reinterpretation of the law would be a new requirement on user-content platforms to implement automated upload filtering of allegedly copyright-infringing content. Throughout this year, various committees of the European Parliament have had the opportunity to review and propose amendments to the Commission's proposal, which EFF and European groups have joined in criticizing as illegal and disproportionate.

In North America

In the United States, copyright holder efforts to weaken copyright safe harbor protection have been proceeding on several fronts. The first is the Copyright Office's ongoing study on section 512 of the DMCA, in which many copyright holder representatives argued in favor of replacing the DMCA's "notice and takedown" system with an automatic filtering mandate. As we have explained, this would be nearly identical to the extreme censorship measure proposed by the European Commission.

The DMCA safe harbor was also weakened by the adoption late last year of a new rule requiring websites to renew their DMCA agent registration every three years. A website that fails to do so loses the DMCA safe harbor's protection from copyright claims over user-uploaded content. This unnecessary new requirement will hit the smallest online platforms the hardest, as they are the least likely to have the capacity to keep up with Copyright Office red tape, until they unexpectedly find themselves on the receiving end of a lawsuit.

A third front on which copyright holder groups are attacking copyright safe harbors is through the negotiations for a modernized North American Free Trade Agreement (NAFTA). In submissions to the United States Trade Representative (USTR), big content lobby groups such as the Recording Industry Association of America (RIAA) have slammed the DMCA 512 safe harbor as "antiquated", and advocated for its overhaul in NAFTA with new rules that would encourage mandatory filtering and blocking.

In Australia

Australia is an unusual case. Although it passed copyright safe harbor laws in compliance with its 2005 Free Trade Agreement with the United States, due to what is widely acknowledged to have been a drafting error, the safe harbors protect only commercial ISPs, excluding a number of other Internet intermediaries, including websites and universities. You would think that fixing such a simple drafting error would be a no-brainer, but that's without reckoning with the power of content industry lobbyists, who responded to an inquiry into the safe harbor provision earlier this year by urging the government not to close this loophole.

Sure enough, a dozen years after the original Australia-U.S. Free Trade Agreement, the government has now announced that for web platforms, the safe harbor drafting error will not be corrected for now, thereby leaving platforms exposed to claims from copyright owners over user-uploaded content.

Despite what big content lobbyists may say, this is no time to rethink copyright safe harbors. They are as vital to the Internet today as they have ever been. Weaker copyright safe harbors would mean fewer platforms willing to take the risk of hosting unmoderated speech of users. And with fewer platforms willing to host users' speech, the result would be a very different Internet from the one we know—one in which it would be much harder for users to exercise their freedom of expression online. So far from re-examining copyright safe harbors in 2018, we will be redoubling our efforts to defend them, as the European Union finally votes on its Digital Single Market Directive, North America negotiates the shape of the safe harbor rules in NAFTA, and users in Australia and other countries engage in their own battles to preserve freedom of expression online.

This article is part of our Year In Review series. Read other articles about the fight for digital rights in 2017.


The Worst Law in Technology Strikes Again: 2017 in Review

Fri, 12/29/2017 - 8:43pm

The latest on the Computer Fraud and Abuse Act? It’s still terrible. And this year, the detrimental impacts of the notoriously vague and outdated criminal computer crime statute showed themselves loud and clear. The statute lies at the heart of the Equifax breach, which might have been averted if our laws didn’t criminalize security research. And it’s at the center of a court case pending in the Ninth Circuit Court of Appeals, hiQ v. LinkedIn, which threatens a hallmark of today’s Internet: free and open access to publicly available information.

At EFF, we’ve spent 2017 working to make sure that courts and policy makers understand the role the CFAA has played in undermining security research, and that the Ninth Circuit rejects LinkedIn’s attempt to transform a criminal law meant to target serious computer break-ins into a tool for enforcing corporate computer use policies. We’ve also continued our work to protect programmers and developers engaged in cutting-edge exploration of technology via our Coders’ Rights Project—coders who often find themselves grappling with the messiness that is the CFAA. As this fight carries us into 2018, we stand ready to do all we can to rein in the worst law in technology.

Equifax: The CFAA Chills Another Security Researcher

The CFAA makes it illegal to engage in “unauthorized access” to a computer connected to the Internet, but the statute doesn’t tell us what “authorization” or “without authorization” means. This vague language might have seemed innocuous to some back in 1986 when the statute was passed, but in today’s networked world, where we all regularly connect to and use computers owned by others, courts cannot even agree on what the law covers. As a result, this pre-Web law is causing serious problems.

One of the biggest problems: the law is notorious for chilling the work of security researchers.

Most of the time, we never hear about the research that could have prevented a security nightmare. But with Equifax’s data breach, we did. As if the news of the catastrophic breach wasn’t bad enough, we learned in October—thanks to reporting by Motherboard—that a security researcher had warned Equifax “[m]onths before its catastrophic data breach . . . that it was vulnerable to the kind of attack that later compromised the personal data of more than 145 million Americans[.]” According to Equifax’s own timeline, the company didn’t patch the vulnerability for six months—and “only after the massive breach that made headlines had already taken place[.]”

The security researcher who discovered the vulnerability in Equifax’s system back in 2016 should have been empowered to bring their findings to someone else's attention after Equifax ignored them. If they had, the breach may have been avoided. Instead, they faced the risk of a CFAA lawsuit and potentially decades in federal prison.

In an era of massive data breaches that impact almost half of the U.S. population as well as people around the globe, a law that ostracizes security researchers is foolish, and it undermines the security of all of us. A security research exemption is necessary to ensure that the security research community can do its work to keep us all safe and secure without fear of prosecution. We’ve been calling for these reforms for years; they’re long overdue.

hiQ v. LinkedIn: Abuse of the CFAA to Block Access to Publicly Available Information

One thing that’s consistently gotten in the way of CFAA reform: corporate interests. And 2017 was no different in this respect. This year, LinkedIn has been pushing to expand the CFAA’s already overly broad scope, so that it can use the statute to maintain its edge over a competing commercial service, hiQ Labs. We blogged about the details of the dispute earlier this year. The social media giant wants to use the CFAA to enforce its corporate policy against using automated scripts—i.e., scraping—to access publicly available information on the open Internet. But that would mean potentially criminalizing automated tools that we all rely on every day. The web crawlers that power Google Search, DuckDuckGo, and the Internet Archive, for instance, are all automated tools that collect (or scrape) publicly available information from across the Web. LinkedIn paints all “bots” as bad, but they are a common and necessary part of the Internet. Indeed, “good bots” were responsible for 23 percent of global Web traffic in 2016. Using them to access publicly available information on the open Internet should not be punishable as a federal felony.
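To make concrete what this kind of “automated script” looks like, here is a minimal illustrative sketch, written for this article rather than taken from any party in the case; the site, path, and bot name are hypothetical placeholders. It fetches publicly available pages the way a well-behaved crawler does: it identifies itself, honors the site’s robots.txt preferences, and rate-limits its requests.

    import time
    import urllib.request
    import urllib.robotparser
    from urllib.parse import urljoin

    BASE = "https://example.com"            # hypothetical public website
    AGENT = "example-research-crawler/1.0"  # the bot identifies itself honestly

    # Fetch and parse the site's stated crawling preferences.
    robots = urllib.robotparser.RobotFileParser(urljoin(BASE, "/robots.txt"))
    robots.read()

    def fetch_public_page(path):
        """Return the page body if robots.txt allows crawling it, else None."""
        url = urljoin(BASE, path)
        if not robots.can_fetch(AGENT, url):
            return None  # respect the site's wishes
        req = urllib.request.Request(url, headers={"User-Agent": AGENT})
        with urllib.request.urlopen(req, timeout=10) as resp:
            body = resp.read().decode("utf-8", errors="replace")
        time.sleep(1)  # throttle so the crawl never burdens the server
        return body

    page = fetch_public_page("/public-directory")
    print("fetched" if page else "disallowed by robots.txt")

Nothing in a script like this breaks into anything: it requests the same public pages any browser could, just without a human clicking.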

Congress passed the CFAA to target serious computer break-ins. It did not intend to hand private companies a tool for enforcing their computer use policies. Using automated scripts to access publicly available data does not involve breaking into any computer, and neither does violating a website’s terms of use. Neither should be CFAA offenses.

LinkedIn’s expansive interpretation of the CFAA would exacerbate the law’s chilling effects—not only for the security research community, but also for journalists, discrimination researchers, and others who use automated tools to support their socially valuable work. Similar lawsuits are already starting to pop up across the country, including one by the airline Ryanair alleging that Expedia’s fare scraping violated the CFAA.

Luckily, a court in San Francisco called foul, questioning LinkedIn’s use of the CFAA to block access to public data, finding that the “broad interpretation of the CFAA invoked by LinkedIn, if adopted, could profoundly impact open access to the Internet, a result that Congress could not have intended when it enacted the CFAA over three decades ago.”

The case is now on appeal, and EFF, DuckDuckGo, and the Internet Archive have urged the Ninth Circuit Court of Appeals to uphold the lower court's finding and reject LinkedIn’s shortsighted request to transform the CFAA into a tool for policing the use of publicly available data on the open Internet. And we’re hopeful it will. During a Ninth Circuit oral argument in a different case in July, Judge Susan Graber pushed back [at around 33:40] on Oracle’s argument that automated scraping was a CFAA violation.

LinkedIn says it wants to protect the privacy of user data. But public data is not private, so why not just put the data behind its pre-existing username and password barrier? It seems that LinkedIn wants to take advantage of the benefits of the open Internet while at the same time abusing the CFAA to avoid the Web’s “open trespass norms.” The CFAA is an old, blunt instrument, and trying to use it to solve a modern, complicated dispute between two companies will undermine open access to information on the Internet for everyone. As we said in our amicus brief:

The power to limit access to publicly available information on the Internet under color of the law should be dictated by carefully considered rules that balance the various competing policy interests. These rules should not allow the handful of companies that collect massive amounts of user data to reap the benefits of making that information publicly available online—i.e., more Internet traffic and thus more data and more eyes for advertisers—while at the same time limiting use of that public information via the force of criminal law.

The Ninth Circuit will hear oral argument on the LinkedIn case in March 2018, and we’ll continue to fight LinkedIn’s expansive interpretation of the CFAA into the New Year.

This article is part of our Year In Review series. Read other articles about the fight for digital rights in 2017.

Related Cases: hiQ v. LinkedIn

New Places, New Faces in Patents: 2017 in Review

Fri, 12/29/2017 - 4:56pm

This year was once again active in terms of patent law and policy. Throughout it all, EFF worked to protect end user and innovator rights. We pushed for a rule that would end the Eastern District of Texas’ unwarranted dominance as a forum for patent litigation. We also defended processes at the Patent Office that give it the opportunity to correct mistakes (many, many mistakes) made in issuing patents. And we fought to prevent new patent owner tactics that would increase consumer costs.

New Places

First, because of recent developments both at the Supreme Court and at the U.S. Court of Appeals for the Federal Circuit, this year we finally saw a shift away from the dominance of the Eastern District of Texas as the primary forum for patent litigation. The Supreme Court issued its highly anticipated decision in TC Heartland v. Kraft Foods, finding that patent cases are subject to a special statute when determining where they can be filed. This decision reversed a rule, announced by the Federal Circuit in 1990, that allowed patent owners to file in practically any far-flung corner of the country (enter the Eastern District, stage right). EFF filed an amicus brief urging the Supreme Court to recognize the problems created by the Federal Circuit.

Following close on the heels of TC Heartland was a second, arguably more important decision from the Federal Circuit in In re Cray. While TC Heartland determined what statute controlled patent venue, In re Cray clarified that the statute did not have the broad scope a court in the Eastern District of Texas was trying to give it.

Together, these two decisions are having an impact: Lex Machina reports that 22% of patent cases were filed in the Eastern District of Texas this year (down from 44% and 37% in 2015 and 2016, respectively). Looked at quarterly, the effect of these two cases is even more pronounced. In the first quarter of 2017 (before TC Heartland and In re Cray were decided), 33% of cases were filed in the Eastern District. So far in the fourth quarter, only 12% of cases were filed there.

Where patent issues will be heard is also at issue in another Supreme Court case: Oil States v. Greene’s Energy. There, the Supreme Court has been asked whether Congress could, under the Constitution, designate the Patent Office as a forum to decide certain issues related to patent validity. EFF has supported the Patent Office procedures, which allow technical judges to decide on technical issues, and create a streamlined procedure that avoids many of the pitfalls of litigation. Many patent owners, on the other hand, do not like them as they see the Patent Office as improperly invalidating patents (a claim which is dubious at best, in our opinion). A decision that rejects the Patent Office procedures would be a significant setback in the fight against stupid patents that never should have been awarded. The Supreme Court heard oral argument in late November, and a decision is expected in the new year.

New Faces

We also saw some new faces in patents. Michelle Lee, the Director of the Patent Office, stepped down, and Andrei Iancu, a partner at the law firm Irell & Manella, was nominated to fill the position. (As of this writing, he had yet to be confirmed.) We do not know what the presumptive new director will do, but EFF will continue to represent the public interest at the Patent Office when we can.

New patent owners also appeared. Significant controversy arose after Allergan, a multi-billion dollar pharmaceutical company, paid a Native American tribe to take ownership of its patents. The deal saw the tribe assert sovereign immunity in an attempt to prevent the Patent Office from reviewing whether the patents were properly issued. The move generated significant outcry, proposed legislation, and Congressional hearings. Whether it will ultimately succeed is still to be determined.

New Year

Looking forward, we expect to see a continued push for stronger patent rights from those with vested interests in making it difficult to challenge bad patents. EFF will continue to fight for a more balanced policy that appropriately recognizes the public’s interests. 

This article is part of our Year In Review series. Read other articles about the fight for digital rights in 2017.


Court Challenges to NSA Surveillance: 2017 in Review

Thu, 12/28/2017 - 3:45pm

One of the government’s most powerful surveillance tools is scheduled to sunset in less than three weeks, and, for months, EFF has fought multiple legislative attempts to either extend or expand the NSA’s spying powers—warning the public, Representatives, and Senators about circling bills that threaten Americans’ privacy. But the frenetic, deadline-pressure environment on Capitol Hill betrays the slow, years-long progress that EFF has made elsewhere: the courts.

2017 was a year for slow, procedural breakthroughs.

Here is an update on the lawsuits that EFF and other organizations have against broad NSA surveillance powers.

Jewel v. NSA

EFF began 2017 with significant leverage in our signature lawsuit against NSA surveillance, Jewel v. NSA. The year prior, U.S. District Court Judge Jeffrey White in Oakland, California, ordered the U.S. government to comply with EFF’s “discovery” requests—demands for evidence made as a lawsuit advances towards trial. In many lawsuits, this process can take months. In Jewel v. NSA, simply allowing the process to begin took eight years.

This year, EFF waited expectantly for the U.S. government to provide materials that could prove our plaintiff was subject to NSA surveillance through the agency’s practice of tapping into the Internet’s backbone to collect traffic. But expectations were tempered. The U.S. government’s lawyers missed the discovery deadline, asked for an extension, and were given a new, tentative deadline by the judge: August 9, 2017.

The U.S. government’s lawyers missed that deadline, and asked for an extension, approved by the judge: October 9, 2017.

The U.S. government’s lawyers missed that deadline, and asked for another extension, this time indefinitely.                                                          

Producing the materials, the government attorneys claimed, was simply too difficult to do on a timely basis.

“[T]he volume of documents and electronic data that the government defendants must review for potentially responsive information is massive,” the attorneys wrote.

EFF strongly opposed the government’s request for an indefinite extension, and suggested a new deadline in January to comply with the court’s previous orders. The judge agreed and put an end to the delay. The deadline is now January 22, 2018.

The basic premise of our questions is simple: we want information that explains whether the plaintiffs’ data was collected. 

EFF hopes the government can follow the judge’s orders this time.

Mohamed Osman Mohamud v. United States

EFF filed an amicus brief this year asking the Supreme Court to overturn a lower court’s ruling that allowed government agents to bypass the Fourth Amendment when searching through the electronic communications of U.S. persons.

The amicus was filed after a decision in Mohamud v. United States, a lawsuit that concerns the electronic communications of American citizen Mohamed Mohamud. In 2010, Mohamud was arrested for allegedly plotting to use a car bomb during a Christmas tree lighting ceremony in his home state of Oregon. It was only after Mohamud’s conviction in U.S. v. Mohamud that he learned the government relied on evidence collected under Section 702 of the FISA Amendments Act for his prosecution.

Section 702 authorizes surveillance on non-U.S. persons not living in the United States. Mohamud fits neither of those categories. After learning that the evidence gathered against him was collected under Section 702, Mohamud challenged the use of this evidence, claiming that Section 702 was unconstitutional.

The U.S. Court of Appeals for the Ninth Circuit, which heard Mohamud’s counter arguments, disagreed. In a disappointing opinion that scuttles constitutional rights, the court ruled that Americans whose communications are incidentally collected under Section 702 have no Fourth Amendment rights when those communications are searched and read by government agents.

Together with Center for Democracy & Technology and New America’s Open Technology Institute, EFF supported Mohamud’s request that the U.S. Supreme Court reconsider the appellate court’s opinion.

“We urge the Supreme Court to review this case and Section 702, which subjects Americans to warrantless surveillance on an unknown scale,” said EFF Staff Attorney Andrew Crocker. “We have long advocated for reining in NSA mass surveillance, and the ‘incidental’ collection of Americans’ private communications under Section 702 should be held unconstitutional once and for all.”

United States v. Agron Hasbajrami

EFF also filed an amicus brief in the case of U.S. v. Agron Hasbajrami, a lawsuit with striking similarities to U.S. v. Mohamud.

In 2011, Agron Hasbajrami was arrested at JFK Airport before a flight to Pakistan for allegedly providing material support to terrorists. In 2013, Hasbajrami pleaded guilty to the charges.

Hasbajrami’s court case was set for July 2015. Before going to trial, Hasbajrami pleaded guilty a second time.

But then something familiar happened. Much like Mohamud, Hasbajrami learned that the evidence used to charge him was collected under Section 702. And, just like Mohamud, Hasbajrami is a U.S. person living inside the United States. He is a resident of Brooklyn, New York.

Hasbajrami was allowed to request the withdrawal of his plea, and his lawyers moved to have the Section 702 evidence against him suppressed. The judge denied the request, and the case moved on appeal to the Second Circuit Court of Appeals.

EFF and the ACLU together urged the Second Circuit Court of Appeals to make the right decision. The appellate court has an opportunity to protect the constitutional rights of all Americans, defending their privacy and securing them against warrantless searches. We urged the court not to make the same misguided decision reached in Mohamud v. U.S.

Wikimedia Foundation v. NSA

The Wikimedia Foundation scored an enormous victory this year when an appeals court allowed the nonprofit’s challenge to NSA surveillance to move forward, reversing an earlier decision that threw the lawsuit out.

Represented by the ACLU, Wikimedia sued the NSA in 2015 for the use of its “upstream” program, the same program that EFF is suing the NSA over in Jewel v. NSA. Wikimedia argued that the program infringed both the First Amendment and Fourth Amendment.

Originally filed in the U.S. District Court for the District of Maryland, Wikimedia’s lawsuit was thrown out because the court ruled that Wikimedia could not prove it had suffered harm due to NSA surveillance. This ability to prove that a plaintiff was actually wronged by what they allege is called “standing,” and the court ruled Wikimedia—and multiple other plaintiffs—lacked it.

But upon appellate review, the Fourth Circuit Court of Appeals ruled in May 2017 that Wikimedia had standing. However, the appellate court denied standing to the other plaintiffs in the lawsuit, including Human Rights Watch, The Nation Magazine, The Rutherford Institute, Amnesty International USA, and more.

This victory on a seemingly small issue—standing—is an enormous step forward in the continuing fight against NSA surveillance.

What Next? 

The judicial system can be slow and, at times, frustrating. And while victories in things like discovery and standing may seem only procedural, they are the first footholds into future successes.

EFF will continue its challenges against NSA surveillance in the courts, and we are proud to stand by our partners who do the same.

This article is part of our Year In Review series. Read other articles about the fight for digital rights in 2017.

Related Cases: Wikimedia v. NSA, Jewel v. NSA

The Supreme Court Finally Takes on Law Enforcement Access to Cell Phone Location Data: 2017 in Review

Thu, 12/28/2017 - 2:53pm

Protecting the highly personal location data stored on or generated by digital devices is one of the 21st century’s most important privacy issues. In 2017, the Supreme Court finally took on the question of how law enforcement can get ahold of this sensitive information.

Whenever you use a cell phone, whether to make calls, send or receive texts, or browse the Internet, your phone automatically generates “cell site location information” (CSLI) through its interactions with cell towers. This means that cell providers like AT&T, Verizon, and T-Mobile have records of everywhere your phone has been, going back months and even years. And since almost everyone has a cell phone, cell providers have these records for nearly everyone.

The government has long argued that it doesn’t need a warrant to obtain CSLI from cell providers because of two 1970s Supreme Court cases, Smith v. Maryland and United States v. Miller. Smith and Miller are the basis for the Third Party Doctrine, which holds that information you voluntarily share with a “third party”—such as deposit and withdrawal information shared with banks (Miller) or numbers dialed on a phone shared with the phone company (Smith)—isn’t protected by the Fourth Amendment because you can’t expect that third party to keep the information secret.

For years, courts around the country have been deeply divided on whether the Third Party Doctrine should apply to CSLI or whether the invasiveness of long term monitoring it enables should require a more privacy-protective rule. EFF has been involved in almost all of the significant past cases on this issue.

In June, the Supreme Court agreed to consider that question in Carpenter v. United States. In Carpenter, the government obtained 127 days of the defendant’s cell phone records from MetroPCS—without a warrant—to try to place him at the locations of several armed robberies around Detroit. As in other cases, the government argues that Mr. Carpenter had no reasonable expectation of privacy in these records, which it claims are simultaneously incriminating yet not precise enough to reveal his exact location and movements over those 127 days.

EFF filed briefs both encouraging the court to take the case and urging it to reject the Third Party Doctrine. We noted that cell phone usage has exploded in the last 30 years, and with it, the technologies to locate users have gotten ever more precise.

We attended the Supreme Court oral argument in Carpenter in late November. While it is always risky to predict the outcome of a case based on the argument, a number of the justices appear concerned about the scope and invasiveness of tracking individuals using CSLI. Justice Alito agreed that this new technology raises serious privacy concerns; Chief Justice Roberts recognized that never before has the government had the ability to track every individual; and Justice Sotomayor worried that your cell phone could be tracked into the most intimate places, like your bedroom or your doctor’s office.

The Supreme Court’s opinion in Carpenter will have important ramifications for the future, especially as our phones generate more—and more precise—location information every year, which is shared with third parties. But its reach could extend far beyond cell phones. Other increasingly popular technologies will force courts to consider these issues as well. For example, “Internet of Things” devices like smart thermostats that track when we’re home and when we’re not, watches that record our heart rates and rhythms, and clothing that tracks our emotions and communicates directly with retail stores may constantly generate and share data about us with little to no volition on our part.

The Supreme Court’s opinion in Carpenter will come out next year. We hope it meets this trend of sophisticated tracking with strong Fourth Amendment protection.

This article is part of our Year In Review series. Read other articles about the fight for digital rights in 2017.


Related Cases: United States v. Graham

Nation-State Hacking: 2017 in Review

Wed, 12/27/2017 - 5:42pm

If 2016 was the year government hacking went mainstream, 2017 is the year government hacking played the Super Bowl halftime show. It's not just Fancy Bear and Cozy Bear making headlines anymore. This week, the Trump administration publicly attributed the WannaCry ransomware attack to the Lazarus Group, which allegedly works on behalf of the North Korean government. As a presidential candidate, Donald Trump famously dismissed allegations that the Russian government broke into email accounts belonging to John Podesta and the Democratic National Committee, saying it could easily have been the work of a "400 lb hacker" or China. The public calling-out of North Korean hacking appears to signal a very different attitude towards attribution.

Lazarus Group may be hot right now, but Russian hacking has continued to make headlines. Shortly after the release of WannaCry came another wave of ransomware infections, Petya/NotPetya (or, this author's favorite name for the ransomware, "NyetYa"). Petya was hidden inside a legitimate update to accounting software made by MeDoc, a Ukrainian company. For this reason and others, Petya was widely attributed to Russian actors and is thought to have primarily targeted Ukrainian companies, where MeDoc is commonly used. The use of ransomware as a wiper, a tool whose purpose is to render the computer unusable rather than to extort money from its owner, appears to be one of this year's big new innovations in the nation-state actors' playbook.

WannaCry and Petya both owe their effectiveness to a Microsoft Windows security vulnerability that had been found by the NSA and code-named EternalBlue, which was stolen and released by a group calling themselves the Shadow Brokers. US agencies losing control of their hacking tools has been a recurring theme in 2017. First, companies, hospitals, and government agencies found themselves targeted by repurposed NSA exploits that we all rushed to patch; then WikiLeaks published Vault 7, a collection of CIA hacking tools that had been leaked to it, following that up with the publication of source code for those tools in Vault 8.

This year also saw developments from perennial bad actor Ethiopia. In December, Citizen Lab published a report documenting the Ethiopian government's ongoing efforts to spy on journalists and dissidents, this time with the help of software provided by Cyberbit, an Israeli company. The report also tracked Cyberbit salespeople as they demonstrated their surveillance product to other governments, including France, Vietnam, Kazakhstan, Rwanda, Serbia, and Nigeria. Other perennial bad actors made a splash this year as well, including Vietnam, whose government was linked to Ocean Lotus, or APT 32, in a report from FireEye. The earliest known samples from this actor were found by EFF in 2014, when they were used to target our activists and researchers.

This article is part of our Year In Review series. Read other articles about the fight for digital rights in 2017.


Keeping Copyright Site-Blocking At Bay: 2017 In Review

Wed, 12/27/2017 - 1:41pm

In 2017, major entertainment companies continued their quest for power to edit the Internet by blocking entire websites for copyright enforcement—and we’ve continued to push back.

Website blocking is a particularly worrisome form of enforcement because it’s a blunt instrument, always likely to censor more speech than necessary. Co-opting the Internet’s domain name system (DNS) as a tool for website blocking also threatens the stability of the Internet by inviting ever more special interests and governments to use the system for censorship.
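To see why domain-level blocking is such a blunt instrument, note that every page on a site hangs off a single domain name: suspend that name, and everything under it goes dark, infringing or not. Here is a minimal illustrative sketch; the domain and paths are hypothetical placeholders, not drawn from any real case.

    import socket

    # Every one of these URLs, lawful or not, depends on the same domain name.
    urls = [
        "https://example.org/allegedly-infringing-page",
        "https://example.org/news",
        "https://example.org/forums/user123",
    ]

    try:
        # A single DNS resolution gates access to the entire site.
        addr = socket.gethostbyname("example.org")
        print(f"example.org resolves to {addr}; all {len(urls)} pages reachable")
    except socket.gaierror:
        # If the name is suspended at the registry, resolution fails for everything.
        print(f"domain suspended: all {len(urls)} pages go dark, not just one")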

This year, we’ve kept pressure on ICANN, the nonprofit body that makes domain name policy, to keep copyright enforcement out of their governing documents. And we’ve called out domain name registry companies who bypassed ICANN policy to create (or propose) their own private copyright enforcement machines. Public Interest Registry (PIR), the organization that manages the .org and .ngo top-level domains, announced in February that it intended to create a system of private arbitrators who would hear complaints of copyright infringement on websites. The arbitrators would wield the power to take away a website’s domain name, and possibly transfer it to the party who complained of infringement. The Domain Name Association (DNA), an industry trade association, also endorsed the plan.

EFF pointed out that this plan was developed in secret, without input from Internet users, and that it would bypass many of the legal protections for website owners and users that U.S. courts have developed over the years. Within weeks, PIR and DNA shelved this plan, apparently for good.

Unfortunately, some domain registries continue to suspend domain names based on accusations from major motion picture distributors (whom they call “trusted notifiers”) in a process that also bypasses the courts. Along with giving special privileges to luxury brands and other major trademark holders, and to U.S. pharmaceutical interests, these policies erode public trust in the domain name system, a key piece of Internet infrastructure.

There are worrisome developments in the courts as well. Major movie studios, record labels, and print publishers have continued to ask U.S. courts for broad injunctions that could force many kinds of intermediaries—all of free speech’s weak links—to help block websites.  They do this by filing lawsuits against a website, typically located outside the U.S., accusing it of copyright infringement. When the website’s owners don’t appear in court, the copyright holder seeks a default injunction written broadly to cover intermediaries like DNS registrars and registries, search engines, and content delivery networks, who can then be compelled to block the website. Several courts have granted these broad orders, including one that targets Sci-Hub, a site that gives access to research papers.

That’s concerning because, like the aborted efforts by domain registries, using default injunctions to block websites bypasses the normal rules created by the courts and Congress that define the role of Internet intermediaries. We hope that Internet companies continue to defend their users against censorship creep by fighting back against these orders. In the coming year, we’ll weigh in to help the courts understand why the current rules are worth sticking to.

This article is part of our Year In Review series. Read other articles about the fight for digital rights in 2017.


Seven Awful DRM Moments from the Year (and Two Bright Spots!): 2017 in Review

Tue, 12/26/2017 - 9:14pm

The Apollo 1201 project is dedicated to ending all the DRM in the world, in all its forms, in our lifetime. The DRM parade of horribles has been going strong since the Clinton administration stuck America with Section 1201 of the Digital Millennium Copyright Act ("DMCA") in 1998. That law gave DRM special, hazardous legal protection: you're not allowed to remove DRM, even for a lawful purpose, without risking legal penalties that can include jail time and six-figure fines for a first offense.

That's a powerful legal weapon to dangle in front of the corporations of the world, who've figured out that if they add a thin scrim of DRM to their products, they can make it a literal felony to use those products in ways they don't approve of -- including creative uses, repair, tinkering and security research. (There's an exemption process, but it's burdensome and inadequate to protect many otherwise legal activities.)

EFF is committed to halting that parade of horribles, but it hasn't been easy. Here are seven of the DRM low-points from 2017, and two bright spots that give us hope for the year to come.

  1. The World Wide Web Consortium published its standard for browser-based DRM. We fought this from its inception, and even conceived of a compromise that would allow the corporate members of the W3C to get DRM in browser, but limit their ability to leverage the DRM to inhibit security research; stop a11ies from making their products accessible for people with disabilities; thwart archiving by libraries; and control who got to compete with them. The corporate members refused and the W3C caved, publishing the Encrypted Media Extensions standard without the consensus that the organization has prided itself on for 25 years. Three billion web users now have browsers with new attack-surfaces and new risks to their financial, familial, educational, personal and professional life.
  2. Sony revives the DRM-encumbered robot pet. It's been 15 years since Sony used Section 1201 of the DMCA to shut down the community that had sprung up to extend the functionality of its Aibo robot dogs, threatening people with lawsuits and jailtime for modifying their dogs' operating systems. Now, Sony has brought back the Aibo and with it, revived its view that you can never truly own a product you buy from the company. The new, $1700 Aibo has a mandatory $26/month subscription fee, tethering it permanently to a Sony server. I will bet you anything that anyone releasing a mod that allows the Aibo to run as a standalone will get both a DMCA 1201 (circumventing DRM) and CFAA (violating terms of service) threat. Just your latest reminder that in the 21st century, we are increasingly relegated to the status of digital tenants, renting our gadgets on terms unilaterally set by their manufacturers.
  3. The most powerful DRM in the video games industry is cracked within hours of release. Denuvo is billed as the video game industry's "best in class" DRM, charging games publishers a premium to prevent people from playing their games without paying for them. In years gone by, Denuvo DRM would remain intact for as long as a month before cracks were widely disseminated. But the latest crop of Denuvo-restricted games were all publicly cracked within 24 hours. It's almost as though hiding secrets in code you give to your adversary was a fool's errand.
  4. Someone made a $400 kettle that only took DRM tea-leaves, and irony died forever. Did you buy a useless $400 "smart" juicer and now feel the need to accessorize it with more extrusions from the DRM dystopia timeline? Then The Leaf from Teforia was just the thing: a tea-maker that used DRM-locked tea-pods to brew tea in your kitchen, so you didn't have to endure the hassle of having the freedom to decide whose tea you brew in your tea-brewing apparatus, and so you could contribute to the impending environmental apocalypse by generating e-waste every time you made a cup of tea. If you were unfortunate enough to shell out $400 for this thing, you got played, because the company went bankrupt in October.
  5. All the virtual rabbits in Second Life faced starvation because of DRM virtual rabbit-food. Every Ozimal digirabbit in the venerable virtual world Second Life faced terminal starvation (well, permanent hibernation) this year after a legal threat shut down their food-server; the virtual pets are designed so that they can only eat DRM-locked food, so the official food server's shutdown doomed them all. Ozimals LLC, the company that created the digipets, shut down last year, and Malkavyn Eldritch, a volunteer, kept their food-server online. Edward Distelhurst of Akimeta Ltd says that Ozimals shut down owing him a lot of money. The case has dragged out at great length, with court orders and reported bad faith from the owners of Ozimals. Distelhurst and Akimeta sent a cease-and-desist to Eldritch, demanding that he "cease all use of Ozimals intellectual property." That meant shutting down the server, which immediately killed every virtual puffin in Second Life -- the virtual rabbits will take longer to die, because they can retain some virtual, DRM-locked food in their bellies before they starve to death.
  6. North Korea unveiled a DRM-encrusted surveillance tablet. The Ullim Tablet is the latest mobile device from North Korea to be subjected to independent analysis, and it takes the surveilling, creepy nature of the country's notoriously surveillant Android devices to new heights of badness. The Ullim analysis was conducted by researchers from Heidelberg's Enno Rey Netzwerke and presented at last year's Chaos Communication Congress in Hamburg. The Ullim tablet was made by installing a custom Android 4.4.2 build on a Chinese Z100 tablet that has had its network interfaces removed -- you get it online by attaching a tightly controlled network dongle that does wifi, Ethernet, and dial-up. The Ullim Android customization removes many of the stock Google apps (such as Gmail) and adds several apps designed to spy on the tablet's users. These include Red Flag, a background app that takes a screenshot every time an app is opened, logs browser history, and reports on any attempts to tamper with the OS; and Trace Viewer, an app for examining the forensic data created by Red Flag. Any logged-in user can launch and use Trace Viewer, providing a reminder that everything you do with the tablet is being watched. The Ullim also watermarks all the files generated by the OS, linking them to the device's unique serial number; locks out any app not on a whitelist; and refuses to play back any media files that are not on a nationally maintained whitelist of approved programs.
  7. Oh, John Deere. Don't ever change. Meaning please, please change. John Deere claims that fixing your own tractor violates its copyright, because of DRM. So American farmers are installing bootleg Ukrainian firmware in their tractors, just to get the harvest in. Canadian farmers are braving Big Ag's wrath, too, and American farmers are coming up with Made in America ways to seize the means of production and make hay while the sun shines.

And now, a couple of most welcome bright spots:

  1. Portugal passes the world's first reasonable DRM law: Last June, Portugal enacted Law No. 36/2017, which bans putting DRM on public domain media or government works, and allows the public to break DRM that interferes with their rights in copyright, including private copying, accessibility adaptation, archiving, reporting, commentary, and more. Regrettably, the law doesn't go so far as to authorize the creation of tools to break DRM that has been improperly applied, so the public is forced to hunt around online for semi-legal tools of unknown quality from anonymous authors (cough Ukrainian tractor firmware cough).
  2. Behold! The paleohistory of DRM, revealed! Redditor Vadermeer was in a local Goodwill Outlet and happened on a trove of files from Apple engineer Jack MacDonald, dating to 1979-80, when he was manager of system software for the Apple II and ///. MacDonald's files include more than 100 pages of printed and handwritten notes for a scheme to create DRM for the Apple /// (then called the Sara) and the Lisa, a failed precursor to the Mac. These constitute a fascinating, candid and intimate history of the creation of a DRM scheme, a kind of microcosm for all the problems we see with DRM today, in which a platform tries to offer its software vendors a protection scheme that it knows its customers will hate, and will also be able to break. One of the most amusing back-and-forths is the tick-tock between Randy Wigginton and Steve "Woz" Wozniak, who propose and then demolish rival DRM schemes, while also tearing apart successive versions of Visicalc DRM, which was then the state of the art. New managers come in and write memos saying, basically, "Are you nuts? You've proposed a grotesquely expensive hardware dongle that's going to eat one of the four expansion slots on this computer, that will stop working if the user upgrades their OS, that will require them to bring corrupt floppies back to the store to get a backup to work, and that we think people will be able to break in an hour -- let's go back to the drawing board, shall we?"

This article is part of our Year In Review series. Read other articles about the fight for digital rights in 2017.


EFF Goes to Battle at the California Statehouse: 2017 in Review

Tue, 12/26/2017 - 6:59pm

In the wake of the 2016 election, California lawmakers quickly adopted the posture of “The Resistance.” For the digital rights community, this presented an opportunity to pursue legislation that had not previously enjoyed much political momentum. As a result, EFF staff found themselves trekking back and forth between San Francisco and Sacramento to testify on everything from surveillance transparency to broadband privacy. In the end, we checked off a number of victories, but also suffered some defeats, and created more opportunities for next year.

Here’s a selection of the California campaigns EFF launched in 2017. 

Broadband Privacy

[Embedded video: https://www.youtube-nocookie.com/embed/cgW9PACVBq4 (served from youtube-nocookie.com)]

Ignoring the anger and opposition of the American public, Congress repealed the Federal Communications Commission (FCC) rules that blocked Internet providers (think Cox, Comcast, Time-Warner) from collecting and selling customers’ data without their consent.

California Assemblymember Ed Chau seized the opportunity to restore those rights and partnered with EFF to introduce A.B. 375. The telecom industry spent hundreds of thousands of dollars to fight the bill and even went so far as to circulate false information to legislators at the 11th hour. The bill failed to receive a Senate floor vote on the last night of the session. However, the bill remains alive and we’re ready to finish the job in 2018.

Protecting Immigrants and Religious Minorities

Shortly after the election, lawmakers introduced S.B. 54, an omnibus bill meant to simultaneously protect immigrants from mass deportation, defend Muslims from being placed on religious registries, and curtail how much unnecessary data state agencies collect on all Californians. The political process resulted in the bill being split into three: S.B. 54 continued to create a firewall between California data and immigration enforcement, S.B. 31 forbade California data from being used for religious registries, and S.B. 244 enhanced the privacy requirements for state agencies.

After a hard-fought battle, S.B. 31 was signed into law, while S.B. 244 died in committee. Ultimately, EFF withdrew its support for S.B. 54 because the data protections were weakened (although the bill did create new, important measures for immigrant communities).

A Public Process for Restricting Police Surveillance

[Embedded video: https://www.youtube-nocookie.com/embed/6Bjje6rRfoQ (served from youtube-nocookie.com)]

Over the last few years, local communities in the Bay Area such as Santa Clara County, Oakland, and Berkeley have begun pursuing measures that would require police agencies to seek approval from elected officials before acquiring surveillance technology. S.B. 21 would have instituted that requirement for every local law enforcement agency across the state. Police would also have been required to issue periodic reports on how often the technology was used, and how often it was misused. The bill passed the Senate and two Assembly committees, only to die without a vote in the Assembly’s Appropriations Committee.

Although the bill failed, the momentum remains. EFF is supporting our local partners in the Electronic Frontier Alliance as they push for similar—if not stronger—ordinances on the local level.

Internet Access for Youth in State Care

EFF lent its technological expertise to a campaign by the Youth Law Center and Assemblymember Mike Gipson to ensure that youth in detention and foster care have access to computers and the Internet. A.B. 811 sailed through both legislative houses and landed on the governor’s desk. EFF testified in support of the bill when it came before the Senate’s Health and Human Services Committee.

While Gov. Jerry Brown vetoed A.B. 811, all is not lost. Brown ordered the state’s juvenile detention authorities to draw up a plan to offer Internet access to youth. Furthermore, he indicated he might support a second go at a modified bill in 2018. EFF intends to join YLC and Gipson in this renewed effort to ensure that at-risk youth have access to the digital tools they need to succeed. 

License Plate Privacy

[Embedded video: https://www.youtube-nocookie.com/embed/B9zBqgfIIZI (served from youtube-nocookie.com)]

To combat the scourge of private license plate reader companies that are harvesting and selling our travel data, EFF drafted S.B. 712 to allow drivers to mask their vehicles’ license plates when lawfully parked. Currently, drivers are allowed to cover their entire cars to protect their paint jobs from the elements, so they should also be allowed to cover just a portion of their vehicle to protect their privacy.

Even in an age of bitter enmity between the political parties, S.B. 712 is proof that common ground can still be found. Republican Sen. Joel Anderson introduced the bill, and although it died in committee, it did receive cross-aisle support from some Democrats, such as Sen. Scott Wiener. We hope to pursue this legislation again in 2018.

Gang Database Reform

In 2016, EFF joined a coalition of civil rights and justice reform groups to pass A.B. 2298, a bill that started the process of overhauling California’s discriminatory gang databases. Midway through that effort, the California State Auditor released its investigation, showing that the system was riddled with problems that the original legislation did not anticipate. So this year, the coalition reassembled to support Assemblymember Shirley Weber’s follow-up bill, A.B. 90. 

Gov. Brown signed A.B. 90 in October. The new law mandates audits, creates a new oversight body, and requires policies to be supported by empirical research. 

Publication of Police Policies 

S.B. 345 would have required every law enforcement agency in the state, by default, to publish all their policies and training materials online. This was a landmark bill due to its support by both law enforcement associations and civil liberties organizations, who rarely share common ground on these issues.

Unfortunately, Gov. Brown vetoed the bill. But he did leave the door open for narrower reforms in 2018.

Strengthening the California Public Records Act 

The California Public Records Act is notoriously toothless. If an agency unjustifiably rejects your request, delays the release of records, or charges unreasonable fees for copies, your only option is to take the agency to court, and even if you win, the agency is only liable for your legal bills. A.B. 1479 would have allowed a judge to levy fines against agencies that behave badly.

The legislature sadly balked at the last minute, reducing the bill to a weak pilot program where agencies were required to appoint a central records custodian. EFF pulled its support from the bill, and Gov. Brown vetoed it.

Fake News Fumble

Shortly after the election, policymakers began to worry about how false or exaggerated articles were being circulated over social media. In California, a well-intentioned bill, A.B. 1104, was written so broadly that it would have criminalized any “false or deceptive” information around an election, regardless of whether the statement was hyperbole, poetic license, or common error. EFF launched a Twitter campaign and the bill’s sponsor removed the unconstitutional section of the legislation.

This article is part of our Year In Review series. Read other articles about the fight for digital rights in 2017.

Like what you're reading? Support digital freedom defense today!

A Grim Year for Imprisoned Technologists: 2017 In Review

Mon, 12/25/2017 - 7:38pm

The world is taking an increasingly dim view of the misuses of technology and those who made their names (and fortunes) from them. In 2017, Silicon Valley companies were caught up in an ongoing trainwreck of scandals: biased algorithms, propaganda botnets, and extremist online organizing dominated the media's headlines.

But in less-reported-on corners of the world, concerns about technology are being warped to hurt innocent coders, writers and human rights defenders. Since its founding, EFF has highlighted and defended cases of injustice and fearmongering perpetrated against innocent technologists. We advocate for unjustly imprisoned technologists and bloggers with our Offline project. In 2017, we continued to see fear being whipped up against those who oppose oppression with modern tools—as well as against those who have done nothing more than teach and share technology so that we can all use and understand it better.

Take Dmitry Bogatov, software developer and math lecturer at Moscow's Finance and Law University. Bogatov ran a volunteer Tor relay, allowing people around the world to protect their identities as they used the Internet. It was one part of his numerous acts of high-tech public service, which include co-maintaining Xmonad and other Haskell software for the Debian project.

For his generosity, Bogatov has now spent over a hundred days in pretrial detention, wrongfully accused of posting extremist materials that were allegedly sent through his Tor server. Law enforcement officials around the world understand that data that appears to originate from a particular Tor machine is, in fact, traffic from its anonymized users. But that didn't stop Bogatov's prosecutors in Russia from accusing him of sending the data himself, under a pseudonym, to foment riots, or from adding new charges of "inciting terrorism" when a judge suggested the earlier charge was too weak to hold Bogatov in pretrial detention.
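
The relay-versus-sender distinction is one anyone can check for themselves: the Tor Project publishes a machine-readable list of exit relay addresses precisely so that observers can tell "this address is a relay" apart from "this person is the sender." Here is a minimal sketch in Python; the exact list URL is our assumption and may change over time:

    # Minimal sketch: check whether an IP address is a known Tor exit relay.
    # The bulk exit list URL is an assumption and may change over time; it
    # serves one exit relay IP address per line.
    import urllib.request

    EXIT_LIST_URL = "https://check.torproject.org/torbulkexitlist"

    def tor_exit_addresses():
        """Fetch the current set of Tor exit relay IP addresses."""
        with urllib.request.urlopen(EXIT_LIST_URL) as resp:
            lines = resp.read().decode().splitlines()
        return {line.strip() for line in lines if line.strip()}

    def is_tor_exit(ip, exits):
        """True if traffic apparently 'from' ip is really from anonymous
        Tor users routing through a volunteer relay at that address."""
        return ip in exits

    exits = tor_exit_addresses()
    print(is_tor_exit("203.0.113.7", exits))  # documentation-only example IP

Traffic from an address on that list says nothing about what the relay's operator did, which is exactly the point Bogatov's prosecutors ignored.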

Dmitry is still being denied his freedom, accused of a crime he clearly did not commit. The same is true for Ahmed Mansoor, a telecommunications engineer in the United Arab Emirates. Mansoor has been a tireless voice for victims of human rights abuses in the UAE. In 2011, amidst the Arab uprisings, he was one of five Emirati citizens sentenced to prison for their social media postings. That case provoked international condemnation, and the group was soon pardoned. Mansoor was subsequently targeted with sophisticated government spyware on his iPhone; he recognized the malware link and passed it on to experts, which led to the discovery of three previously unknown vulnerabilities in Apple's iOS.

In April, Mansoor was seized by the UAE authorities again. On the day of his arrest, the UAE’s official news agency said that he had been arrested on the orders of the Public Prosecution for Cybercrimes and accused of using social media to promote sectarianism and hate, among other charges. Mansoor’s family did not hear from him for two weeks, and he has been denied access to a lawyer.

Just a year ago, Apple was able to roll out a security fix to its users because of Mansoor's swift, transparent, and selfless actions. Millions of people are safer because of Ahmed, even as his family fears for his physical and mental safety.

Mansoor's detention is new, but others continue to be jailed for their use of technology, year after year. Alaa abd el-Fattah ran Linux installfests across the Middle East and was a key online voice in the Egyptian uprising. Since then he has been jailed, in turn, by the democratically elected Islamist President Mohammed Morsi and then, when Morsi was overthrown in a coup, by incoming President Abdel Fattah El-Sisi. Alaa's appeal against a five-year prison sentence for protesting—widely seen as a means to silence him on social media—was refused in November of this year. Amnesty and the UN Working Group on Arbitrary Detention have both condemned Alaa's continuing imprisonment.

Another long-term case is that of Saeed Malekpour, who has been in jail in Iran since 2008. Malekpour returned from Canada to visit his sick Iranian father in October of that year, at a time when the Iranian Revolutionary Guard was starting to target technologists and Internet experts. As an open source coder, Malekpour had written a free front-end image management utility for websites. The Guard found this software on a Farsi pornography site and used it as a pretext to seize Malekpour from the streets of Tehran, charge him with running the website, and sentence him to death.

Malekpour's death sentence has been annulled twice following international pressure, but a change of government in his home country of Canada risked reducing the level of support for Malekpour. A campaign to encourage the new Trudeau administration to continue advocating for Malekpour, even as Canada seeks to normalize relations with Iran, seems to be working. One of Malekpour’s advocates, former Liberal MP Irwin Cotler, has said that the Canadian government is now working on the case.

The continuing monitoring of Malekpour's life sentence is a small consolation, but better than the alternative. The same is true of the current tentative freedom of Peter Steudtner and Ali Gharavi.

Ali and Peter travel the world, teaching and advising Internet users on how to improve their privacy and digital security online (Ali was an advisor for EFF's Surveillance Self-Defense project). The two were arrested in July, in a raid by Turkish police on a digital security workshop in Istanbul, along with Amnesty Turkey's director, Idil Eser, and eight other human rights defenders.

The two technology consultants have been accused of aiding terrorists, despite the long history of both as peaceful advocates for secure online practices. After months of detention, concentrated diplomatic and public pressure led to both being released to join their families in Germany and Sweden. We're delighted that they are free, but their unjust prosecution—and that of their Turkish colleagues—continues in the Turkish courts.

Peter and Ali have dedicated their careers to sharing their knowledge of digital security with those who need it most. Dmitry Bogatov voluntarily ran a server that anyone could use to protect their identities. Ahmed Mansoor went public with his high-tech harassment by the authorities, and improved the security of millions by doing so. Alaa encouraged a generation of Egyptians to use free software and social media to express themselves. Saeed Malekpour has spent nearly a decade in prison for giving his software away for free. What they have in common is not just a love of technology, but a wish that its power be used for good, by us all.

Their sacrifices would be recognized by Bassel Khartabil, the Syrian free culture advocate. Before his arrest and torture in 2012, Bassel was the driving force behind countless projects to turn technology to the public good in his country. He founded a hackerspace in Damascus, translated Creative Commons into a Middle Eastern context, and built out Wikipedia and Mozilla for his fellow Syrians. Bassel's generosity brought him notability and respect. His prominence and visibility as a voice outside the divided political power-bases of Syria made him an early target when the Syrian civil war became violent.

We learned this year that Bassel was killed by the Syrian government in 2015, shortly after he was removed from a civilian prison and sent into the invisibility of Syria's hidden security complexes.

The cases we cover in EFF's Offline project are all advocates for openness, transparency and the right to free expression, who have been unjustly imprisoned for their work. But transparency isn't just a noble goal for them: public visibility is what gives them hope and keeps them alive. We hope you'll keep them all in your hearts as you enter 2018. Even as we mourn Bassel, we look forward to a better new year that will see our imprisoned colleagues free and safe again.

This article is part of our Year In Review series. Read other articles about the fight for digital rights in 2017.

Like what you're reading? Support digital freedom defense today! 

Protecting Immigrants from High Tech Surveillance: 2017 in Review

Mon, 12/25/2017 - 6:13pm

In 2017, the federal government surged its high-tech snooping on immigrants and foreign visitors, including expanded use of social media surveillance, biometric screening, and data mining. In response, EFF ramped up its advocacy for the digital rights of immigrants.

Social Media Surveillance 

EFF resisted government programs to collect, store, and analyze the publicly available social media information of immigrants and visitors. These programs threaten the digital expression and privacy of immigrants, and the many U.S. citizens who communicate with them.

Collection. The Department of Homeland Security (DHS) empowered its border officers to screen the social media information of certain visa applicants from China. Likewise, the State Department empowered its consular officials to gather this information from visa applicants worldwide. The DHS Secretary even floated the idea of requiring visitors to share their social media passwords.

EFF opposed all of this surveillance.

EFF also advocated against a federal bill (S. 1757) requiring social media screening of visa applicants from so-called “high risk countries,” which would invite religious profiling against visitors from Muslim nations. These new government efforts build on earlier social media surveillance of immigrants and visitors, which EFF also opposed.

Storage. DHS disclosed that it stores the social media information it collects from immigrants and visitors in “Alien Files” (A-Files), a government record system that tracks people as they move through the immigration process. This announcement shows the federal government is holding onto social media information for an indefinite period of time, broadly sharing it, and using it for myriad purposes. EFF advocated against this excessive storage, sharing, and use of social media information.

Analysis. DHS is developing what it calls an “extreme vetting” system to automatically analyze immigrants’ social media information. EFF opposes this new program, which suffers many of the same flaws as algorithm-based predictive policing.

Biometric Screening

DHS has long gathered biometric information from foreign citizens as they enter the United States. In 2017, DHS expanded its efforts to collect biometric information from foreign citizens as they exit the United States on certain flights. In a classic case of “mission creep,” DHS has also begun to collect biometric information from U.S. citizens on these flights. EFF advocated against two federal bills (S. 1757 and H.R. 3548) that would entrench and expand this biometric border screening. 

One of these bills (S. 1757) also would require DHS to collect DNA and other biometric information from anyone seeking an immigration benefit, and to share its biometric information about immigrants with federal, state, and local law enforcement agencies. EFF opposed these proposals, too.

Data Mining

DHS gathers and analyzes massive amounts of data in order to locate and deport undocumented immigrants. Sometimes, DHS obtains this data from state and local government agencies that collected it for reasons unrelated to immigration enforcement.

EFF advocated for a provision in a California bill (S.B. 54) that would have prohibited state and local law enforcement agencies from making their databases available for purposes of immigration enforcement. Unfortunately, this “database firewall” was later removed from the bill, at which point EFF pivoted to a position of neutrality.

EFF also supported a coalition effort to persuade corporate data brokers to refrain from making their data and services available to the federal government for purposes of mass deportations.

Other Snooping on Immigrants 

Cell-site simulators (CSSs), often called Stingrays, are police devices that masquerade as cell-phone towers and trick our phones into connecting to them. They are a form of mass surveillance that disrupts phone service and disparately burdens communities of color.

U.S. Immigration and Customs Enforcement has spent $10 million to purchase 59 CSSs, and used one to locate and arrest an undocumented immigrant. EFF opposes government use of CSSs to hunt down people whose only offense is to unlawfully enter or remain in the United States. If the government is allowed to use CSSs at all, it should do so only to address serious violent crime.

E-Verify is a massive federal data system that employers may use to verify the eligibility of job applicants to work in the United States. EFF advocated against a federal bill (H.R. 3711) that would require employers to use it. E-Verify is riddled with errors, and thus blocks many people from lawfully working. Moreover, data systems like E-Verify, which contain sensitive social security and passport numbers, are an attractive target for data thieves. 

DHS authorizes officers, with no suspicion at all, to search the smartphones and other electronic devices of everyone who crosses the U.S. border. Border officers do so tens of thousands of times per year. EFF teamed up with the ACLU to file a new lawsuit arguing that officers need a warrant for such searches. One of our eleven plaintiffs is Jeremy Dupin, a lawful permanent resident from Haiti and an award-winning journalist. The U.S. Constitution protects immigrants as well as U.S. citizens. 

Next Steps

EFF stands up for the digital rights of immigrants and foreign visitors for many reasons. First, digital liberty is a human right that all people should enjoy, including immigrants. Second, EFF opposes discriminatory intrusions on digital liberty, and some high tech surveillance of immigrants may be motivated by anti-immigrant or anti-Muslim animus. Third, government surveillance of immigrants and visitors often sweeps in information about the many U.S. citizens who associate and communicate with them. Fourth, government surveillance programs that begin by targeting immigrants and visitors often expand to target U.S. citizens, too.

These problems did not begin in 2017. EFF has long advocated against biometric and social media surveillance of immigrants, as well as E-Verify. But under President Donald Trump, intrusions on the digital liberty of immigrants are growing in intensity, as part of expanded immigration enforcement.

EFF will continue to stand with our immigrant friends and neighbors, and work to protect everyone’s digital liberties.

This article is part of our Year In Review series. Read other articles about the fight for digital rights in 2017.

Like what you're reading? Support digital freedom defense today!

Medical Privacy Under Attack: 2017 in Review

Sun, 12/24/2017 - 8:04pm

If you care about maintaining privacy over medical records and prescriptions, this was not a good year.

Both the California Supreme Court and the U.S. Ninth Circuit Court of Appeals issued disappointing decisions that declined to recognize a significant privacy interest in prescription records. In California, the state’s high court ruled that the Medical Board of California can rifle through records of prescriptions for controlled substances—used to treat anxiety, depression, pain, and insomnia—without notifying patients, obtaining a court order, or showing any suspicion of wrongdoing. The Ninth Circuit reversed on procedural grounds a good ruling out of Oregon, which found that the Drug Enforcement Administration (DEA) couldn’t access sensitive prescription records without a warrant. Both courts punted to another day the question of whether the Fourth Amendment’s warrant requirement protects prescription records.

This precedent is concerning, especially in an era of digital pills that use stomach acid to generate electronic data about exactly when you take your medication. Prescription records reveal our medical and mental health conditions and histories. They are a subset of our medical and mental health files, and they are just as sensitive as any other medical or mental health records, which are afforded a heightened degree of privacy protection. Prescription records should be, too. Just as with any other medical records, the government should need a warrant supported by probable cause before accessing them. 

The courts may be responding to the opioid crisis in declining to address whether law enforcement’s warrantless access of controlled-substance prescription records violates the Constitution, but everyone should be able to expect privacy in their drug prescriptions and law enforcement should be required to get a warrant to access those records. Thanks to technology, getting a warrant is easier than ever. And it’s not too much to ask when we are talking about highly sensitive medical information.

The California Supreme Court Decision

The disappointing California Supreme Court decision, Lewis v. Superior Court, dates back to 2008, when the Medical Board obtained prescription information of hundreds of individuals from California’s CURES database without providing those individuals with any notice and without a court order or any suspicion of wrongdoing. CURES is California’s prescription drug monitoring program. The database contains sensitive information about controlled substances used to treat conditions such as anxiety, panic disorders, chronic and acute pain, depression, attention disorders, and insomnia. The hundreds of individuals whose information was accessed were all patients of Dr. Alwin Carl Lewis, who the board was investigating for recommending an objectionable diet to a patient.

Dr. Lewis objected in court to the board’s actions, arguing that they violated the privacy rights of his patients. Dr. Lewis lost in the lower courts and appealed his case to the California Supreme Court. EFF filed an amicus brief urging the court to require law enforcement agencies to get a warrant supported by probable cause before gaining access to patients’ sensitive CURES records. As we told the court, given the heightened privacy interest in medical records, granting the government unfettered access to prescription drug records violates both the Fourth Amendment and the California Constitution’s privacy protections.

In July, after nine years of litigation, the California Supreme Court ruled against Dr. Lewis. The court held that accessing patients’ CURES prescription records without a court order or any suspicion of wrongdoing did not violate their right to privacy under the California Constitution. The court held that the privacy interest in prescription records is “less robust” than the privacy interest associated with other medical records and declined to subject the Board’s actions to heightened scrutiny. The court held that the board only needed to show a competing interest—and not a compelling interest—in order to justify invading the privacy of patients by accessing their prescription records. Applying a general balancing test, the court held that the board’s interest in protecting the public from unlawful use of drugs and from negligent or incompetent physicians, outweighed the privacy interest in controlled substance prescription records, and that the board was therefore justified in its actions.

The court declined to address whether the Fourth Amendment requires law enforcement agencies to obtain a warrant before accessing patients’ CURES prescription records. It held that Dr. Lewis had waived any Fourth Amendment arguments by not raising them early enough in the case.

There’s at least one good part of this disappointing opinion: Justice Goodwin Liu’s concurring opinion. Justice Liu notes that even if privacy interests in prescription records are “less robust” than those in other medical records, patients still retain a reasonable expectation of privacy in prescription drug records that reveal their medical conditions—which means that prescription drug records are still protected by the Fourth Amendment.

The Ninth Circuit Decision

The disappointing Ninth Circuit decision involves Oregon’s Prescription Drug Monitoring Program (PDMP), which tracks prescriptions for certain drugs dispensed by Oregon pharmacies. When the Oregon legislature created the PDMP in 2009, it also enacted robust privacy protections, including a requirement that law enforcement agents get a warrant before accessing patients’ PDMP data. The DEA claimed that despite this requirement, a federal statute allowed it to access Oregonians' private prescription records without a warrant. The state of Oregon sued the DEA for trying to circumvent Oregon law. Several patients—who were each taking controlled drugs to treat extreme pain conditions, gender identity disorders, or post-traumatic stress disorders— intervened in the case, along with a doctor and the American Civil Liberties Union, and argued that warrantless access of PDMP data violated the Fourth Amendment.

In 2015, a district court judge in Oregon held that patients have a reasonable expectation of privacy in their prescription drug records and that law enforcement agents must obtain a warrant supported by probable cause in order to search prescription information. The court recognized that prescription records are “intensely private” and stated that “[i]t is difficult to conceive of information that is more private or more deserving of Fourth Amendment protection.”

But in June 2017, the Ninth Circuit reversed that decision. The court held that the “Intervenors”—the patients, the doctor, and the ACLU—had not established standing to raise their Fourth Amendment challenges, and it threw the lower court’s decision out. The court did recognize the “particularly private nature” of prescription records, but it still ruled in favor of the DEA, holding that the federal Controlled Substance Act preempted Oregon’s state law requiring a warrant to access PDMP prescription records.

Fighting False Distinctions

The government wants courts to believe that prescription records are less private than other medical records. This tactic—using false distinctions to erode the scope of established privacy protection—is all too familiar. We see it in the government’s attempts to characterize metadata—such as the subject line of an email, the time the email was sent, a phone number called, or the length and time of the call— as less sensitive than the content of our communications. But if the government knows you spoke with an HIV testing service, then to your doctor, and then to your health insurance company, all in the same hour, they likely know what you discussed. The metadata gives them just as much, if not more, private information than the content. Likewise, if the government knows that you’ve been taking a variety of anti-depression medications over a period of four years and the exact prescription you are currently taking, they will be able to infer not only that you are suffering from depression, but also the type of depression and the symptoms you may have. This is sensitive medical information, and the government should need a warrant supported by probable cause before accessing it.
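
To make the point concrete, here is a toy sketch, using entirely hypothetical data, showing how a handful of call records with no content at all are enough to flag exactly the kind of sensitive medical event described above:

    # Toy sketch with hypothetical data: metadata alone -- who was contacted
    # and when, never what was said -- is enough to flag a sensitive event.
    from datetime import datetime, timedelta

    # (timestamp, callee) pairs: pure metadata, no call content.
    calls = [
        (datetime(2017, 6, 1, 9, 5), "HIV testing service"),
        (datetime(2017, 6, 1, 9, 40), "doctor's office"),
        (datetime(2017, 6, 1, 10, 0), "health insurance company"),
    ]

    def flag_medical_event(records, window=timedelta(hours=1)):
        """Flag a cluster of medically related calls within one time window."""
        keywords = ("HIV", "doctor", "insurance")
        medical = [(t, who) for t, who in records if any(k in who for k in keywords)]
        if len(medical) >= 3 and medical[-1][0] - medical[0][0] <= window:
            return "Inferred: testing, then doctor, then insurer, within one hour"
        return None

    print(flag_medical_event(calls))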

Both Lewis v. Superior Court and PDMP v. DEA left the door open for future Fourth Amendment challenges to warrantless access of prescription drug records. Those challenges likely aren’t far off.  And when the courts are finally ready to decide whether the Fourth Amendment protects prescription records, we’ll be there, urging them to do the right thing.

This article is part of our Year In Review series. Read other articles about the fight for digital rights in 2017.

Like what you're reading? Support digital freedom defense today!

Security Education in Uncertain Times: 2017 in Review

Sun, 12/24/2017 - 7:03pm

From the time Donald Trump became president-elect in November 2016 and through 2017, EFF was flooded by requests for digital security workshops. They poured in from all over the country: educational nonprofits, legal groups, libraries, activist networks, newsrooms, scientist groups, religious organizations. There are a few reasons for this rise in digital security training requests. Certainly, the 2016 election made a lot of communities rethink their relationships with the U.S. government. At the same time, a rise in high-profile stories of groups being hacked, often using increasingly sophisticated methods, left many of those groups feeling vulnerable.

We've also seen a marked increase in technologists hungry for opportunities to share security advice in their communities, with their friends, and with groups they support. We heard from people who'd unwittingly become the de facto source for security education in their communities, but didn't have the right structure to teach effectively. We heard from nonprofit professionals who wanted to start offering security workshops in their spaces but didn't know how. And we heard from people who really wanted to help out but just had no idea where to start. 

When thinking about how we'd respond to this new demand and what value EFF could provide specifically in the security education space, we kept coming back to one point: the solution isn't to fly around the country hosting digital security trainings. It doesn't work to drop into a roomful of people you've never met before and have no connection to, spend a day teaching them security basics, and then leave, never to interact with them again. 

Our conversations repeatedly brought us to the effectiveness of training from within: learning security from a friend will be more effective than learning from an outsider. You can ask more honest questions and have deeper discussions. And when something goes wrong, you won't call an organization in another city across the country for help; you'll call your friend. 

We thought about what an alternative model for teaching digital security might look like, and where EFF and EFF supporters could uniquely add value. Influenced by human-centered design methods, we wanted to make sure that what we created was informed by a thorough understanding of these problems and the space, and that it complemented the many wonderful digital security resources already out there. The Security Education Companion came out of this long journey.

We began by conducting informal interviews at conferences and on calls, where we spoke at length with dozens of US-based and international digital security trainers and practitioners. From late 2016 through July 2017, we asked trainers a loose series of questions, including:

“What is the starting point for a security training?”

"What are the hardest things for participants to learn in a security training? What do participants tend to misunderstand?”

“What is the fundamental knowledge that people should have coming out of a security training?” 

We facilitated two webinars with the Electronic Frontier Alliance and learned more about the digital security training scene in various cities around the US. These conversations with trainers helped us to assess what seasoned digital security trainers are already doing, what kind of resources they are using, what kinds of resources are missing, and where more guidance is needed for newer teachers of digital security. We learned that many trainers use our Surveillance Self-Defense resources to inform their training, and we learned where trainers felt that these existing resources fell short. We shared these comments back with our SSD team, and we have worked hard to address these concerns.

Here at EFF, we were turning those findings into our new approach to security education. We compared and discussed existing resources on pedagogy, educational resources for new teachers, end-user focused digital security resources, and existing methods for teaching digital security. We tested out our draft security education materials and teaching approaches in our own digital security events, reflected on how they could be improved, and iterated on them. We looked through all our incoming training requests, created sixteen personas based on these requesting groups, and used these personas to help inform the beginnings of a curriculum. We also created a tone guide, requirements for our writing (such as striving to meet Simple English constraints), a glossary for our new terms, and guidelines for our graphics and materials to make it easier for end users to remix and localize them. We decided to narrow our audience to new teachers of digital security who would be teaching to their friends and neighbors.

When we had enough data to begin making materials, we used it to create twenty inclusive digital security learner personas and trainer personas. We used these personas to help us organize and prioritize what advice we felt was important to share with new teachers, what materials they might need for a basic digital security workshop, and lesson modules. As we designed the educational materials and the structure of the website, we shared our materials with a group of internal and external digital security trainers to test our materials with learners, collected feedback from their experiences, and made adjustments to our educational materials. We also began sharing our resources with a group of trusted digital security practitioners, and solicited feedback.

The Security Education Companion is growing and improving, and we are excited to share it with beginner teachers of digital security. Through 2018, we will continue to work hard to ensure that the Companion improves on the existing collective body of training knowledge and practice. Read our Security Education 101 articles and try out the lesson modules with your friends: we’d love to hear how they worked out.

This article is part of our Year In Review series. Read other articles about the fight for digital rights in 2017.

Like what you're reading? Support digital freedom defense today!

The Year the Open Internet Came Under Siege: 2017 Year in Review

Sat, 12/23/2017 - 12:47pm

The fight between the Federal Communications Commission, which chose to abandon the principles of net neutrality, and the majority of Americans, who support them, started early in 2017 and continued into the very last month of the year. But even with the FCC’s bad vote coming so late, we fought all year to build up momentum that will allow us to fix the agency’s blunder in 2018.

2017 started out with a warning: in his final address as chairman of the FCC, Tom Wheeler said that the future of a free and open Internet safeguarded by net neutrality was hanging by a thread. “All the press reports seem to indicate that the new commission will choose an ideologically based course,” said Wheeler. Wheeler also offered up the argument that “Network investment is up, investment in innovative services is up, and ISPs’ revenues—and stock prices—are at record levels. So, where’s the fire? Other than the desires of a few [providers] to be free of meaningful oversight, why the sudden rush to undo something that is demonstrably working?”

That would be a constant question posed throughout 2017: why would the FCC, under its new chairman, former Verizon lawyer Ajit Pai, move to eliminate something as functional and popular as net neutrality? After all, net neutrality protections guarantee that all information transmitted over the Internet be treated equally, preventing Internet service providers from prioritizing, say, their own content over that of competitors. It’s a logical set of rules that preserves the Internet as we know it. Net neutrality has been protected by the FCC for over a decade, culminating in the 2015 Open Internet Order, which we worked hard to get adopted in the first place.

As early as February, there were signs that the FCC was going to abandon its role guarding against data discrimination by ISPs. Early in the month, the FCC indicated it would cease investigating AT&T’s zero-rating practices. “Zero-rating” is when a company doesn’t count certain content against a user’s data limit. While zero-rating may sound good in theory, in reality it’s just your provider picking winners and losers and trying to influence how you use your data. AT&T was zero-rating content from DirecTV, which it owns. And, prior to Pai’s chairmanship, the FCC wanted to know if AT&T was treating all video service the same, in accordance with the principles of net neutrality. As Chairman, Pai abandoned the investigation.

The argument consistently put forward by opponents of net neutrality is that it imposes onerous rules on ISPs that stifle innovation and competition in the marketplace. The innovation claim is undermined by the many start-ups that lined up to defend net neutrality, telling the FCC that creativity depends on clear, workable rules. The competition claim is just as laughable, given that it is the large broadband companies that wanted net neutrality gutted—the same companies that are often the only option customers have. Net neutrality protections that forced monopolist ISPs to treat all data the same were some of the only competitive safeguards we had. Without them, nothing would temper alleged practices like Time Warner’s misleading of customers and Internet content providers, which the Open Internet Order had held in check.

On April 26, the fear and rumor became reality as the FCC chairman announced his intention to roll back the Open Internet Order and “reclassify” broadband Internet access so that ISPs would be allowed to block content and selectively throttle speeds, which was previously prohibited. We knew this was unpopular and would have a devastating effect on speech and the Internet, so we gave you a tool to tell that to the FCC. We knew that the vast majority of you support net neutrality, and we worked hard to make sure your voices were heard.

The new plan proposed by Pai claimed to make ISPs answerable to the Federal Trade Commission (FTC) instead of the FCC – even though a pending court case might keep the FTC from having any oversight of major telecommunications companies altogether. Even if it retains some authority, the FTC can only get involved when ISPs break the promises they chose to make—a flimsy constraint that telecom lawyers can easily write around. Sure enough, just as the FCC carried out Pai’s repeal, we saw Comcast roll back its promises on net neutrality. And that was just the start of the problems we have with Pai’s proposal. An attack on the open Internet is an attack on free speech, and that’s worth defending.

In June, we and a coalition of hundreds of other groups that included nonprofits, artists, tech companies large and small, libraries, and even some ISPs called for a day of action in support of net neutrality. That day came on July 12, when EFF and other websites “blocked” access to their websites unless visitors “upgraded” to “premium” Internet service, a parody of the real consequences that would follow the repeal of net neutrality. Our day of action resulted in 1.6 million comments sent to the FCC.

We kept busy in July, submitting our own comment to the FCC in strong opposition to the proposed repeal. Removing net neutrality protections would, we explained, open the door to blocking websites, selectively throttling Internet speeds for some content, and charging fees to access favored content over “fast lanes.” Our comment joined that of nearly 200 computer scientists, Internet engineers, and other technical luminaries who pointed out that the FCC’s plan was premised on a number of misconceptions about how the Internet actually works and how it is used.

Even with the comments from the engineers, the final version of the plan released by the FCC still contained incorrect information about how the Internet works. It became clear the FCC was forging ahead with a repeal, without stating a valid reason for doing so or listening to the voices of the public that were pouring in. With that in mind, we created a tool that makes it easy to tell Congress to protect the web and created a guide for other ways to get involved.

On December 14, the FCC voted 3-2 to roll back net neutrality and abdicate its responsibility to ensure a free and open Internet. That vote is not the end of the story, not by far. The new rule is being met with legal challenges from all sides, from public interest groups to state attorneys general to technology companies. Meanwhile, state governments have started introducing laws to protect net neutrality on a local level. Even as lawsuits begin, Congress can stop the FCC nightmare from going forward. Under the Congressional Review Act (CRA), Congress has a window of time to reverse an agency rule. This means that we, and you, must continue to monitor and pressure Congress to do so. So call Congress and urge them to use their power under the CRA to save the Open Internet Order.

This article is part of our Year In Review series. Read other articles about the fight for digital rights in 2017.

Like what you're reading? Support digital freedom defense today!

Beating Back the Rise of Law Enforcement’s Digital Surveillance of Protestors: 2017 in Review

Sat, 12/23/2017 - 11:00am

In 2017, we’ve seen a dramatic rise in the number of high-profile cases where law enforcement has deployed digital surveillance techniques against political activists. From the arrest and prosecution of hundreds of January 20, 2017 Inauguration Day (J20) protestors to the systematic targeting, surveillance, and infiltration of Water Protectors in Standing Rock, North Dakota, and the Black Lives Matter movement over social media, law enforcement and private security firms have taken advantage of the wealth of information available online to undermine activists’ credibility and efficacy.

While government surveillance and investigation of opposition groups is nothing new, the tools and methods for conducting such surveillance, and the sheer scope of information that can be captured about these groups, are staggering. The magnitude of information now available in the digital age via platforms like Facebook, Instagram, and Twitter continues to grow exponentially, documenting your location, contact networks, calendars, and communications. Independently, consent-less access to these discrete data points may seem little more than intrusive, but when aggregated, this information creates a very intimate portrait of our day-to-day lives that law enforcement can and has used against dissenting voices.

When law enforcement comes knocking, it is increasingly up to the social media platforms and their users to stand up and call for help in protecting user rights and privacy. That’s exactly what happened in the J20 cases. This past summer, the U.S. Department of Justice (DOJ) tried to gag Facebook from warning its users about the DOJ’s demand for their information using a court-issued gag order. Rather than capitulate to government pressure, Facebook reached out to the community for help and we answered the call.

EFF and our allies told the court to invalidate the gag order because it infringed upon Facebook’s constitutional rights to free and anonymous speech and association. The First Amendment simply cannot abide the government’s forced silencing of Facebook from informing its users that the DOJ has obtained their data. Such compelled silence would deprive individuals of their right to seek government redress over invasions into their online anonymity and would presumptively restrain online speech, without any binding standards, fixed deadlines, or judicial review.

Fortunately, the DOJ finally came to its senses after EFF and our allies called public attention to the constitutional violations wrought by its gagging of Facebook, and moved to vacate its gag orders with the court rather than face the dressing down that was sure to come if the case had proceeded to argument. While we’re pleased with the result here, the DOJ still routinely uses gag orders that go far beyond the very narrow circumstances allowed by the First Amendment. We must remain vigilant in 2018 to see that the courts rein in such abuse of power. For if experience is any indication, the government will push its boundaries until someone stands up to them. 

For example, in the fall of 2017, the DOJ demanded user information on over 1.3 million visitors to the disruptJ20 website via a search warrant. Thankfully, disruptJ20’s webhost, DreamHost, refused to produce the data and, like Facebook, reached out to the community for support and filed a motion in opposition to the DOJ’s request.

With the amplified public attention brought to the issue by EFF and other media groups, the DOJ finally backed down and narrowed the scope of its warrant to exclude most visitor logs, set a temporal limit for records, and withdrew its demand for unpublished content, like draft blog posts and photos. While the DOJ didn’t go quite as far as we’d like in reining in its request for protesters’ digital information, this was still a crucial win in the battle for user privacy and freedom of anonymous speech and association.

Despite ever-increasing law enforcement intrusion into protestors’ digital lives, we must stand strong against fear and self-censorship and look to one another to raise and answer the call for robust user privacy practices and protections from our social media platforms.  When we speak together, history has shown that our voices are strong enough to turn the tide back on the government’s digital intrusion into constitutionally protected activity. Join us as we continue the fight in 2018.

The six defendants in the first J20 trial were found not guilty on all counts by a jury on Dec. 21, 2017. A second trial for a separate group of defendants will be scheduled in the New Year.

This article is part of our Year In Review series. Read other articles about the fight for digital rights in 2017.

Like what you're reading? Support digital freedom defense today!

Surveillance Battles: 2017 in Review

Fri, 12/22/2017 - 5:11pm

If you’ve been following EFF’s work, you’ll know that we’ve been fighting against the creeping surveillance state for over 20 years. Often, this means pushing back against the National Security Agency’s dragnet surveillance programs, but as new technology becomes available, new threats emerge.

Here are some of the biggest legislative fights we had in 2017.

FISA Section 702

Section 702 is a surveillance authority that is part of the FISA Amendments Act of 2008. It was created as a way for the intelligence community to collect foreign intelligence from non-Americans located outside of the United States. However, the way the law is currently written allows the NSA to “incidentally” collect communications from an untold number of Americans. We say “untold” because the government has never disclosed how many law-abiding Americans have had their communications vacuumed up by the NSA and other intelligence-gathering organizations. In addition to being used to prevent terrorism, Section 702 allows that collected information to be used in ordinary law enforcement activities. As we have witnessed in several recent Congressional hearings, even the members of Congress tasked with overseeing these programs literally don’t know how many Americans have been impacted, because the FBI, the DOJ, and the NSA have refused repeated requests to share that information.

Section 702 authority was set to expire on December 31, 2017, which means that Congress had a chance to make the many necessary changes needed to protect their constituents from excessive government surveillance. Various members of Congress have introduced some great bills, but other bills do nothing to prevent unwarranted dragnet surveillance.

We are disappointed that Congress hasn’t prioritized having a transparent debate about how law enforcement and intelligence agencies should be using their spying authorities while also respecting Americans’ Fourth Amendment rights. Sadly, as we approached the potential sunset of Section 702 at the end of the year with no consensus in sight, Congressional leadership punted by tacking a three week extension of Section 702 into a must-pass spending bill. The new deadline is January 19, 2017, and we hope that this time, Congress will use this opportunity to end warrantless, unconstitutional surveillance for good.

No matter what happens, we stand ready to continue the fight to rein in sweeping spying programs.

Facial Recognition and other Biometric Screening

In 2004, the U.S. Department of Homeland Security (DHS) began biometric screening of foreign citizens upon their arrival in the U.S. In 2016, DHS launched a pilot program to expand facial recognition screening to U.S. citizens, in addition to foreign travelers, on a daily international flight out of Atlanta. This summer, DHS went even further, working to expand the screening to all travelers on certain flights out of certain airports, with the list of airports growing. Customs and Border Protection (CBP) has also announced plans to expand its facial recognition program to land borders in 2018, requiring any person driving into the U.S. to submit to biometric screening. DHS executives have even been quoted as saying that they would like to replace ID checks with biometric screening at every point in the airport where we currently have to show ID.

While Congress did authorize automated tracking of foreign citizens as they enter and exit the U.S. in 1996, it has not authorized this intrusion into the lives of American travelers. DHS expanded these programs on its own, backed by President Trump’s revised travel ban.

Several Members of Congress are scrambling to codify DHS’s increased biometric surveillance, introducing several bills in 2017, such as Sen. Cornyn’s bill S. 1757, Sen. Thune’s bill S. 1872, Rep. McCaul’s bill H.R. 3548, and others. These bills would both authorize these programs, and in some cases, expand them even further. Additionally, it’s possible that expanded biometric screening could be included in upcoming legislation that contains permanent changes to DACA.

As we have written extensively, biometric screening, and especially its implementation as a law enforcement tool, is inherently problematic. Our faces are easy to capture and hard to change. Plus, facial recognition has significant accuracy problems, especially for non-white travelers. One of the biggest problems of this screening is data security. The Equifax database breach was a grave violation of privacy, based just on release of numbers (like dates of birth and Social Security numbers). The risk to privacy posed by breach of biometric databases is even greater. The government must answer questions about how the data will be stored, how long it will be stored, and how they will ensure that data is kept secure.

Our governments should not force us to choose between traveling and the privacy of our faces. We will continue to oppose bills that endanger Americans’ privacy, watch for biometric screening language sneaking into other bills, and work with our allies in Congress to beat back these threats to privacy.

Cell-Site Simulator Devices

At the beginning of 2017, we were heartened when the House Oversight and Government Reform Committee (OGR) issued a bipartisan report acknowledging and detailing police abuse of cell-site simulators, also known as stingrays. OGR’s report also called on Congress to pass legislation requiring that this technology only be deployed based on a court-issued probable cause warrant. We agree that Congress should set forth clear guidelines like this on the limits of this authority.

Sadly, Congress has not yet passed this legislation, even as news broke demonstrating how necessary these limits are. Through a FOIA request, BuzzFeed found that DHS used cell-site simulators 1,885 times from January 2013 through October 2017 throughout the United States. However, how and why DHS used these devices remains unclear.

Sen. Ron Wyden asked these questions, sending a letter to U.S. Immigration and Customs Enforcement (ICE) requesting information on the agency’s use of the devices. Sen. Wyden asked what policies govern the use of stingrays in law enforcement operations, and what steps ICE takes to limit interference with innocent Americans’ devices. ICE responded by saying that cell-site simulators are allowed under both current law and current policy. ICE maintains that there is “virtually” no interference with “non-targeted” devices, though it offers no evidence to that effect. Similarly, ICE claims that its use of cell-site simulators is limited and that current policy allows their use only with probable cause warrants.

While we are glad to know that ICE has a policy governing these devices, we also know that policies can easily be changed. Given the expansion of cell-site simulator snooping under this Administration, we will continue to work with Congress to create effective legislative protections against law enforcement overreach.

Going into 2018

As surveillance technology becomes cheaper and more accessible, law enforcement and intelligence agencies are going to continue to seek access to it, often at great cost to our privacy. Our increasingly digital lives show the growing need for ironclad privacy protections, and EFF plans to continue leading this fight for your rights in 2018 and beyond.

This article is part of our Year In Review series. Read other articles about the fight for digital rights in 2017.

Like what you're reading? Support digital freedom defense today!

What It Means to Fight for Technology Users in 2017

Fri, 12/22/2017 - 3:49pm

EFF fights for technology users. We believe that empowering and protecting users should be baked into laws, policies, and court decisions, as well as into the technologies themselves. Since our founding in 1990, we have paired this goal with the common-sense recognition that in order to properly consider these questions, you need to listen to the voices of those who understand, deeply, how the technology works.

This guidestar has served us from the very beginning of EFF. EFF helps courts and lawmakers recognize that the speech of Internet users must be protected against government censorship, that technology users should not be subject to overbroad search and seizure, and that strong encryption ensures that users have security and privacy in the digital world. We work to protect against copyright and patent laws that threaten technologies that empower users to co-create our culture—technologies like podcasts, user-created videos, and peer-to-peer filesharing. As users come to rely on the Internet—to find a job, connect with loved ones, speak out on issues of the day and organize for a better tomorrow—EFF’s role in standing up for users becomes increasingly important. 

So what does it mean to stand for the users in 2017 and looking into 2018? It means standing up to protect the fundamentals of democracy online, especially addressing the newly urgent needs of users organizing politically, while keeping up our longstanding role of encrypting the Web and combating mass surveillance. Specifically, it means:

  • Fighting against illegal search and seizure of digital devices at the border, and educating users about how to protect their privacy even as invasive searches have ratcheted up significantly under the Trump Administration.
  • Leading the way in identifying, tracking, and sounding the alarm about broadband provider violations of network neutrality, and seeking to build real competition through municipal broadband and other options.
  • Developing materials to help technologically savvy people better teach their fellow activists, colleagues, friends, and loved ones to protect themselves with the newly launched Security Education Companion. This is the latest step in our effort to ensure that everyone who needs it has basic Surveillance Self-Defense.
  • Tracking and exposing government surveillance and hacking, using tools that include the Freedom of Information Act, lawsuits, and the publication of our own investigations into state-sponsored malware attacks against activists around the world. 
  • Continuing to call out and oppose stupid patents, including winning more victories against a bad podcasting patent and standing up (and developing good speech law) against a patent troll that directly sued EFF.
  • Developing Privacy Badger, which protects users from third-party tracking cookies. 
  • Organizing against the ongoing effort to wrest control of your computer away from you via digital rights management.

And so much more.

As always, but perhaps more than ever in the past year, many threats to users come cloaked in the best of intentions. For instance, a looming Internet censorship bill known as SESTA is supposed to crack down on sex trafficking, but it wouldn’t punish traffickers. Instead, it would create an incentive for websites to police every piece of content uploaded by users, including photos, blog posts, and even personal messages, pushing websites to make censorship the default for user content. Similarly, drawing on the experience of our Online Censorship project and our 27 years of protecting marginalized voices online from wrongful censorship by platforms including Facebook, Google, and Twitter, we work to make sure that well-intentioned efforts to protect against Nazis, harassers, and potential terrorists don’t end up silencing the very people they are intended to protect.

EFF is committed to helping build the kind of digital world that we all want to live in. As we head toward 2018, we’re larger than ever before, with over 85 staffers including lawyers, technologists, and activists. That’s because the task of standing up for technology users has never been bigger. Please make your voice heard by joining us.

Donate to our year-end campaign

We’re continuing a tradition of looking back at the complicated technology policy issues we’ve grappled with over the last year in retrospective blog posts. Over the coming days, we’ll publish more than a dozen posts about how speech, privacy, and new technologies have changed society and the law in the past year. Please read the posts below, and check back every few days as we add more to this roundup.

2017 Retrospective Blog Posts

Surveillance Battles

Level Up Digital Freedom

Fri, 12/22/2017 - 2:49pm

The Electronic Frontier Foundation has just launched the Year End Challenge, the final opportunity before the end of 2017 to raise funds to protect online civil liberties and the open Internet. If you join before the year ends, you’ll help EFF receive additional challenge grants totaling $39,350! EFF is a U.S. 501(c)(3) nonprofit, and donations are tax-deductible as allowed by law.

Take the challenge

Give Before 2018 to Unlock Special Grants

This has been an extraordinary year filled with triumphs, but also new challenges to civil liberties. Member support has allowed EFF to fight back. In just the last few months, EFF launched a lawsuit against the Department of Homeland Security to protect travelers from unlawful searches at the U.S. border; scored a victory against persistent spying by automated license plate readers; expanded our network of grassroots activists; and developed new digital security education tools to strengthen every community.

Looking ahead, we are carving a path forward in the fight for net neutrality and pushing for crucial reforms to rein in NSA spying. As we anticipate new battles and regroup for rematches, we build on more than 25 years of impact litigation, advocacy, and technology development to protect users and defend digital civil liberties. What happens next will change the way we learn and interact for years to come, and we need public support to succeed in setting things right.

EFF is tremendously grateful for everything that members have helped us accomplish in the name of free expression, privacy, and the future of innovation. I urge you to reach out and support organizations like EFF that are stemming the tide of modern dystopia and building a better digital future.

Safari in Arms Race Against Trackers - Criteo Feels the Heat

Thu, 12/21/2017 - 7:00pm

Criteo is an ad company. You may not have heard of them, but they do retargeting: the ads that pursue users across the web, beseeching them to purchase a product they once viewed or have already bought. To identify users across websites, Criteo relies on cross-site tracking, using cookies and other methods to follow users as they browse. This has led the company to try to circumvent the privacy features in Apple’s Safari browser, which protect its users from such tracking. Despite this apparently antagonistic attitude toward user privacy, Criteo has also been whitelisted by the Acceptable Ads initiative, which means that its ads are unblocked by popular ad blockers such as Adblock and Adblock Plus. Criteo pays Eyeo, the operator of Acceptable Ads, for this whitelisting and must comply with its format requirements. But whitelisting also means Criteo can track any user of these ad blockers who has not disabled Acceptable Ads, even if they have installed privacy tools such as EasyPrivacy with the intention of protecting themselves. EFF is concerned about Criteo’s continued anti-privacy actions and its continued inclusion in Acceptable Ads.

Safari Shuts Out Third-Party Cookies...

All popular browsers give users control over who gets to set cookies, but Safari is the only one that blocks third-party cookies (those set by a domain other than the site you are visiting) by default. Safari’s default matters because only 5-10% of users ever change a program’s default settings. Criteo relies on third-party cookies. Since users have little reason to visit Criteo’s own website, the company gets its cookies onto users’ machines through its integration on many online retail websites. Safari’s cookie blocking is thus a major problem for Criteo, especially given the large and lucrative iPhone user base. Rather than accept this, Criteo has repeatedly implemented ways to defeat Safari’s privacy protections.
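
To make the mechanics concrete, here is a minimal sketch, in Python with Flask, of how an ordinary third-party cookie gets planted. Everything here is hypothetical (the domain tracker.example, the /t.js endpoint, the cookie name); it illustrates the general pattern, not Criteo’s actual code. A retail page embeds a script tag pointing at the tracker, and the tracker’s response tries to set a cookie scoped to its own domain:

    # Hypothetical third-party cookie setter (illustrative, not Criteo's code).
    # Imagine this app serving https://tracker.example/t.js, loaded via a
    # <script> tag embedded on many retail sites.
    from flask import Flask, make_response

    app = Flask(__name__)

    @app.route("/t.js")
    def tag():
        resp = make_response("/* tracking script */")
        resp.headers["Content-Type"] = "application/javascript"
        # The cookie belongs to tracker.example, which is a third party on
        # the retail page, so Safari's default setting discards it.
        resp.set_cookie("tracker_uid", "u-12345", secure=True)
        return resp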

One workaround researchers detected Criteo using was to redirect users from sites where its service was present through its own site. For example, if you visited wintercoats.com and clicked on a product category, you would first be diverted to criteo.com and then redirected on to wintercoats.com/down-filled. Although imperceptible to the user, this detour was enough to persuade the browser that criteo.com was a site you chose to visit, and therefore a first party entitled to set a cookie rather than a third party. Criteo applied for a patent on this method in August 2013.
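
As a rough illustration of that bounce, the sketch below shows a hypothetical redirect endpoint (again Flask; wintercoats.com is the invented retailer from the example above, and the endpoint and cookie names are ours, not Criteo’s). By making the browser genuinely navigate to the tracker’s domain, the tracker becomes a first party for a moment, so its cookie survives Safari’s default rule:

    # Hypothetical first-party bounce redirect (illustrative, not Criteo's code).
    # The retailer rewrites its category links to pass through this endpoint.
    from flask import Flask, make_response, redirect, request

    app = Flask(__name__)  # imagine this running on the tracker's own domain

    @app.route("/bounce")
    def bounce():
        # The browser really navigates here, so the tracker is briefly a
        # first party and its cookie is accepted under pre-ITP rules.
        destination = request.args.get("dest", "https://wintercoats.com/")
        resp = make_response(redirect(destination))
        resp.set_cookie("tracker_uid", "u-12345", max_age=60 * 60 * 24 * 365)
        return resp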

...And Closes the Backdoor

Last summer, however, Apple unveiled a new version of Safari with more sophisticated cookie handling, called Intelligent Tracking Prevention (ITP), which killed off the redirect technique as a means to circumvent the cookie controls. The browser now analyzes whether the user has engaged with a website in a meaningful way before allowing it to set a cookie. The announcement triggered panic among advertising companies, whose trade association, the Interactive Advertising Bureau, denounced the feature and rushed out technical recommendations to work around it. The fleeting “interaction” a user has with criteo.com during the redirect described above plainly fails ITP’s test, which meant Criteo was locked out again.

It appears that Criteo’s response was to abandon cookies for Safari users and instead generate a persistent identifier by piggybacking on a key user safety technology called HSTS. When a browser connects to a site via HTTPS (i.e., a site that supports encryption), the site can respond with an HTTP Strict Transport Security (HSTS) policy, instructing the browser to only contact it using HTTPS. Without an HSTS policy, your browser might try to connect to the site over regular old unencrypted HTTP in the future, and thus be vulnerable to a downgrade attack. Criteo used HSTS to sneak data into the browser cache and produce an identifier it could use to recognize an individual’s browser and profile them. The approach relied on the fact that HSTS data is difficult to clear in Safari: deleting the identifier required purging the cache entirely. For EFF, it is especially worrisome that Criteo chose a technique that pits privacy protection against user security by targeting HSTS. Use of this mechanism was documented by Gotham City Research, an investment firm that has bet against Criteo’s stock.
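
The trick is easiest to picture as a “super-cookie” that stores an identifier one bit per subdomain. The sketch below is a conceptual illustration of the general HSTS-tracking technique under assumed subdomains like b0.tracker.example, b1.tracker.example, and so on; it is not Criteo’s actual implementation. To write bit N as 1, a page loads an HTTPS resource from subdomain N, whose response carries an HSTS header; to read the identifier back, the page requests every subdomain over plain HTTP and the server observes which requests the browser silently upgrades:

    # Conceptual HSTS super-cookie (illustrative, not Criteo's implementation).
    # Each assumed subdomain bN.tracker.example stores one bit of the ID.
    from flask import Flask, request

    app = Flask(__name__)
    ONE_YEAR = 60 * 60 * 24 * 365

    @app.route("/write")
    def write_bit():
        # Fetched over HTTPS from bN.tracker.example only when bit N is 1;
        # the header tells the browser to always use HTTPS for this host.
        resp = app.make_response("pixel")
        resp.headers["Strict-Transport-Security"] = f"max-age={ONE_YEAR}"
        return resp

    @app.route("/read")
    def read_bit():
        # Later, a page requests http://bN.tracker.example/read for each N.
        # A browser holding the HSTS policy upgrades that request to HTTPS,
        # so the scheme of the arriving request reveals the stored bit.
        return "1" if request.is_secure else "0"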

In early December, Apple released an update to iOS and Safari which disabled Criteo’s ability to exploit HSTS. This led Criteo to revise its revenue forecasts downward, and its share price fell sharply.

    How is Criteo "Acceptable Advertising"?

    "... we sort of seek the consent of users, just like we had done before." - Erich Eichmann, CEO, Criteo

    "Only users who don’t already have a Criteo identifier will see the header or footer, and it is displayed only once per device. Thanks to [the?] Criteo advertisers network, most of your users would have already accepted our services on the website of another of our partner. On average, only 5% of your users will see the headers or footers, and for those who do, the typical opt-out rate is less than .2%." - Criteo Support Center

Criteo styles itself as a leader in privacy practices, yet it has dedicated significant engineering resources to circumventing privacy tools. It claims to have obtained user consent to tracking based on a minimal warning delivered in what we believe to be a highly confusing context. When users first visit a site containing Criteo’s script, they receive a small notice stating, “Click any link to use Criteo’s cross-site tracking technology.” If they continue to use the site, they are deemed to have consented. Little wonder that Criteo can boast of a low opt-out rate to its clients.

Due to its observed behavior prior to the ITP episode, Criteo’s incorporation into Acceptable Ads in December 2015 aroused criticism among users of ad blockers. We have written elsewhere about how Acceptable Ads creates a clash of interests between ad-blocking companies and their users, especially those concerned with their privacy. But Criteo’s participation brings into focus a substantive problem with the program itself: the criteria for Acceptable Ads are concerned chiefly with format and aesthetics (How big is the ad? How visually intrusive is it? Does it blink?) and exclude privacy concerns. Retargeting is unpopular and mocked by users, in part because it wears its creepy tracking practices on its sleeve. In our view, Criteo’s bad behavior should exclude its products from being deemed “acceptable” in any way.

The fact that the Acceptable Ads initiative has approved Criteo’s ads, which track users by misusing security features, is indicative of the privacy problems we believe to be at the heart of the program. In March of this year, Eyeo announced an Acceptable Ads Committee that will control the criteria for Acceptable Ads going forward. The Committee should start by instituting a rule that excludes companies that circumvent explicit privacy tools or exploit user security technologies for the purpose of tracking.