EFF: Updates

EFF's Deeplinks Blog: Noteworthy news from around the internet

Copyright Week 2018: Join Us in Fighting for Better Copyright Law and Policy

Mon, 01/15/2018 - 10:57am

We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, addressing what's at stake and what we need to do to make sure that copyright promotes creativity and innovation.

Copyright law shapes the world we live in. It is supposed to encourage progress and creativity, enriching our culture and contributing to the growth of knowledge. However, the law is often used as a blunt instrument by a few prominent actors to preserve their cultural dominance. Less obviously, governments and other large industries have taken to using the law to hide information they don’t want us to see and use or to limit functionality and ownership of software and devices we buy and use. The law shouldn’t work this way. It should serve us all.

It doesn’t matter if you are a creator or simply someone who enjoys media; an inventor or someone who just wants to use, fix, or tinker with your devices; a researcher or someone who wants to look up information—copyright law impacts all of these things. And, right now, the law is out of whack. It’s balanced in favor of people who want to control things, instead of people who want to share things.

Thankfully, recent years have shown that we can push back and work to fix copyright law. Six years ago this week, a diverse coalition of Internet users, non-profit groups, and Internet companies defeated the Stop Online Piracy Act (SOPA) and the PROTECT IP Act (PIPA), bills that would have forced Internet companies to blacklist and block websites accused of hosting copyright infringing content. SOPA and PIPA would have made censorship frighteningly easy and stopped the Internet from being a place where innovation and ideas can flourish.

Since the battle over SOPA and PIPA, we’ve continued to identify threats and fight against them. We also continue to fight for a copyright law that does what it’s supposed to: actually encourage new innovation, art, and knowledge, not just enrich established industries.

One way we do that is with Copyright Week. Every year, joining together with a diverse group of organizations, we set aside this week to highlight and advocate for a set of principles that should guide copyright law. This year, they are:

  • Monday: Public Domain and Creativity. Copyright policy should encourage creativity, not hamper it. Excessive copyright terms inhibit our ability to comment, criticize, and rework our common culture.
  • Tuesday: Controlling Your Own Devices. As software-enabled devices become ubiquitous, so do onerous licensing agreements and technological restrictions. If you buy something, you should be able to truly own it – meaning you can learn how it works, repair it, remove unwanted features, or tinker with it to make it work in a new way.
  • Wednesday: Transparency. Whether in the form of laws, international agreements, or website terms and standards, copyright policy should be made through a participatory, democratic, and transparent process.
  • Thursday: Copyright as a Tool of Censorship. Freedom of expression is a fundamental human right essential to a functioning democracy. Copyright should encourage more speech, not act as a legal cudgel to silence it.
  • Friday: Safe Harbors. Safe harbor protections allow online intermediaries to foster public discourse and creativity. Safe harbor status should be easy for intermediaries of all sizes to attain and maintain.

Every day this week, we’ll be sharing links to blog posts and actions on these topics at https://www.eff.org/copyrightweek and at #CopyrightWeek on Twitter.

As we said last year, and the year before that, if you too stand behind these principles, please join us by supporting them, sharing them, and telling your lawmakers you want to see copyright law reflect them.

A Step in the Right Direction: House Passes the Cyber Vulnerability Disclosure Reporting Act

Fri, 01/12/2018 - 10:41am

The House of Representatives passed the “Cyber Vulnerability Disclosure Reporting Act” this week. While the bill is quite limited in scope, EFF applauds its goals and supports its passage in the Senate.

H.R. 3202 is a short and simple bill, sponsored by Rep. Sheila Jackson Lee (D-TX), that would require the Department of Homeland Security to submit a report to Congress outlining how the government deals with disclosing vulnerabilities. Specifically, the mandated report would comprise two parts. First, a “description of the policies and procedures developed [by DHS] for coordinating cyber vulnerability disclosures,” or in other words, how the government reports flaws in computer hardware and software to the developers. And second, a possibly classified “annex” containing descriptions of specific instances where these policies were used to disclose vulnerabilities in the previous year, leading to mitigation of the vulnerabilities by private actors.

Perhaps the best thing about this short bill is that it is intended to provide some evidence for the government’s long-standing claims that it discloses a large number of vulnerabilities. To date, such evidence has been exceedingly sparse; for instance, Apple received its first ever vulnerability report from the U.S. government in 2016. Assuming the report and annex work as intended, the public’s confidence in the government’s ability to “play defense” may actually increase.

The bill has no direct interaction with the new Vulnerabilities Equities Process (VEP) charter, which was announced last November. As we said then, we think the new VEP is probably a step in the right direction, and this bill provides further support for transparency into the government's handling of vulnerabilities.

As an aside, we question the need to classify the annex describing actual instances of disclosed vulnerabilities. Except perhaps in exceptional circumstances, this information should be public, especially in light of dubious statements by officials like White House Cybersecurity Coordinator Rob Joyce, who said last week that “the U.S. government would never put a major company like Intel in a position of risk like this to try to hold open a vulnerability.” Reassurances like that remain hard to take at face value given the NSA’s recent history of sabotaging American companies’ computer security.

We’ll be watching as the bill moves to the Senate.

Related Cases: EFF v. NSA, ODNI - Vulnerabilities FOIA

House Fails to Protect Americans from Unconstitutional NSA Surveillance

Thu, 01/11/2018 - 1:49pm

The House of Representatives cast a deeply disappointing vote today to extend NSA spying powers for the next six years by a 256-164 margin. In a related vote, the House also failed to adopt meaningful reforms on how the government sweeps up large swaths of data that predictably include Americans’ communications.                                                                 

Because of these votes, broad NSA surveillance of the Internet will likely continue, and the government will still have access to Americans’ emails, chat logs, and browsing history without a warrant. Because of these votes, this surveillance will continue to operate in a dark corner, routinely violating the Fourth Amendment and other core constitutional protections.      

This is a disappointment to EFF and all our supporters who, for weeks, have spoken to defend privacy. And this is a disappointment for the dozens of Congress members who have tried to rein NSA surveillance in, asking that the intelligence community merely follow the Constitution.

Today’s House vote concerned S. 139, a bill to extend Section 702 of the Foreign Intelligence Surveillance Act (FISA), a powerful surveillance authority the NSA relies on to sweep up countless Americans’ electronic communications. EFF vehemently opposed S. 139 for its failure to enact true reform of Section 702.

As passed by the House today, the bill:

  • Endorses nearly all warrantless searches of databases containing Americans’ communications collected under Section 702.
  • Provides a narrow and seemingly useless warrant requirement that applies only for searches in some later-stage criminal investigations, a circumstance which the FBI itself has said almost never happens.
  • Allows for the restarting of “about” collection, an invasive type of surveillance that the NSA ended last year after being criticized by the Foreign Intelligence Surveillance Court for privacy violations.
  • Sunsets in six years, delaying Congress’ best opportunity to debate the limits of NSA surveillance.

You can read more about the bill here.                             

Sadly, the House’s approval of S. 139 was its second failure today. The first was in the House’s inability to pass an amendment—through a 183-233 vote—that would have replaced the text of S. 139 with the text of the USA Rights Act, a bill that EFF is proud to support. You can read about that bill here.

The amendment to replace the text of S. 139 with the USA Rights Act was introduced by Reps. Justin Amash (R-MI) and Zoe Lofgren (D-CA) and included more than 40 cosponsors from both sides of the aisle. Its defeat came from both Republicans and Democrats.

S. 139 now heads to the Senate, which we expect to vote on it by January 19. The Senate has already considered stronger bills to rein in NSA surveillance, and we call on the Senate to reject this terrible bill coming out of the House.

We thank every supporter who lent their voice to defend the Constitution. And we thank every legislator who championed civil liberties in this months-long fight. The debate around surveillance reform has evolved—and will continue to evolve—for years. We thank those who have come to understand that privacy does not come at the price of security. Indeed, we can have both.

Thank you to the scores of representatives who sponsored and co-sponsored the USA Rights Act amendment, or voiced support on the House floor today, including Reps. Amash, Lofgren, Jerrold Nadler, Ted Poe, Jared Polis, Mark Meadows, Tulsi Gabbard, Jim Sensenbrenner, Walter Jones Jr., Thomas Massie, Andy Biggs, Warren Davidson, Mark Sanford, Steve Pearce, Scott Perry, Sheila Jackson Lee, Alex Mooney, Paul Gosar, David Schweikert, Louie Gohmert, Ted Yoho, Joe Barton, Dave Brat, Keith Ellison, Lloyd Doggett, Rod Blum, Tom Garrett Jr., Morgan Griffith, Jim Jordan, Earl Blumenauer, Ro Khanna, Beto O’Rourke, Todd Rokita, Hank Johnson, Blake Farenthold, Mark Pocan, Dana Rohrabacher, Raúl Grijalva, Raúl Labrador, Peter Welch, Tom McClintock, Salud Carbajal, Ted Lieu, Bobby Scott, Pramila Jayapal, and Jody Hice.


Ninth Circuit Doubles Down: Violating a Website’s Terms of Service Is Not a Crime

Wed, 01/10/2018 - 2:24pm

Good news out of the Ninth Circuit: the federal court of appeals heeded EFF’s advice and rejected an attempt by Oracle to hold a company criminally liable for accessing Oracle’s website in a manner it didn’t like. The court ruled back in 2012 that merely violating a website’s terms of use is not a crime under the federal computer crime statute, the Computer Fraud and Abuse Act. But some companies, like Oracle, turned to state computer crime statutes—in this case, California and Nevada—to enforce their computer use preferences.

This decision shores up the good precedent from 2012 and makes clear—if it wasn’t clear already—that violating a corporate computer use policy is not a crime.

Oracle v. Rimini involves Oracle’s terms of use prohibition on the use of automated methods to download support materials from the company’s website. Rimini, which provides Oracle clients with software support that competes with Oracle’s own services, violated that provision by using automated scripts instead of downloading each file individually. Oracle sent Rimini a cease and desist letter demanding that it stop using automated scripts, but Oracle didn’t rescind Rimini’s authorization to access the files outright. Rimini still had authorization from Oracle to access the files, but Oracle wanted it to download them manually—which would have seriously slowed down Rimini’s ability to service customers.

Rimini stopped using automated downloading tools for about a year but then resumed using automated scripts to download support documents and files, since downloading all of the materials manually would have been burdensome, and Oracle sued. The jury found Rimini liable under both the California and Nevada computer crime statutes, and the judge upheld that verdict—concluding that, under both statutes, violating a website’s terms of service counts as using a computer without authorization or permission.

Rimini Street appealed, and we filed an amicus brief last year urging the court to reject Oracle’s position. As we told the court, the district court’s reasoning turns millions of Internet users into criminals on the basis of innocuous and routine online conduct. By making it completely unclear what conduct is criminal at any given time on any given website, the district court’s holding violates the long-standing Rule of Lenity—which requires that criminal statutes be interpreted to give clear notice of what conduct is criminal. Not only do people rarely (if ever) read terms of use agreements, but the bounds of criminal law should not be defined by the preferences of website operators. And private companies shouldn’t be using criminal laws meant to target malicious actors as a tool to enforce their computer use preferences or to interfere with competitors.

At oral argument in July 2017, Judge Susan Graber pushed back [at around 33:40] on Oracle’s argument that automated scraping was a violation of the computer crime law. And Monday, the 3-judge panel issued a unanimous decision rejecting Oracle’s position. As the court held:

“[T]aking data using a method prohibited by the applicable terms of use”— i.e., scraping — “when the taking itself generally is permitted, does not violate” the state computer crime laws.

The court even refers to our brief:

“As EFF puts it, ‘[n]either statute . . . applies to bare violations of a website’s terms of use—such as when a computer user has permission and authorization to access and use the computer or data at issue, but simply accesses or uses the information in a manner the website owner does not like.’”

We’re happy to see the Ninth Circuit clarify, again, that violating a website’s terms of service is not a crime. And we hope this decision influences another case pending before the court involving an attempt to use a computer crime statute to enforce terms of service and stifle competition, hiQ v. LinkedIn. That case addresses whether using automated tools to access publicly available information on the Internet—information that we are all authorized to access under the Web’s open access norms—is a crime. It’s not, and we hope the court agrees. It will hear oral argument in March in San Francisco.

Related Cases: United States v. David Nosal, hiQ v. LinkedIn

EFF to Court: Don’t Let Trolls Get Away With Asserting Stupid Software Patents

Tue, 01/09/2018 - 2:01pm

If trolls don’t face consequences for asserting invalid software patents, then they will continue to shake down productive companies. That is why EFF has filed an amicus brief [PDF] urging the court to uphold fee awards against patent trolls (and their lawyers) when they assert software patents that are clearly invalid under the Supreme Court’s decision in Alice v. CLS Bank (which held that an abstract idea is not eligible for a patent simply because it has been implemented on a generic computer). Our brief explains that the most abusive patent trolling tends to come from trolls that own abstract software patents.

This case began when a patent troll called AlphaCap Ventures sued Gust, a company that connects startups with investors around the world. Claiming its patent covered various forms of online equity financing, AlphaCap Ventures filed suit against ten different crowdfunding platforms. Most of the defendants settled quickly. (In many patent troll suits, even when the patent is very weak, the high cost of litigation pressures defendants to settle.) But Gust fought back. Faced with a defendant willing to actually challenge its patent, AlphaCap Ventures eventually dismissed its claim. The district court ruled that AlphaCap Ventures’ attorneys had litigated unreasonably and ordered them to pay Gust’s attorneys’ fees. The lawyers then appealed.

In their appeal, AlphaCap Ventures’ attorneys argue that the law of patent eligibility—particularly the law regarding when a claimed invention is an abstract idea and thus ineligible for patent protection—is so unsettled that a court should never award fees when a party loses on the issue. Our brief argues that this would be a very dangerous rule. Certainly, some patent eligibility questions are difficult. But that does not mean all eligibility questions are difficult. Our brief explains that many of the most prolific trolls have made objectively unreasonable eligibility arguments. Indeed, district courts have already awarded fees in a few cases where trolls made unreasonable arguments regarding patent eligibility under Alice.

A group of companies (Acushnet, Garmin, Red Hat, SAP, SAS, Symmetry, and Vizio) also filed an amicus brief [PDF]. The companies’ brief makes the important point that fee awards against lawyers are essential to deter abuse from patent trolls. This is because most patent trolls are shell companies and many are structured to ensure that the troll never loses. Indeed, we’ve seen a few recent cases where patent trolls were hit with fee awards and then claimed they had no money to pay.

To take one example, notorious patent troll Shipping & Transit LLC (and its predecessors) has filed hundreds of cases asserting a family of patents on notification technology. After courts finally hit it with fee awards, the troll claimed [PDF] poverty. This is despite having secured more than 800 payouts, likely totaling many millions of dollars.

It seems that patent trolls are set up to ensure that even when defendants win, they lose. Without fee awards against the lawyers, abusive patent trolling will continue to flourish. We hope the Federal Circuit agrees.

New York City Adopts Historic Policing Reform

Tue, 01/09/2018 - 11:43am

Prompted by a diverse grassroots movement, much of the country continues to debate important proposed policing reforms at the local level. Many local policing campaigns that EFF supports focus on ending the era of law enforcement agencies acquiring surveillance equipment in secret. The latest campaign to prove successful secured a new law advancing transparency in New York City not only in policy, but also on the ground: the Right to Know Act.

Adopted in a two-part measure, the Right to Know Act responds to the experience of New Yorkers and visitors subjected to law enforcement stops, frisks, and searches of personal possessions including digital devices like cell phones and tablets. The City Council’s passage of the measures comes in spite of fear-mongering and falsehoods promoted by police unions.

One part of the reform, supported by EFF and a litany of groups in New York (including some in the Electronic Frontier Alliance), was sponsored by Council member Antonio Reynoso. It requires NYPD officers to inform subjects of proposed consent searches that they have the right to decline consent, and also requires officers to document objective proof of such consent (using a body camera, for instance) before proceeding with a search.

This measure sensibly advances notice of rights in a manner consistent with the historic Miranda ruling that, since 1966, has required that arrestees be informed of their rights upon arrest. The Right to Know Act essentially extends Miranda-esque warnings to the search stage, so that civilians are informed of their rights at a point where NYPD activities have prompted widespread and longstanding claims of abuse.

Like the host of reforms adopted by Providence, RI last year, the Right to Know Act in New York City represents a model policy that other cities would do well to consider. Residents of other cities inspired by the success of activists on the ground in these cities can seek EFF’s help in replicating their success by joining the Electronic Frontier Alliance.

Groups Line Up For Meaningful NSA Surveillance Reform

Mon, 01/08/2018 - 8:21pm

Multiple nonprofit organizations, policy think tanks, and one company have recently joined ranks to limit broad NSA surveillance. Though our groups work for many causes—freedom of the press, shared software development, universal access to knowledge, equal justice for all—our voices are responding to the same threat: the possible expansion of Section 702 of the FISA Amendments Act.

On January 5, the Rules Committee for the House of Representatives introduced S. 139. The bill—which you can read here—is the most recent attempt to expand Section 702, a law that the NSA uses to justify the collection of Americans’ electronic communications during foreign intelligence surveillance. The new proposal borrows some of the worst ideas from prior bills meant to reauthorize Section 702, while adding entirely new bad ideas, too.

Meaningless Warrant Requirements

The new proposal to expand Section 702 fails to protect Americans whose electronic communications are predictably swept up during broad NSA surveillance. Today, the NSA uses Section 702 to target non-U.S. persons not living in the United States, collecting emails both “to” and “from” an individual. Predictably, those emails include messages sent by U.S. persons. The government stores those messages in several databases that—because of a loophole—can then be searched and read by government agents who do not first obtain a warrant, even when those communications are written by Americans.

These searches are called “backdoor” searches because they skirt the Fourth Amendment’s warrant requirement for Americans’ communications.

The new proposal would require a warrant for such backdoor searches in only the narrowest of circumstances.

According to the bill, FBI agents would only have to obtain search warrants “in connection with a predicated criminal investigation opened by the Federal Bureau of Investigation that does not relate to the national security of the United States.”

That means an FBI agent would only need to get a warrant once she has found enough information to launch a formal criminal investigation. Before that point, should an FBI agent wish to search through Section 702-collected data that belongs to Americans, she can do so freely without a warrant.

The bill’s narrow warrant requirement runs the Fourth Amendment through a funhouse mirror, flipping its intentions and providing protections only after a search has been made.

“About” Collection

“About” collection is an invasive type of NSA surveillance that the agency ended last year, after years of criticism from the Foreign Intelligence Surveillance Court, which provides judicial oversight on Section 702. This type of collection allows the NSA to tap the Internet’s backbone and collect communications that are simply “about” a targeted individual. The messages do not have to be “to” or “from” the individual.

The new proposal to expand Section 702 regrettably includes a path for the Attorney General and the Director of National Intelligence to restart “about” collection. It is a model that we saw in an earlier Section 702 reauthorization bill in 2017. EFF vehemently opposed that bill, which you can read about here.

Working Together

Today, EFF sent a letter to House of Representatives leadership, lambasting any bill that would extend Section 702 without including robust backdoor search warrant requirements. You can read our letter here.

EFF also wrote a letter—joined by Aspiration Tech, Freedom of the Press Foundation, and Internet Archive—to House of Representatives Minority Leader Nancy Pelosi, demanding the same. You can read that letter here.

GitHub, the communal coding company, joined the effort, sending a letter of their own to Minority Leader Pelosi’s office, too. Read GitHub’s letter here.

And policy think tanks across America, including the Brennan Center for Justice and the Center for American Progress, have written in opposition to S. 139.

For weeks, surveillance apologists have tried to ram NSA surveillance expansion bills through Congress. They are not letting up.

We will need your help this week more than ever. To start, you can call Leader Pelosi and let her know: any bill to extend Section 702 must include robust warrant requirements for American communications. 

Call today.

Supreme Court Won’t Hear Key Surveillance Case

Mon, 01/08/2018 - 7:22pm

The Supreme Court announced today that it will not review a lower court’s ruling in United States v. Mohamud, which upheld warrantless surveillance of an American citizen under Section 702 of the Foreign Intelligence Surveillance Act. EFF had urged the Court to take up Mohamud because this surveillance violates core Fourth Amendment protections. The Supreme Court’s refusal to get involved here is disappointing.

Using Section 702, the government warrantlessly collects billions of communications, including those belonging to a large but unknown number of Americans. The Ninth Circuit Court of Appeals upheld this practice only by creating an unprecedented exception to the Fourth Amendment. This exception allows the government to collect Americans’ communications without a warrant by targeting foreigners outside the United States—a practice known as “incidental collection.”

We wish the Supreme Court had stepped in to fix this misguided ruling, but its demurral shouldn’t be taken to mean that Section 702 surveillance is totally fine. Some of the most controversial aspects of these programs have never been reviewed by a public court, let alone the Supreme Court. That includes “backdoor searches,” the practice of searching databases for Americans’ incidentally collected communications. Even in deciding Mohamud, the Ninth Circuit refused to address the constitutionality of backdoor searches.

Thorough judicial review of Section 702 surveillance remains one of EFF’s key priorities. In addition, as Congress nears a vote on the statute’s reauthorization, we’re pushing for legislative reforms to eliminate backdoor searches and other unconstitutional practices.

How to Assess a Vendor's Data Security

Mon, 01/08/2018 - 4:58pm

Perhaps you’re an office manager tasked with setting up a new email system for your nonprofit, or maybe you’re a legal secretary for a small firm and you’ve been asked to choose an app for scanning sensitive documents: you might be wondering how you can even begin to assess a tool as “safe enough to use.” This post will help you think about how to approach the problem and select the right vendor.

As every organization has unique circumstances and needs, we can’t provide definitive software recommendations or provider endorsements. However, we can offer some advice for assessing a software vendor and for gauging their claims of protecting the security and privacy of your clients and your employees.

If you are signing up for a service where you will be storing potentially sensitive employee or client data, or if you are considering using a mobile or desktop application which will be handling client or employee data, you should make sure that the company behind the product or service has taken meaningful steps to secure that data from misuse or theft, and that they won’t give the data to other parties—including governments or law enforcement—without your knowledge.

If you are the de facto IT person for a small organization—but aren’t sure how to evaluate software before adopting it—here are some questions you can ask vendors before choosing to buy their software. (For the purposes of this post, we will be focusing on concerns relating to the confidentiality of data, as opposed to the integrity and availability of data.)

If the company can’t or won’t answer these questions, they are asking you to trust them based on very little evidence: this is not a good sign.

Here are some general things to keep in mind before investing in software applications for your organization. When you’re researching vendors, consider the following:
  • Have there been past security issues with or criticisms of the tool?
  • If so, how quickly have they responded to criticisms about their tool? How quickly have they patched or made updates to fix vulnerabilities? Generally companies should provide updates to vulnerable software as quickly as possible, but sometimes actually getting the updated software can be difficult.
    • Note that criticisms and vulnerabilities are not necessarily a bad sign for the company, as even the most carefully-built software can have vulnerabilities. What’s more important is that the company takes security concerns seriously, and fixes them as quickly as possible.
    • Of course, companies selling products and enthusiasts advertising their latest software can be misled, be misleading, or even outright lie. A product that was originally secure might have terrible flaws in the future.
  • Do you have a plan to stay well-informed on the latest news about the tools that you use?
    • Setting up a Google Alert for “[example product] data breach flaw vulnerability” is one way to find out about problems with a product that you use, though it probably won’t catch every problem.
    • You can also follow tech news websites or social media to keep up with information security news. You can check the “Security News” section of the Security Education Companion, which curates EFF Deeplinks posts relevant to software vulnerabilities, as well as other considerations for people teaching digital security to others.
  • Is this vendor honest about the limitations of their product?
    • If a vendor makes claims like “NSA-Proof” or “Military Grade Encryption” without stating what the security limitations of the product are, this can be a sign that the vendor is overconfident in the security of their product. A vendor should be able to clearly state the situations that their security model doesn’t defend against.
  • Does the company provide a guarantee about the availability of your data?
    • This is sometimes called a “Service Level Agreement” (SLA).
    • How likely is it that this company is going to stick around? Does it seem like they have sustainable business practices?
    • If the service disappears, is there a way to access your data and move it to another service provider or app? Or will it be gone forever?
    • Is there any chance that they will ban you from using their app or service, and thus also lock you out from accessing your data? Think about whether there are any limits to how the service can be used.
Questions to ask the vendor:

Note that you may not be able to hit all of the following points—however, asking these questions will give you a better sense of what to expect from the service.

  • Does the vendor have a privacy policy on their website? Do they share or sell data to any third parties?
    • If you have the means to chat with a lawyer while reviewing the privacy policy, you can ask about:
      • Notification: Do they promise to notify us of any legal demand before handing over any of our data, or data about us (with no exceptions)?
      • Viewing: Do they promise not to look at our data themselves, except when they absolutely need to?
      • Sharing: Do they require anyone who they share the data with to abide by the same privacy policy and notification terms?
      • Restriction: Are they only using the data for the purpose for which it was provided?
  • Will the vendor disclose any client data to their partners or other third parties in the normal course of business? If so, are those conditions clearly stated? What are the privacy practices of those other entities?
  • Does the vendor follow current best practices in responding to government requests in your jurisdiction?
    • Do they require a warrant before handing over user content?
    • Do they publish regular transparency reports?
    • Do they publish law enforcement guides?
  • Do they have a dedicated security team? If so, what is their incident response plan? Do they have any specifics about responding to breaches of data security?
  • Have they had a recent security audit? If there were any security issues found, how quickly were they fixed?
  • How often do they get security audits? Will they share the results, or at least share an executive summary?
  • What measures do they take to secure private data from theft or misuse?
  • Have they had a data breach? (This is not necessarily a bad thing, especially if they have a plan for how to prevent breaches in the future. This is really about what was breached—for example, was it a contact list from a webform, or their customers’ health information files?)
    • If they had a data breach in the past, what measures have they taken to prevent a data breach in the future?
  • How does the company notify customers about data breaches?
  • Does the vendor give advance notice when it changes its data practices?
  • Does the vendor encrypt data in transit? Do they default to secure connections? (For example, does their website redirect unencrypted HTTP to encrypted HTTPS? One way to check this yourself is sketched just after this list.)
  • What is the vendor’s disaster recovery plan and backup scheme?
  • What internal controls exist for the vendor’s staff accessing logs, client data, and other sensitive information?
  • Does this service allow two-factor authentication on login?
    • If not, why not? How soon do they plan to implement it?
  • Do they push regular software updates?
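
As an aside, the question above about redirecting HTTP to HTTPS is one you can check yourself in a few lines of code. Below is a minimal sketch using Python and the third-party requests library; the hostname is a placeholder for whatever service you are evaluating, and a redirect to HTTPS is only one small positive signal, not proof of good security.

    # Minimal sketch: does a site redirect plain HTTP to HTTPS?
    # Assumes the third-party "requests" library (pip install requests);
    # "vendor.example.com" is a placeholder, not a real service.
    import requests

    def redirects_to_https(hostname):
        # Fetch the insecure URL, follow redirects, and see where we land.
        response = requests.get("http://" + hostname + "/",
                                allow_redirects=True, timeout=10)
        return response.url.startswith("https://")

    if redirects_to_https("vendor.example.com"):
        print("Redirects HTTP to HTTPS: a good baseline.")
    else:
        print("Serves unencrypted HTTP: ask the vendor why.")
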
While many companies don’t yet do this, it is still good to ask:
  • Do they encrypt stored data? (This is also called “encrypted at rest.” For example, when it’s “in the cloud”/on their computers, is it encrypted? A minimal illustration of the concept follows this short list.)
  • Do they have a bug bounty program? If they do not have a bug bounty program in place, how do they respond to vulnerability reports? (If they are hostile to security researchers, this is a bad sign.)
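
For context on the “encrypted at rest” question, here is a minimal illustration of the concept using Python and the third-party cryptography library: data is encrypted before it is written to storage, so a stolen copy of the stored file is unreadable without the key. This is a sketch of the idea only, not a description of how any particular vendor implements it.

    # Illustration of "encryption at rest": only ciphertext touches disk,
    # so the stored file is useless to a thief who lacks the key.
    # Assumes the third-party "cryptography" library (pip install cryptography).
    from cryptography.fernet import Fernet

    key = Fernet.generate_key()   # real systems keep keys in a key management service
    fernet = Fernet(key)

    record = b"client: Jane Doe, case notes..."   # made-up sensitive data
    with open("record.enc", "wb") as f:
        f.write(fernet.encrypt(record))           # write ciphertext, never plaintext

    with open("record.enc", "rb") as f:
        assert fernet.decrypt(f.read()) == record # readable only with the key
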
If the service is free...

It is often said that “if the software is free, then you are the product”—this is true of any company that has targeted advertising as a business model. This is even true of the free products that nonprofits use. For this reason, free services and apps should be treated with extra caution. If you are pursuing a free service, in addition to asking the questions above, you will want to consider the following additional points.

  • How does the vendor make money? Do they make money by selling access to—or products based on—your private data?
  • Will they respond to customer service requests?
  • How likely are they to invest in security infrastructure?
If your organization has legally-mandated requirements for protecting data...
  • If your organization has a unique legal circumstance (e.g. needing to abide by attorney-client privilege, HIPAA requirements for those in the medical profession, COPPA and FERPA for working with K-12 students), ask:
    • Is the client data being stored and transmitted in accordance with the legally mandated standards of your field?
    • How often do they re-audit that they are in compliance with these standards?
    • Are the audit results publicly available?
  • If you use education technology or if you work with youth under 18 years old, consider following up with this series of questions for K-12 software vendors: check out EFF’s white paper on student privacy and recommendations for school stakeholders.

These questions do not on their own guarantee that the vendor or product will be perfectly private or secure, but that’s not a promise any vendor or software can make (and if they did, it would be a red flag). However, the answers to these questions should at least give you some idea of whether the vendor takes security and privacy seriously, and can therefore help you make an informed decision about whether to use their product. For more information about considerations for smaller organizations evaluating tools, check out Information Ecology’s Security Questions for Providers.

Guest Author: Jonah Sheridan - Information Ecology

New CBP Border Device Search Policy Still Permits Unconstitutional Searches

Mon, 01/08/2018 - 4:22pm

U.S. Customs and Border Protection (CBP) issued a new policy on border searches of electronic devices that's full of loopholes and vague language and that continues to allow agents to violate travelers’ constitutional rights. Although the new policy contains a few improvements over rules first published nine years ago, overall it doesn’t go nearly far enough to protect the privacy of innocent travelers or to recognize how exceptionally intrusive electronic device searches are.

Nothing announced in the policy changes the fact that these device searches are unconstitutional, and EFF will continue to fight for travelers’ rights in our border search lawsuit.

Below is a legal analysis of some of the key features of the new policy.

The New Policy Purports to Require Reasonable Suspicion for Forensic Searches, But Contains a Huge Loophole and Has Other Problems

CBP’s previous policy permitted agents to search a traveler’s electronic devices at the border without having to show that they suspect that person of any wrongdoing. The new policy creates a distinction between two types of searches, “basic” and “advanced.” Basic searches are when agents manually search a device by tapping or mousing around to open applications or files. Advanced searches are when agents use other equipment or software to conduct forensic analysis of the device’s contents.

The updated policy states that basic searches can continue to be conducted without suspicion, while advanced searches require border agents to have “reasonable suspicion of activity in violation of the laws enforced or administered by CBP.” [5.1.4]

This new policy dichotomy appears to be inspired by the U.S. Court of Appeals for the Ninth Circuit’s 2013 case U.S. v. Cotterman, which required reasonable suspicion for forensic searches. CBP’s new policy defines advanced searches as those where a border agent “connects external equipment, through a wired or wireless connection, to an electronic device not merely to gain access to the device, but to review, copy, and/or analyze its contents.”

The Cotterman ruling applies only in the western states within the Ninth Circuit’s jurisdiction, whereas this new policy applies nationwide. It’s notable, however, that it took CBP five years to address Cotterman in a public document.

There are at least four problems with this new rule.

First, this new rule has one huge loophole—border agents don’t need to have reasonable suspicion to conduct an advanced device search when “there is a national security concern.” This exception will surely swallow the rule, as “national security” can be construed exceedingly broadly and CBP has provided few standards for agents to follow. The new policy references individuals on terrorist watch lists, but then mentions unspecified “other articulable factors as appropriate.”

Second, as we argue in our lawsuit against CBP and its sister agencies (now called Alasaad v. Nielsen), the Constitution requires border agents to obtain a probable cause warrant before searching electronic devices, given the unprecedented and significant privacy interests travelers have in their digital data. A mere reasonable suspicion standard for electronic device searches at the border, with no court oversight of those searches, is insufficient under the Fourth Amendment to protect personal privacy. Thus, the new policy is wrong to state that it goes “above and beyond prevailing constitutional and legal requirements.” [4]

Third, it is inappropriate to have a legal rule hinge on the flimsy distinction between “manual/basic” and “forensic/advanced” searches. As we’ve argued previously, while forensic searches can obtain deleted files, “manual” searches can be effectively just as intrusive as “forensic” searches given that the government obtains essentially the same information regardless of what search method is used: all the emails, text messages, contact lists, photos, videos, notes, calendar entries, to-do lists, and browsing histories found on mobile devices. And all this data collectively can reveal highly personal and sensitive information about travelers—their political beliefs, religious affiliations, health conditions, financial status, sex lives, and family details.

Fourth, this new rule broadly asserts that border agents need only “reasonable suspicion of activity in violation of the laws enforced or administered by CBP” before conducting an advanced search. We argue that the Constitution requires that agents’ suspicions be tied to data on the device—in other words, border agents must have a basis to believe that the device itself contains evidence of a violation of an immigration or customs law, not a general belief that the traveler has violated an immigration or customs law.

The New Policy Explicitly (and Wrongly) Requires Travelers to Unlock Their Devices at the Border

The new policy basically states that travelers must unlock or decrypt their electronic devices and/or provide their device passwords to border agents. Specifically: “Travelers are obligated to present electronic devices and the information contained therein in a condition that allows inspection of the device and its contents.” [5.3.1]

This is simply wrong—as we explained in our border guide (March 2017), travelers have a right to refuse to unlock, decrypt, or provide passwords to border agents. However, there may be consequences, such as travel delay, device confiscation, or even denial of entry for non-U.S. persons.

The New Policy Confirms Border Agents Cannot Search Cloud Content, But Details Betray CBP’s Stonewalling of EFF's FOIA Request

The new policy finally confirms that CBP agents must avoid accessing data stored in the cloud when they conduct device searches by placing devices in airplane mode or otherwise disabling network connectivity. [5.1.2] In April 2017, the agency said that border agents could only access data that is stored locally on the device. EFF filed a Freedom of Information Act (FOIA) request to get a copy of that policy and to learn precisely how agents avoided accessing data stored remotely.

CBP initially stonewalled our efforts to get answers via our FOIA request, redacting the portions of the policy that explained how border agents avoided searching cloud content. But after we successfully appealed and got more information released, and CBP Acting Commissioner Kevin McAleenan made additional public statements, we were able to learn that border agents were disabling network connectivity on the devices.

Frustratingly, CBP continued to claim that the specific methods border agents used to disable network connectivity—which we suspected was primarily toggling on airplane mode—were secret law enforcement techniques. The redacted document states:

To avoid retrieving or accessing information stored remotely and not otherwise present on the device, where available, steps such as [REDACTED] must be taken prior to search.

Prior to conducting the search of an electronic device, an officer will [REDACTED].

Those details should never have been redacted under FOIA. CBP apparently now agrees. Section 5.1.2 of the new policy states:

To avoid retrieving or accessing information stored remotely and not otherwise present on the device, Officers will either request that the traveler disable connectivity to any network (e.g., by placing the device in airplane mode), or, where warranted by national security, law enforcement, officer safety, or other operational considerations, Officers will themselves disable network connectivity.

It thus appears that the new policy contains much of the same information that CBP redacted in response to our FOIA request. The fact that such information is now public in CBP’s updated policy makes the agency’s initial stonewalling all the more unreasonable. 

Border Agents Will Now Handle Attorney-Client Privileged Information Differently

The new policy provides more robust procedures for data that is protected by the attorney-client privilege (the concept that communications between attorneys and their clients are secret) or that is attorney work product (materials prepared by or for lawyers, or for litigation). A “filter team” will be used to segregate protected material.

Unfortunately, no new protections are provided for other types of sensitive information, such as confidential source or work product information carried by journalists, or medical records.

Conspicuously Absent: Any Updates to ICE’s Border Device Search Policy

While we welcome the improvements in the new policy, it’s important to note that it only applies to CBP. U.S. Immigration and Customs Enforcement (ICE), which includes agents from Homeland Security Investigations (HSI), has not issued a comparable new policy. And oftentimes it is ICE/HSI agents who conduct border searches, not CBP agents, so any enhanced privacy protections found in the new policy are wholly inapplicable to searches by those agents.

CBP Must Update Policy in Three Years

Finally, the new policy must be reviewed again by CBP in three years. This is important, given that much has changed in the nine years since the original policy was published in 2009, yet CBP never updated its policy to reflect changes in the law that occurred during that time.

The loopholes and failures of CBP’s new policy for border searches of electronic devices demonstrate that the government continues to flout Fourth Amendment rights at the border. We look forward to putting these flawed policies before a judge in our lawsuit Alasaad v. Nielsen.

Related Cases: Alasaad v. Nielsen

EFF Supports Stricter Requirements for DNA Collection From Minors

Sun, 01/07/2018 - 3:35pm

When the San Diego police targeted black children for DNA collection without their parents' knowledge in 2016, it highlighted a critical loophole in California law. Now, State Assemblymember Gonzalez Fletcher has introduced legislation—A.B. 1584—that would ensure cops cannot stop-and-swab youth without judicial approval or parental consent. EFF strongly supports this move.

A.B. 1584 would require law enforcement to obtain a court order, a search warrant, or the written consent of both the minor and their parent or legal guardian before collecting DNA from the minor, except in a few narrow circumstances when DNA collection is already required under existing law.

Current California law attempts to place limitations on when law enforcement can collect DNA from children. Existing law states that law enforcement can collect DNA from minors only in extremely limited circumstances: after a youth has been convicted of or pleaded guilty to a felony, or if they are required to register as a sex offender or are in a court-mandated sex offender treatment program. But here's the loophole: this only applies to DNA that law enforcement seizes for inclusion in statewide or federal databases. That means local police departments have been able to maintain local databases not subject to these strict limitations.

In San Diego, as Voice of San Diego reported, this resulted in at least one case where police stopped a group of kids who were walking through a park after leaving a basketball game at a rec center. The boys were each asked to sign a form consenting to a cheek swab. The ACLU is currently suing SDPD over the incident.

DNA can reveal an extraordinary amount of private information about a person, from familial relationships to medical history to predisposition for disease. Children should not be exposed to this kind of privacy invasion without strict guidelines and meaningful parental notification and consent.

We urge the California Legislature to strengthen privacy protections for California kids and send A.B. 1584 to the governor's desk.

California Senate to Hear EFF’s License Plate Cover Bill

Fri, 01/05/2018 - 3:28pm

Across the country, private companies are deploying vehicles mounted with automated license plate readers (ALPRs) to drive up and down streets to document the travel patterns of everyday drivers. These systems take photos of every license plate they see, tag them with time and location, and upload them to a central database. These companies—essentially data brokers that scrape information from our vehicles—sell this information to lenders, insurance companies, and debt collectors. They also sell it to law enforcement, including the U.S. Department of Homeland Security, which recently released its updated policy for leveraging commercial ALPR data for immigration enforcement.

The Atlantic has called this collection of our license plates “an unprecedented threat to privacy.” This data, collected in aggregate, can reveal intimate details about our lives, including what doctors we visit, where we worship, where we take our kids to school, and where we sleep at night. Companies marketing this data claim that the technology can predict our movements and link us to our associates based on which vehicles are often parked next to each other. 

To address this threat, EFF is a sponsor of S.B. 712, a California bill introduced by Sen. Joel Anderson that would allow drivers to cover their license plates when lawfully parked. The legislation, which was filed in 2017, will receive a fresh vote by the California Senate Transportation and Housing Committee on Jan. 9. 

Current California law forbids vehicle owners from doing anything to their license plates that would interfere with ALPRs. However, there is one exception: drivers may cover their entire vehicles to protect their cars from the elements. The premise of S.B. 712 is that if it’s OK to cover the entire vehicle, including the license plate, then it should be legal to cover just the license plate, provided the driver is parked legally. Law enforcement officers would retain the authority to lift the cover to examine the plate number.

In practical application, S.B. 712 would allow a patient to cover their plate when they park at a reproductive health clinic to keep that sensitive medical information private. It would allow a visitor at a mosque, church, or temple to cover their plate to protect their religious activities from being sold by data brokers. Clients of immigration lawyers could cover their plates when they visit the firms to ensure they can access counsel without triggering an Immigrations & Customs Enforcement alert. 

[Embedded video: https://www.youtube-nocookie.com/embed/B9zBqgfIIZI. Privacy info: this embed will serve content from youtube-nocookie.com.]

Dave Maass testifies on S.B. 712 in 2017

In opposing the bill, law enforcement has overstated the usefulness of ALPR data collected from parked vehicles. During a May 9, 2017 hearing, law enforcement representatives were unable to present any data supporting their claims. Following that hearing, EFF filed dozens of public records requests around the state of California to find that data. We found that less than 0.1% of license plate data collected by police is connected to a crime at the point of collection, but the remaining 99.9% of the data is stored and shared anyway.

For example, the Sacramento Police Department collects, on average, 25 million plate scans each year. Only a tiny fraction of those plates, 0.1%, were connected to an active investigation when the information was collected. And yet the rest of the plates, roughly 24.97 million of them, are shared with more than 750 agencies nationwide with little vetting or control.

The records further showed that some jurisdictions had even worse results: In 2016, the San Diego Police Department collected 493,000 plates, but only 0.02% (98 plates) were connected to a crime. Of those, only a single vehicle was connected to a felony. Over one 90-day period, the City of Irvine collected 217,000 plates, but again, only 0.02% (a grand total of 40 plates) were connected to a crime, usually vehicle theft.
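
The percentages above are small enough that the arithmetic is worth spelling out. This short Python sketch simply recomputes the hit rates from the counts reported in this post:

    # Recomputing the ALPR hit rates quoted above from the reported counts.
    reports = [
        ("San Diego PD, 2016", 493_000, 98),
        ("City of Irvine, 90 days", 217_000, 40),
    ]
    for agency, scanned, hits in reports:
        print(f"{agency}: {hits} of {scanned:,} plates = {100 * hits / scanned:.2f}%")

    # Sacramento: ~25 million scans a year at a ~0.1% hit rate leaves
    # about 24.97 million unrelated plates stored and shared anyway.
    print(f"{25_000_000 * (1 - 0.001):,.0f} plates shared")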

EFF is joined by the ACLU in supporting this legislation. We believe S.B. 712 provides a balanced solution since it does not create a new burden on companies or state agencies, but rather empowers drivers to protect their privacy if they so choose.

Read EFF's letter in support of S.B. 712. For more information on ALPRs, you can review EFF’s Street-Level Surveillance project, which outlines how ALPRs work and the threat they present to privacy. 

State Child Care Laws Should Not Require Teenage Kids to Submit Biometric Data to the FBI

Fri, 01/05/2018 - 3:04pm

Former EFF legal intern Holden Benon co-wrote this blog post.

Jennifer Parrish, a child care provider in Minnesota who runs a day care out of her home, finds herself at a crossroads due to a recently passed Minnesota law. The law imposes new background check requirements on child care providers, including that they provide biometric information. But the law doesn’t apply just to the providers themselves; it also requires anyone age 13 and up who lives with a family day care provider to submit to the same background check, whether or not they have committed any crime. This means Jennifer’s 14-year-old son, along with about 12,000 other kids in Minnesota, must provide his fingerprints and a face recognition photograph to the state, which will send them to the FBI to be stored for his lifetime in the FBI’s vast biometrics database.

New Federal Funding for Child Care Requires a Biometric Background Check

In 2014, President Obama signed into law the federal Child Care and Development Block Grant, which provides states with additional child care funding if they enact new policies and procedures designed to improve the quality of early care, education, and afterschool programs for kids.  Among the new Block Grant requirements, states must now conduct criminal background checks for anyone 18 and over who lives in a “family child care home” (a home where the child care provider lives and also takes care of other peoples’ kids), whether or not they have any actual interaction with children. The law states that background checks are to be conducted through the FBI’s Next Generation Identification repository (NGI). 

States Can Enact Stricter Laws

To receive these additional funds, states have to adopt and comply with the provisions set forth in the Block Grant. Minnesota has done so, but is also taking the background check requirements a step further.

Minnesota passed a law expanding the federal law, in part, by defining “child care staff persons” to include anyone who is 13 years old or over and lives in the home. The law further states that these teenage children must submit photographs and fingerprints, which will be retained by the FBI in NGI.

Texas passed a similar law requiring each child 14 years of age or older staying at a family home to submit a complete set of fingerprints to the FBI’s NGI database. But other states, such as California and Vermont, passed laws that are closer to the federal standard.

Minnesota’s Background Check Requirement Violates Privacy Rights

The FBI’s NGI database stores civil and criminal fingerprints together.  This means that any fingerprints submitted for licensing or for a background check will most likely end up living indefinitely in NGI—to be searched thousands of times a day for any crime, no matter how minor, by over 20,000 law enforcement agencies across the country and around the world.

Photographs are stored in the face recognition component of NGI. While the FBI says this part of the database currently separates photos taken for a non-criminal purpose from criminal mugshots, if a person is ever arrested for any crime—even for shoplifting at a grocery store—their non-criminal photographs will be combined with their criminal record and will become fair game for the same criminal database searches as any mugshot photo. 

Parents should not have to worry that their children’s biometrics will be collected and stored in NGI, but that’s exactly what will happen under Minnesota’s new law. This creates a very real possibility that kids will be implicated in crimes they didn’t commit. Consider, hypothetically, a 13-year-old who submits fingerprints and a photograph to NGI so that their parents’ daycare complies with state licensing requirements. Later, the kid is arrested for stealing candy from a drug store. Any law enforcement agency in the country with access to NGI could then find the kid’s original licensing photograph through a search of NGI’s criminal face recognition database. If they happen to look similar to someone recorded in a grainy security camera video committing another crime, they could become a suspect solely because the face recognition system flagged them as a match.

Face recognition is notoriously inaccurate, especially for young people, people of color, and women. The FBI has admitted its system will only make a correct match—assuming the suspect is already in the database—about 85% of the time. Teenage children should not bear the risk of being later implicated in serious crimes—and their parents shouldn’t have to worry this will happen—simply because their family home also functions as a legal daycare.

Government Transparency

Unsurprisingly, the Minnesota state government has not been transparent with family day care providers about how this provision will impact their children’s privacy rights. 

For one, the specific provision that invades these rights was tucked into the end of a 672-page document, making it difficult for child care providers to understand and comply with the law. Also, according to Parrish and another family daycare provider, Jennifer Seydel, the Minnesota legislature did not afford interested individuals the opportunity to submit public comments criticizing or explaining the negative consequences of the law. 

It is also troubling that, despite the explicit text of the Minnesota law, the Minnesota Department of Human Services has signaled at various times that it will not subject minors to the stated fingerprinting and photograph requirements. In fact, the Department has stated that fingerprints or photos of minors will not be collected unless the individual has a Minnesota criminal record and an “offender status.” Minnesota child care authorities likewise state on their Frequently Asked Questions page that the statute only affects individuals over the age of 18. The plain language of the statute directly contradicts these representations.

Further, the Department has stated that this is a measure for “providers who want to make sure that the person they’ve hired is the person who has the background study.” But the requirement that children living with their parents submit their photographs and fingerprints to the FBI repository does nothing to ensure that the person a provider has hired “is the person who has the background study.”

The vast majority of child care laws are likely well-intentioned.  But lawmakers should be cautious not to negatively impact children on the other side of the equation.  When drafting new laws, lawmakers should take into account the potential impact on individuals’ privacy rights. As Parrish has told us, Minnesota’s law makes her son very uncomfortable because it means he has to have his fingerprints and photographs taken and submitted to the FBI. Parrish says he’s never broken a law and is a good kid. He feels like he’s being treated as though he’s done something wrong. In Parrish’s situation, protecting the children at her family child care facility should not come at the expense of invading the privacy of her own fourteen-year-old child.

Related Cases: FBI's Next Generation Identification Biometrics Database

California Introduces Its Own Bill to Protect Net Neutrality

Fri, 01/05/2018 - 2:03pm

2018 has barely begun, but the fight to preserve net neutrality is already underway. January 3 was the first day of business in the California state legislature, and state Sen. Scott Wiener used it to introduce legislation to protect net neutrality for Californians.

As the FCC has sought to abandon its role as the protector of a free and open Internet at the federal level, states are seeking ways to step into the void. Prior to December, the FCC’s rules prevented Internet service providers (ISPs) from blocking or slowing down traffic to websites. The rules also kept ISPs from charging users higher rates for faster access to certain websites or charging websites to be automatically included in any sort of “fast lane.” On December 14th, the FCC voted to remove these restrictions and even tried to make it harder for anyone else to regulate ISPs in a similar way.

Wiener’s proposed legislation, co-authored by ten state Assembly and Senate Democrats, includes a number of mechanisms to ensure that telecom companies operating in California adhere to the principles of net neutrality. Washington and New York have similar bills in progress, and Wiener isn’t the only California legislator on the issue: state Sen. Kevin de León has introduced a net neutrality bill as well.

The substance of the legislation is still in the works, but the intent is to use the state’s assets as leverage to require networks to operate neutrally. In essence, the California bill would require net neutrality of businesses operating within the state if they rely on state infrastructure or state funding to provide their service.

EFF supports this bill, as the FCC’s actions in December mean states must provide whatever protections they can to safeguard the Internet as we know it. However, state laws can only restore network neutrality for some Americans, and only a federal rule can ensure that everyone in the country has access to a neutral net.

Even as state legislatures craft bills, state attorneys general are joining public interest groups and members of Congress to challenge the FCC in federal court. Congress has the ability to reverse a change in federal regulation—which is technically what the FCC’s rule change is—with a simple majority within 60 legislative days of the order being published in the Federal Register. The order is expected to be published this year, which will start that clock, so you can ask your member of Congress to save net neutrality now.

Software Copyright Back Before Federal Circuit: Time for the Court to Get it Right

Thu, 01/04/2018 - 12:58pm

Should a company be able to shut down competition by asserting copyright in a collection of software commands? Tech giant Cisco Systems thinks so: it’s gone to court to try to prevent its competitor, Arista Networks, from building competing Ethernet switches that rely in part on commands Cisco argues it initially developed. Cisco lost the first round in a California district court, but it’s hoping for a better outcome from the Court of Appeals for the Federal Circuit.

As we explain in a brief we’ve submitted supporting Arista, Cisco is wrong. First, where the collection of commands in question is simply a group of standard, highly functional directives, arranged based on logic and industry standards, it shouldn’t be copyrightable at all. Second, any copyright that does exist must be sharply limited, as a matter of law and good practical policy. Without such limits, the software industries will find themselves embroiled in the same elaborate and expensive cross-licensing arrangements we see in the patent space and/or face an explosion of litigation. Either option will discourage innovation and competition.

So we were pleased last year when a jury found that Arista was not liable for copyright infringement based on a doctrine known as “scènes à faire.” Scènes à faire is a time-honored rule that prohibits copyright in materials that are too standard to really qualify as creative. For example, the expressive descriptions of Hogwarts—the shifting staircases, the talking paintings and so on—in J.K. Rowling’s Harry Potter books may be copyrightable, but not the idea that there would be a school for magicians. Similarly, the movie West Side Story might be copyrightable, but not the basic plot of star-crossed lovers affiliated with rival factions. Scènes à faire helps make sure that copyright can’t be used to monopolize ideas.

When it comes to computer programming and software, scènes à faire limits the ability of a copyright owner to claim copyright in basic programming elements. For example, scènes à faire prevented one company from claiming copyright infringement based on similarity between two programs’ organizational charts, as they were “simple and obvious” in light of the needs of the programs.

Here, as Arista notes in its own brief on appeal, the jury had plenty of reasons to find that any copying Arista did was noninfringing, because the part copied was nothing more than what was basic and expected in the industry. We agree. Moreover, as this case shows, strong copyright defenses, including scènes à faire, are vitally important for a thriving computer software industry, innovation, and competition. The jury got it right, and set a valuable precedent in the process.

That said, this case should never have gone to trial, and it wouldn’t have if the Federal Circuit hadn’t made a fundamental mistake in 2014 in a different case:  Oracle v. Google.

Some background is necessary here: The Federal Circuit normally doesn’t hear copyright cases, and was only hearing the Oracle case because of a quirk in patent law. (Because of the same quirk, that court will decide Cisco v. Arista.) Since the issues in Oracle did not relate to patent law, the Federal Circuit was required to follow law from the Ninth Circuit Court of Appeals. In deciding Oracle, the court considered whether under Ninth Circuit law, the section of the Copyright Act that forbids copyright protection of ideas, processes, systems, and similar concepts meant that the Java APIs were not copyrightable. In finding that the APIs were entitled to protection, the Oracle court based its decision on the belief that the Ninth Circuit would find the APIs copyrightable, because there was more than one way to express them.

However, since the Oracle decision, the Ninth Circuit decided a case about copyright in Bikram yoga poses. In the Bikram’s Yoga case, the Ninth Circuit applied the same section of the Copyright Act as in Oracle. But unlike the Federal Circuit, the Ninth Circuit determined that a “sequence” of 26 yoga poses and two breathing exercises, performed in a particular order, was not subject to copyright protection, even though there were multiple ways to sequence the poses. And if a system of yoga poses isn’t copyrightable, then a system of APIs for operating a computer program definitely isn’t. The Federal Circuit misunderstood Ninth Circuit law, and it should use this new case as a chance to fix that mistake.

Amicus briefs in support of Arista were also submitted by Professor Pam Samuelson; Public Knowledge; CCIA (along with the American Antitrust Institute); and GitHub, Inc. (along with Mozilla Corp., Engine Advocacy, and Software Freedom Conservancy). Mathworks, SAS, Oracle, and others submitted a brief in support of Cisco.

Related Cases: Oracle v. Google

Wiretap Orders That Defy Geographical Limitations Mandated by Congress Must Not Be Tolerated

Tue, 01/02/2018 - 4:00pm

The Supreme Court should recognize and give teeth to the critical, privacy-protecting limitations Congress placed on wiretaps, EFF told the court in an amicus brief we filed with the National Association of Criminal Defense Lawyers.

When law enforcement officials wiretap someone’s cell phone, the law doesn’t allow them to tap any phone they want anywhere in the country. The Wiretap Act (also known as “Title III” because it comes from Title III of the 1968 Omnibus Crime Control and Safe Streets Act) permits wiretapping, but only under the narrowest of circumstances and subject to restrictive requirements carefully drawn to protect extremely sensitive privacy interests.

One of those requirements is that judges can only authorize wiretap orders for interceptions that occur within their districts. In other words, either the cell phone, the place of interception, or both must be in the judge’s district for a wiretap to be valid under Title III. So an order issued by a judge whose district consists of a single state, say Kansas, can only authorize the interception of calls on a phone in Kansas or from an interception point in Kansas. In Dahda v. United States, a federal judge in Kansas issued a wiretap order allowing the defendants’ phones to be tapped anywhere in the country. This clearly runs counter to Title III’s geographic limitations.

There are strong policy reasons supporting these territorial limitations. A wiretap is a massive invasion of privacy because it allows the government to listen in—in real time—on our phone, text, and email conversations. Law enforcement can also access any other information—like photos or documents—that we exchange during these conversations. When Congress legalized wiretapping, it sought to ensure that a wiretap is approved, monitored, and overseen by the judge with the closest nexus to the investigation, in consultation with the prosecutors and investigators in charge of the case. Judges must closely supervise the use of wiretaps, making sure that they are still needed and are contributing useful information to prosecutors. The territorial limitations placed on wiretaps were designed to help judges keep a close watch on interceptions so they can ensure the intrusions into our private communications are as limited as possible.

Those privacy interests are even more acute when wiretaps are aimed at cell phones, which is almost always the case nowadays. (The law was amended in 1986 to extend its restrictions to electronic communications.) In 2016, over 43 million conversations were intercepted, 93% of which were from mobile devices. The amount of private information that can be gleaned from digital phones dwarfs the information that could be intercepted when wiretapping was first legalized. The devices we carry with us every day and keep at our bedsides contain intimate details of our private lives—our locations, our private texts and email, our conversations, photos, and videos. The vast majority of communications intercepted by wiretaps are non-incriminating.

Given these realities, the limitations and restrictions of Title III are even more important now than they were when the law was passed 50 years ago. Without territorial limitations on wiretaps, prosecutors could forum shop, seeking out courts that authorize the most wiretaps to get approval for their own. 

Title III has a remedy for invalid wiretap orders: it specifies that evidence gathered from a deficient wiretap order can’t be used in court against the defendants. In Dahda v. United States, we urged the Supreme Court to suppress, meaning throw out, the evidence gathered under wiretap orders that failed to meet the requirements of Title III.

We hope the Supreme Court sends a strong message to judges and prosecutors: wiretap orders that flout the territorial limitations established by Title III won’t be tolerated. 

Open Access Weathers a Governmental Sea Change: 2017 in Review

Mon, 01/01/2018 - 8:24pm

In the first few weeks of 2017, just days after President Donald Trump took office, reports emerged that the Environmental Protection Agency and the Department of Agriculture were instructing scientists on staff not to talk to the public or the press. The reports raised serious questions among open access advocates: what does it mean to advocate for public access to publicly funded scientific research at a time when the future of public funding for science itself is in question?

Put most simply, open access is the practice of making research and other materials freely available online, ideally under licenses that allow anyone to share and adapt them. Open access publishing has long been the center of a debate over the future of academic publishing: on one side of the debate sit citizen scientists, journalists, and other members of the public eager to access and use scientific research even though they can’t afford expensive journal subscriptions and don’t have institutional access to even-more-expensive online repositories. On the other, a handful of large publishers with a massive vested interest in preserving the status quo.

In recent years, the U.S. government was a key player in the fight for open access. In 2013, the White House directed all agencies that fund scientific research to enact policies requiring that the resulting research be made available to the public after a year, one of the biggest wins for open access in the past decade. More recently, the Executive Branch spearheaded strong, common-sense policies on access to government-funded software and educational resources.

With the White House Office on Science and Technology Policy—the office behind many of those common-sense policies—still vastly understaffed, it’s difficult to decipher where this administration stands on open access. A year ago, we wondered whether the White House open access memo would survive an administration bent on reversing any orders with Barack Obama’s name on them. For what it’s worth, the policies implemented under the open access memo have stayed intact, even as the 2018 budget comes with new restrictions on the sorts of research those agencies can fund.

We continue to advocate for Congress to lock open access mandates into law, but movement on that front has been slow. The Fair Access to Science and Technology Research Act (FASTR) would harden the government’s open access mandate against future administrations’ whims. It has strong support in both parties, but unfortunately, very few members of Congress seem interested in making it a priority.

In one of the many head-scratching moments of 2017, Senator Rand Paul incorporated the text of FASTR into his BASIC Research Act, a bill targeting what Sen. Paul refers to as “silly research.” We doubt that Paul’s bill will gain much momentum, but it’s very telling that even members of Congress who are very skeptical of funding for scientific research see the obvious benefits of an open access mandate.

Of course, Congress’ lack of interest in open access hasn’t stopped the big publishers from pulling every trick in the book to maintain their control of academic publishing. Last year, we noticed an attempt by Elsevier to extend its control of academic publishing beyond the traditional journals it’s dominated for decades. As we said then, the company’s strategy seemed to have become “if you can’t control the content anymore, then attempt to control the processes and infrastructures of scholarly publishing itself.” That trend continued in 2017 with Elsevier acquiring Digital Commons, the platform that serves as the technical backbone for numerous open access repositories.

2017 also saw a continuation of major publishers’ long, quixotic campaign to silence Sci-Hub. This time, the American Chemical Society convinced a court to order Internet infrastructure providers to cut access to the rogue repository, a clear example of copyright being abused as a tool of censorship.

As we enter 2018, it’s crucial that we don’t lose the progress we’ve made on securing publicly funded resources on behalf of the public. Congress must act to make sure that open access requirements survive no matter who’s running the government.

This article is part of our Year In Review series. Read other articles about the fight for digital rights in 2017.

Like what you're reading? Support digital freedom defense today! 

Broadband Privacy: 2017 Year in Review

Mon, 01/01/2018 - 12:34pm

It seems like a no-brainer that an Internet Service Provider (ISP) should have to get your permission before snooping on and using the private information you generate as you browse the Internet. In 2017, pressure from the telecom industry led Congress and the president to roll back protections for broadband privacy, but EFF fought this battle on many fronts and is still fighting it.

Late in 2016, the FCC passed rules to protect your privacy from invasions by your ISP. The rules—which prohibited things like selling your personal information to marketers, inserting undetectable tracking headers into your traffic, or recording your browsing history to build up a behavioral advertising profile on you without your consent—were a clear victory for privacy. Unsurprisingly, ISPs wishing to profit even more off of their customers were not happy about the restrictions and began lobbying hard to overturn the rules.

The telecoms’ lobbying relied on easily disprovable myths that echoed the rhetoric they offered during the net neutrality debate: in particular, the incorrect claims that the Federal Trade Commission (FTC) would be an adequate safeguard and that companies cannot compete under the current rules. EFF provided a memo to congressional staffers to rebut these misconceptions. Despite the obvious benefits the rules provided, in March Congress rushed through and passed S.J.Res.34, which overturned the hard-won protections issued by the FCC the previous year.

The fight for privacy didn’t end with the federal government. By the end of 2017, at least 21 states had introduced proposals to restore the legal privacy rights for broadband Internet users that Republicans in Congress stripped away. EFF has supported many of these efforts, submitting testimony in California, Oregon, Minnesota, and Hawaii while supporting privacy advocates in many other states. EFF continues to make the legal and technical arguments for why these privacy protections must be restored.

The fight in California foreshadows how these state-by-state battles could go. Despite strong support and a willingness by the bill’s supporters to negotiate a reasonable, balanced set of privacy protections, including support from small ISPs, California’s bill stalled at the end of the year when Google, Facebook, and the large ISPs started spreading misinformation, claiming that privacy rules would somehow help terrorists and hackers. However, you can only fool legislators for so long. It's hard to imagine any state legislator looking at the current landscape for broadband Internet—after Comcast, AT&T, and Verizon have successfully eliminated crucial protections for competition, privacy, and net neutrality—and thinking that the new status quo is just fine.

With overwhelming public support for these consumer protections, those who care about online privacy have a lot of grassroots power to compel change—power we’ll continue to use to win back the protections lost in 2017.

This article is part of our Year In Review series. Read other articles about the fight for digital rights in 2017.

Like what you're reading? Support digital freedom defense today!

Tipping the Scales on HTTPS: 2017 in Review

Sun, 12/31/2017 - 7:47pm

The movement to encrypt the web reached milestone after milestone in 2017. The web is in the middle of a massive change from non-secure HTTP to the more secure, encrypted HTTPS protocol. All web servers use one of these two protocols to get web pages from the server to your browser. HTTP has serious problems that make it vulnerable to eavesdropping and content hijacking. By adding Transport Layer Security (or TLS, a prior version of which was known as Secure Sockets Layer or SSL), HTTPS fixes most of these problems. That’s why EFF, and many like-minded supporters, have been pushing for websites to adopt HTTPS by default.
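
To make the difference concrete, here is a minimal Python sketch, with the hostname chosen purely as an illustration, of the verified TLS connection an HTTPS client sets up; the same request sent as plain HTTP over port 80 could be read or rewritten by anyone on the network path.

```python
import socket
import ssl

HOST = "www.eff.org"  # illustrative hostname; any HTTPS site works

# create_default_context() turns on certificate verification and
# hostname checking -- the protections that distinguish HTTPS from
# eavesdroppable, hijackable plain HTTP.
context = ssl.create_default_context()

with socket.create_connection((HOST, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        # The handshake succeeded: the channel is encrypted and the
        # server proved its identity. Print the negotiated TLS version.
        print("negotiated:", tls.version())
```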

In February, the scales tipped. For the first time, approximately half of Internet traffic was protected by HTTPS. Now, as 2017 comes to a close, an average of 66% of page loads on Firefox are encrypted, and Chrome shows even higher numbers.

At the beginning of the year, Let’s Encrypt had issued about 28 million certificates. In June, it surpassed 100 million certificates. Now, Let’s Encrypt’s total issuance volume has exceeded 177 million certificates. Certificate Authorities (CAs) like Let’s Encrypt issue signed, digital certificates to website owners that help web users and their browsers independently verify the association between a particular HTTPS site and a cryptographic key. Let's Encrypt stands out because it offers these certificates for free. And, with EFF’s Certbot, they are easier than ever for webmasters and website administrators to get.
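
As a rough illustration of what a CA-signed certificate binds together, the Python sketch below, again using an illustrative hostname, pulls the certificate a live server presents and prints who issued it and when it expires.

```python
import socket
import ssl

HOST = "www.eff.org"  # illustrative hostname

context = ssl.create_default_context()
with socket.create_connection((HOST, 443)) as sock:
    with context.wrap_socket(sock, server_hostname=HOST) as tls:
        cert = tls.getpeercert()  # the verified certificate, as a dict

# The certificate ties the site's name to a public key vouched for by
# the issuing CA; a Let's Encrypt certificate shows "Let's Encrypt" as
# the issuer's organizationName here.
print("issuer: ", dict(item[0] for item in cert["issuer"]))
print("expires:", cert["notAfter"])
```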

Throughout the entire year, projects like Secure the News and Pulse have been tracking HTTPS adoption among news sites and government sites, respectively.

Browsers have been pushing the movement to encrypt the web further, too. Early this year, Chrome and Firefox started showing users “Not secure” warnings when HTTP websites asked them to submit password or credit card information. In October, Chrome expanded the warning to cover all input fields, as well as all pages viewed in Incognito mode. Chrome eventually plans to show a “Not secure” warning for all HTTP pages.

One of the biggest CAs, Symantec, was threatened with removal of trust by Firefox and Chrome. Symantec had long been held up as an example of a CA that was “too big to fail.” Removing trust directly would break thousands of important websites overnight. However, browsers found many problems with Symantec’s issuance practices, and the browsers collectively decided to make the leap, using a staged distrust mechanism that would minimize impact to websites and people using the Internet. Symantec subsequently sold their CA business to fellow CA DigiCert for nearly a billion dollars, with the expectation that DigiCert’s infrastructure and processes will issue certificates with fewer problems. Smaller CAs WoSign and StartCom were removed from trust by Chrome and Firefox last year.

The next big step in encrypting the web is ensuring that most websites default to HTTPS without ever sending people to the HTTP version of their site. The technology to do this is called HTTP Strict Transport Security (HSTS), and is being more widely adopted. Notably, the registrar for the .gov TLD announced that all new .gov domains would be set up with HSTS automatically. A related and more powerful setting, HTTP Public Key Pinning (HPKP), was targeted for removal by Chrome. The Chrome developers believe that HPKP is too hard for site owners to use correctly, and too risky when used incorrectly. We believe that HPKP was a powerful, if flawed, part of the HTTPS ecosystem, and would rather see it reformed than removed entirely.
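
As a sketch of what turning on HSTS amounts to, the minimal Python server below, with an illustrative port and max-age, sends the Strict-Transport-Security header. Note that browsers only honor the header when it arrives over HTTPS, so a real deployment would send it from a TLS-enabled server.

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class HSTSHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        self.send_response(200)
        # Ask the browser to use HTTPS for this site for the next year
        # (31536000 seconds), including all subdomains. Browsers ignore
        # this header unless it is delivered over HTTPS.
        self.send_header("Strict-Transport-Security",
                         "max-age=31536000; includeSubDomains")
        self.send_header("Content-Type", "text/plain")
        self.end_headers()
        self.wfile.write(b"HSTS header sent\n")

if __name__ == "__main__":
    HTTPServer(("localhost", 8443), HSTSHandler).serve_forever()
```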

The Certification Authority Authorization (CAA) standard became mandatory for all CAs to implement this year. CAA allows site owners to specify in DNS which CAs are allowed to issue certificates for their site, which may reduce misissuance events. Let's Encrypt led the way on this by enforcing CAA from first launch, and EFF is glad to see this protection extended to the broader CA ecosystem.
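
To see what a CAA policy looks like, here is a small sketch that queries a domain's CAA records using the third-party dnspython package (an assumption: it must be installed separately, and the domain is only an example; many domains publish no CAA records at all, which the resolver reports as NoAnswer).

```python
import dns.resolver  # third-party "dnspython" package

DOMAIN = "eff.org"  # illustrative domain; substitute any other

try:
    # Each CAA record names a CA permitted to issue for the domain,
    # e.g.:  0 issue "letsencrypt.org"
    for record in dns.resolver.resolve(DOMAIN, "CAA"):
        print(record.flags, record.tag.decode(), record.value.decode())
except dns.resolver.NoAnswer:
    print(DOMAIN, "publishes no CAA records; any trusted CA may issue.")
```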

There’s plenty to look forward to in 2018. In a significant improvement to the TLS ecosystem, for example, Chrome plans to require Certificate Transparency starting next April. As browsers and users alike pressure websites toward ubiquitous HTTPS, and as the process of getting a certificate gets easier and more intuitive for webmasters, we expect 2018 to be another banner year for HTTPS growth and improvement.

We particularly thank Feisty Duck for the Bulletproof TLS Newsletter, which provides updates on many of these topics.

This article is part of our Year In Review series. Read other articles about the fight for digital rights in 2017.

Like what you're reading? Support digital freedom defense today!

Communities from Coast to Coast Fight for Control Over Police Surveillance: 2017 in Review

Sun, 12/31/2017 - 4:05pm

Americans in 2017 lived under a threat of constant surveillance, both online and offline. While the battle to curtail unaccountable and unconstitutional NSA surveillance continued this year with only limited opportunities for reform appearing in Congress, the struggle to secure community control over surveillance by local police made dramatic and expanding strides in communities across the country.

In July, Seattle passed a law making it the nation’s second jurisdiction to require law enforcement agencies to seek community approval before acquiring surveillance technology. Santa Clara County in California, which encompasses most of Silicon Valley, pioneered this reform in spring 2016 before similar proposals later spread across the country.

Two other jurisdictions in the San Francisco Bay Area—the cities of Oakland and Berkeley—have conducted multiple public hearings on proposed reforms to require community control. Both cities are nearing decision points for local legislators who in 2018 will consider whether to empower themselves and their constituents, or whether instead to allow secrecy and unaccountability to continue unfettered.

Other communities across California have also mobilized. In addition to Oakland and Berkeley, EFF has supported proposed reforms in Palo Alto and before the Bay Area Rapid Transit Board, and also addressed communities seeking similar measures in Davis, Humboldt County (where a local group in the Electronic Frontier Alliance hosted two public forums in March and another in December), and Santa Cruz (where local activists began a long running local dialog in 2016).

Reflecting this interest from across the state, the California State Senate approved a measure, S.B. 21, which would have applied the transparency and community control principles of the Santa Clara County ordinance to hundreds of law enforcement agencies across the state. While the measure was successful before the state Senate, and also cleared two committees in the State Assembly, it died without a vote in the state Assembly’s appropriations committee.

While S.B. 21 was not enacted in 2017, we anticipate similar measures advancing in communities across California in 2018. In many other states, municipal bodies have already begun considering analogous policies.

In New York City, over a dozen council members have supported the Public Oversight of Surveillance Technology (POST) Act, which would require transparency before the New York Police Department acquires new surveillance technology. This is an important step forward, though the bill lacks the reform elements that, in Santa Clara County and Seattle, placed communities in control of police surveillance. The support of local policymakers may help bring into the public debate facts about the proposed reform that appear to have escaped its opponents, including Mayor Bill de Blasio.

In Cambridge, Massachusetts, policymakers began a conversation last year that continued throughout 2017. This October, a coalition of local allies hosted a public forum about a proposed ordinance that the City Council will reportedly consider in 2018. The coalition included Digital Fourth (a member of the EFA), the Pirate Party, and students at the Berkman Klein Center for Internet & Society at Harvard University, one of whom wrote that “[w]ithout appropriate controls, technologies intended for one purpose can be twisted for another.”

In the Midwest, Missouri has emerged as a potentially crucial state in the nationwide battle over surveillance technology. Years after grassroots opposition to police violence vaulted the town of Ferguson to international recognition, St. Louis city policymakers introduced B.B. 66, a measure modeled closely on Santa Clara County’s.

While the Missouri state legislature has yet to consider a similar proposal, it did consider—without yet adopting—another proposed reform to limit law enforcement surveillance. In particular, S.B. 84 would have limited the parameters under which state and local police could deploy cell-site simulators, which use cell phone networks to track a user’s location or monitor data or voice transmissions. This is just one of many invasive surveillance platforms available to law enforcement.

Nearby states have also taken notice of cell-site simulators. Illinois has already enacted a strong law constraining the use of those particular tools, while Nebraska considered a bill that would have prohibited police from using cell-site simulators at all. This established support for limiting one surveillance tool across the region suggests potential traction for process reforms, like Seattle’s and Santa Clara County’s, that would apply to all platforms. 

The fight against unaccountable, secret government surveillance will continue across the United States in 2018. While Congress failed this year to enact legislation protecting the American people from NSA surveillance, local and state legislatures are heeding the call to conduct effective oversight and to empower the communities they represent.

This article is part of our Year In Review series. Read other articles about the fight for digital rights in 2017.

Like what you're reading? Support digital freedom defense today!