EFF: Updates

EFF's Deeplinks Blog: Noteworthy news from around the internet

Video Game Developer Says He Won't Send a Takedown of a Bad Review, Does So Anyway

Fri, 03/09/2018 - 4:09pm

Oh what a tangled web we weave when first we get into a Twitter fight with someone who gave our video game a bad review on YouTube. And when we say that we would never send a DMCA takedown for it. And when one mysteriously turns up anyway.

This is one of the most confusing series of events ever to surround a takedown. First, Richard La Ruina, a man who claims to be a top pickup artist, created a somewhat controversial dating game called Super Seducer. Then, YouTuber IAmPattyJack (also known as Chris Hodgkinson) covered the game in his “_____ Is the Worst Game Ever” series.

La Ruina reacted poorly to the bad review Hodgkinson gave Super Seducer and showed up in the video’s comments when it had only about 100 views. Hodgkinson and La Ruina then got into it on Twitter, an exchange that eventually ended with La Ruina acknowledging that giving a review copy to someone who does a “Worst Game Ever” series was perhaps not the smartest move.

That’s when it got weird. Someone else on Twitter applauded La Ruina for admitting he was wrong instead of sending a DMCA takedown. La Ruina responded “ah yeah we have our DMCA subscription,” which is not a thing. (As others have pointed out, he may have meant a service that makes filing DMCA takedowns easier.)

Hodgkinson showed back up to say that this was not something La Ruina wanted to do, and La Ruina said he “decided not to, I believe in freedom and democracy and all that american [sic] stuff. We only DMCA when people rip our products.” It got weirder when, contrary to what La Ruina had stated on Twitter, a DMCA notice resulted in the review being taken down. Hodgkinson then got an apology letter from La Ruina’s PR people saying the notice had been retracted and offering to pay for any income Hodgkinson lost as a result of the video vanishing. La Ruina sent Hodgkinson $50, which Hodgkinson said he did not want. It took a while, but the video is finally back on YouTube.

La Ruina’s apparent first instinct—that he should not send a DMCA takedown aimed at a review—was the correct one. A review is not infringement and therefore not what takedown notices are for. But La Ruina also wrongly framed it as his choice, stemming from benevolence on his part, rather than as a necessary aspect of the takedown process. And that is where we constantly run into problems. DMCA takedowns are supposed to be for infringement, not for silencing criticism. But the perception that they are a tool for silencing criticism is so pervasive that merely following the rules makes you look like the good guy.

Even with all of those factors, the video was still down for days. It seems that the DMCA ends up being a censorship tool even when people say they will do the right thing.

This is an entry in the Takedown Hall of Shame, highlighting the worst of bogus copyright and trademark complaints.

Senators Pressure Platforms for Private Censorship of Drug Information

Fri, 03/09/2018 - 1:18pm

Last month Senators Chuck Grassley (R-Iowa), Dianne Feinstein (D-Calif.), Amy Klobuchar (D-Minn.), John Kennedy (R-La.) and Sheldon Whitehouse (D-R.I.) separately wrote to Google, Microsoft, Yahoo and Pinterest accusing them of facilitating trade in illegal narcotics and prescription drugs. The near-identical letters demand that each of the recipients:

consider removing from its platform content that advertises the use of or enables the sale of illicit narcotics, including the sale of prescription drugs without a valid prescription.  We further request that [it] consider action to ensure that future, similar content is banned.

The letter specifies that the platforms concerned should censor search results for illicit drugs, and ensure that when users search for prescription medicines they be "automatically directed" to approved U.S.-based suppliers. Attachments to the letters include printouts of organic search listings, with a few results on each page circled, apparently containing information about suppliers who will sell drugs without prescription. (The same printouts reveal some stern anti-drug warnings in the top few results, both organic and paid.)

The letters were announced in a mailing to members of the Alliance for Safe Online Pharmacies (ASOP), a pharma industry lobby group, on the same day that the letters were sent. (Beyond that, we don't know whether there was any coordination between the Senators and ASOP in drafting the letter; and because Congress is exempt from FOIA requests, it would be difficult for us to find out.)

ASOP is also one of the principal contributors to United States Trade Representative (USTR) reports such as the Special 301 Report and the Notorious Markets List, and it makes similar censorship demands in its submissions to those reports. For example in its submission [PDF] to the 2017 Notorious Markets report, ASOP recommends that domain name registrars should "voluntarily lock and suspend illegitimate websites" rather than requiring a court order.

By "illegitimate", ASOP doesn't mean that the website is selling fake drugs; its complaint extends to branded drugs that are merely "transported without the requisite quality controls" (i.e., sent through the mail). Neither is it targeting only recreational drugs; ASOP's submission acknowledges that most overseas drug sales are for "chronic illness and/or maintenance drugs for diseases such as HIV/AIDS, hypertension, [and] hypercholesterolemia." Rather, an "illegitimate" online pharmacy in ASOP lingo is one that doesn't comply with the U.S. law that prohibits online medicine sales from overseas—even though, because they are overseas, such pharmacies are not actually subject to U.S. law in the first place.

There might well be a case to be made for tighter regulation of sales of prescription and non-prescription drugs online. But to progress from that proposition to the proposal that information about such drugs should be censored from search engines and online marketplaces, and without a court order at that, is quite a leap. It's concerning that ASOP's recommendations are often incorporated holus-bolus into the USTR's reports without independent verification, and that the responsibility for fact-checking its claims is placed on rebuttal submissions from third parties.

We are even more concerned about the approach taken by the Senators who wrote the letter to major platforms. For U.S. Senators, with the imprimatur of official authority that their offices represent, to prevail on platforms to privately censor content is a blatant form of Shadow Regulation, intended to intimidate them into compliance.

If the Senators are serious in their desire for these Internet platforms to censor organic search results, they could introduce a bill aimed at achieving that objective, and have it debated in both houses of Congress. Instead, knowing that such a law would likely be unconstitutional, they are seeking to achieve the same result without a transparent and accountable lawmaking process. The Senators should know better, and we encourage platforms receiving such letters to resist these extra-legal demands.

Stop SESTA/FOSTA: Don’t Let Congress Censor the Internet

Thu, 03/08/2018 - 11:08am

The U.S. Senate is about to vote on a bill that would be disastrous for online speech and communities.

The Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA, H.R. 1865) might sound appealing, but it would do nothing to fight sex traffickers. What it would do is silence a lot of legitimate speech online, shutting some voices out of online spaces.

This dangerous bill has already passed the House of Representatives, and it’s expected to come up for a Senate vote in the next few days. If you care about preserving the Internet as a place where everyone can gather, learn, and share ideas—even controversial ones—it’s time to call your senators.

Take Action


The version of FOSTA that’s passed the House is actually a Frankenstein combination of two different bills, an earlier version of FOSTA and a bill called the Stop Enabling Sex Traffickers Act (SESTA).

How would one bill do so much damage to communities online? Simple: it would scare online platforms into censoring their users.

Online platforms are enabled by a law referred to as Section 230, which protects them from liability for some types of speech by their users. Without Section 230, social media would not exist in its current form, and neither would the plethora of nonprofit and community-based online groups that serve as crucial outlets for free expression and knowledge sharing.

If Congress undermined these important protections by passing SESTA/FOSTA, many online platforms would be forced to place strong restrictions on their users’ speech, censoring a lot of people in the process. And as we’ve discussed before, when platforms clamp down on their users’ speech, marginalized voices are disproportionately silenced.

Censorship is not the solution to sex trafficking. This is our last chance: call your senators now and urge them to oppose SESTA/FOSTA.

Take Action


Fair Use and Platform Safe Harbors in NAFTA

Wed, 03/07/2018 - 6:05pm

Negotiators from Mexico, Canada and the United States were in Mexico City this week for a tense seventh round of negotiations over a modernized version of NAFTA, the North American Free Trade Agreement. With President Trump's announcement of tough new unilateral tariffs on imports of steel and aluminum, and the commencement of the Mexican election season later this month, pressure to conclude the deal—or for the United States to withdraw from it—is mounting. In all of this, there is a risk that the issues that are of concern to Internet users are being sidelined.

Protesters at the 7th round of NAFTA

One of these issues is the need for balance in the intellectual property chapter of the agreement, in particular by requiring the countries to have copyright limitations and exceptions such as fair use. This is particularly important if, as we have reason to fear, the rest of the chapter contains provisions that exceed the international copyright norms established in the TRIPS Agreement. According to the latest unofficial information that we have, the United States Trade Representative (USTR) is not negotiating for a fair use provision in NAFTA. Without such a provision, the new NAFTA will be worse than even the original version of the TPP, which did have a copyright balance provision, albeit an optional and weak one.

The new NAFTA should also include platform safe harbors, to ensure that Internet intermediaries, such as ISPs, social networking websites, open WiFi hotspots or caching providers, are not held liable for the speech of their users. EFF addressed this issue in its remarks at ¿Modernización o retroceso? Amenazas al medio ambiente e internet en la renegociación del TLCAN (Modernization or Regression? Threats to the Environment and the Internet in the NAFTA Renegotiation), a forum held at the Mexican Senate on Friday, March 2.

We emphasized in our presentation that we aren't arguing for platform safe harbors for the benefit of the large platforms themselves. The platforms are far from perfect, and the decisions that they make to restrict users' content are frequently wrong. But that's exactly why safe harbors are important. Without safe harbor rules, the Internet platforms that most Internet users depend upon to communicate and share online are likely to censor more of their users' speech, in an effort to reduce their own possible legal exposure.

A Tale of Two Safe Harbors

Two separate safe harbor provisions are planned for NAFTA, and both are in trouble. The first is the copyright safe harbor, which in the U.S. is based on section 512 of the Digital Millennium Copyright Act (DMCA). In a nutshell, this safe harbor would protect Internet platforms from liability when their users infringe copyright, so long as the platforms take the allegedly infringing material down after they get a complaint. Canada also has a copyright safe harbor system, which is a little different (and better) because it doesn't require the platform to take the content down, only to notify the person who uploaded it about the complaint.

The copyright safe harbor in NAFTA is under pressure from rightsholders who want to impose secondary liability on platforms that don't do enough to limit copyright infringement by users. Due to the secrecy surrounding the agreement we haven't seen exactly what the more limited provision might look like, but we can guess from industry stakeholder lobbying that it will include a requirement to adopt effective online enforcement regimes [PDF], possibly similar to the SOPA-like censorship system currently under consideration in Canada.

The second safe harbor under consideration in NAFTA would apply to almost everything else that isn't copyright, such as defamation and hateful speech. In the US, that safe harbor is found in Section 230 (also called CDA 230). Unlike the DMCA, it doesn't require the platform in question to automatically take anything down. For example, under U.S. law a search engine isn't required to censor its search results if one of the results that comes up is alleged to be defamatory. And a good thing too, or we would see even more private censorship.

Mexico and Canada don't have an equivalent to Section 230, and the U.S. is proposing that they should—not so much because it promotes freedom of expression online, but because it would make it easier for American online platforms to operate safely and legally throughout the region. From what we have heard in the corridors of the closed negotiations, Canada and Mexico are pushing back hard on a Section 230-like provision in NAFTA, but for now the USTR is continuing to maintain it as a negotiating objective.

It would be great if EFF and other groups representing users could speak directly with negotiators on issues such as fair use, the need to avoid placing restrictive conditions on copyright safe harbor rules, and the benefits that a Section 230-style safe harbor could bring to the online freedom of expression of Internet users throughout North America. But unfortunately the NAFTA negotiations are so closed and opaque that it's difficult for us to do that. We'll keep doing what we can to let the negotiators know our concerns, but ultimately what's needed is a much more open and inclusive process, to ensure that trade agreements such as NAFTA reflect the needs of all rather than just those of well-connected corporate lobbies.

Ten Hours of Static Gets Five Copyright Notices

Wed, 03/07/2018 - 4:30pm

Sebastian Tomczak blogs about technology and sound, and has a YouTube channel. In 2015, Tomczak uploaded a ten-hour video of white noise. Colloquially, white noise is persistent background noise that can be soothing or that you don’t even notice after a while. More technically, white noise is many frequencies played at equal intensity. In Tomczak’s video, that amounted to ten hours of, basically, static.
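To make "equal intensity" concrete, here is a minimal sketch, our illustration rather than Tomczak's actual process, that generates a short white noise clip with NumPy and checks that its power is spread roughly evenly across frequencies:

```python
# A minimal illustration (not Tomczak's actual process): white noise is just
# random samples, so no single frequency dominates its spectrum.
import numpy as np

SAMPLE_RATE = 44100              # samples per second (CD quality)
DURATION = 10                    # seconds; Tomczak's video ran ten hours

rng = np.random.default_rng(0)
samples = rng.uniform(-1.0, 1.0, SAMPLE_RATE * DURATION)

# The magnitude spectrum of white noise is approximately flat: every
# frequency bin carries about the same energy.
spectrum = np.abs(np.fft.rfft(samples))
print(f"mean bin magnitude: {spectrum.mean():.1f}, "
      f"max/mean ratio: {spectrum.max() / spectrum.mean():.1f}")
```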

In the beginning of 2018, as a result of YouTube’s Content ID system, a series of copyright claims were made against Tomczak’s video. Five different claims were filed on sound that Tomczak created himself. Although the claimants didn’t force Tomczak’s video to be taken down, they all opted to monetize it instead. In other words, ads on the ten-hour video would now generate revenue for those claiming copyright on the static.

Normally, getting out of this arrangement would have required Tomczak to go through the lengthy counter-notification process, but Google decided to drop the claims. Tomczak believes it’s because of the publicity his story got. But hoping your story goes viral, or braving the intimidating counter-notification system, is not a workable way to fight an improper claim.

YouTube’s Content ID system works by having rightsholders upload their content into a database maintained by YouTube. New uploads are compared against that database, and when the algorithm detects a match, the copyright holders are informed. They can then make a claim and have the video taken down, or they can simply opt to make money from ads placed on it.
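To make the mechanics easier to picture, here is a toy sketch of fingerprint-style matching. Content ID's real implementation is proprietary and far more sophisticated; the landmark scheme and the 50% threshold below are invented purely for illustration:

```python
# Toy fingerprint matching, loosely in the spirit of systems like Content ID.
# The real system is proprietary; this sketch only shows the general shape:
# hash coarse spectral features, index them, and flag uploads that overlap.
import numpy as np

FRAME = 4096  # samples per analysis frame (real systems tolerate
              # misalignment and distortion; this toy version does not)

def fingerprint(samples):
    """Reduce audio to a set of coarse spectral-peak hashes ("landmarks")."""
    hashes = set()
    for start in range(0, len(samples) - FRAME + 1, FRAME):
        spectrum = np.abs(np.fft.rfft(samples[start:start + FRAME]))
        # Hash each frame by its five strongest coarse frequency bins.
        hashes.add(tuple(sorted(np.argsort(spectrum)[-5:] // 16)))
    return hashes

# A rightsholder "uploads" a reference clip into the database.
rng = np.random.default_rng(0)
reference = rng.uniform(-1.0, 1.0, FRAME * 300)
database = {"claimed_track": fingerprint(reference)}

def scan(upload):
    """Compare a new upload against every registered fingerprint."""
    fp = fingerprint(upload)
    for name, ref in database.items():
        overlap = len(fp & ref) / len(fp)
        if overlap > 0.5:  # made-up threshold standing in for real tuning
            print(f"claim triggered by {name}: {overlap:.0%} hash overlap")

scan(reference[FRAME * 50:FRAME * 150])    # excerpt of the reference: flagged
scan(rng.uniform(-1.0, 1.0, FRAME * 100))  # unrelated audio: no claim here
```

Note that the matcher sees only hashes, never intent: it cannot distinguish licensed use, fair use, or coincidental similarity. Real matchers are also built to be far more forgiving than this toy, so that re-encoded or slightly altered copies still match, and that forgiveness is exactly what lets featureless audio like static collide with other featureless audio.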

And so it is that an automated filter matched part of ten hours of white noise to, in one case, two other white noise videos owned by the same company, resulting in Tomczak getting copyright notices.

Copyright bots like Content ID are tools and, like any tool, can be easily abused. First of all, they can match content but can’t tell the difference between infringement and fair use. And, as happened in this case, they can match similar-sounding generic noise. These mistakes don’t make the bots great at protecting free speech.

Some lobbyists have advocated for these kinds of bots to be required for platforms hosting third-party content. Beyond the threat to speech, this would be a huge and expensive hurdle for new platforms trying to get off the ground. And, as we can see from this example, it doesn’t work properly without a lot of oversight.

This article is part of the Takedown Hall of Shame, which collects the worst of the worst of bogus copyright and trademark complaints that have threatened all kinds of creative expression on the Internet.


Geek Squad's Relationship with FBI Is Cozier Than We Thought

Tue, 03/06/2018 - 1:59pm

After the prosecution of a California doctor revealed the FBI’s ties to a Best Buy Geek Squad computer repair facility in Kentucky, new documents released to EFF show that the relationship goes back years. The records also confirm that the FBI has paid Geek Squad employees as informants.

EFF filed a Freedom of Information Act (FOIA) lawsuit last year to learn more about how the FBI uses Geek Squad employees to flag illegal material when people pay Best Buy to repair their computers. The relationship potentially circumvents computer owners’ Fourth Amendment rights.

The documents released to EFF show that Best Buy officials have enjoyed a particularly close relationship with the agency for at least 10 years. For example, an FBI memo from September 2008 details how Best Buy hosted a meeting of the agency’s “Cyber Working Group” at the company’s Kentucky repair facility.

The memo and a related email show that Geek Squad employees also gave FBI officials a tour of the facility before their meeting, and they make clear that the law enforcement agency’s Louisville Division “has maintained close liaison with the Geek Squad’s management in an effort to glean case initiations and to support the division’s Computer Intrusion and Cyber Crime programs.”

Another document records a $500 payment from the FBI to a confidential Geek Squad informant. This appears to be one of the same payments at issue in the prosecution of Mark Rettenmaier, the California doctor who was charged with possession of child pornography after Best Buy sent his computer to the Kentucky Geek Squad repair facility.

Other documents show that over the years of working with Geek Squad employees, FBI agents developed a process for investigating and prosecuting people who sent their devices to the Geek Squad for repairs. The documents detail a series of FBI investigations in which a Geek Squad employee would call the FBI’s Louisville field office after finding what they believed was child pornography.

The FBI agent would show up, review the images or video, and determine whether they believed the content was illegal. After that, agents would seize the hard drive or computer and send it to another FBI field office near where the owner of the device lived. Agents at that local FBI office would then investigate further, and in some cases try to obtain a warrant to search the device.

Some of these reports indicate that the FBI treated Geek Squad employees as informants, identifying them as “CHS,” which is shorthand for confidential human sources. In other cases, the FBI identifies the initial calls as coming from Best Buy employees, raising questions as to whether certain employees had different relationships with the FBI.

The documents released to EFF concerning the investigation into Rettenmaier’s computers do not appear to have been made public in that prosecution. They raise additional questions about the level of cooperation between the company and law enforcement.

For example, documents reflect that Geek Squad employees only alert the FBI when they happen to find illegal materials during a manual search of images on a device and that the FBI does not direct those employees to actively find illegal content.

But some evidence in the case appears to show Geek Squad employees did make an affirmative effort to identify illegal material. For example, the image found on Rettenmaier’s hard drive was in an unallocated space, which typically requires forensic software to find. Other evidence showed that Geek Squad employees were financially rewarded for finding child pornography. Such a bounty would likely encourage Geek Squad employees to actively sweep for suspicious content.

Although these documents provide new details about the FBI’s connection to Geek Squad and its Kentucky repair facility, the FBI has withheld a number of other documents in response to our FOIA suit. Worse, the FBI has refused to confirm or deny to EFF whether it has similar relationships with other computer repair facilities or businesses, despite our FOIA specifically requesting those records. The FBI has also failed to produce documents that would show whether the agency has any internal procedures or training materials that govern when agents seek to cultivate informants at computer repair facilities.

We plan to challenge the FBI’s stonewalling in court later this spring. In the meantime, you can read the documents produced so far here and here.

Related Cases: FBI Geek Squad Informants FOIA Suit

Namecheap Relaunches Move Your Domain Day to Support Internet Freedom

Tue, 03/06/2018 - 1:42pm

Domain name registrar Namecheap has relaunched Move Your Domain Day, encouraging customers to raise money for online freedom with every domain move. Namecheap will donate up to $1.50 per domain transfer to the Electronic Frontier Foundation when customers switch to its service on March 6.

With this year’s promotion, Namecheap hopes to draw attention and much-needed funding to EFF’s work fighting for Internet freedom. That work is especially urgent after the Federal Communications Commission’s disappointing move to abandon landmark net neutrality and broadband privacy protections. Despite this setback, EFF is committed to defending the open web we love. If you’re in the U.S., visit our action center and tell your representatives to restore net neutrality. Not sure where your lawmakers stand on the issue? You can use EFF’s handy tool to check your reps.

The original Move Your Domain Day came into being in 2011 when popular domain name registrar GoDaddy spoke out in support of the hugely unpopular Internet blacklist bills SOPA and PIPA. The ensuing backlash from Internet users led to a call for customers to leave GoDaddy in favor of companies better-aligned with their online freedom goals. As a result, the first Move Your Domain Day raised over $64,000 for EFF’s work on this and other issues. The response reflected the overwhelming public sentiment that eventually toppled SOPA/PIPA and proved Internet users are powerful when they work together.

We are grateful to Namecheap for including us in this year’s campaign and for standing on EFF’s side in numerous online rights battles over the years. We’re also grateful to EFF’s 44,000 members around the world for ensuring that Internet users have an advocate.

More information on Move Your Domain Day: https://www.namecheap.com/promotions/move-your-domain-day

Offline/Online Project Highlights How the Oppression Marginalized Communities Face in the Real World Follows Them Online

Tue, 03/06/2018 - 1:44am

People in marginalized communities who are targets of persecution and violence—from the Rohingya in Burma to Native Americans in South Dakota—are using social media to tell their stories, but finding that their voices are being silenced online.

This is the tragic and unjust consequence of content moderation policies of companies like Facebook, which is deciding on a daily basis what can be and can’t be said and shown online. Platform censorship has ratcheted up in these times of political strife, ostensibly to combat hate speech and online harassment. Takedowns and closures of neo-Nazi and white supremacist sites have been a matter of intense debate. Less visible is the effect content moderation is having on vulnerable communities.

Flawed rules against hate speech have shut down online conversations about racism and harassment of people of color. Ambiguous “community standards” have prevented Black Lives Matter activists from showing the world the racist messages they receive. Rules against depictions of violence have removed reports about the Syrian war and accounts of human rights abuses of Myanmar's Rohingya. These voices, and the voices of aboriginal women in Australia, Dakota pipeline protestors and many others are being erased online. Their stories and images of mass arrests, military attacks, racism, and genocide are being flagged for takedown by Facebook. The powerless struggle to be heard in the first place; online censorship further marginalizes vulnerable communities. This is not OK.

In response, EFF and Visualizing Impact launched an awareness project today that highlights the online censorship of communities across the globe that are struggling or in crisis. Offline/Online is a series of visuals demonstrating that the inequities and oppression these communities face in the physical world are being replicated online. The visuals can be downloaded and shared on Twitter, Facebook, and Snapchat, or printed out for distribution.

In one, the displacement of nearly 700,000 Rohingya Muslims from Myanmar because of state violence is represented in a photo showing Rohingya children trying to board a small boat. Rohingya refugees, many of whom are women and children, are arriving in Bangladesh with wounds from gunshot and fire, according to the United Nations.

And online? Facebook is an essential means of communication in Myanmar. Activists there and in the West have documented the violence against the Rohingya online, only to have their Facebook posts removed and accounts suspended.

Inequity offline, censorship online.

The EFF/Visualizing Impact project exposes this pattern among Palestinians, aboriginal women in Australia, Native Americans, Dakota pipeline protestors, and black Americans. We believe this is just the tip of the iceberg. We are already far down the slippery slope from judicious moderation of online content to outright censorship. With two billion Facebook users worldwide, there are likely more vulnerable communities being subject to online censorship.

Our hope is that activists, concerned citizens, and online communities will post and share Inequity Offline/Censorship Online visuals (found here) many times, raising awareness about the impact of censorship on marginalized communities—a story that is underreported. Sharing the visuals is a step all of us can take to combat online censorship. It may help restore the speech and voices being erased online.

Blunt Measures on Speech Serve No One: The Story of San Diego CityBeat

Mon, 03/05/2018 - 2:40pm

It’s no secret: Social media has changed the way that we access news. According to the Pew Research Center, two-thirds of Americans report getting at least some of their news on social media. Another study suggests that globally, for those under 45, online news is now as important as television news. But thanks to platforms’ ever-changing algorithms, content policies, and moderation practices, news outlets face significant barriers to reaching online readers.

San Diego CityBeat's recent experience offers a sad case in point. CityBeat is an alt-weekly focusing on news, music, and culture. Founded in 2002, the publication has a print circulation of 44,000 and is best known for its independence and no-holds-barred treatment of public officials and demo tapes. The site is also known for its quirky—and, it turns out, controversial—headlines.

It was one of those headlines that caused CityBeat to run afoul of Facebook’s censors. In late November, the platform removed links posted by CityBeat on its own page to a piece by popular columnist Alex Zaragoza. Her piece, entitled “Dear dudes, you’re all trash,” critiqued men for their complacency and surprise in the light of several high-profile sexual assault and harassment scandals. Zaragoza's similar post on her own timeline was also removed.

Ryan Bradford, the web editor of CityBeat, said that Facebook notified him about the post on a weekend. “It didn’t really occur to me how serious it was” at first, he said. “We’d been flagged for content before, [such as] artistic images that contain nudity.”

He had posted the link to CityBeat’s page a few days prior, even including the article’s sub-hed: “Even the ‘good ones’ are safe in their obliviousness and complacency.” The message he received from Facebook pointed him to the Community Standards, but—as was the case with Egyptian journalist Wael Abbas—did not explicitly state which rule the content had violated. Users frequently complain that Facebook provides scant explanation for its removals.

Bradford thought of appealing but, he told us, “Sending a complaint seemed futile. It feels like you’re sending it out into the ocean.” And in this case, appealing wouldn't have been an option, as Facebook only allows users to appeal account deactivations, not removals of individual items.

By not notifying users about how their content has violated the rules, the company is setting up users for failure. Users must receive clear information about the rules they've violated and how they can appeal content decisions.  

As we’ve said previously, private censorship isn’t the best way to fight hate or defend democracy. Corporations are often in a tough position when it comes to dealing with hate speech and other content, but blunt measures that classify a nuanced article in a reputable publication about sexual assault as verboten due to harsh language serve no one. Although corporations have the right to make their own decisions about what types of content users can post, they should seek to maximize freedom of expression. CEO Mark Zuckerberg claims that the company stands for freedom of speech, but the decision to ban Zaragoza’s piece says otherwise.

Or, as Bradford puts it: “To start censoring innocuous stuff that ultimately sends a positive message is a detriment to the online community.”

You can read more about our position on private censorship here, and learn more about the issue at Onlinecensorship.org.

Work with EFF This Summer! Apply to be a Google Public Policy Fellow

Fri, 03/02/2018 - 6:12pm

If you’re a student who is passionate about emerging Internet and technology policy issues, come work with EFF this summer as a Google Public Policy Fellow! This is a paid opportunity for students currently enrolled in higher education institutions to work alongside EFF’s international team on projects advancing debate on key public policy issues.

EFF is looking for someone who shares our passion for the free and open Internet. You'll have the opportunity to work on a variety of issues, including censorship and global surveillance. Applicants must have strong research and writing skills, the ability to produce thoughtful original policy analysis, and a talent for communicating with many different types of audiences; they must also be independently driven. More specific information can be found here.

  • Program timeline is June 5, 2018 - August 11, 2018, with regular programming throughout the summer. If selected, you can work with EFF to adjust start and completion dates.
  • The application period opens Friday, March 2, 2018, and all applications must be received by midnight ET on Tuesday, March 20, 2018.
  • The accepted applicant will receive a stipend of US$7,500 for their 10-week Fellowship.

To apply with the Electronic Frontier Foundation, follow this link.

Note: This internship is associated with EFF's international team and is separate from EFF's summer legal internship program.

Fair Use Protects So Much More Than Many Realize

Fri, 03/02/2018 - 4:51pm

With copyright being abused to shut down innovation and speech, and copyright terms lasting for generations, fair use is more important than ever. Without fair use, we’d see less creativity. We’d see less news reporting and commentary. And we’d see far less innovation.

Fair use allows people to use copyrighted materials for certain purposes without payment or permission. If something is fair use, it is not infringing on a copyright.

A video remix or a story that critiques culture by incorporating famous characters and giving them new meaning or context is an example of fair use in action. Culture grows because creators are constantly reworking what’s in it. If Superman is portrayed as someone other than a white man, that is clearly a commentary on the symbol of “truth, justice, and the American way.”

Commentary also relies on fair use. Criticism is made stronger when the material being interrogated can be included in the critique. It is difficult to show why someone was wrong or add context to someone else’s report without including at least part of it. We recently wrote about the Second Circuit’s decision that part of the service offered by TVEyes, a subscription company that provides searchable transcripts and video archives of television and radio, was not fair use. In particular, the court seemed to say that what made TVEyes so objectionable was that it made material available without Fox News’ permission. One of the reasons fair use is so important to the First Amendment is precisely that it doesn’t require permission. Who would let researchers, academics, and journalists get access to their material for the purpose of saying if and how they’re wrong?

The ways fair use improves our creative culture and our commentary are apparent every time we see fan art on the Internet or watch news commentary. The ways fair use protects innovation can be more subtle.

Copyright also covers software, which is working its way into every part of our lives. We’re entering a world where your lights, toothbrush, coffeemaker, and television are all connected to the Internet, transmitting all sorts of information all the time. But if you want to ask an expert how to change that, you’re probably going to need fair use.

Much of the problem lies with Section 1201 of the Digital Millennium Copyright Act, which bans breaking restrictions on copyrighted works. That means, for example, that if someone wants to develop an app that better secures your phone but doing so means breaking the digital lock the manufacturer put there, then that inventor faces trouble. Or say you want to pay a mechanic to fix your car, but the repair requires breaking the encryption on the car’s computer; Section 1201 would prevent you from getting that help.

Section 1201 can prevent access to things that fair use allows people to use. For example, you may want to make fair use of a clip from a DVD but be banned from breaking a lock to rip the clip. Because of the impact that could have on fair use, there is a process for securing exemptions to the ban. The exemption process occurs every three years, and we’ll get a new set of exemptions in 2018.

Because fair use is important for creativity, commentary, and innovation, and because the ban on circumvention makes that so much harder, convincing the Copyright Office to issue common-sense exemptions is necessary. In 2018, EFF is asking for exemptions for:

  • Repair, diagnosis, and tinkering with any software-enabled device, including “Internet of Things” devices, appliances, computers, peripherals, toys, vehicles, and environmental automation systems;
  • Jailbreaking personal computing devices, including smartphones, tablets, smartwatches, and personal assistant devices like the Amazon Echo and the forthcoming Apple HomePod;
  • Using excerpts from video discs or streaming video for criticism or commentary, without the narrow limitations on users (noncommercial vidders, documentary filmmakers, certain students) that the Copyright Office now imposes;
  • Security research on software of all kinds, which can be found in consumer electronics, medical devices, vehicles, and more;
  • Lawful uses of video encrypted using High-bandwidth Digital Content Protection (HDCP, which is applied to content sent over the HDMI cables used by home video equipment).

It would be even better if fair use didn’t have hoops like this to jump through, but while they exist, it’s important to keep demonstrating just how much fair use matters.

This week is Fair Use/Fair Dealing Week, an annual celebration of the important doctrines of fair use and fair dealing. It is designed to highlight and promote the opportunities presented by fair use and fair dealing, celebrate successful stories, and explain these doctrines.

The Post-TPP Future of Digital Trade in Asia

Fri, 03/02/2018 - 1:12pm

On March 8, trade representatives from eleven Pacific rim countries including Canada, Mexico, Japan, and Australia are expected to sign the Trans-Pacific Partnership, now known as the Comprehensive and Progressive Agreement for Trans-Pacific Partnership (CPTPP). The agreement has been slimmed down both in its content—22 items in the text have been suspended, including the bulk of the intellectual property chapter—and in its membership, with the exclusion of the United States, which had been the driver of those suspended provisions.

What remains in the CPTPP is the agreement's Electronic Commerce (also called digital trade) chapter, which will set new, flawed rules for the region on topics such as the free flow of electronic data, access to software source code, and even rules applicable to domain name privacy and dispute resolution. But it's not the only Asian trade agreement seeking to set such rules. There's another lesser-known but equally important agreement under negotiation by sixteen countries, called the Regional Comprehensive Economic Partnership Agreement (RCEP).

Like CPTPP, RCEP would cover issues that are critical to the digital economy such as custom duties on electronic products, supply of cross-border services, paperless trading, telecommunications, intellectual property, source code disclosure, privacy and cross-border data flows. But unlike CPTPP, RCEP includes the giants of China and India, meaning that the agreement would represent a massive 28.5 percent of global trade. While India's commitment to the deal has become somewhat equivocal, RCEP holds an important place in China's ambitions to consolidate its leadership role in the region.

India Not Ready to Compromise

The RCEP negotiating parties met last month in Indonesia between February 2 and 9, and although continuing secrecy in the negotiation process makes it difficult to accurately assess progress, a series of missed deadlines point to growing uncertainty about the conclusion of the talks.

One of the sticking points is that countries such as India are pushing for a strong services pact which would facilitate the free movement of professionals, whereas China, South Korea, Japan, Australia and New Zealand remain reluctant to commit. On the other hand, the Indian government is being cautious about opening up its markets and has incentives to draw out negotiations, with elections scheduled for next year. India's position on intellectual property is also different from that of other negotiating countries such as Japan and Korea, which are pushing for a harder, TPP-like line.

As pressure to conclude the deal has intensified, calls for India to exit or block a speedy conclusion of the agreement have also grown louder. At the Indo-ASEAN meeting in New Delhi, the Indonesian Trade Minister reiterated that the ASEAN bloc expected India not to block attempts to conclude the RCEP this year. Mounting expectations may lead India to withdraw from the talks, a move that would impact the strategic and economic value of the agreement.

Can Digital Trade Improve Internet Freedom in China?

With India's continuing participation in doubt, Beijing has thrown its weight behind the agreement. Chinese Foreign Ministry spokesperson Hua Chunying recently underscored that Beijing attaches great importance to the RCEP talks and plans to ensure ratification of the agreement by year end. Following the US withdrawal from the TPP, China sees an early conclusion of the RCEP as critical for creating confidence in and promoting its regional and global trade leadership, especially given its absence from the CPTPP.

Addressing the lack of progress on RCEP has gained urgency as China's trade war with the US has intensified. The US is contemplating legislation that would forbid U.S. government agencies from purchasing ICT equipment produced by Chinese ICT companies, or their subsidiaries and affiliates. If the law is passed, government agencies would be restricted from doing business with any entity that uses equipment produced by those companies.

Last week, concerns about China banning the use of Virtual Private Networks (VPNs) as part of its proposed regulation for telecommunications networks prompted the US to demand an intervention from the World Trade Organization (WTO). What makes this development interesting is that it is the first time that a trade resolution has been sought to address, even incidentally, a serious human rights issue for Chinese Internet users. It is also interesting that the remedy sought is under the existing WTO rules, which at least raises questions about the added value of the new generation of digital trade agreements such as CPTPP and RCEP.

As countries head into the next round of RCEP negotiations, the challenge before negotiators is weighing a speedy conclusion against a balanced agreement. It's going to be difficult to achieve that balance with the current level of secrecy and lack of consultation surrounding the agreement. Just as the same flaws in the negotiation process for the CPTPP have resulted in an agreement that fails to address users' needs or to preserve their digital rights, RCEP is unlikely to have anything more to offer for Internet users and innovators.

Playboy Drops Misguided Copyright Case Against Boing Boing

Wed, 02/28/2018 - 6:09pm

In a victory for journalism and fair use, Playboy Entertainment has given up on its lawsuit against Happy Mutants, LLC, the company behind Boing Boing. Earlier this month, a federal court dismissed Playboy’s claims but gave Playboy permission to try again with a new complaint, if it could dig up some new facts. The deadline for filing that new complaint passed this week, and today Playboy released a statement suggesting that it is standing down. That means both Boing Boing and Playboy can go back to doing what they do best: producing and reporting on culture and technology.

This case began when Playboy filed suit accusing Boing Boing of copyright infringement for reporting on a historical collection of Playboy centerfolds and linking to a third-party site. The post in question, from February 2016, reported that someone had uploaded scans of the photos, and noted they were “an amazing collection” reflecting changing standards of what is considered sexy. The post contained links to an imgur.com page and YouTube video—neither of which were created by Boing Boing.

Together with law firm Durie Tangri, EFF filed a motion to dismiss [PDF]. We explained that Boing Boing did not contribute to the infringement of any Playboy copyrights by including a link to illustrate its commentary. The judge agreed, dismissing the lawsuit and writing that he was “skeptical that plaintiff has sufficiently alleged facts to support either its inducement or material contribution theories of copyright infringement.”

It’s hard to understand why Playboy brought this case in the first place, turning its legal firepower on a small news and commentary website that hadn’t uploaded or hosted any infringing content. We’re also a little perplexed as to why Playboy seems so unhappy that the Boing Boing post is still up when the links they complain about have been dead for almost two years. In any event, this suit now appears to be over and the Boing Boing team can focus on doing what they love: sharing news, commentary, and awesome things with the world.

Related Cases: Playboy Entertainment Group v. Happy Mutants

Stupid Patent of the Month: Buying A Bundle of Diamonds

Wed, 02/28/2018 - 2:01pm

This month’s Stupid Patent shows what happens when the patent system strays outside its proper boundaries. US Patent No. 8,706,513 describes a “fungible basket of investment grade gems” for use in “financial instruments.” In other words, it’s a rating and trading system that attempts to turn diamonds into a tradeable commodity like oil, gold, or corn.

Of course, creating new types of investment vehicles isn’t really an invention. And patents on newfangled financial techniques like this were generally barred following Bilski v. Kappos, a 2010 Supreme Court case that prevents the patenting of purely financial instruments. Since then, the law has become even less favorable to abstract business method patents like this one. In our view, the ’513 patent would not survive a challenge under Bilski or the Supreme Court’s 2014 decision in Alice v. CLS Bank.

Despite its clear problems, the ’513 patent is being asserted in court—and one of the people best placed to testify against the patent may not be allowed to.

The public’s right to challenge a patent in court is a critical part of the US patent system that has always balanced the exclusive power of a patent. It’s especially important since patents are often granted by overworked examiners who get an average of 18 hours to review each application.

But there are two groups that, increasingly, aren't allowed to challenge problematic patents: the inventors of those patents, and even their partial owners. Under a doctrine known as “assignor estoppel,” the Federal Circuit has barred inventors from challenging patents that they assigned to a former employer. Assignor estoppel was originally meant to cover a narrow set of circumstances—inventors who engaged in fraud or bad dealing, for instance—but the nation’s top patent court now routinely applies it to prevent inventors from challenging patents.

Patent scholar Mark Lemley flagged this problem in a 2016 paper, noting assignor estoppel could be used to control the free movement of employees or quash a legitimate competitor. “Inventors as a class are put under burdens that we apply to no other employee,” he wrote. “If they start a company, or even go to work for an existing company in the same field, they will not be able to defend a patent suit from their old employer.”

In this case, the Federal Circuit’s expansive view of assignor estoppel may prevent a person who owned just a fraction of a patent from fighting back when that patent gets used in an attempt to quash a competing business.

Even though this gemological trading system should never have been granted a patent, its owner is, so far, successfully using it to beat up on a competitor—and assignor estoppel could bar that competitor from even challenging the patent.

Competing Diamond Companies

GemShares was created in 2008 to market “diamond investment products.” The original partners were joined in business by a man named Arthur Lipton, who bought 20% of GemShares in 2013 and struck a deal not to compete with the company.

GemShares says [PDF] Lipton broke that deal in 2014, when he started working on his own project, a “secure diamond smart card,” and filed for patents related to it. But in addition to breach of contract, GemShares sued for patent infringement. They said Lipton’s new business violated the ’513 patent.

The litigation also involves breach of contract claims, and allegations of fraud from Lipton’s former partner. Without getting into the weeds on all that, the defendant in this case may not even be allowed to argue that the “gem financial product” patent is invalid. Earlier this month, the judge overseeing the case issued an order [PDF] noting that “the Federal Circuit has upheld the doctrine of assignor estoppel, which precludes an inventor-assignor of a patent sued for infringement from arguing the patent's invalidity.”

The Federal Circuit has made assignor estoppel so powerful, in fact, that Lipton’s 20% ownership contract with GemShares may be enough to stop him and his lawyers from mounting an invalidity defense.

It’s bad policy to stop the public from challenging bad patents, and assignor estoppel should only be used in narrow cases, like outright fraud. As applied by the Federal Circuit, it’s destined to be used in exactly the way Lemley warned it would be—as an anticompetitive cudgel.

We agree with the brief signed by Lemley and more than two dozen other law professors [PDF] in EVE-USA, Inc. v. Mentor Graphics Corp., arguing that the Supreme Court should take up this issue and keep assignor estoppel within its original narrow limits.

State Lawmakers Want to Block Pornography at the Expense of Your Free Speech, Privacy, and Hard-Earned Cash

Wed, 02/28/2018 - 12:55pm

More than 15 state legislatures are considering the “Human Trafficking Prevention Act” (HTPA). But don’t let the name fool you: this bill would do nothing to address human trafficking. Instead, it would only threaten your free speech and privacy in a misguided attempt to block and tax online pornography.

EFF opposed versions of this bill in over a dozen states last year, and it failed in all of them. Now the HTPA is back, and we have again written to lawmakers urging them to oppose it this year.

The gist of the model legislation is this: Device manufacturers would be forced to install "obscenity filters" on cell phones, tablets, computers, and any other Internet-connected devices. Those filters could only be removed if consumers pay a $20 fee. In addition to violating the First Amendment and burdening consumers and businesses, this would allow the government to intrude into consumers’ private lives and restrict their control over their own devices.

On top of that, the story of this bill’s provenance is bizarre and highly recommended reading for any lawmakers considering it. In short, the HTPA is part of a multi-state effort coordinated by the same person behind a bill to delegitimize same-sex marriages as “parody marriages.” In this post, however, we’ll be focusing on the policy itself.

Read EFF's opposition letter against HB 2422, Missouri's iteration of the Human Trafficking Prevention Act.

HTPA—also sometimes named the Human Trafficking and Child Exploitation Prevention Act—has been introduced in the following states: Hawaii (Version 1, 2), Illinois, Indiana, Iowa, Kansas, Maryland, Mississippi, Missouri, New Mexico, New Jersey (Assembly, Senate), New York, Rhode Island, South Carolina, Tennessee (House, Senate), Virginia, West Virginia (Senate, House), and Wyoming.

While the versions of the legislation vary somewhat from state to state, each hits the following points.

Pre-Installed Filters

Manufacturers of Internet-enabled devices would be required to pre-install filters to block webpages and applications that contain sexual content. Although different versions of the bill specify this content differently, the end result is the same: an unconstitutional restriction on the lawful speech people can access and engage with on the Internet.

A Censorship Tax

After overriding consumer choice and forcing people to purchase filtering software they don’t necessarily want, the bill would require users to pay a $20 fee per device to remove the filters and to exercise their First Amendment rights to look at legal content. Between smartphones, tablets, desktop computers, TVs, gaming consoles, routers, and other Internet-enabled devices, consumers could end up paying a small fortune to unlock all of the devices in their home: a household with ten connected devices would owe $200 in unlock fees.

Data Collection

Anyone who wants to unlock the filters on their devices would have to put their request in writing, show ID, and verify that they’ve been shown a “written warning regarding the potential dangers” of removing the obscenity filter. That means that companies would be maintaining records on everyone who wanted their “Human Trafficking” filters removed.  As EFF Stanton Fellow Camille Fischer explains in our opposition letter:   

To be clear, the HTPA’s deactivation process does not simply chill speech; it also requires consumers to sacrifice their privacy and anonymity, as the price of exercising their First Amendment rights. If enacted, consumers would be forced to identify themselves when making a written request for filter deactivation, creating a humiliating situation that suggests they want access to controversial sexual material. … In short, HTPA deactivation would be a frightening form of thought-based surveillance.

Unlocking such filters would not just be about accessing pornography. A gamer could be seeking to improve the performance of their computer by deleting unnecessary software. A parent may want to install premium child safety software that is incompatible with a pre-installed filter. And, of course, many users will simply want to freely surf the Internet without repeatedly being denied access to legal content.

Building A Censorship Machine

The bill would force the companies we rely upon for open access to the Internet to create a massive, easily abused censorship apparatus. Tech companies would be required to operate call centers or online reporting centers to monitor complaints about which sites should or should not be filtered.

The technical requirements for this kind of aggressive platform censorship at scale are simply unworkable. If the attempts of social media sites to censor pornographic images are any indication, we cannot count on algorithms to distinguish, for example, nude art from medical information from pornography. Facing risk of legal liability, companies would likely over-censor and sweep up legal content in their censorship net.

Do The Right Thing

Already lawmakers are starting to see through this legislation. In 2018, the bill has died in committees in Mississippi and Virginia. Democratic senators in New Mexico who introduced the legislation pulled back the bill days after EFF raised the alarm.

Legislators should continue to do the right thing: uphold the Constitution, protect consumers, and not use the real problem of human trafficking as an excuse to deprive users of their privacy and free speech.

Ninth Circuit Court of Appeals Has New Opportunity to Protect Device Privacy at the Border

Tue, 02/27/2018 - 9:04pm

The U.S. Court of Appeals for the Ninth Circuit has a new opportunity to strengthen personal privacy at the border. When courts recognize and strengthen our Fourth Amendment rights against warrantless, suspicionless searches of our electronic devices at the border, it’s an important check on the government’s power to search anyone, for any or no reason, at airports and border checkpoints.

EFF recently filed amicus briefs in two cases, U.S. v. Cano and U.S. v. Caballero, before the Ninth Circuit arguing that the Constitution requires border agents to have a probable cause warrant to search travelers’ electronic devices.

Border agents, whether from U.S. Customs and Border Protection (CBP) or U.S. Immigration and Customs Enforcement (ICE), regularly search cell phones, laptops, and other electronic devices that travelers carry across the U.S. border. The number of device searches at the border has increased six-fold in the past five years, with the increase accelerating during the Trump administration. These searches are authorized by agency policies that generally permit suspicionless searches without any court oversight.

The last significant ruling on device privacy at the border in the Ninth Circuit, whose rulings apply to nine western states, was U.S. v. Cotterman (2013). In that case, the court of appeals held that the Fourth Amendment required border agents to have reasonable suspicion—a standard between no suspicion and probable cause—before conducting a “forensic” search, aided by sophisticated software, of the defendant’s laptop. Unfortunately, the Ninth Circuit also held that a manual search of an electronic device is “routine,” so the traditional border search exception to the warrant requirement applies—that is, no warrant or any suspicion of wrongdoing is needed.

However, the year after the Ninth Circuit decided Cotterman, the U.S. Supreme Court decided Riley v. California (2014). Although that case did not involve the border context, its analysis and ultimate holding are highly instructive. The Supreme Court held that, while police may search those they arrest without a warrant, when it comes to an arrestee’s cell phone they need a probable cause warrant. The court based its holding on the extraordinary privacy interests that individuals have in the massive amounts of sensitive digital data that their cell phones contain. The court emphasized that electronic devices are nothing like physical containers, such as wallets.

Similarly, in the border search context, electronic devices are nothing like luggage or other physical items that travelers carry across the border. With the vast amounts and kinds of personal data that electronic devices contain—data that can reveal our political affiliations, religious beliefs and practices, sexual and romantic lives, financial status, health conditions, and family and professional associations—EFF argues that the Constitution requires the government to meet a higher burden before accessing this information.

Additionally, we argue that the method of search is irrelevant to the legal analysis of what standards should apply to border searches of electronic devices. Border agents significantly invade travelers’ privacy when they search a cell phone or laptop—whether by hand or with forensic software. In fact, the cell phone searches in Riley were manual searches, yet the Supreme Court applied the maximum Fourth Amendment protection available.

The Ninth Circuit has not yet ruled on whether or how Riley applies to border searches of electronic devices. With Cano and Caballero, the court of appeals has a fresh opportunity to do so—and hopefully will strengthen privacy protections for travelers within its jurisdiction. Affirming the importance of digital privacy, the district court in Caballero stated, “If it could, this Court would apply Riley.” Yet both district courts felt constrained by Cotterman and so did not require a warrant.

With these Ninth Circuit briefs, EFF has now filed a total of five amicus briefs since 2015 arguing that border agents need a probable cause warrant to search electronic devices at the border. All of these cases, like Riley, were criminal cases where the defendants moved to suppress the evidence obtained from their devices without a warrant. That these were criminal cases should not alter the constitutional analysis. Even though the defendants in Riley were reasonably suspected of having committed crimes, the Supreme Court still required a warrant under the Fourth Amendment.

Additionally, our Alasaad v. Nielsen case against CBP and ICE is the first civil case post-Riley challenging unconstitutional border searches of electronic devices. Our clients are 11 Americans—10 citizens and one lawful permanent resident—who have not been accused of any wrongdoing. Yet they were subjected to highly intrusive searches of their cell phones and other electronic devices when they tried to re-enter the country.

Thus, whether through our civil case or the criminal appeals where we serve as amicus, we’re hopeful that the courts will explicitly apply Riley to the border and protect the digital privacy of thousands of travelers from unjustified government intrusion.

Second Circuit Gouges TVEyes With Terrible Fair Use Ruling

Tue, 02/27/2018 - 8:55pm

In a decision that threatens legitimate fair uses, the Second Circuit ruled against part of the service offered by TVEyes, which creates a text-searchable database of broadcast content from thousands of television and radio stations in the United States and worldwide. The service is invaluable to people looking to investigate and analyze the claims made on broadcast television and radio. Sadly, this ruling is likely to interfere with that valuable service.

TVEyes allows subscribers to search through transcripts of broadcast content and gives a time code for what the search returns. It also allows its subscribers to search for, view, download, and share ten-minute clips. It’s used by exactly who you’d think would need a service like this: journalists, scholars, politicians, and others who need to monitor what’s being said in the media. If you’ve ever read a story where a public figure’s current words are contrasted with contradictory things they said in the past, then you’ve seen the effects of TVEyes.

In 2014, the district court hearing the case threw out a number of arguments made by Fox News and held that a lot of what TVEyes does is fair use, but asked to hear more about customers’ ability to archive video clips, share links to video clips via email, download clips, and search for clips by date and time (as opposed to keywords). In 2015, the district court found the archiving feature to be a fair use, but found the other features to be “infringing.”

And now the Second Circuit has reversed [PDF] the 2015 finding that the archiving was fair use and upheld the finding that the rest of TVEyes’ video features are not fair use. That’s a hugely disappointing outcome that could lead to a decrease in news analysis and commentary.

Fair use is determined by weighing four factors: the purpose and character of the use (i.e., how “transformative” it is), the nature of the copyrighted work, the amount and substantiality of the portion used, and the effect of the use on the market.

The Second Circuit decision does acknowledge that TVEyes’ functions are transformative “insofar as it enables users to isolate, from an ocean of programming, material that is responsive to their interests and needs, and to access that material with targeted precision.” Where the court gets this wrong is in saying that because that material is delivered “unaltered from its original form with no new expression, meaning, or message,” this factor weighs only slightly in favor of TVEyes. A researcher or a journalist watching ten minutes relevant to a specific search is doing something very different from an average television viewer. The new and different purpose being served by TVEyes means this factor should have favored the service more than just slightly.

The court found that the second factor, not really a big player in this analysis, was neutral. TVEyes argued that it was providing access to facts, which are not copyrightable, so this factor weighed in their favor. The court replied that just because works are factual doesn’t mean they can be copied and shared wholesale.

The court found that the third factor favored Fox because the ten-minute clips are long relative to the “brevity of the average news segment on a particular topic.” The result, in the court’s eyes, is that users would see the entire segment on the topic they searched for, destroying the need to go watch Fox News. The court envisions a future where media criticism is limited to organizations with the budget and stamina to assign someone to watch Fox News 24 hours a day.

The biggest failure is in the court’s analysis of the fourth factor. The court says that TVEyes’ success in charging its subscribers $500 a month shows that it has created a profitable business that is somehow displacing the channel’s prospective revenue, especially since it allows people to watch content without the owner’s permission. That ignores a fundamental characteristic of fair use.

If the use of someone’s words were contingent on the permission of the person who said them, you would never be able to critique what was being said. Fair use allows the use of copyrighted material without permission for this very reason. It’s not in anyone’s interest to license out clips of their material for the purpose of being debunked, which is why the service provided by TVEyes is so valuable.

Moreover, the market for a cable news subscription and the market for a service like TVEyes are not the same. And restricting that service to the hands of the copyright holder will keep important criticism and commentary from being done. Now more than ever we need rulings that reaffirm the importance of news analysis rather than ones that devalue it, as the Second Circuit did here.

We're disappointed that the court took such a limited view of the importance of this kind of use, and it's incorrect and dangerous to consider this a plausible market. That's circular reasoning that threatens many traditional fair uses, where one could theoretically get a license but should not have to, because stopping the use isn't a legitimate application of copyright law.

House Vote on FOSTA is a Win for Censorship

Tue, 02/27/2018 - 6:10pm

The bill the U.S. House of Representatives passed today by a vote of 388-25 marks an unprecedented push towards Internet censorship, and does nothing to fight sex traffickers.

H.R. 1865, the Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA), allows for private lawsuits and criminal prosecutions against Internet platforms and websites, based on the actions of their users. Facing huge new liabilities, platforms will undoubtedly police more user speech.

The Internet we know today is possible only because of Section 230 of the Communications Decency Act, which prevents online platforms from being held liable for their users’ speech, except in certain circumstances. FOSTA would punch a major hole in Section 230, enabling lawsuits and prosecutions against online platforms—including ones that aren’t even aware that sex trafficking is taking place.

If websites can be sued or prosecuted because of user actions, it creates extreme incentives. Some online services might react by prescreening or filtering user posts. Others might get sued out of existence. New companies, fearing FOSTA liabilities, may not start up in the first place.

The tragedy is that FOSTA isn’t needed to prosecute or sue sex traffickers. As we’ve said before, Section 230 simply isn’t broken. Right now, there is nothing preventing federal prosecution of an Internet company that knowingly aids in sex trafficking. That includes anyone hosting advertisements for sex trafficking, which is explicitly a federal crime under 18 U.S.C. § 1591, as amended by the 2015 SAVE Act. The website that produced the most discussion around this issue, Backpage.com, is reportedly under federal investigation.

The array of online services protected by Section 230, and thus hurt by FOSTA, is vast. It includes review sites, online marketplaces, discussion boards, ISPs, even news publications with comment sections. Even small websites host thousands or millions of users engaged in around-the-clock discussion and commerce. By attempting to add an additional tool to hold liable the tiny minority of platforms whose users do awful things, FOSTA does real harm to the overwhelming majority, who will inevitably be subject to censorship.

Websites run by nonprofits or community groups, which have limited resources to police user content, would face the most risk. Perversely, some of the discussions most likely to be censored could be those by and about victims of sex trafficking. Overzealous moderators, or automated filters, won’t distinguish nuanced conversations and are likely to pursue the safest, censorial route.

We hope the Senate will reject FOSTA and uphold Section 230, a law that has protected a free and open Internet for more than two decades. Call your senator now and let them know that online censorship isn’t the solution to fighting sex trafficking.

Take Action

Stop FOSTA

Tell Congress to Protect the Open Internet

Tue, 02/27/2018 - 2:26pm

Today, EFF is participating in a national Day of Action to push Congress to preserve the net neutrality rules the FCC repealed in December. With a simple majority, Congress can use the Congressional Review Act (CRA) to overturn the FCC’s new rule. We’re asking for members of the House and Senate to commit to doing so publicly.

On Thursday, February 22, the FCC’s so-called “Restoring Internet Freedom Order” was published in the Federal Register. Under the CRA, Congress has 60 working days to vote to overturn that Order. We’re asking representatives to publicly commit to doing just that. In the House of Representatives, that means supporting Representative Mike Doyle’s bill, which has 150 co-sponsors. In the Senate, Senator Ed Markey’s bill is just one vote away from passing.

Net neutrality means that Internet service providers (ISPs) should treat all data that travels over their networks fairly, without improperly discriminating in favor of particular apps, sites, or services. For many years, net neutrality principles, in various forms, have forbidden unfair practices like blocking or throttling particular services and sites, as well as paid prioritization, where an ISP charges content providers for better, faster, or more consistent access to the ISP's customers, or prioritizes its own content over a competitor’s. Thanks to the hard work of millions of Internet users, these protections were enshrined in the FCC’s 2015 Open Internet Order. The new Order eviscerated those protections; Congress can use the CRA to bring them back.

Because net neutrality is so popular, politicians often say they support it, but lip service is not enough. A vote to restore the net neutrality protections in the 2015 Open Internet Order is a clear, concrete thing that you can ask your representatives to do to support real net neutrality.

For that reason, we’re launching Check Your Reps, a website that allows you to see whether or not your representatives are voting yes on bringing back the 2015 Open Internet Order, email them voicing your support for net neutrality, and share what you’ve learned.

The clock is ticking:  make sure you tell your representatives to act.

Take Action

Tell Your Representatives to Bring Back Net Neutrality Protections

Can India's Biometric Identity Program Aadhaar Be Fixed?

Tue, 02/27/2018 - 9:55am

The Supreme Court of India has commenced final hearings in the long-standing challenge to India's massive biometric identity apparatus, Aadhaar. Following last August’s ruling in the Puttaswamy case rejecting the Attorney General's contention that privacy was not a fundamental right, a five-judge bench is now weighing in on the privacy concerns raised by the unsanctioned use of Aadhaar.

The stakes in the Aadhaar case are huge, given the central government’s ambitions to export the underlying technology to other countries. Russia, Morocco, Algeria, Tunisia, Malaysia, the Philippines, and Thailand have expressed interest in implementing biometric identification systems inspired by Aadhaar. The Sri Lankan government has already made plans to introduce a biometric digital identity for citizens to access services, despite stiff opposition to the proposal, and similar plans are under consideration in Pakistan, Nepal, and Singapore. The outcome of this hearing will impact the acceptance and adoption of biometric identity across the world.

At home in India, the need for biometric identity is staked on claims that it will improve government savings through efficient, targeted delivery of welfare. But in the years since its implementation, there is little evidence to back the government's savings claims. A widely quoted World Bank estimate of $11 billion in annual savings (or potential savings) due to Aadhaar has been challenged by economists.

The architects of Aadhaar also invoke inclusion to justify the need for a centralized identity scheme. Yet, contrary to government claims, there is growing evidence of denial of services for lack of an Aadhaar card, and of authentication failures that have led to death, starvation, denial of medical services and hospitalization, and denial of public utilities such as pensions, rations, and cooking gas. During last week's hearings, Aadhaar's governing institution, the Unique Identification Authority of India (UIDAI), was forced to clarify that access to entitlements would be maintained until an adequate mechanism for authentication of identity was in place, issuing a statement that "no essential service or benefit should be denied to a genuine beneficiary for the want of Aadhaar."

Centralized Decision-Making Compromises Aadhaar's Security

The UIDAI was established in 2009 by executive action as the sole decision-making authority for allocating resources and contracting the institutional arrangements behind Aadhaar numbers. With no external or parliamentary oversight over its decision-making, UIDAI engaged in an opaque process of private contracting with foreign biometric service providers for technical support for the scheme. The government later passed the Aadhaar Act in 2016 to legitimize UIDAI's powers, but used a special maneuver that enabled it to bypass the upper house of Parliament, where the government lacked a majority, and prevented examination by the Parliamentary Standing Committee. The manner in which the Aadhaar Act was passed further weakens the democratic legitimacy of the Aadhaar scheme as a whole.

The lack of accountability emanating from UIDAI's centralized decision-making is evident in the project's rushed proof-of-concept trial. Security researchers have noted that the trial sampled data from just 20,000 people, and nothing in the UIDAI's report confirms that each electronic identity on the Central ID Repository (CIDR) is unique or that de-duplication could ever be achieved. As mounting evidence confirms, the decision to create the CIDR was based on the assumption that biometrics cannot be faked, and that even if they were, the fakes would be caught during de-duplication.
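To see what rides on that assumption, here is a minimal sketch, in Python, of how threshold-based biometric de-duplication works in general. The names, template format, and threshold are all hypothetical; UIDAI's actual matching software is proprietary and, as noted below, has never been independently audited.

from dataclasses import dataclass

MATCH_THRESHOLD = 0.90  # hypothetical tuning parameter; real systems set this empirically

@dataclass
class Enrollment:
    aadhaar_number: str
    template: list[float]  # toy stand-in for a fingerprint or iris template

def similarity(a: list[float], b: list[float]) -> float:
    # Toy cosine similarity between two fixed-length templates.
    dot = sum(x * y for x, y in zip(a, b))
    norm = (sum(x * x for x in a) ** 0.5) * (sum(y * y for y in b) ** 0.5)
    return dot / norm if norm else 0.0

def is_duplicate(candidate: list[float], repository: list[Enrollment]) -> bool:
    # De-duplication only catches a re-enrollment whose biometrics score above
    # the threshold against some existing record. A spoofed or altered template
    # that scores below the threshold passes as a "new," unique identity.
    return any(similarity(candidate, e.template) >= MATCH_THRESHOLD for e in repository)

The sketch makes the failure mode concrete: uniqueness is only as trustworthy as the matcher and its threshold, so anyone who can present biometrics that score below the threshold defeats the uniqueness guarantee entirely.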

It emerged during the Aadhaar hearings that UIDAI has neither access to, nor control of, the source code of the software used for the Aadhaar CIDR. This means that to date there has been no independent audit of the software that could identify data-mining backdoors or security flaws. The Indian public has also become concerned about the practices of the foreign companies embedded in the Aadhaar system. One of three UIDAI contractors who were given full access to the classified biometric data stored in the Aadhaar database and permitted to “collect, use, transfer, store and process the data" was US-based L-1 Identity Solutions. The company has since been acquired by a French company, Safran Technologies, which has been accused of hiding the provenance of code bought from a Russian firm to boost the software performance of US law enforcement computers. The company is also facing a whistleblower lawsuit alleging it fraudulently took more than $1 billion from US law enforcement agencies.

Compromised Enrollment Scheme

The UIDAI also outsourced the responsibility for enrolling Indians in the Aadhaar system. State government bodies and large private organizations were selected to act as registrars, who, in turn, appointed enrollment agencies, including private contractors, to set up and operate mobile, temporary, or permanent enrollment centers. UIDAI created an incentive-based model whereby registrars would earn Rs 40-50 (about 75 cents) for every successful enrollment. Because compensation was tied to successful enrollments, operators had every incentive to maximize their numbers.

By delegating the collection of citizens' biometrics to private contractors, UIDAI created opportunities for the enrollment procedure to be compromised. Hacks to work around the software and hardware soon emerged, and have been employed in scams using cloned fingerprints to create fake enrollments. Corruption, bribery, and the creation of Aadhaar numbers with unverified, absent, or false documents have also marred the rollout of the scheme. In 2016, on being detained and questioned, a Pakistani spy produced an Aadhaar card bearing his alias and a fake address as proof of identity. The card had been obtained through the enrollment procedure by providing fake identification information.

An India Today investigation has revealed that misuse of Aadhaar data is widespread, with agents willing to sell demographic records collected from Aadhaar applicants for Rs 2-5 (a few cents) apiece. Another report, from 2015, suggests that the enrollment client allows operators to use their own fingerprints and Aadhaar numbers to access, update, and print people's demographic details without their consent or biometric authentication.

More recently, an investigation by The Tribune exposed that complete access to the UIDAI database was available for Rs 500 (about $8). The reporter paid to gain access to the data including name, address, postal code, photo, phone number and email collected by UIDAI. For an additional Rs 300, the service provided access to software which allowed the printing of the Aadhaar card after entering the Aadhaar number of any individual. A young Bangalore-based engineer has been accused of developing an Android app "Aadhaar e-KYC", downloaded over 50,000 times since its launch in January 2017. The software claimed to be able to access Aadhaar information without authorization.

In light of the unreliability of information in the Aadhaar database and the systemic failure of the enrollment process, the fate of biometric data collected before the enactment of the Aadhaar Act is an important issue before the Supreme Court. The petitioners have sought the destruction of all biometrics and personal information captured between 2009 and 2016, on the grounds that it was collected without informed consent and may have been compromised.

Authentication Failures

The original plan for authentication of a person holding an Aadhaar number under Section 2(c) of the Aadhaar Act, 2016 was for the system to return a "Yes" if the person's biometric and demographic data matched those captured during enrollment, and a "No" if they did not. But somewhere along the way this policy changed: in 2016, the UIDAI introduced a new mode of authentication whereby submitting biometric information against an Aadhaar number returns the holder's demographic information.
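The privacy difference between the two modes is easiest to see side by side. Here is a minimal sketch, in Python, of the two response models described above; the function names, record structure, and matching logic are all hypothetical and are not UIDAI's actual API:

# Hypothetical records keyed by Aadhaar number; structure is illustrative only.
RECORDS = {
    "9999-1234-5678": {
        "template": [0.12, 0.88, 0.41],  # enrolled biometric template (toy)
        "demographics": {"name": "A. Sharma", "address": "...", "dob": "..."},
    },
}

def matches(submitted, enrolled) -> bool:
    # Stand-in for a real biometric matcher.
    return submitted == enrolled

def authenticate_yes_no(aadhaar_number, biometric) -> str:
    # Original design: the verifier learns only "Yes" or "No".
    record = RECORDS.get(aadhaar_number)
    return "Yes" if record and matches(biometric, record["template"]) else "No"

def authenticate_ekyc(aadhaar_number, biometric):
    # Later mode: a successful match returns the holder's demographic record,
    # which the requesting institution can copy and store externally.
    record = RECORDS.get(aadhaar_number)
    if record and matches(biometric, record["template"]):
        return record["demographics"]
    return None

A match-only response leaks one bit per query; the eKYC-style response hands the verifier a copy of the demographic record itself, which, as discussed below, UIDAI has no mechanism to claw back.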

The new mode has led a range of public and private institutions to adopt Aadhaar-based authentication for the provision of services. However, authentication failures, whether due to incorrectly captured fingerprints or to biometric changes from old age and wear and tear, are increasingly common. The infrastructure for electronic authentication is also limited in India, so printed copies of a person's Aadhaar number and demographic details are accepted as identification.

There are two main problems with this. First, because Aadhaar copies are just pieces of paper that can be easily faked, the use and acceptance of physical copies creates an avenue for fraud. UIDAI could limit the use of physical copies; however, doing so would deprive beneficiaries of services whenever authentication fails. Second, Aadhaar numbers are supposed to be secret, and physical copies encourage those numbers to be revealed and used publicly. For the UIDAI, whose aim is speedy enrollment and provision of services despite authentication failures, there is no incentive to stop the use of printed Aadhaar numbers.

Data security has also been weakened because institutions using Aadhaar for authentication have not met the standards for processing and storing data. Last year, UIDAI had to get more than 200 central and state government departments, including educational institutions, to take down lists of Aadhaar beneficiaries, whose names, addresses, and Aadhaar numbers had been uploaded to and made available on public websites.

Securing Aadhaar

Can Aadhaar be secured? Not without significant institutional reforms, no. Aadhaar has no independent threat-analysis agency: securing the biometric data that has been collected falls under the purview of UIDAI itself. The agency has no Chief Information Officer (CIO) and no defined standard operating procedures for data leakages and security breaches. Demographic information linked to an Aadhaar number, made available to private parties during authentication, is already being collected and stored externally by those parties; the UIDAI has no legal power or regulatory mechanism to prevent this. The existence of these parallel databases means that biometric and demographic information is increasingly scattered among government departments and private companies, many of which have little conception of, or incentive to ensure, data security.

Second-order tasks of oversight and regulatory enforcement serve a critical function in creating accountability. Although UIDAI has issued legally enforceable rules, there is no monitoring or enforcement agency, within UIDAI or outside it, to check whether those rules are being followed. For example, an audit of enrollment centers revealed that UIDAI had no way of knowing whether operators were retaining biometrics, or for how long.

UIDAI has also neither adopted nor encouraged the reporting of software vulnerabilities or the testing of enrollment hardware. Vulnerability reporting provides learning opportunities and improves coordination; security researchers can perform the critical task of helping institutions identify failures, allowing incremental improvements to the system. But far from encouraging such security research, UIDAI has filed FIRs (First Information Reports, the first step in an Indian criminal prosecution) against researchers and reporters who uncovered flaws in the Aadhaar ecosystem.

As controversies over its ability to keep its data secure have grown, the agency has stuck to an aggressive stance, vehemently denying any suggestion of vulnerabilities in the Aadhaar apparatus. This attitude is perplexing given the number of data breaches and procedural gaps being uncovered every day. UIDAI is so confident of its security that it filed an affidavit before the Supreme Court in the Aadhaar case claiming that the data cannot be hacked or breached. UIDAI's defiance in the face of its own patchy record hardly provides cause for confidence.

The Way Forward 

The current Aadhaar regime is structured to radically centralize Indian government and private digital authentication systems. But a credible national identity system cannot be created by an opaque, unaccountable, centralized agency that chooses not to follow democratic procedures when creating its rules. It would have made more sense to confine UIDAI's role to maintaining the legal structure that secures individuals' rights over their data, enforces contracts, ensures liability for data breaches, and performs dispute resolution. That way, UIDAI's jurisdictional authority would be limited to tasks where competition cannot be an organizing principle.

The present scheme has created a market of institutions that use Aadhaar to authenticate identity in the provision of services, with varying degrees of transparency and privacy. The scheme's central control is too rigid in some ways: the bureaucratic structure of Aadhaar does not facilitate adaptation to security threats, or allow vendors and private companies to improve their data protection practices. Yet in other ways it is not strong enough, given the security lapses it has enabled by giving multiple parties free access to the Aadhaar database.

By making Aadhaar mandatory, UIDAI has taken away individuals' right to exit these unsatisfactory arrangements. The coercive measures the State has taken to drive the adoption of Aadhaar have introduced new risks to individuals' data and to national security. Even the efficiency argument has fallen flat, negated by the unreliability of Aadhaar authentication. The tragedy of Aadhaar is that it not only fails to deliver efficiency and justice, but also introduces significant economic and social costs.

All in all, it's hard to see how this mess can be fixed without scrapping the system and—perhaps—starting again from scratch. As drastic as that sounds, the current Supreme Court challenge may, ironically, provide a golden opportunity to revamp the fatally flawed existing institutional arrangements behind Aadhaar, and provide the Indian government with a fresh opportunity to learn from the mistakes that brought it to this point.