San Francisco - The Electronic Frontier Foundation (EFF) has launched its “Catalog of Missing Devices”—a project that illustrates the gadgets that could and should exist, if not for bad copyright laws that prevent innovators from creating the cool new tools that could enrich our lives.
“The law that is supposed to restrict copying has instead been misused to crack down on competition, strangling a future’s worth of gadgets in their cradles,” said EFF Special Advisor Cory Doctorow. “But it’s hard to notice what isn’t there. We’re aiming to fix that with this Catalog of Missing Devices. It’s a collection of tools, services, and products that could have been, and should have been, but never were.”
The damage comes from Section 1201 of the Digital Millennium Copyright Act (DMCA 1201), which covers digital rights management software (DRM). DRM was designed to block software counterfeiting and other illegal copying, and Section 1201 bans DRM circumvention. However, businesses quickly learned that by employing DRM they could thwart honest competitors from creating interoperable tools.
Right now, that means you could be breaking the law just by doing something as simple as repairing your car on your own, without the vehicle-maker’s pricey tool. Other examples include rightsholders forcing you to buy additional copies of movies you want to watch on your phone—instead of allowing you to rip the DVD you already own and are entitled to watch—or manufacturers blocking your printer from using anything but their official ink cartridges.
But that’s just the beginning of what consumers are missing. The Catalog of Missing Devices imagines things like music software that tailors your listening to what you are reading on your audiobook, or a gadget that lets parents reprogram talking toys to replace canned, meaningless messaging.
“Computers aren’t just on our desktops or in our pockets—they are everywhere, and so is the software that runs them,” said EFF Legal Director Corynne McSherry. “We need to fix the laws that choke off competition and innovation with no corresponding benefit.”
The Catalog of Missing Devices is part of EFF’s Apollo 1201 project, dedicated to eradicating all DRM from the world. A key step is eliminating laws like DMCA 1201, as well as the international versions of this legislation that the U.S. has convinced its trading partners to adopt.
For the Catalog of Missing Devices:
Last week, AT&T decided it’s good business to advocate for an “Internet Bill of Rights.” Of course, that catchy name doesn’t in any way mean that what AT&T wants is a codified list of rights for Internet users. No, what AT&T wants is to keep a firm hold on the gains it has made in the last year at the expense of its customers’ rights.
There is nothing in the history—the very recent history—of AT&T to make anyone believe that it has anyone’s actual best interests at heart. Let’s take a look at how this company has traditionally treated privacy and network neutrality. Few companies have done more to undermine privacy and network neutrality than AT&T.
It takes an incredible amount of arrogance for AT&T to take out a full-page ad in the New York Times calling for an “Internet Bill of Rights” after spending years waging one of the most far-reaching lobbying campaigns to eliminate every consumer right. In some ways, it reads like a conqueror’s decree, issued after successfully laying waste to the legal landscape in order to remake it in its own image. But AT&T’s goal is abundantly clear: it does not like the laws that exist today to police its conduct on privacy and network neutrality, so it wishes to rewrite them while hoping Americans ignore its past actions.

AT&T’s Fight Against Privacy
In 2017, Congress repealed the FCC’s broadband privacy rules. It was easy to be frustrated and angry with the government, but remember: when it happened, AT&T was there arguing that losing your privacy was good for you. In fact, it even argued that you didn’t need to worry because AT&T and other ISPs were still regulated.
In its own words: “for example, AT&T and other ISPs’ actions continue to be governed by Section 222 of the Communications Act.” This is deeply ironic: at the very moment AT&T was arguing that the Communications Act would protect us, it was lobbying the Federal Communications Commission (FCC) to stop applying Section 222 to its broadband service, which is exactly what happened this December when the FCC repealed the 2015 Open Internet Order.
AT&T has not stopped there either. Having won on the national level, it is, right now, using the same double-talk to stop states from passing ISP privacy laws to fill the gap it created.
In California, for example, it stated on the record that “AT&T and other major Internet service providers have committed to legally enforceable Privacy Principles that are consistent with the privacy framework developed by the FTC over the past twenty years.” Which is a long way of saying, “There is no need to pass a state law because the Federal Trade Commission can enforce the law on us.”
As with the arguments it made to Congress and the FCC, AT&T simultaneously claims that laws cover ISPs and that those laws do not apply. So what exactly is AT&T saying in the courts today about the FTC’s enforcement power, the very power it claims obviates the need for state laws?
That they are exempt from it.
The image above is from litigation known as FTC v. AT&T Mobility, which is still ongoing. The core of AT&T’s argument is that because its telephone service is a common carrier service, and the FTC is prohibited from regulating common carriers, the entire company is exempt from FTC authority (even though its broadband product is no longer a common carrier following the repeal of the 2015 Open Internet Order). AT&T has so far prevailed on that argument in the 9th Circuit. Many proponents of repealing network neutrality incorrectly, though spiritedly, claimed that the FCC’s decision to end common carrier regulation of broadband would enhance the FTC’s power over AT&T. Yet AT&T is arguing, today, that it does not matter how the FCC regulates broadband: the company is simply exempt from FTC power.
(Footnote from AT&T's legal filing.)
All of this is to say that AT&T is waging a sustained, ongoing war on user privacy. It was AT&T that inserted ads into the traffic of people using its Wi-Fi hotspots in airports. It also used “Carrier IQ,” software that gave it the capability to track everything you do, from what websites you visit to what applications you use; it took a class action lawsuit for the carriers to begin backing down from that idea. And if Verizon had not gotten into legal trouble with the federal government over its undeletable “super cookie,” AT&T would have followed suit to get in on the action.

AT&T’s Fight Against Network Neutrality
“Some companies want us to be a big dumb pipe that gets bigger and bigger. No one gets a free ride. Those that want to use this will pay.” - former AT&T CEO in 2006
This famous remark was probably the most straightforward and honest statement AT&T has made regarding network neutrality. Beyond obviously misconstruing the facts, it is a manifestation of AT&T’s belief that an open and free Internet is a threat to its bottom line. At every iteration of the network neutrality debate before the FCC, AT&T has raised objections to enforcing net neutrality.
AT&T has made arguments against being required to operate in a non-discriminatory manner.
Which makes sense, since over those years, AT&T has violated net neutrality on multiple occasions. Just last year, the FCC determined that AT&T was engaging in discriminatory, anti-competitive practices by zero-rating its own DIRECTV content while simultaneously charging its competitors unfavorable rates for the same treatment. While FCC Chairman Ajit Pai halted the investigation and rescinded its findings to eliminate their legal impact on behalf of AT&T and Verizon, the facts are indisputable: AT&T was giving away its own video programming for free in order to drive customers to subscribe to DIRECTV, while stifling competing video streaming services. The Department of Justice under President Trump shared these concerns when it filed its antitrust lawsuit against AT&T to block its acquisition of Time Warner content, on the grounds that the merger would harm online video competition. But that’s just the tip of the iceberg.
Back in 2012, AT&T blocked its customers from using FaceTime, Apple’s video chat app, unless they switched to data plans that were generally more expensive. Not only was this a clear case of blocking based on content for purely business reasons, AT&T tried to claim that doing so didn’t violate net neutrality. This two-faced argument shows just how far the company is willing to go in its double-speak to get away with violating real net neutrality.
If AT&T wants the public to take its “Internet Bill of Rights” advocacy seriously, rather than come across as disingenuous in its public relations campaign, it needs to change how it lobbies Congress and the state legislatures. Rather than actively opposing every effort to restore network neutrality and privacy, it should be supporting those efforts. Until then, this is just another example of a major ISP co-opting a message it fought hard to defeat (and lost), now pretending to support it in hopes that Internet users look the other way.
Last month, Congress reauthorized Section 702, the controversial law the NSA uses to conduct some of its most invasive electronic surveillance. With Section 702 set to expire, Congress had a golden opportunity to fix the worst flaws in the NSA’s surveillance programs and protect Americans’ Fourth Amendment rights to privacy. Instead, it reupped Section 702 for six more years.
But the bill passed by Congress and signed by the president, labeled S. 139, didn’t just extend Section 702’s duration. It also may expand the NSA’s authority in subtle but dangerous ways.
The reauthorization marks the first time that Congress passed legislation that explicitly acknowledges and codifies some of the most controversial aspects of the NSA’s surveillance programs, including “about” collection and “backdoor searches.” That will give the government more legal ammunition to defend these programs in court, in Congress, and to the public. It also suggests ways for the NSA to loosen its already lax self-imposed restraints on how it conducts surveillance.
Background: NSA Surveillance Under Section 702
First passed in 2008 as part of the FISA Amendments Act—and reauthorized last week until 2023—Section 702 is the primary legal authority that the NSA uses to conduct warrantless electronic surveillance against non-U.S. “targets” located outside the United States. The two publicly known programs operated under Section 702 are “upstream” and “downstream” (formerly known as “PRISM”).
Section 702 differs from other foreign surveillance laws because the government can pick targets and conduct the surveillance without a warrant signed by a judge. Instead, the Foreign Intelligence Surveillance Court (FISC) merely reviews and signs off on the government’s high-level plans once a year.
In both upstream and downstream surveillance, the intelligence community collects and searches communications it believes are related to “selectors.” Selectors are search terms that apply to a target, like an email address, phone number, or other identifier.
Under downstream, the government requires companies like Google, Facebook, and Yahoo to turn over messages “to” and “from” a selector—gaining access to things like emails and Facebook messages.
Under upstream, the NSA relies on Internet providers like AT&T to provide access to large sections of the Internet backbone, intercepting and scanning billions of messages rushing between people and through websites. Until recently, upstream resulted in the collection of communications to, from, or about a selector. More on “about” collection below.
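The distinction between “to/from” matching and “about” matching can be sketched in a toy example. This is purely illustrative (the message format, addresses, and selector are all hypothetical, not drawn from any actual NSA system), but it shows why “about” matching sweeps in communications between two people who are not targets at all:

```python
# Hypothetical messages; "target@example.com" stands in for a selector.
messages = [
    {"to": "target@example.com", "from": "alice@example.com",
     "body": "Meeting at noon."},
    {"to": "bob@example.com", "from": "carol@example.com",
     "body": "You can reach them at target@example.com."},
]

selector = "target@example.com"

# "To/from" matching: only messages addressed to or sent from the selector.
to_from = [m for m in messages if selector in (m["to"], m["from"])]

# "About" matching additionally scans message contents, sweeping in the
# message between two non-targets that merely mentions the selector.
about = [m for m in messages
         if selector not in (m["to"], m["from"]) and selector in m["body"]]

print(len(to_from))  # 1: the message sent to the target
print(len(about))    # 1: the bystanders' message that mentions the target
```

In the toy data, Carol’s message to Bob is collected under “about” matching even though neither of them is a target; that is exactly the breadth the examples below describe.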
The overarching problem with these programs is that they are far from “targeted.” Under Section 702, the NSA collects billions of communications, including those belonging to innocent Americans who are not actually targeted. These communications are then placed in databases that other intelligence and law enforcement agencies can access—for purposes unrelated to national security—without a warrant or any judicial review.
In countless ways, Section 702 surveillance violates Americans’ privacy and other constitutional rights, not to mention the millions of people around the world whose right to communications privacy is also ignored.
This is why EFF vehemently opposed the Section 702 reauthorization bill that the President recently signed into law. We’ve been suing since 2006 over the NSA’s mass surveillance of the Internet backbone and trying to end these practices in the courts. While S. 139 was described by some as a reform, the bill was really a total failure to address the problems with Section 702. Worse still, it may expand the NSA’s authority to conduct this intrusive surveillance.
Codified “About” Collection
One key area where the new reauthorization could expand Section 702 is the practice commonly known as “about” collection (or “abouts” collection in the language of the new law). For years, when the NSA conducted its upstream surveillance of the Internet backbone, it collected not just communications “to” and “from” a selector like an email address, but also messages that merely mentioned that selector in the message body.
This is a staggeringly broad dragnet tactic. Have you ever written someone’s phone number inside an email to someone else? If that number was an NSA selector, your email would have been collected, though neither you nor the email’s recipient was an NSA target. Have you ever mentioned someone’s email address through a chat service at work? If that email address was an NSA selector, your chat could have been collected, too.
“About” collection involves scanning and collecting the contents of Americans’ Fourth Amendment-protected communications without a warrant. That’s unconstitutional, and the NSA should never have been allowed to do it in the first place. Unfortunately, the FISC and other oversight bodies tasked with overseeing Section 702 surveillance often ignore major constitutional issues.
So the FISC permitted “about” collection to go on for years, even though the collection continued to raise complex legal and technical problems. In 2011, the FISC warned the NSA against collecting too many “non-target, protected communications,” in part due to “about” collection. Then the court imposed limits on upstream, including in how “about” communications were handled. And when the Privacy and Civil Liberties Oversight Board issued its milquetoast report on Section 702 in 2014, it said that “about” collection pushed “the entire program close to the line of constitutional reasonableness.”
For its part, the NSA asserted that “about” collection was technically necessary to ensure the agency actually collected all the to/from communications it was supposedly entitled to.
In April 2017, we learned that the NSA’s technical and legal problems with “about” collection were even more pervasive than previously disclosed, and it had not been complying with the FISC’s already permissive limits. As a result, the NSA publicly announced it was ending “about” collection entirely. This was something of a victory, following years of criticism and pressure from civil liberties groups and internal government oversight. But the program suspension rested on technical and legal issues that may change over time, and not a change of heart or a controlling rule. Indeed, the suspension is not binding on the NSA in the future, since it could simply restart “about” collection once it figured out a “technical” solution to comply with the FISC’s limits.
Critically, as originally written, Section 702 did not mention “about” collection. Nor did Section 702 provide any rules on collecting, accessing, or sharing data obtained through “about” collection.
But the new reauthorization codifies this controversial NSA practice.
According to the new law, “The term ‘abouts communication’ means a communication that contains a reference to, but is not to or from, a target of an acquisition authorized under section 702(a) of the Foreign Intelligence Surveillance Act of 1978.”
Under the new law, if the intelligence community wants to restart “about” collection, it has a path to doing so that includes finding a way to comply with the FISC’s minimal limitations. Once that’s done, an affirmative act of Congress is required to prevent it. If Congress does not act, then the NSA is free to continue this highly invasive “about” collection.
Notably, by including collection of communications that merely “contain a reference to . . . a target,” the new law may go further than the NSA’s prior practice of collecting communications content that contained specific selectors. The NSA might well argue that the new language allows them to collect emails that refer to targets by name or in other less specific ways, rather than actually containing a target’s email address, phone number, or other “selectors.”
Beyond that, the reauthorization codifies a practice that, up to now, has existed solely due to the NSA’s interpretation and implementation of the law. Before this year’s Section 702 reauthorization, the NSA could not credibly argue Congress had approved the practice. Now, if the NSA restarts “about” collection, it will argue it has express statutory authorization to do so. Explicitly codifying “about” collection is thus an expansion of the NSA’s spying authority.
Finally, providing a path to restart that practice absent further Congressional oversight, when that formal procedure did not exist before, is an expansion of the NSA’s authority.
For years, the NSA has pushed its boundaries. According to multiple unsealed FISC opinions, the NSA has repeatedly violated its own policies on collection, access, and retention. Infamously, relying on an unjustifiable interpretation of a separate statute—Section 215—the NSA illegally conducted bulk collection of Americans’ phone records for years, having begun that collection without any court or statutory authority whatsoever and only later persuading the FISC to condone it.
History teaches that when Congress gives the NSA an inch, the NSA will take a mile. So we fear that the new NSA spying law’s unprecedented language on “about” collection will contribute to an expansion of the already excessive Section 702 surveillance.
Codified Backdoor Searches
The Section 702 reauthorization provides a similar expansion of the intelligence community’s authority to conduct warrantless “backdoor searches” of databases of Americans’ communications. To review, the NSA’s surveillance casts an enormously wide net, collecting (and storing) billions of emails, chats, and other communications involving Americans who are not targeted for surveillance. The NSA calls this “incidental collection,” although it is far from unintended. Once collected, these communications are often stored in databases which can be accessed by other agencies in the intelligence community, including the FBI. The FBI routinely runs searches of these databases using identifiers belonging to Americans when starting—or even before officially starting—investigations into domestic crimes that may have nothing to do with foreign intelligence issues. As with the initial collection, government officials conduct backdoor searches of Section 702 communications content without getting a warrant or other individualized court oversight—which violates the Fourth Amendment.
Just as with "about" collection, nothing in the original text of Section 702 authorized or even mentioned the unconstitutional practice of backdoor searches. That did not stop the FISC from approving backdoor searches under certain circumstances, and other courts have upheld surveillance conducted under Section 702 while ignoring whether these searches are constitutional.
Just as with "about" collection, the latest Section 702 reauthorization acknowledges backdoor searches for the first time. It imposes a warrant requirement only in very narrow circumstances: where the FBI runs a search in a “predicated criminal investigation” not connected to national security. Under FBI practice, a predicated investigation is a formal, advanced case. By all accounts, though, backdoor searches are normally used far earlier. In other words, the new warrant requirement will rarely, if ever, apply. It is unlikely to prevent a fishing expedition through Americans’ private communications. Even where a search is inspired by a tip about a serious domestic crime, the FBI should not have warrantless access to a vast trove of intimate communications that would otherwise require complying with stringent warrant procedures.
But following the latest reauthorization, the government will probably argue that Congress gave its OK to the FBI searching sensitive data obtained through NSA spying under Section 702, and using it in criminal cases against Americans.
In sum, the latest reauthorization of Section 702 is best seen as an expansion of the government’s spying powers, and not just an extension of the number of years that the government may exercise these powers. Either way, the latest reauthorization is a massive disappointment. That’s why we’ve pledged to redouble our commitment to seek surveillance reform wherever we can: through the courts, through the development and spread of technology that protects our privacy and security, and through Congressional oversight.
President Donald Trump’s first State of the Union address last night was remarkable for two reasons: for what he said, and for what he didn’t say.
The president took enormous pride last night in claiming to have helped “extinguish ISIS from the face of the Earth.”
But he failed to mention that Congress passed a law at the start of this year to extend unconstitutional, invasive NSA surveillance powers. Before it passed the House, the Senate, and received the president’s signature, the law was misrepresented by several members of Congress and by the president himself.
On the morning the House of Representatives voted to move the law to the Senate, the president weighed in on Twitter, saying that “today’s vote is about foreign surveillance of foreign bad guys on foreign land.”
Make no mistake: the bill he eventually signed—S. 139—very much affects American citizens. That bill reauthorized Section 702, originally enacted as part of the FISA Amendments Act—a legal authority the NSA uses to justify its collection of countless Americans’ emails, chat logs, and browsing history without first obtaining a warrant. The surveillance allowed under this law operates largely in the dark and violates Americans’ Fourth Amendment right to privacy.
Elsewhere in his speech, the president trumpeted a future America with rebuilt public infrastructure. He foretold of “gleaming new roads, bridges, highways, railways, and waterways across our land.”
What the president didn’t say, again, is worrying. The president failed to mention that the Federal Communications Commission, now led by his personal choice of chairman, has taken significant steps toward dismantling another public good: the Internet.
Last year, the FCC voted to repeal net neutrality rules, subjecting Americans to an Internet that chooses winners and losers, fast lanes and slow ones. The FCC’s order leaves Americans open to abuse by well-funded corporations that can simply pay to have their services delivered more reliably—and quickly—on the Internet, and it creates a system where independent business owners and artists are at a disadvantage to have their online content viewed by others.
And the president last night mentioned fair trade deals and intellectual property. He complimented his administration’s efforts in rebalancing “unfair trade deals that sacrificed our prosperity and shipped away our companies, our jobs, and our Nation’s wealth.” He promised to “protect American workers and American intellectual property through strong enforcement of our trade rules.”
Trump didn’t mention that the United States' demands for the copyright and patent sections of a renegotiated NAFTA closely mirror those of the TPP, with its unfair expansion of copyright law. It’s ironic that one of the TPP’s most vocal critics would seemingly champion one of its most dangerous components.
The president gave Americans a highlight reel last night about his perceived accomplishments. But he neglected to tell the full story about his first year in the White House.
As civil liberties are threatened and constitutional rights are violated, EFF is continuing to fight. We are still supporting net neutrality. We are still taking the NSA to court over unconstitutional surveillance. And we are still working to protect and expand your rights in the digital world, wherever the fight may take us.
EFF Asks California Court to Reverse Ruling That Could Weaken Open Records Rules, Impede Public Access to Government Records
State agencies in California are collecting and using more data than ever before, and much of it includes very personal information about California residents. This presents a challenge for agencies and the courts: how to make government-held data that is indisputably of public interest available under the state’s public records laws while still protecting the privacy of Californians.
EFF filed an amicus brief today urging a state appeals court to reverse a San Francisco trial judge’s ruling that would impede and possibly preclude the public’s ability to access state-held data that includes private information on individuals—even if that data is anonymized or redacted to protect privacy.
The California Public Records Act (CPRA) has a strong presumption in favor of disclosure of state records. And the California state constitution recognizes that public access to government information is paramount in enabling people to oversee the government and ensure it’s acting in their best interest. But the state constitution also recognizes a strong privacy right, so public access to information must be carefully balanced with personal privacy.
To keep records secret, agencies must show that concealment, not transparency, best serves the public interest. This balancing test was at issue in a lawsuit brought by UCLA law professor Richard Sander and the First Amendment Coalition (FAC), who are seeking access to information from the California Bar Association about the race, ethnicity, test scores, and other information of tens of thousands of people who took the state bar exam to become lawyers. The state bar refused to release the data to protect the confidentiality of test-takers, even though no names or personal identifying information would be disclosed. The case is Sander v. State Bar of California.
A trial court sided with the bar. The case eventually went all the way to the California Supreme Court, which correctly recognized the strong public interest in disclosing the data so the effect of law school admissions policies on exam performance could be studied. It’s “beyond dispute” that the public has the right to access the information, the court said in a unanimous decision, as long as the identity of individual test takers remained confidential. It sent the case back to the trial court to decide if and how much material could be released.
This is where things took a wrong turn. Sander and FAC presented several possible protocols to protect bar exam takers’ privacy, including three anonymization techniques, but the trial court ruled that, even under these protocols, the data couldn’t be released. The court improperly placed the burden on Sander and FAC to show that there was absolutely no way anyone’s identity could be revealed—including if the anonymized data were combined with other obscure but publicly available personal information. In doing so, the court failed to adhere to the CPRA’s balancing tests, which require the state bar to show that the public interest in protecting the privacy of bar takers—even after their data is stripped of identifying information—clearly outweighs the public interest in the data.
In a particularly dangerous finding, the court held the CPRA couldn’t require the state bar to apply anonymization protocols because that would constitute creating a “new record” from the existing data. However, the CPRA clearly requires agencies to produce as much public information as possible, even if that means using a “surgical scalpel” to separate information that’s exempt from disclosure under the CPRA from non-exempt information. Techniques for protecting exempt information while still releasing otherwise non-exempt government records that are of great interest to the public must evolve as the government’s means of collecting, compiling, and maintaining such records has evolved. Protocols that propose to anonymize data, such as those presented by Sander and FAC, represent one such technique. California courts should not avoid grappling with whether anonymization can protect privacy by dismissing it out of hand as the creation of a “new record.”
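To make the idea concrete, one common family of anonymization techniques strips direct identifiers from each record and generalizes quasi-identifiers, such as exact scores, into ranges so that individual records become harder to re-identify. The sketch below is purely hypothetical: the record fields, values, and bucket width are invented for illustration and are not drawn from the actual protocols Sander and FAC proposed.

```python
# Hypothetical bar-exam-style records with a direct identifier ("name")
# and a quasi-identifier ("score") that could aid re-identification.
records = [
    {"name": "A. Example", "year": 1997, "score": 1441, "passed": True},
    {"name": "B. Example", "year": 1997, "score": 1388, "passed": False},
]

def anonymize(record, bucket=50):
    # Drop the direct identifier entirely.
    out = {k: v for k, v in record.items() if k != "name"}
    # Generalize the exact score into a range, reducing uniqueness.
    low = (record["score"] // bucket) * bucket
    out["score_range"] = f"{low}-{low + bucket - 1}"
    del out["score"]
    return out

released = [anonymize(r) for r in records]
print(released[0])  # {'year': 1997, 'passed': True, 'score_range': '1400-1449'}
```

The point of the dispute is not whether such transformations are possible (they plainly are), but whether applying one to existing data creates a “new record” outside the CPRA’s reach.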
California’s Public Records Act is a vital check on government secrecy. With the explosive growth of government data, particularly law enforcement surveillance data, we can’t stand by while courts sidestep the task of evaluating anonymization protocols that will increasingly play a role in balancing public access rights under the CPRA and laws like it in other states. If upheld, the Sander ruling could weaken the public’s ability to access other electronic records and government data that contain private identifying information. EFF has fought in court to gain access to license plate records indiscriminately collected on millions of drivers by Los Angeles law enforcement agencies. The California Supreme Court ruled that police can’t keep those records secret, paving the way for EFF to analyze how this huge surveillance program works. But the records could identify drivers, so the next step is to figure out how the data can be made public in a redacted or anonymized form that protects drivers’ privacy. We are watching the Sander case closely, and hope the appeals court does the right thing: reverse the trial court’s findings, require it to fully address the proposed anonymization protocols, and properly apply the balancing tests under the CPRA.

Related Cases: Automated License Plate Readers - ACLU of Southern California & EFF v. LAPD & LASD
Yesterday, the California Senate approved legislation that would require Internet service providers (ISPs) in California to follow the now-repealed 2015 Open Internet Order. While well-intentioned, the legislators sadly chose an approach that is vulnerable to legal attack.
The 2015 Open Internet Order from the Federal Communications Commission provided important privacy and net neutrality protections, such as banning blocking, throttling, and paid prioritization. It is important for states to fill the void left behind by the FCC’s abandonment of those protections.
States are constrained, however, because federal policy can override, or “pre-empt,” state regulation in many circumstances. State law that doesn’t take this into account can be invalidated by the federal law. It’s a waste to pass a bill that is vulnerable to legal challenge by ISPs when strong alternatives are available.
In a letter to the California Senate, EFF provided legal analysis explaining how the state can promote network neutrality in a legally sustainable way. Unfortunately, SB 460, the legislation approved by the California Senate, lacks many of the things EFF’s letter addressed.

Better Approaches Left Behind by SB 460
Today, California spends hundreds of millions of dollars on ISPs, including AT&T, as part of its California broadband subsidy program. The state could require recipients of that funding to provide a free and open Internet, ensuring that taxpayer funds benefit California residents rather than subsidizing a discriminatory network. This is one of the strongest means the state has to promote network neutrality, and it is missing from SB 460.
California also has oversight and power over more than 4 million utility poles that ISPs benefit from accessing to deploy their networks. In fact, California is expressly empowered by federal law to regulate access to the poles and the state legislature can establish network neutrality conditions in exchange for access to the poles. Again, that is not in the current bill passed by the Senate.
Lastly, each city negotiates a franchise with the local cable company, and the company often agrees to a set of conditions in exchange for access to valuable, taxpayer-funded rights of way. California’s legislature can directly empower local communities to negotiate with ISPs, requiring network neutrality in exchange for the benefit of accessing taxpayer-funded infrastructure. This is also not included in the current bill.
States Should Put Their Full Weight in Support of Network Neutrality
Any state moving legislation to promote network neutrality should invoke all valid authority to do so. At the very least, California should view the additional legal approaches we have recommended as backups, to be relied upon if the current proposal is held invalid by a court.
If SB 460’s approach to directly regulating ISPs is found to be invalid, then ultimately all the legislation does is require state agencies to contract with ISPs that follow the 2015 Open Internet Order. While that is an important provision, it could already be required tomorrow with the stroke of a pen under a Governor’s Executive Order, in much the same way as in Montana and New York. And while the 2015 Open Internet Order was a good start, why not bring to bear all the resources a state has to secure such an important principle for Californians?
EFF hopes that subsequent network neutrality legislation, such as Senator Wiener’s SB 822, can cover what is missing from SB 460, or that future amendments in the legislative process can bring the full weight of the state of California to bear in favor of network neutrality. Both options remain available, and it is our hope that California’s legislators understand that the millions of Americans fighting hard to keep the Internet free and open expect elected officials who side with us to deploy their power wisely and effectively.
The importance of keeping the Internet free and open necessitates nothing less.
For more than three years now, we’ve been highlighting weak patents in our Stupid Patent of the Month series. Often we highlight stupid patents that have recently been asserted, or ones that show how the U.S. patent system is broken. This month, we’re using a pretty silly patent in the U.S. to highlight that stupid U.S. patents may soon—depending on the outcome of a current Supreme Court case—effectively become stupid patents for the entire world.
Lenovo was granted U.S. Patent No. 9,875,007 [PDF] this week. The patent, entitled “Devices and Methods to Receive Input at a First Device and Present Output in Response on a Second Device Different from the First Device,” relates to presenting materials on different screens.
The first claim of the patent is relatively easy to read and understand, for a patent. What Lenovo claims is:
A first device, comprising:
at least one processor;
storage accessible to the at least one processor and bearing instructions executable by the at least one processor to:
receive user input to present an output on a display; and
determine a second device different from the first device on which to present the output based at least in part on identification by the first device of the second device as having a relatively larger display on which to present the output than the first device.
This claim describes a distinction a child may make, in asking a parent to put something up on the “big screen.” It covers a generic computing device, programmed to make a comparison between the size of display screens, and choose one of the screens based on that comparison, something that any person would know how to do, and any programmer would know how to implement. A review of what happened [PDF] at the Patent Office shows the fine (and trivial) distinctions Lenovo made over what was known in order to claim there was an invention. Lenovo argued that although previous technologies allowed for displaying material on second devices with larger screens, those technologies didn’t do it by identifying second devices by determining the size of the screens. Even if what Lenovo claims is true (though we have doubts that they were the first to do this), we’re not sure why as a public policy matter Lenovo should be entitled to a monopoly on this “invention.” It seems more the product of basic skill and design rather than anything inventive. Generally, people are not supposed to get patents on things that are obvious.
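To illustrate just how thin the claim is, here is a hypothetical sketch in Python of the core logic it describes: compare display sizes and route output to the larger screen. The device names, sizes, and function names are invented for illustration; nothing here is drawn from Lenovo’s actual products or code.

```python
# Hypothetical illustration of the claimed "invention": given several
# devices, identify the one with the relatively larger display and
# present the output there. All names and sizes are made up.

def choose_output_device(devices):
    """Return the device with the largest display."""
    return max(devices, key=lambda d: d["display_inches"])

devices = [
    {"name": "phone", "display_inches": 6.1},
    {"name": "tablet", "display_inches": 11.0},
    {"name": "tv", "display_inches": 55.0},
]

# The "big screen" wins the comparison, just as a child would expect.
print(choose_output_device(devices)["name"])  # → tv
```

A one-line size comparison is, of course, exactly the sort of routine design choice any programmer would reach for, which is the point of the example.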
It’s quite possible that Lenovo will never assert this patent against anyone, and it will become like many patents, just a piece of paper on a wall. But what if Lenovo decided to assert this patent?
We’re highlighting this patent in order to bring attention to the fact that a U.S. Supreme Court case being decided this term could make this patent not just a stupid U.S. patent, but effectively a stupid worldwide patent.
Generally, countries’ patent laws only have domestic effect. If you want to have patent protection in the U.S., you need to file for a U.S. patent and show your application complies with U.S. patent law. If you want protection in India, you also need to file in India and show it complies with Indian patent law. There are differences between the patent laws of various countries; some provide more protection, some provide less. That patent laws differ in different countries is generally considered a feature, not a bug.
There’s an important exception in the U.S., however, to this general idea that patent rights are limited to a particular country. Under the Patent Act, specifically 35 U.S.C. § 271(f), if combining certain parts would constitute patent infringement in the U.S., then someone who knowingly supplies those same parts with the intention that they be combined abroad is also liable for infringement. Basically, you can’t make parts A and B in the U.S., ship them abroad and tell people to combine them into AB, if you know that you’d infringe a patent in the U.S. if you just combined them into AB in the U.S. and then shipped them abroad. The point of this narrow rule is to prevent people from offshoring the final step of a process, where they’re purposefully doing so in order to evade U.S. patent rights.
There is a new case currently pending at the U.S. Supreme Court called WesternGeco v. Ion Geophysical that relates to this fairly narrow law, § 271(f). The question the Supreme Court has been asked is: if someone is liable for patent infringement under § 271(f), can the patent owner recover lost profits relating to the combinations that were made abroad? If confined to § 271(f), the result (whichever way the court rules) would be a fairly narrow decision. Most patent infringement cases are not brought under that provision, and the circumstances that would cause infringement under it don’t arise very often.
But at an earlier stage of the case, the Solicitor General, whose opinion is often given great weight by the Supreme Court, advanced [PDF] a startling idea: in every case of patent infringement, not just those brought under § 271(f), patent owners should be able to collect damages for any act that was a foreseeable result of the U.S.-based infringement, even if the act occurred completely abroad.
This would result in a dramatic expansion of the scope of U.S. patent remedies, making them, effectively, act the same as worldwide patent rights for any product that has a global market.
An example is useful here. Suppose a display systems designer BobCo designed a system that it sold to Lenovo’s competitor CompCo in China to include in CompCo’s goods built in China. CompCo sells its goods globally, and some of the goods end up in the U.S., infringing on Lenovo’s patent in the United States. Under the Solicitor General’s view, Lenovo should be able to sue BobCo for violating the U.S. patent, and recover all the profits that BobCo made from CompCo for all the goods sold worldwide—regardless of whether they ever ended up in the U.S. Lenovo could get this reward despite the fact that all of BobCo’s acts (other than importing a few of the products) occurred abroad. Even though technically only the products that entered the U.S. infringed the U.S. patent, Lenovo’s remedy for violation of those rights would include recovery for all products worldwide.
This example is essentially the facts of a case called Power Integrations, where the U.S. Court of Appeals for the Federal Circuit (correctly, in our view) rejected such a broad scope of remedies, stating that U.S. patent laws “do not provide compensation for a defendant’s foreign exploitation of a patented invention, which is not an infringement at all.” The Solicitor General is now challenging this rule.
It’s not hard to see how the Solicitor General’s rule could interfere with rights held by others abroad—including the rights of the public—and would mean that U.S. patent rights would effectively be exported to other countries. Innovations that are in the public domain in China or Germany, for example, could suddenly become more expensive because a U.S. patent holder gets to impose a cost on goods in those countries simply because some of them end up in the U.S. Indeed, even though Lenovo has applied for a patent in China that is related to the U.S. patent, no patent has (yet) issued. Lenovo’s attempt to get a patent in Germany has also not yet been successful. But under the Solicitor General’s theory, if sales in Germany or China were a foreseeable result of infringing sales in the U.S., Lenovo could impose costs in those countries regardless of whether it was actually entitled to a patent in those jurisdictions. We’re also concerned that such a rule would lead to a race to the bottom: if U.S. courts impose a worldwide patent tax to benefit U.S. patent holders, other countries may try to do the same. This could leave companies subject to multiple claims of infringement with multiple awards of worldwide damages, again increasing costs to consumers.
We’ve now spent years highlighting some pretty silly U.S. patents. We hope the Supreme Court rejects the Solicitor General’s desire to expand the scope of U.S. patent remedies, and refuses to turn stupid U.S. patents into, effectively, stupid worldwide patents.
This article was written by Edison Lanza, Special Rapporteur for Freedom of Expression of the Inter-American Commission on Human Rights.
In little more than 20 years, the Internet’s potential became evident: for the exercise of freedoms, for education, through the impact of social networks, and in the revolution it brought to commerce, entertainment, and innovation. Of course, a change of this nature also brings challenges, such as the spread of speech that incites violence; risks to privacy; the need to bring all of humanity onto the network; the circulation of fake news; and the role of platforms in the flow of information. Even so, the benefits and positive impacts of the Internet seemed to justify optimism about the course of the digital revolution.
But the end of history, as we know, is not just around the corner. On December 14, 2017, President Donald Trump’s administration took a step with the potential to change the nature of the Internet as a democratizing force by repealing, at the federal level, the rule that guaranteed net neutrality.
That rule, adopted by the Federal Communications Commission (FCC) during President Barack Obama’s administration, treated the Internet as a public telecommunications service. Net neutrality prohibited ISPs (Internet service providers) offering broadband (fixed and mobile) from manipulating Internet traffic in any of the following ways: 1) blocking any legal content or data packet (regardless of its origin, device, or recipient); 2) slowing down certain content or applications relative to others; and 3) favoring some traffic over other traffic by creating fast lanes for certain applications in exchange for payment.
That decision came after a decade of legal disputes. Telecommunications companies argued that the investments needed to expand Internet access (acquiring spectrum, installing antennas, laying fiber optic directly to the home, etc.) fell on them, but that neutrality prevented them from developing a segmented business model based on offering differentiated access to certain services or applications according to each user’s needs. By this argument, technology companies, by contrast, enjoyed complete freedom at the Over The Top (OTT) level to drive traffic toward ever more sophisticated applications, which in turn increased the demand for more bandwidth.
Silicon Valley responded that the problem was never the net neutrality principle, but rather the telcos’ failure to understand the new economy: after all, they argue, text messaging on mobile phones emerged before Internet applications, yet the telcos failed to see what was right in front of them. Nothing prevented them from developing video on demand, online shopping, or transportation apps, to cite a few examples of innovation.
A few weeks ago, in a divided vote, the Republican majority on the FCC (3 to 2) eliminated net neutrality: the Internet is now a private information service, and intermediaries are only obligated to be transparent about how they manage the network. The FCC also ceded to the Federal Trade Commission its authority to regulate possible monopolies and oligopolies, and mergers or acquisitions that would produce excessive levels of concentration on the Internet.
Although those who celebrated the measure do not advocate blocking content for political or ideological reasons, they do claim that market forces, once freed from regulation, will generate new businesses and more investment, and will offer better conditions for Internet access.
In response, a group of 20 scientists and engineers considered the founding fathers of the Internet wrote an open letter to the U.S. Congress stating that the new FCC majority does not understand how the Internet works. In it, they warn of the impact that the end of net neutrality would have on innovation and on the right to create, share, and access information online.
Arguments aside, from a human rights perspective the change approved by the FCC raises serious concerns that we, the Special Rapporteurs for Freedom of Expression, have made clear. The Internet developed from certain design principles whose sustained application over time has allowed for a decentralized, open, and neutral environment. The intelligence of the network lies at its edges: value is generated by anyone who can connect to the network and upload and share information, ideas, and applications.
The original Internet environment has been key precisely to guaranteeing the freedom to seek, receive, and impart information regardless of frontiers, and it has undoubtedly had a positive effect on diversity and pluralism. Indeed, this characteristic (neutrality) was elevated to a fundamental principle both in the inter-American human rights system and in the universal system, through various declarations and decisions.
It should be noted that the legal battle to preserve the principle in the United States is just beginning: the attorneys general of 20 states, among them California and New York, have already filed suits against the new rule in order to safeguard these freedoms, a relevant fact in a country where the First Amendment is taken seriously. Although it is unlikely the Republican majority will reverse an Executive policy, Congress is considering overturning the decision (polls indicate that the idea of a free and open Internet is shared by 70% of the population). Moreover, states can enact rules requiring observance of the neutrality principle within their jurisdictions; indeed, the state of Montana has just adopted a measure to guarantee net neutrality within its borders.
Now, what would happen if the new model prevails? The decentralized network we know would undoubtedly become a centralized space, with a few intermediaries holding the power to distribute access to applications and content. Concentration and mergers between telecommunications companies and technology companies would likely accelerate, and this could relegate small ventures to a low-quality Internet; in the end, for the average user the Internet could become a fragmented space dominated by a handful of applications. Another, more technologically optimistic view suggests that despite this retreat from principles, the network will not change its nature: ISPs will not deploy radical censorship, and a good share of consumers, particularly millennials and the generations that follow, will keep demanding access to a complete, open, and neutral Internet.
One could argue that some platforms and giants like Google and Facebook already have an excessive degree of concentration and can compete with the telecommunications corporations. That is true, but under net neutrality the thousands of sites that make up digital life draw on (and serve) the largest networks, coexisting in an open ecosystem.
And finally, it is worth asking: What impact will this measure have on the rest of the world? In Latin America, Argentina, Brazil, Mexico, and Chile have already adopted laws guaranteeing this principle. Will there be a contagion effect in the region? What will the telecommunications companies operating in the region do? What model will Europe and the Nordic countries follow, some of which have elevated access to a free and open Internet to a constitutional right? And will authoritarian governments around the world use the end of net neutrality to justify even more aggressive policies of blocking and filtering media outlets, websites, and applications they consider a danger to their regime’s survival?
The California Senate has rejected a bill to allow drivers to protect their privacy by applying shields to their license plates when parked. The simple amendment to state law would have served as a countermeasure against automated license plate readers (ALPRs) that use plates to mine our location data.
As is the case with many privacy bills, S.B. 712 had received strong bipartisan support since it was first introduced in early 2017. The bill was sponsored by Sen. Joel Anderson, a prominent conservative Republican from Southern California, and received aye votes from Sens. Nancy Skinner and Scott Wiener, both Democrats representing the Bay Area.
Each recognized that ALPR data represents a serious threat to privacy, since ALPR data can reveal where you live, where you work, where you worship, and where you drop your kids at school. Law enforcement exploits this data with insufficient accountability measures. It is also sold by commercial vendors to lenders, insurance companies, and debt collectors.
Just last week, news broke that Immigration & Customs Enforcement would be exploiting a database of more than 6.5 billion license plate scans collected by a private vendor.
This measure was a simple way to empower people to protect information about where they park their cars, be it an immigration resource center, a reproductive health center, a marijuana dispensary, a place of worship, or a gun show.
Facing lobbying from law enforcement interests, senators killed the bill by a 12-18 vote.
Privacy on our roadways is one of the most pressing issues in transit policy. The federal government—including the Drug Enforcement Administration and Immigration & Customs Enforcement—is ramping up its efforts to use ALPR data, including data procured from private companies. Major vulnerabilities in computer systems are revealing how dangerous it can be for government agencies and private data brokers to store our sensitive personal information.
If the Senate is going to begin 2018 by killing a driver privacy measure, it is incumbent on senators to spend the rest of the year probing the issue to find a new solution.
Related Cases: Automated License Plate Readers (ALPR)
On January 25th, Reuters reported that software companies like McAfee, SAP, and Symantec allow Russian authorities to review their source code, and that "this practice potentially jeopardizes the security of computer networks in at least a dozen federal agencies." The article goes on to explain what source code review looks like and which companies allow source code reviews, and reiterates that "allowing Russia to review the source code may expose unknown vulnerabilities that could be used to undermine U.S. network defenses."
The article’s framing implies that requesting code reviews is malicious behavior. This is simply not the case. Reviewing source code is an extremely common practice, conducted by ordinary companies as well as software and security professionals, to verify certain safety guarantees about the software being installed. The article also notes that “Reuters has not found any instances where a source code review played a role in a cyberattack.” At EFF, we routinely conduct code reviews of any software that we elect to use.
Just to be clear, we don’t want to downplay foreign threats to U.S. cybersecurity, or encourage the exploitation of security vulnerabilities—on the contrary, we want to promote open-source and code review practices as stronger security measures. EFF strongly advocates for the use and spread of free and open-source software for this reason.
Not only are software companies disallowing foreign governments from conducting source code reviews, trade agreements are now being used to prohibit countries from requiring the review of the source code of imported products. The first such prohibition in a completed trade agreement will be in the Comprehensive and Progressive Trans-Pacific Partnership (CPTPP, formerly just the TPP), which is due to be signed in March this year. A similar provision is proposed for inclusion in the modernized North American Free Trade Agreement (NAFTA), and in Europe’s upcoming bilateral trade agreements. EFF has expressed our concern that such prohibitions on mandatory source code review could stand in the way of legitimate measures to ensure the safety and quality of software such as VPN and secure messaging apps, and devices such as routers and IP cameras.
The implicit assumption that "keeping our code secret makes us safer" is extremely dangerous. Security researchers and experts have made it explicit time and time again that relying solely on security through obscurity simply does not work. Even worse, it gives engineers a false sense of safety, and can encourage further bad security practices.
Even in times of political tension and uncertainty, we should keep our wits about us. Allowing code review is not a direct affront to national security—in fact, we desperately need more of it.
Private Censorship Is Not the Best Way to Fight Hate or Defend Democracy: Here Are Some Better Ideas
From Cloudflare’s headline-making takedown of the Daily Stormer last autumn to YouTube’s summer restrictions on LGBTQ content, there's been a surge in “voluntary” platform censorship. Companies—under pressure from lawmakers, shareholders, and the public alike—have ramped up restrictions on speech, adding new rules, adjusting their still-hidden algorithms and hiring more staff to moderate content. They have banned ads from certain sources and removed “offensive” but legal content.
These moves come in the midst of a fierce public debate about what responsibilities platform companies that directly host our speech have to take down—or protect—certain types of expression. And this debate is occurring at a time in which only a few large companies host most of our online speech. Under the First Amendment, intermediaries generally have a right to decide what kinds of expression they will carry. But just because companies can act as judge and jury doesn’t mean they should.
To begin with, a great deal of problematic content sits in the ambiguous territory between disagreeable political speech and abuse, between fabricated propaganda and legitimate opinion, between things that are legal in some jurisdictions and not others. Or they’re things some users want to read and others don’t. If many cases are in grey zones, our institutions need to be designed for them.
We all want an Internet where we are free to meet, create, organize, share, associate, debate and learn. We want to make our voices heard in the way that technology now makes possible. No one likes being lied to or misled, or seeing hateful messages directed against them, or flooded across our newsfeeds. We want our elections free from manipulation and for the speech of women and marginalized communities to not be silenced by harassment. We should all have the ability to exercise control over our online environments: to feel empowered by the tools we use, not helpless in the face of others' use.
But in moments of apparent crisis, the first impulse is always to reach for simple solutions. In particular, in response to rising concerns that we are not in control, a groundswell of support has emerged for even more censorship by private platform companies, including pushing platforms toward ever-increased tracking and identification of speakers.
We are at a critical moment for free expression online and for the role of the Internet in the fabric of democratic societies. We need to get this right.
Platform Censorship Isn’t New, Hurts the Less Powerful, and Doesn’t Work
Widespread public interest in this topic may be new, but platform censorship isn’t. All of the major platforms set forth rules for their users. They tend to be complex, covering everything from terrorism and hate speech to copyright and impersonation. Most platforms use a version of community reporting. Violations of these rules can prompt takedowns and account suspensions or closures. And we have well over a decade of evidence about how these rules are used and misused.
The results are not pretty. We’ve seen prohibitions on hate speech used to shut down conversations among women of color about the harassment they receive online; rules against harassment employed to shut down the account of a prominent Egyptian anti-torture activist; and a ban on nudity used to censor women who share childbirth images in private groups. And we've seen false copyright and trademark allegations used to take down all kinds of lawful content, including time-sensitive political speech.
Platform censorship has included images and videos that document atrocities and make us aware of the world outside of our own communities. Regulations on violent content have disappeared documentation of police brutality, the Syrian war, and the human rights abuses suffered by the Rohingya. A blanket ban on nudity has repeatedly been used to take down a famous Vietnam war photo.
These takedowns are sometimes intentional, and sometimes mistakes, but like Cloudflare’s now-famous decision to boot off the Daily Stormer, they are all made without accountability and due process. As a result, most of what we know about censorship on private platforms comes from user reports and leaks (such as the Guardian’s “Facebook Files”).
Given this history, we’re worried about how platforms are responding to new pressures. Not because there’s a slippery slope from judicious moderation to active censorship—but because we are already far down that slope. Regulation of our expression, thought, and association has already been ceded to unaccountable executives, enforced by minimally trained, overworked staff and hidden algorithms. Doubling down on this approach will not make it better. And yet, no amount of evidence has convinced the powers that be at major platforms like Facebook—or in governments around the world. Instead many, especially in policy circles, continue to push for companies to—magically and at scale—perfectly differentiate between speech that should be protected and speech that should be erased.
If our experience has taught us anything, it’s that we have no reason to trust the powerful—inside governments, corporations, or other institutions—to draw those lines.
As people who have watched and advocated for the voiceless for well over 25 years, we remain deeply concerned. Fighting censorship—by governments, large private corporations, or anyone else—is core to EFF’s mission, not because we enjoy defending reprehensible content, but because we know that while censorship can be and is employed against Nazis, it is more often used as a tool by the powerful, against the powerless.
First Casualty: Anonymity
In addition to the virtual certainty that private censorship will lead to takedowns of valuable speech, it is already leading to attacks on anonymous speech. Anonymity and pseudonymity have played important roles throughout history, from secret ballots in ancient Greece to 18th century English literature and early American satire. Online anonymity allows us to explore controversial ideas and connect with people around health and other sensitive concerns without exposing ourselves unnecessarily to harassment and stigma. It enables dissidents in oppressive regimes to tell their stories with less fear of retribution. Anonymity is often the greatest shield that vulnerable groups have.
Current proposals from private companies all undermine online anonymity. For example, Twitter’s recent ban on advertisements from Russia Today and Sputnik relies on the notion that the company will be better at identifying accounts controlled by Russia than Russia will be at disguising accounts to promote its content. To make it really effective, Twitter may have to adopt new policies to identify and attribute anonymous accounts, undermining both speech and user privacy. Given the problems with attribution, Twitter will likely face calls to ban anyone from promoting a link to suspected Russian government content.
And what will we get in exchange for giving up our ability to speak online anonymously? Very little. Facebook for many years required individuals to use their “real” name (and continues to require them to use a variant of it), but that didn’t stop Russian agents from gaming the rules. Instead, it undermined innocent people who need anonymity—including drag performers, LGBTQ people, Native Americans, survivors of domestic and sexual violence, political dissidents, sex workers, therapists, and doctors.
Study after study has debunked the idea that forcibly identifying speakers is an effective strategy against those who spread bad information online. Counter-terrorism experts tell us that “Censorship has never been an effective method of achieving security, and shuttering websites and suppressing online content will be as unhelpful as smashing printing presses.”
We need a better way forward.

Step One: Start With the Tools We Have and Get Our Priorities Straight
Censorship is a powerful tool and easily misused. That’s why, in fighting back against hate, harassment, and fraud, censorship should be the last stop. Particularly from a legislative perspective, the first stop should be looking at the tools that already exist elsewhere, rather than rushing to exceptionalize the Internet. For example, in the United States, defamation laws reflect centuries of balancing the right of individuals to hold others accountable for false, reputation-damaging statements against the right of the public to engage in vigorous public debate. Election laws already prohibit foreign governments or their agents from purchasing campaign ads—online or offline—that directly advocate for or against a specific candidate. In addition, for sixty days prior to an election, foreign agents cannot purchase ads that even mention a candidate. Finally, the Foreign Agents Registration Act requires foreign entities to include a statement of attribution on the informational materials they distribute and to file copies with the U.S. Attorney General. These are all laws that could be better brought to bear, especially in the most egregious situations.
We also need to consider our priorities. Do we want to fight hate speech, or do we want to fight hate? Do we want to prevent foreign interference in our electoral processes, or do we want free and fair elections? Our answers to these questions should shape our approach, so we don’t deceive ourselves into thinking that removing anonymity in online advertising is more important to protecting democracy than, say, addressing the physical violence committed by those who spread hate, preventing voter suppression and gerrymandering, or figuring out how to build platforms that promote more informed and less polarizing conversations between the humans who use them.

Step Two: Better Practices for Platforms
But if we aren’t satisfied with those options, we have others. Over the past few years, EFF—in collaboration with Onlinecensorship.org and civil society groups around the world—has developed recommendations to companies aimed at fighting censorship and protecting speech. Many of these are contained within the Manila Principles, which provide a roadmap for companies seeking to ensure human rights are protected on their platforms.
In 2018, we’ll be working hard to push companies toward better practices around these recommendations. Here they are, in one place.

Meaningful Transparency
Over the years, we and other organizations have pushed companies to be more transparent about the speech that they take down, particularly when it’s at the behest of governments. But when it comes to decisions about acceptable speech, or what kinds of information or ads to show us, companies are largely opaque. We believe that Facebook, Google, and others should give truly independent researchers—with no bottom line or corporate interest—access to their systems so they can work with, black-box test, and audit them. Users should be told when bots are flooding a network with messages and, as described below, should have tools to protect themselves. Meaningful transparency also means allowing users to see what types of content are taken down, and what’s shown in their feed and why. It means being straight with users about how their data is being collected and used. And it means providing users with the power to set limitations on how long that data can be kept and used.

Due Process
We know that companies make enforcement mistakes, so it’s shocking that most lack robust appeals processes—or any appeals process at all. Every user should have the right to due process, including the option to appeal a company's takedown decision, in every case. The Manila Principles provide a framework for this.

Empower Users With Better Platform Tools
Platforms are building tools that let users filter ads and other content, and this should continue. This approach has been criticized for furthering “information bubbles,” but those problems are less worrisome when users are in charge and informed than when companies are making these decisions for users with one eye on their bottom lines. Users should be in control of their own online experience. For example, Facebook already allows users to choose what kinds of ads they want to see—a similar system should be put in place for content, along with tools that let users make those decisions on the fly rather than having to find a hidden interface. Use of smart filters should also continue, since they help users better choose the content they want to see and filter out the content they don’t. Facebook’s machine learning models can recognize the content of photos, so users should be able to choose a “no nudity” option rather than Facebook banning nudity wholesale. (The company could still check that option by default in countries where such content is illegal.)
When it comes to political speech, there is a desperate need for more innovation. That might include user interface designs and controls that encourage productive and informative conversations, and that label and dampen the virality of wildly fabricated material while giving readers transparency and control over that process. This is going to be a very complex and important design space in the years to come, and we’ll probably have much more to say about it in future posts.

Empower Users With Third-Party Tools
Big platform companies aren’t the only place where good ideas can grow. Right now, the larger platforms limit the ability of third parties to offer alternative experiences on the platforms by using closed APIs, blocking scraping, and limiting interoperability. They enforce their power to limit innovation on the platform through a host of laws, including the Computer Fraud and Abuse Act (CFAA), copyright regulations, and the Digital Millennium Copyright Act (DMCA). Larger platforms like Facebook, Twitter, and YouTube should facilitate user empowerment by opening their APIs even to competing services, allowing scraping, and ensuring interoperability with third-party products, even up to forking of services.

Forward Consent
Community guidelines and policing are touted as a way to protect online civility, but are often used to take down a wide variety of speech. The targets of reporting often have no idea what rule they have violated, since companies often fail to provide adequate notice. One easy way that service providers can alleviate this is by having users affirmatively accept the community guidelines point by point, and accept them again each time they change.

Judicious Filters
We worry about filtering technologies that, when implemented by the platform, automatically take down speech, because the default for online speech should always be to keep it online until a human has reviewed it. Some narrow exceptions may be appropriate—e.g., where a file is an exact match of a file already found to be infringing, where no effort was made to claim otherwise, and where avenues remain for users to challenge any subsequent takedown. But in general, platforms can and should simply use smart filters to better flag potentially unlawful content for human review, and to recognize when their user-flagging systems are being gamed by those seeking to get the platform to censor others.

Platform Competition and User Choice
Ultimately, users also need to be able to leave when a platform isn’t serving them. Real data portability is key here, and it will require companies to agree on standards for how social graph data is stored. Fostering competition in this space could be one of the most powerful incentives for companies to protect users against bad actors on their platforms, be they fraudulent, misleading, or hateful. Pressure on companies to allow full interoperability and data portability could lead to a race to the top for social networks.

No Shadow Regulations
Over the past decade we have seen the emergence of a secretive web of backroom agreements between companies that seeks to control our behavior online, often driven by governments as a shortcut and less accountable alternative to regulation. One example among many: under pressure from the UK Intellectual Property Office, search engines agreed last year to a "Voluntary Code of Practice" that requires them to take additional steps to remove links to allegedly unlawful content. At the same time, domain name registrars are under pressure to participate in copyright enforcement, including by “voluntarily” suspending domain names. Similarly, in 2016, the European Commission struck a deal with the major platforms that, while ostensibly about addressing speech that is illegal in Europe, had no place for judges and the courts, and concentrated not on the letter of the law but on the companies' terms of service.
Shadow regulation is dangerous and undemocratic; regulation should take place in the sunshine, with the participation of the various interests that will have to live with the result. To help alleviate the problem, negotiators should seek to include meaningful representation from all groups with a significant interest in the agreement; balanced and transparent deliberative processes; and mechanisms of accountability such as independent reviews, audits, and elections.

Keep Core Infrastructure Out of It
As we said last year, the problems with censorship by direct hosts of speech are tremendously magnified when core infrastructure providers are pushed to censor. The risk of powerful voices squelching the less powerful is greater, as are the risks of collateral damage. Internet speech depends on an often-fragile consensus among many systems and operators. Using that system to edit speech, based on potentially conflicting opinions about what can be spoken on the Internet, risks shattering that consensus. Takedowns by some intermediaries—such as certificate authorities or content delivery networks—are far more likely to cause collateral censorship. That’s why we’ve called these parts of the Internet free speech’s weakest links.
The firmest, most consistent defense these potential weak links can mount is simply to decline all attempts to use them as a control point. They can act to defend their role as a conduit, rather than a publisher. Companies that manage domain names, including GoDaddy and Google, should draw a hard line: they should not suspend or impair domain names based on the expressive content of websites or services.

Toward More Accountability
There are no perfect solutions to protecting free expression, but as this list of recommendations should suggest, there’s a lot that companies—as well as policymakers—can do to protect and empower Internet users without doubling down on the risky and too-often failing strategy of censorship.
We'll continue to refine and critique the proposals that we and others make, whether they're new laws, new technology, or new norms. But we also want to play our part to ensure that these debates aren't dominated by existing interests and a simple desire for rapid and irrevocable action. We'll continue to highlight the collateral damage of censorship, and especially to highlight the unheard voices who have been ignored in this debate—and have the most to lose.
Note: Many EFF staff contributed to this post. Particular thanks to Peter Eckersley, Danny O’Brien, David Greene, and Nate Cardozo.
ETICAS Releases First Ever Evaluations of Spanish Internet Companies' Privacy and Transparency Practices
It’s Spain's turn to take a closer look at the practices of their local Internet companies, and how they treat their customers’ personal data.
Spain's ¿Quien Defiende Tus Datos? (Who Defends Your Data?) is a project of ETICAS Foundation, and is part of a region-wide initiative by leading Iberoamerican digital rights groups to shine a light on Internet privacy practices in Iberoamerica. The report is based on EFF's annual Who Has Your Back? report, but adapted to local laws and realities (A few months ago Brazil’s Internet Lab, Colombia’s Karisma Foundation, Paraguay's TEDIC, and Chile’s Derechos Digitales published their own 2017 reports, and Argentinean digital rights group ADC will be releasing a similar study this year).
ETICAS surveyed a total of nine Internet companies. These companies’ logs hold intimate records of the movements and relationships of the majority of the population in the country. The five telecommunications companies surveyed—Movistar, Orange, Vodafone-ONO, Jazztel, MásMóvil—together make up the vast majority of the fixed, mobile, and broadband market in Spain. ETICAS also surveyed the four most popular online platforms for buying and renting houses—Fotocasa, Idealista, Habitaclia, and Pisos.com. ETICAS, in the tradition of Who Has Your Back?, evaluated the companies for their commitment to privacy and transparency, and awarded stars based on their current practices and public behavior. Each company was given the opportunity to answer a questionnaire, to take part in a private interview, and to send any additional information they felt appropriate, all of which was incorporated into the final report. This approach is based on EFF’s earlier work with Who Has Your Back? in the United States, although the specific questions in ETICAS’ study were adapted to match Spain’s local laws and realities.
ETICAS rankings for Spanish ISPs and phone companies are below; the full report, which includes details about each company, is available at: https://eticasfoundation.org/qdtd
ETICAS reviewed each company in five categories:
- According to law: whether they publish their law enforcement guidelines and whether they hand over data according to the law.
- Notification: whether they provide prior notification to customers of government data demands.
- Transparency: whether they publish transparency reports.
- Promote users’ privacy in courts or congress: whether they have publicly stood up for user privacy.
Companies in Spain are off to a good start but still have a ways to go to fully protect their customers’ personal data and be transparent about who has access to it. This year's report shows Telefónica-Movistar taking the lead, followed closely by Orange, but both still have plenty of room for improvement, especially on transparency reports and user notification. In 2018, competitors could catch up by providing better user notification of surveillance, publishing transparency reports and law enforcement guidelines, and making their data protection policies clear and public.
ETICAS is expected to release this report annually to incentivize companies to improve transparency and protect user data. This way, all Spaniards will have access to information about how their personal data is used and how it is controlled by ISPs so they can make smarter consumer decisions. We hope the report will shine with more stars next year.
Sharing your personal fitness goals—lowered heart rates, accurate calorie counts, jogging times, and GPS paths—sounds like a fun, competitive feature offered by today’s digital fitness trackers, but a recent report from The Washington Post highlights how this same feature might end up revealing not just where you are, where you’ve been, and how often you’ve traveled there, but sensitive national security information.
According to The Washington Post report, the fitness tracking software company Strava—whose software is implemented into devices made by Fitbit and Jawbone—posted a “heat map” in November 2017 showing activity of some of its 27 million users around the world. Unintentionally included in that map were the locations, daily routines, and possible supply routes of disclosed and undisclosed U.S. military bases and outposts, including what appear to be classified CIA sites.
Though the revealed information itself was anonymized—meaning map viewers could not easily determine identities of Strava customers with the map alone—when read collectively, the information resulted in a serious breach of privacy.
Shared on Twitter, the map led to several discoveries, the report said.
“Adam Rawnsley, a Daily Beast journalist, noticed a lot of jogging activity on the beach near a suspected CIA base in Mogadishu, Somalia.
Another Twitter user said he had located a Patriot missile system site in Yemen.
Ben Taub, a journalist with the New Yorker, homed in on the location of U.S. Special Operations bases in the Sahel region of Africa.”
On Monday, according to a follow-up report by The Washington Post, the U.S. military said it was reviewing guidelines on how it uses wireless devices.
As the Strava map became more popular, the report said, Internet users were able to further de-anonymize the data, pairing it to information on Strava’s website.
According to The Washington Post's follow-up report:
“On one of the Strava sites, it is possible to click on a frequently used jogging route and see who runs the route and at what times. One Strava user demonstrated how to use the map and Google to identify by name a U.S. Army major and his running route at a base in Afghanistan.”
The media focused on one particular group affected by this privacy breach: the U.S. military. But of course, regular people’s privacy is impacted even more by privacy leaks such as this. For instance, according to a first-person account written in Quartz last year, one London jogger was surprised to learn that, even with strict privacy control settings on Strava, her best running times—along with her first and last name and photo—were still visible to strangers who peered into her digital exercise activity. These breaches came through an unintended bargain, in which customers traded their privacy for access to social fitness tracking features that didn’t exist several years ago.
And these breaches happened even though Strava attempted to anonymize its customers’ individual data. That clearly wasn’t enough. Often, our understanding of “anonymous” is wrong—invasive database cross-referencing can reveal all sorts of private information, dispelling any efforts at meaningful online anonymity.
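The Strava episode is an instance of a well-known re-identification pattern: join an “anonymized” dataset with a public one on shared quasi-identifiers, such as a route and a habitual start time. The sketch below illustrates the mechanics with entirely invented names, routes, and times; it is not Strava’s actual data model.

```python
# Illustrative sketch (all data synthetic and hypothetical): how cross-referencing
# two datasets can defeat naive anonymization.

# "Anonymized" activity records: user identities replaced with opaque tokens.
activities = [
    {"token": "a1", "route": "riverside-loop", "start": "06:30"},
    {"token": "b2", "route": "hill-climb", "start": "18:00"},
    {"token": "c3", "route": "riverside-loop", "start": "12:15"},
]

# Public profile pages that expose a route and habitual start time next to a real name.
profiles = [
    {"name": "Jane Doe", "route": "riverside-loop", "start": "06:30"},
    {"name": "John Roe", "route": "hill-climb", "start": "18:00"},
]

def deanonymize(activities, profiles):
    """Join on the quasi-identifiers (route, start time) to re-link tokens to names."""
    index = {(p["route"], p["start"]): p["name"] for p in profiles}
    return {
        a["token"]: index[(a["route"], a["start"])]
        for a in activities
        if (a["route"], a["start"]) in index
    }

# Tokens a1 and b2 are now tied to real names; only c3 stays anonymous.
print(deanonymize(activities, profiles))
```

Nothing in the “anonymized” records names anyone, yet the join recovers identities for every record whose quasi-identifiers also appear in a public source, which is why stripping names alone rarely delivers meaningful anonymity.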
While “gamified” fitness trackers, especially ones with social competition built in, are fun, they really just put a friendly face on Big Brother. When we give control over our personal data—especially sensitive data such as location history—to third parties, we expect it to be kept private. When companies betray that trust, even in “anonymized” form such as the Strava heat map, unintended privacy harms are almost guaranteed. Clearly communicated privacy settings can help us in situations like these, but so can company decisions to better protect the data they publish online.
Last month, the Federal Trade Commission and the U.S. Department of Education held a workshop in Washington, DC. The topic was “Student Privacy and Ed Tech.” We at EFF have been trying to get the FTC to focus on the privacy risks of educational technology (or “ed tech”) for over two years, so we eagerly filed formal comments.
We’ve long been concerned about how technology impacts student privacy. As schools and classrooms become increasingly wired, and as schools put more digital devices and services in the hands of students, we’ve been contacted by a large number of concerned students, parents, teachers, and even administrators.
They want to know: What data are ed tech providers collecting about our kids? How are they using it? How well do they disclose (if at all) the scope of their data collection? How much control (if any) do they give to schools and parents over the retention and use of the data they collect? Do they even attempt to obtain parental consent before collecting and using incredibly sensitive student data?
In the spring of 2017, we released the results of a survey that we conducted in order to plumb the depths of the confusion surrounding ed tech. And as it turns out, students, parents, teachers, and even administrators have lots of concerns—and very little clarity—over how ed tech providers protect student privacy.
Drawing from the results of our survey, our comments to the FTC and DOE touched on a broad set of concerns:
- The FTC has ignored our student privacy complaint against Google. Despite signing a supposedly binding commitment to refrain from collecting student data without parental consent beyond that needed for school purposes, Google openly harvests student search and browsing behavior, and uses that data for its own purposes. We filed a formal complaint with the FTC more than two years ago but have heard nothing back.
- There is a consistent lack of transparency in ed tech privacy policies and practices. Schools issue devices to students without their parents’ knowledge and consent. Parents are kept in the dark about what apps their kids are required to use and what data is being collected.
- The investigative burden too often falls on students and parents. With no notice or help from schools, the investigative burden falls on parents and even students to understand the privacy implications of the technology students are using.
- Data use concerns are unresolved. Parents have extensive concerns about student data collection, retention, and sharing. Many ed tech products and services have weak privacy policies. For instance, it took the lawyers at EFF months to get a clear picture of which privacy policies even applied to Google’s student offerings, much less how they interacted.
- Lack of choice in ed tech is the norm. Parents who seek to opt their children out of device or software use face many hurdles, particularly those without the resources to provide their own alternatives. Some districts have even threatened to penalize students whose parents refuse to consent to what they believe are egregious ed tech privacy policies and practices.
- Overreliance on “privacy by policy.” School districts generally rely on the privacy policies of ed tech companies to ensure student data protection. Parents and students, on the other hand, want concrete evidence that student data is protected in practice as well as in policy.
- There is an unfilled need for better privacy training and education. Both students and teachers want better training in privacy-conscious technology use. Ed tech providers aren’t fulfilling their obligations to schools when they fail to provide even rudimentary privacy training.
- Ed tech vendors treat existing privacy law as if it doesn’t apply to them. Because the Family Educational Rights and Privacy Act (“FERPA”) generally prohibits school districts from sharing student information with third parties without written parental consent, districts often characterize ed tech companies as “school officials.” However, districts may only do so if—among other things—providers give districts or schools direct control over all student data and refrain from using that data for any other purpose. Despite the fact that current ed tech offerings generally fail those criteria, vendors generally don’t even attempt to obtain parental consent.
We believe it is incumbent upon school districts to fully understand the data and privacy policies and practices of the ed tech products and services they wish to use, to demand that ed tech vendors agree to contract terms that favor the districts and actually protect student privacy, and to be ready to walk away from any company that does not engage in robust privacy practices.
While we understand that school budgets are often tight and that technology can actually enhance the learning experience, we urge regulators, school districts, and the ed tech companies themselves to make student privacy a priority. We hope the FTC and DOE listen to what we, and countless concerned students, parents, and teachers, have to say.
The news that Immigration and Customs Enforcement is using a massive database of license plate scans from a private company sent shockwaves through the civil liberties and immigrants’ rights communities, which are already sounding the alarm about how mass surveillance will be used to fuel deportation efforts.
The concerns are certainly justified: the vendor, Vigilant Solutions, offers access to 6.5 billion data points, plus millions more collected by law enforcement agencies around the country. Using advanced algorithms, this information—often collected by roving vehicles equipped with automated license plate readers (ALPRs) that scan every license plate they pass—can be used to reveal a driver’s travel patterns and to track a vehicle in real time.
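To see why a database of plate reads is more revealing than any single sighting, consider this simplified sketch. The plates, locations, and timestamps are synthetic, and real ALPR analytics are far more sophisticated; this only illustrates how grouping reads by plate surfaces a driver’s habitual locations.

```python
# Simplified illustration (synthetic, invented data): aggregating plate reads
# reveals travel patterns that no single read exposes on its own.
from collections import Counter, defaultdict

# Each read: (plate, location, timestamp).
reads = [
    ("7ABC123", "clinic-parking", "2018-01-03T09:10"),
    ("7ABC123", "clinic-parking", "2018-01-10T09:05"),
    ("7ABC123", "main-st", "2018-01-11T17:40"),
    ("8XYZ987", "main-st", "2018-01-11T17:42"),
]

def travel_profile(reads):
    """Map each plate to its locations, ranked by visit frequency."""
    by_plate = defaultdict(Counter)
    for plate, location, _timestamp in reads:
        by_plate[plate][location] += 1
    return {plate: counts.most_common() for plate, counts in by_plate.items()}

# For plate 7ABC123, the repeated visits to one location stand out immediately.
profile = travel_profile(reads)
```

Even this trivial grouping turns a pile of individually innocuous sightings into a profile of where a driver regularly goes, which is the core of the privacy concern with billion-record ALPR databases.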
ICE announced the expansion of its ALPR program in December, but without disclosing what company would be supplying the data. While EFF had long suspected Vigilant Solutions won the contract, The Verge confirmed it in a widely circulated story published last week.
In California, this development raises many questions about whether the legislature has taken enough steps to protect immigrants, despite passing laws last year to protect residents from heavy-handed immigration enforcement.
But California lawmakers should have already seen this coming. Two years ago, The Atlantic branded these commercial ALPR databases “an unprecedented threat to privacy.”
Vigilant Solutions tells its law enforcement customers that accessing this data is “as easy as adding a friend on your favorite social media platform.” As a result, California agencies share their data wholesale with hundreds of entities, ranging from small towns in the Deep South to a variety of federal agencies.
An analysis by EFF of records obtained from local police has identified more than a dozen California agencies that have already been sharing ALPR data with ICE through their Vigilant Solutions accounts. The records show that ICE, through its Homeland Security Investigations offices in Newark, New Orleans, and Houston, has had access to data from more than a dozen California police departments for years.
At least one ICE office has access to ALPR data collected by the following police agencies:
- Anaheim Police Department
- Antioch Police Department
- Bakersfield Police Department
- Chino Police Department
- Fontana Police Department
- Fountain Valley Police Department
- Glendora Police Department
- Hawthorne Police Department
- Montebello Police Department
- Orange Police Department
- Sacramento Police Department
- San Diego Police Department
- Simi Valley Police Department
- Tulare Police Department
ICE agents have also obtained direct access to this data through user accounts provided by local law enforcement. For example, an ICE officer obtained access through the Long Beach Police Department’s system in November 2016 and ran 278 license plate searches over nine months. Two CBP officers conducted an additional 578 plate searches through Long Beach’s system during the same period.
It’s important to note that ALPR technology collects and stores data on millions of drivers without any connection to a criminal investigation. As EFF noted, this data can reveal sensitive information about a person, for example, if they visit reproductive health clinics, immigration resource centers, mosques, or LGBTQ clubs. Even attendees at gun shows have found their plates captured by CBP officers, according to the Wall Street Journal.
Police departments must take a hard look at their ALPR systems and un-friend DHS. But the California legislature also has a chance to offer a defense measure for drivers who want to protect their privacy.
S.B. 712 would allow drivers to apply a removable cover to their license plates when they are lawfully parked, similar to how drivers are currently allowed to cover their entire vehicles with a tarp to protect their paint jobs from the elements. While this would not prevent ALPRs from collecting data from moving vehicles, it would offer privacy for those who want to protect the confidentiality of their destinations.
Before the latest story broke, S.B. 712 was brought to the California Senate floor, where it initially failed on a tied vote, with many Republicans and Democrats—including Sens. Joel Anderson (R-Alpine) and Scott Wiener (D-San Francisco)—joining in support.
Unfortunately, several Democrats, such as Senate President Kevin de León and Sen. Connie Leyva, who have positioned themselves as immigrant advocates, voted against the bill the first time around. Others, such as Sens. Toni Atkins and Ricardo Lara, sat the vote out.
The Senate has one last chance to pass the bill and send it to the California Assembly by January 31. The bill is urgently necessary to protect the California driving public from surveillance.
Californians: join us today in urging your senator to stand up for privacy, not the interests of ICE or the myriad of financial institutions, insurance companies, and debt collectors who also abuse this mass data collection.
EFF has been working on multiple fronts to end a widespread violation of digital liberty—warrantless searches of travelers’ electronic devices at the border. Government policies allow border agents to search and confiscate our cell phones, tablets, and laptops at airports and border crossings for no reason, without explanation or any suspicion of wrongdoing. It’s as if our First and Fourth Amendment rights don’t exist at the border. This is wrong, which is why we’re working to challenge and hopefully end these unconstitutional practices.
EFF and the ACLU filed a brief today in our Alasaad v. Nielsen lawsuit to oppose the government’s attempt to dismiss our case. Our lawsuit, filed in September 2017 on behalf of 11 Americans whose devices were searched, takes direct aim at the illegal policies enforced by the U.S. Department of Homeland Security and its component agencies, U.S. Customs and Border Protection (CBP) and U.S. Immigration and Customs Enforcement (ICE). In our brief we explain that warrantless searches of electronic devices at the border violate the First and Fourth Amendments, and that our 11 clients have every right to bring this case.
This is just the latest action we’ve taken in the fight for digital rights at the border. EFF is pushing back against the government’s invasive practices on three distinct fronts: litigation, legislation, and public education.

A Rampant Problem
Over the past few years there has been a dramatic increase in the number of searches of cell phones and other electronic devices conducted by border agents. CBP reported that in fiscal year 2012 the number of border device searches was 5,085. In fiscal year 2017, the number had increased to 30,200—a six-fold increase in just five years.
DHS claims the authority to ransack travelers’ cell phones and other devices and the massive troves of highly personal information they contain. ICE agents can do so for any reason or no reason. Under a new policy issued earlier this month, CBP agents can do so without a warrant or probable cause, and usually can do so without even reasonable suspicion.
Also, agents can and do confiscate devices for lengthy periods of time and subject them to extensive examination.
These practices are unconstitutional invasions of our privacy and free speech. Our electronic devices contain our emails, text messages, photos and browsing history. They document our travel patterns, shopping habits, and reading preferences. They expose our love lives, health conditions, and religious and political beliefs. They reveal whom we know and associate with. Warrantless device searches at the border violate travelers’ rights to privacy under the Fourth Amendment, and freedoms of speech, press, private association, and anonymity under the First Amendment.
These practices have existed at least since the George W. Bush administration and continued through the Obama administration. But given the recent dramatic uptick in the number of border device searches since President Trump took office, a former DHS chief privacy officer, Mary Ellen Callahan, concluded that the increase was “clearly a conscious strategy,” and not “happenstance.”
But the U.S. border is not a Constitution-free zone. The Fourth Amendment requires the government to obtain a probable cause warrant before conducting a border search of a traveler’s electronic device. This follows from the U.S. Supreme Court’s decision in Riley v. California (2014), which held that police need a warrant to search the cell phones of people they arrest.
The warrant process is critical because it provides a check on government power and, specifically, a restraint on arbitrary invasions of privacy. In seeking a warrant, a government agent must provide sworn testimony before a neutral arbiter—a judge—asserting why the government believes there’s some likelihood (“probable cause”) that the cell phone or other thing to be searched contains evidence of criminality. If the judge is convinced, she will issue the search warrant, allowing the government to access your private information even if you don’t consent.
Right now, there are no such constraints on CBP and ICE agents—but we’re fighting in court and in Congress to change this.
Litigation
On September 13, 2017, EFF along with ACLU filed our lawsuit, Alasaad v. Nielsen, against the federal government on behalf of ten U.S. citizens and one lawful permanent resident whose smartphones and other devices were searched without a warrant at the U.S. border. The plaintiffs include a military veteran, journalists, students, an artist, a NASA engineer, and a business owner. Several are Muslims or people of color. All were reentering the country after business or personal travel when border agents searched their devices. None were subsequently accused of any wrongdoing.
Each of the Alasaad plaintiffs suffered a substantial privacy invasion. Some plaintiffs were detained for several hours while agents searched their devices, while others had their devices confiscated and were not told when their belongings would be returned. One plaintiff was even placed in a chokehold after he refused to hand over his phone. You can read the detailed stories of all the Alasaad plaintiffs.
In the Alasaad lawsuit, we are asking the U.S. District Court for Massachusetts to find that the policies of CBP and ICE violate the Fourth Amendment. We also allege that the search policies violate the First Amendment. We are asking the court to enjoin the federal government from searching electronic devices at the border without first obtaining a warrant supported by probable cause, and from confiscating devices for lengthy periods without probable cause.
In the past year, EFF also has filed three amicus briefs in U.S. Courts of Appeals (in the Fourth, Fifth, and Ninth Circuits). In those briefs, we argued that border agents need a probable cause warrant to search electronic devices. There are extremely strong and unprecedented privacy interests in the highly sensitive information stored and accessible on electronic devices, and the narrow purposes of the border search exception—immigration and customs enforcement—are not served by warrantless searches of electronic data.
Legislation
EFF is urging the U.S. Congress to pass the Protecting Data at the Border Act. The Act would require border agents to obtain a probable cause warrant before searching the electronic devices of U.S. citizens and legal permanent residents at the border.
The Senate bill (S. 823) is sponsored by Sen. Ron Wyden (D-OR) and Sen. Rand Paul (R-KY). Rep. Polis (D-CO), Rep. Smith (D-WA), and Rep. Farenthold (R-TX) are taking the lead on the House bill (H.R. 1899).
In addition to creating a warrant requirement, the Act would prohibit the government from delaying or denying entry or exit to a U.S. person based on that person’s refusal to hand over a device passcode, online account login credential, or social media handle.
You can read more about this critical bill in our call to action, and our op-ed in The Hill. Please contact your representatives in Congress and urge them to co-sponsor the Protecting Data at the Border Act.
Public Education
Finally, EFF published a travel guide that helps travelers understand their individual risks when crossing the U.S. border (which includes U.S. airports if flying from overseas), provides an overview of the law around border searches, and offers technical guidance for securing digital data.
Our travel guide recognizes that one size does not fit all, and it helps travelers make informed choices regarding their specific situation and risk tolerance. The guide is a useful resource for all travelers who want to keep their digital data safe.
Alasaad v. Nielsen
EFF and ACLU Ask Court to Allow Legal Challenge To Proceed Against Warrantless Searches of Travelers’ Smartphones, Laptops
Boston, Massachusetts—The Electronic Frontier Foundation (EFF) and the American Civil Liberties Union (ACLU) urged a federal judge today to reject the Department of Homeland Security’s attempt to dismiss an important lawsuit challenging DHS’s policy of searching and confiscating, without suspicion or warrant, travelers’ electronic devices at U.S. borders.
EFF and ACLU represent 11 travelers—10 U.S. citizens and one lawful permanent resident—whose smartphones and laptops were searched without warrants at the U.S. border in a groundbreaking lawsuit filed in September. The case, Alasaad v. Nielsen, asks the court to rule that the government must have a warrant based on probable cause before conducting searches of electronic devices, which contain highly detailed personal information about people’s lives. The case also argues that the government must have probable cause to confiscate a traveler’s device.
The plaintiffs in the case include a military veteran, journalists, students, an artist, a NASA engineer, and a business owner. The government seeks dismissal, saying the plaintiffs don’t have the right to bring the lawsuit and the Fourth Amendment doesn’t apply to border searches. Both claims are wrong, EFF and ACLU explain in a brief filed today in federal court in Boston.
First, the plaintiffs have “standing” to seek a court order to end unconstitutional border device searches because they face a substantial risk of having their devices searched again. This means they are the right parties to bring this case and should be able to proceed to the merits. Four plaintiffs already have had their devices searched multiple times.
Immigration and Customs Enforcement (ICE) policy allows border agents to search and confiscate anyone’s smartphone for any reason or for no reason at all. Customs and Border Protection (CBP) policy allows border device searches without a warrant or probable cause, and usually without even reasonable suspicion. Last year, CBP conducted more than 30,000 border device searches, more than triple the number just two years earlier.
“Our clients are travelers from all walks of life. The government policies that invaded their privacy in the past are enforced every day at airports and border crossings around the country,” said EFF Staff Attorney Sophia Cope. “Because the plaintiffs face being searched in the future, they have the right to proceed with their case.”
Second, the plaintiffs argue that the Fourth Amendment requires border officers to get a warrant before searching a traveler’s electronic device. This follows from the Supreme Court’s 2014 decision in Riley v. California requiring that police officers get a warrant before searching an arrestee’s cell phone. The court explained that cell phones contain the “privacies of life”—a uniquely large and varied amount of highly sensitive information, including emails, photos, and medical records. This is equally true for international travelers, the vast majority of whom are not suspected of any crime. Warrantless border device searches also violate the First Amendment, because they chill freedom of speech and association by allowing the government to view people’s contacts, communications, and reading material.
“Searches of electronic devices at the border are increasing rapidly, causing greater numbers of people to have their constitutional rights violated,” said ACLU attorney Esha Bhandari. “Device searches can give border officers unfettered access to vast amounts of private information about our lives, and they are unconstitutional absent a warrant.”
Below is a full list of the plaintiffs along with links to their individual stories, which are also collected here:
- Ghassan and Nadia Alasaad are a married couple who live in Massachusetts, where he is a limousine driver and she is a nursing student.
- Suhaib Allababidi, who lives in Texas, owns and operates a business that sells security technology, including to federal government clients.
- Sidd Bikkannavar is an optical engineer for NASA’s Jet Propulsion Laboratory in California.
- Jeremy Dupin is a journalist living in Massachusetts.
- Aaron Gach is an artist living in California.
- Isma’il Kushkush is a journalist living in Virginia.
- Diane Maye is a college professor and former captain in the U.S. Air Force living in Florida.
- Zainab Merchant, from Florida, is a writer and a graduate student in international security and journalism at Harvard.
- Akram Shibly is a filmmaker living in New York.
- Matthew Wright is a computer programmer in Colorado.
Europe's General Data Protection Regulation (GDPR) will come into effect in May 2018, and with it, a new set of tough penalties for companies that fail to adequately protect the personal data of European users. Amongst those affected are domain name registries and registrars, who are required by ICANN, the global domain name authority, to list the personal information of domain name registrants in publicly-accessible WHOIS directories. ICANN and European registrars have clashed over this long-standing contractual requirement, which does not comply [PDF] with European data protection law.
This was one of the highest profile topics at ICANN's 60th meeting in Abu Dhabi, which EFF attended last year, with registries and registrars laying the blame on ICANN, either for their liability under the GDPR if they complied with their WHOIS obligations, or for their contractual liability to ICANN if they didn't. ICANN has recognized this and has progressively, if belatedly, been taking steps to remediate the clash between its own rules and the data protection principles that European law upholds.
A Brief History of Domain Privacy at ICANN
ICANN's first step in improving domain privacy, which dates from 2008 and underwent minor revisions in 2015, was to create a very narrow and cumbersome process by which a party bound by privacy laws that conflicted with its contractual requirements could seek an exemption from those requirements from ICANN. Next, in 2015, ICANN commenced a Policy Development Process (PDP) for the development of a Next-Generation gTLD Registration Directory Services (RDS) to Replace WHOIS, whose work remains ongoing, with the intention that this new RDS would be more compatible with privacy laws, probably by providing layered access to registrant data for various classes of authorized users.
Meanwhile, ICANN considered whether to limit registrants' access to a privacy workaround that allowed registrants to register their domain via a proxy, thereby keeping their real personal details private. Although it eventually concluded that access to privacy proxy registration services shouldn't be limited [PDF], these services don't amount to a substitute for the new RDS that will incorporate privacy by design: not all registrars provide the option, some offer it only as an opt-in service, and others only via a third party who charges money for it.
Then, effective July 2017, ICANN amended its contract with registries to require them to obtain the consent of registrants for their information to be listed online. But again, this is no substitute for the new RDS, because consent that is required as a condition of registering a domain wouldn't qualify as "freely given" under European law. ICANN followed up in November 2017 with a statement that it would abstain from taking enforcement action against registries or registrars who provided it with a "compliance model" that sought to reconcile their contractual obligations with the requirements of data protection law.
Three Interim Options
Finally, with the GDPR deadline fast approaching and with the work of the Next-Generation RDS group nowhere near completion, ICANN has issued a set of three possible stop-gap measures for public comment. These three models, based upon legal advice belatedly obtained by ICANN last year [PDF], are intended to protect registries and registrars from liability under the GDPR during the interim period between May 2018 and the final implementation of the recommendations of the Next-Generation RDS PDP. In simple terms, the three options are:
- Allowing anyone who self-certifies that they have a legitimate interest in accessing personal data of an individual registrant to do so.
- Setting up a formal accreditation/certification program under which only a defined set of third-party requestors would be authorized to gain access to individual registrants' personal data.
- Providing access to registrants' personal data only under a subpoena or other order from a court or other judicial tribunal of competent jurisdiction.
None of these is a perfect solution for retroactively layering new privacy protections onto ICANN's old procedures. In EFF's comments on ICANN's proposals, we ended up supporting the third option, or rather a variation of it. ICANN's option 3 proposal would require a case-by-case evaluation of each field in each registration to determine whether it contains personal data, which seems impractical. Instead, as with option 2, it should be assumed that the name, phone number, and address fields contain personal data, and these should be withheld from public display.1
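The field-level approach described above, withholding fields presumed to contain personal data wholesale rather than evaluating each registration case by case, can be sketched in a few lines. This is only an illustration: the field names and the redaction marker are assumptions, not ICANN's actual schema.

```python
# Minimal sketch of field-level WHOIS redaction: fields presumed to
# contain personal data are withheld from public display wholesale,
# with no case-by-case evaluation of individual registrations.
# Field names and the redaction marker are illustrative assumptions.

PERSONAL_DATA_FIELDS = {
    "Registrant Name",
    "Registrant Phone",
    "Registrant Street",
    "Registrant Email",
}

def redact_whois_record(record: dict) -> dict:
    """Return a copy of a WHOIS record with personal-data fields withheld."""
    return {
        field: ("REDACTED FOR PRIVACY" if field in PERSONAL_DATA_FIELDS else value)
        for field, value in record.items()
    }

record = {
    "Domain Name": "example.org",
    "Registrant Name": "Jane Doe",
    "Registrant Email": "jane@example.org",
    "Registrar": "Example Registrar, Inc.",
}
public_view = redact_whois_record(record)
print(public_view["Registrant Name"])  # → REDACTED FOR PRIVACY
print(public_view["Domain Name"])      # → example.org
```

Non-personal fields such as the domain name and registrar pass through untouched, which is what makes the rule simple enough to apply uniformly.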
ICANN's first option, which would allow anyone to claim that they have a legitimate interest in obtaining registrants' personal data, is unlikely to hold water under the GDPR: requesters could simply lie, or may be mistaken about what amounts to a legitimate interest. The second option is likely to be unworkable in practice, especially to implement in such a short space of time. ICANN's legal advisers acknowledge that this option, by requiring a legal evaluation of the legitimate interests of third parties seeking access to registrants' personal information, would:
require the registrars to perform an assessment of interests in accordance with Article 6.1(f) GDPR on an individual case-by-case basis each time a request for access is made. This would put a significant organizational and administrative pressure on the registrars and also require them to obtain and maintain the competence required to make such assessments in order to deliver the requested data in a reasonably timely manner.
Moreover, the case most commonly made for third party access to registration data is for law enforcement authorities and intellectual property rights holders to be able to obtain this data. We already have a system for the formal evaluation of the claims of these parties to gain access to personal data; it's the legal system, through which they can obtain a warrant or a subpoena, either directly if they are in the same country as the registry or registrar, or via a treaty such as a Mutual Legal Assistance Treaty (MLAT) if they are not. This is exactly what ICANN's Model 3 allows, and it's the appropriate standard for ICANN to adopt.
Is the Sky Falling?
Many ICANN stakeholders are concerned that access to the public WHOIS database could change. Amongst the most vocal opponents of new privacy protections for registrants are some security researchers and anti-abuse experts, for whom it would be impractical to go to a court for a subpoena for that information, even if the court would grant one. Creating, as Model 2 would do, a separate class of Internet "super-users" who could use their good work as a reason to examine the personal information databases of the registrars seems a tempting solution. But we would have serious concerns about seeing ICANN installed as the gatekeeper of who is permitted to engage in security research or abuse mitigation, and thereby to obtain privileged access to registrant data.
Requiring a warrant or subpoena for access to personal data of registrants isn't as radical as its opponents make out. A number of registries, including the country-code registries of most European countries (which are not subject to ICANN's WHOIS rules), already operate in this way. Everyone who works with WHOIS data — be they criminals using domains for fraud, WHOIS-scraping spammers, or anti-abuse researchers — is already well aware of these more privacy-protective services. It's better for us all to create and support methods of investigation that accept this model of private domain registration than to saddle ICANN or its contracted parties with the responsibility of deciding what to do if, for example, the cyber-security wing of an oppressive government begins to search for the registration data of dissidents.
There are other cases in which it makes sense to allow members of the public to contact the owner of a domain without having to obtain a court order. But this could be achieved if ICANN were simply to require something like a CAPTCHA-protected contact form, which would deliver email to the appropriate contact point with no need to reveal the registrant’s actual email address. There's no reason why this couldn't be required in conjunction with ICANN's Model 3, to address the legitimate concerns of those who need to contact domain owners for operational or business reasons and who, for whatever reason, can't obtain contact details in any other way.
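The contact-form idea amounts to a simple relay: the registrar keeps the registrant's address private and forwards a message only after a CAPTCHA check. One possible shape for it, with every name hypothetical and the CAPTCHA and mail transport abstracted away:

```python
# Minimal sketch of a privacy-preserving contact relay: the sender never
# sees the registrant's email address; the relay looks it up internally
# and forwards the message only after an (abstracted) CAPTCHA check.
# All names here are hypothetical; a real deployment would call a real
# CAPTCHA service and hand the message to an actual mail transport.

PRIVATE_CONTACTS = {"example.org": "registrant@mailbox.example"}  # never shown publicly

def captcha_passed(response: str) -> bool:
    # Stand-in for a real CAPTCHA verification call.
    return response == "human"

def relay_message(domain: str, body: str, captcha_response: str) -> str:
    if not captcha_passed(captcha_response):
        return "rejected: captcha failed"
    address = PRIVATE_CONTACTS.get(domain)
    if address is None:
        return "rejected: unknown domain"
    # A real relay would hand (address, body) to a mail transport here,
    # without ever echoing the address back to the sender.
    return f"forwarded {len(body)} characters to the contact for {domain}"

print(relay_message("example.org", "Hello, about your domain...", "human"))
```

The key property is that the registrant's address appears only on the relay's side of the exchange; the sender's success or failure message never contains it.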
- 1. There are actually two versions of Model 2 presented; one that would only apply if the registrant, registry, or registrar is in Europe (which is also the suggested scope of Model 1), and the other that would apply globally. Similarly, options are given for Model 2 to apply either just to individual registrants, or to all registrants. Given that there are over 100 countries that have omnibus data protection laws (and this number is growing), many of which are based on the European model, there seems to be little sense for any of the proposals to be limited to Europe. Neither does it make sense to limit the proposals to individual registrants, because even if it were possible to draw a clear line between individual and organizational registrations (it often isn't), organizational registrations may contain personally identifiable information about corporate officers or contact persons.
In a country where press freedom is already under grave threat, the revocation of an independent publication’s license to operate and a proposed amendment to the Bill of Rights are pushing journalists further into the margins. While the Constitution of the Philippines guarantees press freedom and the country’s media landscape is quite diverse, journalists nevertheless face an array of threats. Libel threats and advertising boycotts are common, and the country ranks fifth in the world in terms of impunity for killing journalists.
And since the election of President Rodrigo Duterte in 2016, press freedom in the Philippines has taken a further blow. Like President Trump, Duterte enjoys going after individual media outlets that criticize his policies, creating an increasingly chilled atmosphere for the country’s independent journalists and free speech.
In an unprecedented move, the Duterte administration’s Securities and Exchange Commission (SEC) revoked the registration of the independent news organization Rappler and ordered it to close up shop. Rappler has been a vocal critic of the Duterte regime and appears to have been targeted for that criticism, especially when contrasted with how pro-Duterte bloggers and outlets have been rewarded with government positions or hired as consultants using public funds.
The Duterte administration’s SEC claims its decision to revoke Rappler’s registration was based on an alleged violation of the Foreign Equity Restriction in Mass Media through the acceptance of funds from the Omidyar Network, a fund created by eBay founder Pierre Omidyar that has contributed to independent media outlets all over the world, like The Intercept and the International Consortium of Investigative Journalists.
The SEC had accepted and approved Rappler’s Philippine Depository Receipt (PDR) for contributions from the Omidyar Network back in 2015. A PDR is a financial instrument that does not give the investor voting rights on the board or a say in the management of the organization.
But when President Duterte went after Rappler (as well as broadcast network ABS-CBN) in his July 2017 State of the Nation address, claiming that the company was owned by foreigners, the pressure began to mount. The president later repeated this claim, stating that the company was violating a Constitutional requirement of domestic ownership. Under this increasing pressure from the Duterte administration, the SEC voided the Omidyar PDR last week and revoked Rappler’s Certificate of Incorporation.
Rappler expressed dismay at the “misrepresentations, outward lies, and malice contained in criticisms of Rappler” and maintains that it has complied with all SEC regulations and acted in good faith in adhering to all requirements “even at the risk of exposing [its] corporate data to irresponsible hands with an agenda.” Rappler continues to stand firm in its conviction that it is “100% Filipino-owned” and has not violated any Constitutional restrictions in accepting money from foreign philanthropic investors. Rappler intends to contest the SEC’s revocation “through all legal processes available” in its fight for freedom of the press.
But the Philippine government is taking things even a step further with a push to mandate “responsible speech”. The House of Representatives has moved to amend Article 3, Section 4 of the Constitution's Bill of Rights, which currently states “No law shall be passed abridging the freedom of speech, of expression, or of the press, or the right of the people peaceably to assemble and petition the government for redress of grievances” to read “No law shall be passed abridging the responsible exercise of the freedom of speech, of expression, or of the press, or the right of the people peaceably to assemble and petition the government for redress of grievances.” As opinion writer Ellen T. Tordesillas noted, the movement is similar to a 2006 attempt by the government of former president Gloria Macapagal Arroyo.
The movement may have officially come from the House, but the proposal was actually created by a committee under the Office of the President. On a talk show, former solicitor general Florin Hilbay criticized the proposal, stating that “The danger in inserting the word ‘responsible’ is that you’re giving the state power to define responsibility.”
We agree. Handing power over to government authorities to determine what is or isn’t "responsible" is always dangerous, and in the case of Duterte—a president who has targeted journalists, drug users, and communists—could prove deadly. We call on the Philippines to respect the fundamental right to freedom of expression and remind the country of its obligations under the International Covenant on Civil and Political Rights, which allows for only narrow legal limitations to the right to freedom of expression. Furthermore, we stand in solidarity with Rappler, the Foundation for Media Alternatives in its Statement on Press Freedom and Free Speech, as well as the journalists, students, bloggers, and local and international advocates who have taken a stand against the Duterte government’s “alarming attempt to silence independent journalism.”
A huge range of expressive works—including books, documentaries, television shows, and songs—depict real people. Should celebrities have a veto right over speech that happens to be about them? A case currently before the California Court of Appeal raises this question. In this case, actor Olivia de Havilland has sued FX, asserting that FX’s television series Feud infringed de Havilland’s right of publicity. The trial court found that de Havilland had a viable claim because FX had attempted to portray her realistically and had benefited financially from that portrayal.
Together with the Wikimedia Foundation and the Organization for Transformative Works, EFF has filed an amicus brief [PDF] in the de Havilland case arguing that the trial court should be overruled. Our brief argues that the First Amendment should shield creative expression like Feud from right of publicity claims. The right of publicity is a cause of action for commercial use of a person’s identity. It makes good sense when applied to prevent companies from, say, falsely claiming that a celebrity endorsed their product. But when it is asserted against creative expression, it can burden First Amendment rights.
Courts have struggled to come up with a consistent and coherent standard for how the First Amendment should limit the right of publicity. California courts have applied a rule called the “transformative use” test that considers whether the work somehow “transforms” the identity or likeness of the celebrity. In Comedy III Productions v. Gary Saderup, the California Supreme Court found that the defendant’s etchings were not protected because they were merely “literal, conventional depictions” of the Three Stooges. In contrast, in Winter v. DC Comics, the same court found comic book depictions of Johnny and Edgar Winter to be protected because they transformatively portrayed the brothers as half-human/half-worm creatures.
The transformative use test is deeply flawed. Plenty of valuable speech, such as biographies or documentaries, involves depicting real people as accurately as possible. Why should these works get less First Amendment protection? If the First Amendment requires turning your subject into a half-human/half-worm creature, then the doctrine has gone very badly wrong.
The trial court’s ruling in the de Havilland case, which leaves realistic art about celebrities essentially unprotected, is the logical end-point of the transformative use test. We hope that the drastic result in this case leads California courts to reevaluate free speech limits on the right of publicity. As one judge wrote 30 years ago, no “author should be forced into creating mythological worlds or characters wholly divorced from reality.”