The revolution will not be televised, but it may be hosted on Slack. Community groups, activists, and workers in the United States are increasingly gravitating toward the popular collaboration tool to communicate and coordinate efforts. But many of the people using Slack for political organizing and activism are not fully aware of the ways Slack falls short in serving their security needs. Slack has yet to support this community in its default settings or in its ongoing design.
We urge Slack to recognize the community organizers and activists using its platform and take more steps to protect them. In the meantime, this post provides context and things to consider when choosing a platform for political organizing, as well as some tips about how to set Slack up to best protect your community.

The Mismatch
Slack is designed as an enterprise system built for business settings. That results in a sometimes dangerous mismatch between the needs of the audience the company is aimed at serving and the needs of the important, often targeted community groups and activists who are also using it.
Two things that EFF tends to recommend for digital organizing are 1) using encryption as extensively as possible, and 2) self-hosting, so that a governmental authority has to get a warrant for your premises in order to access your information. The central thing to understand about Slack (and many other online services) is that it fulfills neither of these things. This means that if you use Slack as a central organizing tool, Slack stores and is able to read all of your communications, as well as identifying information for everyone in your workspace.
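To make that distinction concrete, here is a deliberately simplified sketch (a toy XOR cipher, not a real cryptosystem) of the difference between provider-side encryption, where the service holds the key and can read your messages, and end-to-end encryption, where only the communicating users hold the key:

```python
import secrets

def xor_cipher(key, data):
    # Toy symmetric cipher (XOR keystream). Illustration only; NOT secure.
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meeting at 6pm at the library"

# Encryption at rest: the provider generates and keeps the key, so it
# can decrypt stored content on demand (e.g. in response to a warrant).
provider_key = secrets.token_bytes(64)
stored = xor_cipher(provider_key, message)
assert xor_cipher(provider_key, stored) == message  # provider can read it

# End-to-end encryption: only the users hold the key. The provider
# stores ciphertext it cannot decrypt.
user_key = secrets.token_bytes(64)
stored_e2e = xor_cipher(user_key, message)
# Without user_key, stored_e2e is opaque to the provider.
```

The point of the sketch: who holds the key determines who can be compelled to produce readable content. Slack's model is the first case, not the second.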
We know that for many, especially small organizations, self-hosting is not a viable option, and using strong encryption consistently is hard. Meanwhile, Slack is easy, convenient, and useful. Organizations have to balance their own risks and benefits. Regardless of your situation, it is important to understand the risks of organizing on Slack.

First, The Good News
Slack follows several best practices in standing up for users. Slack does require a warrant for content stored on its servers. Further, it promises not to voluntarily provide information to governments for surveillance purposes. Slack also promises to require the FBI to go to court to enforce gag orders issued with National Security Letters, a troubling form of subpoena. Additionally, federal law prohibits Slack from handing over content (but not metadata like membership lists) in response to civil subpoenas.
Slack also stores your data in encrypted form, which means that if it leaks or is stolen, it is not readable. This is excellent protection if you are worried about attacks and data breaches. It is not useful, however, if you are worried about governments or other entities putting pressure on Slack to hand over your information.

Risks With Slack In Particular
And now the downsides. These are things that Slack could change, and EFF has called on them to do so.
Slack can turn over content to law enforcement in response to a warrant. Slack’s servers store everything you do on its platform. Since Slack can read this information on its servers—that is, since it’s not end-to-end encrypted—Slack can be forced to hand it over in response to law enforcement requests. Slack does require warrants to turn over content, and can resist warrants it considers improper or overbroad. But if Slack complies with a warrant, users’ communications are readable on Slack’s servers and available for it to turn over to law enforcement.
Slack may fail to notify users of government information requests. When the government comes knocking on a website’s door for user data, that website should, at a minimum, provide users with timely, detailed notice of the request. Slack’s policy in this regard is lacking. Although it states that it will provide advance notice to users of government demands, it allows for a broad set of exceptions to that standard. This is something that Slack could and should fix, but it refuses to even explain why it has included these loopholes.
Slack content can make its way into your email inbox. Signing up for a Slack workspace also signs you up, by default, for email notifications when you are directly mentioned or receive a direct message. These email notifications can include the content of those mentions and messages. If you expect sensitive messages to stay in the Slack workspace where they were written and shared, this might be an unpleasant surprise. With these defaults in place, you have to trust not only Slack but also your email provider with your own and others’ private content.

Risks With Third-Party Platforms in General
Many of the risks that come with using Slack are also risks that come with using just about any third-party online platform. Most of these are problems with the law that we all must work on to fix together. Nevertheless, organizers must consider these risks when deciding whether Slack or any other online third-party platform is right for them.
Much of your sensitive information is not subject to a warrant requirement. While a warrant is required for content, some of the most sensitive information held by third-party platforms—including the identities and locations of the people in a Slack workspace—is considered “non-content” and is not currently protected by a warrant requirement under federal law or in most states. If the identities of your organization’s members are sensitive, consider whether Slack or any other online third party is right for you.
Companies can be legally prevented from giving users notice. While Slack and many other platforms have promised to require the FBI to justify controversial National Security Letter gags, these gags may still be enforced in many cases. In addition, many warrants and other forms of legal process contain different kinds of gags ordered by a court, leaving companies with no ability to notify you that the government has seized your data.
Slack workspaces are subject to civil discovery. Government is not the only entity that could seek information from Slack or other third parties. Private companies and other litigants have sought, and obtained, information from hosts ranging from Google to Microsoft to Facebook and Twitter. While federal law prevents them from handing over customer content in civil discovery, it does not protect “non-content” records, such as membership identities and locations.
A group is only as trustworthy as its members. Any group environment is only as trustworthy as the people who participate in it. Group members can share and even screenshot content, so it is important to establish guidelines and expectations that all members agree on. Establishing trusted admins or moderators to facilitate these agreements can also be beneficial.

Making Slack as Secure as Possible
If using Slack is still right for you, you can take steps to harden your security settings and make your closed workspaces as private as possible.
The lowest-hanging privacy fruit is to change a workspace’s retention settings. By default, Slack retains all the messages in a workspace or channel (including direct messages) for as long as the workspace exists. The same goes for any files submitted to the workspace. Workspace admins have the ability to set shorter retention periods, which can mean less content available for government requests or legal inquiries.
Users can also address the email-leaking concern described above by minimizing email notification settings. This works best if all of the members of a group agree to do it, since email notifications can expose multiple users’ messages.
The privacy of a Slack workspace also relies on the security of individual members’ accounts. Setting up two-factor authentication can add an extra layer of security to an account, and admins even have the option of making two-factor authentication mandatory for all the members of a workspace.
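For readers curious what that second factor actually computes: most authenticator apps, including those usable with Slack, generate time-based one-time passwords as specified in RFC 6238. The short sketch below (Python standard library only; this is the generic algorithm, not Slack’s own code) shows how a six-digit code is derived from a shared secret and the current time:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, t=None, digits=6, step=30):
    """Time-based one-time password (RFC 6238, HMAC-SHA1 variant)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of 30-second steps since the epoch.
    counter = int((time.time() if t is None else t) // step)
    digest = hmac.new(key, struct.pack(">Q", counter), hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes based on the last nibble.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: this secret at t=59 yields "94287082" (8 digits).
print(totp("GEZDGNBVGY3TQOJQGEZDGNBVGY3TQOJQ", t=59, digits=8))
```

Because the code depends on a secret that never leaves your device, a stolen password alone is not enough to take over the account, which is exactly the protection the paragraph above recommends.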
However, no settings tweak can completely mitigate the concerns described above. We strongly urge Slack to step up to protect the high-risk groups that are using it along with its enterprise customers. And all of us must stand together to push changes to the law.
Technology should stand with those who wish to make change in our world. Slack has made a great tool that can help, and it’s time for Slack to step up with its policies.
Egyptian journalist Wael Abbas holds a special distinction: Over the years, he’s experienced censorship at the hands of four of Silicon Valley’s top companies. Although more extreme, his story isn’t so different from that of the many individuals who, following a single misstep or mistake at the hands of a content moderator, find themselves unceremoniously removed from a social platform.
When YouTube was still fairly new, Abbas began posting videos depicting police brutality in his native Egypt to the platform. The award-winning journalist and anti-torture activist found utility in the global platform, which even then had massive reach. One of the videos he had posted even resulted in a rare conviction of police officers in Cairo. But in late 2007, he found that his account had been removed without warning. The reason? His content, often graphic in nature, had been receiving large numbers of complaints.
Rights activists rallied around Abbas and were able to convince YouTube to restore his account; his archive of videos was eventually restored as well. YouTube later adjusted its rules to be more permissive of violent content that is documentarian in nature. Around the same time, Abbas’ Yahoo! email account was shut down—and later restored—on accusations that he was spamming other users.
More recently, Abbas has faced off with Facebook over an erroneous content decision made by the company. In November 2017, Abbas was issued a 30-day suspension by Facebook for a post in which he named and accused an individual of running a scam and threatening other people. As a result of the suspension, Abbas was unable to post to Facebook or use Messenger or other platform tools. After we contacted the company, the suspension was reversed and Abbas’s access restored.
In another, more recent instance, Abbas had an image removed from Facebook, and received only a vague notification stating:
In most instances involving content removals, we send people a generic message to let them know that they've violated our Community Standards. We're in the process of trying to be more specific with our language so that people have a better understanding of why we've taken down their content and how can they avoid similar removals in the future.
Abbas was able to hold on to his Facebook account, but with Twitter, he wasn’t so lucky. In December, he was suddenly suspended from the platform without warning or notification. His account, which was verified and had 350,000 followers, was described by Egyptian human rights activist Sherif Azer as “a live archive to the events of the revolution and till today one of few accounts still documenting human rights abuses in Egypt.” EFF contacted Twitter about the suspension, but the company did not respond to our query.

Platforms must be accountable to their users
Social media companies took great pride in the role they were said to have played in the 2011 Arab uprisings. But as a recent article from Middle East Eye points out, Egyptians are facing a significant increase in content takedowns on Facebook. The article asks the question: “Would those social media accounts which supported Egypt's uprisings in 2011 now be shut down?”
In fact, the most famous of those social media accounts—the page entitled “We Are All Khaled Said” that first called for protests on January 25, 2011—was actually shut down by Facebook in 2010, just a few months before the uprising. The page, which was later revealed to have been created by Google executive Wael Ghonim, was removed because Ghonim had been using a fake name, and only restored after US-based NGOs stepped in to help.
Similarly, Abbas was only able to have his suspension overturned after contacting EFF. Verified Egyptian Reuters journalist Amina Ismail was able to get a Twitter suspension overturned through her contacts. Abbas and Ismail are both high-profile journalists, however—most users don’t have access to contacts at Silicon Valley’s top tech companies.
Wael Abbas's experience demonstrates the precarity of our online lives, and the dire need for platforms to institute transparent practices. As we recently wrote, social media platforms must notify users clearly when they violate a policy, and offer a clear path of recourse so that all users have an opportunity to appeal content decisions. Abbas's experience is the tip of the iceberg: for every prominent journalist documenting injustice who manages to get through their filters, how many more have lost the fight against the censors before they had a chance to reach a wider public?
It is vital that technology companies recognize the role they play in fostering free expression and act accordingly. To learn more about our efforts to hold companies accountable on freedom of expression, visit Onlinecensorship.org.
Video editing technology hit a milestone this month: the new tech is being used to make porn. With easy-to-use software, pretty much anyone can seamlessly take the face of one real person (like a celebrity) and splice it onto the body of another (like a porn star), creating videos that lack the consent of multiple parties.
People have already picked up the technology, creating and uploading dozens of videos on the Internet that purport to involve famous Hollywood actresses in pornography films that they had no part in whatsoever.
While many specific uses of the technology (like specific uses of any technology) may be illegal or create liability, there is nothing inherently illegal about the technology itself. And existing legal restrictions should be enough to set right any injuries caused by malicious uses.
As Samantha Cole at Motherboard reported in December, a Reddit user named “deepfakes” began posting videos he created that replaced the faces of porn actors with other well-known (non-pornography) actors. According to Cole, the videos were “created with a machine learning algorithm, using easily accessible materials and open-source code that anyone with a working knowledge of deep learning algorithms could put together.”
Just over a month later, Cole reported that the creation of face-swapped porn, labeled “deepfakes” after the original Redditor, had “exploded” with increasingly convincing results. And an increasingly easy-to-use app had launched with the aim of allowing those without technical skills to create convincing deepfakes. Soon, a marketplace for buying and selling deepfakes appeared in a subreddit, before being taken off the site. Other platforms including Twitter, PornHub, Discord, and Gfycat followed suit. In removing the content, each platform noted a concern that the people depicted in the deepfakes did not consent to their involvement in the videos themselves.
We can quickly imagine many terrible uses for this face-swapping technology, both in creating nonconsensual pornography and false accounts of events, and in undermining the trust we currently place in video as a record of events.
But there can be beneficial and benign uses as well: political commentary, parody, anonymization of those needing identity protection, and even consensual vanity or novelty pornography. (A few others are hypothesized towards the end of this article.)
The knee-jerk reaction many people have towards any new technology that could be used for awful purposes is to try and criminalize or regulate the technology itself. But such a move would threaten the beneficial uses as well, and raise unnecessary constitutional problems.
Fortunately, existing laws should be able to provide acceptable remedies for anyone harmed by deepfake videos. In fact, this area isn’t entirely new when it comes to how our legal framework addresses it. The US legal system has been dealing with the harm caused by photo-manipulation and false information in general for a long time, and the principles so developed should apply equally to deepfakes.

What Laws Apply
If a deepfake is used for criminal purposes, then criminal laws will apply. For example, if a deepfake is used to pressure someone to pay money to have it suppressed or destroyed, extortion laws would apply. And for any situations in which deepfakes were used to harass, harassment laws apply. There is no need to make new, specific laws about deepfakes in either of these situations.
On the tort side, the best fit is probably the tort of False Light invasion of privacy. False light claims commonly address photo manipulation, embellishment, and distortion, as well as deceptive uses of non-manipulated photos for illustrative purposes. Deepfakes fit into those areas quite easily.
To win a false light lawsuit, a plaintiff—the person harmed by the deepfake, for example—must typically prove that the defendant—the person who uploaded the deepfake, for example—published something that gives a false or misleading impression of the plaintiff in such a way to damage the plaintiff’s reputation or cause them great offense, in such a way that would be highly offensive to a reasonable person, and caused the plaintiff mental anguish or suffering. It seems that in many situations the placement of someone in a deepfake without their consent would be the type of “highly offensive” conduct that the false light tort covers.
The Supreme Court further requires that in cases pertaining to matters of public interest, the plaintiff must also prove an intent that the audience believe the impression to be true. This is the actual malice requirement found in defamation law.
False light is recognized as a legal action in about two-thirds of the states. It can be difficult to distinguish false light from defamation, and many courts treat them identically. The courts that treat them differently focus on the injury: defamation compensates for damage to reputation, false light compensates for being subject to offensiveness. But of course, a plaintiff could sue for defamation if a deepfake has a natural tendency to damage their reputation.
The tort of Intentional Infliction of Emotional Distress (IIED) will also be available in many situations. A plaintiff can win an IIED lawsuit if they prove that a defendant—again, for example, a deepfake creator and uploader—intended to cause the plaintiff severe emotional distress by extreme and outrageous conduct, and that the plaintiff actually suffered severe emotional distress as a result of the extreme and outrageous conduct. The Supreme Court has found that where the extreme and outrageous conduct is the publication of a false statement and when the statement is about either a matter of public interest or a public figure, the plaintiff must also prove an intent that the audience believe the statement to be true, an analog to defamation law’s actual malice requirement. The Supreme Court has further extended the actual malice requirement to all statements pertaining to matters of public interest.
And to the extent deepfakes are sold or the creator receives some other benefit from them, they raise the possibility of right of publicity claims as well by those whose images are used without their consent.
Lastly, one whose copyrighted material (either the facial image or the source material into which the facial image is embedded) is used may have a claim for copyright infringement, subject of course to fair use and other defenses.
Yes, deepfakes can present a social problem about consent and trust in video, but EFF sees no reason why the already available legal remedies will not cover injuries caused by deepfakes.
Although we have been opposing Europe's misguided link tax and upload filtering proposals ever since they first surfaced in 2016, the proposals haven't been standing still during all that time. In the back and forth between a multiplicity of different Committees of the European Parliament, and two other institutions of the European Union (the European Commission and the Council of the European Union), various amendments have been offered up in an attempt at political compromise. Unfortunately, the point at which these compromises seem to have landed still poses the same problems as before.

What Has Happened with the Link Tax?
Article 11 is its official designation, but "link tax" is a far better informal description of this proposal, which would impose a requirement for Internet platforms to pay money to news publishers for providing links to news articles, accompanied by a short summary of what they are linking to. This isn't a copyright, because the link tax is paid to the publisher rather than the author, and because it is payable even if the portion of the news article taken isn't copyright-protected, falls within a copyright exception, or is freely licensed.
It's unclear why this proposal wasn't abandoned a long time ago. A similar link tax in Spain resulted in the closure of the Spanish version of Google News, a German equivalent has also been deemed a dismal failure, and both small publishers and even a European Commission-funded study have slammed the proposal. Nevertheless, as of February 2018, it remains firmly on the table, with virtually nothing to sweeten the thoroughly rotten deal that it offers to Internet platforms and publishers alike.
The most recent attempt at compromise comes in a discussion paper [PDF] from the Bulgarian Council Presidency, prepared as input for a meeting of the Council's Intellectual Property Working Party that was held on February 12. The paper proposes only minor tweaking to the European Commission's original text, such as excluding individual Internet users from liability for the tax, and carving out "individual words or very short excerpts of text" from its scope, but without specifying what "very short excerpts" actually means.
The discussion paper also briefly acknowledges the alternative proposal of dropping the link tax altogether, and instead addressing publishers' concerns without creating any new copyright-like impost. This alternative proposal would create a legal presumption that news publishers are entitled to enforce the existing copyrights in news articles written by their journalists. If Internet platforms are reproducing such large parts of news articles that permission from the copyright owner is required, this would enable the publishers to negotiate directly with those platforms to license that use. This is the only sensible compromise that can be made to the Article 11 proposal, but it is one that the Bulgarian Presidency unfortunately gives short shrift.

What Has Happened with Upload Filtering?
The same discussion paper also tinkers around the edges of the upload filtering mandate, without addressing the fundamental dangers that it continues to pose to freedom of expression online. For those who came in late, the European Commission's initial upload filter proposal, formally designated as Article 13, would require Internet platforms to put in place costly and ineffective automatic filters to prevent copyright-infringing content from being uploaded by users, creating a kind of robotic censorship regime.
What has changed since then? Not much. The Bulgarian Presidency proposes being slightly more specific about what kinds of online platforms are the target of the measure ("online content sharing services"). It also proposes introducing a new, expansive definition of "communication to the public," an exclusive right reserved to copyright holders in Europe that had previously only been defined by way of a complicated series of court decisions. By deeming an Internet platform to be engaged in "communication to the public" whenever it allows a user to upload a copyright-protected work for sharing, the Bulgarian Presidency aims to justify excluding that platform from the copyright safe harbor that the existing E-Commerce Directive provides.
The only other change worth noting is that the proposal is now more equivocal about whether Internet platforms would actually have to install automated upload filters, or whether it would be sufficient for them to prevent the uploading of copyright-infringing material in some other way. But as European Digital Rights (EDRi) has cogently pointed out, this is a distinction without a difference.
To comply with Article 13 and to avoid liability under the E-Commerce Directive (per the Bulgarian Presidency's amendment), platforms are required to "take effective measures to prevent the availability on its services of ... unauthorized works or other subject-matter identified by the rightholders," and if such works do nevertheless appear on the platform, must "act expeditiously to remove or disable access to the specific unauthorized work or other subject matter and ... take steps to prevent its future availability."
There is no way in which platforms could possibly comply with this directive other than by agreeing to monitor all of the content they accept, either manually or automatically. By daring not to speak this uncomfortable truth, the Bulgarian Presidency skirts around the fact that such a general monitoring obligation would contravene both Article 14 of the E-Commerce Directive and European human rights law. But that kind of clever circumlocution can't hide the repressive nature of this censorship proposal, and does nothing to improve on the flaws of the original text.

What Can You Do?
The fight against Article 11 and Article 13 is entering its closing days. That makes every voice that we can raise in opposition to these harmful proposals more important than ever before. European voices are best placed to convince European policymakers of the harm that their proposals would wreak upon European businesses and users. Thankfully, our allies in Europe are on the case, and if you are European or have colleagues or friends in Europe, here are the links you need to contact your representatives and speak out against their misguided plans:
- Mozilla has put together an awesome call-in tool and response guide, which makes it easy to identify your specific concerns as a technologist, creator, innovator, scientist or librarian. You can also read more on Mozilla's site about how all of these categories of user, and more, are affected by the Article 11 and Article 13 proposals, along with some of the other more obscure (but still important) provisions of the broader Digital Single Market Directive.
- A coalition called Create.Refresh has launched a brilliant viral campaign that encourages creators to create and share their own works addressing the problems inherent in restrictive filtering systems, such as those that Article 13 would effectively mandate.
- OpenMedia's Save the Link network has updated its click-to-call website this month with a brand new petition on Article 11 that enables you to identify yourself as one of the impacted groups from a drop-down menu on the new page. If you are a librarian, software developer, creator, researcher, or journalist, you'll be able to demonstrate how the link tax proposals are harmful to you specifically.
As you can see, there are many options for you to get involved in this fight—and with the final Committee vote in the European Parliament coming up on March 26-27, now is the best time to do so. If we lose this one, the link tax and upload filtering mandates could be here to stay, and the Internet as we know it will never be the same.
Today, we delivered a petition to the U.S. Copyright Office to keep copyright’s safe harbors safe. We asked the Copyright Office to remove a bureaucratic requirement that could cause websites and Internet services to lose protection under the Digital Millennium Copyright Act (DMCA). And we asked them to help keep Congress from replacing the DMCA safe harbor with a mandatory filtering law. Internet users from all over the U.S. and beyond added their voices to our petition.
Under current law, the owners of websites and online services can be protected from monetary liability when their users are accused of infringing copyright through the DMCA “safe harbors.” In order to take advantage of these safe harbors, owners must meet many requirements, including participating in the notorious notice-and-takedown procedure for allegedly infringing content. They also must register an agent—someone who can respond to takedown requests—with the Copyright Office.
The DMCA is far from perfect, but provisions like the safe harbor allow websites and other intermediaries that host third-party material to thrive and grow without the constant threat of massive copyright penalties. Without safe harbors, small Internet businesses could face bankruptcy over the infringing activities of just a few users.
Now, a lot of those small sites risk losing their safe harbor protections. That’s because of the Copyright Office’s rules for registering agents. Those registrations used to be valid as long as the information was accurate. Under the Copyright Office’s new rules, website owners must renew their registrations every three years or risk losing safe harbor protections. That means that websites can risk expensive lawsuits for nothing more than forgetting to file a form. As we’ve written before, because the safe harbor already requires websites to submit and post accurate contact information for infringement complaints, there’s no good reason for agent registrations to expire. We’re also afraid that it will disproportionately affect small businesses, nonprofits, and hobbyists, who are least able to have a cadre of lawyers at the ready to meet bureaucratic requirements.
Many website owners have signed up under the Copyright Office’s new agent registration system, which is designed to send reminder emails when the three-year registrations are set to expire. While the new registration system is a vast improvement over the old paper filing system, the expiration requirement is unnecessary and dangerous.
We explained these problems in our petition, and we also explained how the DMCA faces even greater threats. If certain major media and entertainment companies get their way, it will become much more difficult for websites of any size to earn their safe harbor status. That’s because those companies’ lobbyists are pushing for a system where platforms would be required to use computerized filters to check user-uploaded material for potential copyright infringement.
Requiring filters as a condition of safe harbor protections would make it much more difficult for smaller web platforms to get off the ground. Automated filtering technology is expensive—and not very good. Even when big companies use them, they’re extremely error-prone, causing lots of lawful speech to be blocked or removed. A filtering mandate would threaten smaller websites’ ability to host user content at all, cementing the dominance of today’s Internet giants.
If you run a website or online service that stores material posted by users, make sure that you comply with the DMCA’s requirements. Register a DMCA agent through the Copyright Office’s online system, post the same information on your website, and keep it up to date. Meanwhile, we’ll keep telling the Copyright Office, and Congress, to keep the safe harbors safe.
Online publisher and blogger Eskinder Nega has been imprisoned in Ethiopia since September 2011 for the "crime" of writing articles critical of his government. He is one of the longest-serving prisoners in EFF's Offline casefile of writers and activists unjustly imprisoned for their work online.
Now a chance he may finally be freed has been thrown into doubt because of the Ethiopian authorities' outrageous demand that he sign a false confession before being released.
The Ethiopian Prime Minister, Hailemariam Desalegn, announced in January surprise plans to close down the notorious Maekelawi detention center and release a number of prisoners. The Prime Minister said that the move was intended to "foster national reconciliation."
While Ethiopia's own officials have declined to call the recipients of the amnesty "political prisoners," the bulk of the candidates named so far for release are either opposition politicians and activists, or others, like Eskinder, caught up in previous crackdowns on dissent and free speech.
Despite the government's apparent desire to use the release to moderate tensions in Ethiopia, prison officials have undermined its message—and Eskinder's chance at freedom—by requiring him to sign a false confession before his release.
The document, given to Eskinder without warning last week, included a claim that Eskinder was a member of Ginbot 7, a group the government has previously declared a terrorist organization. Eskinder refused to sign the document, and was subsequently returned to his cell, even as other prisoners were being released. The Committee to Protect Journalists subsequently told Quartz Africa that Eskinder was asked to sign the form a second time over the weekend.
EFF continues to follow Eskinder's case closely, and urges the Ethiopian government to live up to its promise of a new era of reconciliation and renewal by returning Eskinder to his friends and family, unconditionally and immediately.
It should not be surprising that arguably the biggest mistake in Internet policy history is going to invoke a vast political response. Since the FCC repealed the federal Open Internet Order in December, many states have attempted to fill the void. With a new bill that reinstates net neutrality protections, Oregon is the latest state to step up.
Oregon’s Majority Leader Jennifer Williamson recently announced her intention to fight to restore much of what the FCC repealed last December under its so-called “Restoring Internet Freedom Order.” Her legislation, H.B. 4155, responds to the FCC’s decision by requiring any ISP that receives funds from the state to adhere to net neutrality principles—not blocking or throttling content or prioritizing its own content over that of competitors, for example.
If you’re an Oregonian, tell your state representative to act to restore net neutrality.
Oregon is following in what is clearly a trend of state legislatures and executives acting to protect their citizens’ digital rights where the federal government has abdicated responsibility. To date, 17 states have introduced network neutrality legislation and four governors have issued executive orders (Montana, New York, New Jersey, and Hawaii).
The national response to the FCC’s decision to abandon its role as the consumer protection agency overseeing cable and telephone companies is to be expected. The decision is wildly unpopular with voters of all political leanings: 83% of voters overall, including 3 out of 4 Republican voters, opposed it. Yet despite millions of Americans submitting comments to the FCC opposing the decision, those comments were promptly ignored in favor of the interests of AT&T, Verizon, and Comcast. Where else should this vast swath of the American public go if not to their state and local representatives?
And while both Verizon and their association, the CTIA, made last-minute requests to the FCC to try to prevent state privacy and network neutrality laws, they are not going to be successful. Their problem is that the plan to eviscerate the law that empowers the FCC also disables the agency’s ability to block state laws. In other words, they cannot have it both ways.
While the FCC's order did contain a lot of words about how states cannot pass their own network neutrality laws, it did so without citing any specific legal authority. We remain skeptical that the FCC itself has that power. And while states still have to navigate the Commerce Clause, EFF has provided guidance on how to do that.
Notably, states and local governments, and in particular governors, have caught on to the obvious weakness in the FCC’s authority and have acted. EFF will continue working to support the states in their effort to protect a free and open Internet until we are able to fully restore the protections we once had at the federal level.
This week, Senators Hatch, Graham, Coons, and Whitehouse introduced a bill that diminishes the data privacy of people around the world.
The Clarifying Overseas Use of Data (CLOUD) Act expands American and foreign law enforcement’s ability to target and access people’s data across international borders in two ways. First, the bill creates an explicit provision for U.S. law enforcement (from a local police department to federal agents in Immigration and Customs Enforcement) to access “the contents of a wire or electronic communication and any record or other information” about a person regardless of where they live or where that information is located on the globe. In other words, U.S. police could compel a service provider—like Google, Facebook, or Snapchat—to hand over a user’s content and metadata, even if it is stored in a foreign country, without following that foreign country’s privacy laws.
Second, the bill would allow the President to enter into “executive agreements” with foreign governments that would allow each government to acquire users’ data stored in the other country, without following each other’s privacy laws.
For example, because U.S.-based companies host and carry much of the world’s Internet traffic, a foreign country that enters one of these executive agreements with the U.S. could potentially wiretap people located anywhere on the globe (so long as the target of the wiretap is not a U.S. person or located in the United States) without the procedural safeguards of U.S. law typically given to data stored in the United States, such as a warrant, or even notice to the U.S. government. This is an enormous erosion of current data privacy laws.
This bill would also moot legal proceedings now before the U.S. Supreme Court. In the spring, the Court will decide whether or not current U.S. data privacy laws allow U.S. law enforcement to serve warrants for information stored outside the United States. The case, United States v. Microsoft (often called “Microsoft Ireland”), also calls into question principles of international law, such as respect for other countries’ territorial boundaries and their rule of law.
Notably, this bill would expand law enforcement access to private email and other online content, yet the Email Privacy Act, which would create a warrant-for-content requirement, has still not passed the Senate, even though it has enjoyed unanimous support in the House for the past two years.

The CLOUD Act and the US-UK Agreement
The CLOUD Act’s proposed language is not new. In 2016, the Department of Justice first proposed legislation that would enable the executive branch to enter into bilateral agreements with foreign governments to allow those foreign governments direct access to U.S. companies and U.S. stored data. Ellen Nakashima at the Washington Post broke the story that these agreements (the first iteration has already been negotiated with the United Kingdom) would enable foreign governments to wiretap any communication in the United States, so long as the target is not a U.S. person. In 2017, the Justice Department re-submitted the bill for Congressional review, but added a few changes: this time including broad language to allow the extraterritorial application of U.S. warrants outside the boundaries of the United States.
In September 2017, EFF, with a coalition of 20 other privacy advocates, sent a letter to Congress opposing the Justice Department’s revamped bill.
The executive agreement language in the CLOUD Act is nearly identical to the language in the DOJ’s 2017 bill. None of EFF’s concerns have been addressed. The legislation still:
- Includes a weak standard for review that does not rise to the protections of the warrant requirement under the Fourth Amendment.
- Fails to require foreign law enforcement to seek individualized and prior judicial review.
- Grants real-time access and interception to foreign law enforcement without requiring the heightened warrant standards that U.S. police have to adhere to under the Wiretap Act.
- Fails to place adequate limits on the category and severity of crimes for this type of agreement.
- Fails to require notice on any level – to the person targeted, to the country where the person resides, and to the country where the data is stored. (Under a separate provision regarding U.S. law enforcement extraterritorial orders, the bill allows companies to give notice to the foreign countries where data is stored, but there is no parallel provision for company-to-country notice when foreign police seek data stored in the United States.)
The CLOUD Act also creates an unfair two-tier system. Foreign nations operating under executive agreements are subject to minimization and sharing rules when handling data belonging to U.S. citizens, lawful permanent residents, and corporations. But these privacy rules do not extend to someone born in another country and living in the United States on a temporary visa or without documentation. This denial of privacy rights is unlike other U.S. privacy laws. For instance, the Stored Communications Act protects all members of the “public” from the unlawful disclosure of their personal communications.

An Expansion of U.S. Law Enforcement Capabilities
The CLOUD Act would give unlimited jurisdiction to U.S. law enforcement over any data controlled by a service provider, regardless of where the data is stored and who created it. This applies to content, metadata, and subscriber information – meaning private messages and account details could be up for grabs. The breadth of such unilateral extraterritorial access creates a dangerous precedent for other countries who may want to access information stored outside their own borders, including data stored in the United States.
EFF argued on this basis (among others) against unilateral U.S. law enforcement access to cross-border data, in our Supreme Court amicus brief in the Microsoft Ireland case.
When data crosses international borders, U.S. technology companies can find themselves caught in the middle between the conflicting data laws of different nations: one nation might use its criminal investigation laws to demand data located beyond its borders, yet that same disclosure might violate the data privacy laws of the nation that hosts that data. Thus, U.S. technology companies lobbied for and received provisions in the CLOUD Act allowing them to move to quash or modify U.S. law enforcement orders for extraterritorial data. The tech companies can quash a U.S. order when the order does not target a U.S. person and might conflict with a foreign government’s laws. To do so, the company must object within 14 days, and undergo a complex “comity” analysis – a procedure where a U.S. court must balance the competing interests of the U.S. and foreign governments.

Failure to Support Mutual Assistance
Of course, there is another way to protect technology companies from this dilemma, which would also protect the privacy of technology users around the world: strengthen the existing international system of Mutual Legal Assistance Treaties (MLATs). This system allows police who need data stored abroad to obtain the data through the assistance of the nation that hosts the data. The MLAT system encourages international cooperation.
It also advances data privacy. When foreign police seek data stored in the U.S., the MLAT system requires them to adhere to the Fourth Amendment’s warrant requirements. And when U.S. police seek data stored abroad, it requires them to follow the data privacy rules where the data is stored, which may include important “necessary and proportionate” standards. Technology users are most protected when police, in the pursuit of cross-border data, must satisfy the privacy standards of both countries.
While there are concerns from law enforcement that the MLAT system has become too slow, those concerns should be addressed with improved resources, training, and streamlining.
The CLOUD Act raises dire implications for the international community, especially as the Council of Europe is beginning a process to review the MLAT system that has been supported for the last two decades by the Budapest Convention. Although Senator Hatch has in the past introduced legislation that would support the MLAT system, this new legislation fails to include any provisions that would increase resources for the U.S. Department of Justice to tackle its backlog of MLAT requests, or otherwise improve the MLAT system.
A growing chorus of privacy groups in the United States opposes the CLOUD Act’s broad expansion of U.S. and foreign law enforcement’s unilateral powers over cross-border data. For example, Sharon Bradford Franklin of OTI (and the former executive director of the U.S. Privacy and Civil Liberties Oversight Board) objects that the CLOUD Act will move law enforcement access capabilities “in the wrong direction, by sacrificing digital rights.” CDT and Access Now also oppose the bill.
Sadly, some major U.S. technology companies and legal scholars support the legislation. But, to set the record straight, the CLOUD Act is not a “good start.” Nor does it do a “remarkable job of balancing these interests in ways that promise long-term gains in both privacy and security.” Rather, the legislation reduces protections for the personal privacy of technology users in an attempt to mollify tensions between law enforcement and U.S. technology companies.
Legislation to protect the privacy of technology users from government snooping has long been overdue in the United States. But the CLOUD Act does the opposite, and privileges law enforcement at the expense of people’s privacy. EFF strongly opposes the bill. Now is the time to strengthen the MLAT system, not undermine it.
The text of the CLOUD Act does not limit U.S. law enforcement to serving orders on U.S. companies or companies operating in the United States. The Constitution may prevent the assertion of jurisdiction over service providers with little or no nexus to the United States.

Related Cases: In re Warrant for Microsoft Email Stored in Dublin, Ireland
The importance of the US Patent Office’s “inter partes review” (IPR) process was highlighted in dramatic fashion yesterday. Patent appeals judges threw out a patent [PDF] that was used to sue more than 80 companies in the fitness, wearables, and health industries.
US Patent No. 7,454,002 was owned by Sportbrain Holdings, a company that advertised a kind of ‘smart pedometer’ as recently as 2011. But the product apparently didn’t take off, and in 2016, Sportbrain turned to patent lawsuits to make a buck.
A company called Unified Patents challenged the ’002 patent by filing an IPR petition, and last year, the Patent Office agreed that the patent should be reviewed. Yesterday, the patent judges published their decision, canceling every claim of the patent.
The ’002 patent describes capturing a user’s “personal data,” and then sharing that information with a wireless computing device and over a network. It then analyzes the data and provides feedback.
After reviewing the relevant technology, a panel of patent office judges found there wasn’t much new to the ’002 patent. Earlier patents had already described collecting and sharing various types of sports data, including computer-assisted pedometers and a system that measured a skier’s “air time.” Given those earlier advances, the steps of the Sportbrain patent would have been obvious to someone working in the field. The office canceled all the claims.
That means the dozens of different companies sued by Sportbrain won’t have to each spend hundreds of thousands of dollars—potentially millions—to defend against a patent that, the government now acknowledges, never should have been granted in the first place.

A Critical Tool for Innovators
Bad patents like the one asserted by Sportbrain are a drain on the innovation economy, especially for small businesses. But the damage that could be caused by such patents was much worse before the advent of IPRs.
The IPR process has proven to be the most effective part of the 2012 America Invents Act. In most cases, the IPR process is far more efficient than federal courts when it comes to evaluating a patent to figure out if it’s truly new and non-obvious.
IPRs have other advantages for small companies. Often, companies that get sued or threatened by patent trolls will end up paying a licensing fee, even though they don’t think the patents are legitimate. Through the IPR process, defendants can band together to file IPRs. That’s enabled the success of membership-based for-profit companies like RPX and Unified Patents—in fact, it was member-funded Unified that filed the petition which shut down the Sportbrain Holdings patent.
The IPR process also enables non-profits like EFF to fight bad patents. That’s how EFF was able to knock out the Personal Audio “podcasting” patent. The petition was paid for by the more than 1,000 donors who gave to our “Save Podcasting” campaign. Last year, EFF’s victory in that case was upheld by a federal appeals court.
But the IPR process could be in danger. Senator Chris Coons has twice proposed legislation (the STRONG Patents Act and the STRONGER Patents Act) that would gut the IPR system. EFF has opposed these bills. Other opponents of IPRs have taken their complaints to the courts. One company has asked the Supreme Court to declare the process unconstitutional. This case, Oil States, will decide the future of IPRs. We’ve submitted a brief explaining why we think the process of reviewing patents at the Patent Office is not only constitutional, it’s good public policy. We hope both Congress and the high court see their way to upholding this critical tool that saved 80 companies from damaging litigation—and that was just yesterday.

Related Cases: EFF v. Personal Audio LLC
With a broken heart I have to announce that EFF's founder, visionary, and our ongoing inspiration, John Perry Barlow, passed away quietly in his sleep this morning. We will miss Barlow and his wisdom for decades to come, and he will always be an integral part of EFF.
It is no exaggeration to say that major parts of the Internet we all know and love today exist and thrive because of Barlow’s vision and leadership. He always saw the Internet as a fundamental place of freedom, where voices long silenced can find an audience and people can connect with others regardless of physical distance.
Barlow was sometimes held up as a straw man for a kind of naive techno-utopianism that believed that the Internet could solve all of humanity's problems without causing any more. As someone who spent the past 27 years working with him at EFF, I can say that nothing could be further from the truth. Barlow knew that new technology could create and empower evil as much as it could create and empower good. He made a conscious decision to focus on the latter: "I knew it’s also true that a good way to invent the future is to predict it. So I predicted Utopia, hoping to give Liberty a running start before the laws of Moore and Metcalfe delivered up what Ed Snowden now correctly calls 'turn-key totalitarianism.'”
Barlow’s lasting legacy is that he devoted his life to making the Internet into “a world that all may enter without privilege or prejudice accorded by race, economic power, military force, or station of birth . . . a world where anyone, anywhere may express his or her beliefs, no matter how singular, without fear of being coerced into silence or conformity.”
In the days and weeks to come, we will be talking and writing more about what an extraordinary role Barlow played for the Internet and the world. And as always, we will continue the work to fulfill his dream.
Newly Released Surveillance Orders Show That Even with Individualized Court Oversight, Spying Powers Are Misused
Once-secret surveillance court orders obtained by EFF last week show that even when the court authorizes the government to spy on specific Americans for national security purposes, that authorization can be misused to potentially violate other people’s civil liberties.
These documents raise larger questions about whether the government can meaningfully protect people’s privacy and free expression rights under Section 702 of the Foreign Intelligence Surveillance Act (FISA), which permits officials to engage in warrantless mass surveillance with far less court oversight than is required under the “traditional” FISA warrant process.
The documents are the third and final batch of Foreign Intelligence Surveillance Court (FISC) opinions released to EFF as part of a FOIA lawsuit seeking all significant orders and opinions of the secret court. Previously, the government released opinions dealing with FISA’s business records and pen register provisions, along with opinions under Section 702.
Although many of the 13 opinions are heavily redacted—and the government withheld another 26 in full—the readable portions show several instances of the court blocking government efforts to expand its surveillance or ordering the destruction of information obtained improperly as a result of its spying.

Court Rejects FBI Effort to Log Communications of Individuals Not Targeted by FISA Order
For example, in a 40-page opinion issued in 2004 or 2005, FISC Judge Harold Baker rejected the FBI’s proposal to log copies of recorded conversations of people who, while not targeted by the agency, were still swept up in its surveillance. This likely occurred when innocent people used the same communications service as the FBI’s target, possibly a shared phone line. The opinion demonstrates both the risks of overcollection as part of targeted surveillance as well as the benefits of engaged, detailed court oversight.
Here’s how that oversight works: Once the FISC approves electronic surveillance under FISA’s Title I, the FBI can record a target’s communications, but it must follow “minimization procedures” to avoid unnecessarily listening in on conversations by others who are using the same “facility” (like a telephone line). In this case, however, the FBI employed a surveillance technique that apparently captured a lot of innocent communications. (This is often referred to as “incidental collection” because the recording of these conversations is incidental to spying on the target who uses the same phone line.)
Although redactions make it difficult to understand details of the FBI’s request to the court, it apparently sought to mark these out-of-scope conversations for later use, which would be inconsistent with the “Standard Minimization Procedures” approved for use in FISA Title I cases.
The FBI seems to have presented its request to the FISC as no big deal, with “minimal, if any” impact on the Fourth Amendment. Judge Baker saw it differently. He explained that “it is not sufficient to assert that, because the Standard Procedures already permit the FBI a great deal of latitude, it is reasonable to grant a little more.”
More fundamentally, the court took the FBI to task for the “surprising occasion” of seeking to expand its use of incidentally collected communications, rather than getting rid of them. It faulted the FBI for failing to account “for the possibility that overzealous or ill-intentioned personnel might be inclined to misuse information, if given the opportunity.” As the court put it, “the advantage of minimization at the acquisition stage is clear. Information that is never acquired in the first place cannot be misused.”

NSA Makes Ridiculous Argument to Keep Communications it Obtained Without Court Authorization
Other opinions EFF obtained detail the NSA’s unauthorized surveillance of a number of individuals and the government’s efforts to hold onto the data despite a FISA court’s order that the communications be destroyed.
A December 2010 order by FISC Judge Frederick Scullin, Jr. describes how, over a period of between 15 months and three years, the NSA obtained a number of communications of U.S. persons. The precise number of communications obtained is redacted.
Rather than notifying the court that it had destroyed the communications it obtained without authorization, the NSA made an absurd argument in a bid to retain the communications: because the surveillance was unauthorized, the agency’s internal procedures that require officials to delete non-relevant communications should not apply. Essentially, because the surveillance was unlawful, the law shouldn’t apply and the NSA should get to keep what it had obtained.
The court rejected the NSA’s argument. “One would expect the procedures’ restrictions on retaining and disseminating U.S. person information to apply most fully to such communications, not, as the government would have it, to fail to apply at all,” the court wrote.
The court went on to say that “[t]here is no persuasive reason to give the (procedures) the paradoxical and self-defeating interpretation advanced by the government.”
The court then ordered the NSA to destroy the communications it had obtained without FISC authorization. But another opinion issued by Judge Scullin in May 2011 shows that rather than immediately complying with the order, the NSA asked the FISC once more to allow it to keep the communications.
Again the court rejected the government’s arguments. “No lawful benefit can plausibly result from retaining this information, but further violation of law could ensue,” the court wrote. The court then ordered the NSA to not only delete the data, but to provide reports on the status of its destruction “until such time as the destruction process has been completed.”

If Government Abuse of Surveillance Powers Occurs With Careful Oversight, What Happens Under Section 702?
The new opinions show that even when the FISC judges actually approve targeted surveillance on particular individuals, the government still collects the contents of innocent people’s communications in ways that are incompatible with the law. This raises the question: what is the government getting away with when it engages in surveillance that has even less FISC oversight?
Although the opinions discussed above concern FISA’s statutory requirements of minimization rather than constitutional limits, these are the sort of concerns that EFF has raised in the context of the NSA’s warrantless surveillance under Section 702 of FISA. Unlike FISA Title I, Section 702 does not require the FISC to conduct such detailed oversight of the government’s activities. The court does approve minimization procedures, but it does not review targets or facilities, meaning that it has less insight into the actual surveillance. That necessarily reduces opportunities to prevent overbroad collection or check an intelligence agency’s incremental loosening of its own rules. And, as we’ve seen, it has led to significant “compliance violations” by the NSA and other agencies using Section 702.
All surveillance procedures come with risks, especially with the level of secrecy involved in FISA. Nevertheless, opinions like these demonstrate that detailed court oversight offers the best hope of curtailing these risks. We hope it informs future debate in those areas where oversight is limited by statute, as with Section 702. If anything, the decisions are more evidence that warrantless surveillance must end.

Related Cases: Significant FISC Opinions
What with the $400 juicers and the NSFW smart fridges, the Internet of Things has arrived at that point in the hype cycle midway between "bottom line" and "punchline." Hype and jokes aside, the reality is that fully featured computers capable of running any program are getting cheaper and more powerful and smaller with no end in sight, and the gadgets in our lives are transforming from dumb hunks of electronics to computers in fancy cases that are variously labeled "car" or "pacemaker" or "Alexa."
We don't know which designs and products will be successful in the market, but we're dead certain that banning people from talking about flaws in existing designs and trying to fix those flaws will make all the Internet of Things' problems worse.
But a pernicious American law stands between the Internet of Defective Things and your right to know about those defects and remediate them. Section 1201 of the Digital Millennium Copyright Act bans any act that weakens or bypasses a lock that controls access to copyrighted works (these locks are often called Digital Rights Management or DRM). These locks were initially used to lock down the design of DVD players and games consoles, so that manufacturers could prevent otherwise legal activities, like watching out-of-region discs or playing independently produced games.
Today, these locks have proliferated to every device with embedded software: cars, tractors, pacemakers, voting machines, phones, tablets, and, of course, "smart speakers" used to interface with voice assistants. Corporations have figured out that they can deploy DRM to control how you use your device, and then use DMCA 1201 to threaten competitors whose products unlock legal, legitimate features that benefit you, instead of some company's shareholders.
This means that, for example, a printer company can use digital locks to control who can refill your printer-ink cartridges, ensuring that you buy ink from them, at whatever price they want to charge. It means that cellphone manufacturers get to decide who can fix your phone and tractor companies can choose who can fix your tractors.
What's worse: companies have exploited DMCA 1201 to attack security researchers who came forward to report defects in their products, arguing that any disclosures of vulnerabilities in the stuff you own might help you break the DRM, meaning that it's illegal to tell you truthful things about the risks you face from your badly secured gadgets.
Every three years, the US Copyright Office lets us petition for limited exemptions to this law, and we have been slowly, surely carving out a space for Americans to bypass digital locks in order to use their property in legitimate, legal ways—even if there's some DRM between them and that use.
In 2015, we won the right to jailbreak your phones and tablets—to change how they're configured so that you can unlock features that you want (even if the manufacturer doesn't), and remove the ones you don't. We also won an exemption that protects security researchers' right to bypass DRM to investigate and test the security of all sorts of gadgets. Taken together, these two rights—the right to discover defects and the right to change your device configuration—form a foundation on which solutions to the pernicious problems of our vital, ubiquitous, badly secured gadgets can be built.
This year, we're liberating your smart speakers: Apple HomePods, Amazon Echos, Google Homes, and lesser-known offerings from other manufacturers and platforms. These gadgets are finding their way into our living rooms, kitchens—even our bedrooms and bathrooms. They have microphones that are always on and listening (many of them have cameras, too), and they're connected to the Internet. They only run manufacturer-approved apps, and use encryption that prevents security researchers from investigating them and ensuring that they're working as intended.
We've asked the Copyright Office to extend the jailbreaking exemption to cover these smart speakers, giving you the right to load software of your choosing on them—and letting security researchers probe them to make sure they're not sneaking around behind your back. These exemptions include the right to bypass the devices' bootloaders and to activate or disable hardware features. These are rights that you've always had, for virtually every gadget you've ever owned—that is, until manufacturers discovered DMCA 1201's potential to control how you use their products after they become your property.
We don't have all the answers about how to make smart speakers better, or more secure, but we are one hundred percent certain that banning people from finding out what's wrong with their smart speakers and punishing anyone who tries to improve them isn't helping.
These Copyright Office hearings are important, because they help the Copyright Office understand and acknowledge that DMCA 1201 is causing problems for people who want to do legitimate activities, but the hearings are still grossly insufficient. DMCA 1201 says the Copyright Office can give you the right to use your device in ways that are prevented by DRM, but not the right to acquire a tool to enable you to make that use. Under the DMCA's rules, every person who has the right to bypass DRM is expected to hand-whittle a tool for their own personal use and treat the design of that tool as a matter of strictest secrecy.
This is absurd. It's one of the reasons we're suing the U.S. government over the constitutionality of DMCA 1201, with the intention of having a court rule that the law is unenforceable, killing it altogether or sending it back to Congress for a major overhaul that terminates the ability of corporations to use a so-called anti-piracy law to ban activities that have no connection to copyright infringement.
Trying to succeed as a startup is hard enough. Getting a frivolous patent infringement demand letter in the mail can make it a whole lot harder. The experience of San Francisco-based Motiv is the latest example of how patent trolls impose painful costs on small startups and stifle innovation.
Motiv is a startup of fewer than 50 employees competing in the wearable technology space. Founded in 2013, the company creates fitness trackers, housed in a ring worn on your finger.
In January, Motiv received a letter alleging infringement of U.S. Patent No. 9,069,648 (“the ’648 Patent”). The letter gave Motiv two options: pay $35,000 to license the ’648 Patent, or face the potential of costly litigation.
The '648 Patent, owned by Motivational Health Messaging LLC (“MHM”), is titled “Systems and methods for delivering activity based suggestive (ABS) messages.” The patent describes sending “motivational messages,” based “on the current or anticipated activity of the user,” to a “personal electronic device.” It provides examples such as sending the message “don't give up” when the user is running up a hill, or messages like “do not fear” and “God is with you” when a “user enters a dangerous neighborhood.” Simply put, the patent claims to have invented using a computer to send tailored messages based on activity or location.
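To see just how thin the claimed "invention" is, here is a minimal sketch of the logic the patent describes: choose a canned message based on the user's current activity or location. This is an illustration only, not code from the patent; the function and variable names are hypothetical.

```python
# Hypothetical sketch of the '648 Patent's described logic: send a
# "motivational message" selected by the user's activity or location.
# Names and messages are illustrative; the example messages come from
# the patent's own description.
def suggestive_message(activity: str, location_risk: str) -> str:
    if activity == "running uphill":
        return "don't give up"
    if location_risk == "dangerous":
        return "do not fear"
    return "keep going"  # generic fallback
```

That a patent examiner allowed claims reducible to a pair of conditionals is exactly the kind of problem EFF has long highlighted with software patents.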
While the name “Motivational Health Messaging” may sound new, the actors behind it aren’t: the people associated with MHM and its patent overlap with the people associated with notorious patent assertion entities Shipping & Transit, Electronic Communication Technologies, ArrivalStar, and Eclipse IP, who we’ve written about on numerous occasions. Collectively, these entities have filed over 700 lawsuits, with Shipping & Transit setting the 2016 record for most patent infringement lawsuits filed.
Though MHM and its patent may be new, the business model seems to be the same as the other, related entities: make patent infringement demands, often against small businesses, and leverage the high cost of litigation to extract settlements in the $25,000 to $45,000 range. (As of the date of this post, MHM has not yet filed any lawsuits and the related entities have been faring very poorly in court.)
Unfortunately, for many small businesses it often makes sense to simply pay for a license instead of spending years tied up in court challenging a patent. Receiving a demand letter frivolously asserting infringement is annoying enough. Even more frustrating is being forced to divert resources away from product development in order to defend against a non-practicing entity with bad patents.
Nevertheless, Motiv decided it would not go down without a fight. Motiv retained Rachael Lamkin, who replied with her own letter explaining why Motiv does not infringe, and why MHM’s patent is invalid. Lamkin also says that in the event of litigation, Motiv would seek to join the individuals behind MHM to the lawsuit—and make them personally responsible for “any sanction or fee award.” The letter laid out in painstaking detail many of the numerous deficiencies with MHM’s patent and infringement claim, and refused to pay MHM a cent. The complete set of materials sent to MHM can be found at the end of this post.
We hope that MHM does not push ahead with a business model that preys on the vulnerability of small businesses, and only succeeds when undeserved settlements are paid. Patent holders like this take advantage of inefficiencies in our legal system, despite the extreme weakness of their cases. By publishing Motiv’s response letter and supporting documentation, Motiv and EFF hope that others may benefit and not pay the troll under the bridge.
If you have recently been sued or received a demand letter from MHM, contact email@example.com.
Links to documents and correspondence between Motivational Health Messaging, LLC and Motiv, Inc.
- U.S. Pat. No. 9,069,648
- Motivational Health Messaging's demand letter to Motiv
- Motiv's response letter to Motivational Health Messaging
- Assignment records related to U.S. Pat. 9,069,648
- Corporate records related to U.S. Pat. 9,069,648
- Patent Office File History related to U.S. Pat. 9,069,648
- Patent Office File History for patent application related to U.S. Pat. 9,069,648
- Court records from Shipping & Transit, LLC v. Lensdiscounters.com
- Court records from Shipping & Transit, LLC v. 1A Auto, Inc.
- NIH Paper
- Lee prior art and invalidity chart
- Kaufman prior art and invalidity chart
- Steve prior art and invalidity chart
- Christ prior art and invalidity chart
- Ferguson prior art and invalidity chart
- Chittum prior art and invalidity chart
- Dalebout prior art and invalidity chart
- Hoffman prior art and invalidity chart
The list of companies who exercise their right to ask for judicial review when handed national security letter gag orders from the FBI is growing. Last week, the communications platform Twilio posted two NSLs after the FBI backed down from its gag orders. As Twilio’s accompanying blog post documents, the FBI simply couldn’t or didn’t want to justify its nondisclosure requirements in court. This might be the starkest public example yet of why courts should be involved in reviewing NSL gag orders in all cases.
National security letters are a kind of subpoena that give the FBI the power to require telecommunications and Internet providers to hand over private customer records—including names, addresses, and financial records. The FBI nearly always accompanies these requests with a blanket gag order, shutting up the providers and keeping the practice in the shadows, away from public knowledge or criticism.
Although NSL gag orders severely restrict the providers’ ability to talk about their involvement in government surveillance, the FBI can issue them without court oversight. Under the First Amendment, “prior restraints” like these gag orders are almost never allowed, which is why EFF and our clients CREDO Mobile and Cloudflare have for years been suing to have the NSL statute declared unconstitutional. In response to our suit, Congress included in the 2015 USA FREEDOM Act a process to allow providers to push back against those gag orders.
The new process (referred to as “reciprocal notice”) gives technology companies a right to request judicial review of the gag orders accompanying NSLs. When a company invokes the reciprocal notice process, the government is required to bring the gag order before a judge within 30 days. The judge then reviews the gag order and either approves, modifies, or invalidates it. The company can appear in that proceeding to argue its case, but is not required to do so.
Under the law, reciprocal notice is just an option. It’s no substitute for the full range of First Amendment protections against improper prior restraints, let alone mandatory judicial review of NSL gags in all cases. Nevertheless, EFF encourages all providers to invoke reciprocal notice because it’s the best mechanism available to Internet companies to voice their objections to NSLs. In our 2017 Who Has Your Back report, we awarded gold stars to companies that promised to tell the FBI to go to court for all NSLs, including giants like Apple and Dropbox.
Twilio is the latest company to follow this best practice. It received the two national security letters in May 2017, both of which included nondisclosure requirements preventing Twilio from notifying its users about the government request. And both times, Twilio successfully invoked reciprocal notice, leading the FBI to give permission to publish the letters. This might seem surprising, given that in order to issue a gag, the FBI is supposed to certify that disclosure of the NSL risks serious harm related to an investigation involving national security.
But rather than going to court to back up its certification, the FBI backed down. It retracted one of the NSLs entirely, so that Twilio was not forced to hand over any information at all. For the other, the FBI simply removed the gag order, allowing Twilio to inform its customer and publish the NSL.
This is not what the proper use of a surveillance tool looks like. Instead, it reveals a regime of censorship by attrition. The FBI imposes thousands of NSL gag orders a year, and by default, these gag orders remain in place indefinitely. Only when a company like Twilio objects does the government bear even a minimal burden of showing its work. Without a legal obligation to do so in all cases, the FBI can simply hope most companies don’t speak up.
That’s why it’s so crucial that companies like Twilio take responsibility and invoke reciprocal notice. Better still, Twilio also published a list of best practices that companies can look to when responding to NSLs, including template language to push back on standard nondisclosure requirements. (Automattic, the company behind WordPress, published a similar template last year.)
As the company explained, “The process for receiving and responding to national security letters has become less opaque, but there’s still more room for sunlight.”
We couldn’t agree more. Hopefully, if more companies follow the lead of Apple, Dropbox, Twilio, and the others who received stars in our report, the courts and Congress will see the need for further reform of the law.
If you watched this year’s Super Bowl, you might have seen an advertisement for Dodge Ram featuring a Dr. Martin Luther King, Jr. voiceover. To criticize the ad, and to show how antithetical it was to King’s views, Current Affairs magazine created a new version. The altered version overlays audio from elsewhere in the same speech where King criticizes excessively commercial culture and specifically calls out car ads. Although this is about as clear a fair use as one could imagine, Chrysler responded with a copyright claim.
Fortunately, the takedown did not last long. The Streisand Effect quickly kicked into gear and others reposted the video. A copy on Twitter has collected over one million views. The copyright claim was then withdrawn. We reached out to Chrysler and a spokesperson responded that the video was taken down by YouTube's Content ID system but that it was restored after Chrysler discovered the error. While we are glad that this video was restored, in many less high-profile cases, automated takedowns are never reviewed or challenged.
Many, including the King Center, have commented on how Chrysler came to use a speech that included criticism of car ads in a car ad. Chrysler has defended the ad saying it had permission from King’s estate. King’s estate partnered with EMI in 2009 to create new “revenue streams” for King’s works and image. But where a use is unauthorized, the estate has tended to enforce its rights quite aggressively. It once sued CBS for using a lengthy clip of the “I Have a Dream” speech in a documentary. The estate also exacted an $800,000 payment for “permission” to use King’s words and image on the Martin Luther King Jr. Memorial in Washington. The award-winning movie Selma couldn’t use any of King’s speeches because the rights had been licensed to another studio.
Lengthy copyright terms and post-mortem rights of publicity mean that King’s words and image will be fueling EMI’s revenue streams until approximately 2039. Fortunately, fair use offers a counter-balance for the public interest. This is why we can watch Chrysler’s commercial combined with King’s real feelings about car ads. Fair use won the day this time.
BMG v. Cox: ISPs Can Make Their Own Repeat-Infringer Policies, But the Fourth Circuit Wants A Higher "Body Count"
Last week’s BMG v. Cox decision has gotten a lot of attention for its confusing take on secondary infringement liability, but commentators have been too quick to dismiss the implications for the DMCA safe harbor. Internet service providers are still not copyright police, but the decision will inevitably encourage ISPs to act on dubious infringement complaints, and even to kick more people off of the Internet based on unverified accusations.
This long-running case involves a scheme by copyright troll Rightscorp to turn a profit for shareholders by demanding money from users whose computer IP addresses were associated with copyright infringement. Turning away from the tactic of filing lawsuits against individual ISP subscribers, Rightscorp began sending infringement notices to ISPs, coupled with demands for payment, and insisting that ISPs forward those notices to their customers. In other words, Rightscorp and its clients, including BMG, sought to enlist ISPs to help coerce payments from Internet users, threatening the ISPs themselves with an infringement suit if they don’t join in. Cox, a midsize cable operator and ISP, pushed back and was punished for it.
Before the suit, Cox had quite reasonably decided to stick up for its customers by refusing to forward Rightscorp’s money demands. Going along would have put Cox’s imprimatur on Rightscorp’s vaguely worded threats. The Digital Millennium Copyright Act safe harbors, which protect ISPs and other Internet services from copyright liability, don’t require ISPs who simply transmit data to respond to infringement notices, much less forward them.
Unfortunately, Cox failed to comply with another of the DMCA’s requirements. To receive protection, an ISP must “reasonably implement” a policy for terminating “subscribers and account holders” who are “repeat infringers” in “appropriate circumstances.” Past decisions haven’t defined what “appropriate circumstances” are, but they do make clear that a repeat infringer policy has to be more than mere lip service. Cox’s defense foundered—as many do—on a series of unfortunate emails. As shown in court, Cox employees discussed receiving many infringement notices for the same subscriber, and giving repeated warnings to those subscribers, but never actually terminating them, or terminating them only to reconnect them immediately. The emails painted a picture of a company only pretending to observe the repeat-infringer requirement, while maintaining a real policy of never terminating subscribers. The reason, said the Cox employees to one another, was to eke out a bit more revenue.
Despite the emails, BMG’s case had a weakness: the notices from Rightscorp and others were mere accusations of infringement, their accuracy and veracity far from certain. Nothing in the DMCA requires an ISP to kick customers off the Internet based on mere accusations. What’s more, the “appropriate circumstances” for terminating someone’s entire Internet connection are few and far between, given the Internet’s still-growing importance in daily life. As the Supreme Court wrote last year, “Cyberspace . . . in general” and “social media in particular” are “the most important places (in a spatial sense) for the exchange of views.” Even more than a website or social network, an ISP can and should save termination for the most egregious violations, backed by substantial evidence.
The Court of Appeals for the Fourth Circuit acknowledged this, to a point. The court was “mindful of the need to afford ISPs flexibility in crafting repeat infringer policies, and of the difficulty of determining when it is ‘appropriate’ to terminate a person’s access to the Internet.” The court ruled that Cox had lost its safe harbor, not because its termination policy was too lenient, but because it failed to implement its own policy. “Indeed,” wrote the court, “in carrying out its thirteen-strike process, Cox very clearly determined not to terminate subscribers who in fact repeatedly violated the policy.”
The court also ruled that “repeat infringer” isn’t limited to those who are found liable by a court. But the court stopped short of holding that mere accusations should lead to terminations. The court pointed to “instances in which Cox failed to terminate subscribers whom Cox employees regarded as repeat infringers” after conversations with those subscribers, implying that they, at least, should have been terminated.
The court should have stopped there. Unfortunately, it also pointed to the number of actual suspensions Cox engaged in—less than one per month, compared to thousands of warnings and temporary suspensions—as a factor in denying Cox the safe harbor. That focus on “body counts” ignores the reality that terminating home Internet service is akin to “cutting off someone’s water.” And the court didn’t acknowledge that Cox’s decision to stop accepting Rightscorp’s notices—which included demands for money—protected Cox customers from an exploitative “speculative invoicing” business.
So where does this decision leave ISPs? Certainly, they should not repeat Cox’s mistake by making it clear that their termination policy is an illusion. But nothing in the decision forbids an ISP from standing up for its customers by demanding strong and accurate evidence of infringement, and reserving termination for the most egregious cases—even if that makes actual terminations extremely rare.
The case isn’t over; losing the DMCA safe harbor doesn’t mean that Cox is liable for copyright infringement by its customers. BMG still needs to show that Cox is liable under the contributory, vicarious, or inducement theories that apply to all service providers. The Fourth Circuit ruled that the jury got the wrong instructions, and that contributory liability requires more than a finding that Cox “should have known” about customers’ infringement. Because of that faulty instruction, the appeals court sent the case back for a new trial. The court’s ruling on inducement liability was confusing, as it seemed to conflate “intent” with “knowledge.” It’s important that the courts treat secondary liability doctrines thoughtfully and clearly, as they have a profound effect on how Internet services are designed and what users can do on them. That’s why, while we expect to see more suits like this, we hope that ISPs will continue to stand up for their users as Cox has in defending this one.
The State of Georgia must decide: will it be a hub of technological and online media innovation, or will it be the state that criminalized terms of service violations? Will it support security research that makes us all safer, or will it chill the ability of Georgia’s infosec community to identify vulnerabilities that need to be fixed to protect our private information? This is what’s at stake with Georgia’s S.B. 315, and state lawmakers should stop it dead in its tracks.

As EFF wrote in its letter opposing the bill, this legislation would hand immense power to prosecutors to go after anyone for “checking baseball scores on a work computer, lying about your age or height in your user profile contrary to a website’s policy, or sharing passwords with family members in violation of the service provider’s rules.” The bill also fails to clearly exempt legitimate, independent security research—such as that conducted by Georgia Tech’s renowned cybersecurity department—from the computer crime law.

Georgia already has a robust computer crime statute that covers a wide range of malicious activities online, but S.B. 315 would criminalize simply accessing a computer, app, or website contrary to how the service provider tells you, even if you never cause or intend to cause harm. A violation under S.B. 315 would be classified as “a misdemeanor of a high and aggravated nature,” punishable by up to $5,000 and 12 months in jail.

EFF has long criticized how stretched interpretations of the federal Computer Fraud & Abuse Act have resulted in the prosecution of computer scientists, such as Aaron Swartz. Georgia’s S.B. 315 is even worse in terms of how broadly it may be applied to regular users engaged in benign online behavior.

Fortunately, the digital rights community in Georgia is mobilizing. Electronic Frontiers Georgia, an ally in the Electronic Frontiers Alliance network, is speaking out against S.B. 315.
Andy Green, an infosec lecturer at Kennesaw State University, is also calling for an overhaul of the bill to ensure computer researchers can carry out their work “without fear of arrest and prosecution.” If Georgia lawmakers want to protect their residents from computer crime, it does not help to open them up to prosecution for the tiniest violation of the fine print in a buried terms of service agreement. And if lawmakers want Georgia to remain a welcoming destination for tech talent who can identify and stop breaches, they should spike S.B. 315 immediately. Read EFF's letter to the Georgia legislature by EFF Staff Attorney Jamie Williams.
Federal Appeals Court Misses Opportunity to Rule that Section 230 Bars Claims Against Online Platforms for Hosting Terrorist Content
Although a federal appeals court this week agreed to dismiss a case alleging that Twitter provided material support for terrorists in the form of accounts and direct messaging services, the court left the door open for similar lawsuits to proceed in the future. This is troubling because the threat of liability created by these types of cases may lead platforms to further filter and censor users’ speech.
The decision by the U.S. Court of Appeals for the Ninth Circuit in Fields v. Twitter is good news inasmuch as it ends the case. But the court failed to rule on whether 47 U.S.C. § 230 (known as “Section 230”) applied and barred the plaintiffs’ claims.
That’s disappointing. The Ninth Circuit missed an opportunity to rule that one of the Internet’s most important laws bars these types of cases. Section 230 provides online platforms with broad immunity from liability that flows from user speech. By limiting intermediary liability for user-generated content, Congress sought to incentivize innovation in online products and services and thereby create new avenues for online discourse and engagement. Section 230’s value has taken on increasing importance as the current Congress considers substantially weakening the statute.
The plaintiffs in Fields filed their lawsuit in an attempt to hold Twitter liable for the deaths of two Americans killed in a 2015 attack in Jordan for which ISIS had taken credit. The plaintiffs claimed that by providing accounts and messaging services to ISIS members and sympathizers, Twitter had provided material support to terrorists in violation of U.S. law.
The trial court dismissed the claims, ruling that Section 230 barred them. The court also ruled that the plaintiffs had not shown that Twitter played a direct role in the Jordan attack.
When the plaintiffs appealed, EFF filed a brief in support of Twitter. First, we argued that extending such material support liability to online platforms would threaten Internet users because those platforms would become incentivized to over-censor user content or severely curtail the creation of accounts (or even new products and services) in the first place. Second, we argued that such material support liability would violate online platforms’ First Amendment rights. Finally, we argued that the claims undercut both the letter and spirit of Section 230.
The Ninth Circuit affirmed the trial court’s ruling that the plaintiffs had failed to sufficiently allege that Twitter was the “proximate cause” of the attack. This legal concept requires plaintiffs to link the harm they suffered to the actions of defendants.
The appeals court wrote that although the plaintiffs’ complaint in Fields established “Twitter’s alleged provision of material support to ISIS facilitated the organization’s growth and ability to plan and execute terrorist attacks,” the complaint failed to “articulate any connection between Twitter’s provision of this aid and Plaintiffs-Appellants’ injuries.”
After tossing out the case on the proximate cause issue, the Ninth Circuit deliberately avoided ruling on the question of whether Section 230 barred the lawsuit regardless of the causation issue. This was a missed opportunity because a definitive ruling on Section 230 would have likely shut down a handful of similar suits currently in other federal courts—or possibly being considered by other parties.
Like in Fields, these lawsuits claim that online platforms such as Twitter, Facebook, and YouTube provided material support to terrorists based on the presence of user-generated content advocating for terrorism, and that this content led to the injuries or deaths of the plaintiffs. Although the ruling in Fields should make it difficult for these cases to proceed, it’s possible that some plaintiffs could write their complaints to address the causation issue identified by the Ninth Circuit.
On the other hand, if the appeals court had ruled that Section 230 barred the claims, it would have been a clear indication that these lawsuits are not on sound legal footing—and might have been the end of the line for these types of cases. So, although we’re happy that the plaintiffs did not prevail in this case, we hope that future courts examining this issue will actually rule on Section 230 grounds.
Are you going to a Big Game party on Sunday? Or perhaps going to watch the pro football championship game? Or take in the majestic splendor of the Superb Owl? You can also just call it by its real name: the Super Bowl.
The NFL is infamous for coming down like a ton of bricks on anyone who dares use the actual name for the game in public. And it's also famous for trying to grab control of the names people started using when the NFL’s tactics worked and scared everyone away from saying “Super Bowl.” No matter how hard the NFL tries, it doesn’t own the phrase “The Big Game,” which has been used for longer than there’s been a Super Bowl. But anything that looks like someone making money off of the name will attract the NFL’s attention. In 2007, the NFL put a stop to an Indiana church’s party for a number of reasons, including that the church promoted it as a “Super Bowl bash.”
NFL’s tactics don’t change the fact that you can totally say “Super Bowl.”
The NFL has trademarked the terms “Super Bowl” and “Super Sunday,” but that doesn’t mean it actually controls all rights to the phrase. Instinctually, we all know that can’t be how the law works. We see and use trademarked names for things all the time. Grocery stores advertise special deals on Coca-Cola and we put “Windex” on our grocery lists. Commercials namecheck competitors by name all the time.
It doesn’t even make any internal sense. Companies have trademarks so that they can have something that everyone instantly recognizes, not so that they suddenly become Voldemort and can’t be named out of fear.
Having a trademark means being able to make sure no one can slap the name of your product onto theirs and confuse buyers into thinking they’re getting the real thing. It also means stopping an instance where using the name might make someone think it’s an endorsement or sponsorship. If neither of those things happens, you can call the Super Bowl the Super Bowl. The ability to use something’s trademarked name to identify it—even in a commercial—is called “nominative fair use.” Because the trademark is its name.
Thankfully, the NFL and the Super Bowl are really good at letting us know who has paid astronomical amounts to get the NFL’s endorsement. Ads end with things like “official vehicle sponsor of the NFL” and there’s a whole page of sponsor names on the Super Bowl’s website. There are so many instantly recognizable ways to know who has partnered with the NFL and who hasn’t that no one can think your party is an official, NFL-sponsored get together. No one thought that about the one at the church in 2007.
The reason no one says “Super Bowl” has nothing to do with the law and everything to do with the massive amount of resources the NFL has brought to bear on the issue. Its pockets are very deep, its will is strong, and its desire for control ravenous. But its scare tactics don’t change the fact that you can totally say “Super Bowl.”
If Congress votes this month on legislation to protect Dreamers from deportation, any bill it considers should not include invasive surveillance technologies like biometric screening, social media snooping, automatic license plate readers, and drones. Such high tech spying would unduly intrude on the privacy of immigrants and Americans who live near the border and travel abroad.

How We Got Here
In September 2017, President Trump announced that, effective March 2018, his administration would end the Obama administration’s Deferred Action for Childhood Arrivals (DACA) program, which protects from deportation some 800,000 young adults (often called Dreamers) brought to the United States as children. In January 2018, Senate Majority Leader Mitch McConnell (R-KY) promised to hold a vote in February 2018 on an immigration bill that protects Dreamers. In response to this promise, Democratic Party Senators voted with Republican Party Senators to end last month’s government shutdown. That immigration vote could occur as early as next week, before a short-term federal funding law expires on February 8.
President Trump’s recent framework for immigration legislation calls for unspecified “technology” to secure the border. That framework also calls for border wall funding, more immigration enforcement personnel, faster deportations, new limits on legal immigration, and a path to citizenship for Dreamers.
A bill recently filed by House Judiciary Committee Chair Bob Goodlatte (R-VA) and House Homeland Security Committee Chair Michael McCaul (R-TX) includes a similar blend of immigration policies. This bill (H.R. 4760) may be the vehicle for Sen. McConnell to try to keep his promise of an immigration vote this month.
This year’s Goodlatte-McCaul bill includes many high tech border spying provisions recycled from three bills filed last year: S. 1757, S. 2192, and H.R. 3548. EFF opposed these bills, and now opposes the Goodlatte-McCaul bill.

Biometric Screening at the Border
The Goodlatte-McCaul bill (section 2106) would require the U.S. Department of Homeland Security (DHS) to collect biometric information from people leaving the country, including both U.S. citizens and foreigners. The bill also requires collection of “multiple modes of biometrics.” Further, the new system must be “interoperable” with other systems, meaning together the systems can pool ever-larger sets of biometrics gathered for different purposes by different agencies.
The bill would codify and expand an existing DHS program of facial recognition screening of all travelers, U.S. citizens and foreigners alike, who take certain flights out of the country.
Instead, Congress should simply end this invasive program. Biometric screening is a unique threat to our privacy: it is easy for other people to capture our biometrics, and once this happens, it is hard for us to do anything about it. Once the government collects our biometrics, data thieves might steal it, government employees might misuse it, and policy makers might deploy it to new government programs. Also, facial recognition has significant accuracy problems, especially for people of color.
The Goodlatte-McCaul bill (section 3105) would authorize DHS to snoop on the social media of visa applicants from so-called “high-risk countries.”
This would codify and expand existing DHS and State Department programs of screening the social media of certain visa applicants. EFF opposes these programs. Congress should end them. They threaten the digital privacy and freedom of expression of innocent foreign travelers, and the many U.S. citizens and lawful permanent residents who communicate with them.
The government permanently stores this captured social media information in a record system known as “Alien Files.” The government is now trying to build an artificial intelligence (AI) system to screen this social media information for signs of criminal intent. The government calls this planned system “extreme vetting.” Privacy and immigrant advocates call it a “digital Muslim ban.” Scores of AI experts concluded that this AI system will likely be “inaccurate and biased.”
Moreover, the bill would empower DHS to decide which countries are “high-risk,” based on “any” criteria it deems “appropriate.” DHS may use this broad authority to improperly target social media screening at nations with majority Muslim populations.
Drone Flights Near the Border
The Goodlatte-McCaul bill (sections 1112, 1113, and 1117) would expand drone flights near the border. Unfortunately, the bill does not limit the flight paths of these drones. Nor does it limit the collection, storage, and sharing of sensitive information about the whereabouts and activities of innocent bystanders.
Drones can capture personal information, including faces and license plates, from all of the people on the ground within the range and sightlines of a drone. Drones can do so secretly, thoroughly, inexpensively, and at great distances. Millions of U.S. citizens and immigrants live close to the U.S. border, and deployment of drones at the border will invariably capture personal information from vast numbers of innocent people.
ALPRs Near the Border
The Goodlatte-McCaul bill (section 2104) would require DHS to upgrade its automatic license plate readers (ALPRs) at the border, and would authorize $125 million for that purpose. It is unclear whether this provision applies only to ALPRs at border crossings, or also to ALPRs at interior checkpoints, some of which are located as far as 100 miles from the border.
Millions of U.S. citizens and immigrants who live near the U.S. border routinely drive through these interior checkpoints on their way to work and school, without ever actually crossing the border itself. ALPRs collect highly sensitive location information, and the federal government should not subject these people to ALPR surveillance merely because they live near the border.
For years, EFF has worked to protect immigrants from high tech spying. For example, we support legislation that would bar state and local police agencies from diverting their criminal justice databases to immigration enforcement. Some Dreamers fear a similar form of digital surveillance: that the federal government’s DACA database, created to assist Dreamers, will be diverted to locate and deport them.
New legislation to protect Dreamers from deportation should not come at the price of expanded high tech spying on immigrants and others, including biometric screening, social media monitoring, drones, and ALPRs.