EFF: Updates

EFF's Deeplinks Blog: Noteworthy news from around the internet

Rewriting Intermediary Liability Laws: What EFF Asks – and You Should Too

Mon, 03/15/2021 - 5:26pm

Rewriting the legal pillars of the Internet is a popular sport these days. Frustration at Big Tech, among other things, has led to a flurry of proposals to change long-standing laws, like Section 230, Section 512 of the DMCA, and the E-Commerce Directive, that help shield online intermediaries from potential liability for what their users say or do, or for their content moderation decisions.

If anyone tells you revising these laws will be easy, they are gravely mistaken at best. For decades, Internet users – companies, news organizations, creators of all stripes, political activists, nonprofits, libraries, educators, governments and regular humans looking to connect – have relied on these protections. At the same time, some of the platforms and services that help make that all possible have hosted and amplified a great deal of harmful content and activity. Dealing with the latter without harming the former is an incredibly hard challenge. As a general matter, the best starting point is to ask: “Are intermediary protections the problem? Is my solution going to fix that problem? Can I mitigate the inevitable collateral effects?” The answer to all three should be a firm “Yes.” If so, the idea might be worth pursuing. If not, back to the drawing board.

That’s the short version. Here’s a little more detail about what EFF asks when policymakers come knocking.

What’s it trying to accomplish?

This may seem obvious, but it’s important to understand the goal of the proposal and then match that goal to its likely actual impacts. For example, if the stated goal of the proposal is to “rein in Big Tech,” then you must consider whether the plan might actually impede competition from smaller tech companies. If the stated goal is to prevent harassment, then we want to be sure the proposal won’t discourage platforms from moderating their content to cut down on harassment, and to consider whether the proposal will encourage overbroad censorship of non-harassing speech. In addition, we pay attention to whether the goal is consistent with EFF’s mission: to ensure that technology supports freedom, justice, and innovation for everyone.

Is it constitutional?

Too many policymakers seem to care too little about this detail – they’ll leave it for others to fight it out in the courts. Since EFF is likely to be doing the fighting, we want to plan ahead – and help others do the same. Call us crazy, but we also think voters care about staying within the boundaries set by the Constitution and also care about whether their representatives are wasting time (and public money) on initiatives that won’t survive judicial review.

Is it necessary – meaning, are intermediary protections the problem?


It’s popular right now to blame social media platforms for a host of ills. Sometimes that blame is deserved. And sometimes it’s not. Critics of intermediary liability protections too often forget that the law already affords rights and remedies to victims of harmful speech when it causes injury, and that the problem may stem from the failure to apply or enforce existing laws against users who violate those laws. State criminal penalties apply to both stalking and harassment, and a panoply of civil and criminal statutes address conduct that causes physical harm to an individual. Moreover, if an Internet company discovers that people are using its platforms to distribute child sexual abuse material, it must provide that information to the National Center for Missing and Exploited Children and cooperate with law enforcement investigations. Finally, law enforcement sometimes prefers to keep certain intermediaries active so that they can better investigate and track people who are using the platform to engage in illegal conduct.

Against this backdrop, we ask: are intermediaries at fault here and, if so, are they beyond the reach of existing law?

If law enforcement lacks the resources to follow up on reports of harassment and abuse, or lacks a clear understanding of, or commitment to, enforcing the relevant laws when violations occur in the digital space, that’s a problem that needs fixing, immediately. But the solution probably doesn’t start or end with a person screening content in a cubicle, much less an algorithm attempting to do the same.

In addition to criminal charges, victims can use defamation, false light, intentional infliction of emotional distress, common law privacy, interference with economic advantage, fraud, anti-discrimination laws, and other civil causes of action to seek redress against the original author of the offending speech. They can also sue a platform if the platform owner is itself authoring the illegal content.

As for the platforms themselves, intermediary protections often contain important exceptions. To take just a few examples: Section 512 does not limit liability for service providers’ own infringing activities, and requires them to take action when they have knowledge of infringement by their users. Section 230 does not provide immunity against prosecutions under federal criminal law, or liability based on copyright law or certain sex trafficking laws. For example, backers of SESTA/FOSTA, the last Section 230 “reform,” pointed to Backpage.com as a primary target, but the FBI shut down the site without any help from that law. Nor does Section 230 provide immunity against civil or state criminal liability where the company is responsible, in whole or in part, for the creation or development of information. Nor does Section 230 immunize certain intermediary involvement with advertising, e.g., if a platform requires advertisers to choose ad recipients based on their protected status.

Against this backdrop, we ask: are intermediaries at fault here and, if so, are they beyond the reach of existing law? Will the proposed change help alleviate the problem in a practical way? Might targeting intermediaries impede enforcement under existing laws, such as by making it hard for law enforcement to locate and gather evidence about criminal wrongdoers?

Will it cause collateral damage? If so, can that damage be adequately mitigated?

As a civil liberties organization, one of the main reasons EFF defends limits on intermediary liability is that we know the crucial role intermediaries play in empowering Internet users who rely on those services to communicate. Attempts to change platform behavior by undermining Section 230 or Section 512, for example, may actually harm lawful users who rely on those platforms to connect, organize, and learn. This is a special risk to historically marginalized communities, which often lack a voice in traditional media and often find themselves improperly targeted by content moderation systems. The ultimate beneficiaries of limits on intermediary liability are all of us who want those intermediaries to exist so that we can post things without having to code and host them ourselves, and so that we can read, watch, and re-use content that others create.

Further, we are always mindful that intermediary liability protections are not limited to brand-name “tech companies” of any size. Section 230, by its language, provides immunity to any “provider or user of an interactive computer service” when that “provider or user” republishes content created by someone or something else, protecting both decisions to moderate it and decisions to transmit it without moderation. “User,” in particular, has been interpreted broadly to apply “simply to anyone using an interactive computer service.” This includes anyone who maintains a website that hosts other people’s comments, posts another person’s op-ed to message boards or newsgroups, or forwards email written by someone else. A user can be an individual, a nonprofit organization, a university, a small brick-and-mortar business, or, yes, a “tech company.” And Section 512 protects a wide range of services, from your ISP to Twitter to the Internet Archive to a hobby site like Ravelry.

Against this backdrop, we ask: Who will be affected by the law? How will they respond?

For example, will intermediaries seek to limit their liability by censoring or curtailing lawful speech and activity? Will the proposal require intermediaries to screen or filter content before it is published? Will the proposal directly or indirectly compel intermediaries to remove or block user content, accounts, whole sections of websites, entire features or services? Will intermediaries shut down altogether? Will the cost of compliance become a barrier to entry for new competitors, further entrenching existing gate-keepers? Will the proposal empower a heckler’s veto, where a single notice or flag that an intermediary is being used for illegal purposes results in third-party liability if the intermediary doesn’t take action?

If the answer is yes to any of these, does the proposal include adequate remediation measures? For example (focusing just on competition), if compliance could make it difficult for smaller companies to compete or for alternatives to emerge, does the proposal include mitigation measures? Will those mitigation measures be effective?

What expertise is needed to evaluate this proposal? Do we have it? Can we get it?

One of the many lessons of SESTA/FOSTA was that it’s hard to assess collateral effects if you don’t ask the right people. We asked sex workers and child safety experts what they thought about SESTA/FOSTA. They told us it was dangerous. They were right.

We take the same approach with the proposals coming in now. Do we understand the technological implications? For example, some proposed changes to Section 230 protections in the context of online advertising might effectively force systemic changes that will be both expensive and obsolete in a few years. Some might make a lot of sense and not be too burdensome for some services. Others might simply be difficult to assess without deeper knowledge of how the advertising system works now and will likely work in the future. Some implications for users might not be clear to us, so it’s especially important to seek out potentially affected communities and ensure they have a meaningful opportunity to consult and be heard on the impacts of any proposal. We try to make sure we know what questions to ask, but also to know what we don’t know.

Online Intermediary Reform Is Hard

Many intermediary liability reform proposals are little more than vaporware from policymakers who seem bent on willfully misunderstanding how intermediary protections and even the Internet work. But some are more serious, and deserve consideration and review. The above questions should help guide that process.

Related Cases: Woodhull Freedom Foundation et al. v. United States

Thank You for Speaking Against a Terrible Copyright Proposal

Mon, 03/15/2021 - 3:16pm

Last week was the deadline for comments on the draft of the so-called “Digital Copyright Act,” a proposal which would fundamentally change how creativity functions online. We asked for creators to add their voices to the many groups opposing this draft, and you did it. Ultimately, over 900 of you signed a letter expressing your concern.

The “Digital Copyright Act” was the result of a year of hearings in the U.S. Senate’s Subcommittee on Intellectual Property. Many of the hearings dismissed or marginalized the voices of civil society, Internet users, and Internet creators. Often, it was assumed that the majority of copyrighted work worth protecting is the content made by major media conglomerates or controlled by traditional gatekeepers. We know better.

We know there is a whole new generation of creators whose work is shared online. Some of that work makes fair use of other copyrighted material. Some work is entirely original or based on a work in the public domain. All of it can run afoul of ranking and promotion algorithms, terms of service, and takedowns. The “Digital Copyright Act” would put all of that creativity at risk, entrenching the power of major studios, big tech companies, and major labels.

Along with your signatures and letter, EFF submitted our own comments on the DCA. We urged Congress to set aside the proposal entirely, as many of the policies it contained would cause deep and lasting damage to online speech, creativity, and innovation. Not only do we want this particular draft to be put in the bin where it belongs; we also want to be clear that even watered-down versions of the policies it contains would further tip the balance away from individuals and small creators and toward large, well-resourced corporations.

One of our central concerns remains the call from large corporate rightsholders for Internet services to take down more speech, prevent more from being uploaded, and monitor everything on their services for copyrighted material. The “Digital Copyright Act” proposal does just that, in many places and in many ways. Any one of those provisions would result in a requirement for services to use filters or copyright bots.

Filters alone do not work. They simply cannot do the contextual analysis necessary to determine whether something is copyright infringement. Yet many of them are used this way, resulting in legal speech being blocked or demonetized. As bad as current filter use is, it would be much worse if it became legally mandated. Imagine YouTube’s Content ID being the best-case scenario for uploading video to the Internet.

So we want to thank you for speaking up and letting Congress know this issue is not simply academic. And letting them know this is not simply Big Tech versus Big Content. For our part, EFF will continue keeping an eye out and helping you be heard.

The Foilies 2021

Sun, 03/14/2021 - 11:51am
Recognizing the year's worst in government transparency.

The Foilies were compiled by Electronic Frontier Foundation Director of Investigations Dave Maass, Senior Staff Attorney Aaron Mackey, and Frank Stanton Fellow Naomi Gilens, along with MuckRock News Co-Founder Michael Morisy and Senior Reporter and Projects Editor Beryl Lipton, with further writing and editing by Shawn Musgrave. Illustrations are by EFF Designer Caitlyn Crites.

The day after the 2021 inauguration, Sen. Chris Murphy of Connecticut took to Twitter to declare: "Biden is making transparency cool again." 

This was a head-scratcher for many journalists and transparency advocates. Freedom of Information—the concept that government documents belong to and must be accessible to the people—has never not been cool. Using federal and local public records laws, a single individual can uncover everything from war crimes to health code violations at the local taqueria. How awesome is that? If you need more proof: there was an Australian comic book series called "Southern Squadron: Freedom of Information Act"; the classic anime Evangelion has a Freedom of Information Act cameo; and the Leeds-based post-punk band Mush received a 7.4 rating from Pitchfork for its latest album "Lines Redacted." 

OK, now that we've put that down in writing we realize that the line between "cool" and "nerdy" might be a little blurry. But you know what definitely is not cool? Denying the public's right to know. In fact, it suuucks. 

Since 2015, The Foilies have served as an annual opportunity to name-and-shame the uncoolest government agencies and officials who have stood in the way of public access. We collect the most outrageous and ridiculous stories from around the country from journalists, activists, academics, and everyday folk who have filed public records requests and experienced retaliation, over-redactions, exorbitant fees, and other transparency malpractice. We publish this rogues gallery as a faux awards program during Sunshine Week (March 14-20, 2021), the annual celebration of open government organized by the News Leaders Association. 

This year, the Electronic Frontier Foundation is publishing The Foilies in partnership with MuckRock News, a non-profit dedicated to building a community of cool kids who file Freedom of Information Act (FOIA) and local public records requests. For previous years' dubious winners (many of whom are repeat offenders), check out our archive at www.eff.org/issues/foilies.

And without further ado…

The Most Secretive Dog's Bollocks - Conan the Belgian Malinois

Back in 2019, what should've been a fluff story (or scruff story) about Conan, the Delta Force K9 that was injured while assisting in the raid that took out an Islamic State leader, became yet another instance of the Trump administration tripping over itself with the facts. Was Conan a very good boy or a very good girl? Various White House and federal officials contradicted themselves, and the mystery remained. 

Transparency advocate and journalist Freddy Martinez wouldn't let the sleeping dog lie; he filed a FOIA request with the U.S. Special Operations Command, a.k.a. SOCOM. But rather than release the records, officials claimed they could “neither confirm nor deny the existence or nonexistence of records,” the much dreaded "Glomar response" usually reserved for sensitive national security secrets (the USNS Hughes Glomar Explorer was a secret CIA ship that the agency didn't want to acknowledge existed). Never one to roll over, Martinez filed a lawsuit against SOCOM and the Defense Department in June 2020. 

Just in time for Sunshine Week, Martinez got his records—a single page of a veterinary examination, almost completely redacted except for the dog's name and the single letter "M" for gender. Conan's breed and color were even blacked out, despite the fact that photos of the dog had already been tweeted by Trump. 

The Pharaoh Prize for Deadline Extensions - Chicago Mayor Lori Lightfoot, Illinois

With COVID-19 affecting all levels of government operations, many transparency advocates and journalists were willing to accept some delays in responding to public records requests. However, some government officials were quick to use the pandemic as an excuse to ignore transparency laws altogether. Taking the prize this year is Mayor Lori Lightfoot of Chicago, who invoked the Old Testament in an effort to lobby the Illinois Attorney General to suspend FOIA deadlines.

"I want to ask the average Chicagoan: Would you like them to do their job or would you like them to be pulled off to do FOIA requests?” Lightfoot said in April 2020, according to the Chicago Tribune, implying that epidemiologists and physicians are also the same people processing public records (they're not).

She continued: “I think for those people who are scared to death about this virus, who are worried every single day that it’s going to come to their doorstep, and I’m mindful of the fact that we’re in the Pesach season, the angel of death that we all talk about is the Passover story, that angel of death is right here in our midst every single day." 

We'd just note that transparency is crucial to ensuring that the government's response to COVID is both effective and equitable. And if ancient Egyptians had the power to FOIA the Pharaoh for communications with Moses and Aaron, perhaps they would have avoided all 10 plagues — blood, frogs, and all. 

The Doxxer Prize - Forensic Examiner Colin Fagan 

In July 2020, surveillance researcher and Princeton Ph.D. student Shreyas Gandlur sued the Chicago Police Department to get copies of an electronic guide on police technology regularly received via email by law enforcement officers around the country. The author of the guide, Colin Fagan, a retired cop from Oregon, did not agree that the public has a right to know how cops are being trained, and he decided to make it personal. In a final message to his subscribers announcing he was discontinuing the "Law Enforcement Technology Investigations Resource Guide," Fagan ranted about Gandlur for "attacking the best efforts of Federal, state, and local law enforcement to use effective legal processes to save innocent victims of horrible crimes and hold their perpetrators accountable." 

Fagan included a photo of Gandlur and his email addresses, and urged his readers to recruit crime victims to contact him "and let him know how he could better apply his talents"—one of the most blatant cases of retaliation we've seen in the history of the Foilies. Fagan has since rebounded, turning his email newsletter into a "law enforcement restricted site."

The Redaction Most Likely to Make Your Bubbe Weep - Federal Aviation Administration

When General Atomics proposed flying a new class of drone over the San Diego region to demonstrate its domestic surveillance capabilities, Voice of San Diego reporter Jesse Marx obviously wanted to learn how it possibly could have been approved. So he filed a FOIA request with the Federal Aviation Administration, and ultimately a lawsuit, to liberate documentation. Among the records he received was an email containing a "little vent" from an FAA worker that began with "Oy vey"; virtually everything else, including the employee's four bullet-pointed "genuinely constructive thoughts," was redacted.

The Government Retribution Award – City of Portland, Oregon

People seeking public records all too often have to sue the government to get a response to their records requests. But in an unusual turn-around, when attorney and activist Alan Kessler requested records from the City of Portland related to text messages on government phones, the government retaliated by suing him and demanding that he turn over copies of his own phone messages. Among other things, the City specifically demanded that Kessler hand over all Signal, WhatsApp, email, and text messages having to do with Portland police violence, the Portland police in general, and the Portland protests. 

Runner up: Reporter CJ Ciaramella requested records from the Washington State Department of Corrections about Michael Forest Reinoehl, who was killed by a joint U.S. Marshals task force. The Washington DOC apparently planned to produce the records – but before it could, the Thurston County Sheriff’s Department sued Ciaramella and the agency to stop the records from being disclosed. 

The Most Expensive Cover-Up Award – Small Business Administration 

In the early weeks of the pandemic, the Small Business Administration (SBA) awarded millions of dollars to small businesses through new COVID-related relief programs—but didn’t make the names of recipients public. When major news organizations including ProPublica, the Washington Post, and the New York Times filed public records requests to learn exactly where that money had gone, the SBA dragged its feet, and then—after the news organizations sued—tried to withhold the information under FOIA Exemptions 4 and 6, for confidential and private information. A court rejected both claims, and also forced the government to cough up more than $120,000 in fees to the news organizations’ lawyers.    

The Secret COVID Statistics Award – North Carolina Department of Health and Human Services 

Seeking a better understanding of the toll of COVID-19 in the early days of the pandemic, journalists in North Carolina requested copies of death certificates from local county health departments. Within days, officials from the state Department of Health and Human Services reached out to county offices with guidance not to provide the requested records—without citing any legal justification whatsoever. DHHS did not respond to reporters’ questions about why it issued that guidance or how it was justified. 

Some local agencies followed the guidance and withheld records, some responded speedily, and some turned them over begrudgingly—emphasis on the grudge. 

“I will be making everyone in Iredell County aware through various means available; that you are wanting all these death records with their loved ones private information!” one county official wrote to The News and Observer reporters in an email. “As an elected official, it is relevant the public be aware of how you are trying to bully the county into just giving you info from private citizens because you think you deserve it.”

The It’s So Secret, Even The Bullet Points Are Classified Award - Minnesota Fusion Center

Law enforcement and intelligence agencies are always overzealous in claims that disclosing information will harm national security. But officials with the Minnesota Fusion Center took this paranoia to new heights when they claimed a state law protecting “security information” required them to redact everything—including the bullet points—in documents they provided to journalist Ken Klippenstein. And we quite literally mean the bullets themselves.

Fusion centers are part of a controversial program coordinated by the U.S. Department of Homeland Security to facilitate the flow of homeland security intelligence among agencies. Each fusion center is maintained by a state or regional agency; in this case, the Minnesota Bureau of Criminal Apprehension. Klippenstein tweeted that the agency wouldn’t provide document titles or any other information, all the while adding the dreaded black redaction bars to bulleted lists throughout the records. But if officials redacted the bullet points in earnest, we wonder: what is the security risk if the public learns whether Minnesota homeland security officials use the default bullet points or some more exotic style or font? Will the terrorists win if we know they used Wingdings?

The Cat Face Filter Award - Federal Bureau of Prisons

Kids these days—overlaying cat faces on their videos and showing the BOP how it should redact media sought by FOIA requesters. That was the message from an incredulous federal appeals court in March 2020 after the BOP claimed it lacked the ability to blur out or otherwise redact faces (such as those of prisoners and guards) from surveillance videos sought through FOIA by an inmate who was stabbed with a screwdriver in a prison dining hall.

The court wrote: “The same teenagers who regale each other with screenshots are commonly known to revise those missives by such techniques as inserting cat faces over the visages of humans.” The judge made clear that although “we do not necessarily advocate that specific technique,” the BOP's learned helplessness to redact video footage is completely 😹😹😹

The Juking the FOIA Stats Award - Centers for Disease Control 

The Wire, the classic HBO police drama, laid bare how police departments across the country manipulate data to present trends about crime being down. As ex-detective Roland Pryzbylewski put it: “Juking the stats ... Making robberies into larcenies. Making rapes disappear. You juke the stats, and majors become colonels.”

The Centers for Disease Control seems to love to juke its FOIA stats. As the non-profit advocacy organization American Oversight alleged in a lawsuit last year, the CDC has been systematically rejecting FOIA requests by claiming they are overly broad or burdensome, despite years of court decisions requiring agencies to work in good faith with requesters to try to help them find records or narrow their request. The CDC then categorizes those supposedly overbroad requests as “withdrawn” by the requester and closes the file without having to provide any records. So those FOIAs disappear, much like the violent crime reports in The Wire. 

The CDC’s annual FOIA reports show that the agency’s two-step juke move is a favorite. According to American Oversight, between 2016 and 2019, the CDC closed between 21 and 31 percent of all FOIA requests it received as “withdrawn.” The CDC’s closure rate during that period was roughly three times that of its parent agency, the Department of Health and Human Services, which on average closed only 6 to 10 percent of its FOIAs as withdrawn. After American Oversight sued, the CDC began releasing documents.

The Save the Children (in a Hidden Folder) Award - Louisville Metropolitan Police Department, Kentucky

The Louisville Metropolitan Police Department’s Explorer Scouts program was supposed to give teenagers a chance to learn more about careers in law enforcement. For two LMPD officers, though, it became an opportunity for sexual abuse. When reporters asked for more information on the perpetrators, the city chose to respond with further absurdity — by destroying its records. The case against the city and the Boy Scouts of America is scheduled to begin in April.

The Courier-Journal in Louisville first asked LMPD in mid-2019 for all records regarding the two officers’ sexual abuse of minors. Louisville claimed it didn’t have any; they had been turned over to the FBI. Then the Courier-Journal appealed, and the city eventually determined that — well, what do you know — they’d found a “hidden folder” still containing the responsive records — 738,000 of them, actually. Not for long, though. Less than a month later, they’d all been deleted, despite the ongoing request, a casualty of the city’s automated backup and deletion system, according to Louisville. 

At the end of 2020, the Courier-Journal was still fighting the city’s failure to comply with the Kentucky Open Records Act. 

"I have practiced open records law since the law was enacted 45 years ago, and I have never seen anything so brazen," said Courier-Journal attorney Jon Fleischaker told the paper. "I think it an outrage."

The Eric Cartman Respect My Authoritah Award - Haskell Indian Nations University, Kansas 

When Jared Nally, editor-in-chief of the Indian Leader, the student newspaper at Haskell Indian Nations University in Lawrence, Kansas, started putting questions to his school’s administration and sending records requests to the local police department, he got a lot more than he expected: A directive from his school’s president demanding he cease his requests in the name of the student paper and henceforth treat officials with proper respect, lest he face disciplinary action. 

"Your behavior has discredited you and this university," Haskell Indian Nations University President Ronald Graham wrote."You have compromised your credibility within the community and, more importantly, you have brought yourself, The Indian Leader, Haskell, and me unwarranted attention."

Graham’s aggressive tactics against the college junior quickly rallied support for the student journalist, with the Native American Journalists Association, the Foundation for Individual Rights in Education, and the Student Press Law Center all calling for the formal directive to be rescinded. The school ultimately did back down, but the episode left Nally shocked. “As a student journalist, I’d only been doing it for a year,” he told Poynter in an interview. “When somebody in authority says things like that about you, it really does take a hit. … I’d say I’m recovering from the gaslighting effects, and feeling like what I’m doing really is every bit a part of journalism.”

The Power of the Tweet Award - Pres. Donald J. Trump 

(Tweet capture via BuzzFeed)

Secrecy nerds know that classification authority — the power to essentially mark some documents as secrets exempt from disclosure — resides with and is largely at the discretion of the president, who can then delegate that authority as needed to agency personnel. So one expected upside of a loose-lipped president with an undisciplined social media habit was the ability to use the Tweeter-in-Chief’s posts to pry loose records that would otherwise be inaccessible to FOIA requesters.

Case in point: Trump’s October 6, 2020 tweet: “I have fully authorized the total Declassification of any & all documents pertaining to the single greatest political CRIME in American History, the Russia Hoax. Likewise, the Hillary Clinton Email Scandal. No redactions!”  Hard to argue there’s ambiguity there. But when BuzzFeed News’ Jason Leopold flagged that order in his ongoing lawsuit for the materials, that’s exactly what the Department of Justice did. Based on their investigations, DOJ lawyers told the court, the posts “were not self-executing declassification orders and do not require the declassification of any particular documents.”

The court ultimately bought the argument that you can’t take what the then-president tweets too seriously, but Trump declassified other materials related to the FBI's investigation... on his last day in office.

The 30 Days of Night Award - Hamilton County, Tennessee

It’s hard to imagine a more benign request than asking for copies of other public records requests, but that’s exactly what got Hamilton County officials in Tennessee so spooked they started a mass purge of documents. The shred-a-thon started after Chattanooga Times Free Press reporter Sarah Grace Taylor requested to examine the requests to see if the county’s policies for releasing materials were arbitrary. Originally, the county asked for $717 for about 1,500 pages of records, which Taylor declined to pay in favor of inspecting the records herself.

But as negotiations to view the records commenced, records coordinator Dana Beltramo requested and received permission to shorten the county's retention policy for records requests to just 30 days. After Taylor’s continued reporting on the issue sparked an outcry, the county revised its policy once again and promised to do better. “What we did today was basically try to prevent the confusion of mistakes that have happened from happening again,” said Hamilton County Mayor Jim Coppinger. In other words, it’s all just a big misunderstanding.

The Handcuffs and Prior Restraints Award: Chicago Police Department and City of Chicago, Illinois 

In February 2019, a swarm of Chicago police officers raided the wrong apartment with their guns drawn. They handcuffed the resident, Anjanette Young, who was completely undressed, and they refused to let her put on clothes as she pleaded with them dozens of times that they had the wrong house. Young sued the city in federal court and filed a request for body camera footage of the officers who invaded her home. The local CBS affiliate, CBS 2, also requested the body camera footage. 

The Chicago Police Department denied both requests, despite a binding ruling just months earlier that CPD was required to turn over body camera footage to people like Young who were involved in the recorded events. Young ultimately got the footage as part of her lawsuit, and her attorney provided it to the media. The city’s lawyers then took the extraordinary step of asking the court to order CBS 2 not to air the video, a demand to censor speech before it occurs called a "prior restraint." The judge denied the city’s request. 

The city also sought sanctions against Young’s attorney, but the city withdrew its motion and Chicago Mayor Lori Lightfoot called the request “ill-advised” in a letter to the court. The judge decided not to sanction Young’s attorney. 

The Thin Crust, Wood-Fired Redactions Award - U.S. State Department

Former Secretary of State Mike Pompeo hosted plenty of controversial meals during his three-year tenure. There was the indoor holiday party last December and those bizarre, lavish “Madison Dinners” that cost taxpayers tens of thousands of dollars, including more than $10k for embossed pens alone. And while we know the full menu of Pompeo’s high-class North Korea summit in 2018 in Manhattan—filet mignon with corn purée was the centerpiece—the public may never find out two searing culinary questions about Mikey: What are his pizza toppings of choice, and what’s his go-to sandwich? 

On the pizza angle, the State Department let slip that Pompeo likes it thin and wood-fired, in emails released to NBC correspondent Josh Lederman. But the list of toppings was far too saucy for public consumption, apparently, and redacted on privacy grounds. Same for Pompeo’s sandwich-of-choice, which the State Department redacted from emails released to American Oversight. But we still know “plenty of dry snacks and diet coke” were on offer. 

The Self-Serving Secrecy Award - Niagara County, New York

Money talks. The New York legislature knew this when it passed the Ethics in Government Act in 1987, which required, among other public transparency measures, elected officials in municipalities of 50,000 people or more to complete financial disclosure forms each year. The public should be allowed to see whom our leaders may be particularly keen to hear from. 

Sixty-one of NY’s 62 counties generally accepted that the disclosure forms, created for public use in the first place, were meant to be disclosed, according to the New York Coalition for Open Government. Back in 1996, though, while everyone was presumably distracted watching the Yankees or Independence Day, Niagara County found a quick trick to keep from sharing its officials’ finances: they made it illegal. By local ordinance, the records were made secret, and the county proceeded to reject any requests for access by claiming that releasing the information would be a violation of the law.

This local law prohibiting access was itself, of course, a violation of the law, but Niagara County managed to keep it on the books for more than two decades, and it may have gotten away with it had it not been for the work of the NY Coalition for Open Government. 

In February 2020, the NYCOG, represented by the University at Buffalo School of Law Civil Rights & Transparency Clinic, sued Niagara County, alleging its ordinance was unlawful (because it was). This past fall, a court agreed. Five months later, in January 2021, the county began releasing records, ones that should have been available for the last 30+ years. 

Want more transparency horror stories? Check out The Foilies archives

Seattle and Portland: Say No to Public-Private Surveillance Networks

Fri, 03/12/2021 - 1:56pm

An organization calling itself Safe Cities Northwest is aiming to create public-private surveillance networks in Portland, Oregon and Seattle, Washington. The organization claims that it is building off of a “successful model for public safety” that it built in San Francisco. However, it’s hard to call that model successful when it has been at the center of a civil rights lawsuit against the city of San Francisco, been used to spy on a number of public events, including Black-led protests against police violence and a Pride parade, and is now facing resistance from a neighborhood hoping to prevent the spread of the surveillance program. 

In San Francisco, the organization SF Safe connects semi-private Business Improvement Districts (BIDs) and Community Benefit Districts (CBDs) with the police by funding large-scale camera networks that blanket entire neighborhoods. BIDs and CBDs, also known as special assessment districts, are quasi-government agencies that act with state authority to levy taxes in exchange for supplemental city services. While they are run by non-city organizations, they are funded with public money and carry out public services. 

These camera networks are managed by staff within the neighborhood and streamed to a local control room, but footage can be shared with other entities, including individuals and law enforcement, with little oversight. At least six special assessment districts in San Francisco have installed these camera networks, the largest of which belongs to the Union Square BID. The camera networks now blanket a handful of neighborhoods and cover 135 blocks, according to a recent New York Times report.

In October 2020, EFF and ACLU of Northern California sued San Francisco after emails between the San Francisco Police Department and the Union Square BID revealed that police were granted live access to over 400 cameras and a dump of hours of footage in order to monitor Black Lives Matter protests in June 2020. By gaining access, the SFPD violated San Francisco’s Surveillance Technology Ordinance, which prohibits city agencies like the SFPD from acquiring, borrowing, or using surveillance technology without prior approval from the city’s Board of Supervisors. 

Subsequent reporting by the SF Examiner revealed that June 2020 had not been the first time the SFPD had gotten approval for live access to the camera networks without permission of the Board of Supervisors, and that prior instances included surveillance of a Super Bowl parade and a Pride parade.

Having seen how police have requested live access to these camera networks in order to surveil public events, residents of San Francisco’s Castro neighborhood, the city’s historically LGBTQ+ area, have contested plans to install a camera network of their own. 

Seattle and Portland have both been home to large-scale protest movements, both historically and within the last year. City residents that have already grappled with government spy planes and the Department of Homeland Security throwing people into unmarked vans could soon also be confronted by a semi-private widespread camera network, unregulated and without input from the community. The introduction of these new public-private camera networks further threatens the political activity of everyone from grassroots activists and organizers to casual canvassers and demonstrators by opening them up to more surveillance and potential retribution. 

Make no mistake: businesses, many of which already have security cameras, will join these new camera networks based on the premise they will help fight crime. But once consolidated into a single network, this system of hundreds of cameras will prove too tempting for police to ignore, as occurred in San Francisco. For Portland in particular, which unlike Seattle or San Francisco does not have an ordinance that restricts law enforcement’s use of surveillance technologies, residents would have fewer tools to combat the new threat to First Amendment-protected activities. Seattle residents should pay close attention to ensure their police department seeks city council approval and holds public meetings before gaining access to any BID/CBD camera networks. 

EFF is standing by to help residents and organizations on the ground combat the spread of surveillance networks that act like private entities when they want to avoid regulation, but like public camera networks when they want to help police spy on protests.  

Related Cases: Williams v. San Francisco

Congress Proposes Bold Plan to End the Digital Divide

Thu, 03/11/2021 - 3:05pm

New year, new Congress, but the problems of Internet access remain. If anything, the longer the COVID-19 crisis continues, the stronger the case for fast, affordable Internet for all becomes. And so, an updated version of the Accessible, Affordable Internet for All Act has been introduced. It remains a bold federal program that would tackle broadband access at the same scale and scope the United States once did for water and electricity.

EFF supported the first introduction of this legislation, and we enthusiastically support it today after its updates. Most changes simply reflect COVID-19 provisions that have already been enacted into law, such as the Emergency Broadband Benefit program, which ensures people are not disconnected due to a lack of income caused by the pandemic. But its most noteworthy updates are the preferences for open access and a minimum speed metric of low-latency 100/100 Mbps, which inherently means fiber infrastructure will play a key role. By adopting these standards—along with a massive investment of federal dollars—Congress can reshape the market to be competitive, universally available, and affordable.

It Is Time to End the Digital Divide by Extending Fiber to Everyone

The digital divide isn’t about whether you have access to a certain speed like 25/3 Mbps, the current federal standard, which is effectively useless today as a metric for measuring connectivity. It is about what infrastructure has been invested in your community. Is that infrastructure robust, future-proofed, and competitively priced? If the answer to any of these is no, then you have people who cannot fully utilize the Internet, and they sit on the wrong side of the divide. 

As EFF noted in 2019, the fact that major industry players were slow-rolling or shutting down their fiber-to-the-home deployments, even in major metropolitan areas where there is really no excuse not to wire everyone, was a danger sign. It meant that future-ready access was no longer on track to be universally deployed except through local governments and small private providers, who lack the finances to do it nationally. At the beginning of the pandemic in 2020, as the stay-at-home orders were coming in, we pointed out that the digital divide failures we would see would be most prominent in areas that lack ubiquitous fiber infrastructure.

The pandemic demonstrated what that means in real dollars to government support systems. In areas where fiber was not present, millions of dollars had to be burned to give people temporary mobile hotspots with spotty coverage, whereas communities with fiber got things like free, fast Internet from both public and small private fiber providers. In fact, while the federal government is subsidizing broadband access at as much as $50-$75 a month, Chattanooga’s EPB is able to deliver 100/100 Mbps via fiber at just $3 a month in subsidy cost.

Why Fiber? Because It Is Unequivocally Future-Proofed

We focus on fiber optic infrastructure because it is the universal medium that is unifying all of the 21st century’s communications networks. Low earth orbit satellites, 5G, next-generation WiFi, and direct wireline connections that seek to deliver ever-increasing speeds are all dependent on fiber. Demand for data has never waned; it has consistently grown for decades at an average rate of 21 percent per year, meaning that if your community is not deploying fiber, the one medium that sits decades ahead of the demand curve, you will eventually run into capacity problems. 
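
To make that growth rate concrete, here is a minimal back-of-the-envelope sketch in Python, our own illustration rather than anything from the bill or a cited analysis, of what sustained 21 percent annual growth in data demand implies over the lifetime of network infrastructure:

# Back-of-the-envelope arithmetic (illustrative only): what sustained
# 21 percent annual growth in data demand implies for capacity planning.
import math

ANNUAL_GROWTH = 0.21  # average yearly growth in data demand cited above

# Doubling time: years until demand reaches twice today's level.
doubling_years = math.log(2) / math.log(1 + ANNUAL_GROWTH)

# Cumulative growth over 10- and 20-year infrastructure lifetimes.
ten_year_factor = (1 + ANNUAL_GROWTH) ** 10
twenty_year_factor = (1 + ANNUAL_GROWTH) ** 20

print(f"Demand doubles roughly every {doubling_years:.1f} years")
print(f"After 10 years, demand is about {ten_year_factor:.1f}x today's")
print(f"After 20 years, demand is about {twenty_year_factor:.1f}x today's")

At that rate, demand roughly doubles every three and a half years and grows more than fortyfold over a 20-year infrastructure lifetime, which is why a medium with decades of headroom matters.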

We already see these capacity problems in legacy infrastructure, namely copper and cable, which is getting more expensive to operate yet can deliver only obsolete connection speeds with lots of restrictions. We detailed why this is happening in our technical piece that explains why different broadband networks yield different results in connectivity. But the evidence is clear when increased usage of an essential service is met with upload throttling and data caps instead of simply delivering the service to meet demand. That is why subsidizing or propping up legacy networks will actually be more expensive in the long run than investing in fiber infrastructure.

This is the reality that many Americans are all too familiar with, and it is why we must pass this bill. If we do not, it is a certainty that we will continue to talk about the digital divide in perpetuity. But that is the choice now facing Congress, and you need to make sure your legislator is on board. If we figured out how to get an electrical line to every house, there really is no reason we can’t do the same with a fiber line.

App Stores Have Kicked Out Some Location Data Brokers. Good, Now Kick Them All Out.

Wed, 03/10/2021 - 2:26pm

Last fall, reports revealed the location data broker X-Mode’s ties to several U.S. defense contractors. Shortly after, both Apple and Google banned the X-Mode SDK from their app stores, essentially shutting off X-Mode’s pipeline of location data. In February, Google kicked another location data broker, Predicio, from its stores.

We’ve written about the problems with app-store monopolies: companies shouldn’t have control over what software users can choose to run on their devices. But that doesn’t mean app stores shouldn’t moderate. On the contrary, Apple and Google have a responsibility to make sure the apps they sell, and profit from, do not put their users at risk of harms like unwarranted surveillance. Kicking out two data brokers helps to protect users, but it’s just a first step. 

X-Mode and Predicio have each been the subject of reports over the past year that reveal how U.S. government agencies—including the Department of Defense and ICE—try to work around the Fourth Amendment by buying location data on the private market. In 2018, the Supreme Court handed down Carpenter v. United States, a landmark decision which ruled that location data collected from cell phone towers is protected by the Fourth Amendment. This means law enforcement can’t get your location from your cell carrier without a warrant. 

But dozens of companies are still collecting the same location from a different source—mobile apps—and making it available to law enforcement, defense, intelligence, immigration, and other government agencies. Data brokers entice app developers to install pieces of third-party code, called SDKs, which collect raw GPS data and feed it directly to the brokers. These data brokers then resell the location feeds to advertisers, hedge funds, other data brokers, and governments all around the world.

The apps that source the data run the gamut from prayer apps to weather services. X-Mode collected data from thousands of apps including Muslim Pro, one of the most popular Muslim prayer apps in the U.S. X-Mode allegedly sold that data to several Pentagon contractors. Another broker, Predicio, collected data from hundreds of apps including Fu*** Weather and Salaat First. It then sold data to Gravy Analytics, whose subsidiary Venntel has provided location data to the IRS, CBP, and ICE.

It took many months of investigative journalism by Vice, the Wall Street Journal, Protocol, NRK Beta, and others to piece together the flow of location data from particular apps to the U.S. government. These reporters deserve our gratitude. But it’s not good enough for app stores to wait for specific data brokers to come into the public spotlight before banning them. 

We know brokers continue to mine location data from our apps and sell it to military and law enforcement—we just don’t know which apps. For example, we know that Babel Street sells its secretive Locate X product, which comprises real-time location data about untold numbers of users, to the Department of Homeland Security, the Department of Defense, and the Secret Service. This data reportedly comes from thousands of different mobile apps. 

But figuring out which apps are responsible is difficult. Laws in the U.S. generally do not require companies to disclose exactly where they sell personal data, so it’s easy for data brokers to mask their behavior. Journalists often must rely on technical analysis (which requires expertise and lots of time) and government records requests (which may take years and be heavily redacted) to piece together data flows. When investigators do discover proof of unwanted data sharing, the apps and brokers involved can just change their tactics. Even the app developers involved often don’t know where the data they share will end up. Users can’t make educated choices without knowing where or how their data will be shared. 

Google Play and the Apple App Store shouldn’t wait on journalists to establish end-to-end data flows before taking steps to protect users.

The ecosystem of phone app location data should be better regulated. Local CCOPS (community control of police surveillance) laws can ban police and other local government agencies from acquiring surveillance tech, including data broker deals, without legislative permission and community input. We support these laws, but most cities do not have them. Also, they do not address the problem of federal agencies buying our location data on the open market. We will continue pushing for legislation and judicial decisions that, as required by the Fourth Amendment, prevent the government at all levels from buying this kind of data without first getting a warrant. But in the meantime, many government agencies will continue buying location data as long as they believe they can.

App stores are in a unique position to protect tech users from app-powered surveillance. We applaud Apple and Google for taking action against X-Mode and Predicio. But this is only the tip of the iceberg. Now the app stores should take the next step: ban SDKs from any data brokers that collect and sell our location information.

There is no good reason for apps to collect and sell location data, especially when users have no way of knowing how that data will be used. We implore Apple and Google to end this seedy industry, and make it clear that location data brokers are not welcome on their app stores.

EFF to Supreme Court: Users Must Be Able to Hold Tech Companies Accountable in Lawsuits When Their Data is Mishandled

Wed, 03/10/2021 - 1:15pm
Facebook, Google, and Others Want To Make It Harder For Users To Sue

Washington, D.C.—The Electronic Frontier Foundation (EFF) today urged the Supreme Court to rule that consumers can take big tech companies like Facebook and Google to court, including in class action lawsuits, to hold them accountable for privacy and other user data-related violations, regardless of whether they can show they suffered identical harms.

Standing up to defend the ability of tech users to hold powerful companies responsible for protecting the massive amounts of personal data they capture and store every day, EFF and co-counsel Hausfeld LLP told the high court that—contrary to the companies’ claims—Congress rightfully ensured that users could sue when those companies mishandle sensitive, private information about them.

EFF filed a brief today with the Supreme Court in a case called TransUnion v. Ramirez. In the case, TransUnion misidentified plaintiff Sergio Ramirez and over 8,000 other people as included on the U.S. terrorist list. As a result, Ramirez was flagged as a terrorist when he tried to purchase a car. TransUnion is fighting to keep the other consumers it tagged as terrorists from suing it as a group with Ramirez. The company argues that they don’t have standing to sue under the law and shouldn’t be part of the “class” of plaintiffs in the lawsuit because they weren’t harmed in the same way as Ramirez.

Facebook, Google, and tech industry trade groups are siding with TransUnion. They filed a legal brief pushing for even more limitations on users and others impacted by a wide range of privacy and data integrity violations. The companies argue that users whose biometric information is misused, or who are improperly tracked or wiretapped, should also be denied the opportunity to sue if they did not lose money or property. Even those who can sue must all have been harmed in the exact same way to file a class action case, the companies argue.

“Facebook and the other tech giants gather and use immense quantities of our personal data each day, but don’t want to be held accountable by their users in court when they go back on their privacy promises, or unlawfully mishandle user data,” said EFF Executive Director Cindy Cohn. “This logic—that the courthouse door should remain closed unless their users suffer financial or personal injuries even when the companies flagrantly violate the law—is cynical and wrong. Intangible harms have long been recognized as harms under the law, and Congress and the states must be allowed to pass laws that protect us. In today’s digital economy, all of us depend on these companies to ensure that the data they have about us is accurate and safeguarded. When it’s mishandled, we should be able to take those companies to court.”

Class action rules require people suing as a group to have the same claims based upon the same basic facts, not the exact same injuries, EFF told the Supreme Court. Facebook and other tech companies are asking the court to change that, which will make it harder for users to hold them accountable and utilize class action lawsuits to do so.

“When users lose control of their data, or the correctness of their data is compromised, those are serious harms in and of themselves, and they put users at tremendous risk,” said EFF Senior Staff Attorney Adam Schwartz. “Companies that gather and use vast amounts of users’ personal, private information are trying to raise the bar on their own accountability when they fail to protect people’s data. We are telling the Supreme Court: don’t let them.”

For the brief:
https://www.eff.org/document/transunion-amicus-brief

For more on users’ right to sue:
https://www.eff.org/deeplinks/2019/01/you-should-have-right-sue-companies-violate-your-privacy

Contact: Cindy Cohn, Executive Director, cindy@eff.org; Adam Schwartz, Senior Staff Attorney, adam@eff.org

Internet Advocates Call on ISPs to Commit to Basic User Privacy Protections

Wed, 03/10/2021 - 9:04am

This blog post was co-written by EFF, the Internet Society, and Mozilla.

As people have learned more about how companies like Google and Facebook track them online, they are increasingly taking steps to protect themselves. But there is one relatively unknown way that companies and bad actors can still collect troves of data.

Internet Service Providers (ISPs) like Comcast, Verizon, and AT&T are your gateway to the Internet. These companies have complete, unfettered, and unregulated access to a constant stream of your browsing history, which they can use to build a profile and then sell or otherwise exploit without your consent.
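To make that concrete, here is a minimal sketch, assuming invented subscriber IDs and domains rather than any ISP's real log format, of how the DNS queries a resolver sees can be rolled up into a per-subscriber browsing profile:

```python
from collections import Counter, defaultdict

# Invented DNS query log entries of the kind an ISP resolver records:
# (subscriber, domain looked up). Real logs also carry timestamps,
# client IPs, and more.
queries = [
    ("subscriber-17", "news.example.com"),
    ("subscriber-17", "jobsearch.example.org"),
    ("subscriber-17", "clinic.example.net"),
    ("subscriber-17", "clinic.example.net"),
    ("subscriber-42", "streaming.example.com"),
]

# Aggregating the raw log into per-subscriber profiles takes only a few lines.
profiles = defaultdict(Counter)
for subscriber, domain in queries:
    profiles[subscriber][domain] += 1

# The most-queried domains sketch out interests, health concerns, job hunting, etc.
print(profiles["subscriber-17"].most_common(3))
```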

Last year, Comcast committed to a broad range of DNS privacy standards. Companies like Verizon, AT&T, and T-Mobile – which together hold a major share of the U.S. mobile broadband market – haven’t committed to even basic protections like not tracking website traffic, deleting DNS logs, or refusing to sell users’ information. What's more, these companies have a history of abusing customer data: AT&T (along with Sprint and T-Mobile) sold customer location data to bounty hunters, and Verizon injected tracking headers that bypassed user control.

Every single ISP should have a responsibility to protect the privacy of its users – and as mobile internet access continues to grow, that responsibility rests even more squarely on the shoulders of mobile ISPs. As our partner Consumer Reports notes, even opting in to secondary uses of data can be convoluted for consumers. Companies shouldn’t be able to just bury consent within their terms of service or use a dark pattern to get people to click “OK” and still claim they are acting with users’ explicit consent.

Nearly every single website you visit transmits your data to dozens or even hundreds of companies. This pervasive and intrusive personal surveillance has become the norm, and it won’t cease without action from us.

In that vein, Mozilla, the Internet Society, and the Electronic Frontier Foundation are individually and collectively taking steps to protect consumers' right to data privacy. A key element of that is an effective baseline federal privacy law that curbs data abuses by ISPs and other third parties and gives consumers meaningful control over how their personal data is used.

But effective regulatory action could be years away, and that’s why we need to proactively hold the ISPs accountable today. Laws and technical solutions can go a long way, but we also need better behavior from those who collect our sensitive DNS data. 

Today we are publishing an open letter calling on AT&T, T-Mobile, and Verizon to publish a privacy notice for their DNS service that commits to deleting the data within 24 hours and to only using the data for providing the service. It is our hope that they heed the call, and that other ISPs take note as well. Click here to see the full letter.

EFF to Supreme Court: States Face High Burden to Justify Forcing Groups to Turn Over Donor Names

Tue, 03/09/2021 - 3:02pm

Throughout our nation’s history—most potently since the era of civil rights activism—those participating in social movements challenging the status quo have enjoyed First Amendment protections to freely associate with others in advocating for causes they believe in. This right is directly tied to our ability to maintain privacy over what organizations we choose to join or support financially. Forcing organizations to hand membership or donor lists to the state threatens First Amendment activities and suppresses dissent, as those named, facing harassment or worse, have to decide between staying safe or speaking out.

In a California case over donor disclosures, we’ve urged the Supreme Court to apply this important principle to ensure that the bar for public officials seeking information about people’s political and civic activities is sufficiently high. In an amicus brief filed last week, EFF, along with four other free speech advocacy groups, asked the court to compel the California Attorney General to better justify the requirement that nonprofits turn over the names of their major donors to the state.

The U.S. Court of Appeals for the Ninth Circuit in 2018 upheld California’s charitable donation reporting requirement, under which nonprofits must give state officials the names and addresses of their largest donors. The court, ruling in Americans For Prosperity Foundation v. Becerra, rejected arguments that the requirement infringes on donors’ First Amendment right to freely associate with others, and said the plaintiffs hadn’t shown specific evidence to back up claims that donors would be threatened or harassed if their names were disclosed.

The decision goes against years of Supreme Court precedent requiring the government, whether or not there’s direct evidence of harassment, to show it has a compelling interest justifying donor disclosure requirements that can divulge people’s political activities. Joined by the Freedom to Read Foundation, the National Coalition Against Censorship, the People United for Privacy Foundation, and Woodhull Freedom Foundation, we urged the Supreme Court to overturn the Ninth Circuit decision and rule that “exacting scrutiny” applies to any donor disclosure mandate by the government. By that we mean the government must show its interest is sufficiently important and the requirement carefully crafted to infringe as little as possible on donors’ First Amendment rights.

Even where there’s no specific evidence that donors are being harassed or groups can’t attract funders, the court has found, states wishing to intrude on Americans’ right to keep their political associations private must always demonstrate a compelling state interest in obtaining the information.

This principle was at the center of the Supreme Court’s unanimous landmark 1958 decision blocking Alabama from forcing the NAACP to turn over names and addresses of its members. The court never questioned the NAACP’s concerns about harassment and retaliation, let alone suggested that the organization had the burden of making some threshold showing confirming the nature or specificity of its concerns. The Ninth Circuit said California’s disclosure requirement posed minimal First Amendment harms because the Attorney General must keep the donor names confidential. It faulted the plaintiffs for not producing evidence that donors would be harassed if their names were revealed, and for not identifying donors whose willingness to contribute hinged on whether their identities would be disclosed by the Attorney General.

The court is wrong on both counts.

First, pledging to keep the names confidential doesn’t eliminate the requirement’s speech-chilling effects, we said in our brief. Groups that challenge or oppose state policies have legitimate fears that members and donors, or their businesses, could become targets of harassment or retaliation by the government itself. It’s easy to imagine that a Black Lives Matter organization, or an organization assisting undocumented immigrants at the border, would have justifiable concerns about turning over their donor or membership information to the government, regardless of whether the government shares that information with anyone else. If allowed to stand, the Ninth Circuit’s decision gives the government unchecked power to collect information on people’s political associations.

Second, the burden is on the government to show it has a compelling interest connected to the required information before forcing disclosures that could put people in harm’s way. As we stated in our brief: “Speaking out on contentious issues creates a very real risk of harassment and intimidation by private citizens and critically by the government itself. Furthermore, numerous contemporary issues—ranging from the Black Lives Matter movement, to gender identity, to immigration—arouse significant passion by people with many divergent beliefs. Thus, now, as much as any time in our nation’s history, it is necessary for individuals to be able to express and promote their viewpoints through associational affiliations without personally exposing themselves to a political firestorm or even governmental retaliation.”

The precedent established by this case will affect the associational rights of civil rights and civil liberties groups across the country. We urge the Supreme Court to affirm meaningful protections that nonprofits and their members and contributors need from government efforts to make them hand over donor or member lists.

Scholars Under Surveillance: How Campus Police Use High Tech to Spy on Students

Tue, 03/09/2021 - 11:58am

Hailey Rodis, a student at the University of Nevada, Reno Reynolds School of Journalism, was the primary researcher on this report. We extend our gratitude to the dozens of other UNR students and volunteers who contributed data on campus police to the Atlas of Surveillance project. The report will be updated periodically with responses from university officials. These updates will be noted in the text. 

It may be many months before college campuses across the U.S. fully reopen, but when they do, many students will be returning to a learning environment that is under near constant scrutiny by law enforcement. 

A fear of school shootings and other campus crimes has led administrators and campus police to install sophisticated surveillance systems that go far beyond run-of-the-mill security camera networks to include drones, gunshot detection sensors, and much more. Campuses have also adopted automated license plate readers, ostensibly to enforce parking rules, but that data often feeds into the criminal justice system. Some campuses use advanced biometric software to verify whether students are eligible to eat in the cafeteria. Police have even adopted new technologies to investigate activism on campus. Often, there is little or no justification for why a school needs such technology, other than novelty or asserted convenience. 

In July 2020, the Electronic Frontier Foundation and the Reynolds School of Journalism at University of Nevada, Reno launched the Atlas of Surveillance, a database of now more than 7,000 surveillance technologies deployed by law enforcement agencies across the United States. In the process of compiling this data we noticed a peculiar trend: college campuses are acquiring a surprising number of surveillance technologies more common to metropolitan areas that experience high levels of violent crime. 

So, we began collecting data from universities and community colleges using a variety of methods, including running specific search terms across .edu domains and assigning small research tasks to a large number of students using EFF's Report Back tool. We documented more than 250 technology purchases, ranging from body-worn cameras to face recognition, adopted by more than 200 universities in 37 states. As big as these numbers are, they are only a sliver of what is happening on college campuses around the world.


Technologies

Download the U.S. Campus Police Surveillance dataset as a CSV.

Body-worn cameras

Maybe your school has a film department, but the most prolific cinematographers on your college campus are probably the police. 

Since the early 2010s, body-worn cameras (BWCs) have become more and more common in the United States. This holds true for law enforcement agencies on university and college campuses. These cameras are attached to officers’ uniforms (often the chest or shoulder, but sometimes head-mounted) and capture interactions between police and members of the public. While BWC programs are often pitched as an accountability measure to reduce police brutality, in practice these cameras are more often used to capture evidence later used in prosecutions. 

Policies on these cameras vary from campus to campus—such as whether a camera should always be recording, or only under certain circumstances. But students and faculty should be aware that any interaction, or even near-interaction, with a police officer could be on camera. That footage could be used in a criminal case, but in many states, journalists and members of the public are also able to obtain BWC footage through an open records request. 

Aside from your run-of-the-mill, closed-circuit surveillance camera networks, BWCs were the most prevalent technology we identified in use by campus police departments. This isn't surprising, since researchers have observed similar trends in municipal law enforcement. We documented 152 campus police departments using BWCs, but as noted, this is only a fraction of what is being used throughout the country. One of the largest rollouts began last summer when Pennsylvania State University announced that police on all 22 campuses would start wearing the devices. 

One of the main ways that universities have purchased BWCs is through funding from the U.S. Department of Justice's Bureau of Justice Assistance. Since 2015, more than 20 universities and community colleges have received funds through the bureau's Body-Worn Camera Grant Program established during the Obama administration. In Oregon, these funds helped the Portland State University Police Department adopt the technology well ahead of their municipal counterparts. PSU police received $20,000 in 2015 for BWCs, while the Portland Police Department does not use BWCs at all (Portland PD's latest attempt to acquire them in 2021 was scuttled due to budget concerns). 

Drones


Drones, also known as unmanned aerial vehicles (UAVs), are remote-controlled flying devices that can be used to surveil crowds from above or locations that would otherwise be difficult or dangerous to observe by a human on the ground. On many campuses, drones are purchased for research purposes, and it's not unusual to see a quadrotor (a drone with four propellers) buzzing around the quad. However, campus police have also purchased drones for surveillance and criminal investigations. 

Our data, which was based on a study conducted by the Center for the Study of The Drone at Bard College, identified 10 campus police departments that have drones: 

  • California State University, Monterey Bay Police Department
  • Colorado State University Police Department
  • Cuyahoga Community College Police Department
  • Lehigh University Police Department
  • New Mexico State University Police Department
  • Northwest Florida State College Campus Police Department
  • Pennsylvania State University Police Department
  • University of Alabama, Huntsville Police Department
  • University of Arkansas, Fort Smith Police Department
  • University of North Dakota Police Department

One of the earliest campus drone programs originated at the University of North Dakota, where the campus police began deploying a drone in 2012 as part of a regional UAV unit that also included members of local police and sheriffs' offices. According to UnmannedAerial.com, the unit moved from a "reactive" to a "proactive" approach in 2018, allowing officers to carry drones with them on patrol, rather than retrieving them in response to specific incidents. 

The Northwest Florida State College Police Department was notable for acquiring the most drones. While most schools had one, NWFSC police began using four drones in 2019, primarily to aid in searching for missing people, assessing traffic accidents, photographing crime scenes, and mapping evacuation routes. 

The New Mexico State University Police Department launched its drone program in 2017 and, with the help of a local Eagle Scout in Las Cruces, built a drone training facility for local law enforcement in the region. In response to a local resident who said on Facebook that the program was unnerving, an NMSU spokesperson wrote in 2019: 

[The program] thus far has been used to investigate serious traffic crashes (you can really see the skid marks from above), search for people in remote areas, and monitor traffic conditions at large events. They aren't very useful for monitoring campus residents (even if we wanted to, which we don't), since so many stay inside.

Not all agencies have taken such a limited approach. The Lehigh University Police Department acquired a drone in 2015 and equipped it with a thermal imaging camera. Police Chief Edward Shupp told a student journalist at The Brown and White that the only limits on the drone are Federal Aviation Administration regulations, that there are no privacy regulations for officers to follow, and that the department can use the drones "for any purpose" on and off campus. 

Even when a university police department does not have its own drones, it may seek help from other local law enforcement agencies. Such was the case in 2017, when the University of California Berkeley Police Department requested drone assistance from the Alameda County Sheriff's Office to surveil protests on campus. 

Automated License Plate Readers

Students and faculty may complain about the price tag of parking passes, but there is also an unseen cost of driving on campus: privacy.

Automated license plate readers (ALPRs) are cameras attached to fixed locations or to security or parking patrol cars that capture every license plate that passes. The data is then uploaded to searchable databases with the time, date, and GPS coordinates. Through our research, we identified ALPRs at 49 universities and colleges throughout the country.
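A small, hypothetical sketch shows why those records matter: once scans are keyed by plate number, a single query yields a location history. Every plate, timestamp, and coordinate below is invented.

```python
from datetime import datetime

# Invented ALPR records: (plate, scan time, GPS fix). A real system also
# stores the camera ID and photos of the plate and vehicle.
scans = [
    ("ABC1234", datetime(2021, 3, 1, 8, 5),   (39.390, -76.610)),
    ("XYZ9876", datetime(2021, 3, 1, 8, 7),   (39.390, -76.610)),
    ("ABC1234", datetime(2021, 3, 1, 17, 40), (39.394, -76.620)),
    ("ABC1234", datetime(2021, 3, 2, 21, 15), (39.401, -76.601)),
]

def location_history(plate):
    """One query by plate turns routine parking scans into a movement log."""
    return [(when, where) for p, when, where in scans if p == plate]

for when, where in location_history("ABC1234"):
    print(when, where)
```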

ALPRs are used in two main capacities on college campuses. First, transportation and parking divisions have begun using ALPRs for parking enforcement, either attaching the cameras to parking enforcement vehicles or installing cameras at the entrances and exits to parking lots and garages. For example, the University of Connecticut Parking Services uses NuPark, a system that uses ALPRs to manage virtual permits and citations.

Second, campus police are using ALPRs for public safety purposes. The Towson University Police Department in Maryland, for example, scanned over 3 million license plates using automated license plate readers in 2018 and sent that data to the Maryland Coordination and Analysis Center, a fusion center operated by the Maryland State Police. The University has a total of 6 fixed ALPR sites, with 10 cameras and one mobile unit.

These two uses are not always separate: in some cases, parking officials share data with their police counterparts. At Florida Atlantic University, ALPRs are used for parking enforcement, but the police department also has access to this technology through their Communications Center, which monitors all emergency calls to the department, as well as fire alarms, intrusion alarms, and panic alarm systems. In California, the San Jose/Evergreen Community College District Police Department shared* ALPR data with its regional fusion center, the Northern California Regional Intelligence Center. 

March 10, 2021 Update: A spokesperson from San Jose/Evegreen Community College emailed this information: "While it is true that SJECCD did previously purchase two LPR devices, we never licensed the software that would allow data to be collected and shared, so no data from SJECCD’s LPR devices was ever shared with the Northern California Regional Intelligence Center. Further, the MOU that was signed with NCRIC expired in 2018 and was not renewed, so there is no existing MOU between SJECCD and the agency." We have updated the piece to indicate that the ALPR data sharing occurred in the past. 

Social Media Monitoring

Colleges and universities are also watching their students on social media, and not just to retweet or like a cute Instagram post about your summer internship. Campus public safety divisions employ social media monitoring software, such as Social Sentinel, to look for possible threats to the university, such as posts in which students indicate suicidal ideation or threaten gun violence. We identified 21 colleges that use social media monitoring to watch their students and surrounding communities for threats. This does not include higher education programs that monitor social media for marketing purposes.

This technology is used for public safety by both private and public universities. The Massachusetts Institute of Technology has used Social Sentinel since 2015, while the Des Moines Area Community College Campus Security spent $15,000 on Social Sentinel software in 2020. 

Social media monitoring technology may also be used to monitor students' political activities. Social Sentinel software was used to watch activists on the University of North Carolina campus who were protesting a Confederate memorial on campus, Silent Sam. As NBC reported, UNC Police and the North Carolina State Bureau of Investigation used a technique called "geofencing" to monitor the social media of people in the vicinity of the protests.

"This information was monitored in an attempt to prevent any potential acts of violence (such as those that have occurred at other public protests around the country, including Charlottesville) and to ensure the safety of all participants," a law enforcement spokesperson told NBC, adding that investigators only looked at public-facing posts and no records of the posts were kept after the event. However, the spokesperson declined to elaborate on how the technology may have been used at other public events. 

Biometric Identification


When we say that a student body is under surveillance, we also mean that literally. The term “biometrics” refers to physical and behavioral characteristics (your body and what you do with it) that can be used to identify you. Fingerprints are among the types of biometrics most familiar to people, but police agencies around the country are adopting computer systems capable of identifying people using face recognition and other sophisticated biometrics. 

At least four police departments at universities in Florida–University of South Florida, University of North Florida, University of Central Florida, and Florida Atlantic University–have access to a statewide face recognition network called Face Analysis Comparison and Examination System (FACES), which is operated by the Pinellas County Sheriff's Office. Through FACES, investigators can upload an image and search a database of Florida driver’s license photos and mugshots.  

University of Southern California in Los Angeles confirmed to The Fix that its public safety department uses face recognition; however, the practice was more prevalent in the San Diego, California area until recently.  

In San Diego, at least five universities and college campuses participated in a face recognition program involving mobile devices. San Diego State University stood out for having conducted more than 180 face recognition searches in 2018. However, in 2019, this practice was suspended in California under a three-year statewide moratorium. 

Faces aren't the only biometric being scanned. In 2017, the University of Georgia introduced iris scanning stations in dining halls, encouraging students to check in with their eyes to use their meal plans. This replaced an earlier program requiring hand scans, another form of biometric identification.

Gunshot Detection

Gunshot detection is a technology that involves installing acoustic sensors (essentially microphones) around a neighborhood or building. When a loud noise goes off, such as a gunshot or a firework, the sensors attempt to determine the location and then police receive an alert. 

Universities and colleges have begun using this technology in part as a response to fears of campus shootings. However, these technologies often are not as accurate as their sellers claim and could result in dangerous confrontations based on errors. Also, these devices can capture human voices engaged in private conversations, and prosecutors have attempted to use such recordings in court. 

Our dataset identifies eight universities and colleges that use gunshot detection technology:

  • East Carolina University Police Department
  • Hampton University Police Department
  • Truett McConnell University Campus Safety Department
  • University of California San Diego Police Department
  • University of Connecticut Police Department
  • University of Maryland Police Department
  • University of West Georgia Police Department
  • Georgia Tech Police Department

Some universities and colleges purchase their own gunshot detection technology, while others have access to the software through partnerships with other law enforcement agencies. For example, the Georgia Tech Police Department has access to gunshot detection through the Fūsus Real-Time Crime Center. The University of California San Diego Police Department, on the other hand, installed its own ShotSpotter gunshot detection technology on campus in 2017. 

When a university funds surveillance technology, it can impact the communities nearby. For example, University of Nevada, Reno journalism student Henry Stone obtained documents through Nevada's public records law showing that UNR Cooperative Extension spent $500,000 in 2017 to install and operate ShotSpotter sensors covering a three-mile area of an impoverished Las Vegas neighborhood. The system is controlled by the Las Vegas Metropolitan Police Department.

Video Analytics

While most college campuses employ some sort of camera network, we identified two universities that are applying for extra credit in surveilling students: the University of Miami Police Department in Florida and the Grand Valley State University Department of Public Safety in Michigan. These universities apply advanced software—sometimes called video analytics or computer vision—to their camera footage, using algorithms to achieve the kind of round-the-clock monitoring that a team of officers watching camera feeds never could. Often employing artificial intelligence, video analytics systems can track objects and people from camera to camera, identify patterns and anomalies, and potentially conduct face recognition. 

Grand Valley State University began using Avigilon video analytics technology in 2018. The University of Miami Police Department uses video analytics software combined with more than 1,300 cameras.

Three university police departments in Maryland also maintain lists of cameras owned by local residents and businesses. With these camera registries, private parties are asked to voluntarily provide information about the location of their security cameras, so that police can access or request footage during investigations. The University of Maryland, Baltimore Police Department, the University of Maryland, College Park Police Department and the Johns Hopkins University Campus Police are all listed on Motorola Solutions' CityProtect site as maintaining such camera registries. 

Two San Francisco schools—UC Hastings College of the Law and UC San Francisco—explored leasing Knightscope surveillance robots in 2019 and 2020 to patrol their campuses, though the plans appear to have been scuttled by COVID-19. The robots are equipped with cameras, artificial intelligence, and, depending on the model, the ability to capture license plate data, conduct face recognition, or detect nearby phones. 

Conclusion 

Universities in the United States pride themselves on the free exchange of ideas and the ability for students to explore different concepts and social movements over the course of their academic careers. Unfortunately, for decades upon decades, police and intelligence agencies have also spied on students and professors engaged in social movements. High-tech surveillance only exacerbates the threat to academic freedom.

Around the country, cities are pushing back against surveillance by passing local ordinances requiring a public process and governing body approval before a police agency can acquire a new surveillance technology. Many community colleges do have elected bodies, and we urge these policymakers to enact similar policies to ensure adequate oversight of police surveillance. 

However, these kinds of policy-making opportunities often aren't available to students (or faculty) at state and private universities, whose leadership is appointed, not elected. We urge student and faculty associations to press their police departments to limit the types of data collected on students and to ensure a rigorous oversight process that allows students, faculty, and other staff to weigh in before decisions are made to adopt technologies that can harm their rights.

EFF, ACLU and EPIC File Amicus Brief Challenging Warrantless Cell Phone Search, Retention, and Subsequent Search

Mon, 03/08/2021 - 5:27pm

Last week, EFF—along with the ACLU and EPIC—filed an amicus brief in the Wisconsin Supreme Court challenging a series of warrantless digital searches and seizures by state law enforcement officers: the search of a person’s entire cell phone, the retention of a copy of the data on the phone, and the subsequent search of the copy by a different law enforcement agency. Given the vast quantity of private information on an ordinary cell phone, the police’s actions in this case, State v. Burch, pose a serious threat to digital privacy, violating the Fourth Amendment’s core protection against “giving police officers unbridled discretion to rummage at will among a person’s private effects.”

The Facts

In June 2016, the Green Bay Police Department was investigating a hit-and-run accident and vehicle fire. Since Burch had previously driven the vehicle at issue, the police questioned him. Burch provided an alibi involving text messages with a friend who lived near the location of the incident. To corroborate his account, Burch agreed to an officer’s request to look at those text messages on his cell phone. But, despite initially only asking for the text messages, the police used a sophisticated mobile device forensic tool to copy the contents of the entire phone. Then about a week later, after reviewing the cell phone data, a Green Bay Police officer wrote a report that ruled Burch out as a suspect, finding that there was “no information to prove [Burch] was the one driving the [vehicle] during the [hit-and-run] accident.”

But that’s not where things end. Also in the summer of 2016, a separate Wisconsin police agency, the Brown County Sheriff’s Office, was investigating a homicide. And in August, Burch became a suspect in that case. In the course of that investigation, the Brown County Sheriff's Office learned that the Green Bay Police Department had kept the download of Burch’s cell phone and obtained a copy of it. The Brown County Sheriff’s Office then used information on the phone to charge Burch with the murder. 

Burch was ultimately convicted but argued that the evidence from his cell phone should have been suppressed on Fourth Amendment grounds. Last fall, a Wisconsin intermediate appellate court certified Burch’s Fourth Amendment challenge to the Wisconsin Supreme Court, writing that the “issues raise novel questions regarding the application of Fourth Amendment jurisprudence to the vast array of digital information contained in modern cell phones.” In December, the Wisconsin Supreme Court decided to review the case and asked the parties to address six specific questions related to the search and retention of the cell phone data.  

The Law

In a landmark ruling in Riley v. California, the U.S. Supreme Court established the general rule that police must get a warrant to search a cell phone. However, there are certain narrow exceptions to the warrant requirement, including when a person consents to the search of a device. While Burch did consent to a limited search of his phone, that did not give law enforcement limitless authority to search and retain a copy of his entire phone.

Specifically, in our brief, we argue that the state committed multiple independent violations of Burch’s Fourth Amendment rights. First, since Burch only consented to the search of his text messages, it was unlawful for the Green Bay police to copy his entire phone. And even if his consent extended beyond his text messages, he did not give the police the authority to search information on his phone having nothing to do with the initial investigation. Next, regardless of the extent of Burch’s consent, once the police determined Burch was no longer a suspect, the state lost virtually all justification for retaining Burch’s private information and should have returned it to him or purged it. Lastly, since the state had no compelling legal justification to hold Burch’s data after closing the initial investigation of him, the Brown County Sheriff’s warrantless search of the data retained by the Green Bay police was blatantly unlawful. 

The Privacy Threat at Stake

The police’s actions here are not an outlier. In a recent investigative report, Upturn found that law enforcement agencies in all fifty states have access to the type of mobile forensic tools the police employed in this case. And although consent is a recognized exception to the rule that warrants are required for cell phone searches, Upturn’s study reveals that police rely on warrant exceptions like consent to use those tools at an alarming rate. For example, of the 1,583 cell phone extractions the Harris County, Texas Sheriff’s Office performed from August 2015 to July 2019, 53% were conducted without a warrant, including searches based on consent and searches of phones the police classified as “abandoned/deceased.” Additionally, of the 497 cell phone extractions performed in Anoka County, Minnesota between 2017 and May 2019, 38% were consent searches. 

In light of both how common consent-based searches are and their problematic nature (as a recent EFF post explains), the implications of the state’s core argument are all the more troubling. In the state’s view, no one—including suspects, witnesses, and victims—who consents to a search of their digital device in the context of one investigation could prevent law enforcement from storing a copy of their entire device in a database that could be mined years into the future, for any reason the government sees fit.

The state’s arguments would erase the hard-fought protections for digital data recognized in cases like Riley. The Wisconsin Supreme Court should recognize that consent does not authorize the full extraction, indefinite retention, and subsequent search of a person’s cell phone.

Washington: Everyone Deserves Reliable Internet

Mon, 03/08/2021 - 1:38pm

The coronavirus pandemic, its related stay-at-home orders, and its economic and social impacts have illustrated how important robust broadband service is to everything from home-based work to education. Yet, even now, many communities across America have been unable to meet their residents’ telecommunication needs. This is because of two problems: disparities in access to services that exacerbate race and class inequality—the digital divide—and the overwhelming lack of competition in service providers. At the heart of both problems is the current inability of public entities to provide their own broadband services.

This is why EFF joined a coalition of private-sector companies and organizations to support H.B. 1336, authored by Washington State Representative Drew Hansen. This bill would remove restrictions in current Washington law that prevent public entities from building and providing broadband services. In removing these restrictions, Hansen’s bill would allow public entities to create and implement broadband policy based on the needs of the people they serve, and to provide services without being constrained by, or beholden to, big, unreliable ISPs.

Take Action

Washington: Demand Reliable Internet for Everyone

There are already two examples of community-provided telecommunications services showing what removing these constraints could do. Chattanooga, Tennessee has been operating a profitable municipal broadband network for 10 years and, in response to the pandemic, had the capacity to provide 18,000 school children with free 100/100 Mbps connections so they could continue to learn. In Utah, 11 cities joined together to build an open-access fiber network that not only brought competitively priced high-speed fiber to their residents but also gave them more than a dozen provider choices, many offered by small businesses. This multi-city partnership has been so successful that it added two new cities to the network in 2020.

The pandemic made it abundantly clear that communication services and capabilities are the platform, driver, and enabler of all that matters in communities. It also made clear that monopolistic ISPs have failed to meet the needs of communities. H.B. 1336 would correct that failure by allowing public entities to address the concerns and needs of the people they serve. If you are a Washington resident, please urge your lawmakers to support this bill. Broadband access is vitally important now and beyond the pandemic. This bill would not only loosen the hold of monopolistic ISPs, but also give everyone a chance at faster service to participate meaningfully in an increasingly digital world. 

Take Action

Washington: Demand Reliable Internet for Everyone

The FBI Should Stop Attacking Encryption and Tell Congress About All the Encrypted Phones It’s Already Hacking Into

Mon, 03/08/2021 - 12:51pm

Federal law enforcement has been asking for a backdoor to read Americans’ encrypted communications for years now. FBI Director Christopher Wray did it again last week in testimony to the Senate Judiciary Committee. As usual, the FBI’s complaints involved end-to-end encryption employed by popular messaging platforms, as well as the at-rest encryption of digital devices, which Wray described as offering “user-only access.” 

The FBI wants these terms to sound scary, but they actually describe security best practices. End-to-end encryption is what allows users to exchange messages without having them intercepted and read by repressive governments, corporations, and other bad actors. And “user-only access” is actually a perfect encapsulation of how device encryption should work; otherwise, anyone who got their hands on your phone or laptop—a thief, an abusive partner, or an employer—could access its most sensitive data. When you intentionally weaken these systems, it hurts our security and privacy, because there’s no magical kind of access that only works for the good guys. If Wray gets his special pass to listen in on our conversations and access our devices, corporations, criminals, and authoritarians will be able to get the same access. 

It’s remarkable that Wray keeps getting invited to Congress to sing the same song. Notably, Wray was invited there to talk, in part, about the January 6th insurrection, a serious domestic attack in which the attackers—far from being concerned about secrecy—proudly broadcast many of their crimes, resulting in hundreds of arrests. 

It’s also remarkable what Wray, once more, chose to leave out of this narrative. While Wray continues to express frustration about what his agents can’t get access to, he fails to brief Senators about the shocking frequency with which his agency already accesses Americans’ smartphones. Nevertheless, the scope of police snooping on Americans’ mobile phones is becoming clear, and it’s not just the FBI who is doing it. Instead of inviting Wray up to Capitol Hill to ask for special ways to invade our privacy and security, Senators should be asking Wray about the private data his agents are already trawling through. 

Police Have An Incredible Number of Ways to Break Into Encrypted Phones

In all 50 states, police are breaking into phones on a vast scale. An October report from the non-profit Upturn, “Mass Extraction,” has revealed details of how invasive and widespread police hacking of our phones has become. Police can easily purchase forensic tools that extract data from nearly every popular phone. In March 2016, Cellebrite, a popular forensic tool company, supported “logical extractions” for 8,393 different devices, and “physical extractions,” which involve copying all the data on a phone bit by bit, for 4,254 devices. Cellebrite can bypass lock screens on about 1,500 different devices. 

How do they bypass encryption? Often, they just guess the password. In 2018, Prof. Matthew Green estimated it would take no more than 22 hours for forensic tools to break into some older iPhones with a 6-digit passcode simply by continuously guessing passwords (i.e. “brute-force” entry). A 4-digit passcode would fail in about 13 minutes. 
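The arithmetic behind those figures is straightforward. The sketch below assumes a rate of roughly a dozen guesses per second, a figure inferred from the estimates above rather than taken from any vendor's documentation:

```python
# Worst-case brute-force time for numeric passcodes, assuming ~12.5 guesses
# per second (an assumption inferred from the 13-minute / 22-hour figures
# cited above; real rates depend on the device and the exploit used).
GUESSES_PER_SECOND = 12.5

def worst_case_seconds(digits: int) -> float:
    return 10 ** digits / GUESSES_PER_SECOND

print(f"4-digit passcode: about {worst_case_seconds(4) / 60:.0f} minutes")
print(f"6-digit passcode: about {worst_case_seconds(6) / 3600:.0f} hours")
```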

That brute-force guessing was enabled by a hardware flaw that has been fixed since 2018, and the rate of password guessing is much more limited now. But even as smartphone companies like Apple improve their security, device hacking remains very much a cat-and-mouse game. As recently as September 2020, Cellebrite marketing materials boasted that its tools could break into iPhone devices up to “the latest iPhone 11/ 11 Pro / Max running the latest iOS versions up to the latest 13.4.1.” 

Even when passwords can’t be broken, vendors like Cellebrite offer “advanced services” that can unlock even the newest iOS and Samsung devices. Upturn research suggests the base price on such services is $1,950, but it can be cheaper in bulk. 

Buying electronic break-in technology on a wholesale basis represents the best deal for police departments around the U.S., and they avail themselves of these bargains regularly. In 2018, the Seattle Police Department purchased 20 such “actions” from Cellebrite for $33,000, allowing them to extract phone data within weeks or even days. Law enforcement agencies that want to unlock phones en masse can bring Cellebrite’s “advanced unlocking” in-house, for prices that range from $75,000 to $150,000. 

That means that for most police departments, breaking into phones isn’t just convenient, it’s relatively inexpensive. Even a mid-sized city like Virginia Beach, VA has a police budget of more than $100 million; New York City’s police budget is over $5 billion. The FBI’s 2020 budget request was about $9 billion.

When the FBI says it’s “going dark” because it can’t beat encryption, what it’s really asking for is a method of breaking in that’s cheaper, easier, and more reliable than the methods they already have. The only way to fully meet the FBI’s demands would be to require a backdoor in all platforms, applications, and devices. Especially at a time when police abuses nationwide have come into new focus, this type of complaint should be a non-starter with elected officials. Instead, they should be questioning how and why police are already dodging encryption. These techniques aren’t just being used against criminals. 

Phone Searches By Police Are Widespread and Commonplace

Upturn has documented more than 2,000 agencies across the U.S. that have purchased products or services from mobile device forensic tool vendors, including every one of the 50 largest police departments, and at least 25 of the 50 largest sheriffs’ offices. 

Law enforcement officials like Wray want to convince us that encryption needs to be bypassed or broken for threats like terrorism or crimes against children, but in fact, Upturn’s public records requests show that police use forensic tools to search phones for everyday low-level crimes. Even when police don't need to bypass encryption—such as when they convince someone to "consent" to the search of a phone and unlock it—these invasive police phone searches are used “as an all-purpose investigative tool, for an astonishingly broad array of offenses, often without a warrant,” as Upturn put it.

The 44 law enforcement agencies that provided records to Upturn revealed at least 50,000 extractions of cell phones between 2015 and 2019. And there’s no question that this number is a “severe undercount”: it covers only 44 agencies, when at least 2,000 agencies have the tools. Many of the largest police departments, including New York, Chicago, Washington D.C., Baltimore, and Boston, either denied Upturn’s records requests or did not respond. 

“Law enforcement… use these tools to investigate cases involving graffiti, shoplifting, marijuana possession, prostitution, vandalism, car crashes, parole violations, petty theft, public intoxication, and the full gamut of drug-related offenses,” Upturn reports. In Suffolk County, NY, 20 percent of the phones searched by police were for narcotics cases. Authorities in Santa Clara County, CA, San Bernardino County, CA, and Fort Worth, TX all reported that drug crimes were among the most common reasons for cell phone data extractions. Here are just a few examples of the everyday offenses in which Upturn found police searched phones: 

  • In one case, police officers sought to search two phones for evidence of drug sales after a $220 undercover marijuana bust. 
  • Police stopped a vehicle for a “left lane violation,” then “due to nervousness and inconsistent stories, a free air sniff was conducted by a … K9 with positive alert to narcotics.” The officers found bags of marijuana in the car, then seized eight phones from the car’s occupants, and sought to extract data from them for “evidence of drug transactions.” 
  • Officers looking for a juvenile who allegedly violated terms of his electronic monitoring found him after a “short foot pursuit” in which the youngster threw his phone to the ground. Officers sought to search the phone for evidence of “escape in the second degree.” 

And these searches often take place without judicial warrants, despite the U.S. Supreme Court’s clear ruling in Riley v. California that a warrant is required to search a cell phone. That’s because police frequently abuse rules around so-called consent searches. These types of searches are widespread, but they’re hardly consensual. In January, we wrote about how these so-called “consent searches” are extraordinary violations of our privacy. 

Forensic searches of cell phones are increasingly common. The Las Vegas police, for instance, examined 260% more cell phones in 2018-2019 compared with 2015-2016. 

The searches are often overbroad, as well. It’s not uncommon for data unrelated to the initial suspicions to be copied, kept, and used for other purposes later. For instance, police can deem unrelated data to be “gang related” and keep it in a “gang database,” which often has vague standards for inclusion. Being placed in such a database can easily affect people’s future employment options. Many police departments don’t have any policies in place about when forensic phone-searching tools can be used. 

It’s Time for Oversight On Police Phone Searches

Rather than listening to a litany of requests for special access to personal data from federal agencies like the FBI, Congress should assert oversight over the inappropriate types of access that are already taking place. 

The first step is to start keeping track of what’s happening. Congress should require that federal law enforcement agencies create detailed audit logs and screen recordings of digital searches. And we agree with Upturn that agencies nationwide should collect and publish aggregated information about how many phones were searched, and whether those searches involved warrants (with published warrant numbers), or so-called consent searches. Agencies should also disclose what tools were used for data extraction and analysis. 

Congress should also consider placing sharp limits on when consent searches can take place at all. In our January blog post, we suggest that such searches be banned entirely in high-coercion settings like traffic stops, and suggest some specific limits that should be set in less-coercive settings. 

Why You Can’t Sue Your Broadband Monopoly

Fri, 03/05/2021 - 4:45pm

EFF Legal Fellow Josh Srago co-wrote this blog post

The relationship between the federal judiciary and executive agencies is a complex one. While Congress makes the laws, it can grant agencies rulemaking authority to interpret the law. So long as an agency’s interpretation of any ambiguous language in the statute is reasonable, the courts will defer to the judgment of the agency.

For broadband access, the courts have deferred to the Federal Communications Commission’s (FCC’s) judgment on the proper classification of broadband service twice in recent years. The courts deferred to the FCC when it classified broadband as a Title II service in the 2015 Open Internet Order, and deferred again when broadband was reclassified as a Title I service in the 2017 Restoring Internet Freedom Order. A Title II service is subject to strict FCC oversight, rules, and regulations, but a Title I service is not.

Classification of services isn’t the only place where the courts defer to the FCC’s authority. Two Supreme Court decisions – Verizon Communications, Inc. v. Law Offices of Curtis V. Trinko, LLP, and Credit Suisse Securities (USA) LLC v. Billing – have established the precedent that if an industry is overseen by an expert regulatory agency (such as broadband being overseen by the FCC) then the courts will defer to the agency’s judgment on competition policy because the agency has the particular and specific knowledge to make the best determination.

In other words, civil antitrust law has to overcome multiple barriers before it can be applied to broadband providers, potentially denying consumers an antitrust remedy for monopolization. EFF has conducted an in-depth analysis of this issue. For a summary, read on.

The Judicial Deference Circle and How It Blocks Antitrust Enforcement Over Broadband

What this creates is circular deferential reasoning. The FCC has the authority to determine whether broadband will be subject to strict oversight or to essentially none, and the courts will defer to the FCC’s determination. If the service is subject to strict rules and regulations, then the FCC has the power to take action if a provider acts in an anticompetitive way. Courts will defer to the FCC’s enforcement powers to ensure that the market is regulated as the agency sees fit.

However, if the FCC determines that the service should not be subject to the strict rules and regulations of Title II and a monopoly broadband provider acts in an anticompetitive way, the courts will still defer to the FCC’s judgment as to whether the bad actor is doing something it should not. If the courts did otherwise, their determination would be in direct conflict with the regulatory regime the FCC has established to ensure that the market is regulated as it sees fit.

What this means is that individuals and municipalities are left without a legal pathway under our antitrust laws when a broadband service provider abuses its monopoly power. A complaint can be filed with the FCC regarding the behavior, but how that complaint is handled depends on the FCC’s own policy choices, not on whether the conduct is anticompetitive.

A Better Broadband World Under Robust Antitrust Enforcement

The best path forward to resolve this is for Congress to pass legislation that overturns Trinko and Credit Suisse, ensuring that people, or representatives of people such as local governments, can protect their interests and aren’t being taken advantage of by incumbent monopoly broadband providers. But what will that world look like? EFF analyzed that question and theorized how things could improve for consumers. You can read our memo here. As Congress debates reforming antitrust laws with a focus on Big Tech, there are a lot of downstream positive impacts that can stem from such reforms, namely in giving people the ability to sue their broadband monopolist and use the courts to bring in competition.

Google’s FLoC Is a Terrible Idea

Wed, 03/03/2021 - 6:12pm

The third-party cookie is dying, and Google is trying to create its replacement. 

No one should mourn the death of the cookie as we know it. For more than two decades, the third-party cookie has been the lynchpin in a shadowy, seedy, multi-billion dollar advertising-surveillance industry on the Web; phasing out tracking cookies and other persistent third-party identifiers is long overdue. However, as the foundations shift beneath the advertising industry, its biggest players are determined to land on their feet. 

Google is leading the charge to replace third-party cookies with a new suite of technologies to target ads on the Web. And some of its proposals show that it hasn’t learned the right lessons from the ongoing backlash to the surveillance business model. This post will focus on one of those proposals, Federated Learning of Cohorts (FLoC), which is perhaps the most ambitious—and potentially the most harmful. 

FLoC is meant to be a new way to make your browser do the profiling that third-party trackers used to do themselves: in this case, boiling down your recent browsing activity into a behavioral label, and then sharing it with websites and advertisers. The technology will avoid the privacy risks of third-party cookies, but it will create new ones in the process. It may also exacerbate many of the worst non-privacy problems with behavioral ads, including discrimination and predatory targeting. 

Google’s pitch to privacy advocates is that a world with FLoC (and other elements of the “privacy sandbox”) will be better than the world we have today, where data brokers and ad-tech giants track and profile with impunity. But that framing is based on a false premise that we have to choose between “old tracking” and “new tracking.” It’s not either-or. Instead of re-inventing the tracking wheel, we should imagine a better world without the myriad problems of targeted ads. 

We stand at a fork in the road. Behind us is the era of the third-party cookie, perhaps the Web’s biggest mistake. Ahead of us are two possible futures. 

In one, users get to decide what information to share with each site they choose to interact with. No one needs to worry that their past browsing will be held against them—or leveraged to manipulate them—when they next open a tab. 

In the other, each user’s behavior follows them from site to site as a label, inscrutable at a glance but rich with meaning to those in the know. Their recent history, distilled into a few bits, is “democratized” and shared with dozens of nameless actors that take part in the service of each web page. Users begin every interaction with a confession: here’s what I’ve been up to this week, please treat me accordingly.

Users and advocates must reject FLoC and other misguided attempts to reinvent behavioral targeting. We implore Google to abandon FLoC and redirect its effort towards building a truly user-friendly Web.

What is FLoC?

In 2019, Google presented the Privacy Sandbox, its vision for the future of privacy on the Web. At the center of the project is a suite of cookieless protocols designed to satisfy the myriad use cases that third-party cookies currently provide to advertisers. Google took its proposals to the W3C, the standards-making body for the Web, where they have primarily been discussed in the Web Advertising Business Group, a body made up primarily of ad-tech vendors. In the intervening months, Google and other advertisers have proposed dozens of bird-themed technical standards: PIGIN, TURTLEDOVE, SPARROW, SWAN, SPURFOWL, PELICAN, PARROT… the list goes on. Seriously. Each of the “bird” proposals is designed to perform one of the functions in the targeted advertising ecosystem that is currently done by cookies.

FLoC is designed to help advertisers perform behavioral targeting without third-party cookies. A browser with FLoC enabled would collect information about its user’s browsing habits, then use that information to assign its user to a “cohort” or group. Users with similar browsing habits—for some definition of “similar”—would be grouped into the same cohort. Each user’s browser will share a cohort ID, indicating which group they belong to, with websites and advertisers. According to the proposal, at least a few thousand users should belong to each cohort (though that’s not a guarantee).

If that sounds dense, think of it this way: your FLoC ID will be like a succinct summary of your recent activity on the Web.

Google’s proof of concept used the domains of the sites that each user visited as the basis for grouping people together. It then used an algorithm called SimHash to create the groups. SimHash can be computed locally on each user’s machine, so there’s no need for a central server to collect behavioral data. However, a central administrator could have a role in enforcing privacy guarantees. In order to prevent any cohort from being too small (i.e., too identifying), Google proposes that a central actor could count the number of users assigned to each cohort. If any are too small, they can be combined with other, similar cohorts until enough users are represented in each one. 
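For intuition, here is a minimal SimHash-style sketch of how a browser could turn visited domains into a short cohort label locally. It illustrates the general technique, not Google's actual FLoC code; the domains are invented, and the 8-bit label width matches the proof-of-concept figure discussed below.

```python
import hashlib

COHORT_BITS = 8  # matches the 8-bit cohort IDs from Google's proof of concept

def simhash_cohort(domains, bits=COHORT_BITS):
    """Minimal SimHash: every visited domain 'votes' on each bit of the label,
    so similar browsing histories tend to produce similar cohort IDs."""
    votes = [0] * bits
    for domain in domains:
        digest = int.from_bytes(hashlib.sha256(domain.encode()).digest(), "big")
        for i in range(bits):
            votes[i] += 1 if (digest >> i) & 1 else -1
    return sum(1 << i for i, v in enumerate(votes) if v > 0)

# Heavily overlapping histories yield labels that agree on most bits.
a = simhash_cohort(["news.example", "knitting.example", "maps.example"])
b = simhash_cohort(["news.example", "knitting.example", "weather.example"])
print(format(a, "08b"), format(b, "08b"))
```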

According to the proposal, most of the specifics are still up in the air. The draft specification states that a user’s cohort ID will be available via Javascript, but it’s unclear whether there will be any restrictions on who can access it, or whether the ID will be shared in any other ways. FLoC could perform clustering based on URLs or page content instead of domains; it could also use a federated learning-based system (as the name FLoC implies) to generate the groups instead of SimHash. It’s also unclear exactly how many possible cohorts there will be. Google’s experiment used 8-bit cohort identifiers, meaning that there were only 256 possible cohorts. In practice that number could be much higher; the documentation suggests a 16-bit cohort ID comprising 4 hexadecimal characters. The more cohorts there are, the more specific they will be; longer cohort IDs will mean that advertisers learn more about each user’s interests and have an easier time fingerprinting them.

One thing that is specified is duration. FLoC cohorts will be re-calculated on a weekly basis, each time using data from the previous week’s browsing. This makes FLoC cohorts less useful as long-term identifiers, but it also makes them more potent measures of how users behave over time.

New privacy problems

FLoC is part of a suite intended to bring targeted ads into a privacy-preserving future. But the core design involves sharing new information with advertisers. Unsurprisingly, this also creates new privacy risks. 

Fingerprinting

The first issue is fingerprinting. Browser fingerprinting is the practice of gathering many discrete pieces of information from a user’s browser to create a unique, stable identifier for that browser. EFF’s Cover Your Tracks project demonstrates how the process works: in a nutshell, the more ways your browser looks or acts different from others’, the easier it is to fingerprint. 

Google has promised that the vast majority of FLoC cohorts will comprise thousands of users each, so a cohort ID alone shouldn’t distinguish you from a few thousand other people like you. However, that still gives fingerprinters a massive head start. If a tracker starts with your FLoC cohort, it only has to distinguish your browser from a few thousand others (rather than a few hundred million). In information theoretic terms, FLoC cohorts will contain several bits of entropy—up to 8 bits, in Google’s proof of concept trial. This information is even more potent given that it is unlikely to be correlated with other information that the browser exposes. This will make it much easier for trackers to put together a unique fingerprint for FLoC users.
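
As a rough back-of-the-envelope illustration of that head start, a handful of cohort bits dramatically shrinks the crowd a fingerprinter has to tell you apart from. The browser population below is an assumed round number, not a measurement.

    import math

    browser_population = 100_000_000  # assumption: an illustrative round figure
    cohort_bits = 8                   # as in Google's 8-bit proof-of-concept cohorts

    # Knowing the cohort ID alone narrows the anonymity set from the whole
    # population to roughly population / 2**bits (assuming evenly sized cohorts).
    anonymity_set = browser_population / 2**cohort_bits
    print(f"{cohort_bits} cohort bits leave about {anonymity_set:,.0f} browsers to distinguish")

    # The tracker now needs far fewer additional bits of entropy (fonts, screen
    # size, time zone, ...) to single out one browser in the remaining crowd.
    remaining_bits = math.log2(anonymity_set)
    print(f"roughly {remaining_bits:.1f} more bits of entropy suffice for a unique fingerprint")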

Google has acknowledged this as a challenge, but has pledged to solve it as part of its broader "Privacy Budget" plan for tackling fingerprinting in the long term. Solving fingerprinting is an admirable goal, and the Privacy Budget is a promising avenue to pursue. But according to the FAQ, that plan is "an early stage proposal and does not yet have a browser implementation." Meanwhile, Google is set to begin testing FLoC as early as this month.

Fingerprinting is notoriously difficult to stop. Browsers like Safari and Tor Browser have engaged in years-long wars of attrition against trackers, sacrificing large swaths of their own feature sets in order to reduce fingerprinting attack surfaces. Fingerprinting mitigation generally involves trimming away or restricting unnecessary sources of entropy—which is exactly what FLoC adds. Google should not create new fingerprinting risks until it has figured out how to deal with existing ones.

Cross-context exposure

The second problem is less easily explained away: the technology will share new personal data with trackers who can already identify users. For FLoC to be useful to advertisers, a user’s cohort will necessarily reveal information about their behavior. 

The project’s Github page addresses this up front:

This API democratizes access to some information about an individual’s general browsing history (and thus, general interests) to any site that opts into it. … Sites that know a person’s PII (e.g., when people sign in using their email address) could record and reveal their cohort. This means that information about an individual's interests may eventually become public.

As described above, FLoC cohorts shouldn’t work as identifiers by themselves. However, any company able to identify a user in other ways—say, by offering “log in with Google” services to sites around the Internet—will be able to tie the information it learns from FLoC to the user’s profile.

Two categories of information may be exposed in this way:

  1. Specific information about browsing history. Trackers may be able to reverse-engineer the cohort-assignment algorithm to determine that any user who belongs to a specific cohort probably or definitely visited specific sites. 
  2. General information about demographics or interests. Observers may learn that in general, members of a specific cohort are substantially likely to be a specific type of person. For example, a particular cohort may over-represent users who are young, female, and Black; another cohort, middle-aged Republican voters; a third, LGBTQ+ youth.

This means every site you visit will have a good idea about what kind of person you are on first contact, without having to do the work of tracking you across the web. Moreover, as your FLoC cohort will update over time, sites that can identify you in other ways will also be able to track how your browsing changes. Remember, a FLoC cohort is nothing more, and nothing less, than a summary of your recent browsing activity.

You should have a right to present different aspects of your identity in different contexts. If you visit a site for medical information, you might trust it with information about your health, but there’s no reason it needs to know what your politics are. Likewise, if you visit a retail website, it shouldn’t need to know whether you’ve recently read up on treatment for depression. FLoC erodes this separation of contexts, and instead presents the same behavioral summary to everyone you interact with.

Beyond privacy

FLoC is designed to prevent a very specific threat: the kind of individualized profiling that is enabled by cross-context identifiers today. The goal of FLoC and other proposals is to avoid letting trackers access specific pieces of information that they can tie to specific people. As we’ve shown, FLoC may actually help trackers in many contexts. But even if Google is able to iterate on its design and prevent these risks, the harms of targeted advertising are not limited to violations of privacy. FLoC’s core objective is at odds with other civil liberties.

The power to target is the power to discriminate. By definition, targeted ads allow advertisers to reach some kinds of people while excluding others. A targeting system may be used to decide who gets to see job postings or loan offers just as easily as it is to advertise shoes. 

Over the years, the machinery of targeted advertising has frequently been used for exploitation, discrimination, and harm. The ability to target people based on ethnicity, religion, gender, age, or ability allows discriminatory ads for jobs, housing, and credit. Targeting based on credit history—or characteristics systematically associated with it—enables predatory ads for high-interest loans. Targeting based on demographics, location, and political affiliation helps purveyors of politically motivated disinformation and voter suppression. All kinds of behavioral targeting increase the risk of convincing scams.

Google, Facebook, and many other ad platforms already try to rein in certain uses of their targeting tools. Google, for example, limits advertisers' ability to target people in "sensitive interest categories." However, these efforts frequently fall short; determined actors can usually find workarounds to platform-wide restrictions on certain kinds of targeting or certain kinds of ads.

Even with absolute power over what information can be used to target whom, platforms are too often unable to prevent abuse of their technology. But FLoC will use an unsupervised algorithm to create its clusters. That means that nobody will have direct control over how people are grouped together. Ideally (for advertisers), FLoC will create groups that have meaningful behaviors and interests in common. But online behavior is linked to all kinds of sensitive characteristics—demographics like gender, ethnicity, age, and income; “big 5” personality traits; even mental health. It is highly likely that FLoC will group users along some of these axes as well. FLoC groupings may also directly reflect visits to websites related to substance abuse, financial hardship, or support for survivors of trauma.

Google has proposed that it can monitor the outputs of the system to check for any correlations with its sensitive categories. If it finds that a particular cohort is too closely related to a particular protected group, the administrative server can choose new parameters for the algorithm and tell users’ browsers to group themselves again. 

This solution sounds both Orwellian and Sisyphean. In order to monitor how FLoC groups correlate with sensitive categories, Google will need to run massive audits using data about users' race, gender, religion, age, health, and financial status. Whenever it finds a cohort that correlates too strongly along any of those axes, it will have to reconfigure the whole algorithm and try again, hoping that no other "sensitive categories" are implicated in the new version. This is a much more difficult version of the problem it is already trying, and frequently failing, to solve.
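
To illustrate the scale of that auditing job, here is a toy sketch of the kind of check the administrative server would have to run. Everything in it is hypothetical: the cohort assignments, the sensitive attributes, and the over-representation threshold are invented for the example. Note that the audit only works if Google first collects, per user, exactly the sensitive data the system is supposed to keep away from advertisers.

    from collections import defaultdict

    # Hypothetical audit input: cohort assignments joined with sensitive
    # attributes Google would somehow need to know for every audited user.
    users = [
        {"cohort": 12, "group": "A"},
        {"cohort": 12, "group": "A"},
        {"cohort": 12, "group": "B"},
        {"cohort": 40, "group": "B"},
        {"cohort": 40, "group": "B"},
    ]

    def flag_skewed_cohorts(users, max_ratio=1.5):
        """Flag (cohort, group) pairs where a sensitive group is over-represented
        by more than max_ratio compared with its share of the overall population."""
        overall = defaultdict(int)
        per_cohort = defaultdict(lambda: defaultdict(int))
        for u in users:
            overall[u["group"]] += 1
            per_cohort[u["cohort"]][u["group"]] += 1

        total = len(users)
        flagged = []
        for cohort, counts in per_cohort.items():
            size = sum(counts.values())
            for group, n in counts.items():
                if (n / size) > max_ratio * (overall[group] / total):
                    flagged.append((cohort, group))
        return flagged

    # Any flagged cohort would force a re-parameterization of the whole
    # clustering algorithm and another audit pass (the Sisyphean part).
    print(flag_skewed_cohorts(users))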

In a world with FLoC, it may be more difficult to target users directly based on age, gender, or income. But it won’t be impossible. Trackers with access to auxiliary information about users will be able to learn what FLoC groupings “mean”—what kinds of people they contain—through observation and experiment. Those who are determined to do so will still be able to discriminate. Moreover, this kind of behavior will be harder for platforms to police than it already is. Advertisers with bad intentions will have plausible deniability—after all, they aren’t directly targeting protected categories, they’re just reaching people based on behavior. And the whole system will be more opaque to users and regulators.

Google, please don’t do this

We wrote about FLoC and the rest of the initial batch of proposals when they were first introduced, calling FLoC "the opposite of privacy-preserving technology." We hoped that the standards process would shed light on FLoC's fundamental flaws, causing Google to reconsider pushing it forward. Indeed, several issues on the official Github page raise the exact same concerns that we highlight here. However, Google has continued developing the system, leaving the fundamentals nearly unchanged. It has started pitching FLoC to advertisers, boasting that FLoC is a "95% effective" replacement for cookie-based targeting. And starting with Chrome 89, released on March 2, it's deploying the technology for a trial run. A small portion of Chrome users—still likely millions of people—will be (or have been) assigned to test the new technology.

Make no mistake, if Google does follow through on its plan to implement FLoC in Chrome, it will likely give everyone involved “options.” The system will probably be opt-in for the advertisers that will benefit from it, and opt-out for the users who stand to be hurt. Google will surely tout this as a step forward for “transparency and user control,” knowing full well that the vast majority of its users will not understand how FLoC works, and that very few will go out of their way to turn it off. It will pat itself on the back for ushering in a new, private era on the Web, free of the evil third-party cookie—the technology that Google helped extend well past its shelf life, making billions of dollars in the process.

It doesn’t have to be that way. The most important parts of the privacy sandbox, like dropping third-party identifiers and fighting fingerprinting, will genuinely change the Web for the better. Google can choose to dismantle the old scaffolding for surveillance without replacing it with something new and uniquely harmful.

We emphatically reject the future of FLoC. That is not the world we want, nor the one users deserve. Google needs to learn the correct lessons from the era of third-party tracking and design its browser to work for users, not for advertisers.

The Justice in Policing Act Does Not Do Enough to Rein in Body-Worn Cameras

Tue, 03/02/2021 - 4:05pm

Reformers often tout police use of body-worn cameras (BWCs) as a way to prevent law enforcement misconduct. But, far too often, this technology becomes one more tool in a toolbox already overflowing with surveillance technology that spies on civilians. Worse, because police often control when BWCs are turned on and how the footage is stored, BWCs often fail to do the one thing they were intended to do: record video of how police interact with the public. So EFF opposes BWCs absent strict safeguards.

While it takes some useful steps toward curbing nefarious ways that police use body-worn cameras, the George Floyd Justice in Policing Act, H.R. 1280, does not do enough. It places important limits on how federal law enforcement officials use BWCs. And it is a step forward compared to last year’s version: it bans federal officials from applying face surveillance technology to any BWC footage. However, H.R. 1280 still falls short: it funds BWCs for state and local police, but does not apply the same safeguards that the bill applies to federal officials. We urge amendments to this bill as detailed below. Otherwise, these federally-funded BWCs will augment law enforcement’s already excessive surveillance capabilities. 

As has been our position, BWCs should adhere to the following regulations: 

Mandated activation of body-worn cameras. Officers must be required to activate their cameras at the start of all investigative encounters with civilians, and leave them on until the encounter ends. Otherwise, officers could subvert any accountability benefits of BWCs by simply turning them off when misconduct is imminent, or never turning them on at all. In narrow circumstances where civilians have heightened privacy interests (such as encounters with crime victims, or warrantless searches of homes), officers should give civilians the option to deactivate BWCs.

No political spying with body-worn cameras. Police must not use BWCs to gather information about how people are exercising their First Amendment rights to speak, associate, or practice their religion. Government surveillance chills and deters such protected activity.

Retention of body-worn camera footage. All BWC footage should be held for a few months, to allow injured civilians sufficient time to come forward and seek evidence. Then footage should be promptly destroyed, to reduce the risks of data breach, employee misuse, and long-term surveillance of the public. However, if footage depicts an officer’s use of force or an episode subject to a civilian’s complaint, then the footage must be retained for a lengthier period. 

Officer review of footage. If footage depicts use of force or an episode subject to a civilian complaint, then an officer must not be allowed to review the footage until after they make an initial statement about the event. Given the malleability of human memory, a video can alter or even overwrite a recollection. And some officers might use footage to better "testilie," or stretch the truth about encounters.

Public access to footage. If footage depicts a particular person, then that person must have access to it. If footage depicts police use of force, then all members of the general public must have access to it. If a person seeks footage that does not depict them or use of force, then whether they may have access must depend on a weighing by a court of (a) the benefits of disclosure to police accountability, and (b) the costs of disclosure to the privacy of a depicted member of the public. If the footage does not depict police misconduct, then disclosure will rarely have a police accountability benefit. In many cases, blurring of civilian faces might diminish privacy concerns. In no case should footage be withheld on the grounds it is a police investigatory record.

Enforcement of these rules. If footage is recorded or retained in violation of these rules, then it must not be admissible in court. If police fail to record or retain footage as these rules require, then a civil rights plaintiff or criminal defendant must receive an evidentiary presumption that the missing footage would have helped them. And departments must discipline officers who break these rules.

Community control over body-worn cameras. Local police and sheriffs must not acquire or use BWCs, or any other surveillance technology, absent permission from their city council or county board, after ample opportunity for residents to make their voices heard. This is commonly called community control over police surveillance (CCOPS). 

EFF supported a California law (A.B. 1215) that placed a three-year moratorium on use of face surveillance with BWCs. Likewise, EFF in 2019, 2020, and 2021 joined scores of privacy and civil rights groups in opposing any federal use of face surveillance, and also any federal funding of state and local face surveillance. 

So we are pleased with Section 374 of H.R. 1280, which states: "No camera or recording device authorized or required to be used under this part may be equipped with or employ facial recognition technology, and footage from such a camera or recording device may not be subjected to facial recognition technology." We are also pleased with Section 3051, which says that federal grant funds for state and local programs "may not be used for expenses related to facial recognition technology." Both of these provisions validate civil society and over-policed communities' long-standing assertion that government use of face recognition is dangerous and must be banned.

However, this bill does not go far enough. EFF firmly supports a full ban of all government use of face recognition technology. At a minimum, H.R. 1280 must be amended to extend the face surveillance ban it mandates for federal BWCs to federally-funded BWCs employed by state and local law enforcement agencies. For body-worn cameras to be a small part of a solution, rather than part of the problem, their operation and footage storage must be heavily regulated, and they must be used solely to record video of how police interact with the public, not as Trojan horses for increased surveillance.

Officials in Baltimore and St. Louis Put the Brakes on Persistent Surveillance Systems Spy Planes

Tue, 03/02/2021 - 12:38pm

Baltimore, MD and St. Louis, MO, have a lot in common. Both cities suffer from declining populations and high crime rates. In recent years, the predominantly Black population in each city has engaged in collective action opposing police violence. In recent weeks, officials in both cities voted unanimously to spare their respective residents from further invasions of their privacy and essential liberties by a panoptic aerial surveillance system designed to protect soldiers on the battlefield, not residents' rights and public safety.

Baltimore’s Unanimous Vote to Terminate  

From April to October of 2020, Baltimore residents were subjected to a panopticon-like system of surveillance facilitated by a partnership between the Baltimore Police Department and a privately-funded Ohio company called Persistent Surveillance Systems (PSS). During that period, for at least 40 hours a week, PSS flew surveillance aircraft over 32 square miles of the city, enabling police to identify specific individuals from the images captured by the planes. Although no planes had flown as part of the collaboration since late October—and the program was scheduled to end later this year—it had become troubling enough that on February 3, the City's spending board voted unanimously to terminate Baltimore's contract with PSS.

St. Louis Rules Committee Says ‘Do Not Pass’

Given the program's problematic history and unimpressive efficacy, it may come as some surprise that on December 11, 2020, City of St. Louis Alderman Tom Oldenburg introduced legislation that would have forced the mayor and comptroller to enter into a contract with PSS closely replicating Baltimore's spy plane program.

With lobbyists for the privately-funded Persistent Surveillance Systems program padding campaign coffers, Alderman Oldenburg's proposal was initially well received by the City's Board of Alders. However, as EFF and local advocates—including the ACLU of Missouri and Electronic Frontier Alliance member Privacy Watch STL—worked to educate lawmakers and their constituents about the bill’s unconstitutionality, that support began to waver. While the bill narrowly cleared a preliminary vote in late January, by Feb. 4 the Rules Committee voted unanimously to issue a "Do Not Pass" recommendation.

A supermajority of the Board could vote to override the Committee's guidance when they meet for the last time this session on April 19. However, the bill's sponsor has acknowledged that outcome to be unlikely—while also suggesting he plans to introduce a similar bill next session. If the Board does approve the ordinance when they meet on April 19, it is doubtful that St. Louis Mayor Lyda Krewson would sign the bill after her successor has been chosen in the City's April 6 election.

Next Up: Fourth Circuit Court of Appeals 

While municipal lawmakers are weighing in unanimously against the program, it may be the courts that make the final call. Last November, EFF, along with the Brennan Center for Justice, Electronic Privacy Information Center, FreedomWorks, National Association of Criminal Defense Lawyers, and the Rutherford Institute, filed a friend-of-the-court brief in a federal civil rights lawsuit challenging Baltimore's aerial surveillance program. A divided three-judge panel of the U.S. Court of Appeals for the Fourth Circuit initially upheld the program, but the full court has since withdrawn that decision and decided to rehear the case en banc. Oral arguments are scheduled for March 8. While the people of St. Louis and Baltimore are protected for now, we're hopeful that the court will find that the aerial surveillance program violates the Fourth Amendment's guarantee against warrantless dragnet surveillance, potentially shutting down the program for good.

What the AT&T Breakup Teaches Us About a Big Tech Breakup

Mon, 03/01/2021 - 1:51pm

The multi-pronged attempt by state Attorneys General, the Department of Justice, and the Federal Trade Commission to find Google and Facebook liable for violating antitrust law may result in breaking up these giant companies. But in order for any of this to cause lasting change, we need to look to the not-so-recent past.

In the world of antitrust, the calls to "break up" Big Tech companies translate to the fairly standard remedy of "structural separation," in which companies that sell a service are barred from competing with the buyers of that service (for example, rail companies have been forced to stop selling freight services that compete with their own customers). It has been done before as part of the fight against communication monopolies. However, history shows us that the real work is not just breaking up companies, but following through afterward.

In order to make sure that the Internet becomes a space for innovation and competition, there has to be a vision of an ideal ecosystem. When we look back at the United States’ previous move from telecom monopoly into what can best be described as “regulated competition,” we can learn a lot of lessons—good and bad—about what can be done post-breakup.

The AT&T of Yore and the Big Tech of Today

Cast your mind back, back to when AT&T was a giant corporation. No, further back. When AT&T was the world’s largest corporation and the telephone monopoly. In the 1970s, AT&T resembled Big Tech companies in scale, significance, and influence.

AT&T grew by relentlessly gobbling up rival companies and eventually struck a deal with the government to make its monopolization legal in exchange for universal service (known as the Kingsbury Commitment). As a monopolist, AT&T's unilateral decisions dictated the way people communicated. The company exerted extraordinary influence over public debate and used its influence to argue that its monopoly was in the public interest. Its final antitrust battle was a quagmire that spanned two political administrations, and despite this, its political power was so great that it was able to get the Department of Defense to claim its monopoly was vital to national security.  

Today, Big Tech is reenacting the battle of the AT&T of yore. Facebook CEO Mark Zuckerberg's assertion that his company's dominance is the only means to compete with China is a repeat of AT&T's attempt to use national security to bypass competition concerns. Similarly, Facebook's recent change of heart on whether Section 230 of the Communications Decency Act should be gutted is an effort to appease policymakers looking to scrutinize the company's dominance. Not coincidentally, Section 230 is the lifeblood of every would-be competitor to Facebook. By trading Section 230 away for policy concessions, Facebook both escapes a breakup and salts the earth against any new competitors, positioning itself to become the regulated monopoly that remains.

Google is a modern AT&T, too. Google acquired its way to dominance by purchasing a multitude of companies over the years to extend its vertical reach. Mergers and acquisitions were key to AT&T's monopoly strategy. That's why the government then sought to break up the company – and that's why the US government today is proposing breakups for Google. Now, with AT&T, there were clear geographic lines along which the company could be broken into smaller regional companies. It's different for Google and Facebook: those lines will have to be drawn along different parts of the companies' "stacks," such as advertising and platforms.

When the US Department of Justice broke up AT&T, it traded one national monopoly for a set of regional monopolies. Over time Congress learned that it wasn't enough. Likewise, breakups for Google and Facebook will only be step one.  

Without a Broader Vision, Big Tech Will Be the Humpty Dumpty That Put Himself Back Together Again

Supporters of structural separation for Big Tech need to learn the lessons of the past. Our forebears got it right initially with telecom, but then failed to sustain a consistent vision of competition, eventually allowing dozens of companies to consolidate into a mix of regional monopolies and super-dominant national companies.

As originally passed, the 1996 telecom law that followed the AT&T breakup enabled the creation of the Competitive Local Exchange Carrier (CLEC) industry. These were smaller phone companies that already existed but had been severely hamstrung by the local monopolies; their reach was limited because there was no federal competition law.

The 1996 Act lowered the start-up costs for new phone companies: they wouldn't have to build an entire network from scratch. The Act forced the Baby Bells (the regional parts of the original AT&T monopoly) to share their "essential facilities" with these new competitors at a fair price, opening the market to much smaller players with much less capital.

But the incumbent monopolies still had friends in statehouses and Congress. By 2001, federal and state governments began adopting a new theory of competition in communications: "deregulated competition"—which whittled away the facilities-sharing rules, as well as the rules banning the broken-up parts of AT&T from merging with one another again (or with cable and wireless companies). If the purpose of this untested, unproven approach was to promote competition, then it was clearly a failure: a majority of Americans today have only one choice for high-speed broadband access that meets 21st-century needs. There has been no serious reckoning for "deregulated competition," and it remains the heart of telecom policy despite nearly every prediction of its benefits having been proven wrong. That happened because policymakers and the public forgot how competition in telecom was won in the first place, and allowed it to be unwound, with consequences that remain with us today.

Steve Coll, author of The Deal of the Century: The Breakup of AT&T, predicted this problem shortly after AT&T's breakup:

It is quite possible - some would argue it is more than likely - that the final landscape of the Bell System breakup will include a bankrupted MCI and an AT&T returned to its original state as a regulated, albeit smaller and less effective, telephone monopoly. The source of this specter lies not in anyone's crystal ball but in the history of U.S. v. AT&T. Precious little in that history - the birth of MCI, the development of phone industry competition, the filing of the Justice lawsuit, the prolonged inaction of Congress, the aborted compromise deals between Justice and AT&T, the Reagan administration's tortured passivity, the final inter-intra settlement itself - was the product of a single coherent philosophy, or a genuine, reasoned consensus, or a farsighted public policy strategy.

A Post-Breakup Internet Tech Vision: Decentralization, Empowerment of Disruptive Innovation, and Consumer Protection

Anyone thinking about Big Tech breakups needs to learn the lesson of AT&T. Breakups are just step one. Before we take that step, we need to know what steps we'll take next. We need a plan for post-break-up regulated competition, or we'll squander years and years of antitrust courtroom battles, only to see the fragments of the companies re-form into new, unstoppable juggernauts. We need a common narrative about where competition comes from and how we sustain it.

Like phone companies, Internet platforms have "network effects": to compete with them, a new company needs access to the incumbent's "ecosystem" – the cluster of products and services monopolists weave around themselves to lock in users, squeeze suppliers, and fend off competitors. In '96, we forced regional monopolies to share their facilities, and thousands of local ISPs sprang up across the country almost overnight. Creating a durable competitive threat to tech monopolists means finding similar measures to promote a flourishing, pluralistic, diverse Internet.

We've always said that tech industry competition is a multifaceted project that calls for multiple laws and careful regulation. Changes to antitrust law, intellectual property law, intermediary liability, and consumer privacy legislation all play critical and integral parts in a more competitive future. Strike the wrong balance and you drain away the Internet's capacity for putting power in the hands of people and communities. Get any of the policies wrong and you risk strangling a hundred future Googles and Facebooks in their cradles—companies whose destiny is to grow for a time but to eventually be replaced by new upstarts better suited for the unforeseeable circumstances of the future.

Here are two examples of policies that are every bit as important as breakups for creating and maintaining a competitive digital world:

The Internet once stood for a world where people with good ideas and a little know-how could change the world, attracting millions of users and spawning dozens of competitors. That was the Net's lifecycle of competition. We can get that future back, but only if we commit to a shared and durable vision of competition. It's fine to talk about breaking up Big Tech, but the hard part starts after the companies are split up. Now is the time to start asking what competition should look like, or we'll get dragged back into the future we're currently headed for before we ever start down the road to a better one.

Federal Court Agrees: Prosecutors Can’t Keep Forensic Evidence Secret from Defendants

Fri, 02/26/2021 - 5:44pm

When the government tries to convict you of a crime, you have a right to challenge its evidence. This is a fundamental principle of due process, yet prosecutors and technology vendors have routinely argued against disclosing how forensic technology works.

For the first time, a federal court has ruled on the issue, and the decision marks a victory for civil liberties.

EFF teamed up with the ACLU of Pennsylvania to file an amicus brief arguing in favor of defendants’ rights to challenge complex DNA analysis software that implicates them in crimes. The prosecution and the technology vendor Cybergenetics opposed disclosure of the software’s source code on the grounds that the company has a commercial interest in secrecy.

The court correctly determined that this secrecy interest could not outweigh a defendant’s rights and ordered the code disclosed to the defense team. The disclosure will be subject to a “protective order” that bars further disclosure, but in a similar previous case a court eventually allowed public scrutiny of source code of a different DNA analysis program after a defense team found serious flaws.

This is the second decision this year ordering the disclosure of Cybergenetics' secret TrueAllele software. This added scrutiny will help ensure that the software does not contribute to unjust incarceration.

From Creativity to Exclusivity: The German Government's Bad Deal for Article 17

Fri, 02/26/2021 - 1:25pm

The implementation process of Article 17 (formerly Article 13) of the controversial Copyright Directive into national laws is in full swing, and it does not look good for users' rights and freedoms. Several EU states have failed to present balanced copyright implementation proposals, ignoring the concerns of EFF, other civil society organizations, and experts that only strong user safeguards can keep Article 17 from turning tech companies and online service operators into the copyright police.

A glimmer of hope came from the German government in a recent discussion paper. While the draft proposal fails to prevent the use of upload filters to monitor all user uploads and assess them against the information provided by rightsholders, it showed creativity by giving users the option of pre-flagging uploads as "authorized" (keeping them online by default) and by setting out exceptions for everyday uses. Remedies against abusive removal requests by self-proclaimed rightsholders were another positive feature of the discussion draft.

Inflexible Rules in Favor of Press Publishers

However, the recently adopted copyright implementation proposal by the German Federal Cabinet has abandoned the focus on user rights in favor of inflexible rules that only benefit press publishers. Instead of opting for broad and fair statutory authorization for non-commercial minor uses, the German government suggests trivial carve-outs for "uses presumably authorized by law," which are not supposed to be blocked automatically by online platforms. However, the criteria for such uses are narrow and out of touch with reality. For example, the limit for minor use of text is 160 characters.

By comparison, the maximum length of a tweet is 280 characters, which is barely enough substance for a proper quote. As those uses are only presumably authorized, they can still be disputed by rightsholders and blocked at a later stage if they infringe copyright. However, this did not prevent the German government from putting a price tag on such communication as service providers will have to pay the author an "appropriate remuneration." There are other problematic elements in the proposal, such as the plan to limit the use of parodies to uses that are "justified by the specific purpose"—so better be careful about being too playful.

The German Parliament Can Improve the Bill

It's now up to the German Parliament to decide whether to be more interested in the concerns of press publishers or in the erosion of user rights and freedoms. EFF will continue to reach out to Members of Parliament to help them make the right decision.
