Throughout our nation’s history—most notably during the era of civil rights activism—those participating in social movements challenging the status quo have enjoyed First Amendment protections to freely associate with others in advocating for the causes they believe in. This right is directly tied to our ability to keep private which organizations we choose to join or support financially. Forcing organizations to hand membership or donor lists to the state threatens First Amendment activities and suppresses dissent, because those named, facing harassment or worse, must choose between staying safe and speaking out.
In a California case over donor disclosures, we’ve urged the Supreme Court to apply this important principle and ensure that the bar for public officials seeking information about people’s political and civic activities is sufficiently high. In an amicus brief filed last week, EFF, along with four other free speech advocacy groups, asked the court to compel the California Attorney General to better justify the state’s requirement that nonprofits turn over the names of their major donors.
The U.S. Court of Appeals for the Ninth Circuit in 2018 upheld California’s charitable donation reporting requirement, under which nonprofits must give state officials the names and addresses of their largest donors. The court, ruling in Americans For Prosperity Foundation v. Becerra, rejected arguments that the requirement infringes on donors’ First Amendment right to freely associate with others, and said the plaintiffs hadn’t shown specific evidence to back up claims that donors would be threatened or harassed if their names were disclosed.
The decision goes against years of Supreme Court precedent requiring the government, whether or not there’s direct evidence of harassment, to show it has a compelling interest justifying donor disclosure requirements that can divulge people’s political activities. Joined by the Freedom to Read Foundation, the National Coalition Against Censorship, the People United for Privacy Foundation, and Woodhull Freedom Foundation, we urged the Supreme Court to overturn the Ninth Circuit decision and rule that “exacting scrutiny” applies to any donor disclosure mandate by the government. By that we mean the government must show its interest is sufficiently important and the requirement carefully crafted to infringe as little as possible on donors’ First Amendment rights.
Even where there’s no specific evidence that donors are being harassed or groups can’t attract funders, the court has found, states wishing to intrude on Americans’ right to keep their political associations private must always demonstrate a compelling state interest in obtaining the information.
This principle was at the center of the Supreme Court’s unanimous landmark 1958 decision blocking Alabama from forcing the NAACP to turn over the names and addresses of its members. The court never questioned the NAACP’s concerns about harassment and retaliation, let alone suggested that the organization had the burden of making some threshold showing confirming the nature or specificity of its concerns. The Ninth Circuit said California’s disclosure requirement posed minimal First Amendment harms because the Attorney General must keep the donor names confidential. It faulted the plaintiffs for not producing evidence that donors would be harassed if their names were revealed and for not identifying donors whose willingness to contribute hinged on whether their identities would be disclosed by the Attorney General.
The court is wrong on both counts.
First, pledging to keep the names confidential doesn’t eliminate the requirement’s speech-chilling effects, we said in our brief. Groups that challenge or oppose state policies have legitimate fears that members and donors, or their businesses, could become targets of harassment or retaliation by the government itself. It’s easy to imagine that a Black Lives Matter organization, or an organization assisting undocumented immigrants at the border, would have justifiable concerns about turning their donor or membership information over to the government, regardless of whether the government shares that information with anyone else. If allowed to stand, the Ninth Circuit’s decision gives the government unchecked power to collect information on people’s political associations.
Second, the burden is on the government to show it has a compelling interest connected to the required information before forcing disclosures that could put people in harm’s way. As we stated in our brief: “Speaking out on contentious issues creates a very real risk of harassment and intimidation by private citizens and critically by the government itself. Furthermore, numerous contemporary issues—ranging from the Black Lives Matter movement, to gender identity, to immigration—arouse significant passion by people with many divergent beliefs. Thus, now, as much as any time in our nation’s history, it is necessary for individuals to be able to express and promote their viewpoints through associational affiliations without personally exposing themselves to a political firestorm or even governmental retaliation.”
The precedent established by this case will affect the associational rights of civil rights and civil liberties groups across the country. We urge the Supreme Court to affirm meaningful protections that nonprofits and their members and contributors need from government efforts to make them hand over donor or member lists.
Hailey Rodis, a student at the University of Nevada, Reno Reynolds School of Journalism, was the primary researcher on this report. We extend our gratitude to the dozens of other UNR students and volunteers who contributed data on campus police to the Atlas of Surveillance project. The report will be updated periodically with responses from university officials. These updates will be noted in the text.
It may be many months before college campuses across the U.S. fully reopen, but when they do, many students will be returning to a learning environment that is under near constant scrutiny by law enforcement.
Fear of school shootings and other campus crimes has led administrators and campus police to install sophisticated surveillance systems that go far beyond run-of-the-mill security camera networks to include drones, gunshot detection sensors, and much more. Campuses have also adopted automated license plate readers, ostensibly to enforce parking rules, but often that data feeds into the criminal justice system. Some campuses use advanced biometric software to verify whether students are eligible to eat in the cafeteria. Police have even adopted new technologies to investigate activism on campus. Often, there is little or no justification for why a school needs such technology, other than novelty or asserted convenience.
In July 2020, the Electronic Frontier Foundation and the Reynolds School of Journalism at University of Nevada, Reno launched the Atlas of Surveillance, a database of now more than 7,000 surveillance technologies deployed by law enforcement agencies across the United States. In the process of compiling this data we noticed a peculiar trend: college campuses are acquiring a surprising number of surveillance technologies more common to metropolitan areas that experience high levels of violent crime.
So, we began collecting data from universities and community colleges using a variety of methods, including running specific search terms across .edu domains and assigning small research tasks to a large number of students using EFF's Report Back tool. We documented more than 250 technology purchases, ranging from body-worn cameras to face recognition, adopted by more than 200 universities in 37 states. As big as these numbers are, they are only a sliver of what is happening on college campuses around the world.
- Body-worn cameras
- Automated License Plate Readers
- Social Media Monitoring
- Biometric Identification
- Gunshot Detection
- Video Analytics
Maybe your school has a film department, but the most prolific cinematographers on your college campus are probably the police.
Since the early 2010s, body-worn cameras (BWCs) have become more and more common in the United States. This holds true for law enforcement agencies on university and college campuses. These cameras are attached to officers’ uniforms (often the chest or shoulder, but sometimes head-mounted) and capture interactions between police and members of the public. While BWC programs are often pitched as an accountability measure to reduce police brutality, in practice these cameras are more often used to capture evidence later used in prosecutions.
Policies on these cameras vary from campus to campus—such as whether a camera should be always recording, or only during certain circumstances. But students and faculty should be aware that any interaction, or even near-interaction, with a police officer could be on camera. That footage could be used in a criminal case, but in many states, journalists and members of the public are also able to obtain BWC footage through an open records request.
Aside from your run-of-the-mill, closed-circuit surveillance camera networks, BWCs were the most prevalent technology we identified in use by campus police departments. This isn't surprising, since researchers have observed similar trends in municipal law enforcement. We documented 152 campus police departments using BWCs, but as noted, this is only a fraction of what is being used throughout the country. One of the largest rollouts began last summer when Pennsylvania State University announced that police on all 22 campuses would start wearing the devices.
One of the main ways that universities have purchased BWCs is through funding from the U.S. Department of Justice's Bureau of Justice Assistance. Since 2015, more than 20 universities and community colleges have received funds through the bureau's Body-Worn Camera Grant Program established during the Obama administration. In Oregon, these funds helped the Portland State University Police Department adopt the technology well ahead of their municipal counterparts. PSU police received $20,000 in 2015 for BWCs, while the Portland Police Department does not use BWCs at all (Portland PD's latest attempt to acquire them in 2021 was scuttled due to budget concerns).

Drones
Drones, also known as unmanned aerial vehicles (UAVs), are remote-controlled flying devices that can be used to surveil crowds from above or locations that would otherwise be difficult or dangerous to observe by a human on the ground. On many campuses, drones are purchased for research purposes, and it's not unusual to see a quadrotor (a drone with four propellers) buzzing around the quad. However, campus police have also purchased drones for surveillance and criminal investigations.
Our data, which was based on a study conducted by the Center for the Study of The Drone at Bard College, identified 10 campus police departments that have drones:
- California State University, Monterey Bay Police Department
- Colorado State University Police Department
- Cuyahoga Community College Police Department
- Lehigh University Police Department
- New Mexico State University Police Department
- Northwest Florida State College Campus Police Department
- Pennsylvania State University Police Department
- University of Alabama, Huntsville Police Department
- University of Arkansas, Fort Smith Police Department
- University of North Dakota Police Department
One of the earliest campus drone programs originated at the University of North Dakota, where the campus police began deploying a drone in 2012 as part of a regional UAV unit that also included members of local police and sheriffs' offices. According to UnmannedAerial.com, the unit moved from a "reactive" to a "proactive" approach in 2018, allowing officers to carry drones with them on patrol, rather than retrieving them in response to specific incidents.
The Northwest Florida State College Campus Police Department was notable for acquiring the most drones. While most universities had one, Northwest Florida State College police began using four drones in 2019, primarily to aid in searching for missing people, assessing traffic accidents, photographing crime scenes, and mapping evacuation routes.
The New Mexico State University Police Department launched its drone program in 2017 and, with the help of a local Eagle Scout in Las Cruces, built a drone training facility for local law enforcement in the region. In response to a local resident who wrote on Facebook that the program was unnerving, an NMSU spokesperson wrote in 2019:
[The program] thus far has been used to investigate serious traffic crashes (you can really see the skid marks from above), search for people in remote areas, and monitor traffic conditions at large events. They aren't very useful for monitoring campus residents (even if we wanted to, which we don't), since so many stay inside.
Not all agencies have taken such a limited approach. The Lehigh University Police Department acquired a drone in 2015, and equipped it with a thermal imaging camera. Police Chief Edward Shupp told a student journalist at The Brown and White that the only limits on the drone are Federal Aviation Administration regulations, that there are no privacy regulations for officers to follow, and that the department can use the drones "for any purpose" on and off campus.
Even when a university police department does not have its own drones, it may seek help from other local law enforcement agencies. Such was the case in 2017, when the University of California Berkeley Police Department requested drone assistance from the Alameda County Sheriff's Office to surveil protests on campus.

Automated License Plate Readers
Students and faculty may complain about the price tag of parking passes, but there is also an unseen cost of driving on campus: privacy.
Automated license plate readers (ALPRs) are cameras attached to fixed locations or to security or parking patrol cars that capture every license plate that passes. The data is then uploaded to searchable databases with the time, date, and GPS coordinates. Through our research, we identified ALPRs at 49 universities and colleges throughout the country.
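To make that data flow concrete, here is a minimal sketch of what an ALPR read record and a plate lookup might look like. The schema, field names, and sample data are hypothetical illustrations, not drawn from any vendor's actual system.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class PlateRead:
    plate: str          # normalized plate string, e.g. "8ABC123"
    timestamp: datetime # when the camera captured the plate
    lat: float          # GPS latitude of the camera or patrol vehicle
    lon: float          # GPS longitude

def plates_seen(reads, plate):
    """Return every time and place a given plate was recorded."""
    return [(r.timestamp, r.lat, r.lon) for r in reads if r.plate == plate]

reads = [
    PlateRead("8ABC123", datetime(2018, 9, 1, 8, 30, tzinfo=timezone.utc), 39.39, -76.61),
    PlateRead("7XYZ999", datetime(2018, 9, 1, 8, 31, tzinfo=timezone.utc), 39.39, -76.61),
    PlateRead("8ABC123", datetime(2018, 9, 1, 17, 2, tzinfo=timezone.utc), 39.40, -76.60),
]

# Two sightings of the same plate in one day already sketch a daily routine
print(plates_seen(reads, "8ABC123"))
```

The privacy concern is visible even in this toy version: a simple query over accumulated reads reconstructs a driver's movements over time.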
ALPRs are used in two main capacities on college campuses. First, transportation and parking divisions have begun using ALPRs for parking enforcement, either attaching the cameras to parking enforcement vehicles or installing cameras at the entrances and exits to parking lots and garages. For example, the University of Connecticut Parking Services uses NuPark, a system that uses ALPRs to manage virtual permits and citations.
Second, campus police are using ALPRs for public safety purposes. The Towson University Police Department in Maryland, for example, scanned over 3 million license plates using automated license plate readers in 2018 and sent that data to the Maryland Coordination and Analysis Center, a fusion center operated by the Maryland State Police. The University has a total of 6 fixed ALPR sites, with 10 cameras and one mobile unit.
These two uses are not always separate: in some cases, parking officials share data with their police counterparts. At Florida Atlantic University, ALPRs are used for parking enforcement, but the police department also has access to this technology through their Communications Center, which monitors all emergency calls to the department, as well as fire alarms, intrusion alarms, and panic alarm systems. In California, the San Jose/Evergreen Community College District Police Department shared* ALPR data with its regional fusion center, the Northern California Regional Intelligence Center.
* March 10, 2021 Update: A spokesperson from San Jose/Evergreen Community College emailed this information: "While it is true that SJECCD did previously purchase two LPR devices, we never licensed the software that would allow data to be collected and shared, so no data from SJECCD’s LPR devices was ever shared with the Northern California Regional Intelligence Center. Further, the MOU that was signed with NCRIC expired in 2018 and was not renewed, so there is no existing MOU between SJECCD and the agency." We have updated the piece to indicate that the ALPR data sharing occurred in the past.

Social Media Monitoring
Colleges and universities are also watching their students on social media, and it is not just to retweet or like a cute Instagram post about your summer internship. Campus public safety divisions employ social media software, such as Social Sentinel, to look for possible threats to the university, such as posts where students indicate suicidal ideation or threats of gun violence. We identified 21 colleges that use social media monitoring to watch their students and surrounding community for threats. This does not include higher education programs to monitor social media for marketing purposes.
This technology is used for public safety by both private and public universities. The Massachusetts Institute of Technology has used Social Sentinel since 2015, while the Des Moines Area Community College Campus Security spent $15,000 on Social Sentinel software in 2020.
Social media monitoring technology may also be used to monitor students' political activities. Social Sentinel software was used to watch activists at the University of North Carolina who were protesting Silent Sam, a Confederate memorial on campus. As NBC reported, UNC Police and the North Carolina State Bureau of Investigation used a technique called "geofencing" to monitor the social media of people in the vicinity of the protests.
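The mechanics of a location filter like this are simple: given a post's geotag, check whether it falls within some radius of a point of interest. A rough sketch using the haversine great-circle distance formula; the coordinates and radius below are invented for illustration.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two lat/lon points."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def in_geofence(post_lat, post_lon, center_lat, center_lon, radius_m):
    """True if a geotagged post falls inside the circular fence."""
    return haversine_m(post_lat, post_lon, center_lat, center_lon) <= radius_m

# Hypothetical fence: 300 m around a campus quad
center = (35.9132, -79.0512)
print(in_geofence(35.9135, -79.0510, *center, 300))  # a few dozen meters away
print(in_geofence(35.9300, -79.0512, *center, 300))  # roughly 1.9 km away
```

Anyone who posts publicly with location services enabled while near the fenced area can be swept into such a query, whether or not they participated in the protest.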
"This information was monitored in an attempt to prevent any potential acts of violence (such as those that have occurred at other public protests around the country, including Charlottesville) and to ensure the safety of all participants," a law enforcement spokesperson told NBC, adding that investigators only looked at public-facing posts and no records of the posts were kept after the event. However, the spokesperson declined to elaborate on how the technology may have been used at other public events.

Biometric Identification
When we say that a student body is under surveillance, we also mean that literally. The term “biometrics” refers to physical and behavioral characteristics (your body and what you do with it) that can be used to identify you. Fingerprints are among the types of biometrics most familiar to people, but police agencies around the country are adopting computer systems capable of identifying people using face recognition and other sophisticated biometrics.
At least four police departments at universities in Florida–University of South Florida, University of North Florida, University of Central Florida, and Florida Atlantic University–have access to a statewide face recognition network called Face Analysis Comparison and Examination System (FACES), which is operated by the Pinellas County Sheriff's Office. Through FACES, investigators can upload an image and search a database of Florida driver’s license photos and mugshots.
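Under the hood, systems of this kind typically reduce each face image to a numeric "embedding" and return the gallery entries most similar to the probe image. A toy sketch of that matching step follows; the 4-dimensional vectors and record names are made up, standing in for the 128-plus-dimensional embeddings real systems compute from photos.

```python
import math

def cosine_sim(a, b):
    """Cosine similarity between two equal-length vectors (1.0 = identical direction)."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Hypothetical gallery: embeddings keyed by source record
gallery = {
    "license_001": [0.1, 0.9, 0.3, 0.0],
    "mugshot_417": [0.8, 0.1, 0.2, 0.5],
}
probe = [0.12, 0.88, 0.31, 0.02]  # embedding of the uploaded image

best = max(gallery, key=lambda k: cosine_sim(probe, gallery[k]))
print(best)  # the closest match in the gallery
```

Note that the system always returns *some* nearest match; whether that match is actually the same person is a separate question, which is why false positives are a persistent concern with face recognition searches.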
University of Southern California in Los Angeles confirmed to The Fix that its public safety department uses face recognition; however, until recently the practice was more prevalent in the San Diego, California area.
In San Diego, at least five universities and college campuses participated in a face recognition program involving mobile devices. San Diego State University stood out for having conducted more than 180 face recognition searches in 2018. However, in 2019, this practice was suspended in California under a three-year statewide moratorium.
Faces aren't the only biometric being scanned. In 2017, the University of Georgia introduced iris scanning stations in dining halls, encouraging students to check-in with their eyes to use their meal plans. This replaced an earlier program requiring hand scans, another form of biometric identification.

Gunshot Detection
Gunshot detection is a technology that involves installing acoustic sensors (essentially microphones) around a neighborhood or building. When a loud noise goes off, such as a gunshot or a firework, the sensors attempt to determine the location and then police receive an alert.
Universities and colleges have begun using this technology in part as a response to fears of campus shootings. However, these technologies often are not as accurate as their sellers claim and could result in dangerous confrontations based on errors. Also, these devices can capture human voices engaged in private conversations, and prosecutors have attempted to use such recordings in court.
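The localization step usually relies on time differences of arrival (TDOA): because the moment the shot was fired is unknown, the system compares when each sensor heard the sound relative to the others. The brute-force grid search below is a simplified sketch under assumed conditions (hypothetical sensor layout, no noise, constant speed of sound); commercial products use far more sophisticated solvers.

```python
import math

SPEED_OF_SOUND = 343.0  # meters/second in air at roughly 20 C

# Hypothetical sensor positions (meters) on a 400 m x 400 m campus grid
sensors = [(0, 0), (400, 0), (0, 400), (400, 400)]

def arrival_times(src):
    """Time for sound from src to reach each sensor (shot fired at t=0)."""
    return [math.dist(src, s) / SPEED_OF_SOUND for s in sensors]

def locate(times, step=5):
    """Grid search for the point whose predicted arrival-time differences
    (relative to sensor 0) best match the observed differences."""
    best, best_err = None, float("inf")
    for x in range(0, 401, step):
        for y in range(0, 401, step):
            pred = arrival_times((x, y))
            # Differencing against sensor 0 cancels the unknown shot time
            err = sum(((pred[i] - pred[0]) - (times[i] - times[0])) ** 2
                      for i in range(1, len(sensors)))
            if err < best_err:
                best, best_err = (x, y), err
    return best

observed = arrival_times((120, 250))  # simulate a shot at a known spot
print(locate(observed))               # the grid search recovers (120, 250)
```

The same sensitivity that lets the sensors hear a gunshot a block away is what raises the privacy concern: microphones dense and sensitive enough for TDOA can also pick up nearby voices.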
Our dataset has identified eight universities and colleges that have purchased gunshot-detection technology:
- East Carolina University Police Department
- Hampton University Police Department
- Truett McConnell University Campus Safety Department
- University of California San Diego Police Department
- University of Connecticut Police Department
- University of Maryland Police Department
- University of West Georgia Police Department
- Georgia Tech Police Department
Some universities and colleges purchase their own gunshot detection technology, while others have access to the software through partnerships with other law enforcement agencies. For example, the Georgia Tech Police Department has access to gunshot detection through the Fūsus Real-Time Crime Center. The University of California San Diego Police Department, on the other hand, installed its own ShotSpotter gunshot detection technology on campus in 2017.
When a university funds surveillance technology, it can impact the communities nearby. For example, University of Nevada, Reno journalism student Henry Stone obtained documents through Nevada's public records law showing that UNR Cooperative Extension spent $500,000 in 2017 to install and operate ShotSpotter sensors in a 3-mile impoverished neighborhood of Las Vegas. The system is controlled by the Las Vegas Metropolitan Police Department.

Video Analytics
While most college campuses employ some sort of camera network, we identified two particular universities that are applying for extra credit in surveilling students: the University of Miami Police Department in Florida and Grand Valley State University Department of Public Safety in Michigan. These universities apply advanced software—sometimes called video analytics or computer vision—to camera footage, using algorithms to achieve round-the-clock monitoring that no team of officers watching screens could match. Often employing artificial intelligence, video analytics systems can track objects and people from camera to camera, identify patterns and anomalies, and potentially conduct face recognition.
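The "anomaly" piece of such systems can be as simple as flagging activity levels that deviate sharply from a baseline. A toy illustration follows; the counts, the z-score approach, and the threshold are all invented for the example, and commercial video analytics are far more elaborate.

```python
from statistics import mean, stdev

def flag_anomalies(hourly_counts, z_threshold=2.5):
    """Return indices of hours whose activity count deviates from the
    mean by more than z_threshold sample standard deviations."""
    mu, sigma = mean(hourly_counts), stdev(hourly_counts)
    return [i for i, c in enumerate(hourly_counts)
            if sigma > 0 and abs(c - mu) / sigma > z_threshold]

# Hypothetical per-hour counts of people crossing one camera's view;
# hour 7 has a sudden spike (say, a crowd forming)
counts = [5, 6, 4, 5, 7, 6, 5, 48, 6, 5]
print(flag_anomalies(counts))  # flags the spike at index 7
```

Note what the algorithm cannot tell you: a flagged "anomaly" might be an emergency, but it might equally be a vigil, a protest, or a club meeting—which is exactly why automated flagging of unusual gatherings raises First Amendment concerns on a campus.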
Grand Valley State University began using Avigilon video analytics technology in 2018. The University of Miami Police Department uses video analytics software combined with more than 1,300 cameras.
Three university police departments in Maryland also maintain lists of cameras owned by local residents and businesses. With these camera registries, private parties are asked to voluntarily provide information about the location of their security cameras, so that police can access or request footage during investigations. The University of Maryland, Baltimore Police Department, the University of Maryland, College Park Police Department and the Johns Hopkins University Campus Police are all listed on Motorola Solutions' CityProtect site as maintaining such camera registries.
Two San Francisco schools—UC Hastings School of Law and UC San Francisco—explored leasing Knightscope surveillance robots in 2019 and 2020 to patrol their campuses, though the plans seem to have been scuttled by COVID-19. The robots are equipped with cameras, artificial intelligence, and, depending on the model, the ability to capture license plate data, conduct facial recognition, or recognize nearby phones.

Conclusion
Universities in the United States pride themselves on the free exchange of ideas and the ability for students to explore different concepts and social movements over the course of their academic careers. Unfortunately, for decades upon decades, police and intelligence agencies have also spied on students and professors engaged in social movements. High-tech surveillance only exacerbates the threat to academic freedom.
Around the country, cities are pushing back against surveillance by passing local ordinances requiring a public process and governing body approval before a police agency can acquire a new surveillance technology. Many community colleges do have elected bodies, and we urge these policymakers to enact similar policies to ensure adequate oversight of police surveillance.
However, these kinds of policy-making opportunities often aren't available to students (or faculty) at state and private universities, whose leadership is appointed, not elected. We urge student and faculty associations to press their police departments to limit the types of data collected on students and to ensure a rigorous oversight process that allows students, faculty, and other staff to weigh in before decisions are made to adopt technologies that can harm their rights.
EFF, ACLU and EPIC File Amicus Brief Challenging Warrantless Cell Phone Search, Retention, and Subsequent Search
Last week, EFF—along with the ACLU and EPIC—filed an amicus brief in the Wisconsin Supreme Court challenging a series of warrantless digital searches and seizures by state law enforcement officers: the search of a person’s entire cell phone, the retention of a copy of the data on the phone, and the subsequent search of the copy by a different law enforcement agency. Given the vast quantity of private information on an ordinary cell phone, the police’s actions in this case, State v. Burch, pose a serious threat to digital privacy, violating the Fourth Amendment’s core protection against “giving police officers unbridled discretion to rummage at will among a person’s private effects.”

The Facts
In June 2016, the Green Bay Police Department was investigating a hit-and-run accident and vehicle fire. Since Burch had previously driven the vehicle at issue, the police questioned him. Burch provided an alibi involving text messages with a friend who lived near the location of the incident. To corroborate his account, Burch agreed to an officer’s request to look at those text messages on his cell phone. But, despite initially only asking for the text messages, the police used a sophisticated mobile device forensic tool to copy the contents of the entire phone. Then about a week later, after reviewing the cell phone data, a Green Bay Police officer wrote a report that ruled Burch out as a suspect, finding that there was “no information to prove [Burch] was the one driving the [vehicle] during the [hit-and-run] accident.”
But that’s not where things end. Also in the summer of 2016, a separate Wisconsin police agency, the Brown County Sheriff’s Office, was investigating a homicide. And in August, Burch became a suspect in that case. In the course of that investigation, the Brown County Sheriff's Office learned that the Green Bay Police Department had kept the download of Burch’s cell phone and obtained a copy of it. The Brown County Sheriff’s Office then used information on the phone to charge Burch with the murder.
Burch was ultimately convicted but argued that the evidence from his cell phone should have been suppressed on Fourth Amendment grounds. Last fall, a Wisconsin intermediate appellate court certified Burch’s Fourth Amendment challenge to the Wisconsin Supreme Court, writing that the “issues raise novel questions regarding the application of Fourth Amendment jurisprudence to the vast array of digital information contained in modern cell phones.” In December, the Wisconsin Supreme Court decided to review the case and asked the parties to address six specific questions related to the search and retention of the cell phone data.

The Law
In a landmark ruling in Riley v. California, the U.S. Supreme Court established the general rule that police must get a warrant to search a cell phone. However, there are certain narrow exceptions to the warrant requirement, including when a person consents to the search of a device. While Burch did consent to a limited search of his phone, that did not provide law enforcement limitless authority to search and retain a copy of his entire phone.
Specifically, in our brief, we argue that the state committed multiple independent violations of Burch’s Fourth Amendment rights. First, since Burch only consented to the search of his text messages, it was unlawful for the Green Bay police to copy his entire phone. And even if his consent extended beyond his text messages, he did not give the police the authority to search information on his phone having nothing to do with the initial investigation. Next, regardless of the extent of Burch’s consent, after the police determined Burch was no longer a suspect, the state lost virtually all justification in retaining Burch’s private information and should have returned it to him or purged it. Lastly, since the state had no compelling legal justification to hold Burch’s data after closing the initial investigation on him, the Brown County Sheriff’s warrantless search of the data retained by the Green Bay police was blatantly unlawful.

The Privacy Threat at Stake
The police’s actions here are not an outlier. In a recent investigative report, Upturn found that law enforcement in all fifty states have access to the type of mobile forensic tools the police employed in this case. And although consent is a recognized exception to the rule that warrants are required for cell phone searches, Upturn’s study reveals that police rely on warrant exceptions like consent to use those tools at an alarming rate. For example, 53% of the 1,583 cell phone extractions performed by the Harris County, Texas Sheriff’s Office from August 2015 to July 2019 were conducted without a warrant, including searches based on consent and searches of phones the police classified as “abandoned/deceased.” Additionally, 38% of the 497 cell phone extractions performed in Anoka County, Minnesota between 2017 and May 2019 were consent searches.
In light of both how common consent-based searches are and their problematic nature (as a recent EFF post explains), the implications of the state’s core argument are all the more troubling. In the state’s view, no one—including suspects, witnesses, and victims—who consents to a search of their digital device in the context of one investigation could prevent law enforcement from storing a copy of their entire device in a database that could be mined years into the future, for any reason the government sees fit.
The state’s arguments would erase the hard-fought protections for digital data recognized in cases like Riley. The Wisconsin Supreme Court should recognize that consent does not authorize the full extraction, indefinite retention, and subsequent search of a person’s cell phone.
The coronavirus pandemic, its related stay-at-home orders, and its economic and social impacts have illustrated how important robust broadband service is to everything from home-based work to education. Yet, even now, many communities across America have been unable to meet their residents’ telecommunication needs. This is because of two problems: disparities in access to services that exacerbate race and class inequality—the digital divide—and the overwhelming lack of competition in service providers. At the heart of both problems is the current inability of public entities to provide their own broadband services.
This is why EFF joined a coalition of private-sector companies and organizations to support H.B. 1336, authored by Washington State Representative Drew Hansen. This bill would remove restrictions in current Washington law preventing public entities from building and providing broadband services. In removing these restrictions, Hansen’s bill would allow public entities to create and implement broadband policy based on the needs of the people they serve, and provide services unconstrained and not beholden to big, unreliable ISPs.
Washington: Demand Reliable Internet for Everyone
There are already two examples of community-provided telecommunications services showing what removing these constraints could do. Chattanooga, Tennessee has been operating a profitable municipal broadband network for 10 years and, in response to the pandemic, had the capacity to provide 18,000 school children with free 100/100 Mbps connections so they could continue to learn. In Utah, 11 cities joined together to build an open-access fiber network that not only brought competitively priced high-speed fiber to residents but also gave them over a dozen choices of provider, many of them small businesses. This multi-city partnership has been so successful that it added two new cities to the network in 2020.
The pandemic made it abundantly clear that communication services and capabilities are the platform, driver, and enabler of all that matters in communities. It is equally clear that monopolistic ISPs have failed to meet the needs of communities. H.B. 1336 would correct that failure by allowing public entities to address the concerns and needs of the people they serve. If you are a Washington resident, please urge your lawmakers to support this bill. Broadband access is vitally important now and beyond the pandemic. This bill would not only loosen the hold of monopolistic ISPs, but also give everyone a chance at faster service to participate meaningfully in an increasingly digital world.
The FBI Should Stop Attacking Encryption and Tell Congress About All the Encrypted Phones It’s Already Hacking Into
Federal law enforcement has been asking for a backdoor to read Americans’ encrypted communications for years now. FBI Director Christopher Wray did it again last week in testimony to the Senate Judiciary Committee. As usual, the FBI’s complaints involved end-to-end encryption employed by popular messaging platforms, as well as the at-rest encryption of digital devices, which Wray described as offering “user-only access.”
The FBI wants these terms to sound scary, but they actually describe security best practices. End-to-end encryption is what allows users to exchange messages without having them intercepted and read by repressive governments, corporations, and other bad actors. And “user-only access” is actually a perfect encapsulation of how device encryption should work; otherwise, anyone who got their hands on your phone or laptop—a thief, an abusive partner, or an employer—could access its most sensitive data. When you intentionally weaken these systems, it hurts our security and privacy, because there’s no magical kind of access that only works for the good guys. If Wray gets his special pass to listen in on our conversations and access our devices, corporations, criminals, and authoritarians will be able to get the same access.
It’s remarkable that Wray keeps getting invited to Congress to sing the same song. Notably, Wray was invited there to talk, in part, about the January 6th insurrection, a serious domestic attack in which the attackers—far from being concerned about secrecy—proudly broadcast many of their crimes, resulting in hundreds of arrests.
It’s also remarkable what Wray, once more, chose to leave out of this narrative. While Wray continues to express frustration about what his agents can’t get access to, he fails to brief Senators about the shocking frequency with which his agency already accesses Americans’ smartphones. Nevertheless, the scope of police snooping on Americans’ mobile phones is becoming clear, and it’s not just the FBI who is doing it. Instead of inviting Wray up to Capitol Hill to ask for special ways to invade our privacy and security, Senators should be asking Wray about the private data his agents are already trawling through.
Police Have An Incredible Number of Ways to Break Into Encrypted Phones
In all 50 states, police are breaking into phones on a vast scale. An October report from the non-profit Upturn, “Mass Extraction,” has revealed details of how invasive and widespread police hacking of our phones has become. Police can easily purchase forensic tools that extract data from nearly every popular phone. In March 2016, Cellebrite, a popular forensic tool company, supported “logical extractions” for 8,393 different devices, and “physical extractions,” which involve copying all the data on a phone bit-by-bit, for 4,254 devices. Cellebrite can bypass lock screens on about 1,500 different devices.
How do they bypass encryption? Often, they just guess the password. In 2018, Prof. Matthew Green estimated it would take no more than 22 hours for forensic tools to break into some older iPhones with a 6-digit passcode simply by continuously guessing passwords (i.e. “brute-force” entry). A 4-digit passcode would fail in about 13 minutes.
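The arithmetic behind those estimates is easy to sketch. In this illustrative snippet, the guess rate is an assumption inferred from Green’s 22-hour figure (roughly a million 6-digit codes in 22 hours), not a measured specification of any device:

```python
# Back-of-the-envelope passcode brute-force estimate. The guess rate is
# inferred from Prof. Green's 2018 figures (~10^6 codes in 22 hours);
# it is an illustrative assumption, not a measured spec for any device.
GUESSES_PER_SECOND = 10**6 / (22 * 3600)  # ~12.6 guesses per second

def worst_case_hours(digits):
    """Hours needed to try every numeric passcode of the given length."""
    return 10**digits / GUESSES_PER_SECOND / 3600

print(f"4-digit passcode: {worst_case_hours(4) * 60:.0f} minutes")  # ~13 minutes
print(f"6-digit passcode: {worst_case_hours(6):.0f} hours")         # ~22 hours
```

Each extra digit multiplies the worst-case time by ten, which is why the move from 4-digit to 6-digit passcodes mattered, and why alphanumeric passphrases resist this attack far better.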
That brute force guessing was enabled by a hardware flaw that has been fixed since 2018, and the rate of password guessing is much more limited now. But even as smartphone companies like Apple improve their security, device hacking remains very much a cat-and-mouse game. As recently as September 2020, Cellebrite marketing materials boasted that its tools can break into iPhone devices up to “the latest iPhone 11 / 11 Pro / Max running the latest iOS versions up to the latest 13.4.1.”
Even when passwords can’t be broken, vendors like Cellebrite offer “advanced services” that can unlock even the newest iOS and Samsung devices. Upturn research suggests the base price on such services is $1,950, but it can be cheaper in bulk.
Buying electronic break-in technology on a wholesale basis represents the best deal for police departments around the U.S., and they avail themselves of these bargains regularly. In 2018, the Seattle Police Department purchased 20 such “actions” from Cellebrite for $33,000, allowing them to extract phone data within weeks or even days. Law enforcement agencies that want to unlock phones en masse can bring Cellebrite’s “advanced unlocking” in-house, for prices that range from $75,000 to $150,000.
That means for most police departments, breaking into phones isn’t just convenient, it’s relatively inexpensive. Even a mid-sized police department like Virginia Beach, VA has a police budget of more than $100 million; New York City’s police budget is over $5 billion. The FBI’s 2020 budget request is about $9 billion.
When the FBI says it’s “going dark” because it can’t beat encryption, what it’s really asking for is a method of breaking in that’s cheaper, easier, and more reliable than the methods they already have. The only way to fully meet the FBI’s demands would be to require a backdoor in all platforms, applications, and devices. Especially at a time when police abuses nationwide have come into new focus, this type of complaint should be a non-starter with elected officials. Instead, they should be questioning how and why police are already dodging encryption. These techniques aren’t just being used against criminals.
Phone Searches By Police Are Widespread and Commonplace
Upturn has documented more than 2,000 agencies across the U.S. that have purchased products or services from mobile device forensic tool vendors, including every one of the 50 largest police departments, and at least 25 of the 50 largest sheriffs’ offices.
Law enforcement officials like Wray want to convince us that encryption needs to be bypassed or broken for threats like terrorism or crimes against children, but in fact, Upturn’s public records requests show that police use forensic tools to search phones for everyday low-level crimes. Even when police don't need to bypass encryption—such as when they convince someone to "consent" to the search of a phone and unlock it—these invasive police phone searches are used “as an all-purpose investigative tool, for an astonishingly broad array of offenses, often without a warrant,” as Upturn put it.
The 44 law enforcement agencies that provided records to Upturn revealed at least 50,000 extractions of cell phones between 2015 and 2019. And there’s no question that this number is a “severe undercount”: it covers only 44 agencies, when at least 2,000 agencies have the tools. Many of the largest police departments, including New York, Chicago, Washington D.C., Baltimore, and Boston, either denied Upturn’s record requests or did not respond.
“Law enforcement… use these tools to investigate cases involving graffiti, shoplifting, marijuana possession, prostitution, vandalism, car crashes, parole violations, petty theft, public intoxication, and the full gamut of drug-related offenses,” Upturn reports. In Suffolk County, NY, 20 percent of the phones searched by police were for narcotics cases. Authorities in Santa Clara County, CA, San Bernardino County, CA, and Fort Worth, TX all reported that drug crimes were among the most common reasons for cell phone data extractions. Here are just a few examples of the everyday offenses in which Upturn found police searched phones:
- In one case, police officers sought to search two phones for evidence of drug sales after a $220 undercover marijuana bust.
- Police stopped a vehicle for a “left lane violation,” then “due to nervousness and inconsistent stories, a free air sniff was conducted by a … K9 with positive alert to narcotics.” The officers found bags of marijuana in the car, then seized eight phones from the car’s occupants, and sought to extract data from them for “evidence of drug transactions.”
- Officers looking for a juvenile who allegedly violated terms of his electronic monitoring found him after a “short foot pursuit” in which the youngster threw his phone to the ground. Officers sought to search the phone for evidence of “escape in the second degree.”
And these searches often take place without judicial warrants, despite the U.S. Supreme Court’s clear ruling in Riley v. California that a warrant is required to search a cell phone. That’s because police frequently abuse rules around so-called consent searches. These types of searches are widespread, but they’re hardly consensual. In January, we wrote about how these so-called “consent searches” are extraordinary violations of our privacy.
Forensic searches of cell phones are increasingly common. The Las Vegas police, for instance, examined 260% more cell phones in 2018-2019 compared with 2015-2016.
The searches are often overbroad, as well. It’s not uncommon for data unrelated to the initial suspicions to be copied, kept, and used for other purposes later. For instance, police can deem unrelated data “gang related” and keep it in a “gang database”; such databases often have vague standards for inclusion. Being placed in such a database can easily affect people’s future employment options. Many police departments don’t have any policies in place about when forensic phone-searching tools can be used.
It’s Time for Oversight On Police Phone Searches
Rather than listening to a litany of requests for special access to personal data from federal agencies like the FBI, Congress should assert oversight over the inappropriate types of access that are already taking place.
The first step is to start keeping track of what’s happening. Congress should require that federal law enforcement agencies create detailed audit logs and screen recordings of digital searches. And we agree with Upturn that agencies nationwide should collect and publish aggregated information about how many phones were searched, and whether those searches involved warrants (with published warrant numbers), or so-called consent searches. Agencies should also disclose what tools were used for data extraction and analysis.
Congress should also consider placing sharp limits on when consent searches can take place at all. In our January blog post, we suggest that such searches be banned entirely in high-coercion settings like traffic stops, and suggest some specific limits that should be set in less-coercive settings.
EFF Legal Fellow Josh Srago co-wrote this blog post.
The relationship between the federal judiciary and the executive agencies is a complex one. While Congress makes the laws, it can grant the agencies rulemaking authority to interpret them. So long as an agency’s interpretation of any ambiguous language in a statute is reasonable, the courts will defer to the judgment of the agency.
For broadband access, the courts have deferred to the Federal Communications Commission’s (FCC’s) judgment on the proper classification of broadband services twice in the last several years. In 2015, the courts deferred to the FCC when it classified broadband as a Title II service in the Open Internet Order. In 2017, they deferred again when broadband internet was reclassified as a Title I service in the Restoring Internet Freedom Order. A Title II service is subject to strict FCC oversight, rules, and regulations, but a Title I service is not.
Classification of services isn’t the only place where the courts defer to the FCC’s authority. Two Supreme Court decisions – Verizon Communications, Inc. v. Law Offices of Curtis V. Trinko, LLP, and Credit Suisse Securities (USA) LLC v. Billing – have established the precedent that if an industry is overseen by an expert regulatory agency (such as broadband being overseen by the FCC) then the courts will defer to the agency’s judgment on competition policy because the agency has the particular and specific knowledge to make the best determination.
In other words, civil antitrust law has to overcome multiple barriers before it applies to broadband providers, potentially denying consumers its remedies against monopolization. EFF has conducted an in-depth analysis of this issue. For a summary, read on.
The Judicial Deference Circle and How It Blocks Antitrust Enforcement Over Broadband
What this creates is circular deferential reasoning. The FCC has the authority to determine whether or not broadband will be subject to strict oversight or subject to no oversight and the courts will defer to the FCC’s determination. If the service is subject to strict rules and regulations, then the FCC has the power to take action if a provider acts in an anti-competitive way. Courts will defer to the FCC’s enforcement powers to ensure that the market is regulated as it sees fit.
However, if the FCC determines that the service should not be subject to the strict rules and regulations of Title II and a monopoly broadband provider acts in an anticompetitive way, the courts will still defer to the FCC’s determination as to whether the bad actor is doing something they should not. If the courts did otherwise, then their determination would be in direct conflict with the regulatory regime established by the FCC to ensure that the market is regulated as it sees fit.
What this means is that individuals and municipalities are left without a legal pathway under our antitrust laws when a broadband service provider abuses its monopoly power. A complaint can be filed with the FCC regarding the behavior, but how that complaint is handled depends on the FCC’s discretion, not on whether the conduct is anti-competitive.
A Better Broadband World Under Robust Antitrust Enforcement
The best path forward to resolve this is for Congress to pass legislation that overturns Trinko and Credit Suisse, ensuring that people, or representatives of people such as local governments, can protect their interests and aren’t being taken advantage of by incumbent monopoly broadband providers. But what will that world look like? EFF analyzed that question and theorized how things could improve for consumers. You can read our memo here. As Congress debates reforming antitrust laws with a focus on Big Tech, there are a lot of downstream positive impacts that can stem from such reforms, namely in giving people the ability to sue their broadband monopolist and use the courts to bring in competition.
The third-party cookie is dying, and Google is trying to create its replacement.
No one should mourn the death of the cookie as we know it. For more than two decades, the third-party cookie has been the lynchpin in a shadowy, seedy, multi-billion dollar advertising-surveillance industry on the Web; phasing out tracking cookies and other persistent third-party identifiers is long overdue. However, as the foundations shift beneath the advertising industry, its biggest players are determined to land on their feet.
Google is leading the charge to replace third-party cookies with a new suite of technologies to target ads on the Web. And some of its proposals show that it hasn’t learned the right lessons from the ongoing backlash to the surveillance business model. This post will focus on one of those proposals, Federated Learning of Cohorts (FLoC), which is perhaps the most ambitious—and potentially the most harmful.
FLoC is meant to be a new way to make your browser do the profiling that third-party trackers used to do themselves: in this case, boiling down your recent browsing activity into a behavioral label, and then sharing it with websites and advertisers. The technology will avoid the privacy risks of third-party cookies, but it will create new ones in the process. It may also exacerbate many of the worst non-privacy problems with behavioral ads, including discrimination and predatory targeting.
Google’s pitch to privacy advocates is that a world with FLoC (and other elements of the “privacy sandbox”) will be better than the world we have today, where data brokers and ad-tech giants track and profile with impunity. But that framing is based on a false premise that we have to choose between “old tracking” and “new tracking.” It’s not either-or. Instead of re-inventing the tracking wheel, we should imagine a better world without the myriad problems of targeted ads.
We stand at a fork in the road. Behind us is the era of the third-party cookie, perhaps the Web’s biggest mistake. Ahead of us are two possible futures.
In one, users get to decide what information to share with each site they choose to interact with. No one needs to worry that their past browsing will be held against them—or leveraged to manipulate them—when they next open a tab.
In the other, each user’s behavior follows them from site to site as a label, inscrutable at a glance but rich with meaning to those in the know. Their recent history, distilled into a few bits, is “democratized” and shared with dozens of nameless actors that take part in the service of each web page. Users begin every interaction with a confession: here’s what I’ve been up to this week, please treat me accordingly.
Users and advocates must reject FLoC and other misguided attempts to reinvent behavioral targeting. We implore Google to abandon FLoC and redirect its efforts toward building a truly user-friendly Web.
What is FLoC?
In 2019, Google presented the Privacy Sandbox, its vision for the future of privacy on the Web. At the center of the project is a suite of cookieless protocols designed to satisfy the myriad use cases that third-party cookies currently provide to advertisers. Google took its proposals to the W3C, the standards-making body for the Web, where they have primarily been discussed in the Web Advertising Business Group, a body made up primarily of ad-tech vendors. In the intervening months, Google and other advertisers have proposed dozens of bird-themed technical standards: PIGIN, TURTLEDOVE, SPARROW, SWAN, SPURFOWL, PELICAN, PARROT… the list goes on. Seriously. Each of the “bird” proposals is designed to perform one of the functions in the targeted advertising ecosystem that is currently done by cookies.
FLoC is designed to help advertisers perform behavioral targeting without third-party cookies. A browser with FLoC enabled would collect information about its user’s browsing habits, then use that information to assign its user to a “cohort” or group. Users with similar browsing habits—for some definition of “similar”—would be grouped into the same cohort. Each user’s browser will share a cohort ID, indicating which group they belong to, with websites and advertisers. According to the proposal, at least a few thousand users should belong to each cohort (though that’s not a guarantee).
If that sounds dense, think of it this way: your FLoC ID will be like a succinct summary of your recent activity on the Web.
Google’s proof of concept used the domains of the sites that each user visited as the basis for grouping people together. It then used an algorithm called SimHash to create the groups. SimHash can be computed locally on each user’s machine, so there’s no need for a central server to collect behavioral data. However, a central administrator could have a role in enforcing privacy guarantees. In order to prevent any cohort from being too small (i.e. too identifying), Google proposes that a central actor could count the number of users assigned each cohort. If any are too small, they can be combined with other, similar cohorts until enough users are represented in each one.
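To make the mechanism concrete, here is a minimal SimHash sketch over visited domains. The hash function, bit width, and lack of feature weighting are illustrative assumptions; Chrome’s actual implementation differs, but the core idea—every input “votes” on each output bit, so similar histories yield similar hashes—is the same:

```python
import hashlib

def simhash(domains, bits=8):
    """Collapse a browsing history (a list of domains) into a short hash.

    SimHash is a locality-sensitive hash: because every domain votes on
    each output bit, similar histories tend to produce similar hashes.
    """
    counts = [0] * bits
    for domain in domains:
        digest = int(hashlib.sha256(domain.encode()).hexdigest(), 16)
        for i in range(bits):
            # Each domain pushes bit i toward 1 or 0.
            counts[i] += 1 if (digest >> i) & 1 else -1
    # The sign of each tally becomes one bit of the final hash.
    return sum(1 << i for i, c in enumerate(counts) if c > 0)

# Two mostly-overlapping histories, computed entirely locally --
# no server ever sees the raw list of sites.
cohort_a = simhash(["news.example", "shoes.example", "mail.example"])
cohort_b = simhash(["news.example", "shoes.example", "maps.example"])
```

Note that the local computation only covers cohort assignment; as the text describes, a central party is still assumed for counting cohort sizes and merging any that are too small.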
One thing that is specified is duration. FLoC cohorts will be re-calculated on a weekly basis, each time using data from the previous week’s browsing. This makes FLoC cohorts less useful as long-term identifiers, but it also makes them more potent measures of how users behave over time.
New privacy problems
FLoC is part of a suite intended to bring targeted ads into a privacy-preserving future. But the core design involves sharing new information with advertisers. Unsurprisingly, this also creates new privacy risks.
Fingerprinting
The first issue is fingerprinting. Browser fingerprinting is the practice of gathering many discrete pieces of information from a user’s browser to create a unique, stable identifier for that browser. EFF’s Cover Your Tracks project demonstrates how the process works: in a nutshell, the more ways your browser looks or acts different from others’, the easier it is to fingerprint.
Google has promised that the vast majority of FLoC cohorts will comprise thousands of users each, so a cohort ID alone shouldn’t distinguish you from a few thousand other people like you. However, that still gives fingerprinters a massive head start. If a tracker starts with your FLoC cohort, it only has to distinguish your browser from a few thousand others (rather than a few hundred million). In information theoretic terms, FLoC cohorts will contain several bits of entropy—up to 8 bits, in Google’s proof of concept trial. This information is even more potent given that it is unlikely to be correlated with other information that the browser exposes. This will make it much easier for trackers to put together a unique fingerprint for FLoC users.
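The size of that head start can be sketched in information-theoretic terms. In this snippet, the browser population is an illustrative assumption; the 8-bit cohort entropy comes from Google’s proof of concept as cited above:

```python
import math

# Illustrative assumption: the pool of browsers a tracker might encounter.
population = 300_000_000
# Cohort entropy from Google's proof-of-concept trial (up to 8 bits).
cohort_bits = 8

# Bits of entropy needed to uniquely fingerprint one browser in the pool:
total_bits = math.log2(population)            # ~28.2 bits

# Knowing the cohort splits the pool into 2**8 buckets, shrinking the
# set of candidates a fingerprinter must distinguish between:
anonymity_set = population / 2**cohort_bits   # ~1.2 million browsers

# Entropy the tracker still needs from other signals (fonts, canvas,
# user agent, etc.) to single out one browser:
remaining_bits = total_bits - cohort_bits     # ~20.2 bits
```

In other words, a single free, likely-uncorrelated 8-bit signal hands fingerprinters more than a quarter of the entropy they need before they have measured anything else about the browser.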
Google has acknowledged this as a challenge, but has pledged to solve it as part of the broader “Privacy Budget” plan it has for dealing with fingerprinting in the long term. Solving fingerprinting is an admirable goal, and its proposal is a promising avenue to pursue. But according to the FAQ, that plan is “an early stage proposal and does not yet have a browser implementation.” Meanwhile, Google is set to begin testing FLoC as early as this month.
Fingerprinting is notoriously difficult to stop. Browsers like Safari and Tor have engaged in years-long wars of attrition against trackers, sacrificing large swaths of their own feature sets in order to reduce fingerprinting attack surfaces. Fingerprinting mitigation generally involves trimming away or restricting unnecessary sources of entropy—which is what FLoC is. Google should not create new fingerprinting risks until it’s figured out how to deal with existing ones.
Cross-context exposure
The second problem is less easily explained away: the technology will share new personal data with trackers who can already identify users. For FLoC to be useful to advertisers, a user’s cohort will necessarily reveal information about their behavior.
The project’s Github page addresses this up front:
This API democratizes access to some information about an individual’s general browsing history (and thus, general interests) to any site that opts into it. … Sites that know a person’s PII (e.g., when people sign in using their email address) could record and reveal their cohort. This means that information about an individual's interests may eventually become public.
As described above, FLoC cohorts shouldn’t work as identifiers by themselves. However, any company able to identify a user in other ways—say, by offering “log in with Google” services to sites around the Internet—will be able to tie the information it learns from FLoC to the user’s profile.
Two categories of information may be exposed in this way:
- Specific information about browsing history. Trackers may be able to reverse-engineer the cohort-assignment algorithm to determine that any user who belongs to a specific cohort probably or definitely visited specific sites.
- General information about demographics or interests. Observers may learn that in general, members of a specific cohort are substantially likely to be a specific type of person. For example, a particular cohort may over-represent users who are young, female, and Black; another cohort, middle-aged Republican voters; a third, LGBTQ+ youth.
This means every site you visit will have a good idea about what kind of person you are on first contact, without having to do the work of tracking you across the web. Moreover, as your FLoC cohort will update over time, sites that can identify you in other ways will also be able to track how your browsing changes. Remember, a FLoC cohort is nothing more, and nothing less, than a summary of your recent browsing activity.
You should have a right to present different aspects of your identity in different contexts. If you visit a site for medical information, you might trust it with information about your health, but there’s no reason it needs to know what your politics are. Likewise, if you visit a retail website, it shouldn’t need to know whether you’ve recently read up on treatment for depression. FLoC erodes this separation of contexts, and instead presents the same behavioral summary to everyone you interact with.
Beyond privacy
FLoC is designed to prevent a very specific threat: the kind of individualized profiling that is enabled by cross-context identifiers today. The goal of FLoC and other proposals is to avoid letting trackers access specific pieces of information that they can tie to specific people. As we’ve shown, FLoC may actually help trackers in many contexts. But even if Google is able to iterate on its design and prevent these risks, the harms of targeted advertising are not limited to violations of privacy. FLoC’s core objective is at odds with other civil liberties.
The power to target is the power to discriminate. By definition, targeted ads allow advertisers to reach some kinds of people while excluding others. A targeting system may be used to decide who gets to see job postings or loan offers just as easily as it is to advertise shoes.
Over the years, the machinery of targeted advertising has frequently been used for exploitation, discrimination, and harm. The ability to target people based on ethnicity, religion, gender, age, or ability allows discriminatory ads for jobs, housing, and credit. Targeting based on credit history—or characteristics systematically associated with it—enables predatory ads for high-interest loans. Targeting based on demographics, location, and political affiliation helps purveyors of politically motivated disinformation and voter suppression. All kinds of behavioral targeting increase the risk of convincing scams.
Google, Facebook, and many other ad platforms already try to rein in certain uses of their targeting platforms. Google, for example, limits advertisers’ ability to target people in “sensitive interest categories.” However, these efforts frequently fall short; determined actors can usually find workarounds to platform-wide restrictions on certain kinds of targeting or certain kinds of ads.
Even with absolute power over what information can be used to target whom, platforms are too often unable to prevent abuse of their technology. But FLoC will use an unsupervised algorithm to create its clusters. That means that nobody will have direct control over how people are grouped together. Ideally (for advertisers), FLoC will create groups that have meaningful behaviors and interests in common. But online behavior is linked to all kinds of sensitive characteristics—demographics like gender, ethnicity, age, and income; “big 5” personality traits; even mental health. It is highly likely that FLoC will group users along some of these axes as well. FLoC groupings may also directly reflect visits to websites related to substance abuse, financial hardship, or support for survivors of trauma.
Google has proposed that it can monitor the outputs of the system to check for any correlations with its sensitive categories. If it finds that a particular cohort is too closely related to a particular protected group, the administrative server can choose new parameters for the algorithm and tell users’ browsers to group themselves again.
This solution sounds both Orwellian and Sisyphean. In order to monitor how FLoC groups correlate with sensitive categories, Google will need to run massive audits using data about users’ race, gender, religion, age, health, and financial status. Whenever it finds a cohort that correlates too strongly along any of those axes, it will have to reconfigure the whole algorithm and try again, hoping that no other “sensitive categories” are implicated in the new version. This is a much more difficult version of the problem it is already trying, and frequently failing, to solve.
In a world with FLoC, it may be more difficult to target users directly based on age, gender, or income. But it won’t be impossible. Trackers with access to auxiliary information about users will be able to learn what FLoC groupings “mean”—what kinds of people they contain—through observation and experiment. Those who are determined to do so will still be able to discriminate. Moreover, this kind of behavior will be harder for platforms to police than it already is. Advertisers with bad intentions will have plausible deniability—after all, they aren’t directly targeting protected categories, they’re just reaching people based on behavior. And the whole system will be more opaque to users and regulators.
Google, please don’t do this
We wrote about FLoC and the other initial batch of proposals when they were first introduced, calling FLoC “the opposite of privacy-preserving technology.” We hoped that the standards process would shed light on FLoC’s fundamental flaws, causing Google to reconsider pushing it forward. Indeed, several issues on the official Github page raise the exact same concerns that we highlight here. However, Google has continued developing the system, leaving the fundamentals nearly unchanged. It has started pitching FLoC to advertisers, boasting that FLoC is a “95% effective” replacement for cookie-based targeting. And starting with Chrome 89, released on March 2, it’s deploying the technology for a trial run. A small portion of Chrome users—still likely millions of people—will be (or have been) assigned to test the new technology.
Make no mistake, if Google does follow through on its plan to implement FLoC in Chrome, it will likely give everyone involved “options.” The system will probably be opt-in for the advertisers that will benefit from it, and opt-out for the users who stand to be hurt. Google will surely tout this as a step forward for “transparency and user control,” knowing full well that the vast majority of its users will not understand how FLoC works, and that very few will go out of their way to turn it off. It will pat itself on the back for ushering in a new, private era on the Web, free of the evil third-party cookie—the technology that Google helped extend well past its shelf life, making billions of dollars in the process.
It doesn’t have to be that way. The most important parts of the privacy sandbox, like dropping third-party identifiers and fighting fingerprinting, will genuinely change the Web for the better. Google can choose to dismantle the old scaffolding for surveillance without replacing it with something new and uniquely harmful.
We emphatically reject the future of FLoC. That is not the world we want, nor the one users deserve. Google needs to learn the correct lessons from the era of third-party tracking and design its browser to work for users, not for advertisers.
Reformers often tout police use of body-worn cameras (BWCs) as a way to prevent law enforcement misconduct. But, far too often, this technology becomes one more tool in a toolbox already overflowing with surveillance technology that spies on civilians. Worse, because police often control when BWCs are turned on and how the footage is stored, BWCs often fail to do the one thing they were intended to do: record video of how police interact with the public. So EFF opposes BWCs absent strict safeguards.
While it takes some useful steps toward curbing nefarious ways that police use body-worn cameras, the George Floyd Justice in Policing Act, H.R. 1280, does not do enough. It places important limits on how federal law enforcement officials use BWCs. And it is a step forward compared to last year’s version: it bans federal officials from applying face surveillance technology to any BWC footage. However, H.R. 1280 still falls short: it funds BWCs for state and local police, but does not apply the same safeguards that the bill applies to federal officials. We urge amendments to this bill as detailed below. Otherwise, these federally-funded BWCs will augment law enforcement’s already excessive surveillance capabilities.
At a minimum, H.R. 1280 must be amended to extend the face surveillance ban it mandates for federal BWCs, to federally-funded BWCs employed by state and local law enforcement agencies.
As has been our position, BWCs should adhere to the following regulations:
Mandated activation of body-worn cameras. Officers must be required to activate their cameras at the start of all investigative encounters with civilians, and leave them on until the encounter ends. Otherwise, officers could subvert any accountability benefits of BWCs by simply turning them off when misconduct is imminent, or not turning them on. In narrow circumstances where civilians have heightened privacy interests (like crime victims and during warrantless home searches), officers should give civilians the option to deactivate BWCs.
No political spying with body-worn cameras. Police must not use BWCs to gather information about how people are exercising their First Amendment rights to speak, associate, or practice their religion. Government surveillance chills and deters such protected activity.
Retention of body-worn camera footage. All BWC footage should be held for a few months, to allow injured civilians sufficient time to come forward and seek evidence. Then footage should be promptly destroyed, to reduce the risks of data breach, employee misuse, and long-term surveillance of the public. However, if footage depicts an officer’s use of force or an episode subject to a civilian’s complaint, then the footage must be retained for a lengthier period.
Officer review of footage. If footage depicts use of force or an episode subject to a civilian complaint, then an officer must not be allowed to review the footage until after they make an initial statement about the event. Given the malleability of human memory, a video can alter or even overwrite a recollection. And some officers might use footage to better “testilie,” or stretch the truth about encounters.
Public access to footage. If footage depicts a particular person, then that person must have access to it. If footage depicts police use of force, then all members of the general public must have access to it. If a person seeks footage that does not depict them or use of force, then whether they may have access must depend on a weighing by a court of (a) the benefits of disclosure to police accountability, and (b) the costs of disclosure to the privacy of a depicted member of the public. If the footage does not depict police misconduct, then disclosure will rarely have a police accountability benefit. In many cases, blurring of civilian faces might diminish privacy concerns. In no case should footage be withheld on the grounds it is a police investigatory record.
Enforcement of these rules. If footage is recorded or retained in violation of these rules, then it must not be admissible in court. If, in violation of these rules, footage is not recorded or not retained, then a civil rights plaintiff or criminal defendant must receive an evidentiary presumption that the missing footage would have helped them. And departments must discipline officers who break these rules.
Community control over body-worn cameras. Local police and sheriffs must not acquire or use BWCs, or any other surveillance technology, absent permission from their city council or county board, after ample opportunity for residents to make their voices heard. This is commonly called community control over police surveillance (CCOPS).
EFF supported a California law (A.B. 1215) that placed a three-year moratorium on use of face surveillance with BWCs. Likewise, EFF in 2019, 2020, and 2021 joined scores of privacy and civil rights groups in opposing any federal use of face surveillance, and also any federal funding of state and local face surveillance.
So we are pleased with Section 374 of H.R. 1280, which states: “No camera or recording device authorized or required to be used under this part may be equipped with or employ facial recognition technology, and footage from such a camera or recording device may not be subjected to facial recognition technology.” We are also pleased with Section 3051, which says that federal grant funds for state and local programs “may not be used for expenses related to facial recognition technology.” Both of these provisions validate civil society and over-policed communities’ long-standing assertion that government use of face recognition is dangerous and must be banned. However, this bill does not go far enough. EFF firmly supports a full ban of all government use of face recognition technology. At a minimum, H.R. 1280 must be amended to extend the face surveillance ban it mandates for federal BWCs, to federally-funded BWCs employed by state and local law enforcement agencies. For body-worn cameras to be a small part of a solution, rather than part of the problem, their operation and footage storage must be heavily regulated, and they must be used solely to record video of how police interact with the public, not serve as Trojan horses for increased surveillance.
Baltimore, MD and St. Louis, MO, have a lot in common. Both cities suffer from declining populations and high crime rates. In recent years, the predominantly Black population in each city has engaged in collective action opposing police violence. In recent weeks, officials in both cities voted unanimously to spare their respective residents from further invasions of their privacy and essential liberties by a panoptic aerial surveillance system designed to protect soldiers on the battlefield, not residents' rights and public safety.

Baltimore’s Unanimous Vote to Terminate
From April to October of 2020, Baltimore residents were subjected to a panopticon-like system of surveillance facilitated by a partnership between the Baltimore Police Department and a privately-funded Ohio company called Persistent Surveillance Systems (PSS). During that period, for at least 40 hours a week, PSS flew surveillance aircraft over 32 square miles of the city, enabling police to identify specific individuals from the images captured by the planes. Although no planes had flown as part of the collaboration since late October—and the program was scheduled to end later this year—the program had become troubling enough that on February 3, the City's spending board voted unanimously to terminate Baltimore's contract with PSS.

St. Louis Rules Committee Says ‘Do Not Pass’
Given the program's problematic history and unimpressive efficacy, it may come as some surprise that on December 11, 2020, City of St. Louis Alderman Tom Oldenburg introduced legislation that would have forced the mayor and comptroller to enter into a contract with PSS closely replicating Baltimore's spy plane program.
With lobbyists for the privately-funded Persistent Surveillance Systems program padding campaign coffers, Alderman Oldenburg's proposal was initially well received by the City's Board of Alders. However, as EFF and local advocates—including the ACLU of Missouri and Electronic Frontier Alliance member Privacy Watch STL—worked to educate lawmakers and their constituents about the bill’s unconstitutionality, that support began to waver. While the bill narrowly cleared a preliminary vote in late January, by Feb. 4 the Rules Committee voted unanimously to issue a "Do Not Pass" recommendation.
A supermajority of the Board could vote to override the Committee's guidance when they meet for the last time this session on April 19. However, the bill's sponsor has acknowledged that outcome to be unlikely—while also suggesting he plans to introduce a similar bill next session. If the Board does approve the ordinance when they meet on April 19, it is doubtful that St. Louis Mayor Lyda Krewson would sign the bill after her successor has been chosen in the City's April 6 election.

Next Up: Fourth Circuit Court of Appeals
While municipal lawmakers are weighing in unanimously against the program, it may be the courts that make the final call. Last November, EFF along with the Brennan Center for Justice, Electronic Privacy Information Center, FreedomWorks, National Association of Criminal Defense Lawyers, and the Rutherford Institute filed a friend-of-the-court brief in a federal civil rights lawsuit challenging Baltimore’s aerial surveillance program. A divided three-judge panel of the U.S. Court of Appeals for the Fourth Circuit initially upheld the program, but the full court has since withdrawn that decision and decided to rehear the case en banc. Oral arguments are scheduled for March 8. While the people of St. Louis and Baltimore are protected for now, we're hopeful that the court will find that the aerial surveillance program violates the Fourth Amendment’s guarantee against warrantless dragnet surveillance, potentially shutting down the program for good.
The multi-pronged attempt by state Attorneys General, the Department of Justice, and the Federal Trade Commission to find Google and Facebook liable for violating antitrust law may result in breaking up these giant companies. But in order for any of this to cause lasting change, we need to look to the not-so-recent past.
In the world of antitrust, the calls to “break up” Big Tech companies translate to the fairly standard remedy of “structural separation,” where companies are barred from selling services and competing with the buyers of those services (for example, rail companies have been forced to stop selling freight services that compete with their own customers). It has been done before as part of the fight against communication monopolies. However, history shows us that the real work is not just breaking up companies, but following through afterward.
In order to make sure that the Internet becomes a space for innovation and competition, there has to be a vision of an ideal ecosystem. When we look back at the United States’ previous move from telecom monopoly into what can best be described as “regulated competition,” we can learn a lot of lessons—good and bad—about what can be done post-breakup.

The AT&T of Yore and the Big Tech of Today
Cast your mind back, back to when AT&T was a giant corporation. No, further back. When AT&T was the world’s largest corporation and the telephone monopoly. In the 1970s, AT&T resembled Big Tech companies in scale, significance, and influence.
AT&T grew by relentlessly gobbling up rival companies and eventually struck a deal with the government to make its monopolization legal in exchange for universal service (known as the Kingsbury Commitment). As a monopolist, AT&T's unilateral decisions dictated the way people communicated. The company exerted extraordinary influence over public debate and used its influence to argue that its monopoly was in the public interest. Its final antitrust battle was a quagmire that spanned two political administrations, and despite this, its political power was so great that it was able to get the Department of Defense to claim its monopoly was vital to national security.
Today, Big Tech is reenacting the battle of the AT&T of yore. Facebook CEO Mark Zuckerberg's assertion that his company’s dominance is the only means to compete with China is a repeat of AT&T’s attempt to use national security to bypass competition concerns. Similarly, Facebook's recent change of heart on whether Section 230 of the Communications Decency Act should be gutted is an effort to appease policymakers looking to scrutinize the company's dominance. Not coincidentally, Section 230 is the lifeblood of every would-be competitor to Facebook. In trading 230 in for policy concessions, Facebook both escapes a breakup and salts the earth against the growth of any new competitors to become the regulated monopoly that remains.
Google is a modern AT&T, too. Google acquired its way to dominance by purchasing a multitude of companies to extend its vertical reach over the years. Mergers and acquisitions were key to AT&T's monopoly strategy. That's why the government then sought to break up the company – and that's why the US government today is proposing breakups for Google. Now, with AT&T, there were clear geographic lines on which the company could be broken into smaller regional companies. It's different for Google and Facebook: those lines will have to be drawn along different parts of the companies' "stack," such as advertising and platforms.
When the US Department of Justice broke up AT&T, it traded one national monopoly for a set of regional monopolies. Over time Congress learned that it wasn't enough. Likewise, breakups for Google and Facebook will only be step one.

Without a Broader Vision, Big Tech Will Be the Humpty Dumpty That Put Himself Back Together Again
Supporters of structural separation for Big Tech need to learn the lessons of the past. Our forebears got it right initially with telecom but then failed to sustain a consistent vision of competition, eventually allowing dozens of companies to consolidate into a mix of regional monopolies and super dominant national companies.
The 1996 telecom law that Congress passed to follow the AT&T breakup enabled the creation of the Competitive Local Exchange Carrier (aka CLEC) industry. These were smaller companies that already existed but had been severely hamstrung by the local monopolies. Their reach was limited because there was no federal competition law.
The 1996 Act lowered the start-up costs for new phone companies: they wouldn't have to build an entire network from scratch. The Act forced the Baby Bells (the regional parts of the original AT&T monopoly) to share their "essential facilities" with these new competitors at a fair price, opening the market to much smaller players with much less capital.
But the incumbent monopolies still had friends in statehouses and Congress. By 2001, federal and state governments had begun adopting a new theory of competition in communications: "deregulated competition"—which whittled away the facilities-sharing rules and the rules banning the broken-up parts of AT&T from merging with one another again (as well as with cable and wireless companies). If the purpose of this untested, unproven approach was to promote competition, then clearly it was a failure: a majority of Americans today have only one choice for high-speed broadband access that meets 21st century needs. There has been no serious reckoning for "deregulated competition," and it remains the heart of telecom policy despite nearly every prediction of its benefits having been proven wrong. This happened only because policymakers and the public forgot how they got competition in telecom in the first place, allowing the unwinding that remains with us today.
Steve Coll, author of The Deal of the Century: The Breakup of AT&T, predicted this problem shortly after the AT&T's breakup:
It is quite possible - some would argue it is more than likely - that the final landscape of the Bell System breakup will include a bankrupted MCI and an AT&T returned to its original state as a regulated, albeit smaller and less effective, telephone monopoly. The source of this specter lies not in anyone's crystal ball but in the history of U.S. v. AT&T. Precious little in that history - the birth of MCI, the development of phone industry competition, the filing of the Justice lawsuit, the prolonged inaction of Congress, the aborted compromise deals between Justice and AT&T, the Reagan administration's tortured passivity, the final inter-intra settlement itself - was the product of a single coherent philosophy, or a genuine, reasoned consensus, or a farsighted public policy strategy.

A Post-Breakup Internet Tech Vision: Decentralization, Empowerment of Disruptive Innovation, and Consumer Protection
Anyone thinking about Big Tech breakups needs to learn the lesson of AT&T. Breakups are just step one. Before we take that step, we need to know what steps we'll take next. We need a plan for post-break-up regulated competition, or we'll squander years and years of antitrust courtroom battles, only to see the fragments of the companies reform into new, unstoppable juggernauts. We need a common narrative about where competition comes from and how we sustain it.
Like phone companies, internet platforms have “network effects”: to compete with them, a new company needs access to the incumbent's "ecosystem" – the cluster of products and services monopolists weave around themselves to lock in users, squeeze suppliers, and fend off competitors. In '96, we forced regional monopolies to share their facilities and thousands of local ISPs sprung up across the country, almost overnight. Creating a durable competitive threat to tech monopolists means finding similar measures to promote a flourishing, pluralistic, diverse Internet.
We've always said that tech industry competition is a multifaceted project that calls for multiple laws and careful regulation. Changes to antitrust law, intellectual property law, intermediary liability, and consumer privacy legislation all play critical and integral parts in a more competitive future. Strike the wrong balance and you drain away the Internet's capacity for putting power in the hands of people and communities. Get any of the policies wrong and you risk strangling a hundred future Googles and Facebooks in their cradles—companies whose destiny is to grow for a time but to eventually be replaced by new upstarts better suited for the unforeseeable circumstances of the future.
Here are two examples of policies that are every bit as important as breakups for creating and maintaining a competitive digital world:
- A private right of action that ties consumer privacy rights to the individual and ensures all marketplace participants are responsible for protecting individual user privacy. This legal protection will increase consumers' willingness to switch products, because how they are protected will not depend on which product they use. Conversely, studies have shown that the loss of privacy reduces consumers' willingness to try new products, particularly in sensitive areas such as finance and health.
- Incumbents must be denied the ability to sue competitors as a means of retaining their dominance. That's why laws like the Computer Fraud and Abuse Act (which Facebook used to sue a competitor out of existence and still leverages against innovation today) have to be significantly narrowed in scope.
The Internet once stood for a world where people with good ideas and a little know-how could change the world, attracting millions of users and spawning dozens of competitors. That was the Net's lifecycle of competition. We can get that future back, but only if we commit to a shared and durable vision of competition. It's fine to talk about breaking up Big Tech, but the hard part starts after the companies are split up. Now is the time to start asking what competition should look like, or we'll be dragged back to the status quo before we ever get started down the road to a better one.
When the government tries to convict you of a crime, you have a right to challenge its evidence. This is a fundamental principle of due process, yet prosecutors and technology vendors have routinely argued against disclosing how forensic technology works.
For the first time, a federal court has ruled on the issue, and the decision marks a victory for civil liberties.
EFF teamed up with the ACLU of Pennsylvania to file an amicus brief arguing in favor of defendants’ rights to challenge complex DNA analysis software that implicates them in crimes. The prosecution and the technology vendor Cybergenetics opposed disclosure of the software’s source code on the grounds that the company has a commercial interest in secrecy.
The court correctly determined that this secrecy interest could not outweigh a defendant’s rights and ordered the code disclosed to the defense team. The disclosure will be subject to a “protective order” that bars further disclosure, but in a similar previous case a court eventually allowed public scrutiny of source code of a different DNA analysis program after a defense team found serious flaws.
This is the second decision this year ordering the disclosure of the secret TrueAllele software. This added scrutiny will help ensure that the software does not contribute to unjust incarceration.
The implementation process of Article 17 (formerly Article 13) of the controversial Copyright Directive into national laws is in full swing, and it does not look good for users' rights and freedoms. Several EU states have failed to present balanced copyright implementation proposals, ignoring the concerns of EFF, other civil society organizations, and experts that only strong user safeguards can help prevent Article 17 from turning tech companies and online service operators into copyright police.
A glimpse of hope was presented by the German government in a recent discussion paper. While the draft proposal fails to prevent the use of upload filters to monitor all user uploads and assess them against the information provided by rightsholders, it showed creativity by giving users the option of pre-flagging uploads as "authorized" (online by default) and by setting out exceptions for everyday uses. Remedies against abusive removal requests by self-proclaimed rightsholders were another positive feature of the discussion draft.

Inflexible Rules in Favor of Press Publishers
However, the recently adopted copyright implementation proposal by the German Federal Cabinet has abandoned the focus on user rights in favor of inflexible rules that only benefit press publishers. Instead of opting for broad and fair statutory authorization for non-commercial minor uses, the German government suggests trivial carve-outs for "uses presumably authorized by law," which are not supposed to be blocked automatically by online platforms. However, the criteria for such uses are narrow and out of touch with reality. For example, the limit for minor use of text is 160 characters.
By comparison, the maximum length of a tweet is 280 characters, which is barely enough substance for a proper quote. As those uses are only presumably authorized, they can still be disputed by rightsholders and blocked at a later stage if they infringe copyright. However, this did not prevent the German government from putting a price tag on such communication as service providers will have to pay the author an "appropriate remuneration." There are other problematic elements in the proposal, such as the plan to limit the use of parodies to uses that are "justified by the specific purpose"—so better be careful about being too playful.

The German Parliament Can Improve the Bill
It's now up to the German Parliament to decide whether to be more interested in the concerns of press publishers or in the erosion of user rights and freedoms. EFF will continue to reach out to Members of Parliament to help them make the right decision.
Section 230, a key law protecting free speech online since its passage in 1996, has been the subject of numerous legislative assaults over the past few years. The attacks have come from all sides. One of the latest, the SAFE Tech Act, seeks to address real problems Internet users experience, but its implementation would harm everyone on the Internet.
The SAFE Tech Act is a shotgun approach to Section 230 reform put forth by Sens. Mark Warner, Mazie Hirono and Amy Klobuchar earlier this month. It would amend Section 230 through the ever-popular method of removing platform immunity from liability arising from various types of user speech. This would lead to more censorship as social media companies seek to minimize their own legal risk. The bill compounds the problems it causes by making it more difficult to use the remaining immunity against claims arising from other kinds of user content.
Addressing Big Tech’s surveillance-based business models can’t, and shouldn’t, be done through amendments to Section 230—but that doesn’t mean it shouldn’t be done at all.
Section 230 Benefits Everyone
The act would not protect users’ rights in a way that is substantially better than current law. And it would, in some cases, harm marginalized users, small companies, and the Internet ecosystem as a whole. Our three biggest concerns with the SAFE Tech Act are: 1) its failure to capture the reality of paid content online, 2) the danger that an affirmative defense requirement creates, and 3) the lack of guardrails around injunctive relief that would open the door for a host of new suits aimed simply at removing certain speech.
Before considering what this bill would change, it’s useful to take a look at the benefits that Section 230 provides for all Internet users. The Internet today allows people everywhere to connect and share ideas—whether that’s for free on social media platforms and educational or cultural platforms like Wikipedia and the Internet Archive, or on paid hosting services like Squarespace or Patreon. Section 230’s legal protections benefit Internet users in two ways.
Section 230 Protects Intermediaries That Host Speech: Section 230 enables services to host the content of other speakers—from writing, to videos, to pictures, to code that others write or upload—without those services generally having to screen or review that content before it is published. Without this partial immunity, all of the intermediaries who help the speech of millions and billions of users reach their audiences would face unworkable content moderation requirements that inevitably lead to large-scale censorship. The immunity has some important exceptions, including for violations of federal criminal law and intellectual property claims. But the legal immunity’s protections extend to services far beyond social media platforms. Thus everyone who sends an email, makes a Kickstarter, posts on Medium, shares code on Github, protects their site from DDOS attacks with Cloudflare, makes friends on Meetup, or posts on Reddit benefits from Section 230’s immunity for all intermediaries.
Section 230 Protects Users Who Create Content: Section 230 directly protects Internet users who themselves act as online intermediaries from being held liable for the content created by others. So when people publish a blog and allow reader comments, for example, Section 230 protects them. This enables Internet users to create their own platforms for others’ speech, such as when an Internet user created the Shitty Media Men list that allowed others to share their own experiences involving harassment and sexual assault.

The SAFE Tech Act Fails to Capture the Reality of Paid Content Online
In what appears to be an attempt to limit deceptive advertising, the SAFE Tech Act would amend Section 230 to remove the service’s immunity for user-generated content when that content is paid speech. According to the senators, the goal of this change is to stop Section 230 from applying to ads, “ensuring that platforms cannot continue to profit as their services are used to target vulnerable consumers with ads enabling frauds and scams.”
With this change, even if the defendant ultimately prevails against a plaintiff’s claims, they will have to defend themselves in court for longer, driving up their costs.
But the language in the bill is much broader than just ads. The bill says Section 230’s platform immunity for user-generated content does not apply if “the provider or user has accepted payment to make the speech available or, in whole or in part, created or funded the creation of the speech.” This definition likely covers much, much more of the Internet than advertising, and it is unclear how much paid or sponsored content the language would sweep up. This change would undoubtedly force a massive, and dangerous, overhaul of Internet services at every level.
Although much of the legislative conversation around Section 230 reform focuses on the dominant social media services that are generally free to users, most of the intermediaries people rely on involve some form of payment or monetization: from more obvious content that sits behind a paywall on sites like Patreon, to websites that pay for hosting from providers like GoDaddy, to the comment section of a newspaper only available to subscribers. If all companies that host speech online and whose businesses depend on user payments lose Section 230 protections, the relationship between users and many intermediaries will change significantly, in several unintended ways:
Harm to Data Privacy: Services that previously accepted payments from users may decide to change to a different business model based on collecting and selling users’ personal information. So in seeking to regulate advertising, the SAFE TECH Act may perversely expand the private surveillance business model to other parts of the Internet, just so those services can continue to maintain Section 230’s protections.
Increased Censorship: Those businesses that continue to accept payments will have to make new decisions about what speech they can risk hosting and how they vet users and screen their content. They would be forced to monitor and filter all content that appears wherever money has changed hands—a dangerous and unworkable approach that would make much important speech disappear, and would turn everyone from web hosts to online newspapers into censors. The only other alternative—not hosting user speech—would also not be a step forward.
As we’ve said many times, censorship has been shown to amplify existing imbalances in society. History shows us that when faced with the prospect of having to defend lawsuits, online services (like offline intermediaries before them) will opt to remove and reject user speech rather than try to defend it, even when it is strongly defensible. These decisions, as history has shown us, are applied disproportionately against the speech of marginalized speakers. Immunity, like that provided by Section 230, alleviates that prospect of having to defend such lawsuits.
Unintended Burdens on a Complex Ecosystem: While minimizing dangerous or deceptive advertising may be a worthy goal, and even if the SAFE Tech Act were narrowed to target ads in particular, it would not only burden sites like Facebook that function as massive online advertising ecosystems; it would also burden the numerous companies that comprise the complex online advertising ecosystem. There are numerous intermediaries between the user seeing an ad on a website and the ad going up. It is unclear which companies would lose Section 230 immunity under the SAFE TECH Act; arguably it would be all of them. The bill doesn’t reflect or account for the complex ways that publishers, advertisers, and scores of middlemen actually exchange money in today’s online ad ecosystem, which happens often in a split second through Real-Time Bidding protocols. It also doesn’t account for more nuanced advertising regimes. For example, how would an Instagram influencer—someone who is paid by a company to share information about a product—be affected by this loss of immunity? No money has exchanged hands with Instagram, and therefore one can imagine influencers and other more covert forms of advertising becoming the norm to protect advertisers and platforms from liability.
For a change in Section 230 to work as intended and not spiral into a mass of unintended consequences, legislators need a greater understanding of the Internet’s ecosystem of paid and unpaid content, and the language needs to be more specifically and narrowly tailored.

The Danger That an Affirmative Defense Requirement Creates
The SAFE Tech Act also would alter the legal procedure around when Section 230’s immunity for user-generated content would apply in a way that would have massive practical consequences for users’ speech. Many people upset about user-generated content online bring cases against platforms, hosts, and other online intermediaries. Congressman Devin Nunes’ repeated lawsuits against Twitter for its users’ speech are a prime example of this phenomenon.
Under current law, Section 230 operates as a procedural fast-lane for online services—and users who publish another user’s content—to get rid of frivolous lawsuits. Platforms and users subjected to these lawsuits can move to dismiss the cases before even having to respond to the legal complaint or go through the often expensive fact-gathering portion of a case, known as discovery. Right now, if it’s clear from the face of a legal complaint that the underlying allegations are based on a third party’s content, the statute’s immunity requires that the case against the platform or user who hosted the complained-of content be dismissed. Of course, this has not stopped plaintiffs from bringing (often unmeritorious) lawsuits in the first place. But in those cases, Section 230 minimizes the work the court must go through to grant a motion to dismiss the case, and minimizes costs for the defendant. This protects not only platforms but users; it is the desire to avoid litigation costs that leads intermediaries to default to censoring user speech.
The SAFE Tech Act would subject both provider and user defendants to much more protracted and expensive litigation before a case could be dismissed. By downgrading Section 230’s immunity to an “affirmative defense … that an interactive computer service provider has a burden of proving by a preponderance of the evidence,” defendants could no longer use Section 230 to dismiss cases at the beginning of a suit and would be required to prove—with evidence—that Section 230 applies. Right now, Section 230 saves companies and users significant legal costs when they are subjected to frivolous lawsuits. With this change, even if the defendant ultimately prevails against a plaintiff’s claims, they will have to defend themselves in court for longer, driving up their costs.
The increased legal costs of even meritless lawsuits will have serious consequences for users’ speech. An online service that cannot quickly get out of frivolous litigation based on user-generated content is likely to take steps to prevent such content from becoming a target of litigation in the first place, including screening users’ speech or prohibiting certain types of speech entirely. And in the event that someone upset by a user’s speech sends a legal threat to an intermediary, the service is likely to be much more willing to remove the speech—even when it knows the speech cannot be subject to legal liability—just to avoid the new, larger expense and time of defending against a lawsuit.
As a result, the SAFE Tech Act would open the door to a host of new suits that by design are filed not to vindicate a legal wrong but simply to remove certain speech from the Internet—also called SLAPP lawsuits. Such suits would remove a much greater volume of speech that does not, in fact, violate the law. Large services may find ways to absorb these new costs. But for small intermediaries and growing platforms that may be competing with those large companies, a single costly lawsuit, even if the defendant small company eventually prevails, may be the difference between success and failure. That is not to mention the many small businesses that use social media to market their company or service and to respond to (and moderate) comments on their pages or sites, and that would likely be in danger of losing immunity from liability under this change.

No Guardrails Around Injunctive Relief Would Open the Door to Dangerous Takedowns
The SAFE Tech Act also modifies Section 230’s immunity in another significant way, by permitting aggrieved individuals to seek non-monetary relief from platforms hosting content that has harmed them. Under the bill, Section 230 would not apply when a plaintiff seeks injunctive relief to require an online service to remove or restrict user-generated content that is “likely to cause irreparable harm.”
This extremely broad change may be designed to address a legitimate concern about Section 230: some people who are harmed online simply want the speech taken down instead of seeking monetary compensation. But while giving certain Internet users an effective remedy that they currently lack under Section 230, the SAFE Tech Act’s injunctive relief carveout fails to account for how the provision will be misused to suppress lawful speech.
The SAFE Tech Act’s language appears to permit enforcement of all types of injunctive relief at any stage in a case. Litigants often seek emergency and temporary injunctive relief at an extremely early stage of the case, and judges frequently grant it without giving the speaker or platform an opportunity to respond. Courts already issue these kinds of takedown orders against online platforms, and they are prior restraints in violation of the First Amendment. If Section 230 does not bar these types of preliminary takedown orders, plaintiffs are likely to misuse the legal system to force down legal content without a final adjudication about the actual legality of the user-generated content.
The injunctive relief carveout could also be abused in another type of case, known as a default judgment, to remove speech without any judicial determination that the content is illegal. A default judgment occurs when the defendant does not fight the case, allowing the plaintiff to win without any examination of the underlying merits. In many cases, defendants avoid litigation simply because they don’t have the time or money for it.
Because of their one-sided nature, default judgments are ripe for fraud and abuse. Others have documented the growing phenomenon of fraudulent default judgments, typically involving defamation claims, in which a meritless lawsuit is crafted for the specific purpose of obtaining a default judgment and avoiding any consideration of its merits. If the SAFE Tech Act were to become law, fraudulent lawsuits like these would be incentivized and become more common, because Section 230 would no longer provide a barrier against their use to legally compel intermediaries to remove lawful speech.
A recent Section 230 case called Hassell v. Bird illustrates how a broad injunctive relief carveout that applied to default judgments would incentivize censorship of protected user speech. In Hassell, a lawyer sued a user of Yelp (Bird) who gave her law office a bad review, claiming defamation. The court never ruled on whether the speech was defamatory, but because the reviewer did not defend the lawsuit, the trial judge entered a default judgment against the reviewer, ordering the removal of the post. Section 230 prevented a court from ordering Yelp to remove the post.
Despite the potential for litigants to abuse the SAFE Tech Act’s injunctive relief carveout, the bill contains no guardrails for online intermediaries hosting legitimate speech targeted for removal. As it stands, the injunctive relief exception to Section 230 poses a real danger to legitimate speech.

In Conclusion, For Safer Tech, Look Beyond Section 230
This only scratches the surface of the SAFE Tech Act. But the bill’s shotgun approach to amending Section 230, and the broadness of its language, make it impossible to support as it stands.
If legislators take issue with deceptive advertisers, they should use existing laws to protect users from them. Instead of making sweeping changes to Section 230, they should update antitrust law to stop the flood of mergers and acquisitions that have made competition in Big Tech an illusion, creating many of the problems we see in the first place. If they want to make Big Tech more responsive to the concerns of consumers, they should pass a strong consumer data privacy law with a robust private right of action.
If they disagree with the way that large companies like Facebook benefit from Section 230, they should carefully consider that changes to Section 230 will mostly burden smaller platforms and entrench the large companies that can absorb or adapt to the new legal landscape (large companies continue to support amendments to Section 230, even as those companies simultaneously push back against substantive changes that actually seek to protect users, and therefore harm their bottom line). Addressing Big Tech’s surveillance-based business models can’t, and shouldn’t, be done through amendments to Section 230—but that doesn’t mean it shouldn’t be done at all.
It’s absolutely a problem that just a few tech companies wield such immense control over what speakers and messages are allowed online. And it’s a problem that those same companies fail to enforce their own policies consistently or offer users meaningful opportunities to appeal bad moderation decisions. But this bill would not create a fairer system.
Virginia’s legislature has passed a bill meant to protect consumer privacy—but the bill, called the Virginia Consumer Data Protection Act, really protects the interests of business far more than the interests of everyday consumers.
Virginia: Speak Up for Real Privacy
The bill, which both Microsoft and Amazon supported, is now headed to the desk of Governor Ralph Northam. This week, EFF joined with the Virginia Citizens Consumer Council, Consumer Federation of America, Privacy Rights Clearinghouse, and U.S. PIRG to ask for a veto of this bill, or for the governor to add a reenactment clause—a move that would send the bill back to the legislature to try again.
If you’re in Virginia and care about true privacy protections, let the governor know that the CDPA doesn’t give consumers the protections they need. In fact, it stacks the deck against them: it offers an “opt-out” framework that doesn’t protect privacy by default, allows companies to force consumers who exercise their privacy rights to pay higher prices or accept a lower quality of service, and offers no meaningful enforcement—making it very unlikely that consumers will be able to hold companies to account if any of the few rights this bill grants them are violated.
As passed by the legislature, the bill is set to go into effect in 2023 and will establish a working group to make improvements between now and then. That offers some chance for improvements—but it likely won’t be enough to get real consumer protections. As we noted in a joint press release, “These groups appreciate that Governor Northam’s office has engaged with the concerns of consumer groups and committed to a robust stakeholder process to improve this bill. Yet the fundamental problems with the CDPA are too big to be fixed after the fact.”
Consumer privacy rights must be the foundation of any real privacy bill. The CDPA was written without meaningful input from consumer advocates; in fact, as Protocol reported, it was handed to the bill’s sponsor by an Amazon lobbyist. Some have suggested the Virginia bill could be a model for other states or for federal legislation. That’s bad for Virginia and bad for all of us.
Virginians, it’s time to take a stand. Tell Governor Northam that this bill is not good enough, and urge him to veto it or send it back for another try.
VIRGINIA: SPEAK UP FOR REAL PRIVACY
With a new year and a new Congress, the House of Representatives’ subcommittee covering antitrust has turned its attention to “reviving competition.” On Thursday, the first in a series of hearings was held, focusing on how to help small businesses challenge Big Tech. One very good idea kept coming up, backed by both parties. And it is one EFF also considers essential: interoperability.
This was the first hearing since the House Judiciary Committee issued its antitrust report from its investigation into the business practices of Big Tech companies. This week’s hearing was exclusively focused on how to re-enable small businesses to disrupt the dominance of Big Tech. A critical aspect of the Internet, what EFF calls the life cycle of competition, has vanished: small new entrants no longer seek (nor could, even if they tried) to displace well-established giants, but rather seek to be acquired by them.

Strong Bipartisan Support for Interoperability
Across the committee, members of Congress appeared to agree that some means of requiring Big Tech to grant access to competitors through interoperability will be an essential piece of the competition puzzle. The need is straightforward: the larger these networks become, the more their value rises, making it harder for a new business to enter into direct competition. One expert witness, Public Knowledge’s Competition Policy Director Charlotte Slaiman, noted that because of these “network effects,” a company with double the network size of a competitor isn’t merely twice as attractive to users—it is exponentially more attractive.
But even in cases where you have large competitors with sizeable networks, Big Tech companies are using their dominance in other markets as a means to push out existing competitors. One of the most powerful testimonies in favor of interoperability came from Mapbox CEO Eric Gunderson, who detailed how Google is leveraging its dominance in search to exert dominance in maps. Specifically, through a contract term ostensibly aimed at trademark “brand confusion,” Google requires developers who wish to use Google Search to integrate their products only with Google Maps. Mr. Gunderson made clear that this tying of products that do not need to be tied together at all is not only foreclosing market opportunities for Mapbox; it is also forcing Mapbox’s existing clients to abandon anything that doesn’t use Google Maps.
The solution to this type of corporate incumbent anticompetitive behavior is not revolutionary and has deep roots in tech history. As Ranking Member Ken Buck (R-CO) stated, “interoperability is a time-honored practice in the tech industry that allows competing technologies to speak to one another so that consumers can make a choice without being locked into any one technology.” We at EFF have long agreed that interoperability will be essential to reopening the Internet market to vibrant competition and recently published a white paper laying out in detail how we can get to a more competitive future. Seeing growing consensus from Congress is encouraging, but doing it right will require careful calibration in policy.
EFF has joined 42 other organizations, including the ACLU, the Knight Institute, and the National Security Archive, in calling for the new Biden administration to fulfill its promise to “bring transparency and truth back to government.”
Specifically, these organizations are asking the administration and the federal government at large to update policy and implementation regarding the collection, retention, and dissemination of public records as dictated in the Freedom of Information Act (FOIA), the Federal Records Act (FRA), and the Presidential Records Act (PRA).
Our call for increased transparency with the administration comes in the wake of many years of extreme secrecy and increasingly unreliable enforcement of record retention and freedom of information laws.
The letter requests that the Biden administration take the following actions:
- Emphasize to All Federal Employees the Obligation to Give Full Effect to Federal Transparency Laws.
- Direct Agencies to Adopt New FOIA Guidelines That Prioritize Transparency and the Public Interest.
- Direct DOJ to Fully Leverage its Central Role in Agencies’ FOIA Implementation.
- Issue New FOIA Guidance by the Office of Management and Budget (OMB) and Update the National FOIA Portal.
- Assess, Preserve, and Disclose the Key Records of the Previous Administration.
- Champion Funding Increases for the Public Records Laws.
- Endorse Legislative Improvements for the Public Records Laws.
- Embrace Major Reforms of Classification and Declassification.
- Issue an Executive Order Reforming the Prepublication Review System.
You can read the full letter here.
It’s nearing the end of Black History Month, and that history is inherently tied to strife, resistance, and organizing related to government surveillance and oppression. Even though programs like COINTELPRO are more well-known now, the other side of these stories is the way the Black community has fought back through intricate networks and communication aimed at avoiding surveillance.

The Borderland Network
The Trans-Atlantic Slave Trade was a dark, cruel time in the history of much of the Americas. The horrors of slavery still cast their shadow through systemic racism today. One of the biggest obstacles enslaved Africans faced when trying to organize and fight was that they were closely watched, along with being separated, abused, tortured, and brought to a foreign land to work until their death for free. They often spoke different languages from each other and had different cultures and beliefs. Organizing under these conditions seemed impossible. Yet even under these conditions, including overbearing surveillance, they developed ways to fight back. Much of this is attributed to the brilliance of these Africans, who used everything they had to develop communications with each other under chattel slavery. The continued fight today reflects much of the history established through dealing with censorship and authoritarian surveillance.
“The white folks down south don’t seem to sleep much, nights. They are watching for runaways, and to see if any other slaves come among theirs, or theirs go off among others.” - Former Runaway, Slavery’s Exiles - Sylviane A. Diouf
As Sylviane Diouf chronicled in the book Slavery’s Exiles, slavery was catastrophic for many Africans, but it was thankfully never a peaceful time for white owners and overseers either. Those captured from Africa and brought to the Americas seldom gave their captors a night of rest. Through rebellion, resistance, and individual sabotage of everyday life during this horrible period, freedom remained an objective. And with that objective came a deep history of secret communications and cunning intelligence.
Runaways often returned to plantations at night for years unnoticed and undetected, mostly to stay connected to family or relay information. One married couple, as Diouf tells it, had a simple yet effective signaling system where the wife placed a garment in a particular spot that was visible from her husband’s covert. Ben and his wife (whose name is unknown) had other systems in place if it was too dark to see. For example, shining a bright light through the cracks in their cabin for an instant, and then repeating it at intervals of two or three minutes, three or four times.
These close-proximity runaways were known as “Borderland Maroons.” They created tight networks of communication from plantation to plantation. Information, like the amount of a reward for capture and the punishment awaiting the captured, traveled quickly through the grapevine of the Borderland Maroons. Based on this intelligence, many would make plans to either travel away completely or stay around longer to gather others. Former Georgia delegates to the Continental Congress recounted:
“The negroes have a wonderful art of communicating intelligence among themselves, it will run several hundred miles in a week or fortnight”
These networks often kept runaways out of captivity for years, and thus able to maintain a network among the enslaved. Coachmen, draymen, boatmen, and others who were allowed to move around off plantations were the backbone of this chain of intelligence. The shadow network of the Borderlands was the entry point of organizing for potential runaways, so even if someone was captured, they could tap into the network again later. No one would be getting rest or sleep. As Diouf recounts, keeping a high level of surveillance took a lot of resources from the slaveholders, a fact that was well-exploited by the enslaved.

Moses
Perhaps the most famous artisan of secret communications during this period is the venerable Harriet Tubman. Her character and will is undisputed, and her impeccable timing and remarkable intuition strengthened the Underground Railroad.
Dr. Bryan Walls notes much of her written and verbal communication was through plain language that acted as a metaphor:
- “tracks” (routes fixed by abolitionist sympathizers)
- “stations” or “depots” (hiding places)
- “conductors” (guides on the Underground Railroad)
- “agents” (sympathizers who helped the slaves connect to the Railroad)
- “station masters” (those who hid slaves in their homes)
- “passengers,” “cargo,” “fleece,” or “freight” (escaped slaves)
- “tickets” (indicated that slaves were traveling on the Railroad)
- “stockholders” (financial supporters who donated to the Railroad)
- “the drinking gourd” (the Big Dipper constellation—a star in this constellation pointed to the North Star, located on the end of the Little Dipper’s handle)
The most famous example of verbal communication on plantations was the usage of song. The tradition of verbal history and storytelling remained strong among the enslaved, and acted as a way to “hide in plain sight”. Tubman said she changed the tempo of the songs to indicate whether it was safe to come out or not.
Harriet Tubman’s famous claim is “she never lost a passenger.” This rang true not only as she freed others, but also when she acted as a spy during the Civil War aiding the Union. As the first and only woman to organize and lead a military operation during the Civil War, her reputation was solidified as an expert in espionage. Her information was so detailed and accurate it often saved Black troops in the Union from harm.
Many of these tactics won’t be found written down; they were passed on verbally. It was illegal for Black people to read and write, and it was therefore a lethal risk to put communications into more traditional ciphertext.

Language as Resistance
Even though language was a barrier in the beginning and written communication was out of the question, English was over time forced onto enslaved Africans, and many found their way to each other by creating an entirely new language of their own—Creole. There are many different kinds of Creole across the African Diaspora, which served not only as a way to communicate and develop a “home” language, but also as a way to pass information to each other under the eyes of overseers.
“Anglican clergy were still reporting that Africans spoke little or no English but stood around in groups talking among themselves in ‘strange languages.’” ([Lorena] Walsh 1997:96–97) - Notes on the Origins and Evolution of African American Language

Coded Resistance in the African Diaspora
Of course, resistance against slavery didn’t just occur in the U.S., but also in Central and South America. Under domineering surveillance, many tactics had to be devised quickly and planned under the eye of white supremacy. Quilombos, or what can be viewed as the “Maroons” of Brazil, developed a way to fight against the Portuguese rule of that time:
“Prohibited from celebrating their cultural customs and strictly forbidden from practicing any martial arts, capoeira is thought to have emerged as a way to bypass these two imposing laws.” - Disguised in Dance: The Secret History of Capoeira
The rebellions in Jamaica, Haiti, and Mexico involved extensive planning. They were not, as they are sometimes portrayed, merely the product of spontaneous and rightful rage against oppressors. Some rebellions, such as Tacky’s War in Jamaica, were documented to have been in the works for over a year before the first strike.

Modern Communication, Subversion, and Circumvention

Radio
As technology progressed, the oppressed adapted. During the height of the Civil Rights Movement, radio became an integral part of informing supporters of the movement. While churches may have been centers of gathering outside of worship, the radio was present even in these churches to give signals and other vital info. As Brian Ward notes in Radio and the Struggle for Civil Rights in the South, this info was conveyed in covert ways as well, such as reporting traffic jams to indicate police roadblocks.
Radio made information accessible to those who could not afford newspapers or who were denied access to literacy education under Jim Crow. Black DJs relayed information about protests and police checkpoints and countered misinformation. Keeping the community informed and as safe as possible became these DJs’ mission outside of music and propelled them into civic engagement, from protest to walking new Black voters through the voting procedure and system. Radio became a central place to enter a different world past Jim Crow.

WATS Phone Lines
Wide Area Telephone Service (WATS) lines also became a vital tool for the Civil Rights Movement to disperse information during moments that often meant life or death. To circumvent the monopolistic Bell System (“Ma Bell”), which employed only white operators and colluded with law enforcement, vital civil rights organizations used WATS phone lines: dedicated, paid lines, such as 800 numbers, patching directly through to organizations like the Student Nonviolent Coordinating Committee (SNCC), the Congress of Racial Equality (CORE), the Council of Federated Organizations (COFO), and the Southern Christian Leadership Conference (SCLC). These organizations’ bases had code names to use when relaying information to another base via WATS or radio.

Looking at Today: Reverse Surveillance
While Black and other marginalized communities still struggle to communicate despite surveillance, we do have digital tools to help. With encryption widely available, we can now use protected communications with each other for sensitive information. Of course, not everyone today is free to roam or use these services equally. Encryption itself is also under constant risk of being undermined in different areas of the world. Technology can feel nefarious, and “Big Tech” seems to have a constant eye on millions.
In addition, just as with the DJs of the past, current activist groups like Black Lives Matter used this hypervisibility under Big Tech to get police brutality highlighted in the mainstream conversation and in real life. The world has seen police brutality up close because of on-site video, live recordings from phones and police scanners. Databases like EFF’s Atlas of Surveillance increasingly map police technology in your city. And all of us, whether activists or not, can use tools to scan for the probing of communications during protests.
The Black community has been fighting what is essentially the technological militarization of the police force since the 1990s. While the struggle continues, we have seen recent wins: police use of facial recognition technology is now being limited or banned in many areas of the U.S. With support from groups around the country, we can help close this especially dangerous window of surveillance.
Being able to communicate with each other and organize is embedded in the roots of resistance around the world, but it has a long and important history in the Black community in the United States. Whether online or off, we are keeping a public eye on those who are sworn to serve and protect us, with the hope one day we can freely move without the chains of surveillance and white supremacy. Until then, we’ll continue to see, and to celebrate, the spirit of resistance as well as the creativity of efforts to build and keep a strong line of communication despite surveillance and repression.
Happy Black History Month.
During the pandemic, a dangerous business has prospered: invading students’ privacy with proctoring software and apps. In the last year, we’ve seen universities compel students to download apps that collect their face images, driver’s license data, and network information. Students who want to move forward with their education are sometimes forced to accept being recorded in their own homes and having the footage reviewed for “suspicious” behavior.
Given these invasions, it’s no surprise that students and educators are fighting back against these apps. Last fall, Ian Linkletter, a remote learning specialist at the University of British Columbia, became part of a chorus of critics concerned with this industry.
Now, he’s been sued for speaking out. The outrageous lawsuit—which relies on a bizarre legal theory that linking to publicly viewable videos is copyright infringement—will become an important test of a 2019 British Columbia law passed to defend free speech, the Protection of Public Participation Act, or PPPA.

Sued for Linking
This isn’t the first time U.S.-based Proctorio has taken a particularly aggressive tack in responding to public criticism. In July, Proctorio CEO Mike Olsen even publicly posted the chat logs of a student who complained about the software’s support, posting the conversation on Reddit, a move he later apologized for.
Shortly after that, Linkletter dove deep into analyzing the software that many students at his university were being forced to adopt. He became concerned about what Proctorio was—and wasn’t—telling students and faculty about how its software works.
In Linkletter’s view, customers and users were not getting the whole story. The software performed all kinds of invasive tracking, like watching for “abnormal” eye movements, head movements, and other behaviors branded suspicious by the company. The invasive tracking and filming were of great concern to Linkletter, who was worried about students being penalized academically on the basis of Proctorio’s analysis.
“I can list a half dozen conditions that would cause your eyes to move differently than other people,” Linkletter said in an interview with EFF. “It’s a really toxic technology if you don’t know how it works.”
In order to make his point clear, Linkletter published some of his criticism on Twitter, where he linked to Proctorio’s own published YouTube videos describing how their software works. In those videos, Proctorio describes its own tracking functions. The videos described functions with titles like “Behaviour Flags,” “Abnormal Head Movement,” and “Record Room.”
Instead of replying to Linkletter’s critique, Proctorio sued him. Even though Linkletter didn’t copy any Proctorio materials, the company says Linkletter violated Canada’s Copyright Act just by linking to its videos. The company also said those materials were confidential, and alleged that Linkletter’s tweets violated the confidentiality agreement between UBC and Proctorio, since Linkletter is a university employee.

Test of New Law
Proctorio’s legal attack on Ian Linkletter is meritless. It’s a classic SLAPP, an acronym that stands for Strategic Lawsuit Against Public Participation. Fortunately, British Columbia’s PPPA is an “anti-SLAPP” law, a type of statute that is being widely adopted throughout U.S. states and also exists in two Canadian provinces. In Canada, anti-SLAPP laws typically allow a defendant to bring an early challenge to the lawsuit against them on the basis that their speech is on a topic of “public interest.” If the court accepts that characterization, it must dismiss the action—unless the plaintiff can prove that the case has substantial merit, that the defendant has no valid defense, and that the public interest in allowing the suit to continue outweighs the public’s interest in protecting the expression. That’s a very high bar for plaintiffs, and it changes the dynamics of a typical lawsuit dramatically.
Without anti-SLAPP laws, well-funded companies like Proctorio are often able to litigate their critics into silence—even in situations where the critics would have prevailed on the legal merits.
“Cases like this are exactly why anti-SLAPP laws were invented,” said Ren Bucholz, a litigator in Toronto.
Linkletter should prevail here. It isn’t copyright infringement to link to a published video on the open web, and the fact that Proctorio made the video “unlisted” doesn’t change that. Even if Linkletter had copied parts or all of the videos—which he did not—he would have broad fair dealing rights (similar to U.S. "fair use" rights) to criticize the software that has put many UBC students under surveillance in their own homes.
Linkletter had to create a GoFundMe page to pay for much of his legal defense. But Proctorio’s bad behavior has inspired a broad community of people to fight for better student privacy rights, and hundreds of people donated to Linkletter’s defense fund, which raised more than $50,000. And the PPPA gives him a greater chance of getting his fees back.
We hope the PPPA is proven effective in this, one of its first serious tests, and that lawmakers in both the U.S. and Canada adopt laws that prevent such abuses of the litigation system. Meanwhile, Proctorio should cease its efforts to muzzle critics from Vancouver to Ohio.
This event has ended; a recording of the event is available.
If you make and share things online, professionally or for fun, you’ve been affected by copyright law. You may use a service that depends on the Digital Millennium Copyright Act (DMCA) in order to survive. You may have gotten a DMCA notice if you used part of a movie, TV show, or song in your work. You have almost certainly run up against the weird and draconian world of copyright filters like YouTube’s Content ID. EFF wants to help.
The end of last year was a flurry of copyright news, from the mess with Twitch to the “#StopDMCA” campaign that took off as new copyright proposals became law. The new year has proven that this issue is not going away, as a story emerged about cops using music in what looked like an attempt to trigger copyright filters to take videos of them offline. And throughout the pandemic, people stuck at home have tried to move their creativity online, only to find filters standing in their way. Enough is enough.
Next Friday, February 26th, at 10 AM Pacific, EFF will host a town hall for Internet creators. There have been a lot of actual and proposed changes to copyright law that you should know about, and you’ll have a chance to ask questions about them.
We will go over the copyright laws that were slipped into the omnibus spending package at the end of last year and what they mean for you. We will also use what we learned in writing our whitepaper on Content ID to help creators understand how it works and what to do about it. Finally, we will talk about the latest copyright proposal, the Digital Copyright Act, and how dangerous it is for online creativity. Most importantly, we will give you a way to stay informed and fight back.
Half of the 90-minute town hall will be devoted to answering your questions and hearing your concerns. Please join us for a conversation about the state of copyright in 2021 and what you need to know about it.
Someone tries to livestream their encounter with the police, only to find that the officers have started playing music. In the case of a February 5 meeting between an activist and the Beverly Hills Police Department, the song of choice was Sublime’s “Santeria.” The police may not got no crystal ball, but they do seem to have an unusually strong knowledge of copyright filters.
The timing of music being played when a cop saw he was being filmed was not lost on people. It seemed likely that the goal was to trigger Instagram’s over-zealous copyright filter, which would shut down the stream based on the background music and not the actual content. It’s not an unfamiliar tactic, and it’s unfortunately one based on the reality of how copyright filters work.
Copyright filters are generally more sensitive to audio content than to audiovisual content. That sensitivity causes real problems for people performing, discussing, or reviewing music online. It’s a problem of mechanics: it is easier for a filter to find a match on a piece of audio alone than on a full audiovisual clip. And then there is the likelihood that a filter is merely checking whether a few seconds of a video file seem to contain a few seconds of an audio file.
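The "few seconds of audio" matching described above can be sketched in miniature. This is purely illustrative, not how Content ID or any real filter is implemented: assume the filter has already reduced each recording to a coarse per-second feature value (an integer 0–255), and that it flags a stream if any short window of its features matches a window from a reference track.

```python
import hashlib

def window_hashes(features, window=3, step=1):
    """Hash every `window`-second span of coarse per-second features.

    `features` is a hypothetical list of ints 0-255, one per second of audio.
    """
    return {
        hashlib.sha1(bytes(features[i:i + window])).hexdigest()
        for i in range(0, len(features) - window + 1, step)
    }

def stream_matches_reference(reference, stream, window=3):
    """True if any `window`-second span of the stream appears in the reference.

    Note the match is made on audio features alone: a stream with entirely
    different video, or a different context (commentary, review), still
    matches if it contains a few seconds of the reference audio.
    """
    ref_hashes = window_hashes(reference, window)
    return any(h in ref_hashes for h in window_hashes(stream, window))
```

The key property, and the source of the problems described in this article, is that the check has no notion of context: three seconds of "Santeria" playing in the background of a livestream matches exactly as strongly as a verbatim upload of the song.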
It’s part of why playing music is an effective way to get an unwanted video stream shut down. (The other part is that playing music is easier than walking around with a screen playing a Disney film in its entirety. As much fun as that would be.)
The other side of the coin is how difficult filters make it for musicians to perform music that no one owns. For example, classical musicians filming themselves playing public domain music—compositions that they have every right to play, as they are not copyrighted—attract many matches. This is because the major rightsholders or tech companies have put many examples of copyrighted performances of these songs into the system. It does not seem to matter whether the video shows a different performer playing the song—the match is made on audio alone. This drives lawful use of material offline.
Another problem arises when people have licensed the right to use a piece of music, or are using a piece of free music that another work also used. If that other work is in the filter’s database, the filter will match the two. The result is that someone with every right to a piece of music is blocked or loses income. It’s a big enough problem that, in the process of writing our whitepaper on YouTube’s copyright filter, Content ID, we were told that people who had experienced this problem had asked for it to be included specifically.
Filters are so sensitive to music that it is very difficult to make a living discussing music online. The difficulty of getting music clips past Content ID explains the dearth of music commentators on YouTube. It is common knowledge among YouTube creators; as one put it, “this is why you don’t make content about music.”
Criticism, commentary, and education about music are all legally protected as fair use. Using parts of the thing you are discussing to show what you mean is part of effective communication. And while the law does not make fair use of music harder to prove than fair use of any other kind of work, filters do.
YouTube’s filter does something even more insidious than simply taking down videos, though. When it detects a match, it allows the label claiming ownership to take part or all of the money that the original creator would have made. So a video criticizing a piece of music ends up enriching the party being critiqued. As one music critic explained:
Every single one of my videos will get flagged for something and I choose not to do anything about it, because all they’re taking is the ad money. And I am okay with that, I’d rather make my videos the way they are and lose the ad money rather than try to edit around the Content ID because I have no idea how to edit around the Content ID. Even if I did know, they’d change it tomorrow. So I just made a decision not to worry about it.
This setup is also how a ten-hour white noise video ended up with five copyright claims against it. This taking-from-the-poor-and-giving-to-the-rich is a blatantly absurd result, but it’s the status quo on much of YouTube.
A particularly tech-savvy group, like the police, could easily figure out which songs result in videos being removed outright rather than merely having the ad money claimed. Internet creators talk on social media about the issues they run into and with whom. Some rightsholders are infamously controlling and litigious.
Copyright should not be a fast track for removing speech you do not like. The law is meant to encourage creativity by giving artists a limited period of exclusive rights to their creations. It is not a way to make money off of criticism, or a loophole to be exploited by authorities.