Section 230, a key law protecting free speech online since its passage in 1996, has been the subject of numerous legislative assaults over the past few years. The attacks have come from all sides. One of the latest, the SAFE Tech Act, seeks to address real problems Internet users experience, but its implementation would harm everyone on the Internet.
The SAFE Tech Act is a shotgun approach to Section 230 reform put forth by Sens. Mark Warner, Mazie Hirono, and Amy Klobuchar earlier this month. It would amend Section 230 through the ever-popular method of removing platform immunity from liability arising from various types of user speech. This would lead to more censorship as social media companies seek to minimize their own legal risk. The bill compounds the problems it causes by making it more difficult to use the remaining immunity against claims arising from other kinds of user content.
Section 230 Benefits Everyone
The act would not protect users’ rights in a way that is substantially better than current law. And it would, in some cases, harm marginalized users, small companies, and the Internet ecosystem as a whole. Our three biggest concerns with the SAFE Tech Act are: 1) its failure to capture the reality of paid content online, 2) the danger that an affirmative defense requirement creates, and 3) the lack of guardrails around injunctive relief, which would open the door for a host of new suits filed simply to remove certain speech.
Before considering what this bill would change, it’s useful to take a look at the benefits that Section 230 provides for all Internet users. The Internet today allows people everywhere to connect and share ideas—whether that’s for free on social media platforms and educational or cultural platforms like Wikipedia and the Internet Archive, or on paid hosting services like Squarespace or Patreon. Section 230’s legal protections benefit Internet users in two ways.
Section 230 Protects Intermediaries That Host Speech: Section 230 enables services to host the content of other speakers—from writing, to videos, to pictures, to code that others write or upload—without those services generally having to screen or review that content before it is published. Without this partial immunity, all of the intermediaries who help the speech of billions of users reach their audiences would face unworkable content moderation requirements that inevitably lead to large-scale censorship. The immunity has some important exceptions, including for violations of federal criminal law and intellectual property claims. But the legal immunity’s protections extend to services far beyond social media platforms. Thus everyone who sends an email, makes a Kickstarter, posts on Medium, shares code on GitHub, protects their site from DDoS attacks with Cloudflare, makes friends on Meetup, or posts on Reddit benefits from Section 230’s immunity for all intermediaries.
Section 230 Protects Users Who Create Content: Section 230 directly protects Internet users who themselves act as online intermediaries from being held liable for the content created by others. So when people publish a blog and allow reader comments, for example, Section 230 protects them. This enables Internet users to create their own platforms for others’ speech, such as when an Internet user created the Shitty Media Men list that allowed others to share their own experiences involving harassment and sexual assault.

The SAFE Tech Act Fails to Capture the Reality of Paid Content Online
In what appears to be an attempt to limit deceptive advertising, the SAFE Tech Act would amend Section 230 to remove the service’s immunity for user-generated content when that content is paid speech. According to the senators, the goal of this change is to stop Section 230 from applying to ads, “ensuring that platforms cannot continue to profit as their services are used to target vulnerable consumers with ads enabling frauds and scams.”
But the language in the bill is much broader than just ads. The bill says Section 230’s platform immunity for user-generated content does not apply if “the provider or user has accepted payment to make the speech available or, in whole or in part, created or funded the creation of the speech.” This definition likely covers much, much more of the Internet than advertising, and it is unclear how much paid or sponsored content this language would sweep up. This change would undoubtedly force a massive, and dangerous, overhaul of Internet services at every level.
Although much of the legislative conversation around Section 230 reform focuses on the dominant social media services that are generally free to users, most of the intermediaries people rely on involve some form of payment or monetization: from more obvious content that sits behind a paywall on sites like Patreon, to websites that pay for hosting from providers like GoDaddy, to the comment section of a newspaper only available to subscribers. If all companies that host speech online and whose businesses depend on user payments lose Section 230 protections, the relationship between users and many intermediaries will change significantly, in several unintended ways:
Harm to Data Privacy: Services that previously accepted payments from users may decide to change to a different business model based on collecting and selling users’ personal information. So in seeking to regulate advertising, the SAFE Tech Act may perversely expand the private surveillance business model to other parts of the Internet, just so those services can continue to maintain Section 230’s protections.
Increased Censorship: Those businesses that continue to accept payments will have to make new decisions about what speech they can risk hosting, and about how they vet users and screen their content. They would be forced to monitor and filter all content that appears wherever money has changed hands—a dangerous and unworkable solution that would cause much important speech to disappear, and would turn everyone from web hosts to online newspapers into censors. The only other alternative—not hosting user speech—would also not be a step forward.
As we’ve said many times, censorship has been shown to amplify existing imbalances in society. History shows us that when faced with the prospect of having to defend lawsuits, online services (like offline intermediaries before them) will opt to remove and reject user speech rather than try to defend it, even when it is strongly defensible. These decisions, as history has shown us, are applied disproportionately against the speech of marginalized speakers. Immunity, like that provided by Section 230, alleviates that prospect of having to defend such lawsuits.
Unintended Burdens on a Complex Ecosystem: While minimizing dangerous or deceptive advertising may be a worthy goal, even if the SAFE Tech Act were narrowed to target ads in particular, it would burden not only sites like Facebook that function as massive online advertising ecosystems; it would also burden the numerous companies that make up the complex online advertising supply chain. There are many intermediaries between an ad going up and a user seeing it on a website. It is unclear which companies would lose Section 230 immunity under the SAFE Tech Act; arguably it would be all of them. The bill doesn’t reflect or account for the complex ways that publishers, advertisers, and scores of middlemen actually exchange money in today’s online ad ecosystem, which often happens in a split second through Real-Time Bidding protocols. It also doesn’t account for more nuanced advertising regimes. For example, how would an Instagram influencer—someone who is paid by a company to share information about a product—be affected by this loss of immunity? No money changes hands with Instagram itself, so one can imagine influencers and other more covert forms of advertising becoming the norm to protect advertisers and platforms from liability.
For a change in Section 230 to work as intended and not spiral into a mass of unintended consequences, legislators need to have a greater understanding of the Internet’s ecosystem of paid content, and the language needs to be more specifically and narrowly tailored.

The Danger That an Affirmative Defense Requirement Creates
The SAFE Tech Act also would alter the legal procedure around when Section 230’s immunity for user-generated content would apply in a way that would have massive practical consequences for users’ speech. Many people upset about user-generated content online bring cases against platforms, hosts, and other online intermediaries. Congressman Devin Nunes’ repeated lawsuits against Twitter for its users’ speech are a prime example of this phenomenon.
Under current law, Section 230 operates as a procedural fast-lane for online services—and users who publish another user’s content—to get rid of frivolous lawsuits. Platforms and users subjected to these lawsuits can move to dismiss the cases before having to even respond to the legal complaint or going through the often expensive fact-gathering portion of a case, known as discovery. Right now, if it’s clear from the face of a legal complaint that the underlying allegations are based on a third party’s content, the statute’s immunity requires that the case against the platform or user who hosted the complained-of content be dismissed. Of course, this has not stopped plaintiffs from bringing (often unmeritorious) lawsuits in the first place. But in those cases, Section 230 minimizes the work the court must go through to grant a motion to dismiss the case, and minimizes costs for the defendant. This protects not only platforms but users; it is the desire to avoid litigation costs that leads intermediaries to default to censoring user speech.
The SAFE Tech Act would subject both provider and user defendants to much more protracted and expensive litigation before a case could be dismissed. By downgrading Section 230’s immunity to an “affirmative defense … that an interactive computer service provider has a burden of proving by a preponderance of the evidence,” defendants could no longer use Section 230 to dismiss cases at the beginning of a suit and would be required to prove—with evidence—that Section 230 applies. Right now, Section 230 saves companies and users significant legal costs when they are subjected to frivolous lawsuits. With this change, even if the defendant ultimately prevails against a plaintiff’s claims, they will have to defend themselves in court for longer, driving up their costs.
The increased legal costs of even meritless lawsuits will have serious consequences for users’ speech. An online service that cannot quickly get out of frivolous litigation based on user-generated content is likely going to take steps to prevent such content from becoming a target of litigation in the first place, including screening user’s speech or prohibiting certain types of speech entirely. And in the event that someone upset by a user’s speech sends a legal threat to an intermediary, the service is likely to be much more willing to remove the speech—even when it knows the speech cannot be subject to legal liability—just to avoid the new, larger expense and time to defend against the lawsuit.
As a result, the SAFE Tech Act would open the door for a host of new suits that by design are not filed to vindicate a legal wrong but simply to remove certain speech from the Internet—also called SLAPP lawsuits. These would remove a much greater volume of speech that does not, in fact, violate the law. Large services may find ways to absorb these new costs. But for small intermediaries and growing platforms that may be competing with those large companies, a single costly lawsuit, even if the defendant small company eventually prevails, may be the difference between success and failure. This is not to mention the many small businesses that use social media to market their company or service and to respond to (and moderate) comments on their pages or sites, and that would likely be in danger of losing immunity from liability under this change.

No Guardrails Around Injunctive Relief Would Open the Door to Dangerous Takedowns
The SAFE Tech Act also modifies Section 230’s immunity in another significant way, by permitting aggrieved individuals to seek non-monetary relief from platforms whose content has harmed them. Under the bill, Section 230 would not apply when a plaintiff seeks injunctive relief to require an online service to remove or restrict user-generated content that is “likely to cause irreparable harm.”
This extremely broad change may be designed to address a legitimate concern about Section 230: some people who are harmed online simply want the speech taken down instead of seeking monetary compensation. But while giving certain Internet users an effective remedy that they currently lack under Section 230, the SAFE Tech Act’s injunctive relief carveout fails to account for how the provision will be misused to suppress lawful speech.
The SAFE Tech Act’s language appears to permit enforcement of all types of injunctive relief at any stage in a case. Litigants often seek emergency and temporary injunctive relief at an extremely early stage of the case, and judges frequently grant it without giving the speaker or platform an opportunity to respond. Courts already issue these kinds of takedown orders against online platforms, and they are prior restraints in violation of the First Amendment. If Section 230 does not bar these types of preliminary takedown orders, plaintiffs are likely to misuse the legal system to force down legal content without a final adjudication about the actual legality of the user-generated content.
Also, the injunctive relief carveout could be abused in another type of case, known as a default judgment, to remove speech without any judicial determination that the content is illegal. A default judgment occurs when the defendant does not fight the case, allowing the plaintiff to win without any examination of the underlying merits. In many cases, defendants avoid litigation simply because they don’t have the time or money for it.
Because of their one-sided nature, default judgments are subject to great fraud and abuse. Others have documented the growing phenomenon of fraudulent default judgments, typically involving defamation claims, in which a meritless lawsuit is crafted for the specific purpose of obtaining a default judgment while avoiding any consideration of its merits. If the SAFE Tech Act were to become law, fraudulent lawsuits like these would be incentivized and would become more common, because Section 230 would no longer provide a barrier against their use to legally compel intermediaries to remove lawful speech.
A recent Section 230 case called Hassell v. Bird illustrates how a broad injunctive relief carveout that applied to default judgments would incentivize censorship of protected user speech. In Hassell, a lawyer sued a Yelp user (Bird) who gave her law office a bad review, claiming defamation. The court never ruled on whether the speech was defamatory, but because the reviewer did not defend the lawsuit, the trial judge entered a default judgment against the reviewer, ordering the removal of the post. Section 230 prevented the court from ordering Yelp to remove the post.
Despite the potential for litigants to abuse the SAFE Tech Act’s injunctive relief carveout, the bill contains no guardrails for online intermediaries hosting legitimate speech targeted for removal. As it stands, the injunctive relief exception to Section 230 poses a real danger to legitimate speech.

In Conclusion, For Safer Tech, Look Beyond Section 230
This only scratches the surface of the SAFE Tech Act. But the bill’s shotgun approach to amending Section 230, and the broadness of its language, make it impossible to support as it stands.
If legislators take issue with deceptive advertisers, they should use existing laws to protect users from them. Instead of making sweeping changes to Section 230, they should update antitrust law to stop the flood of mergers and acquisitions that have made competition in Big Tech an illusion, creating many of the problems we see in the first place. If they want to make Big Tech more responsive to the concerns of consumers, they should pass a strong consumer data privacy law with a robust private right of action.
If they disagree with the way that large companies like Facebook benefit from Section 230, they should carefully consider that changes to Section 230 will mostly burden smaller platforms and entrench the large companies that can absorb or adapt to the new legal landscape (large companies continue to support amendments to Section 230, even as those companies simultaneously push back against substantive changes that actually seek to protect users, and therefore harm their bottom line). Addressing Big Tech’s surveillance-based business models can’t, and shouldn’t, be done through amendments to Section 230—but that doesn’t mean it shouldn’t be done at all.
It’s absolutely a problem that just a few tech companies wield such immense control over what speakers and messages are allowed online. And it’s a problem that those same companies fail to enforce their own policies consistently or offer users meaningful opportunities to appeal bad moderation decisions. But this bill would not create a fairer system.
Virginia’s legislature has passed a bill meant to protect consumer privacy—but the bill, called the Virginia Consumer Data Protection Act, really protects the interests of business far more than the interests of everyday consumers.
Virginia: Speak Up for Real Privacy
The bill, which both Microsoft and Amazon supported, is now headed to the desk of Governor Ralph Northam. This week, EFF joined with the Virginia Citizens Consumer Council, the Consumer Federation of America, the Privacy Rights Clearinghouse, and U.S. PIRG to ask for a veto of this bill, or for the governor to add a reenactment clause—a move that would send the bill back to the legislature to try again.
If you’re in Virginia and care about true privacy protections, let the governor know that the VCDPA doesn’t give consumers the protections they need. In fact, it stacks the deck against them, by offering an “opt-out” framework that doesn’t protect privacy by default, allowing companies to force consumers who exercise their privacy rights to pay higher prices or accept a lower quality of service, and offering no meaningful enforcement—making it very unlikely that consumers will be able to hold companies to account if any of the few rights this bill grants them are violated.
As passed by the legislature, the bill is set to go into effect in 2023 and will establish a working group to make improvements between now and then. That offers some chance for improvements—but it likely won’t be enough to get real consumer protections. As we noted in a joint press release, “These groups appreciate that Governor Northam’s office has engaged with the concerns of consumer groups and committed to a robust stakeholder process to improve this bill. Yet the fundamental problems with the CDPA are too big to be fixed after the fact.”
Consumer privacy rights must be the foundation of any real privacy bill. The VCDPA was written without meaningful input from consumer advocates; in fact, as Protocol reported, it was handed to the bill’s sponsor by an Amazon lobbyist. Some have suggested the Virginia bill could be a model for other states or for federal legislation. That’s bad for Virginia and bad for all of us.
Virginians, it’s time to take a stand. Tell Governor Northam that this bill is not good enough, and urge him to veto it or send it back for another try.
With a new year and a new Congress, the House of Representatives’ subcommittee covering antitrust has turned its attention to “reviving competition.” On Thursday, the first in a series of hearings was held, focusing on how to help small businesses challenge Big Tech. One very good idea kept coming up, backed by both parties. And it is one EFF also considers essential: interoperability.
This was the first hearing since the House Judiciary Committee issued the antitrust report from its investigation into the business practices of Big Tech companies. This week’s hearing was exclusively focused on how to re-enable small businesses to disrupt the dominance of Big Tech. A critical dynamic that EFF calls the life cycle of competition has vanished from the Internet: small new entrants no longer seek to displace well-established giants (nor could they, even if they tried), but rather seek to be acquired by them.

Strong Bipartisan Support for Interoperability
Across the committee, Members of Congress appeared to agree that some means of requiring Big Tech to grant access to competitors through interoperability will be an essential piece of the competition puzzle. The need is straightforward: the larger these networks became, the more their value rose, making it harder for a new business to enter into direct competition. One expert witness, Public Knowledge’s Competition Policy Director Charlotte Slaiman, noted that because of these “network effects,” a company with double the network size of a competitor wasn’t merely twice as attractive to users; it was exponentially more attractive.
But even in cases where there are large competitors with sizeable networks, Big Tech companies are using their dominance in other markets to push out existing competitors. One of the most powerful testimonies in favor of interoperability was provided to Congress by the CEO of Mapbox, Eric Gunderson, who detailed how Google is leveraging its dominance in search to exert dominance in Google Maps. Specifically, through a colorfully named “brand confusion” contract term, Google requires developers who wish to use Google Search to integrate their products only with Google Maps. Mr. Gunderson made clear that this tying of products that really do not need to be tied together at all is not only foreclosing market opportunities for Mapbox; it is also forcing Mapbox’s existing clients to abandon anything that doesn’t use Google Maps outright.
The solution to this type of corporate incumbent anticompetitive behavior is not revolutionary and has deep roots in tech history. As Ranking Member Ken Buck (R-CO) stated, “interoperability is a time-honored practice in the tech industry that allows competing technologies to speak to one another so that consumers can make a choice without being locked into any one technology.” We at EFF have long agreed that interoperability will be essential to reopening the Internet market to vibrant competition and recently published a white paper laying out in detail how we can get to a more competitive future. Seeing growing consensus from Congress is encouraging, but doing it right will require careful calibration in policy.
EFF has joined 42 other organizations, including the ACLU, the Knight Institute, and the National Security Archive, in calling for the new Biden administration to fulfill its promise to “bring transparency and truth back to government.”
Specifically, these organizations are asking the administration and the federal government at large to update policy and implementation regarding the collection, retention, and dissemination of public records as dictated in the Freedom of Information Act (FOIA), the Federal Records Act (FRA), and the Presidential Records Act (PRA).
Our call for increased transparency with the administration comes in the wake of many years of extreme secrecy and increasingly unreliable enforcement of record retention and freedom of information laws.
The letter requests that the Biden administration take the following actions:
- Emphasize to All Federal Employees the Obligation to Give Full Effect to Federal Transparency Laws.
- Direct Agencies to Adopt New FOIA Guidelines That Prioritize Transparency and the Public Interest.
- Direct DOJ to Fully Leverage its Central Role in Agencies’ FOIA Implementation.
- Issue New FOIA Guidance by the Office of Management and Budget (OMB) and Update the National FOIA Portal.
- Assess, Preserve, and Disclose the Key Records of the Previous Administration.
- Champion Funding Increases for the Public Records Laws.
- Endorse Legislative Improvements for the Public Records Laws.
- Embrace Major Reforms of Classification and Declassification.
- Issue an Executive Order Reforming the Prepublication Review System.
You can read the full letter here:
It’s nearing the end of Black History Month, and that history is inherently tied to strife, resistance, and organizing in the face of government surveillance and oppression. Even though programs like COINTELPRO are better known now, the other side of these stories is the way the Black community has fought back, through intricate networks and communication aimed at avoiding surveillance.

The Borderland Network
The Trans-Atlantic Slave Trade was a dark, cruel time in the history of much of the Americas. The horrors of slavery still cast their shadow through systemic racism today. One of the biggest obstacles enslaved Africans faced when trying to organize and fight was the fact that they were closely watched, along with being separated, abused, tortured, and brought onto a foreign land to work until their death for free. They often spoke different languages from one another and came from different cultures and beliefs. Organizing under these conditions seemed impossible. Yet even under these conditions, including overbearing surveillance, they developed ways to fight back. Much of this is attributed to the brilliance of these Africans, who used everything they had to develop communications with each other under chattel slavery. The continued fight today reflects much of the history that grew out of dealing with censorship and authoritarian surveillance.
“The white folks down south don’t seem to sleep much, nights. They are watching for runaways, and to see if any other slaves come among theirs, or theirs go off among others.” - Former Runaway, Slavery’s Exiles - Sylviane A. Diouf
As Sylviane Diouf chronicled in her book Slavery’s Exiles, slavery was catastrophic for many Africans, but it was thankfully never a peaceful time for white owners and overseers, either. Those captured from Africa and brought to the Americas seldom gave their captors a night of rest. Through rebellion, resistance, and individual sabotage of everyday life during this horrible period, freedom remained an objective. And with that objective came a deep history of secret communications and cunning intelligence.
Runaways often returned to plantations at night for years unnoticed and undetected, mostly to stay connected to family or relay information. One married couple, as Diouf tells it, had a simple yet effective signaling system where the wife placed a garment in a particular spot that was visible from her husband’s covert. Ben and his wife (whose name is unknown) had other systems in place if it was too dark to see. For example, shining a bright light through the cracks in their cabin for an instant, and then repeating it at intervals of two or three minutes, three or four times.
These close-proximity runaways were deemed “Borderland Maroons.” They created tight networks of communication from plantation to plantation. Information, like the amount of a reward for capture and punishment, traveled quickly through the grapevine of the Borderland Maroons. Based on this intelligence, many would make plans to either travel away completely or stay around longer to gather others. Former Georgia delegates to the Continental Congress recounted:
“The negroes have a wonderful art of communicating intelligence among themselves, it will run several hundred miles in a week or fortnight”
These networks often won runaways years out of captivity, and with them the ability to maintain a network among the enslaved. Coachmen, draymen, boatmen, and others who were allowed to move around off the plantations were the backbone of this chain of intelligence. The shadow network of the Borderlands was the entry point of organizing for potential runaways, so even if someone was captured, they could tap into the network again later. No one would be getting rest or sleep. As Diouf recounts, keeping up a high level of surveillance took a lot of resources from the slaveholders, a fact that was well-exploited by the enslaved.

Moses
Perhaps the most famous artisan of secret communications during this period is the venerable Harriet Tubman. Her character and will is undisputed, and her impeccable timing and remarkable intuition strengthened the Underground Railroad.
Dr. Bryan Walls notes much of her written and verbal communication was through plain language that acted as a metaphor:
- “tracks” (routes fixed by abolitionist sympathizers)
- “stations” or “depots” (hiding places)
- “conductors” (guides on the Underground Railroad)
- “agents” (sympathizers who helped the slaves connect to the Railroad)
- “station masters” (those who hid slaves in their homes)
- “passengers,” “cargo,” “fleece,” or “freight” (escaped slaves)
- “tickets” (indicated that slaves were traveling on the Railroad)
- “stockholders” (financial supporters who donated to the Railroad)
- “the drinking gourd” (the Big Dipper constellation—a star in this constellation pointed to the North Star, located on the end of the Little Dipper’s handle)
The most famous example of verbal communication on plantations was the usage of song. The tradition of verbal history and storytelling remained strong among the enslaved, and acted as a way to “hide in plain sight”. Tubman said she changed the tempo of the songs to indicate whether it was safe to come out or not.
Harriet Tubman famously claimed that she “never lost a passenger.” This rang true not only as she freed others, but also when she acted as a spy for the Union during the Civil War. As the first and only woman to organize and lead a military operation during the Civil War, she solidified her reputation as an expert in espionage. Her information was so detailed and accurate that it often saved Black Union troops from harm.
Many of these tactics won’t be found written down; they were passed on verbally. Reading and writing were illegal or otherwise prohibited for Black people, so writing more traditional ciphertext as communication carried a lethal risk.

Language as Resistance
Even though language was a barrier in the beginning and written communication was out of the question, over time English was forced onto enslaved Africans, and many found their way to each other by creating an entirely new language of their own: Creole. There are many different kinds of Creole across the African Diaspora, which served not only as a way to communicate and develop a linguistic “home,” but also as a way to pass information to each other under the eyes of overseers.
“Anglican clergy were still reporting that Africans spoke little or no English but stood around in groups talking among themselves in ‘strange languages.’” ([Lorena] Walsh 1997:96–97) - Notes on the Origins and Evolution of African American Language

Coded Resistance in the African Diaspora
Of course, resistance against slavery didn’t just occur in the U.S., but also in Central and South America. Under domineering surveillance, many tactics had to be devised quickly and planned under the eye of white supremacy. Quilombos, or what can be viewed as the “Maroons” of Brazil, developed a way to fight against the Portuguese rule of that time:
“Prohibited from celebrating their cultural customs and strictly forbidden from practicing any martial arts, capoeira is thought to have emerged as a way to bypass these two imposing laws.” - Disguised in Dance: The Secret History of Capoeira
The rebellions in Jamaica, Haiti, and Mexico had extensive planning. They were not, as they are sometimes portrayed, merely the product of spontaneous and rightful rage against their oppressors. Some rebellions, such as Tacky’s War in Jamaica, were documented to be in the works for over a year before the first strike.

Modern Communication, Subversion, and Circumvention

Radio
As technology progressed, the oppressed adapted. During the height of the Civil Rights Movement, radio became an integral part of informing supporters of the movement. While churches may have been centers of gathering outside of worship, the radio was present even in these churches to give signals and other vital info. As Brian Ward notes in Radio and the Struggle for Civil Rights in the South, this info was conveyed in covert ways as well, such as reporting traffic jams to indicate police roadblocks.
Radio made information accessible to those who could not afford newspapers or who were denied access to literacy education under Jim Crow. Black DJs relayed information about protests, misinformation, and police checkpoints. Keeping the community informed and as safe as possible became these DJs’ mission outside of music and propelled them into civic engagement, from protest to walking new Black voters through the voting procedure and system. Radio became a central place to enter a different world past Jim Crow.

WATS Phone Lines
Wide Area Telephone Service (WATS) lines also became a vital tool for the Civil Rights Movement to disperse information during moments that often meant life or death. To circumvent the monopolistic Bell System (“Ma Bell”), which employed only white operators and colluded with law enforcement, vital civil rights organizations used WATS phone lines: dedicated, paid lines, such as 800 numbers, that patched callers directly through to organizations like the Student Nonviolent Coordinating Committee (SNCC), the Congress of Racial Equality (CORE), the Council of Federated Organizations (COFO), and the Southern Christian Leadership Conference (SCLC). These organizations’ bases had code names to use when relaying information to another base via WATS or radio.

Looking at Today: Reverse Surveillance
While Black and other marginalized communities still struggle to communicate despite surveillance, we do have digital tools to help. With encryption widely available, we can now use protected communications with each other for sensitive information. Of course, not everyone today is free to roam or use these services equally. Encryption itself is also under constant risk of being undermined in different areas of the world. Technology can feel nefarious, and “Big Tech” seems to have a constant eye on millions.
In addition, just as with the DJs of the past, current activist groups like Black Lives Matter have used this hypervisibility under Big Tech to bring police brutality into the mainstream conversation and into view in real life. The world has seen police brutality up close because of on-site video, live recordings from phones, and police scanners. Databases like EFF’s Atlas of Surveillance increasingly map police technology in your city. And all of us, whether activists or not, can use tools to scan for the probing of communications during protests.
The Black community has been fighting what is essentially the technological militarization of the police force since the 1990s. While the struggle continues, we have seen recent wins: police use of facial recognition technology is now being limited or banned in many areas of the U.S. With support from groups around the country, we can help close this especially dangerous window of surveillance.
Being able to communicate with each other and organize is embedded in the roots of resistance around the world, but it has a long and important history in the Black community in the United States. Whether online or off, we are keeping a public eye on those who are sworn to serve and protect us, with the hope one day we can freely move without the chains of surveillance and white supremacy. Until then, we’ll continue to see, and to celebrate, the spirit of resistance as well as the creativity of efforts to build and keep a strong line of communication despite surveillance and repression.
Happy Black History Month.
During the pandemic, a dangerous business has prospered: invading students’ privacy with proctoring software and apps. In the last year, we’ve seen universities compel students to download apps that collect their face images, driver’s license data, and network information. Students who want to move forward with their education are sometimes forced to accept being recorded in their own homes and having the footage reviewed for “suspicious” behavior.
Given these invasions, it’s no surprise that students and educators are fighting back against these apps. Last fall, Ian Linkletter, a remote learning specialist at the University of British Columbia, became part of a chorus of critics concerned with this industry.
Now, he’s been sued for speaking out. The outrageous lawsuit—which relies on a bizarre legal theory that linking to publicly viewable videos is copyright infringement—will become an important test of a 2019 British Columbia law passed to defend free speech, the Protection of Public Participation Act, or PPPA.

Sued for Linking
This isn’t the first time U.S.-based Proctorio has taken a particularly aggressive tack in responding to public criticism. In July, Proctorio CEO Mike Olsen publicly posted the chat logs of a student who had complained about the software’s support, sharing the conversation on Reddit, a move he later apologized for.
Shortly after that, Linkletter dove deep into analyzing Proctorio’s app, the software that many students at his university were being forced to adopt. He became concerned about what Proctorio was—and wasn’t—telling students and faculty about how its software works.
In Linkletter’s view, customers and users were not getting the whole story. The software performed all kinds of invasive tracking, like watching for “abnormal” eye movements, head movements, and other behaviors branded suspicious by the company. The invasive tracking and filming were of great concern to Linkletter, who was worried about students being penalized academically on the basis of Proctorio’s analysis.
“I can list a half dozen conditions that would cause your eyes to move differently than other people,” Linkletter said in an interview with EFF. “It’s a really toxic technology if you don’t know how it works.”
In order to make his point clear, Linkletter published some of his criticism on Twitter, where he linked to Proctorio’s own published YouTube videos describing how their software works. In those videos, Proctorio describes its own tracking functions. The videos described functions with titles like “Behaviour Flags,” “Abnormal Head Movement,” and “Record Room.”
Instead of replying to Linkletter’s critique, Proctorio sued him. Even though Linkletter didn’t copy any Proctorio materials, the company says Linkletter violated Canada’s Copyright Act just by linking to its videos. The company also said those materials were confidential, and alleged that Linkletter’s tweets violated the confidentiality agreement between UBC and Proctorio, since Linkletter is a university employee.

Test of New Law
Proctorio’s legal attack on Ian Linkletter is meritless. It’s a classic SLAPP, an acronym that stands for Strategic Lawsuit Against Public Participation. Fortunately, British Columbia’s PPPA is an “anti-SLAPP” law, a kind of statute that is being widely adopted throughout U.S. states and also exists in two Canadian provinces. In Canada, anti-SLAPP laws typically allow a defendant to bring an early challenge to the lawsuit against them on the basis that their speech is on a topic of “public interest.” If the court accepts that characterization, it must dismiss the action—unless the plaintiff can prove that their case has substantial merit, that the defendant has no valid defense, and that the public interest in allowing the suit to continue outweighs the public’s interest in protecting the expression. That’s a very high bar for plaintiffs, and it dramatically changes the dynamics of a typical lawsuit.
Without anti-SLAPP laws, well-funded companies like Proctorio are often able to litigate their critics into silence—even in situations where the critics would have prevailed on the legal merits.
“Cases like this are exactly why anti-SLAPP laws were invented,” said Ren Bucholz, a litigator in Toronto.
Linkletter should prevail here. It isn’t copyright infringement to link to a published video on the open web, and the fact that Proctorio made the video “unlisted” doesn’t change that. Even if Linkletter had copied parts or all of the videos—which he did not—he would have broad fair dealing rights (similar to U.S. "fair use" rights) to criticize the software that has put many UBC students under surveillance in their own homes.
Linkletter had to create a GoFundMe page to pay for much of his legal defense. But Proctorio’s bad behavior has inspired a broad community of people to fight for better student privacy rights, and hundreds of people donated to Linkletter’s defense fund, which raised more than $50,000. And the PPPA gives him a greater chance of getting his fees back.
We hope the PPPA is proven effective in this, one of its first serious tests, and that lawmakers in both the U.S. and Canada adopt laws that prevent such abuses of the litigation system. Meanwhile, Proctorio should cease its efforts to muzzle critics from Vancouver to Ohio.

Legal documents
This event has ended. Click here to watch a recording of the event.
If you make and share things online, professionally or for fun, you’ve been affected by copyright law. You may use a service that depends on the Digital Millennium Copyright Act (DMCA) in order to survive. You may have gotten a DMCA notice if you used part of a movie, TV show, or song in your work. You have almost certainly run up against the weird and draconian world of copyright filters like YouTube’s Content ID. EFF wants to help.
The end of last year was a flurry of copyright news, from the mess with Twitch to the “#StopDMCA” campaign that took off as new copyright proposals became law. The new year has proven that this issue is not going away, as a story emerged about cops using music in what looked like an attempt to trigger copyright filters to take videos of them offline. And throughout the pandemic, people stuck at home have tried to move their creativity online, only to find filters standing in their way. Enough is enough.
Next Friday, February 26th, at 10 AM Pacific, EFF will be hosting a town hall for Internet creators. There have been a lot of actual and proposed changes to copyright law that you should know about and be able to ask questions about.
We will go over the copyright laws that got snuck into the omnibus spending package at the end of last year and what they mean for you. We will also use what we learned in writing our whitepaper on Content ID to help creators understand how it works and what to do with it. Finally, we will talk about the latest copyright proposal, the Digital Copyright Act, and how dangerous it is for online creativity. Most importantly, we will give you a way to stay informed and fight back.
Half of the 90-minute town hall will be devoted to answering your questions and hearing your concerns. Please join us for a conversation about the state of copyright in 2021 and what you need to know about it.
Someone tries to livestream their encounter with the police, only to find that the police start playing music. In the case of a February 5 meeting between an activist and the Beverly Hills Police Department, the song of choice was Sublime’s “Santeria.” The police may not got no crystal ball, but they do seem to have an unusually strong knowledge of copyright filters.
The timing of music being played when a cop saw he was being filmed was not lost on people. It seemed likely that the goal was to trigger Instagram’s over-zealous copyright filter, which would shut down the stream based on the background music and not the actual content. It’s not an unfamiliar tactic, and it’s unfortunately one based on the reality of how copyright filters work.
Copyright filters are generally more sensitive to audio content than to audiovisual content, a sensitivity that causes real problems for people performing, discussing, or reviewing music online. It’s a problem of mechanics: it is easier for a filter to find a match on a piece of audio alone than on a full audiovisual clip. And in many cases, a filter is merely checking whether a few seconds of a video file seem to contain a few seconds of a known audio file.
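To illustrate that mechanic, here is a minimal, purely hypothetical sketch of snippet matching: a reference track is indexed by hashing overlapping windows of samples, and a captured stream is flagged if even a short run of its windows hits the index. Real systems like Content ID use robust acoustic fingerprints rather than exact hashes, and every name and number below is invented for illustration.

```python
# Toy sketch of why filters match on short audio snippets: an index of
# hashed, overlapping windows from a reference track is probed with a
# few seconds of captured audio. Exact hashing stands in for the robust
# acoustic fingerprints that production systems actually use.

WINDOW = 4  # samples per window; real fingerprints span seconds of audio


def index_track(samples):
    """Hash every overlapping window of a reference track."""
    return {hash(tuple(samples[i:i + WINDOW]))
            for i in range(len(samples) - WINDOW + 1)}


def contains_match(index, captured, threshold=2):
    """Flag the capture if enough of its windows hit the index."""
    hits = sum(
        hash(tuple(captured[i:i + WINDOW])) in index
        for i in range(len(captured) - WINDOW + 1)
    )
    return hits >= threshold


reference = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3, 5]    # the "song"
index = index_track(reference)

stream_with_music = [0, 0, 1, 4, 1, 5, 9, 2, 0]  # song snippet in background
stream_without = [7, 7, 7, 8, 8, 8, 7, 7, 7]     # unrelated audio

print(contains_match(index, stream_with_music))  # True
print(contains_match(index, stream_without))     # False
```

Because the index is probed window by window, a few seconds of background music is enough to trigger a match, regardless of what the rest of the stream contains.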
It’s part of why playing music is a better way of getting a video stream you don’t want seen shut down. (The other part is that playing music is easier than walking around with a screen playing a Disney film in its entirety, as much fun as that would be.)
The other side of the coin is how difficult filters make it for musicians to perform music that no one owns. For example, classical musicians filming themselves playing public domain music—compositions that they have every right to play, as they are not copyrighted—attract many matches. This is because the major rightsholders or tech companies have put many examples of copyrighted performances of these songs into the system. It does not seem to matter whether the video shows a different performer playing the song—the match is made on audio alone. This drives lawful use of material offline.
Another problem is that people may have licensed the right to use a piece of music or are using a piece of free music that another work also used. And if that other work is in the filter’s database, it’ll make a match between the two. This results in someone who has all the rights to a piece of music being blocked or losing income. It’s a big enough problem that, in the process of writing our whitepaper on YouTube’s copyright filter, Content ID, we were told that people who had experienced this problem had asked for it to be included specifically.
Filters are so sensitive to music that it is very difficult to make a living discussing music online. The difficulty of getting music clips past Content ID explains the dearth of music commentators on YouTube. It is common knowledge among YouTube creators, with one saying “this is why you don’t make content about music.”
Criticism, commentary, and education of music are all areas that are legally protected by fair use. Using parts of a thing you are discussing to show what you mean is part of effective communication. And while the law does not make fair use of music more difficult to prove than any other kind of work, filters do.
YouTube’s filter does something even more insidious than simply taking down videos, though. When it detects a match, it allows the label claiming ownership to take part or all of the money that the original creator would have made. So a video criticizing a piece of music ends up enriching the party being critiqued. As one music critic explained:
Every single one of my videos will get flagged for something and I choose not to do anything about it, because all they’re taking is the ad money. And I am okay with that, I’d rather make my videos the way they are and lose the ad money rather than try to edit around the Content ID because I have no idea how to edit around the Content ID. Even if I did know, they’d change it tomorrow. So I just made a decision not to worry about it.
This setup is also how a ten-hour white noise video ended up with five copyright claims against it. This taking-from-the-poor-and-giving-to-the-rich is a blatantly absurd result, but it’s the status quo on much of YouTube.
A particularly tech-savvy group, like the police, could easily figure out which songs result in videos being removed outright rather than merely demonetized. Internet creators talk on social media about the issues they run into and which rightsholders are behind them. Some rightsholders are infamously controlling and litigious.
Copyright should not be a fast-track to getting speech removed that you do not like. The law is meant to encourage creativity by giving artists a limited period of exclusive rights to their creations. It is not a way to make money off of criticism or a loophole to be exploited by authorities.
San Francisco - The Electronic Frontier Foundation (EFF) is representing four racial and immigrant justice groups— Just Futures Law, MediaJustice, Mijente Support Committee, and the Immigrant Defense Project—suing the U.S. Departments of Homeland Security and Health and Human Services under the Freedom of Information Act (FOIA) for withholding critical records about the collection and sharing of data during the COVID-19 pandemic.
The four groups all filed FOIA requests for information about COVID-related surveillance and data analysis last year. In particular, the groups are worried about HHS Protect, a vast secretive data platform designed by controversial data software company Palantir. Palantir has a long history of building surveillance systems for the Department of Homeland Security that facilitate criminal prosecutions, family separation, and raids that lead to detention and deportation. In July of last year, the government required all hospitals to report COVID-19 infection data to HHS Protect, instead of the system operated by the Centers for Disease Control.
However, the public has little to no information about COVID-19 data collection and tracking, including on the more than 200 data sources included in HHS Protect. The plaintiffs in this case asked both the Department of Homeland Security and the Department of Health and Human Services for any records describing the data sources, as well as limits on the use of data collected and the duration of retention, but have yet to receive anything responsive to their requests. Without this information, the public cannot evaluate either the efficacy of these invasive technologies now or the risks they might pose in the future.
“Secrecy from the government is not helping us fight this pandemic. We’ve already seen how privacy fears have deterred some from getting important medical care for COVID,” said Steven Renderos, Executive Director of MediaJustice. “Yet the government is still withholding this information. If we can’t say with confidence what the government is doing, we have an uphill battle to protect public health. Immediate answers are essential.”
“We know that the government is collecting huge amounts of health data on us for the purported purpose of public health and combating COVID,” said Julie Mao, Deputy Director from Just Futures Law. “For example, we’ve seen a lot of location data gathered from mobile phones or contact tracing apps, but scientists have questioned the effectiveness of such mass surveillance at mitigating disease spread. The public has the right to know what sensitive information these agencies are collecting and to evaluate its utility.”
The lawsuit demands the government immediately process the groups’ FOIA request, and make the records available to them.
"It's unacceptable that we have no idea how the HHS Protect platform is collecting data or how long it's holding it," said Jacinta Gonzalez, Senior Campaign Organizer with Mijente. "It's imperative that the public understands how personal data is being funnelled into large databases like this and how long that data is being stored. But it's especially critical here, because HHS has a history of sharing personal data with ICE for deportation purposes, to say nothing of the fact that the company that designed this platform, Palantir, is a well-known ICE contractor. The government's secrecy here is very alarming."
“The potential privacy and human rights impact of this data surveillance is deeply concerning,” said Mizue Aizeki, Interim Executive Director of the Immigrant Defense Project. “We cannot allow tech corporations and the government to take advantage of the pandemic to expand surveillance and policing powers. The Department of Health and Human Services is set to spend half a billion dollars on surveillance and data technologies in the coming months and years, so the time for answers is now.”
For the full complaint in Just Futures v DHS:
Just Futures Law (JFL) is a women-of-color led transformative immigration law project rooted in movement lawyering. @justfutureslaw.
MediaJustice is dedicated to building a grassroots movement for a more just and participatory media—fighting for racial, economic, and gender justice in a digital age. MediaJustice boldly advances communication rights, access, and power for communities harmed by persistent dehumanization, discrimination and disadvantage. Home of the #MediaJusticeNetwork, we envision a future where everyone is connected, represented, and free.
Mijente Support Committee is a Latinx/Chicanx political, digital, and grassroots organizing hub. Launched in 2015, Mijente seeks to strengthen and increase the participation of Latino people in the broader movements for racial, economic, climate, and gender justice. @conmijente
The Immigrant Defense Project (IDP) works to secure fairness and justice for immigrants in the racialized U.S. criminal and immigration systems. IDP fights to end the current era of unprecedented mass criminalization, detention and deportation through a multi-pronged strategy including advocacy, litigation, legal support, community partnerships, and strategic communications. @ImmDefense

Contact:
Rebecca Jeschke, Media Relations Director and Digital Rights Analyst, press@eff.org
Julie Mao, Deputy Director, Just Futures Law, julie@justfutureslaw.org
This blog post was co-written by EFF intern Haley Amster.
EFF filed an amicus brief in the U.S. Court of Appeals for the First Circuit urging the court to hold that under the First Amendment public schools may not punish students for their off-campus speech, including posting to social media while off campus.
The Supreme Court has long held that students have the same constitutional rights to speak in their communities as do adults, and this principle should not change in the social media age. In its landmark 1969 student speech decision, Tinker v. Des Moines Independent Community School District, the Supreme Court held that a school could not punish students for wearing black armbands at school to protest the Vietnam War. In a resounding victory for the free speech rights of students, the Court made clear that school administrators are generally forbidden from policing student speech except in a narrow set of exceptional circumstances: when (1) a student’s expression actually causes a substantial disruption on school premises; (2) school officials reasonably forecast a substantial disruption; or (3) the speech invades the rights of other students.
However, because Tinker dealt with students’ antiwar speech at school, the Court did not explicitly address the question of whether schools have any authority to regulate student speech that occurs outside of school. At the time, it may have seemed obvious that students can publish op-eds or attend protests outside of school, and that the school has no authority to punish students for that speech even if it’s highly controversial and even if other students talk about it in school the next day. As we argued in our amicus brief, the Supreme Court’s three student speech cases following Tinker all involved discipline related to speech that may reasonably be characterized as on-campus.
In the social media age, the line between off- and on-campus has been blurred. Students frequently engage in speech on the Internet outside of school, and that speech is then brought into school by students on their smartphones and other mobile devices. Schools are increasingly punishing students for off-campus Internet speech brought onto campus.
In our amicus brief, EFF urged the First Circuit to make clear that schools have no authority under Tinker to police students’ off-campus speech, including when that speech occurs on social media. The case, Doe v. Hopkinton, involves two public high school students, “John Doe” and “Ben Bloggs,” who were suspended for making comments in a private Snapchat group that their school considered to be bullying. Doe and Bloggs filed suit asserting that the school suspension violated their First Amendment rights.
The school made no attempt to show in the lower court that Doe and Bloggs sent the messages at issue while on campus, and the federal judge erroneously concluded that “it does not matter whether any particular message was sent from an on- or off-campus location.”
As we explained in our amicus brief, that conclusion was wrong. Tinker made clear that students’ speech is entitled to First Amendment protection, and authorized schools to punish student speech only in narrow circumstances to ensure the safety and functioning of the school. The Supreme Court has never authorized or suggested that public schools have any authority to reach into students’ private lives and punish them for their speech while off school grounds or after school hours.
This is exactly what another federal appeals court considering this question concluded last summer. In B.L. v. Mahanoy Area School District, a high school student who had failed to advance from junior varsity to the varsity cheerleading squad posted a Snapchat selfie over the weekend with text that said, among other things, “fuck cheer.” One of her Snapchat connections took a screen shot of the post and shared it with the cheerleading coaches, who suspended the student from participation in the junior varsity cheer squad.
The Third Circuit in Mahanoy made clear that the narrow set of circumstances established in Tinker where a school may regulate disruptive student speech applies only to speech uttered at school. As such, it held that schools have no authority to punish students for their off-campus speech—even when that speech “involves the school, mentions teachers or administrators, is shared with or accessible to students, or reaches the school environment.”
This conclusion is especially critical given that students use social media to engage in a wide variety of self-expression, political speech, and activism. As we highlighted in our amicus brief, this includes expressing dissatisfaction with their schools’ COVID-19 safety protocols, calling out instances of racism at schools, and organizing protests against school gun violence. It is essential that courts draw a bright line prohibiting schools from policing off-campus speech so that students can exercise their constitutional rights outside of school without fear that they might be punished for it come Monday morning.
Mahanoy is currently on appeal to the Supreme Court, which will consider the case this spring. We hope that the First Circuit and the Supreme Court will take this opportunity to reaffirm the free speech rights of public-school students and draw clear limits on schools’ ability to police students’ private lives.