EFF: Updates
Decoding Meta's Advertising Policies for Abortion Content
This is the seventh installment in a blog series documenting EFF's findings from the Stop Censoring Abortion campaign. You can read additional posts here.
For users hoping to promote or boost an abortion-related post on Meta platforms, the Community Standards are just step one. While the Community Standards apply to all posts, paid posts and advertisements must also comply with Meta's Advertising Standards. It’s easy to understand why Meta places extra requirements on paid content. In fact, their “advertising policy principles” outline several important and laudable goals, including promoting transparency and protecting users from scams, fraud, and unsafe and discriminatory practices.
But additional standards bring additional content moderation, and with that comes increased potential for user confusion and moderation errors. Meta’s ad policies, like its enforcement policies, are vague on a number of important questions. Because of this, it’s no surprise that Meta's ad policies repeatedly came up as we reviewed our Stop Censoring Abortion submissions.
There are two important things to understand about these ad policies. First, the ad policies do indeed impose stricter rules on content about abortion—and specifically medication abortion—than Meta’s Community Standards do. To help users better understand what is and isn’t allowed, we took a closer look at the policies and what Meta has said about them.
Second, despite these requirements, the ad policies do not categorically block abortion-related posts from being promoted as ads. In other words, while Meta’s ad policies introduce extra hurdles, they should not, in theory, be a complete barrier to promoting abortion-related posts as boosted content. Still, our analysis revealed that Meta is falling short in several areas.
What’s Allowed Under the Drugs and Pharmaceuticals Policy?

When EFF asked Meta about potential ad policy violations, the company first pointed to its Drugs and Pharmaceuticals policy. In the abortion care context, this policy applies to paid content specifically about medication abortion and use of abortion pills. Ads promoting these and other prescription drugs are permitted, but there are additional requirements:
- To reduce risks to consumers, Meta requires advertisers to prove they’re appropriately licensed and get prior authorization from Meta.
- Authorization is limited to online pharmacies, telehealth providers, and pharmaceutical manufacturers.
- The ads also must only target people 18 and older, and only in the countries in which the user is licensed.
Understanding what counts as “promoting prescription drugs” is where things get murky. Crucially, the written policy states that advertisers do not need authorization to run ads that “educate, advocate or give public service announcements related to prescription drugs” or that “promote telehealth services generally.” This should, in theory, leave a critical opening for abortion advocates focused on education and advocacy rather than direct prescription drug sales.
But Meta told EFF that advertisers “must obtain authorization to post ads discussing medical efficacy, legality, accessibility, affordability, and scientific merits and restrict these ads to adults aged 18 or older.” Yet many of these topics—medical efficacy, legality, accessibility—are precisely what educational content and advocacy often address. Where’s the line? This vagueness makes it difficult for abortion pill advocates to understand what’s actually permitted.
What’s Allowed Under the Social Issues Policy?

Meta also told EFF that its Ads about Social Issues, Elections or Politics policy may apply to a range of abortion-related content. Under this policy, advertisers within certain countries—including the U.S.—must meet several requirements before running ads about certain “social issues.” Requirements include:
- Completing Meta’s social issues authorization process;
- Including a verified "Paid for by" disclaimer on the ad; and
- Complying with all applicable laws and regulations.
While certain news publishers are exempt from the policy, it otherwise applies to a wide range of accounts, including activists, brands, non-profit groups and political organizations.
Meta defines “social issues” as “sensitive topics that are heavily debated, may influence the outcome of an election or result in/relate to existing or proposed legislation.” What falls under this definition differs by country, and Meta provides country-specific topics lists and examples. In the U.S. and several other countries, ads that include “discussion, debate, or advocacy for or against...abortion services and pro-choice/pro-life advocacy” qualify as social issues ads under the “Civil and Social Rights” category.
Confusingly, Meta differentiates this from ads that primarily sell a product or promote a service, which do not require authorization or disclaimers, even if the ad secondarily includes advocacy for an issue. For instance, according to Meta's examples, an ad that says, “How can we address systemic racism?” counts as a social issues ad and requires authorization and disclaimers. On the other hand, an ad that says, “We have over 100 newly-published books about systemic racism and Black History now on sale” primarily promotes a product, and would not require authorization and disclaimers. But even with Meta's examples, the line is still blurry. This vagueness invites confusion and content moderation errors.
Oddly, Meta never specifically identified its Health and Wellness ad policy to EFF, though the policy is directly relevant to abortion-related paid content. This policy addresses ads about reproductive health and family planning services, and requires ads regarding “abortion medical consultation and related services” to be targeted at users 18 and older. It also expressly states that for paid content involving “[r]eproductive health and wellness drugs or treatments that require prescription,” accounts must comply with both this policy and the Drugs and Pharmaceuticals policy.
This means abortion advocates must navigate the Drugs and Pharmaceuticals policy, the Social Issues policy, and the Health and Wellness policy—each with its own requirements and authorization processes. That Meta didn’t mention this highly relevant policy when asked about abortion advertising underscores how confusingly dispersed these rules are.
Like the Drugs policy, the Health and Wellness policy contains an important education exception for abortion advocates: The age-targeting requirements do not apply to “[e]ducational material or information about family planning services without any direct promotion or facilitation of the services.”
When Content Moderation Makes Mistakes

Meta's complex policies create fertile ground for automated moderation errors. Our Stop Censoring Abortion survey submissions revealed that Meta's systems repeatedly misidentified educational abortion content as Community Standards violations. The same over-moderation problems are also a risk in the advertising context.
On top of that, content moderation errors even on unpaid posts can trigger advertising restrictions and penalties. Meta's advertising restrictions policy states that Community Standards violations can result in restricted advertising features or complete advertising bans. This creates a compounding problem when educational content about abortion is wrongly flagged. Abortion advocates could face a double penalty: first their content is removed, then their ability to advertise is restricted.
This may be, in part, what happened to Red River Women's Clinic, a Minnesota abortion clinic we wrote about earlier in this series. When its account was incorrectly suspended for violating the “Community Standards on drugs,” the clinic appealed and eventually reached out to a contact at Meta. When Meta finally removed the incorrect flag and restored the account, Red River received a message informing them they were no longer out of compliance with the advertising restrictions policy.
Screenshot submitted by Red River Women's Clinic to EFF
How Meta Can Improve

Our review of the ad policies and survey submissions showed that there is room for improvement in how Meta handles abortion-related advertising.
First, Meta should clarify what is permitted without prior authorization under the Drugs and Pharmaceuticals policy. As noted above, the policies say advertisers do not need authorization to “educate, advocate or give public service announcements,” but Meta told EFF authorization is needed to promote posts discussing “medical efficacy, legality, accessibility, affordability, and scientific merits.” Users should be able to more easily determine what content falls on each side of that line.
Second, Meta should clarify when its Social Issues policy applies. Does discussing abortion at all trigger its application? Meta says the policy excludes posts primarily advertising a service, yet this is not what survey respondent Lynsey Bourke experienced. She runs the Instagram account Rouge Doulas, a global abortion support collective and doula training school. Rouge Doulas had a paid post removed under this very policy for advertising something that is clearly a service: its doula training program called “Rouge Abortion Doula School.” The policy’s current ambiguity makes it difficult for advocates to create compliant content with confidence.
Third, and as EFF has previously argued, Meta should ensure its automated system is not over-moderating. Meta must also provide a meaningful appeals process for when errors inevitably occur. Automated systems are blunt tools and are bound to make mistakes on complex topics like abortion. But simply using an image of a pill on an educational post shouldn’t automatically trigger takedowns. Improving automated moderation will help correct the cascading effect of incorrect Community Standards flags triggering advertising restrictions.
With clearer policies, better moderation, and a commitment to transparency, Meta can make it easier for accounts to share and boost vital reproductive health information.
This is the seventh post in our blog series documenting the findings from our Stop Censoring Abortion campaign. Read more at https://www.eff.org/pages/stop-censoring-abortion
Affected by unjust censorship? Share your story using the hashtag #StopCensoringAbortion. Amplify censored posts and accounts, share screenshots of removals and platform messages—together, we can demonstrate how these policies harm real people.
Protecting Access to the Law—and Beneficial Uses of AI
As the first copyright cases concerning AI reach appeals courts, EFF wants to protect important, beneficial uses of this technology—including AI for legal research. That’s why we weighed in on the long-running case of Thomson Reuters v. ROSS Intelligence. This case raises at least two important issues: the use of (possibly) copyrighted material to train a machine learning AI system, and public access to legal texts.
ROSS Intelligence was a legal research startup that built an AI-based tool for locating judges’ written opinions based on natural language queries—a competitor to ubiquitous legal research platforms like Lexis and Thomson Reuters’ Westlaw. To build its tool, ROSS hired another firm to read through thousands of the “West headnotes” that Thomson Reuters adds to the legal decisions it publishes, paraphrasing the individual legal conclusions (what lawyers call “holdings”) that the headnotes identified. ROSS used those paraphrases to train its tool. Importantly, the ROSS tool didn’t output any West headnotes, or even the paraphrases of those headnotes—it simply directed the user to the original judges’ decisions. Still, Thomson sued ROSS for copyright infringement, arguing that using the headnotes without permission was illegal.
Early decisions in the suit were encouraging. EFF wrote about how the court allowed ROSS to bring an antitrust counterclaim against Thomson Reuters, letting them try to prove that Thomson was abusing monopoly power. And the trial judge initially ruled that ROSS’s use of the West headnotes was fair use under copyright law.
The case then took several turns for the worse. ROSS was unable to prove its antitrust claim. The trial judge issued a new opinion reversing his earlier decision and finding that ROSS’s use was not fair but rather infringed Thomson’s copyrights. And in the meantime, ROSS had gone out of business (though it continues to defend itself in court).
The court’s new decision on copyright was particularly worrisome. It ruled that West headnotes—a few lines of text copying or summarizing a single legal conclusion from a judge’s written opinion—could be copyrighted, and that using them to train the ROSS tool was not fair use, in part because ROSS was a competitor to Thomson Reuters. And the court rejected ROSS’s attempt to avoid any illegal copying by using a “clean room” procedure often used in software development. The decision also threatens to limit the public’s access to legal texts.
EFF weighed in with an amicus brief joined by the American Library Association, the Association of Research Libraries, the Internet Archive, Public Knowledge, and Public.Resource.Org. We argued that West headnotes are not copyrightable in the first place, since they simply restate individual points from judges’ opinions with no meaningful creative contributions. And even if copyright does attach to the headnotes, we argued, the source material is entirely factual statements about what the law is, and West’s contribution was minimal, so fair use should have tipped in ROSS’s favor. The trial judge had found that the factual nature of the headnotes favored ROSS, but dismissed this factor as unimportant, effectively writing it out of the law.
This case is one of the first to touch on copyright and AI, and is likely to influence many of the other cases that are already pending (with more being filed all the time). That’s why we’re trying to help the appeals court get this one right. The law should encourage the creation of AI tools to digest and identify facts for use by researchers, including facts about the law.
Towards the 10th Summit of the Americas: Concerns and Recommendations from Civil Society
This post is an adapted version of the article originally published at Silla Vacía
Heads of state and governments of the Americas will gather this December at the Tenth Summit of the Americas in the Dominican Republic to discuss challenges and opportunities facing the region’s nations. The gathering is part of the Summit of the Americas process, which held its first meeting in 1994; the theme of this year’s summit is “Building a Secure and Sustainable Hemisphere with Shared Prosperity.”
More than twenty civil society organizations, including EFF, released a joint contribution ahead of the summit addressing the intersection between technology and human rights. Although the meeting's concept paper is silent about the role of digital technologies in the scope of this year's summit, the joint contribution stresses that the development and use of technologies is a cross-cutting issue and will likely be integrated into policies and actions agreed upon at the meeting.
Human Security, Its Core Dimensions, and Digital Technologies
The concept paper indicates that people in the Americas, like the rest of the world, are living in times of uncertainty and geopolitical, socioeconomic, and environmental challenges that require urgent actions to ensure human security in multiple dimensions. It identifies four key areas: citizen security, food security, energy security, and water security.
The potential of digital technologies cuts across these areas of concern and will very likely be considered in the measures, plans, and policies that states take up in the context of the summit, both at the national level and through regional cooperation. Yet, when harnessing the potential of emerging technologies, their challenges also surface. For example, AI algorithms can help predict demand peaks and manage energy flows in real time on power grids, but the infrastructure required for the growing and massive operation of AI systems itself poses challenges to energy security.
In Latin America, the imperative of safeguarding rights in the face of already documented risks and harmful impacts stands out particularly in citizen security. The abuse of surveillance powers, enhanced by digital technologies, is a recurring and widespread problem in the region.
It is intertwined with deep historical roots of a culture of secrecy and permissiveness that obstructs implementing robust privacy safeguards, effective independent oversight, and adequate remedies for violations. The proposal in the concept paper for creating a Hemispheric Platform of Action for Citizen and Community Security cannot ignore—and above all, must not reinforce—these problems.
It is crucial that the notion of security embedded in the Tenth Summit's focus on human security be based on human development, the protection of rights, and the promotion of social well-being, especially for historically discriminated against groups. It is also essential that it moves away from securitization and militarization, which have been used for social control, silencing dissent, harassing human rights defenders and community leaders, and restricting the rights and guarantees of migrants and people in situations of mobility.
Toward Regional Commitments Anchored in Human Rights
In light of these concerns, the joint contribution signed by EFF, Derechos Digitales, Wikimedia Foundation, CELE, ARTICLE 19 – Office for Mexico and Central America, among other civil society organizations, addresses the following:
- The importance of strengthening the digital civic space, which requires robust digital infrastructure and policies for connectivity and digital inclusion, as well as civic participation and transparency in the formulation of public policies.
- Challenges posed by the growing surveillance capabilities of states in the region through the increasing adoption of ever more intrusive technologies and practices without necessary safeguards.
- State obligations established under the Inter-American Human Rights System and key standards affirmed by the Inter-American Court in the case of Members of the Jose Alvear Restrepo Lawyers Collective (CAJAR) v. Colombia.
- A perspective on state digitalization and innovation centered on human rights, based on thorough analysis of current problems and gaps and their detrimental impacts on people. The insufficiency or absence of meaningful mechanisms for public participation, transparency, and evaluation are striking features of various experiences across countries in the Americas.
Finally, the contribution makes recommendations for regional cooperation, promoting shared solutions and joint efforts at the regional level anchored in human rights, justice, and inclusion.
We hope the joint contribution reinforces a human rights-based perspective across the debates and agreements at the summit. At a time when security-related abuses facilitated by digital technologies abound, regional cooperation towards shared prosperity must take these risks into account and put justice and people's well-being at the center of any unfolding initiatives.
EFF Urges Virginia Court of Appeals to Require Search Warrants to Access ALPR Databases
This post was co-authored by EFF legal intern Olivia Miller.
For most Americans, driving is a part of everyday life. Practically speaking, many of us drive to work, school, play, and anywhere in between. Not only do we visit places that give insights into our personal lives, but we sometimes use vehicles as a mode of displaying information about our political beliefs, socioeconomic status, and other intimate details.
All of this personal activity can be tracked and identified through Automatic License Plate Reader (ALPR) data—a popular surveillance tool used by law enforcement agencies across the country. That’s why, in an amicus brief filed with the Virginia Court of Appeals, EFF, the ACLU of Virginia, and NACDL urged the court to require police to seek a warrant before searching ALPR data.
In Commonwealth v. Church, a police officer in Norfolk, Virginia searched license plate data without a warrant—not to prove that defendant Ronnie Church was at the scene of the crime, but merely to try to show he had a “guilty mind.” The lower court, in a one-page ruling relying on Commonwealth v. Bell, held this warrantless search violated the Fourth Amendment and suppressed the ALPR evidence. We argued the appellate court should uphold this decision.
Like the cellphone location data the Supreme Court protected in Carpenter v. United States, ALPR data threatens peoples’ privacy because it is collected indiscriminately over time and can provide police with a detailed picture of a person’s movements. ALPR data includes photos of license plates, vehicle make and model, any distinctive features of the vehicle, and precise time and location information. Once an ALPR logs a car’s data, the information is uploaded to the cloud and made accessible to law enforcement agencies at the local, state, and federal level—creating a near real-time tracking tool that can follow individuals across vast distances.
Think police only use ALPRs to track suspected criminals? Think again. ALPRs are ubiquitous; every car traveling into the camera’s view generates a detailed dataset, regardless of any suspected criminal activity. In fact, a survey of 173 law enforcement agencies employing ALPRs nationwide revealed that 99.5% of scans belonged to people who had no association with crime.
Norfolk County, Virginia, is home to over 170 ALPR cameras operated by Flock, a surveillance company that maintains over 83,000 ALPRs nationwide. The resulting surveillance network is so large that Norfolk County’s police chief suggested “it would be difficult to drive any distance and not be recorded by one.”
Recent and near-horizon advancements in Flock’s products will continue to threaten our privacy and further the surveillance state. For example, Flock’s ALPR data has been used for immigration raids, to track individuals seeking abortion-related care, to conduct fishing expeditions, and to identify relationships between people who may be traveling together but in different cars. With the help of artificial intelligence, ALPR databases could be aggregated with other information from data breaches and data brokers, to create “people lookup tools.” Even public safety advocates and law enforcement, like the International Association of Chiefs of Police, have warned that ALPR tech creates a risk “that individuals will become more cautious in their exercise of their protected rights of expression, protest, association, political participation because they consider themselves under constant surveillance.”
This is why a warrant requirement for ALPR data is so important. As the Virginia trial court previously found in Bell, prolonged tracking of public movements with surveillance invades peoples’ reasonable expectation of privacy in the entirety of their movements. Recent Fourth Amendment jurisprudence, including Carpenter and Leaders of a Beautiful Struggle from the federal Fourth Circuit Court of Appeals, favors a warrant requirement as well. Like the technologies at issue in those cases, ALPRs give police the ability to chronicle movements in a “detailed, encyclopedic” record, akin to “attaching an ankle monitor to every person in the city.”
The Virginia Court of Appeals has a chance to draw a clear line on warrantless ALPR surveillance, and to tell Norfolk PD what the Fourth Amendment already says: come back with a warrant.
Chat Control Is Back on the Menu in the EU. It Still Must Be Stopped
The European Union Council is once again debating its controversial message scanning proposal, aka “Chat Control,” which would lead to the scanning of the private conversations of billions of people.
Chat Control, which EFF has strongly opposed since it was first introduced in 2022, keeps being mildly tweaked and pushed by one Council presidency after another.
Chat Control is a dangerous legislative proposal that would make it mandatory for service providers, including end-to-end encrypted communication and storage services, to scan all communications and files to detect “abusive material.” This would happen through a method called client-side scanning, which scans for specific content on a device before it’s sent. In practice, Chat Control is chat surveillance: it works by granting access to everything on a device and indiscriminately monitoring all of it. In a memo, the Danish Presidency claimed this does not break end-to-end encryption.
This is absurd.
We have written extensively that client-side scanning fundamentally undermines end-to-end encryption, and obliterates our right to private spaces. If the government has access to one of the “ends” of an end-to-end encrypted communication, that communication is no longer safe and secure. Pursuing this approach is dangerous for everyone, but is especially perilous for journalists, whistleblowers, activists, lawyers, and human rights workers.
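To make the objection concrete, here is a minimal, purely illustrative Python sketch (all names and the hash-matching design are hypothetical simplifications, not any real proposal's code) of why client-side scanning defeats end-to-end encryption: the scanner runs on the device before encryption, so it sees the plaintext no matter how strong the cipher is.

```python
import hashlib

# Hypothetical database of "flagged" content hashes the scanner checks against.
FLAGGED_HASHES = {hashlib.sha256(b"example flagged content").hexdigest()}

def client_side_scan(plaintext: bytes) -> bool:
    """Runs on the user's device BEFORE encryption, so it sees the plaintext."""
    return hashlib.sha256(plaintext).hexdigest() in FLAGGED_HASHES

def report_to_authority(plaintext: bytes) -> None:
    # The "end" of the end-to-end channel is no longer private.
    print("reported:", plaintext)

def encrypt(plaintext: bytes) -> bytes:
    # Stand-in for a real E2EE cipher; its strength is irrelevant here,
    # because the scan above already saw the plaintext.
    return bytes(b ^ 0x42 for b in plaintext)

def send_message(plaintext: bytes) -> bytes:
    if client_side_scan(plaintext):
        report_to_authority(plaintext)
    return encrypt(plaintext)  # encryption happens too late to help

# The ciphertext is opaque to outsiders, but the flagged message
# was already inspected and reported before it was ever encrypted.
send_message(b"example flagged content")
```

The point of the sketch is structural, not cryptographic: whatever matching technique is used, any scanner positioned before encryption has plaintext access, which is exactly what end-to-end encryption exists to prevent.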
If passed, Chat Control would undermine the privacy promises of end-to-end encrypted communication tools, like Signal and WhatsApp. The proposal is so dangerous that Signal has stated it would pull its app out of the EU if Chat Control is passed. Proponents even seem to realize how dangerous this is, because state communications are exempt from this scanning in the latest compromise proposal.
This doesn’t just affect people in the EU, it affects everyone around the world, including in the United States. If platforms decide to stay in the EU, they would be forced to scan the conversations of everyone in the EU. If you’re not in the EU, but you chat with someone who is, then your privacy is compromised too. Passing this proposal would pave the way for authoritarian and tyrannical governments around the world to follow suit with their own demands for access to encrypted communication apps.
Even if you take it in good faith that the government would never do anything wrong with this power, events like Salt Typhoon show there’s no such thing as a system that’s only for the “good guys.”
Despite strong opposition, Denmark is pushing forward and taking its current proposal to the Justice and Home Affairs Council meeting on October 14th.
We urge the Danish Presidency to drop its push for scanning our private communication and consider fundamental rights concerns. Any draft that compromises end-to-end encryption and permits scanning of our private communication should be blocked or voted down.
Phones and laptops must work for the users who own them, not act as “bugs in our pockets” in the service of governments, foreign or domestic. The mass scanning of everything on our devices is invasive, untenable, and must be rejected.
After Years Behind Bars, Alaa Is Free at Last
Alaa Abd El Fattah is finally free and at home with his family. On September 22, it was announced that Egyptian President Abdel Fattah al-Sisi had issued a pardon for Alaa’s release after six years in prison. One day later, the BBC shared video of Alaa dancing with his family in their Cairo home and hugging his mother Laila and sister Sanaa, as well as other visitors.
Alaa's sister, Mona Seif, posted on X: "An exceptionally kind day. Alaa is free."
Alaa has spent most of the last decade behind bars, punished for little more than his words. In June 2014, Egypt accused him of violating its protest law and attacking a police officer. He was convicted in absentia and sentenced to fifteen years in prison, after being prohibited from entering the courthouse. Following an appeal, Alaa was granted a retrial, and sentenced in February 2015 to five years in prison. In 2019, he was finally released, first into police custody then to his family. As part of his parole, he was told he would have to spend every night of the next five years at a police station, but six months later—on September 29, 2019—Alaa was re-arrested in a massive sweep of activists and charged with spreading false news and belonging to a terrorist organisation after sharing a Facebook post about torture in Egypt.
Despite that sentence effectively ending on September 29, 2024, one year ago today, Egyptian authorities continued his detention, stating that he would be released in January 2027—violating both international legal norms and Egypt’s own domestic law. As Amnesty International reported, Alaa faced inhumane conditions during his imprisonment, “including denial of access to lawyers, consular visits, fresh air, and sunlight,” and his family repeatedly spoke of concerns about his health, particularly during periods in which he engaged in hunger strike.
When Egyptian authorities failed to release Alaa last year, his mother, Laila Soueif, launched a hunger strike. Her action stretched to an astonishing 287 days, during which she was hospitalized twice in London and nearly lost her life. She continued until July of this year, when she finally ended the strike following direct commitments from UK officials that Alaa would be freed.
Throughout this time, a broad coalition, including EFF, rallied around Alaa: international human rights organizations, senior UK parliamentarians, former British Ambassador John Casson, and fellow former political prisoner Nazanin Zaghari-Ratcliffe all lent their voices. Celebrities joined the call, while the UN Working Group on Arbitrary Detention declared his imprisonment unlawful and demanded his release. This groundswell of solidarity was decisive in securing his release.
Alaa’s release is an extraordinary relief for his family and all who have campaigned on his behalf. EFF wholeheartedly celebrates Alaa’s freedom and reunification with his family.
But we must remain vigilant. Alaa must be allowed to travel to the UK to be reunited with his son Khaled, who currently lives with his mother and attends school there. Furthermore, we continue to press for the release of those who remain imprisoned for nothing more than exercising their right to speak.
Fair Use Protects Everyone—Even the Disney Corporation
Jimmy Kimmel has been in the news a lot recently, which means the ongoing lawsuit against him by perennial late-night punching bag/convicted fraudster/former congressman George Santos flew under the radar. But what happened in that case is an essential illustration of the limits of both copyright law and the “fine print” terms of service on websites and apps.
What happened was this: Kimmel and his staff saw that Santos was on Cameo, which allows people to purchase short videos from various public figures with requested language. Usually it’s something like “happy birthday” or “happy retirement.” In the case of Kimmel and his writers, they set out to see if there was anything they couldn’t get Santos to say on Cameo. For this to work, they obviously didn’t disclose that it was Jimmy Kimmel Live! asking for the videos.
Santos did not like the segment, which aired clips of these videos, called “Will Santos Say It?”. He sued Kimmel, ABC, and ABC’s parent company, Disney. He alleged both copyright infringement and breach of contract—the contract in this case being Cameo’s terms of service. He lost on all counts, twice: his case was dismissed at the district court level, and then that dismissal was upheld by an appeals court.
On the copyright claim, Kimmel and Disney argued and won on the grounds of fair use. The court cited precedent that fair use excuses what might be strictly seen as infringement if such a finding would “stifle the very creativity” that copyright is meant to promote. In this case, the use of the videos was part of the ongoing commentary by Jimmy Kimmel Live! around whether there was anything Santos wouldn’t say for money. Santos tried to argue that since this was their purpose from the outset, the use wasn’t transformative. Which... isn’t how it works. Santos’ purpose was, presumably, to fulfill a request sent through the app. The show’s purpose was to collect enough examples of a behavior to show a pattern and comment on it.
Santos tried to say that their not disclosing what the reason was invalidated the fair use argument because it was “deceptive.” But the court found that the record didn’t show that the deception was designed to replace the market for Santos’s Cameos. It bears repeating: commenting on the quality of a product or the person making it is not legally actionable interference with a business. If someone tells you that a movie, book, or, yes, Cameo isn’t worth anything because of its ubiquity or quality and shows you examples, that’s not a deceptive business practice. In fact, undercover quality checks and reviews are fairly standard practices! Is this a funnier and more entertaining example than a restaurant review? Yes. That doesn’t make it unprotected by fair use.
It’s nice to have this case as a reminder that, despite everything the major studios often argue, fair use protects everyone, including them. Don’t hold your breath on them remembering this the next time someone tries to make a YouTube review of a Hollywood movie using clips.
Another claim from this case that is less obvious but just as important involves the Cameo terms of service. We often see contracts being used to restrict people’s fair use rights. Cameo offers different kinds of videos for purchase. The most well-known kind comes with a personal use license: the “happy birthdays” and so on. They also offer a “commercial” use license, presumably for when you want to use the videos to generate revenue, as with an ad or paid endorsement. However, in this case, the court found that the terms of service are a contract between a customer and Cameo, not between the customer and the video maker. Cameo’s terms of service explicitly lay out when their terms apply to the person selling a video, and they don’t create a situation where Santos can use those terms to sue Jimmy Kimmel Live! According to the court, the terms don’t even imply a shared understanding and contract between the two parties.
It's so rare to find a situation where the wall of text that most terms of service consist of actually helps protect free expression; it’s a pleasant surprise to see it here.
In general, we at EFF hate it when these kinds of contracts—you know the ones, where you hit accept after scrolling for ages just so you can use the app—are used to constrain users’ rights. Fair use is supposed to protect us all from overly strict interpretations of copyright law, but abusive terms of service can erode those rights. We’ll keep fighting for those rights and the people who use them, even if the one exercising fair use is Disney.
The Abortion Hotline Meta Wants to Go Dark
This is the sixth installment in a blog series documenting EFF's findings from the Stop Censoring Abortion campaign. You can read additional posts here.
When we started our Stop Censoring Abortion campaign, we heard from activists, advocacy organizations, researchers, and even healthcare providers who had all experienced having abortion-related content removed or suppressed on social media. One of the submissions we received was from an organization called the Miscarriage and Abortion Hotline.
The Miscarriage and Abortion Hotline (M+A Hotline), formed in 2019, is staffed by a team of healthcare providers who wanted to provide free and confidential “expert advice on various aspects of miscarriage and abortion, ensuring individuals receive accurate information and compassionate support throughout their journey.” By 2022, the hotline was receiving between 25 and 45 calls and texts a day.
Like many reproductive health, rights, and justice groups, the M+A Hotline is active on social media, sharing posts that affirm the voices and experiences of abortion seekers, assert the safety of medication abortion, and spread the word about the expert support that the hotline offers. However, in late March of this year, the M+A Hotline’s Instagram suddenly had numerous posts taken down and was hit with restrictions that prevented the account from starting or joining livestreams or creating ads until June 25, 2025.
Screenshots provided to EFF from M+A Hotline
The reason behind the restrictions and takedowns, according to Meta, was that the M+A Hotline’s Instagram account failed to follow Meta’s guidelines on the sale of illegal or regulated goods. The “guidelines” refer to Meta’s Community Standards, which dictate the types of content that are allowed on Facebook, Instagram, Messenger, and Threads. But according to Meta, it is not against these Community Standards to provide guidance on how to legally access pharmaceutical drugs, and such guidance is treated differently from an offer to buy, sell, or trade pharmaceuticals (though there are additional compliance requirements for paid ads).
Under these rules, the M+A Hotline’s content should have been fine: The Hotline does not sell medication abortion and simply educates on the efficacy and safety of medication abortion while providing guidance on how abortion seekers could legally access the pills. Despite this, around 10 posts from the account were removed by Instagram, none of which were ads.
In a letter to Amnesty International in February 2024, Meta publicly clarified that organic content on its platforms that educates users about medication abortion is not in violation of the Community Standards. The company claims that the policies are “based on feedback from people and the advice of experts in fields like technology, public safety and human rights.” The Community Standards are thorough and there are sections covering everything from bullying and harassment to account integrity to restricted goods and services. Notably, within the several webpages that make up the Community Standards, there are very few mentions of the words “abortion” and “reproductive health.” For how little the topic is mentioned in these Standards, content about abortion seems to face extremely high scrutiny from Meta.
Screenshots provided to EFF from M+A Hotline
Not only were posts removed, but even after further review, many were not restored. The M+A Hotline was once again told that their content violates the Community Standards on drugs. While it’s understandable that moderation systems may make mistakes, it’s unacceptable for those mistakes to be repeated consistently with little transparency or direct communication with the users whose speech is being restricted and erased. This problem is only made worse by a lack of helpful recourse. As seen here, even when users request review and identify these moderation errors, Meta may still refuse to restore posts that are permitted under the Community Standards.
The removal of the M+A Hotline’s educational content demonstrates that Meta must be more accurate, consistent, and transparent in the enforcement of its Community Standards, especially in regard to reproductive health information. Informing users that medical professionals are available to support those navigating a miscarriage or abortion is plainly not an attempt to buy or sell pharmaceutical drugs. Meta must clearly define, and then fairly enforce, what is and isn’t permitted under its Standards. This includes ensuring there is a meaningful way to quickly rectify any moderation errors through the review process.
At a time when attacks on online access to information—and particularly abortion information—are intensifying, Meta must not exacerbate the problem by silencing healthcare providers and suppressing vital health information. We must all continue to fight back against online censorship.
This is the sixth post in our blog series documenting the findings from our Stop Censoring Abortion campaign. Read more in the series: https://www.eff.org/pages/stop-censoring-abortion
California: Tweet at Governor Newsom to Get A.B. 566 Signed Into Law
We need your help to make a common-sense bill into California law. Despite the fact that California has one of the nation’s most comprehensive data privacy laws, it’s not always easy for people to exercise those privacy rights. A.B. 566 intends to make it easy by directing browsers to give all their users the option to tell companies they don’t want personal information that’s collected about them on the internet to be sold or shared. Now, we just need Governor Gavin Newsom to sign it into law by October 13, 2025, and this toolkit will help us put on the pressure. Tweet at Gov. Gavin Newsom and help us get A.B. 566 signed into law!
First, pick your platform of choice. Reach Gov. Newsom at any of his social media handles:
- X: @CAgovernor
- Bluesky: @governor.ca.gov
- TikTok: @cagovernor
- Facebook: @CAgovernor
Then, pick a message that resonates with you. Or, feel free to remix!
Sample Posts
- It should be easy for Californians to exercise our rights under the California Consumer Privacy Act, but major internet browser companies are making it difficult for us to do that. @CAgovernor, sign AB 566 and give power to the consumers to protect their privacy!
- We are living in a time of mass surveillance and tracking. Californian consumers should be able to easily control their privacy and AB 566 would make that possible. @CAgovernor, sign AB 566 and ensure that millions of Californians can opt out of the sale and sharing of their private information!
- People seeking abortion care, immigrants, and LGBTQ+ people are at risk of bad actors using their online activity against them. @CAgovernor could sign AB 566 and protect the privacy of vulnerable communities and all Californians.
- AB 566 gives Californians a practical way to use their right to opt-out of websites selling or sharing their private info. @CAgovernor can sign it and give consumers power over their privacy choices under the California Consumer Privacy Act.
- Hey @CAgovernor! AB 566 makes it easy for Californians to tell companies what they want to happen with their own private information. Sign it and make the California Consumer Privacy Act more user-friendly!
- Companies haven’t made it easy for Californians to tell them not to sell or share our personal information. We need AB 566 so that browsers MUST give users the option to easily opt out of this data sharing. @CAgovernor, sign AB 566!
- Major browsers have made it hard for Californians to opt out of the share and sale of their private info. Right now, consumers must individually opt out at every website they visit. AB 566 can change that by requiring browsers to create one single opt-out preference, but @CAgovernor MUST sign it!
- It should be easy for Californians to opt out of the share and sale of their private info, such as health info, immigration status, and political affiliation, but browsers have made it difficult. @CAgovernor can sign AB 566 and give power to consumers to more easily opt out of this data sharing.
- Right now, if a Californian wants to tell companies not to sell or share their info, they must go through the processes set up by each company, ONE BY ONE, to opt out of data sharing. AB 566 can remove that burden. @CAgovernor, sign AB 566 to empower consumers!
- Industry groups who want to keep the scales tipped in favor of corporations who want to profit off the sale of our private info have lobbied heavily against AB 566, a bill that will make it easy for Californians to tell companies what they want to happen with their own info. @CAgovernor—sign it!
