EFF: Updates

EFF's Deeplinks Blog: Noteworthy news from around the internet

What Apple's Promise to Support RCS Means for Text Messaging

Wed, 01/31/2024 - 4:51pm

You may have heard recently that Apple is planning to implement Rich Communication Services (RCS) on iPhones, once again igniting the green versus blue bubble debate. RCS will thankfully bring a number of long-missing features to those green bubble conversations in Messages, but Apple's proposed implementation has a murkier future when it comes to security. 

The RCS standard will replace SMS, the protocol behind basic everyday text messages, and MMS, the protocol for sending pictures in text messages. RCS improves on SMS in a number of ways, including longer messages, high-quality pictures, read receipts, typing indicators, GIFs, location sharing, the ability to send and receive messages over Wi-Fi, and better group messaging. Basically, it's a modern messaging standard with the features people have grown to expect.

The RCS standard is being worked on by the same standards body (GSMA) that wrote the standard for SMS and many other core mobile functions. It has been in the works since 2007 and supported by Google since 2019. Apple had previously said it wouldn’t support RCS, but recently came around and declared that it will support sending and receiving RCS messages starting some time in 2024. This is a win for user experience and interoperability, since now iPhone and Android users will be able to send each other rich modern text messages using their phone’s default messaging apps. 

But is it a win for security? 

On its own, the core RCS protocol is currently no more secure than SMS. The protocol is not encrypted by default, meaning that anyone at your phone company, or any law enforcement agent (ordinarily with a warrant), can see the contents and metadata of your RCS messages. The RCS protocol by itself does not specify or recommend any type of end-to-end encryption. The only encryption messages get is the incidental transport encryption between your phone and the cell tower, which is the same way SMS works.

But what’s exciting about RCS is its native support for extensions. Google has taken advantage of this to implement its own encryption on top of RCS, using a version of the Signal protocol. As of now, this only works when both users are on Google's default messaging app (Google Messages) and both of their phone companies support RCS messaging (the big three in the U.S. all do, as do a majority around the world). If either side does not support encryption, the conversation falls back to the default unencrypted version. A phone company could also actively block encrypted RCS in a specific region, or for a specific user or pair of users, by pretending it doesn't support RCS; in that case the user is given the option of resending the messages unencrypted, but can choose not to send them over the unencrypted channel. Google's implementation of encrypted RCS also doesn't hide any metadata about your messages, so law enforcement could still get a record of who you conversed with, how many messages were sent, at what times, and how big the messages were. It's a significant security improvement over SMS, but people with heightened risk profiles should still consider apps that leak less metadata, like Signal. Despite those caveats, this is a good step by Google toward a fully encrypted text messaging future.

Apple stated it will not use any type of proprietary end-to-end encryption—presumably referring to Google's approach—but did say it would work to make end-to-end encryption part of the RCS standard. Avoiding a discordant ecosystem, with a different encryption protocol for each company, is a desirable goal. Ideally, Apple and Google will work together on standardizing end-to-end encryption in RCS so that the solution is guaranteed to work with both companies' products from the outset. Hopefully encryption will be part of the RCS standard by the time Apple officially ships support for it; otherwise users will be left with the status quo of relying on third-party apps for interoperable encrypted messaging.

We hope that the GSMA members will agree on a standard soon, that any standard will use modern cryptographic techniques, and that it will do more to protect metadata and guard against downgrade attacks than the current implementation of encrypted RCS. We urge Google and Apple to work with the GSMA to finalize and adopt such a standard quickly. Interoperable, encrypted text messaging by default can't come soon enough.

Dozens of Rogue California Police Agencies Still Sharing Driver Locations with Anti-Abortion States

Wed, 01/31/2024 - 2:56pm
Civil Liberties Groups Urge Attorney General Bonta to Enforce California's Automated License Plate Reader Laws

SAN FRANCISCO—California Attorney General Rob Bonta should crack down on police agencies that still violate Californians’ privacy by sharing automated license plate reader information with out-of-state government agencies, putting abortion seekers and providers at particular risk, the Electronic Frontier Foundation (EFF) and the state’s American Civil Liberties Union (ACLU) affiliates urged in a letter to Bonta today. 

In October 2023, Bonta issued a legal interpretation and guidance clarifying that a 2016 state law, SB 34, prohibits California’s local and state police from sharing information collected from automated license plate readers (ALPR) with out-of-state or federal agencies. However, despite the Attorney General’s definitive stance, dozens of law enforcement agencies have signaled their intent to continue defying the law. 

The EFF and ACLU letter lists 35 police agencies that either have informed the civil liberties organizations they plan to keep sharing ALPR information with out-of-state law enforcement, or have failed to confirm their compliance with the law in response to the organizations' inquiries.

“We urge your office to explore all potential avenues to ensure that state and local law enforcement agencies immediately comply,” the letter said. “We are deeply concerned that the information could be shared with agencies that do not respect California’s commitment to civil rights and liberties and are not covered by California’s privacy protections.” 

ALPR systems collect and store location information about drivers: the dates, times, and places their vehicles were seen. This sensitive information can reveal where individuals work, live, associate, worship, or seek reproductive health services and other medical care. Sharing any ALPR information with out-of-state or federal law enforcement agencies has been forbidden by the California Civil Code since the enactment of SB 34 in 2016.

And sharing this data with law enforcement in states that criminalize abortion also undermines California’s extensive efforts to protect reproductive health privacy, especially a 2022 law (AB 1242) prohibiting state and local agencies from providing abortion-related information to out-of-state agencies. The UCLA Center on Reproductive Health, Law and Policy estimates that between 8,000 and 16,100 people will travel to California each year for reproductive care. 

An EFF investigation involving hundreds of public records requests uncovered that many California police departments continued sharing records containing residents’ detailed driving profiles with out-of-state agencies. EFF and the ACLUs of Northern and Southern California in March 2023 wrote to more than 70 such agencies to demand they comply with state law. While many complied, many others have not. 

“We appreciate your office’s statement on SB 34 and your efforts to protect the privacy and civil rights of everyone in California,” today’s letter said. “Nevertheless, it is clear that many law enforcement agencies continue to ignore your interpretation of the law by continuing to share ALPR information with out-of-state and federal agencies. This violation of SB 34 will continue to imperil marginalized communities across the country, and abortion seekers, providers, and facilitators will be at greater risk of undue criminalization and prosecution.” 

For the letter to Bonta: https://www.eff.org/document/01-31-2024-letter-california-ag-rob-bonta-re-enforcing-sb34-alprs 

For the letters sent last year to noncompliant California police agencies: https://www.eff.org/press/releases/civil-liberties-groups-demand-california-police-stop-sharing-drivers-location-data 

For information on how ALPRs threaten abortion access: https://www.eff.org/deeplinks/2022/09/automated-license-plate-readers-threaten-abortion-access-heres-how-policymakers 

For general information about ALPRs: https://sls.eff.org/technologies/automated-license-plate-readers-alprs

Contact: Jennifer Pinsof, Staff Attorney, jpinsof@eff.org; Adam Schwartz, Privacy Litigation Director, adam@eff.org

EFF and Access Now's Submission to U.N. Expert on Anti-LGBTQ+ Repression 

Wed, 01/31/2024 - 10:06am

As a contribution to the report of the United Nations (U.N.) Independent Expert on protection against violence and discrimination based on sexual orientation and gender identity (IE SOGI) to the U.N. Human Rights Council, EFF and Access Now have submitted information addressing digital rights and SOGI issues across the globe.

The submission addresses the trends, challenges, and problems that people and civil society organizations face based on their real and perceived sexual orientation, gender identity, and gender expression. Our examples underscore the extensive impact of anti-LGBTQ+ laws on the community, and the urgent need for legislative reform at the domestic level.

Read the full submission here.

In Final Talks on Proposed UN Cybercrime Treaty, EFF Calls on Delegates to Incorporate Protections Against Spying and Overcriminalization or Reject Convention

Mon, 01/29/2024 - 12:42pm

UN Member States are meeting in New York this week to conclude negotiations over the final text of the UN Cybercrime Treaty, which—despite warnings from hundreds of civil society organizations across the globe, security researchers, media rights defenders, and the world’s largest tech companies—will, in its present form, endanger human rights and make the cyber ecosystem less secure for everyone.

EFF and its international partners are going into this last session with a unified message: without meaningful changes to limit surveillance powers for electronic evidence gathering across borders, and without robust minimum human rights safeguards that apply across borders, the convention should be rejected by state delegations and not advance to the UN General Assembly in February for adoption.

EFF and its partners have for months warned that enforcement of such a treaty would have dire consequences for human rights. On a practical level, it will impede free expression and endanger activists, journalists, dissenters, and everyday people.

Under the draft treaty's current provisions on accessing personal data for criminal investigations across borders, each country is allowed to define what constitutes a "serious crime." Such definitions can be excessively broad and violate international human rights standards. States where it's a crime to criticize political leaders (Thailand), upload videos of yourself dancing (Iran), or wave a rainbow flag in support of LGBTQ+ rights (Egypt) can, under this UN-sanctioned treaty, demand that other countries and technology companies assist them in surveilling people under investigation for these offenses and turn over personal information, location data, and private communications, in secret and with no guardrails.

The final 10-day negotiating session in New York will conclude a series of talks that started in 2022 to create a treaty to prevent and combat core computer-enabled crimes, like distribution of malware, data interception and theft, and money laundering. From the beginning, Member States failed to reach consensus on the treaty's scope, the inclusion of human rights safeguards, and even the definition of "cybercrime." The scope of the entire treaty was too broad from the very beginning; Member States eventually narrowed the scope of the criminalization section, but not the evidence-gathering provisions that hand States dangerous surveillance powers. What was supposed to be an international accord to combat core cybercrime morphed into a global surveillance agreement covering any and all crimes conceived by Member States.

The latest draft, released last November, blatantly disregards our calls to narrow the scope, strengthen human rights safeguards, and tighten loopholes enabling countries to assist each other in spying on people. It also retains a controversial provision allowing states to compel engineers or tech employees to undermine security measures, posing a threat to encryption. Absent from the draft are protections for good-faith cybersecurity researchers and others acting in the public interest.

This is unacceptable. In a Jan. 23 joint statement to delegates participating in this final session, EFF and 110 organizations outlined non-negotiable redlines for the draft that will emerge from this session, which ends Feb. 8. These include:

  • Narrowing the scope of the entire Convention to cyber-dependent crimes specifically defined within its text.
  • Including provisions to ensure that security researchers, whistleblowers, journalists, and human rights defenders are not prosecuted for their legitimate activities and that other public interest activities are protected. 
  • Guaranteeing that explicit data protection and human rights standards, such as legitimate purpose, nondiscrimination, prior judicial authorization, necessity, and proportionality, apply to the entire Convention.
  • Mainstreaming gender across the Convention as a whole and throughout each article in efforts to prevent and combat cybercrime.

It’s been a long fight pushing for a treaty that combats cybercrime without undermining basic human rights. Without these improvements, the risks of this treaty far outweigh its potential benefits. States must stand firm and reject the treaty if our redlines can’t be met. We cannot and will not support or recommend a draft that will make everyone less, instead of more, secure.

More Than a Decade Later, Site-Blocking Is Still Censorship

Fri, 01/26/2024 - 2:45pm

We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, addressing what's at stake and what we need to do to make sure that copyright promotes creativity and innovation.

As Copyright Week comes to a close, it's worth remembering why we hold it in January. Twelve years ago, a diverse coalition of internet users, websites, and public interest activists took to the internet to protest SOPA/PIPA, proposed laws that would have, among other things, blocked access to websites merely alleged to be used for copyright infringement. More than a decade on, there is still no way to do this without causing irreparable harm to legal online expression.

A lot has changed in twelve years. Among those changes is a major shift in how we, and legislators, view technology companies. What once were new innovations have become behemoths. And what once were underdogs are now the establishment.

What has not changed, however, is the fact that much of what internet platforms are used for is legal, protected expression. Moreover, the typical users of those platforms are those without access to the megaphones of major studios, record labels, or publishers. Any attempt to resurrect SOPA/PIPA—no matter what it is rebranded as—remains a threat to that expression.

Site-blocking, sometimes called a “no-fault injunction,” functionally allows a rightsholder to prevent access to an entire website based on accusations of copyright infringement. Not just access to the alleged infringement, but the entire website. It is using a chainsaw to trim your nails.

We are all so used to the Digital Millennium Copyright Act (DMCA) and the safe harbor it provides that we sometimes forget how extraordinary the relief it provides really is. Instead of providing proof of their claims to a judge or jury, rightsholders merely have to contact a website with their honest belief that their copyright is being infringed, and the allegedly infringing material will be taken down almost immediately. That is a vast difference from traditional methods of shutting down expression.

Site-blocking would go even further, bypassing the website and getting internet service providers to deny their customers access to a website. This clearly imperils the expression of those not even accused of infringement, and it’s far too blunt an instrument for the problem it’s meant to solve. We remain opposed to any attempts to do this. We have a long memory, and twelve years isn’t even that long.

Save your Twitter Account

Thu, 01/25/2024 - 7:02pm

We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, addressing what's at stake and what we need to do to make sure that copyright promotes creativity and innovation.

Amid reports that X—the site formerly known as Twitter—is dropping in value, hindering how people use the site, and engaging in controversial account removals, it has never been more precarious to rely on the site as a historical record. So, it’s important for individuals to act now and save what they can. While your tweets may feel ephemeral or inconsequential, they are part of a greater history in danger of being wiped out.

Any centralized communication platform, particularly one operated for profit, is vulnerable to being coopted by the powerful. This might mean exploiting users to maximize short-term profits or changing moderation rules to silence marginalized people and promote hate speech. The past year has seen unprecedented numbers of users fleeing X, Reddit, and other platforms over changes in policy.

But leaving these platforms, whether in protest, disgust, or boredom, leaves behind an important digital record of how communities come together and grow.

Archiving tweets isn’t just for Dril and former presidents. In its heyday, Twitter was an essential platform for activists, organizers, journalists, and other everyday people around the world to speak truth to power and fight for social justice. Its importance for movements and community building was noted by oppressive governments around the world, forcing the site to ward off data requests and authoritarian speech suppression.

A prominent example in the U.S. is the movement for Black Lives: activists built momentum on the site and found effective strategies to bring global attention to their protests. Already, though, #BlackLivesMatter tweets from 2014 are vanishing from X, and the site seems to be blocking and disabling the tools archivists use to preserve this history.

In documenting social movements, we must remember social media is not an archive; platforms will only store (and gatekeep) user work insofar as it's profitable, just as they only make it accessible to the public when doing so is profitable. But when platforms fail, with them goes the history of everyday voices speaking to power, the very voices organizations like EFF fought to protect. The voice of power, in contrast, remains well documented.

In the battleground of history, archival work is cultural defense. Luckily, digital media can be quickly and cheaply duplicated and shared. In just a few minutes of your time, the following easy steps will help preserve not just your history, but the history of your community and the voices you supported.

1. Request Your Archive

Despite the many new restrictions on Twitter access, the site still allows users to back up their entire profile in just a few clicks.

  • First, in your browser or the X app, navigate to Settings. Look for the three-dots icon, which may be labeled "More" in the sidebar.

  • Select Settings and Privacy, then Your Account, if it is not already open.

  • Here, click Download an archive of your data.

  • You'll be prompted to sign into your account again, and X will need to send a verification code to your email or text message. Verifying with email may be more reliable, particularly for users outside of the US.

  • Select Request archive.

  • Finally, wait. This process can take a few days; you will receive an email once your archive is ready. Follow the link in that email while logged in and download the ZIP files.
2. Optionally, Share with a Library or Archive.

There are many libraries, archives, and community groups who would be interested in preserving these archives. You may want to reach out to a librarian to help find one curating a collection specific to your community.

You can also request that your archive be preserved by the Internet Archive's Wayback Machine; the steps below are specific to the Internet Archive. We recommend using a desktop computer or laptop, rather than a mobile device.

  • Unpack the ZIP file you downloaded in the previous section.
  • In the data folder, select the tweets.js file, which contains just your tweets. Raw JSON is difficult to read, but it can be converted to a CSV file and viewed in a spreadsheet program like Excel or, as a free alternative, LibreOffice Calc. The steps below use the Wayback Machine's tool to do that conversion; a sketch of doing it locally follows this list.
  • With your account and tweets.js file ready, go to Save Page Now's Google Sheet Interface and select "Archive all your Tweets with the Wayback Machine."

  • Fill in your Twitter handle, select the tweets.js file from the previous steps, and click "Upload."

  • After some processing, you will be able to download the CSV file.
  • Import this CSV to a new Google Sheet. All of this information is already public on Twitter, but if you notice very sensitive content, you can remove those lines. Otherwise it is best to leave the information unaltered.
  • Then, use Save Page Now's Google Sheet Interface again to archive from the sheet made in the previous step.
  • It may take hours or days for this request to fully process, but once it is complete you will get an email with the results.
  • Finally, the Wayback Machine will give you the option to preserve all of your outlinks as well, archiving every website URL you shared on Twitter. This is an easy way to further preserve the messages you've promoted over the years.
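
If you'd rather do the JSON-to-CSV conversion yourself instead of uploading tweets.js, here is a minimal Python sketch of one way to do it. It assumes the historical Twitter archive layout, in which tweets.js begins with a JavaScript assignment like "window.YTD.tweets.part0 = [...]" and wraps each entry in a "tweet" key; the file and column names here are illustrative, so adjust them to match your export.

    import csv
    import json

    # Read the raw tweets.js file from the unpacked archive.
    with open("tweets.js", encoding="utf-8") as f:
        raw = f.read()

    # Strip the JavaScript assignment prefix so the rest parses as plain JSON.
    tweets = json.loads(raw[raw.index("["):])

    # Write a small CSV that spreadsheet programs can open directly.
    with open("tweets.csv", "w", newline="", encoding="utf-8") as out:
        writer = csv.writer(out)
        writer.writerow(["id", "created_at", "text"])
        for entry in tweets:
            tweet = entry.get("tweet", entry)  # archives wrap each entry in a "tweet" key
            writer.writerow([tweet.get("id_str", ""),
                             tweet.get("created_at", ""),
                             tweet.get("full_text", "")])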
3. Personal Backup Plan

Now that you have a ZIP file with all of your Twitter data, including public and private information, you may want to have a security plan on how to handle this information. This plan will differ for everyone, but these are a few steps to consider.

If you only wish to preserve the public information, and you have already shared it with an archive, you can simply delete your downloaded copy. For anything you would like to keep but that may be sensitive, consider encrypting the file and keeping it on a secure device; one possible approach is sketched below.
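
As one example of that encryption step, this minimal Python sketch encrypts the downloaded ZIP with a symmetric key using the cryptography library's Fernet recipe. The file names are illustrative, and a dedicated tool like GnuPG or VeraCrypt works just as well; whatever you choose, store the key or passphrase somewhere safer than the archive itself, such as a password manager.

    # Requires: pip install cryptography
    from cryptography.fernet import Fernet

    # Generate a fresh key; anyone holding this key can decrypt the archive.
    key = Fernet.generate_key()
    print("Store this key in a safe place:", key.decode())

    # Read the archive into memory and encrypt it (fine for modestly sized files).
    with open("twitter-archive.zip", "rb") as f:
        ciphertext = Fernet(key).encrypt(f.read())

    # Write the encrypted copy; delete the plaintext ZIP once you've verified it.
    with open("twitter-archive.zip.enc", "wb") as f:
        f.write(ciphertext)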

Finally, even if this information is not sensitive, you'll want to be sure you have a solid backup plan. If you are still using Twitter, this means deciding on a schedule to repeat this process so your archive is up to date. Otherwise, you'll want to keep a few copies of the file across several devices. If you already have a plan for backing up your PC, this may not be necessary.

4. Closing Your Account

Finally, you'll want to consider what to do with your current Twitter account now that all your data is backed up and secure.

(If you are planning on leaving X, make sure to follow EFF on Mastodon, Bluesky or another platform.)

Since you have a backup, it may be a good idea to request that your data be deleted from the site. You can try to delete just the most sensitive information, like your account DMs, but there's no guarantee Twitter will honor these requests—or that it's even capable of honoring such requests. Even EU citizens covered by the GDPR will need to request the deletion of their entire account.

If you aren’t concerned about Twitter keeping this information, however, there is some value in keeping your old account up. Holding the username can prevent impersonators, and listing your new social media account will help people on the site find you elsewhere. In our guide to joining Mastodon, we recommended sharing your new account in several places. However, adding the new account to your Twitter display name will give it the best visibility across search engines, screenshots, and alternative front ends like Nitter.

Tell the FTC: It's Time to Act on the Right to Repair

Thu, 01/25/2024 - 6:22pm

Do you care about being able to fix and modify your stuff? Then it's time to speak up and tell the Federal Trade Commission that you care about your right to repair.

As we have said before, you own what you buy—and you should be able to do what you want with it. That should be the end of the story, whether we're talking about a car, a tractor, a smartphone, or a computer. If something breaks, you should be able to fix it yourself, or choose who you want to take care of it for you.

The Federal Trade Commission has just opened a 30-day comment period on the right to repair, and it needs to hear from you. If you have a few minutes to share why the right to repair is important to you, or a story about something you own that you haven't been able to fix the way you want, click here and tell the agency what it needs to hear.

Take Action

Tell the FTC: Stand up for our Right to Repair

If you’re not sure what to say, there are three topics that matter most for this petition. The FTC should:

  • Make repair easy
  • Make repair parts available and reasonably priced
  • Label products with their ease of repairability

If you have a personal story of why right to repair matters to you, let them know!

This is a great moment to ask for the FTC to step up. We have won some huge victories in state legislatures across the country in the past several years, with good right-to-repair bills passing in California, Minnesota, Colorado, and Massachusetts. Apple, long a critic, has come out in favor of right to repair.

With the wind at our backs, it's time for the FTC to consider nationwide solutions, such as making parts and resources more available to everyday people and independent repair shops.

EFF has worked for years with our friends at organizations including U.S. PIRG (Public Interest Research Group) and iFixit to make it easier to tinker with your stuff. We're proud to support their call to the FTC to work on right to repair, and hope you'll add your voice to the chorus.

Join the (currently) 700 people making their voices heard.

Take Action

Tell the FTC: Stand up for our Right to Repair

 

San Francisco: Vote No on Proposition E to Stop Police from Testing Dangerous Surveillance Technology on You

Thu, 01/25/2024 - 1:14pm

San Francisco voters will confront a looming threat to their privacy and civil liberties on the March 5, 2024 ballot. If Proposition E passes, we can expect the San Francisco Police Department (SFPD) will use untested and potentially dangerous technology on the public, any time they want, for a full year without oversight. How do we know this? Because the text of the proposition explicitly permits this, and because a city government proponent of the measure has publicly said as much.

[Embedded video: https://www.youtube.com/embed/XbC3QOmVz3M, starting at 0:53. Privacy info: this embed will serve content from youtube.com.]

While discussing Proposition E at a November 13, 2023 Board of Supervisors meeting, the city employee said the new rule “authorizes the department to have a one-year pilot period to experiment, to work through new technology to see how they work.” Just watch the video above if you want to witness it being said for yourself.

Any privacy or civil liberties proponent should find this statement appalling. Police should know how technologies work (or if they work) before they deploy them on city streets. They also should know how these technologies will impact communities, rather than taking a deploy-first and ask-questions-later approach—which all but guarantees civil rights violations.

This ballot measure would erode San Francisco's landmark 2019 surveillance ordinance that requires city agencies, including the police department, to seek approval from the democratically elected Board of Supervisors before acquiring or deploying new surveillance technologies. Agencies also must provide a report to the public about exactly how the technology would be used. This is not just an important way of making sure people who live or work in the city have a say in surveillance technologies that could be used to police their communities; it's also, by any measure, a commonsense and reasonable provision.

However, the new ballot initiative attempts to gut the 2019 surveillance ordinance. The measure says “…the Police Department may acquire and/or use a Surveillance Technology so long as it submits a Surveillance Technology Policy to the Board of Supervisors for approval by ordinance within one year of the use or acquisition, and may continue to use that Surveillance Technology after the end of that year unless the Board adopts an ordinance that disapproves the Policy…” In other words, police would be able to deploy virtually any new surveillance technology they wished for a full year without any oversight, accountability, transparency, or semblance of democratic control.

This ballot measure would turn San Francisco into a laboratory where police are given free rein to use the most unproven, dangerous technologies on residents and visitors without regard for criticism or objection. That’s one year of police having the ability to take orders from faulty and racist algorithms. One year during which police could potentially contract with companies that buy up geolocation data from millions of cellphones and sift through the data.

Trashing important oversight mechanisms that keep police from acting without democratic checks and balances will not make the city safer. With all of the mind-boggling, dangerous, nearly science-fiction surveillance technologies currently available to local police, we must ensure that the medicine doesn't end up doing more damage to the patient than the disease. But that's exactly what will happen if Proposition E passes and police are able to expose already marginalized and over-surveilled communities to a new and less accountable generation of surveillance technologies.

So, tell your friends. Tell your family. Shout it from the rooftops. Talk about it with strangers when you ride MUNI or BART. We have to get organized so we can, as a community, vote NO on Proposition E on the March 5, 2024 ballot. 

What Home Videotaping Can Tell Us About Generative AI

Wed, 01/24/2024 - 4:04pm

We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, addressing what's at stake and what we need to do to make sure that copyright promotes creativity and innovation.


It’s 1975. Earth, Wind and Fire rule the airwaves, Jaws is on every theater screen, All In the Family is must-see TV, and Bill Gates and Paul Allen are selling software for the first personal computer, the Altair 8800.

But for copyright lawyers, and eventually the public, something even more significant is about to happen: Sony starts selling the Betamax, a home videotape recorder, or VTR. Suddenly, people have the power to store TV programs and watch them later. Does work get in the way of watching your daytime soap operas? No problem, record them and watch when you get home. Want to watch the game but hate to miss your favorite show? No problem. Or, as an ad Sony sent to Universal Studios put it, “Now you don’t have to miss Kojak because you’re watching Columbo (or vice versa).”

What does all of this have to do with Generative AI? For one thing, the reaction to the VTR was very similar to today’s AI anxieties. Copyright industry associations ran to Congress, claiming that the VTR "is to the American film producer and the American public as the Boston strangler is to the woman home alone" – rhetoric that isn’t far from some of what we’ve heard in Congress on AI lately. And then, as now, rightsholders also ran to court, claiming Sony was facilitating mass copyright infringement. The crux of the argument was a new legal theory: that a machine manufacturer could be held liable under copyright law (and thus potentially subject to ruinous statutory damages) for how others used that machine.

The case eventually worked its way up to the Supreme Court, and in 1984 the Court rejected the copyright industry’s rhetoric and ruled in Sony’s favor. Forty years later, at least two aspects of that ruling are likely to get special attention.

First, the Court observed that where copyright law has not kept up with technological innovation, courts should be careful not to expand copyright protections on their own. As the decision reads:

Congress has the constitutional authority and the institutional ability to accommodate fully the varied permutations of competing interests that are inevitably implicated by such new technology. In a case like this, in which Congress has not plainly marked our course, we must be circumspect in construing the scope of rights created by a legislative enactment which never contemplated such a calculus of interests.

Second, the Court borrowed from patent law the concept of “substantial noninfringing uses.” In order to hold Sony liable for how its customers used their VTRs, rightsholders had to show that the VTR was simply a tool for infringement. If, instead, the VTR was “capable of substantial noninfringing uses,” then Sony was off the hook. The Court held that the VTR fell into the latter category because it was used for private, noncommercial time-shifting, and that time-shifting was a lawful fair use. The Court even quoted Fred Rogers, who testified that home taping of children's programs served an important function for many families.

That rule helped unleash decades of technological innovation. If Sony had lost, Hollywood would have been able to legally veto any tool that could be used for infringing as well as non-infringing purposes. With Congress's help, it has found ways to effectively do so anyway, such as through Section 1201 of the DMCA. Nonetheless, Sony remains a crucial judicial protection for new creativity.

Generative AI may test the enduring strength of that protection. Rightsholders argue that generative AI toolmakers directly infringe when they use copyrighted works as training data. That use is very likely to be found lawful. The more interesting question is whether toolmakers are liable if customers use the tools to generate infringing works. To be clear, the users themselves may well be liable, but they are less likely to have the kind of deep pockets that make litigation worthwhile. Under Sony, however, the key question for the toolmakers will be whether their tools are capable of substantial non-infringing uses. The answer to that question is surely yes, which should preclude most of the copyright claims.

But there’s risk here as well: if any of these cases reaches its doors, the Supreme Court could overturn Sony. Hollywood certainly hoped it would do so when it considered the legality of peer-to-peer file-sharing in MGM v. Grokster. EFF and many others argued hard for the opposite result. Instead, the Court side-stepped Sony altogether in favor of creating a new form of secondary liability for “inducement.”

The current spate of litigation may end with multiple settlements, or Congress may decide to step in. If not, the Supreme Court (and a lot of lawyers) may get to party like it’s 1975. Let’s hope the justices choose once again to ensure that copyright maximalists don’t get to control our technological future.

Related Cases: MGM v. Grokster

Victory! Ring Announces It Will No Longer Facilitate Police Requests for Footage from Users

Wed, 01/24/2024 - 2:09pm

Amazon’s Ring has announced that it will no longer facilitate warrantless police requests for footage from Ring users. This is a victory in a long fight, not just against blanket police surveillance, but also against a culture in which private, for-profit companies build special tools that let law enforcement more easily access companies’ users and their data—all of which ultimately undermines their customers’ trust.

Years ago, after public outcry and a lot of criticism from EFF and other organizations, Ring ended its practice of allowing police to automatically send requests for footage to a user's email inbox, opting instead for a system where police had to publicly post requests on Ring's Neighbors app. Now, Ring will hopefully be out of the business of platforming casual and warrantless police requests for footage from its users altogether. This is a step in the right direction, but it comes after years of cozy relationships with police and irresponsible handling of data (for which Ring reached a settlement with the FTC). We also helped push Ring to implement end-to-end encryption. Ring has been forced to make some important concessions—but we still believe the company must do more. Ring could make end-to-end encryption the default on its devices and turn off default audio collection, which reports have shown captures audio from greater distances than initially assumed. We also remain deeply skeptical about law enforcement's and Ring's ability to determine what is, or is not, an emergency that requires the company to hand over footage without a warrant or user consent.

Despite this victory, the fight for privacy, and to end Ring's historic ill effects on society, isn't over. The mass existence of doorbell cameras, whether subsidized and organized into registries by cities or connected and centralized through technologies like Fusus, will continue to threaten civil liberties and exacerbate racial discrimination. Many other companies have also learned from Ring's early marketing tactics and have sought to create a new generation of police-advertisers who promote the purchase and adoption of their technologies. This announcement will also not stop police from trying to get Ring footage directly from device owners without a warrant. Ring users should know that when police knock on their door, they have the right to—and should—request that police get a warrant before handing over footage.

Fragging: The Subscription Model Comes for Gamers

Tue, 01/23/2024 - 7:24pm

We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, addressing what's at stake and what we need to do to make sure that copyright promotes creativity and innovation.

The video game industry is undergoing the same concerning changes we’ve seen before with film and TV, and it underscores the need for meaningful digital ownership.

Twenty years ago, you owned DVDs. Ten years ago, you probably had a Netflix subscription with a seemingly endless library. Now, you probably have two to three subscription services, and regularly hear about shows and movies you can no longer access, either because they've moved to yet another subscription service, or because platforms are delisting them altogether.

The video game industry is getting the same treatment. While it is still common for people to purchase physical or digital copies of games, albeit often from within walled gardens like Steam or Epic Games, game subscriptions are becoming more and more prevalent. Like the early days of movie streaming, services like Microsoft Game Pass or PlayStation Plus seem to offer a good deal. For a flat monthly fee, you have access to seemingly unlimited game choices. That is, for now.

In a recent announcement from game developer Ubisoft, its director of subscriptions said plainly that a goal of the subscription service's rebranding is to get players "comfortable" with not owning their games. Notably, this is from a company that developed five non-mobile games last year, hoping users will access them, and older titles, through a $17.99 per month subscription; that is, $215.88 per year. And after a year, how many games does the end user actually own? None.

This fragmentation of the video game subscription market isn't just driven by greed; it also answers a real frustration among users that the industry itself created. Gamers at one point could easily buy and return games, could rent games they were merely curious about, and could even recoup costs by reselling their games. With the proliferation of DRM and walled-garden game vendors, those ownership rights have been eroded. Reselling or giving away a copy of your game, or leaving it to your next of kin, is no longer permitted. The closest thing to a rental now available is a game demo (if one exists) or playing a game within the time frame necessary to get a refund (if a storefront offers one). Even these purchases are put at risk, since games are sometimes released in an incomplete state that only becomes apparent after the refund window closes. Developers such as Ubisoft will also shut down online services, which can severely impact the features of these games or even make them unplayable.

DRM and tightly controlled gaming platforms also make it harder to mod or tweak games in ways the platform doesn’t choose to support. Mods are a thriving medium for extending the functionalities, messages, and experiences facilitated by a base game, one where passion has driven contributors to design amazing things with a low barrier to entry. Mods depend on users who have the necessary access to a work to understand how to mod it and to deploy mods when running the program. A model wherein the player can only access these aspects of the game in the ways the manufacturer supports undermines the creative rights of owners as well.

This shift should raise alarms for both users and creators alike. With publishers serving as intermediaries, game developers are left either struggling to reach their audience, or settling for a fraction of the revenue they could receive from traditional sales. 

We need to preserve digital ownership before we see video games fall into the same cycles as film and TV, with users stuck paying more and receiving not robust ownership, but fragile access on the platform’s terms.

EFF and More Than 100 NGOs Set Non-Negotiable Redlines Ahead of UN Cybercrime Treaty Negotiations

Tue, 01/23/2024 - 9:44am

EFF has joined forces with 110 NGOs today in a joint statement delivered to the United Nations Ad Hoc Committee, clearly outlining civil society's non-negotiable redlines for the proposed UN Cybercrime Treaty, and asserting that states should reject the proposed treaty if these essential changes are not implemented.

The latest draft, published on November 6, 2023, does not adequately ensure adherence to human rights law and standards. Initially focused on cybercrime, the proposed Treaty has alarmingly evolved into an expansive surveillance tool.

Katitza Rodriguez, EFF Policy Director for Global Privacy, asserts ahead of the upcoming concluding negotiations:

The proposed treaty needs more than just minor adjustments; it requires a more focused, narrowly defined approach to tackle cybercrime. This change is essential to prevent the treaty from becoming a global surveillance pact rather than a tool for effectively combating core cybercrimes. With its wide-reaching scope and invasive surveillance powers, the current version raises serious concerns about cross-border repression and potential police overreach. Above all, human rights must be the treaty's cornerstone, not an afterthought. If states can't unite on these key points, they must outright reject the treaty.

Historically, cybercrime legislation has been exploited to target journalists and security researchers, suppress dissent and whistleblowers, endanger human rights defenders, limit free expression, and justify unnecessary and disproportionate state surveillance measures. We are concerned that the proposed Treaty, as it stands now, will exacerbate these problems. The treaty's concluding session will be held at UN Headquarters in New York from January 29 to February 10. EFF will be attending in person.

The joint statement specifically calls on States to narrow the scope of criminalization provisions to well-defined cyber-dependent crimes; shield security researchers, whistleblowers, activists, and journalists from being prosecuted for their legitimate activities; explicitly include language on international human rights law, data protection, and gender mainstreaming; limit the scope of the domestic criminal procedural measures and international cooperation to the core cybercrimes established in the criminalization chapter; and address concerns that the current draft could weaken cybersecurity and encryption. It also calls for specific safeguards, such as the principles of prior judicial authorization, necessity, legitimate aim, and proportionality.

The Public Domain Benefits Everyone – But Sometimes Copyright Holders Won’t Let Go

Mon, 01/22/2024 - 4:36pm

Every January, we celebrate the addition of formerly copyrighted works to the public domain. You’ve likely heard that this year’s crop of public domain newcomers includes Steamboat Willie, the 1928 cartoon that marked Mickey Mouse’s debut. When something enters the public domain, you’re free to copy, share, and remix it without fear of a copyright lawsuit. But the former copyright holders aren’t always willing to let go of their “property” so easily. That’s where trademark law enters the scene.

Unlike copyright, trademark protection has no fixed expiration date. Instead, it works on a “use it or lose it” model. With some exceptions, the law will grant trademark protection for as long as you keep using that mark to identify your products. This actually makes sense when you understand the difference between copyright and trademark. The idea behind copyright protection is to give creators a financial incentive to make new works that will benefit the public; that incentive needn’t be eternal to be effective. Trademark law, on the other hand, is about consumer protection. The function of a trademark is essentially to tell you who a product came from, which helps you make informed decisions and incentivizes quality control. If everyone were allowed to use that same mark after some fixed period, it would stop serving that function.

So, what’s the problem? Since trademarks don’t expire, we see former copyright holders of public domain works turn to trademark law as a way to keep exerting control. In one case we wrote about, a company claiming to own a trademark in the name of a public domain TV show called “You Asked For It” sent takedown demands targeting everything from episodes of the show, to remix videos using show footage, to totally unrelated uses of that common phrase. Other infamous examples include disputes over alleged trademarks in elements from Peter Rabbit and Tarzan. Now, with Steamboat Willie in the public domain, Disney seems poised to do the same. It’s already alluded to this in public statements, and in 2022, it registered a trademark for Walt Disney Animation Studios that incorporates a snippet from the cartoon.

The news isn’t all bad: trademark protection is in some ways more limited than copyright—it only applies to uses that are likely to confuse consumers about the use’s connection to the mark owner. And importantly, the U.S. Supreme Court has made clear that trademark law cannot be used to control the distribution of creative works, lest it spawn “a species of mutant copyright law” that usurps the public’s right to copy and use works in the public domain. (Of course, that doesn’t mean companies won’t try it.) So go forth and make your Steamboat Willie art, but beware of trademark lawyers waiting in the wings.

The PRESS Act Will Protect Journalists When They Need It Most

Mon, 01/22/2024 - 2:45pm

Our government shouldn’t be spying on journalists. Nor should law enforcement agencies force journalists to identify their confidential sources or go to prison. 

To fix this, we need to change the law. Now, we’ve got our best chance in years. The House of Representatives has passed the Protect Reporters from Exploitive State Spying (PRESS) Act, H.R. 4250, and it’s one of the strongest federal shield bills for journalists we’ve seen. 

Take Action

Tell Congress To Pass the PRESS Act Now

The PRESS Act would do two critical things: first, it would bar federal law enforcement from surveilling journalists by gathering their phone, messaging, or email records. Second, it would strictly limit when the government can force a journalist to disclose their sources.

Since its introduction, the bill has had strong bipartisan support. And such “shield” laws for reporters have vast support across the U.S., with 49 states and the District of Columbia all having some type of law that prevents journalists from being forced to hand over their files to assist in criminal prosecutions, or even private lawsuits. 

While journalists are well protected in many states, federal law currently lacks such protections. That has had serious consequences for journalists, and for all Americans' right to freely access information.

Multiple Presidential Administrations Have Abused Laws To Spy On Journalists

The Congressional report on this bill details abuses against journalists by each of the past three presidential administrations. Federal law enforcement officials have improperly acquired reporters' phone records on numerous occasions since 2004, under both Democratic and Republican administrations.

On at least 12 occasions since 1990, law enforcement threatened journalists with jail or home confinement for refusing to give up their sources; some reporters served months in jail. 

Elected officials must do more about these abuses than preside over after-the-fact apologies. 

PRESS Act Protections

The PRESS Act bars the federal government from surveilling journalists through their phones, email providers, or other online services. These digital protections are critical because they reflect how journalists operate in the field today. The bill restricts subpoenas aimed not just at the journalists themselves, but also at their phone and email providers. Its exceptions are narrow and targeted.

The PRESS Act also has an appropriately broad definition of the practice of journalism, covering both professional and citizen journalists. It applies regardless of a journalist’s political leanings or medium of publication. 

The government surveillance of journalists over the years has chilled journalists' ability to gather news. It has also likely discouraged sources from coming forward, because their anonymity isn't guaranteed. We can't know the important stories that weren't published, or weren't published in time, because journalists or their sources feared retaliation.

In addition to EFF, the PRESS Act is supported by a wide range of press and rights groups, including the ACLU, the Committee to Protect Journalists, the Freedom of the Press Foundation, the First Amendment Coalition, the News Media Alliance, the Reporters Committee for Freedom of the Press, and many others. 

Our democracy relies on the rights of both professional journalists and everyday citizens to gather and publish information. The PRESS Act is a long overdue protection. We have sent Congress a clear message to pass it; please join us by sending your own email to the Senate using our links below. 

Take Action

Tell Congress To Pass the PRESS Act Now

It's Copyright Week 2024: Join Us in the Fight for Better Copyright Law and Policy

Mon, 01/22/2024 - 2:12pm

We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, addressing what's at stake and what we need to do to make sure that copyright promotes creativity and innovation.

Copyright law affects so much of our daily lives, and new technologies have only helped make everyone more and more aware of it. For example, while 1998’s Digital Millennium Copyright Act helped spur the growth of platforms for creating and sharing art, music and literature, it also helped make the phrase “blocked due to a claim by the copyright holder” so ubiquitous.

Copyright law helps shape the movies we watch, the books we read, and the music we listen to. But it also impacts everything from who can fix a tractor to what information is available to us to how we communicate online. Given that power, it's crucial that copyright law and policy serve everyone.

Unfortunately, that’s not the way it tends to work. Instead, copyright law is often treated as the exclusive domain of major media and entertainment industries. Individual artists don’t often find that copyright does what it is meant to do, i.e. “promote the progress of science and useful arts” by giving them a way to live off of the work they’ve done. The promise of the internet was to help eliminate barriers between creators and audiences, so that voices that traditional gatekeepers ignored could still find success. Through copyright, those gatekeepers have found ways to once again control what we see.

Twelve years ago, a diverse coalition of Internet users, non-profit groups, and Internet companies defeated the Stop Online Piracy Act (SOPA) and the PROTECT IP Act (PIPA), bills that would have forced Internet companies to blacklist and block websites accused of hosting copyright-infringing content. These bills would have made censorship very easy, all in the name of copyright protection.

We continue to fight for a version of copyright that truly serves the public interest. And so, every year, EFF and a number of diverse organizations participate in Copyright Week. Each year, we pick five copyright issues to highlight and promote a set of principles that should guide copyright law and policy. This year’s issues are:

  • Monday: Public Domain
    The public domain is our cultural commons and a crucial resource for innovation and access to knowledge. Copyright should strive to promote, and not diminish, a robust, accessible public domain.
  • Tuesday: Device and Digital Ownership 
    As the things we buy increasingly exist either in digital form or as devices with software, we also find ourselves subject to onerous licensing agreements and technological restrictions. If you buy something, you should be able to truly own it – meaning you can learn how it works, repair it, remove unwanted features, or tinker with it to make it work in a new way.
  • Wednesday: Copyright and AI
    The growing availability of AI, especially generative AI trained on datasets that include copyrightable material, has raised new debates about copyright law. It’s important to remember the limitations of copyright law in giving the kind of protections creators are looking for.
  • Thursday: Free Expression and Fair Use 
    Copyright policy should encourage creativity, not hamper it. Fair use makes it possible for us to comment, criticize, and rework our common culture.
  • Friday: Copyright Enforcement as a Tool of Censorship
    Freedom of expression is a fundamental human right essential to a functioning democracy. Copyright should encourage more speech, not act as a legal cudgel to silence it.

Every day this week, we’ll be sharing links to blog posts and actions on these topics at https://www.eff.org/copyrightweek and at #CopyrightWeek on X, formerly known as Twitter.

Tools to Protect Your Privacy Online | EFFector 36.1

Mon, 01/22/2024 - 1:02pm

New year, but EFF is still here to keep you up to date with the latest digital rights happenings! Be sure to check out our latest newsletter, EFFector 36.1, which covers topics ranging from our thoughts on AI watermarking to the changes we'd like to see in the tech landscape in 2024 to updates to our Street Level Surveillance hub and Privacy Badger.

EFFector 36.1 is out now—you can read the full newsletter here, or subscribe to get the next issue in your inbox automatically! You can also listen to the audio version of the newsletter below:

LISTEN ON YouTube: EFFector 36.1 | Tools to Protect Your Privacy Online

Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

The No AI Fraud Act Creates Way More Problems Than It Solves

Fri, 01/19/2024 - 6:27pm

Creators have reason to be wary of the generative AI future. For one thing, while GenAI can be a valuable tool for creativity, it may also be used to deceive the public and disrupt existing markets for creative labor. Performers, in particular, worry that AI-generated images and music will become deceptive substitutes for human models, actors, or musicians.

Existing laws offer multiple ways for performers to address this issue. In the U.S., a majority of states recognize a “right of publicity,” meaning, the right to control if and how your likeness is used for commercial purposes. A limited version of this right makes sense—you should be able to prevent a company from running an advertisement that falsely claims that you endorse its products—but the right of publicity has expanded well beyond its original boundaries, to potentially cover just about any speech that “evokes” a person’s identity.

In addition, every state prohibits defamation, harmful false representations, and unfair competition, though the parameters may vary. These laws provide time-tested methods to mitigate economic and emotional harms from identity misuse while protecting online expression rights.

But some performers want more. They argue that your right to control use of your image shouldn’t vary depending on what state you live in. They’d also like to be able to go after the companies that offer generative AI tools and/or host AI-generated “deceptive” content. Ordinary liability rules, including copyright, can’t be used against a company that has simply provided a tool for others’ expression. After all, we don’t hold Adobe liable when someone uses Photoshop to suggest that a president can’t read, or even for more serious deceptions. And Section 230 immunizes intermediaries from liability for defamatory content posted by users and, in some parts of the country, publicity rights violations as well. Again, that’s a feature, not a bug; immunity means it’s easier to stick up for users’ speech, rather than taking down or preemptively blocking any user-generated content that might lead to litigation. It’s a crucial protection not just for big players like Facebook and YouTube, but also for small sites, news outlets, email hosts, libraries, and many others.

Balancing these competing interests won’t be easy. Sadly, so far Congress isn’t trying very hard. Instead, it’s proposing “fixes” that will only create new problems.

Last fall, several Senators circulated a “discussion draft” bill, the NO FAKES Act. Professor Jennifer Rothman has an excellent analysis of the bill, including its most dangerous aspect: creating a new, and transferable, federal publicity right that would extend for 70 years past the death of the person whose image is purportedly replicated. As Rothman notes, under the law:

record companies get (and can enforce) rights to performers’ digital replicas, not just the performers themselves. This opens the door for record labels to cheaply create AI-generated performances, including by dead celebrities, and exploit this lucrative option over more costly performances by living humans, as discussed above.

In other words, far from protecting performers in the long run, the bill would simply make it easier for record labels (for example) to acquire voice rights that they can use to avoid paying human performers for decades to come.

NO FAKES hasn’t gotten much traction so far, in part because the Motion Picture Association hasn’t supported it. But now there’s a new proposal: the “No AI FRAUD Act.” Unfortunately, Congress is still getting it wrong.

First, the Act purports to target abuse of generative AI to misappropriate a person’s image or voice, but the right it creates applies to an incredibly broad swath of digital content: any “likeness” and/or “voice replica” that is created or altered using digital technology, software, an algorithm, etc. There’s not much that wouldn’t fall into that category—from pictures of your kid, to recordings of political events, to docudramas, parodies, political cartoons, and more. If it involves recording or portraying a human, it’s probably covered. Even more absurdly, it characterizes any tool that has a primary purpose of producing digital depictions of particular people as a “personalized cloning service.” Our iPhones are many things, but even Tim Cook would likely be surprised to know he’s selling a “cloning service.”

Second, it characterizes the new right as a form of federal intellectual property. This linguistic flourish has the practical effect of putting intermediaries that host AI-generated content squarely in the litigation crosshairs. Section 230 immunity does not apply to federal IP claims, so performers (and anyone else who falls under the statute) will have free rein to sue anyone that hosts or transmits AI-generated content.

That, in turn, is bad news for almost everyone—including performers. If this law were enacted, all kinds of platforms and services could very well fear reprisal simply for hosting images or depictions of people—or any of the rest of the broad types of “likenesses” this law covers. Keep in mind that many of these services won’t be in a good position to know whether AI was involved in the generation of a video clip, song, etc., nor will they have the resources to pay lawyers to fight back against improper claims. The best way for them to avoid that liability would be to aggressively filter user-generated content, or refuse to support it at all.

Third, while the term of the new right is limited to ten years after death (still quite a long time), it’s combined with very confusing language suggesting that the right could extend well beyond that date if the heirs so choose. Notably, the legislation doesn’t preempt existing state publicity rights laws, so the terms could vary even more wildly depending on where the individual (or their heirs) reside.

Lastly, while the defenders of the bill incorrectly claim it will protect free expression, the text of the bill suggests otherwise. True, the bill recognizes a “First Amendment defense.” But every law that affects speech is limited by the First Amendment—that’s how the Constitution works. And the bill actually tries to limit those important First Amendment protections by requiring courts to balance any First Amendment interests “against the intellectual property interest in the voice or likeness.” That balancing test must consider whether the use is commercial, necessary for a “primary expressive purpose,” and harms the individual’s licensing market. This seems to be an effort to import a cramped version of copyright’s fair use doctrine as a substitute for the rigorous scrutiny and analysis the First Amendment (and even the Copyright Act) requires.

We could go on, and we will if Congress decides to take this bill seriously. But it shouldn’t. If Congress really wants to protect performers and ordinary people from deceptive or exploitative uses of their images and voices, it should take a precise, careful, and practical approach that avoids potential collateral damage to free expression, competition, and innovation. The No AI FRAUD Act comes nowhere near the mark.

Companies Make it Too Easy for Thieves to Impersonate Police and Steal Our Data

Fri, 01/19/2024 - 11:29am

For years, people have been impersonating police online in order to get companies to hand over incredibly sensitive personal information. Reporting by 404 Media recently revealed that Verizon handed over the address and phone logs of an individual to a stalker who pretended to be a police officer, armed with nothing more than a PDF of a fake warrant. Worse, the imposter wasn’t particularly convincing. His request was missing a form that is required for search warrants from his state. He used the name of a police officer who did not exist in the department he claimed to be from. And he used a Proton Mail account, which anyone online can create, rather than an official government email address.

Likewise, bad actors have used breached law enforcement email accounts or domain names to send fake warrants, subpoenas, or “Emergency Data Requests” (which police can send without judicial oversight to get data quickly in supposedly life or death situations). Impersonating police to get sensitive information from companies isn’t just the realm of stalkers and domestic abusers; according to Motherboard, bounty hunters and debt collectors have also used the tactic.

We have two very big entwined problems. The first is the “collect it all” business model of too many companies, which creates vast reservoirs of personal information stored in corporate data servers, ripe for police to seize and thieves to steal. The second is that too many companies fail to prevent thieves from stealing data by pretending to be police.

Companies have to make it harder for fake “officers” to get access to our sensitive data. For starters, they must do better at scrutinizing warrants, subpoenas, and emergency data requests when they come in. These requirements should be spelled out clearly in a public-facing privacy policy, and all employees who deal with data requests from law enforcement should receive training in how to adhere to these requirements and spot fraudulent requests. Fake emergency data requests raise special concerns, because real ones depend on the discretion of both companies and police—two parties with less than stellar reputations for valuing privacy. 
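
To make this concrete, here is a minimal, hypothetical sketch of the kind of automated pre-screening a company could run before a human ever reviews a law enforcement request. Each check mirrors a red flag from the Verizon incident above; the function names, domains, and fields are invented for illustration, not a description of any real company's process.

    # Hypothetical pre-screening of an incoming law enforcement data request.
    # All names and checks are illustrative assumptions, not a real system.
    FREE_EMAIL_DOMAINS = {"protonmail.com", "proton.me", "gmail.com", "outlook.com"}

    def screen_request(sender_email: str, has_required_state_form: bool,
                       officer_in_agency_roster: bool) -> list[str]:
        """Return red flags; an empty list means 'escalate to human review'."""
        flags = []
        domain = sender_email.rsplit("@", 1)[-1].lower()
        if domain in FREE_EMAIL_DOMAINS:
            flags.append("sent from a free consumer email service")
        if not domain.endswith(".gov"):
            flags.append("sender domain is not a government domain")
        if not has_required_state_form:
            flags.append("required state warrant form is missing")
        if not officer_in_agency_roster:
            flags.append("named officer not found in the agency's public roster")
        return flags

    # The imposter described above would trip every check:
    print(screen_request("imposter@protonmail.com",
                         has_required_state_form=False,
                         officer_in_agency_roster=False))

None of this replaces careful human review, but even checks this simple would have flagged the fake warrant described above several times over.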

EFF’s 2024 In/Out List

Thu, 01/18/2024 - 10:41am

Since EFF was formed in 1990, we’ve been working hard to protect digital rights for all. And as each year passes, we’ve come to understand the challenges and opportunities a little better, as well as what we’re not willing to accept. 

Accordingly, here’s what we’d like to see a lot more of, and a lot less of, in 2024.

IN

1. Affordable and future-proof internet access for all

EFF has long advocated for affordable, accessible, and future-proof internet access for all. We cannot accept a future where the quality of our internet access is determined by geographic, socioeconomic, or otherwise divided lines. As the online aspects of our work, health, education, entertainment, and social lives increase, EFF will continue to fight for a future where the speed of your internet connection doesn’t stand in the way of these crucial parts of life.

2. A privacy first agenda to prevent mass collection of our personal information

Many of the ills of today’s internet have a single thing in common: they are built on a system of corporate surveillance. Vast numbers of companies collect data about who we are, where we go, what we do, what we read, who we communicate with, and so on. They use our data in thousands of ways and often sell it to anyone who wants it—including law enforcement. So whatever online harms we want to alleviate, we can do it better, with a broader impact, if we do privacy first.

3. Decentralized social media platforms to ensure full user control over what we see online

While the internet began as a loose affiliation of universities and government bodies, the digital commons has been privatized and consolidated into a handful of walled gardens. But in the past few years, there's been an accelerating swing back toward decentralization as users are fed up with the concentration of power, and the prevalence of privacy and free expression violations. So, many people are fleeing to smaller, independently operated projects. We will continue walking users through decentralized services in 2024.

4. End-to-end encrypted messaging services, turned on by default and available always

Private communication is a fundamental human right. In the online world, the best tool we have to defend this right is end-to-end encryption. But governments across the world are trying to erode this by scanning for all content all the time. As we’ve said many times, there is no middle ground to content scanning, and no “safe backdoor” if the internet is to remain free and private. Mass scanning of peoples’ messages is wrong, and at odds with human rights. 

5. The right to free expression online with minimal barriers and without borders

New technologies and widespread internet access have radically enhanced our ability to express ourselves, criticize those in power, gather and report the news, and make, adapt, and share creative works. Vulnerable communities have also found space to safely meet, grow, and make themselves heard without being drowned out by the powerful. No government or corporation should have the power to decide who gets to speak and who doesn’t. 

OUT

1. Use of artificial intelligence and automated systems for policing and surveillance

Predictive policing algorithms perpetuate historic inequalities, hurt neighborhoods already subject to intense amounts of surveillance and policing, and quite simply don’t work. EFF has long called for a ban on predictive policing, and we’ll continue to monitor law enforcement’s rapidly growing use of machine learning. This includes harvesting the data that other “autonomous” devices collect and automating important decision-making processes that guide policing and dictate people’s futures in the criminal justice system.

2. Ad surveillance based on the tracking of our online behaviors 

Our phones and other devices process vast amounts of highly sensitive personal information that corporations collect and sell for astonishing profits. This incentivizes online actors to collect as much of our behavioral information as possible. In some circumstances, every mouse click and screen swipe is tracked and then sold to ad tech companies and the data brokers that service them. This often impacts marginalized communities the most. Data surveillance is a civil rights problem, and legislation to protect data privacy can help protect civil rights. 

3. Speech and privacy restrictions under the guise of "protecting the children"

For years, government officials have raised concerns that online services don’t do enough to tackle illegal content, particularly child sexual abuse material. Their solution? Bills that ostensibly seek to make the internet safer, but instead achieve the exact opposite by requiring websites and apps to proactively prevent harmful content from appearing on messaging services. This leads to the universal scanning of all user content, all the time, and functions as a 21st-century form of prior restraint—violating the very essence of free speech.

4. Unchecked cross-border data sharing disguised as cybercrime protections 

Personal data must be safeguarded against exploitation by any government to prevent abuse of power and transnational repression. Yet, the broad scope of the proposed UN Cybercrime Treaty could be exploited for covert surveillance of human rights defenders, journalists, and security researchers. As the Treaty negotiations approach their conclusion, we are advocating against granting broad cross-border surveillance powers for investigating any alleged crime, ensuring it doesn't empower regimes to surveil individuals in countries where criticizing the government or other speech-related activities are wrongfully deemed criminal.

5. Internet access being used as a bargaining chip in conflicts and geopolitical battles

Given the proliferation of the internet and its use in pivotal social and political moments, governments are very aware of their power to cut off that access. The internet keeps information flowing, allowing people to stay alert to new realities. In wartime, being able to communicate may ultimately mean the difference between life and death. Shutting down access aids state violence and suppresses free speech. Access to the internet shouldn't be used as a bargaining chip in geopolitical battles.

FTC Bars X-Mode from Selling Sensitive Location Data

Thu, 01/11/2024 - 4:54pm

Phone app location data brokers are a growing menace to our privacy and safety. All you did was click a box while downloading an app. Now the app tracks your every move and sends it to a broker, which then sells your location data to the highest bidder, from advertisers to police.

So it is welcome news that the Federal Trade Commission has brought a successful enforcement action against X-Mode Social (and its successor Outlogic).

The FTC’s complaint illustrates the dangers created by this industry. According to the complaint, the company collects our location data through software development kits (SDKs) incorporated into third-party apps, through the company’s own apps, and by buying data from other brokers. It then sells this raw location data, which can easily be correlated to specific individuals. The company’s customers include marketers and government contractors.
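
Why is “raw” location data so easy to tie to a person? A device’s most frequent overnight coordinates usually point to a home address. Here is a minimal sketch of that inference; the data format, identifier, and coordinates are hypothetical, invented purely to illustrate the point.

    from collections import Counter

    # Hypothetical broker records: (ad identifier, hour of day, lat, lon).
    pings = [
        ("ad-id-123", 2, 37.7749, -122.4194),
        ("ad-id-123", 3, 37.7749, -122.4194),
        ("ad-id-123", 14, 37.7937, -122.3965),  # daytime ping: likely workplace
        ("ad-id-123", 23, 37.7749, -122.4194),
    ]

    # Count where the device sits between 10pm and 5am.
    overnight = Counter(
        (lat, lon) for _id, hour, lat, lon in pings if hour >= 22 or hour <= 5
    )
    likely_home, _ = overnight.most_common(1)[0]
    print("likely home coordinates:", likely_home)  # trivially mapped to an address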

The FTC’s proposed order contains a strong set of rules to protect the public from this company.

General rules for all location data:

  • X-Mode cannot collect, use, maintain, or disclose a person’s location data absent their opt-in consent. This includes location data the company collected in the past.
  • The order defines “location data” as any data that may reveal the precise location of a person or their mobile device, including from GPS, cell towers, WiFi, and Bluetooth.
  • X-Mode must adopt policies and technical measures to prevent recipients of its data from using it to locate a political demonstration, an LGBTQ+ institution, or a person’s home.
  • X-Mode must, on request of a person, delete their location data, and inform them of every entity that received their location data.

Heightened rules for sensitive location data:

  • X-Mode cannot sell, disclose, or use any “sensitive” location data.
  • The order defines “sensitive” locations to include medical facilities (such as family planning centers), religious institutions, union offices, schools, shelters for domestic violence survivors, and immigrant services.
  • To implement this rule, the company must develop a comprehensive list of sensitive locations (see the sketch after this list).
  • However, X-Mode can use sensitive location data if it has a direct relationship with a person related to that data, the person provides opt-in consent, and the company uses the data to provide a service the person directly requested.
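
What might honoring that list look like in practice? Below is a hypothetical sketch of geofence filtering: dropping any ping that falls within a set radius of a listed sensitive site before the data is used or sold. The site names, coordinates, and radii are invented for illustration; a real implementation would be far more involved.

    from math import radians, sin, cos, asin, sqrt

    # Hypothetical sensitive-location list: (name, lat, lon, radius in meters).
    SENSITIVE_SITES = [
        ("family planning clinic", 37.7600, -122.4200, 100.0),
        ("domestic violence shelter", 37.7700, -122.4500, 150.0),
    ]

    def distance_m(lat1, lon1, lat2, lon2):
        """Haversine distance between two points, in meters."""
        lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
        a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
        return 2 * 6371000 * asin(sqrt(a))

    def is_sensitive(lat, lon):
        return any(distance_m(lat, lon, s_lat, s_lon) <= radius
                   for _name, s_lat, s_lon, radius in SENSITIVE_SITES)

    # Pings inside any geofence are dropped before the data goes anywhere.
    pings = [(37.76001, -122.42005), (37.80000, -122.40000)]
    cleared = [p for p in pings if not is_sensitive(*p)]
    print(cleared)  # only the second ping survives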

As the FTC Chair and Commissioners explain in a statement accompanying this order’s announcement:

The explosion of business models that monetize people’s personal information has resulted in routine trafficking and marketing of Americans’ location data. As the FTC has stated, openly selling a person’s location data to the highest bidder can expose people to harassment, stigma, discrimination, or even physical violence. And, as a federal court recently recognized, an invasion of privacy alone can constitute “substantial injury” in violation of the law, even if that privacy invasion does not lead to further or secondary harm.

X-Mode has disputed the implications of the FTC’s statements regarding the settlement, and asserted that the FTC did not find an instance of data misuse.

The FTC Act bans “unfair or deceptive acts or practices in or affecting commerce.” Under the Act, a practice is “unfair” if: (1) the practice “is likely to cause substantial injury to consumers”; (2) the practice “is not reasonably avoidable by consumers themselves”; and (3) the injury is “not outweighed by countervailing benefits to consumers or to competition.” The FTC has laid out a powerful case that X-Mode’s brokering of location data is unfair and thus unlawful.

The FTC’s enforcement action against X-Mode sends a strong signal that other location data brokers should take a hard look at their own business model or risk similar legal consequences.

The FTC has recently taken many other welcome actions to protect data privacy from corporate surveillance. In 2023, the agency limited Rite Aid’s use of face recognition, and fined Amazon’s Ring for failing to secure its customers’ data. In 2022, the agency brought an unfair business practices claim against another location data broker, Kochava, and began exploring issuance of new rules against commercial data surveillance.
