EFF: Updates

EFF's Deeplinks Blog: Noteworthy news from around the internet

Internet Archive Files Appeal Brief Defending Libraries and Digital Lending From Big Publishers’ Legal Attack

Fri, 12/15/2023 - 12:30pm
The Archive’s Controlled Digital Lending Program is a Lawful Fair Use that Preserves Traditional Library Lending in the Digital World

SAN FRANCISCO—A cartel of major publishing companies must not be allowed to criminalize fair-use library lending, the Internet Archive argued in an appellate brief filed today. 

The Internet Archive is a San Francisco-based 501(c)(3) nonprofit library that preserves and provides access to cultural artifacts of all kinds in electronic form. The brief, filed in the U.S. Court of Appeals for the Second Circuit by the Electronic Frontier Foundation (EFF) and Morrison Foerster on the Archive’s behalf, explains that the Archive’s Controlled Digital Lending (CDL) program is a lawful fair use that preserves traditional library lending in the digital world.

"Why should everyone care about this lawsuit? Because it is about preserving the integrity of our published record, where the great books of our past meet the demands of our digital future,” said Brewster Kahle, founder and digital librarian of the Internet Archive. “This is not merely an individual struggle; it is a collective endeavor for society and democracy struggling with our digital transition. We need secure access to the historical record. We need every tool that libraries have given us over the centuries to combat the manipulation and misinformation that has now become even easier.”

“This appeal underscores the role of libraries in supporting universal access to information—a right that transcends geographic location, socioeconomic status, disability, or any other barriers,” Kahle added. “Our digital lending program is not just about lending responsibly; it’s about strengthening democracy by creating informed global citizens."

Through CDL, the Internet Archive and other libraries make and lend out digital scans of print books in their collections, subject to strict technical controls. Each book loaned via CDL has already been bought and paid for, so authors and publishers have already been fully compensated for those books; in fact, concrete evidence shows that the Archive’s digital lending—which is limited to the Archive’s members—does not and will not harm the market for books. 

Nonetheless, publishers Hachette, HarperCollins, Wiley, and Penguin Random House sued the Archive in 2020, claiming incorrectly that CDL violates their copyrights. In March, a judge of the U.S. District Court for the Southern District of New York granted the plaintiffs’ motion for summary judgment, leading to this appeal.

The district court’s “rejection of IA’s fair use defense was wrongly premised on the supposition that controlled digital lending is equivalent to indiscriminately posting scanned books online,” the brief argues. “That error caused it to misapply each of the fair use factors, give improper weight to speculative claims of harm, and discount the tremendous public benefits controlled digital lending offers. Given those benefits and the lack of harm to rightsholders, allowing IA’s use would promote the creation and sharing of knowledge—core copyright purposes—far better than forbidding it.”

The brief explains how the Archive’s digital library has facilitated education, research, and scholarship in numerous ways. In 2019, for example, the Archive received federal funding to digitize and lend books about internment of Japanese Americans during World War II. In 2022, volunteer librarians curated a collection of books that have been banned by many school districts but are available through the Archive’s library. Teachers have used the Archive to provide students access to books for research that were not available locally. And the Archive’s digital library has made online resources like Wikipedia more reliable by allowing articles to link directly to the particular page in a book that supports an asserted fact and by allowing readers to immediately borrow the book to verify it. 

For the brief: https://www.eff.org/document/internet-archive-opening-brief-us-court-appeals-second-circuit

For more on the case: https://www.eff.org/cases/hachette-v-internet-archive 

For the Internet Archive's blog post: https://blog.archive.org/2023/12/15/internet-archive-defends-digital-rights-for-libraries/

Contact: Corynne McSherry, Legal Director, corynne@eff.org

Is This the End of Geofence Warrants?

Wed, 12/13/2023 - 7:46pm

Google announced this week that it will be making several important changes to the way it handles users’ “Location History” data. These changes would appear to make it much more difficult—if not impossible—for Google to provide mass location data in response to a geofence warrant, a change we’ve been asking Google to implement for years.

Geofence warrants require a provider—almost always Google—to search its entire reserve of user location data to identify all users or devices located within a geographic area during a time period specified by law enforcement. These warrants violate the Fourth Amendment because they are not targeted to a particular individual or device, like a typical warrant for digital communications. The only “evidence” supporting a geofence warrant is that a crime occurred in a particular area, and the perpetrator likely carried a cell phone that shared location data with Google. For this reason, they inevitably sweep up potentially hundreds of people who have no connection to the crime under investigation—and could turn each of those people into a suspect.

Geofence warrants have been possible because Google collects and stores specific user location data (which Google calls “Location History” data) all together in a massive database called “Sensorvault.” Google reported several years ago that geofence warrants make up 25% of all warrants it receives each year.

Google’s announcement outlined three changes to how it will treat Location History data. First, going forward, this data will be stored, by default, on a user’s device, instead of with Google in the cloud. Second, it will be set by default to delete after three months; currently Google stores the data for at least 18 months. Finally, if users choose to back up their data to the cloud, Google will “automatically encrypt your backed-up data so no one can read it, including Google.”

All of this is fantastic news for users, and we are cautiously optimistic that this will effectively mean the end of geofence warrants. These warrants are dangerous. They threaten privacy and liberty because they not only provide police with sensitive data on individuals, they could turn innocent people into suspects. Further, they have been used during political protests and threaten free speech and our ability to speak anonymously, without fear of government repercussions. For these reasons, EFF has repeatedly challenged geofence warrants in criminal cases and worked with other groups (including tech companies) to push for legislative bans on their use.

However, we are not yet prepared to declare total victory. Google’s collection of users’ location data isn’t limited to just the “Location History” data searched in response to geofence warrants; Google collects additional location information as well. It remains to be seen whether law enforcement will find a way to access these other stores of location data on a mass basis in the future. Also, none of Google’s changes will prevent law enforcement from issuing targeted warrants for individual users’ location data if police have probable cause to support such a search.

But for now, at least, we’ll take this as a win. It’s very welcome news for technology users as we usher in the end of 2023.

Speaking Freely: Dr. Carolina Are

Wed, 12/13/2023 - 4:42pm

Dr. Carolina Are is an Innovation Fellow at Northumbria University Centre for Digital Citizens. Her research primarily focuses on the intersection between online abuse and censorship. Her current research project investigates Instagram and TikTok’s approach to malicious flagging against ‘grey area’ content, or content that toes the line of compliance with social media’s community guidelines.

She is also a blogger and creator herself, as well as a writer, pole dance instructor and award-winning activist. Dr. Are sat down for an interview with EFF’s Jillian York to discuss the impact of platform censorship on sex workers and activist communities, the need for systemic change around content moderation, and how there’s hope to be found in the younger generations. 

Jillian York: Can you introduce yourself and tell us a bit about your work? Specifically, can you give us an idea of how you became a free speech advocate?

Dr. Carolina Are: Sure, I’m Carolina Are, I’m an Innovation Fellow at Northumbria University Centre for Digital Citizens and I mainly focus on deplatforming, online censorship, and platform governance of speech but also bodies, nudity, and sex work.

I came to it from a pretty personal and selfish perspective, in the sense that I was doing my PhD on the moderation of online abuse and conspiracy theories while also doing pole dance as a hobby. At the time my social media accounts were separate because I still didn’t know how I wanted to present to academia. So I had a pole dance account on Instagram and an academic account on Twitter. This was around the time when FOSTA/SESTA was approved in the US. In 2019, Instagram started heavily shadow banning – algorithmically demoting – pole dancers’ content. And I was in a really unique position to be observing the moderation of stuff that wasn’t actually getting moderated and should have been getting moderated – it was horrible, it was abusive content – while my videos were getting heavily censored and were not reaching viewers anymore. So I started getting more interested in the moderation of nudity and the political circumstances that surrounded that censorship. And I started creating a lot of activism campaigns about it, including one that resulted in Instagram directly apologizing to me and to pole dancers about the shadow banning of pole dance.

So, from there, I kind of shifted my public-facing research to the moderation of nudity and sexual activity and sexuality and just sexual solicitation in general. And I then unified my online persona to reflect both my experiences and my expertise. I guess that’s how I came to it. It started with me, and with what happened to me and the censorship my accounts faced. And because of that, I became a lot more aware of censorship of sex work, of people that have it a lot worse than me, that introduced me to a lot of fantastic activist networks that were protesting that and massively changed the direction of my research.

York: How do you personally define deplatforming and what sort of impact does it have on pole dancers, on sex workers, on all of the different communities that you work with? 

What I would define as deplatforming is the removal of content or a full account from a social media platform or an internet platform. This means that you lose access to the account, but you also lose access to any communications that you may have had through that account – if it’s an app, for instance. And you also lose access to your content on that account. So, all of that has massive impacts on people that work and communicate and organize through social media or through their platforms.

Let’s say, if you’re an activist and your main activist network is through platforms –maybe because people have a public-facing persona that is anonymous and they don’t want to give you their data, their email, their phone number– you lose access to them if you are deplatformed. Similarly, if you are a small business or a content creator, and you promote yourself largely through your social media accounts, then you lose your outlet of promotion. You lose your network of customers. You lose everything that helps you make money. And, on top of that, for a lot of people, as a few of the papers I’m currently working on are showing, of course platforms are an office – like a space where they do business – but at the same time they have this hybrid emotional/community role with the added business on top.

So that means that yes, you lose access to your business, you lose access to your activist network, to educational opportunities, to learning opportunities, to organizing opportunities – but you also lose access to your memories. You lose access to your friends. Because of my research, I’m one of those people that become intermediaries between platforms like Meta and people that have been deleted. I sometimes put them in touch with the platform in order for them to restore mistakenly deleted accounts. And just recently I helped someone who – without my asking, because I do this for free – ended up PayPal-ing me a lot of money because I was the only person that helped while the platform’s infrastructure and appeals were ineffective. And what she said was, “Instagram was the only platform where I had pictures with my dead stepmother, and I didn’t have access to them anymore and I would have lost them if you hadn’t helped me.”

So there is a whole emotional and financial impact that this has on people. Because, obviously, you’re very stressed out and worried and terrified if you lose your main source of income or of organizing or of education and emotional support. But you also lose access to your memories and your loved ones. And I think this is a direct consequence of how platforms have marketed themselves to us. They’ve marketed themselves as the one stop shop for community or to become a solo entrepreneur. But then they’re like, oh only for those kinds of creators, not for the creators that we don’t care about or we don’t like. Not for the accounts we don’t want to promote.

York: You mentioned earlier that some of your earlier work looked at content that should be taken down. I don’t think either of us are free speech absolutists, but I do struggle with the question of authority and who gets to decide what should be removed, or deplatformed—especially in an environment where we’re seeing lots of censorial bills worldwide aimed at protecting children from some of the same content that we’re concerned about being censored.  How do you see that line, and who should decide?

So that is an excellent question, and it’s very difficult to find one straight answer because I think the line moves for everyone and for people’s specific experiences. I think what I’m referring to is something that is already covered by, for instance, discrimination law. So outright accusing people of a crime that it’s been proved offline that they haven’t committed. When it has been proven that that is not the case and someone goes and says that online to insult or harass or offend someone – and that becomes a sort of mob violence – then I think that’s when something should be taken down. Because there’s direct offline harm to specific people that are being targeted en masse. It’s difficult to find the line, though, because that could happen even like, let’s say for something like #MeToo, when things ended up being true about certain people. So it’s very difficult to find the line.

I think that platforms’ approach to algorithmic moderation – blanket deplatforming for things – isn’t really working when nuance is required. The case that I was observing was very specific because it started with a conspiracy theory about a criminal case, and then people that believed or didn’t believe in that conspiracy theory started insulting each other and everybody that’s involved with the case. So I think conspiracy theories are another interesting scenario because you’re not directly harassing anyone if you say, “It’s better to inject bleach into your veins instead of getting vaccinated.” But at the same time, sharing that information can be really harmful to public beliefs about stuff. If we’re thinking about what’s happening with measles, the fact that certain illnesses are coming back because people are so against vaccines from what they’ve read online. So I think there’s quite a system offline already for information that is untrue, for information that is directly targeting specific groups and specific people in a certain manner. So I think what’s happening a lot with what I’m seeing with online legislation is that it’s becoming very broad, and platforms apply it in a really broad way because they just want to cover their backs and don’t want to be seen to be promoting anything that might be remotely harmful. But I think what’s not happening is – or what’s happening in a less obvious fashion – is looking at what we already have and thinking how can we apply it online in a way that doesn’t wreck this infrastructure that we have. And I think that’s very apparent with the case of conspiracy theories and online abuse.

But if we move back to the people we were discussing– sex workers, people that post online nudity, and porn and stuff like that. Porn has already been viewed as free speech in trials from the 1950s, so why are we going back to that? Instead of investing in that and forcing platforms to over-comply, why don’t we invest in better sex education offline so that people who happen to access porn online don’t think that that is the way sex is? Or if there’s actual abuse being committed against people, why do we not regulate with laws that are about abuse and not about nudity and sexual activity? Because being naked is not the same as being trafficked. So, yeah, I think the debate really lacks nuance and lacks ad hoc application because platforms are more interested in blanket approaches because they’re easier for them to apply.

York: You work directly with companies, with platforms that individuals and communities rely on heavily. What strategies have you found to be effective in convincing platforms of the importance of preserving content or ensuring that people have the right to appeal, etc?

It’s an interesting one because I personally have found very few things to be effective. And even when they are apparently effective, there’s a downside. In my experience, for instance, because I have a past in social media marketing, public relations and communications, I always go the PR (public relations) route. Which is making platforms feel bad for something. Or, if they don’t feel bad personally, I try to make them look bad for what they’re doing, image-wise. Because at the moment their responses to everything haven’t been related to them wanting to do good, but they’ve been related to them feeling public and political pressure for things that they may have gotten wrong. So if you point out hypocrisies in their moderation, if you point out that they’ve… misbehaved, then they do tend to apologize.

The issue is that the apologies are quite empty– it’s PR spiel. I think sometimes they’ve been helpful in the sense that for quite a while platforms denied that shadow banning was ever a thing. And the fact that I was able to make them apologize for it by showing proof, even if it didn’t really change the outcome of shadow banning much – although now Meta does notify creators about shadowbanning, which was not something that was happening before– but it really showed people that they weren’t crazy. The gaslighting of users is quite an issue with platforms because they will deny that something is happening until it is too bad for them to deny it. And I think the PR route can be quite helpful to at least acknowledge that something is going on. Because if something is not even acknowledged by platforms, you’ve got very little to stand on when you question it.

The issue is that the fact that platforms respond in a PR fashion shows a lack of care on their part, and also sometimes leads to changes which sound good on paper or look good on paper, but when you actually look at their implication it becomes a bit ridiculous. For instance, Nyome Nicholas-Williams, who is an incredible activist and plus-size Black model – so someone who is terribly affected by censorship because she’s part of a series of demographics that platforms tend to pick up more when it comes to moderation. She fought platforms so hard over the censorship of her content that she got them to introduce this policy about breast-cupping versus breast-grabbing. The issue is that now there is a written policy where you are allowed to cup your breasts, but if you squeeze them too hard you get censored. So this leads to this really weird scenario where an Internet company is creating norms of how acceptable it is to grab your breasts, or which way you should be grabbing your breasts. Which becomes a bit ridiculous because they have no place in saying that, and they have no expertise in saying that.

So I think sometimes it’s good to just point out that hypocrisy over and over again, to at least acknowledge that something is going on. But I think that for real systemic change, governments need to step in to treat online freedom of speech as real freedom of speech and create checks and balances for platforms so that they can be essentially – if not fined – at least held accountable for stuff they censor in the same way that they are held accountable for things like promoting harmful things.

York: This is a moment in time where there’s a lot of really horrible things happening online. Is there anything that you’re particularly hopeful about right now? 

I think something that I’m very, very hopeful about is that the kids are alright. I think something that’s quite prominent in the moderation of nudity discourse is “won’t somebody think of the children? What happens if a teenager sees a… something absolutely ridiculous.” But every time that I speak with younger people, whether that’s through public engagement stuff that I do like a public lecture or sometimes I teach seminars or sometimes I communicate with them online – they seem incredibly proficient at finding out when an image is doctored, or when an image is fake, or even when a behavior by some online people is not okay. They’re incredibly clued up about consent, they know that porn is not real sex. So I think we’re not giving kids enough credit about what they already know. Of course, it’s bleak sometimes to think these kids are growing up with quantifiable notions of popularity and that they can see a lot of horrible stuff online. But they also seem very aware of consent, of bodily autonomy and of what freedoms people should have with their online content – every time I teach undergrads and younger kids, they seem to be very clued up on pleasure and sex ed. So that makes me really hopeful. Because while I think a lot of campaigners, definitely the American Evangelical far-right and also the far-right that we have in Europe, would see kids as these completely innocent, angelic people that have no say in what happens to them, I think actually quite a lot of them do know, and it’s really nice to see. It makes me really hopeful.

York: I love that. The kids are alright indeed. I’m also very hopeful in that sense. Last question– who is your free speech hero? 

There are so many it is really difficult to find just one. But I’d say, given the time that we’re in, I would say that anyone still doing journalism and education in Gaza… from me, from the outside world, just, hats off. I think they’re fighting for their lives while they’re also trying to educate us – from the extremely privileged position we’re in – about what’s going on. And I think that’s just incredible given what’s happening. So I think at the moment I would say them. 

Then in my area of research in general, there are a lot of fantastic research collectives and sex work collectives that have definitely changed everything I know. So I’m talking about Hacking//Hustling, Dr. Zahra Stardust in Australia. But also in the UK we have some fantastic sex worker unions, like the Sex Worker Union, and the Ethical Strippers who are doing incredible education through platforms despite being censored all the time. So, yeah, anybody that advocates for free speech from the position of not being heard by the mainstream I think does a great job. And I say that, of course, when it comes to marginalized communities, not white men claiming that they are being censored from the height of their newspaper columns.

Without Interoperability, Apple Customers Will Never Be Secure

Wed, 12/13/2023 - 2:18pm

Every internet user should have the ability to privately communicate with the people that matter to them, in a secure fashion, using the tools and protocols of their choosing.

Apple’s iMessage offers end-to-end encrypted messaging for its customers, but only if those customers want to talk to someone who also has an Apple product. When an Apple customer tries to message an Android user, the data is sent over SMS, a protocol that debuted while Wayne’s World was still in its first theatrical run. SMS is wildly insecure, but when Apple customers ask the company how to protect themselves while exchanging messages with Android users, Apple’s answer is “buy them iPhones.”

That’s an obviously false binary. Computers are all roughly equivalent, so there’s no reason that an Android device couldn’t run an app that could securely send and receive iMessage data. If Apple won’t make that app, then someone else could. 

That’s exactly what Apple did, back when Microsoft refused to make a high-quality MacOS version of Microsoft Office: Apple reverse-engineered Office and released iWork, whose Pages, Numbers and Keynote could perfectly read and write Microsoft’s Word, Excel and PowerPoint files.

Back in September, a 16-year-old high school student reverse-engineered iMessage and released Pypush, a free software library that reimplements iMessage so that anyone can send and receive secure iMessage data, maintaining end-to-end encryption, without the need for an Apple ID.

Last week, Beeper, a multiprotocol messaging company, released Beeper Mini, an alternative iMessage app reportedly based on the Pypush code that runs on Android, giving Android users the “blue bubble” that allows Apple customers to communicate securely with them. Beeper Mini stands out among earlier attempts at this by allowing users’ devices to directly communicate with Apple’s servers, rather than breaking end-to-end encryption by having messages decrypted and re-encrypted by servers in a data-center.

Beeper Mini is an example of “adversarial interoperability.” That’s when you make something new work with an existing product, without permission from the product’s creator.

(“Adversarial interoperability” is quite a mouthful, so we came up with “competitive compatibility” or “comcom” as an alternative term.)

Comcom is how we get third-party inkjet ink that undercuts HP’s $10,000/gallon cartridges, and it’s how we get independent repair from technicians who perform feats the manufacturer calls “impossible.” Comcom is where iMessage itself comes from: it started life as iChat, with support for existing protocols like XMPP.

Beeper Mini makes life more secure for Apple users in two ways: first, it protects the security of the messages they send to people who don’t use Apple devices; and second, it makes it easier for Apple users to switch to a rival platform if Apple has a change of management direction that deprioritizes their privacy.

Apple doesn’t agree. It blocked Beeper Mini users just days after the app’s release.  Apple told The Verge’s David Pierce that they had blocked Beeper Mini users because Beeper Mini “posed significant risks to user security and privacy, including the potential for metadata exposure and enabling unwanted messages, spam, and phishing attacks.”

If Beeper Mini indeed posed those risks, then Apple has a right to take action on behalf of its users. The only reason to care about any of this is if it makes users more secure, not because it serves the commercial interests of either Apple or Beeper. 

But Apple’s account of Beeper Mini’s threats does not square with the technical information Beeper has made available. Apple didn’t provide any specifics to bolster its claims. Large tech firms who are challenged by interoperators often smear their products as privacy or security risks, even when those claims are utterly baseless.

The gold standard for security claims is technical proof, not vague accusations. EFF hasn't audited Beeper Mini and we’d welcome technical details from Apple about these claimed security issues. While Beeper hasn’t published the source code for Beeper Mini, they have offered to submit it for auditing by a third party.

Beeper Mini is back. The company released an update on Monday that restored its functionality. If Beeper Mini does turn out to have security defects, Apple should protect its customers by making it easier for them to connect securely with Android users.

One thing that won’t improve the security of Apple users is for Apple to devote its engineering resources to an arms race with Beeper and other interoperators. In a climate of stepped-up antitrust enforcement, and as regulators around the world are starting to force interoperability on tech giants, pointing at interoperable products and shouting “insecure! Insecure!” no longer cuts it. 

Apple needs to acknowledge that it isn’t the only entity that can protect Apple customers.

Spritely and Veilid: Exciting Projects Building the Peer-to-Peer Web

Wed, 12/13/2023 - 12:49pm

While there is a surge in federated social media sites, like Bluesky and Mastodon, some technologists are hoping to take things further than this model of decentralization with fully peer-to-peer applications. Two leading projects, Spritely and Veilid, hint at what this could look like.

There are many technologies used behind the scenes to create decentralized tools and platforms. There has been a lot of attention lately, for example, around interoperable and federated social media sites using ActivityPub, such as Mastodon, as well as platforms like Bluesky using a similar protocol. These types of services require most individuals to sign up with an intermediary service host in order to participate, but they are decentralized insofar as any user has a choice of intermediary, and can run one of those services themselves while participating in the larger network.

Another model for decentralized communications does away with the intermediary services altogether in favor of a directly peer-to-peer model. This model is technically much more challenging to implement, particularly in cases where privacy and security are crucial, but it does result in a system that gives individuals even more control over their data and their online experience. Fortunately, there are a few projects being developed that are aiming to make purely peer-to-peer applications achievable and easy for developers to create. Two leading projects in this effort are Spritely and Veilid.

Spritely

Spritely is worth keeping an eye on. Developed by the institute of the same name, it is a framework for building distributed apps that don’t even have to know that they’re distributed. The project is spearheaded by Christine Lemmer-Webber, who was one of the co-authors of the ActivityPub spec that drives the fediverse. She is taking the lessons learned from that work, combining them with security- and privacy-minded object-capability models, and mixing it all up into a model for peer-to-peer computation that could pave the way for a generation of new decentralized tools.

Spritely is so promising because it is tackling one of the hard questions of decentralized technology: how do we protect privacy and ensure security in a system where data is passing directly between people on the network? Our best practices in this area have been shaped by many years of centralized services, and tackling the challenges of a new paradigm will be important.

One of the interesting techniques that Spritely is bringing to bear on the problem is the concept of object capabilities. OCap is a framework for software design that gives processes the ability to view and manipulate only the data they’ve been explicitly given access to. That sounds like common sense, but it is in contrast to the way that most of our computers work, in which the game Minesweeper (just to pick one example) has full access to your entire home directory once you start it up. That isn’t to say that it or any other program is actually reading all your documents, but it has the ability to, which means that an attacker who finds a security flaw in that program could abuse that ability.
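
To make the contrast concrete, here is a minimal, hypothetical Python sketch of the capability style; the function and variable names are ours, purely for illustration, and are unrelated to Spritely’s actual APIs. Instead of a function opening whatever files it likes, the caller hands it a single already-opened handle, which is the only thing the function can touch.

import io

# Ambient-authority style: the function reaches into the filesystem itself,
# so a bug or exploit in it could read any file the user can read.
# (Shown only for contrast; never called below.)
def count_words_ambient(path):
    with open(path) as f:
        return len(f.read().split())

# Capability style: the caller hands over one already-opened, readable
# object; that object is the only thing the function can touch.
def count_words_capability(readable):
    return len(readable.read().split())

if __name__ == "__main__":
    doc = io.StringIO("a tiny stand-in for a real file")
    print(count_words_capability(doc))  # prints 7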

The Spritely Institute is combining OCap with a message passing protocol that doesn’t care if the other party it's communicating with is on the same device, on another device in the same room, or on the other side of the world. And to top things off they’re working on the protocol in the open, with a handful of other dedicated organizations. We’re looking forward to seeing what the Spritely team creates and what their work enables in the future.

Veilid

Another leading project in the push for full p2p apps was just announced a few months ago. The Veilid project was released at DEFCON 31 in August and has a number of promising features that could lead to it being a fundamental tool in future decentralized systems. Described as a cross between Tor and the InterPlanetary File System (IPFS), Veilid is a framework and protocol that offers two complementary tools. The first is private routing, which, much like Tor, can construct an encrypted private tunnel over the public internet, allowing two devices to communicate with each other without anyone else on the network knowing who is talking to whom.

The second tool that Veilid offers is a Distributed Hash Table (DHT), which lets anyone look up a bit of data associated with a specific key, wherever that data lives on the network. DHTs go back at least as far as BitTorrent, where they take on the tracker’s job of directing users to other nodes in the network that have the chunk of a file that they need, and they form the backbone of IPFS’s system. Veilid’s DHT is particularly intriguing because it is “multi-writer.” In most DHTs, only one party can set the value stored at a particular key, but in Veilid the creator of a DHT key can choose to share the writing capability with others, creating a system where nodes can communicate by leaving notes for each other in the DHT. Veilid has created an early alpha of a chat program, VeilidChat, based on exactly this feature.
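
The toy Python sketch below illustrates just the multi-writer idea in a single process. It is not Veilid’s API: the real protocol uses cryptographic keys and networked nodes rather than the in-memory dictionary and plain strings assumed here.

class ToyMultiWriterDHT:
    """In-memory stand-in for a DHT whose records track who may write them."""

    def __init__(self):
        self._records = {}  # key -> {"value": ..., "writers": set of writer ids}

    def create(self, key, value, owner):
        self._records[key] = {"value": value, "writers": {owner}}

    def share_write(self, key, owner, new_writer):
        record = self._records[key]
        if owner not in record["writers"]:
            raise PermissionError("only an existing writer can share write access")
        record["writers"].add(new_writer)

    def put(self, key, value, writer):
        record = self._records[key]
        if writer not in record["writers"]:
            raise PermissionError("no write capability for this key")
        record["value"] = value

    def get(self, key):
        return self._records[key]["value"]

if __name__ == "__main__":
    dht = ToyMultiWriterDHT()
    dht.create("chat/room1", "hello", owner="alice")
    dht.share_write("chat/room1", owner="alice", new_writer="bob")
    dht.put("chat/room1", "hi alice!", writer="bob")  # bob can now leave notes
    print(dht.get("chat/room1"))                      # -> hi alice!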

Both of these features are even more valuable because Veilid is a very mobile-friendly framework. The library is available for a number of platforms and programming languages, including the cross-platform Flutter framework, which means it is easy to build iOS and Android apps that use it. Mobile has been a difficult platform to build peer-to-peer apps on for a variety of reasons, so having a turn-key solution in the form of Veilid could be a game changer for decentralization in the next couple years. We’re excited to see what gets built on top of it.

Public interest in decentralized tools and services is growing, as people realize that there are downsides to centralized control over the platforms that connect us all. The past year has seen interest in networks like the fediverse and Bluesky explode and there’s no reason to expect that to change. Projects like Spritely and Veilid are pushing the boundaries of how we might build apps and services in the future. The things that they are making possible may well form the foundation of social communication on the internet in the next decade, making our lives online more free, secure, and resilient.

No Robots(.txt): How to Ask ChatGPT and Google Bard to Not Use Your Website for Training

Tue, 12/12/2023 - 1:19pm

Both OpenAI and Google have released guidance for website owners who do not want the two companies using the content of their sites to train their large language models (LLMs). We've long been supporters of the right to scrape websites—the process of using a computer to load and read pages of a website for later analysis—as a tool for research, journalism, and archiving. We believe this practice is still lawful when collecting training data for generative AI, but the question of whether something should be illegal is different from whether it may be considered rude, gauche, or unpleasant. As norms continue to develop around what kinds of scraping and what uses of scraped data are considered acceptable, it is useful to have a tool for website operators to automatically signal their preference to crawlers. Asking OpenAI and Google (and anyone else who chooses to honor the preference) to not include scrapes of your site in their models is an easy process as long as you can access your site's file structure.

We've talked before about how these models use art for training, and the general idea and process is the same for text. Researchers have long used collections of data scraped from the internet for studies of censorship, malware, sociology, language, and other applications, including generative AI. Today, both academic and for-profit researchers collect training data for AI using bots that go out searching all over the web and “scrape up” or store the content of each site they come across. This might be used to create purely text-based tools, or a system might collect images that may be associated with certain text and try to glean connections between the words and the images during training. The end result, at least currently, is the chatbots we've seen in the form of Google Bard and ChatGPT.


If you do not want your website's content used for this training, you can ask the bots deployed by Google and OpenAI to skip over your site. Keep in mind that this only applies to future scraping. If Google or OpenAI already have data from your site, they will not remove it. It also doesn't stop the countless other companies out there training their own LLMs, and doesn't affect anything you've posted elsewhere, like on social networks or forums. It also wouldn't stop models that are trained on large data sets of scraped websites that aren't affiliated with a specific company. For example, OpenAI's GPT-3 and Meta's LLaMa were both trained using data mostly collected from Common Crawl, an open source archive of large portions of the internet that is routinely used for important research. You can block Common Crawl, but doing so blocks the web crawler from using your data in all its data sets, many of which have nothing to do with AI.

There's no technical requirement that a bot obey your requests. Currently, only Google and OpenAI have announced that this is the way to opt out, so other AI companies may not care about this at all, or may add their own directions for opting out. This opt-out also doesn't block any other types of scraping that are used for research or for other means, so if you're generally in favor of scraping but uneasy with the use of your website content in a corporation's AI training set, this is one step you can take.

Before we get to the how, we need to explain what exactly you'll be editing to do this.

What's a Robots.txt?

In order to ask these companies not to scrape your site, you need to edit (or create) a file located on your website called "robots.txt." A robots.txt is a set of instructions for bots and web crawlers. Up until this point, it was mostly used to provide useful information for search engines as their bots scraped the web. If website owners want to ask a specific search engine or other bot to not scan their site, they can enter that in their robots.txt file. Bots can always choose to ignore this, but many crawling services respect the request.

This might all sound rather technical, but it's really nothing more than a small text file located in the root folder of your site, like "https://www.example.com/robots.txt." Anyone can see this file on any website. For example, here's The New York Times' robots.txt, which currently blocks both ChatGPT and Bard. 

If you run your own website, you should have some way to access the file structure of that site, either through your hosting provider's web portal or FTP. You may need to comb through your provider's documentation for help figuring out how to access this folder. In most cases, your site will already have a robots.txt created, even if it's blank, but if you do need to create a file, you can do so with any plain text editor. Google has guidance for doing so here.
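
If you'd rather check a robots.txt programmatically than read it by hand, Python's standard library can parse one for you. Here is a quick sketch using urllib.robotparser; the example.com addresses are placeholders for your own site and a page you want to test.

from urllib import robotparser

# Fetch and parse a site's robots.txt, then ask whether a given crawler
# user-agent is allowed to fetch a given URL under those rules.
rp = robotparser.RobotFileParser()
rp.set_url("https://www.example.com/robots.txt")  # placeholder site
rp.read()

for agent in ("GPTBot", "Google-Extended"):
    allowed = rp.can_fetch(agent, "https://www.example.com/blog/")
    print(agent, "allowed:", allowed)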


What to Include In Your Robots.txt to Block ChatGPT and Google Bard

With all that out of the way, here's what to include in your site's robots.txt file if you do not want ChatGPT and Google to use the contents of your site to train their generative AI models. If you want to cover the entirety of your site, add these lines to your robots.txt file:

ChatGPT

User-agent: GPTBot

Disallow: /

Google Bard

User-agent: Google-Extended

Disallow: /

You can also narrow this down to block access to only certain folders on your site. Maybe you don't mind if most of the data on your site is used for training, but you have a blog that you use as a journal; you can exclude just that folder. For example, if the blog is located at yoursite.com/blog, you'd use this:

ChatGPT

User-agent: GPTBot

Disallow: /blog

Google Bard

User-agent: Google-Extended

Disallow: /blog
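
These directives can also live together in a single robots.txt file, since crawlers read each User-agent group independently. For example, a file like the following (the /blog path is just a placeholder) would block GPTBot from the whole site while keeping Google-Extended out of only the blog:

User-agent: GPTBot
Disallow: /

User-agent: Google-Extended
Disallow: /blog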

We at EFF will not be using these flags because we believe scraping is a powerful tool for research and access to information; we want the information we're providing to spread far and wide and to be represented in the outputs and answers provided by LLMs. Of course, individual website owners have different views about their blogs, portfolios, or whatever else they use their websites for. We're in favor of means for people to express their preferences, and it would ease many minds for other companies with similar AI products, like Anthropic, Amazon, and countless others, to announce that they'd respect similar requests.

The House Intelligence Committee's Surveillance 'Reform' Bill is a Farce

Fri, 12/08/2023 - 2:41pm

Earlier this week, the House Committee on the Judiciary (HJC) and the House Permanent Select Committee on Intelligence (HPSCI) each marked up a bill that would reauthorize Section 702 of the Foreign Intelligence Surveillance Act (FISA): H.R. 6570, the Protect Liberty and End Warrantless Surveillance Act, in HJC, and H.R. 6611, the FISA Reform and Reauthorization Act of 2023, in HPSCI. The two bills take very different approaches. Both head to the House floor next week under a procedural rule called “Queen of the Hill,” where the bill with the most votes gets sent to the Senate for consideration.

While renewing any surveillance authority is a complicated issue, this choice is clear: we urge all Members to vote NO on the Intelligence Committee’s bill, H.R. 6611, the FISA Reform and Reauthorization Act of 2023.

Take Action: Tell Congress to Defeat This Bad 702 Bill

On Nov. 16, HPSCI released a report calling for reauthorization of Section 702 with essentially superficial reforms. The bill that followed, H.R. 6611, was as bad as expected. It would renew the mass surveillance authority Section 702 for another eight years. It would create new authorities that the intelligence community has sought for years, but that have been denied by the courts. It would continue the indiscriminate collection of U.S. persons’ communications when they talk with people abroad for use by domestic law enforcement. This was not the intention of this national security program, and people on U.S. soil should not have their communications collected without a warrant because of a loophole.

As a reminder, Section 702 was designed to allow the government to warrantlessly surveil non-U.S. citizens abroad for foreign intelligence purposes. But in collecting those targets’ communications, the government also sweeps in the communications of the people in the U.S. who talk with them, and increasingly it is this U.S. side of digital conversations that domestic law enforcement agencies trawl through—all without a warrant. FBI agents have been using the Section 702 databases to conduct millions of invasive searches for Americans’ communications, including those of protesters, racial justice activists, 19,000 donors to a congressional campaign, journalists, and even members of Congress.

Additionally, the HPSCI bill authorizes the use of this unaccountable and out-of-control mass surveillance program as a new way of vetting asylum seekers by sifting through their digital communications. According to a newly released Foreign Intelligence Surveillance Court (FISC) opinion, the government has sought some version of this authority for years, but was repeatedly denied it, only receiving court approval for the first time this year. Because the court opinion is so heavily redacted, it is impossible to know the current scope of immigration- and visa-related querying, or what broader proposal the intelligence agencies originally sought. 

This new authority proposes to give immigration services the ability to audit entire communication histories before deciding whether an immigrant can enter the country. This is particularly problematic because it could cost someone entry to the United States based on, for instance, their own or a friend’s political opinions—as happened to a Palestinian Harvard student whose social media account was reviewed when he came to the U.S. to start his semester.

The HPSCI bill also includes a call “to define Electronic Communication Service Provider to include equipment.” Earlier this year, the FISA Court of Review released a highly redacted opinion documenting a fight over the government's attempt to subject an unknown company to Section 702 surveillance. However, the court agreed that under the circumstances the company did not qualify as an "electronic communication service provider" under the law. Now, the HPSCI bill would expand that definition to include a much broader range of providers, including those who merely provide hardware through which people communicate on the Internet. Even without knowing the details of the secret court fight, this represents an ominous expansion of 702's scope, which the committee introduced without any explanation or debate of its necessity. 

By contrast, the House Judiciary Committee bill, H.R. 6570, the Protect Liberty and End Warrantless Surveillance Act, would actually address a major problem with Section 702 by banning warrantless backdoor searches of Section 702 databases for Americans’ communications. This bill would also prohibit law enforcement from purchasing Americans’ data that they would otherwise need a warrant to obtain, a practice that circumvents core constitutional protections. Importantly, this bill would also renew this authority for only three more years, giving Congress another opportunity to revisit how the reforms are implemented and to make further changes if the government is still abusing the program.

EFF has long fought for significant changes to Section 702. By the government’s own numbers, violations are still occurring at a rate of more than 4,000 per year. Our government, with the FBI in the lead, has come to treat Section 702—enacted by Congress for the surveillance of foreigners on foreign soil—as a domestic surveillance program of Americans. This simply cannot be allowed to continue. While we will continue to push for further reforms to Section 702, we urge all members to reject the HPSCI bill.

Hit the button below to tell your elected officials to vote against this bill:

Take Action: Tell Congress to Defeat This Bad 702 Bill

Related Cases: Jewel v. NSA

In Landmark Battle Over Free Speech, EFF Urges Supreme Court to Strike Down Texas and Florida Laws that Let States Dictate What Speech Social Media Sites Must Publish

Thu, 12/07/2023 - 6:05pm
Laws Violate First Amendment Protections that Help Create Diverse Forums for Users’ Free Expression

WASHINGTON D.C.—The Electronic Frontier Foundation (EFF) and five organizations defending free speech today urged the Supreme Court to strike down laws in Florida and Texas that let the states dictate certain speech social media sites must carry, violating the sites’ First Amendment rights to curate content they publish—a protection that benefits users by creating speech forums accommodating their diverse interests, viewpoints, and beliefs.

The court’s decisions about the constitutionality of the Florida and Texas laws—the first laws to inject government mandates into social media content moderation—will have a profound impact on the future of free speech. At stake is whether Americans’ speech on social media must adhere to government rules or be free of government interference.

Social media content moderation is highly problematic, and users are often rightly frustrated by the process and concerned about private censorship. But retaliatory laws allowing the government to interject itself into the process, in any form, raise serious First Amendment and broader human rights concerns, EFF said in a brief filed with the National Coalition Against Censorship, the Woodhull Freedom Foundation, Authors Alliance, Fight for the Future, and First Amendment Coalition.

“Users are far better off when publishers make editorial decisions free from government mandates,” said EFF Civil Liberties Director David Greene. “These laws would force social media sites to publish user posts that are at best, irrelevant, and, at worst, false, abusive, or harassing.

“The Supreme Court needs to send a strong message that the government can’t force online publishers to give their favored speech special treatment,” said Greene.

Social media sites should do a better job at being transparent about content moderation and self-regulate by adhering to the Santa Clara Principles on Transparency and Accountability in Content Moderation. But the Principles are not a template for government mandates.

The Texas law broadly mandates that online publishers can’t decline to publish others’ speech based on anyone’s viewpoint expressed on or off the platform, even when that speech violates the sites' rules. Content moderation practices that can be construed as viewpoint-based, which is virtually all of them, are barred by the law. Under it, sites that bar racist material, knowing their users object to it, would be forced to carry it. Sites catering to conservatives couldn’t block posts pushing liberal agendas.

The Florida law requires that social media sites grant special treatment to electoral candidates and “journalistic enterprises” and not apply their regular editorial practices to them, even if they violate the platforms' rules. The law gives preferential treatment to political candidates, preventing publishers at any point before an election from canceling their accounts or downgrading their posts or posts about them, giving them free rein to spread misinformation or post about content outside the site’s subject matter focus. Users not running for office, meanwhile, enjoy no similar privilege.

What’s more, the Florida law requires sites to disable algorithms with respect to political candidates, so their posts appear chronologically in users’ feeds, even if a user prefers a curated feed. And, in addition to dictating what speech social media sites must publish, the laws also place limits on sites' ability to amplify content, use algorithmic ranking, and add commentary to posts.

“The First Amendment generally prohibits government restrictions on speech based on content and viewpoint and protects private publisher ability to select what they want to say,” said Greene. “The Supreme Court should not grant states the power to force their preferred speech on users who would choose not to see it.”

“As a coalition that represents creators, readers, and audiences who rely on a diverse, vibrant, and free social media ecosystem for art, expression, and knowledge, the National Coalition Against Censorship hopes the Court will reaffirm that government control of media platforms is inherently at odds with an open internet, free expression, and the First Amendment,” said Lee Rowland, Executive Director of National Coalition Against Censorship.

“Woodhull is proud to lend its voice in support of online freedom and against government censorship of social media platforms,” said Ricci Joy Levy, President and CEO at Woodhull Freedom Foundation. “We understand the important freedoms that are at stake in this case and implore the Court to make the correct ruling, consistent with First Amendment jurisprudence.”

"Just as the press has the First Amendment right to exercise editorial discretion, social media platforms have the right to curate or moderate content as they choose. The government has no business telling private entities what speech they may or may not host or on what terms," said David Loy, Legal Director of the First Amendment Coalition.

For the brief: https://www.eff.org/document/eff-brief-moodyvnetchoice

Contact: David Greene, Civil Liberties Director, davidg@eff.org

Think Twice Before Giving Surveillance for the Holidays

Thu, 12/07/2023 - 3:22pm

With the holidays upon us, it's easy to default to giving the tech gifts that retailers tend to push on us this time of year: smart speakers, video doorbells, bluetooth trackers, fitness trackers, and other connected gadgets are all very popular gifts. But before you give one, think twice about what you're opting that person into.

A number of these gifts raise red flags for us as privacy-conscious digital advocates. Ring cameras are one of the most obvious examples, but countless others over the years have made the security or privacy naughty list (and many of these same electronics directly clash with your right to repair).

One big problem with giving these sorts of gifts is that you're opting another person into a company's intrusive surveillance practice, likely without their full knowledge of what they're really signing up for.

For example, a smart speaker might seem like a fun stocking stuffer. But unless the giftee is tapped deeply into tech news, they likely don't know there's a chance for human review of any recordings. They also may not be aware that some of these speakers collect an enormous amount of data about how you use it, typically for advertising–though any connected device might have surprising uses to law enforcement, too.

There's also the problem of tech companies getting acquired, as we've seen recently with Tile, iRobot, and Fitbit. The new business can suddenly change the privacy and security agreements that the user made with the old business when they started using one of those products.

And let's not forget about kids, who are already long subjected to surveillance from elves and their managers. Electronics gifts for kids can come with all sorts of surprise issues, like the kid-focused tablet we found this year that was packed with malware and riskware. Kids’ smartwatches and a number of connected toys are also potential privacy hazards that may not be worth the risks if not set up carefully.

Of course, you don't have to avoid all technology purchases. There are plenty of products out there that aren't creepy, and a few that just need extra attention during set up to ensure they're as privacy-protecting as possible. 

What To Do Instead

While we don't endorse products, you don't have to start your search in a vacuum. One helpful place to start is Mozilla's Privacy Not Included gift guide, which provides a breakdown of the privacy practices and history of products in a number of popular gift categories. This way, instead of just buying any old smart-device at random because it's on sale, you at least have the context of what sort of data it might collect, how the company has behaved in the past, and what sorts of potential dangers to consider. U.S. PIRG also has guidance for shopping for kids, including details about what to look for in popular categories like smart toys and watches.

Finally, when shopping it's worth keeping in mind two last details. First, some “smart” devices can be used without their corresponding apps, which should be viewed as a benefit, because we've seen before that app-only gadgets can be bricked by a shift in company policies. Also, remember that not everything needs to be “smart” in the first place; often these features add little to the usability of the product.

Your job as a privacy-conscious gift-giver doesn't end at the checkout screen.

If you're more tech savvy than the person receiving the item, or you're helping set up a gadget for a child, there's no better gift than helping configure it as privately as possible. Take a few minutes after they've unboxed the item and walk through the setup process with them. Some options to look for:

  • Enable two-factor authentication when available to help secure their new account.
  • If there are any social sharing settings—particularly popular with fitness trackers and game consoles—disable any unintended sharing that might end up on a public profile.
  • Look for any options to enable automatic updates. This is usually enabled by default these days, but it's always good to double-check.
  • If there's an app associated with the new device (and there often is), help them choose which permissions to allow, and which to deny. Keep an eye out for location data, in particular, especially if there's no logical reason for the app to need it. 
  • While you're at it, help them with other settings on their phone, and make sure to disable the phone’s advertising ID.
  • Speaking of advertising IDs, some devices have their own advertising settings, usually located somewhere like Settings > Privacy > Ad Preferences. If there's an option to disable ad tracking, take advantage of it. While you're in the settings, you may find other device-specific privacy or data usage settings; take that opportunity to opt out of any tracking and collection you can. This will be very device-dependent, but it's especially worth doing on anything you know tracks loads of data, like smart TVs.
  • If you're helping set up a video or audio device, like a smart speaker or robot vacuum, poke around in the options to see if you can disable any sort of "human review" of recordings.

If, during the setup process, you notice some gaps in their security hygiene, it's also a great opportunity to help them with other security measures, like setting up a password manager.

Giving the gift of electronics shouldn’t come with so much homework, but until we have a comprehensive data privacy law, we'll likely have to contend with these sorts of set-up hoops. Until that day comes, we can all take the time to help those who need it.

EFF Reminds the Supreme Court That Copyright Trolls Are Still a Problem

Thu, 12/07/2023 - 2:11pm

At EFF, we spend a lot of time calling out the harm caused by copyright trolls and protecting internet users from their abuses. Copyright trolls are serial plaintiffs who use search tools to identify technical, often low-value infringements on the internet, and then seek nuisance settlements from many defendants. These trolls take advantage of some of copyright law’s worst features—especially the threat of massive, unpredictable statutory damages—to impose a troublesome tax on many uses of the internet.

On Monday, EFF continued the fight against copyright trolls by filing an amicus brief in Warner Chappell Music v. Nealy, a case pending in the U.S. Supreme Court. The case doesn’t deal with copyright trolls directly. Rather, it involves the interpretation of the statute of limitations in copyright cases. Statutes of limitations are laws that limit the time after an event within which legal proceedings may be initiated. The purpose is to encourage plaintiffs to file their claims promptly, and to avoid stale claims and unfairness to defendants when time has passed and evidence might be lost. For example, in California, the statute of limitations for a breach of contract claim is generally four years.

U.S. copyright law contains a statute of limitations of three years “after the claim accrued.” Warner Chappell Music v. Nealy deals with the question of exactly what this means. Warner Chappell Music, the defendant in the case, argued that the claim accrued when the alleged infringement occurred, giving a plaintiff three years after that to recover damages. Plaintiff Nealy argued that his claim didn’t “accrue” until he discovered the alleged infringement, or reasonably should have discovered it. This “discovery rule” would permit Nealy to recover damages for acts that occurred long ago—much longer than three years—as long as he filed suit within three years of that “discovery.”
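To make the difference concrete, here's a toy sketch in Python, using hypothetical dates and ignoring tolling and other doctrines, of how the two accrual rules change whether a suit is timely:

    from datetime import date, timedelta

    THREE_YEARS = timedelta(days=3 * 365)  # simplified; courts count actual calendar years

    def timely_under_injury_rule(infringed: date, filed: date) -> bool:
        # The claim accrues when the infringement occurs.
        return filed - infringed <= THREE_YEARS

    def timely_under_discovery_rule(discovered: date, filed: date) -> bool:
        # The claim accrues when the plaintiff discovers (or reasonably should have discovered) it.
        return filed - discovered <= THREE_YEARS

    infringed  = date(2010, 1, 1)   # hypothetical infringement long ago
    discovered = date(2022, 6, 1)   # plaintiff says they only just found out
    filed      = date(2023, 1, 1)

    print(timely_under_injury_rule(infringed, filed))      # False: far more than three years later
    print(timely_under_discovery_rule(discovered, filed))  # True: within three years of "discovery"

Under the discovery rule, the same decade-old act stays actionable, which is exactly the leverage trolls want.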

How does all this affect copyright trolls? The “discovery rule” lets trolls reach far, far back in time to find alleged infringements (such as a photo posted on a website), and plausibly threaten their targets with years of accumulated damages. All they have to do is argue that they couldn’t reasonably have discovered the infringement until recently. The trolls’ targets would have trouble defending against ancient claims, and be more likely to have to pay nuisance settlements.

EFF’s amicus brief provided the court with an overview of the copyright trolling problem and gave examples of types of trolls. The brief then showed how an unlimited look-back period for damages under the discovery rule adds risk and uncertainty for the targets of copyright trolls and would encourage abuse of the legal system.

EFF’s brief in this case is a little unusual: the case doesn’t directly involve technology or technology companies (except to the extent they could be targets of copyright trolls). The party we’re supporting is a leading music publishing company, and other amici on the same side include the RIAA, the U.S. Chamber of Commerce, and the Association of American Publishers. But because statutes of limitations are fundamental to the justice system, this uncommon coalition perhaps isn’t that surprising.

In many previous copyright troll cases, the courts have caught on to the abuse of the judicial system and taken steps to shut the trolling down. EFF filed its brief in this case to ask the Supreme Court to extend these judicial safeguards by holding that copyright infringement damages can only be recovered for acts occurring in the three years before the filing of the complaint. An indefinite look-back period would throw gasoline on the copyright troll fire and risk encouraging new trolls to come out from under the figurative bridge.

Related Cases: Warner Chappell Music v. Nealy

Meta Announces End-to-End Encryption by Default in Messenger

Thu, 12/07/2023 - 12:03pm

Yesterday Meta announced that they have begun rolling out default end-to-end encryption for one-to-one messages and voice calls on Messenger and Facebook. While there remain some privacy concerns around backups and metadata, we applaud this decision. It will bring strong encryption to over one billion people, protecting them from dragnet surveillance of the contents of their Facebook messages. 

Governments are continuing to attack encryption with laws designed to weaken it. With authoritarianism on the rise around the world, encryption is more important with each passing day. Strong default encryption, sooner, might have prevented a woman in Nebraska from being prosecuted for an abortion based primarily on evidence from her Facebook messages. This update couldn’t have come at a more important time. This introduction of end-to-end encryption on Messenger means that the two most popular messaging platforms in the world, both owned by Meta, will now include strong encryption by default. 

For now this change applies only to one-to-one chats and voice calls, and it will be rolled out to all users over the next few months, with default encryption of group messages and Instagram messages to come later. Regardless, this rollout is a huge win for user privacy across the world. Users will also have many more options for messaging security and privacy, including how to back up their encrypted messages safely, turn off “read receipts,” and enable “disappearing” messages. Choosing among these options matters for your privacy and security model, and we encourage users to think about what they expect from their secure messenger.

Backing up securely: the devil is in the (Labyrinthian) details

The technology behind Messenger’s end-to-end encryption will continue to be a slightly modified version of the Signal protocol (the same protocol WhatsApp uses). When it comes to building secure messengers, or in this case porting a billion users onto secure messaging, the details are the most important part. Here, the encrypted backup options Meta provides are the biggest detail: in addressing backups, how does it balance security with usability and availability?

Backups are important for users who expect to log into their account from any device and retrieve their message history by default. From an encryption standpoint, how backups are handled can break certain guarantees of end-to-end encryption. WhatsApp, Meta’s other messaging service, added the option of end-to-end encrypted backups only a few years ago. Meta is also rolling out an end-to-end encrypted backup system for Messenger, which it calls Labyrinth.

Encrypted backups mean your backed-up messages will be encrypted on Facebook servers and won’t be readable without your private key. Enabling encrypted backups (necessarily) breaks forward secrecy in exchange for usability. If an app is forward-secret, you could delete all your messages and hand someone else your phone, and they would not be able to recover them. Weighing this tradeoff is another factor to consider when choosing how to use secure messengers that give you the option.
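To see why backups and forward secrecy pull in opposite directions, here's a toy Python sketch of a symmetric key ratchet, a heavy simplification of how Signal-style protocols derive per-message keys; it's an illustration, not Meta's actual implementation:

    import hashlib, os

    def ratchet(chain_key: bytes) -> tuple[bytes, bytes]:
        # Derive a one-time message key and the next chain key from the current chain key.
        message_key    = hashlib.sha256(chain_key + b"msg").digest()
        next_chain_key = hashlib.sha256(chain_key + b"chain").digest()
        return message_key, next_chain_key

    chain_key = os.urandom(32)
    for i in range(3):
        message_key, chain_key = ratchet(chain_key)
        print(f"message {i} encrypted with {message_key.hex()[:16]}..., key then deleted")

    # Because each message key is erased after use and the hash can't be reversed,
    # someone who seizes the device later can't decrypt old ciphertexts -- unless a
    # backup has kept the messages (or the keys), which is the tradeoff described above.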

If you elect to use encrypted backups, you can set a 6-digit PIN to secure your private key, or back up your private key to cloud storage such as iCloud or Google Cloud. If you back up keys to a third party, those keys are available to that service provider and could be retrieved by law enforcement with a warrant, unless that cloud account is also encrypted. The 6-digit PIN provides a bit more security than the cloud backup option, but at the cost of usability for users who might not remember a PIN.
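As a rough illustration of the PIN option, here's a minimal sketch of "key wrapping" using the Python cryptography library: derive a wrapping key from the PIN, then use it to encrypt the real backup key before it's stored on a server. The PIN, salt, and key sizes here are placeholders, and this is not Meta's actual Labyrinth design:

    import base64, os
    from cryptography.fernet import Fernet
    from cryptography.hazmat.primitives import hashes
    from cryptography.hazmat.primitives.kdf.pbkdf2 import PBKDF2HMAC

    def key_from_pin(pin: str, salt: bytes) -> bytes:
        # Stretch the short PIN into a 32-byte wrapping key.
        kdf = PBKDF2HMAC(algorithm=hashes.SHA256(), length=32, salt=salt, iterations=600_000)
        return base64.urlsafe_b64encode(kdf.derive(pin.encode()))

    salt = os.urandom(16)
    backup_key = os.urandom(32)  # the key that actually protects your message history
    wrapped = Fernet(key_from_pin("123456", salt)).encrypt(backup_key)  # only this is stored server-side

    # Only someone who knows the PIN (and the salt) can unwrap the backup key:
    recovered = Fernet(key_from_pin("123456", salt)).decrypt(wrapped)
    assert recovered == backup_key

A 6-digit PIN on its own can be brute-forced quickly, which is why real deployments typically pair it with rate limiting enforced in secure server-side hardware.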

Choosing the right secure messenger for your use case

There are still significant concerns about metadata in Messenger. By design, Meta has access to a lot of unencrypted metadata, such as who sends messages to whom, when those messages were sent, and data about you, your account, and your social contacts. None of that will change with the introduction of default encryption. For that reason we recommend that anyone concerned with their privacy or security consider their options carefully when choosing a secure messenger.
