EFF: Updates

EFF's Deeplinks Blog: Noteworthy news from around the internet

EFF Stands With #SaveAlaa, Calls for Release of Alaa Abdel Fattah, Activist and Friend

Wed, 09/29/2021 - 4:59pm

My conditions are but a drop in a dark sea of injustice. - Alaa Abdel Fattah, November 7, 2019, at State Security Prosecution

EFF is profoundly concerned about our friend, Egyptian blogger, coder, and activist Alaa Abd El Fattah, who has been jailed for more than two years at a maximum-security prison in Tora, 12 miles south of Cairo. Media reports have cited his attorney as saying Fattah was considering suicide because of the dire conditions under which he is being held. The lawyer, Khaled Ali, said at a Sept. 13 court hearing in his case—to determine whether Fattah would continue to be held prior to trial—that his client spoke of the terrible conditions he faces. “I can’t carry on,” he quoted Fattah as saying.

A free speech advocate and software developer, Fattah has repeatedly been targeted and jailed for working to ensure Egyptians and others in the Middle East and North Africa have a voice, and privacy, online. Fattah has been detained under every Egyptian head of state in his lifetime, and has most recently been imprisoned for all but a few months since 2013. While Fattah’s family received a hand-written letter from him a few days after the hearing in which he pledged to do his best to endure prison conditions, they have not heard from him since and warn his mental health is failing. His mother and sisters visit the prison almost daily in hopes of receiving a letter from him, but there’s been nothing. Word of his condition sparked the #SaveAlaa hashtag campaign on social media. We stand in solidarity with Fattah’s family, friends, and supporters in calling on Egyptian authorities for his release.

A soon-to-be-released collection of Fattah’s prison writings, interviews, and articles, hauntingly entitled “You Have Not Yet Been Defeated” and featuring an introduction by Naomi Klein, contains a searing passage from a statement Fattah gave to prosecutors at a January 2020 hearing.

I’m in detention as a preventative measure because of a state of political crisis – and a fear that I will engage with it. It’s clear that I’m detained here today because of previous positions I’ve taken. I don’t deny these positions, but I believe that right now Egyptian society is exhausted from its multiple problems and poor administration and that the security apparatuses are no longer able to understand me, or what goes on in the minds and the hearts of people like me.

Fattah began using his technical skills almost 20 years ago to connect technologists across the Middle East and North Africa with each other and build online platforms so that others could share opinions and speak freely and privately. The role he played in using technology to amplify the messages of his fellow Egyptians—as well as his own participation in the uprising in Tahrir Square—made him a prominent global voice during the Arab Spring, and a target for the country’s repressive regimes, which have used antiterrorism laws to silence critics by throwing them in jail and depriving them of due process and other basic human rights.

Fattah’s latest arrest, in 2019, occurred just six months after he was released following a five-year prison term for his role in the peaceful demonstrations of 2011. He was re-arrested in a massive sweep of activists and charged with spreading false news and belonging to a terrorist organization. The crackdown comes amidst a number of other cases in which prosecutors and investigation judges have used pre-trial detention as a method of punishment. Egypt’s counterterrorism law was amended in 2015 under President Abdel-Fattah al-Sisi so that pre-trial detention can be extended for two years and, in terrorism cases, indefinitely.

Fattah has been held without trial at Tora Prison with no access to books or newspapers, no exercise time or time out of his cell, and, since COVID-19 restrictions came into effect, only one twenty-minute visit per month.

Over the years Fattah has continued to speak out for human rights even while jailed, and has shown great courage while facing conditions meant to silence him. Now his calls for justice and free speech will be available for all to read. “You Have Not Yet Been Defeated” is set for release in spring of 2022, and can be pre-ordered on Amazon and other online sources. Fattah speaks with passion in the book about his love for his country and why he has stood up to the regime and joined protestors in Tahrir Square.

We go to the square to discover that we love life outside it, and to discover that our love for life is resistance. We race towards the bullets because we love life, and we walk into prison because we love freedom.  The country is what we love and what we live for; what we celebrate and what we mourn. If the state falls, more than just the square will remain – there will be the love of strangers, there will be everything that drove us to the square, and everything we learned in the square. -- Abu Khaled, Friday, 9 December 2011, Cell 6/1, Ward 4, Torah Investigative Prison

Fattah’s family warns that he is in imminent danger: his mental health is failing after two years of cruel treatment by the Ministry of Interior and National Security. “His life is in danger, in a prison that operates completely outside the space of the law and in complete disregard of all officials,” they said in a recent statement.

We urge everyone to order “You Have Not Yet Been Defeated,” and contact your elected representatives to ask that they contact their counterparts in Egypt. We must raise awareness about his situation and put pressure on the Egyptian government to release him. His book is a testament to his resilience, and we urge everyone to do everything they can so Fattah, who stands for the right to freedom of expression, association, and assembly, is not defeated.

FPF’s 2020 Student Privacy Pledge: New Pledge, Similar Problems

Tue, 09/28/2021 - 11:19pm

EFF legal intern Rob Ferrari was the lead author of this post.

A new school year has started, the second one since the pandemic began. With our education system becoming increasingly reliant on the use of technology (“edtech”), especially for remote learning during the pandemic, protecting student privacy is more important than ever. Unfortunately, the Future of Privacy Forum’s 2020 Student Privacy Pledge, like the legacy version, continues to provide schools, parents, and students with false assurance due to numerous loopholes for the edtech company signatories that collect and use student data.

The Future of Privacy Forum (FPF) originally launched the Student Privacy Pledge in 2014 to encourage edtech companies to take voluntary steps to protect the privacy of K-12 students. In 2016, we criticized the Legacy Pledge after it reached 300 signatories—to FPF’s dismay.

The 2020 Pledge once again falls short in how it defines material terms, such as “Student PII” and “School Service Providers”; many of the 2020 Pledge’s commitments are conditioned on school or parent/student consent, which may inadequately protect student privacy; and new commitments are insufficiently precise.

Additionally, while the Student Privacy Pledge is a self-regulatory program, FPF emphasizes that companies who choose to sign the Pledge are committing to public promises that are enforceable by the Federal Trade Commission (FTC) and state attorneys general under consumer protection laws—but this is cold comfort as enforcement actions against edtech companies for violating students’ privacy have been few and far between.

Loopholes in Definitions

Similar to our prior criticisms of FPF’s Legacy Student Privacy Pledge, the 2020 Pledge is filled with inconsistent terminology and fails to define material terms. This creates a disconnect between what schools, parents, and students might reasonably expect when reading the 2020 Pledge and what companies actually must do to comply with it. In short, inconsistent and vague terms undermine the Pledge’s ability to hold companies accountable.

Will the 2020 Pledge Protect Sensitive Student Data?

It’s unclear.

First, the 2020 Pledge commitments primarily apply to “student personally identifiable information” (“Student PII”), a new term that is said to have the same definition as “covered information” as defined in California’s Student Online Personal Information Protection Act (SOPIPA). But “covered information” in SOPIPA includes the term “personally identifiable information,” making the 2020 Pledge definition, in part, circular. Furthermore, SOPIPA does not define “personally identifiable information,” and leaving it up to the companies is not sufficient. This creates compliance challenges for signatories, as they need to assess the data provided about the student to determine if it could be construed as “personally identifiable information.” Ironically, FPF itself criticized SOPIPA (pp. 18-19) for being difficult to implement because the statute fails to define “personally identifiable information.” So why does the 2020 Pledge reference SOPIPA when FPF thinks the statute is challenging to implement?

Second, the 2020 Pledge’s definition of “Student PII” includes an exception for “de-identified information.” Signatories are, therefore, free to collect and use student data contrary to the Pledge’s commitments so long as the data is “de-identified.” While the U.S. Department of Education has drafted guidance on data de-identification, the 2020 Pledge fails to define that term and thus fails to provide a standard for de-identification that provides some baseline privacy protection and that can be used to determine signatories’ compliance with the Pledge. 

Not all de-identification processes provide adequate protection. For example, an edtech provider might build a student profile that contains sensitive student data and then simply replace that student’s name with an ID number. This practice would weakly protect student privacy, but would it fall within the 2020 Pledge’s “de-identified information” exception? More stringent de-identification processes, such as aggregation of student data, could still compromise student privacy because certain types of data are more sensitive than others. For example, location data is extremely sensitive, and even in isolation, it could reveal patterns of a student’s daily habits; track the student’s precise whereabouts at any given moment; and compromise the student’s identity through extrapolation. 
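For illustration only, here is a minimal Python sketch of the weak approach described above (the record fields, values, and helper function are all hypothetical, not drawn from any real edtech product): stripping the student’s name and substituting an ID number leaves the rest of the profile, including a precise location history, untouched.

# A minimal sketch of weak "de-identification": dropping the student's name
# and substituting a random ID, while every other field stays behind.
# All field names and values here are hypothetical.
import uuid

student_profile = {
    "name": "Jane Doe",
    "school": "Example Middle School",
    "reading_level": "grade 7",
    "location_pings": [                      # precise location data is retained
        ("2021-09-01 08:02", 39.7392, -104.9903),
        ("2021-09-01 15:31", 39.7401, -104.9850),
    ],
}

def weakly_deidentify(profile: dict) -> dict:
    """Replace the name with an opaque ID number and leave everything else intact."""
    record = dict(profile)
    record.pop("name", None)
    record["student_id"] = uuid.uuid4().hex
    return record

# The output has no name, but its location trail can still single out one
# student's daily routine, which is why this offers only weak protection.
print(weakly_deidentify(student_profile))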

Admittedly, standards may be difficult to draft, and even best practices for de-identification carry some risk, as re-identification processes will become more sophisticated over time. But a minimum requirement for de-identification would help close this otherwise fairly large loophole. By contrast, leaving the term undefined and open to individual company interpretation creates a broad exception that undermines the Pledge’s ability to hold companies accountable and meaningfully protect student privacy.

Which Companies Are Subject to the 2020 Pledge?

The 2020 Pledge commitments apply to “School Service Providers,” which is a Legacy Pledge term that has been revised. Despite the revisions, the term continues to create confusion regarding when a signatory company is subject to the 2020 Pledge obligations.

First, the 2020 Pledge's definition of a “School Service Provider” is inconsistent as to whether a company must both design and market a product/service for schools in order to be bound by the Pledge, or whether simply marketing its product/service for schools is sufficient.

As provided by the first line of the definition, to qualify as a “School Service Provider,” a company must simply market its product/service for use in schools. This laxer definition is better for students. But in the second line, the Pledge creates an exception when a product/service is not both designed and marketed for schools. FPF’s FAQ further explains that a product/service must be designed for education, not just marketed (see “Is my company eligible to take the Pledge?”).

Similar to our prior criticism of the Legacy Pledge, this is problematic because a company could be a signatory to the Pledge and qualify as a “School Service Provider” by marketing its product/service (or one of its products/services) for schools, but that same company could then argue that data collection via a particular product/service is not subject to the Pledge commitments because the product/service was only marketed—and not also designed—for schools. 

For example, Beanstack by Zoobean, a 2020 Pledge signatory, is a product that encourages reading through various challenges. While Beanstack is marketed to schools as well as libraries, companies, and consumers, would Zoobean be allowed to argue that its product was simply marketed to, but not designed for schools? What if Beanstack was actually designed for use by libraries or consumers, but schools incidentally showed demand for the product? 

Zoobean isn’t alone. It’s unclear what market the products of signatory AvePoint, a Microsoft 365 “data management solutions provider,” were designed for, though they appear to be marketed to corporations, governments, and higher education (which is ironic, given that the Pledge applies to providers in K-12 schools, not colleges and universities). Botdoc, a secure data transfer service, does not even appear to be designed for schools, yet presumably is marketed to them (seeing as it’s a Pledge signatory). These are just a few examples of why the confusing “School Service Provider” definition is a problem.

Second, a company can qualify as a signatory to the Pledge if it offers even a single product/service that matches FPF’s definition of “School Service Provider” (notwithstanding the confusion surrounding the definition). But that company would not be bound by the Pledge for any of its other products/services that fall outside of FPF’s definition. As FPF explained in a blog post (again adding to the marketing/design confusion): “One of the most common misunderstandings about the Pledge is the assumption that the Pledge applies to all products offered by a signatory or used by a student. However, the Student Privacy Pledge applies to ‘school service providers’—companies that design and market their services and devices for use in schools.” This is concerning because schools, parents, and students might mistakenly trust a brand’s entire suite of products/services based on its status as a signatory to the Student Privacy Pledge. It also doesn’t help that this caveat is not readily discernible from even a close read of the 2020 Pledge and FAQ.

Similarly, it’s unfortunate that the Pledge applies to only K-12 education and doesn’t apply to colleges and universities. Higher education schools and students might mistakenly believe that a Pledge signatory is obligated to protect the privacy of data collected from post-secondary students, when this is not the case.

Notification Required for Changes to Which Privacy Policy?

The 2020 Pledge requires signatories to provide prominent notice to schools, parents, and students when making changes to “educational privacy policies.” This is a narrowing change from the Legacy Pledge, which required companies to provide notice when changing their “consumer privacy policies.” The definitional section of the 2020 Pledge fails to define “educational privacy policies,” instead defining only “consumer privacy policies” (which is likely a mistake made during the revision process).

Without providing a definition of “educational privacy policies,” this change is problematic. For example, Google Workspace for Education has a privacy notice that cross-references the company’s general privacy policies. Could Google argue that it’s not obligated under the 2020 Pledge to notify schools, parents, and students when it makes changes to its general privacy policies, because those are not solely for its educational products? By failing to define “educational privacy policies,” the 2020 Pledge creates uncertainty that could allow companies to be in technical compliance while avoiding transparency for their users.

Loopholes in Consent-Based Privacy

Many of the Pledge’s commitments provide exceptions when an edtech company is performing “authorized educational/school purposes” or when the company acquires parent/student consent. Consent from either the school or the parent/student controls an edtech company’s obligations with respect to collecting, maintaining, or sharing Student PII; building personal profiles of a student; and how long Student PII can be retained. Structuring key commitments this way may not adequately protect student privacy.

First, schools can determine whether an activity is an “authorized educational/school purpose,” which effectively provides consent on behalf of the parent/student. This bypassing of parent/student consent is particularly concerning for schools or school districts that overlook privacy concerns of parents/students, implicitly trust privacy policies, or lack the resources to properly train administrators and teachers on best practices for student privacy.

Second, student privacy that is contingent on parent/student consent has its own inherent shortcomings. Parents/students might consent to company conduct that compromises student privacy because of deceptive practices such as opt-out by default (as opposed to opting in to data collection and use) and other “dark patterns.” The FAQ itself states that “a parent or student may authorize a signatory to use student PII for non-educational purposes,” which is concerning given the risk of deceptive settings (see “What does the Pledge say about the limits on signatories using of student PII?”). Furthermore, parents/students might lack a meaningful choice because there are barriers to opting out, such as when a school, district, or individual teachers heavily rely on an edtech company’s products/services and no real alternative exists—that is, parents/students are inadvertently pressured into consenting at the risk of a subpar education.

New Commitments Don’t Go Far Enough 

FPF’s 2020 Pledge includes additional commitments that could actually enhance student privacy when compared to the Legacy Pledge, but the binding language does not go far enough.

First, the 2020 Pledge now requires School Service Providers to provide “resources” to educate schools, parents, and students on how to use their products/services in a way that promotes privacy and security. EFF strongly believes that proper training is critical to ensuring student privacy. But the language in this commitment is vague and weak because it fails to set a minimum standard of what “resources” must be provided. The FAQ does provide guidance through a non-exhaustive list of what “resources” FPF has in mind, many of which we believe should be part of a comprehensive approach to student privacy (see “What other information is available about providing resources to support users and/or account holders?”). But the FAQ is not the Pledge. And student privacy might not be adequately protected even if the 2020 Pledge is read alongside the FAQ—for example, if a product/service itself doesn’t have robust privacy settings, or if a company simply provides “manuals” to non-tech savvy users.

Second, the 2020 Pledge also now requires that companies “incorporate privacy and security when developing or improving” their products/services. This obligation is elaborated upon in the FAQ (see “What additional information is available about incorporating privacy and security into the design process?”), which says, for example, a company could comply by “applying privacy and security by design principles.” While EFF fully supports privacy-by-design, FPF’s approach here runs into the same problems as the new “resources” commitment. The FAQ is not the Pledge, and there is no minimum standard required for this obligation. A company could do the bare minimum and claim that it has satisfied its obligation—for example, by following the transparency or openness principles of privacy-by-design by having a privacy policy, but doing nothing else. This is a far cry from the spirit of privacy-by-design, which focuses on privacy at every level of the design process.

Protecting student privacy requires a robust, comprehensive program—generic commitments working in isolation are not sufficient.

The Student Privacy Pledge is Nothing Without Enforcement

Even if FPF’s 2020 Pledge were an airtight document capable of being precisely applied to evaluate whether companies have kept their legally binding public promises, edtech companies will not be held accountable unless there is enforcement. FPF itself apparently hasn’t created an enforcement mechanism to regularly assess signatories’ compliance with the Pledge, and it remains to be seen whether the FTC and state attorneys general are willing to enforce it.

Despite hosting a workshop on student privacy in 2017, the FTC rarely brings enforcement actions focused on student privacy. In fact, since the start of 2018, the FTC has reviewed 66 consumer privacy cases, none of which are primarily aimed at addressing student privacy issues. Student privacy in relation to edtech companies should be a central focus, particularly in light of a year filled with remote learning and the subsequent spike in student privacy concerns. With the FTC chairwoman earlier this year expressing interest in tackling student privacy issues, it’s time to put words into action.

***


EFF is not opposed to voluntary mechanisms like the Student Privacy Pledge to protect users—if they work. Schools, parents, and students must have confidence that a company whose products they use in classrooms—and that’s a signatory to the Pledge—is not only complying with the Pledge, but is in fact meaningfully protecting the privacy of students.

Cross Border Police Surveillance Treaty Must Have Clear, Enforceable Privacy Safeguards, Not a Patchwork of Weak Provisions

Tue, 09/28/2021 - 9:30pm

This is the fourth post in a series about recommendations EFF, European Digital Rights, the Samuelson-Glushko Canadian Internet Policy & Public Interest Clinic, and other civil society organizations have submitted to the Parliamentary Assembly of the Council of Europe (PACE), which is currently reviewing the Second Additional Protocol to the Budapest Convention on Cybercrime, to amend the text before final approval in the fall. Read the full series here, here, here, here, and here.

Two very different assessments of a proposed treaty on cross border police access to user data were presented to the Council of Europe (CoE) Parliamentary Assembly at a hearing earlier this month. EFF expressed grave concerns about a lack of detailed human rights safeguards in the text, while officials with CoE’s Cybercrime Convention Committee (T-CY), which drafted the treaty, not surprisingly voiced confidence that the instrument provides adequate protection for individual rights.

The treaty, created to facilitate cross border law enforcement investigations of cybercrime and procedures for efficiently accessing electronic evidence, including user data, will reshape cross-border law enforcement data-gathering on a global scale. At this point, with final approval of the treaty expected in November, we are still far apart on the issue of human rights protections.

It was made clear at the September 14 virtual hearing that the treaty—called the Second Additional Protocol to the Budapest Convention on Cybercrime—was crafted with an eye towards appeasing as many states as possible, all with highly varying criminal legal systems and human rights track records. No easy task, to be sure. Representatives of CoE Cybercrime Committee (T-CY) said the Protocol’s “carefully calibrated” text is the result of intensive negotiations in dozens of meetings with dozens of states, parties, and experts, over many years.

Compromises had to be made, they said, to accommodate the needs of multiple states with competing law enforcement approaches to investigating cybercrime, safeguarding data, and protecting human rights. The reality is that T-CY Member States are willing to impose detailed, mandatory standards for law enforcement access to electronic information, but not willing to impose solid human rights and data protection standards globally.

As EFF Policy Director for Global Privacy Katitza Rodriguez said at the hearing, detailed international law enforcement powers should come with detailed legal safeguards for privacy and data protection. The Protocol does not establish clear and enforceable baseline safeguards in cross-border evidence gathering, and avoids imposing strong privacy and data protections in an active attempt to entice states with weaker human rights records to sign on.

To this end, the Protocol recognizes many mandatory and intrusive police powers, coupled with relatively weak safeguards that are largely optional in nature. The result is a net dilution of privacy and human rights on a global scale. But the right to privacy is a universal right. Incorporating strong safeguards alongside law enforcement  powers will not impede cross-border law enforcement, but will ensure human rights are respected, Rodriguez added.

The hearing confirmed some of our gravest concerns regarding the treaty. For example, while Article 13 states the Protocol’s investigative powers should be applied in a manner that is proportionate and subject to adequate privacy and human rights safeguards, we have argued that each Party is left to decide for itself what meets this standard, and many anticipated signatories have very weak safeguards. T-CY confirmed that Article 13 provides Parties with substantial flexibility, but saw this as a feature, not a bug, because it allows countries to sign on despite lacking meaningful and robust human rights protection.

Even worse, Article 14, which sets out the Protocol’s central privacy protections, can be easily bypassed. Any two or more Parties can simply agree to use weaker safeguards when relying on the Protocol’s policing powers. Also, while T-CY officials claimed that the Protocol’s safeguards are “particularly” strong, this is sadly not the case. Article 14’s provisions fail to reflect privacy safeguards in modern data protection regimes (such as the CoE’s own marquee privacy treaty—Convention 108+) and in many instances even work to undermine emerging global standards.

To begin with, Article 14 fails to require that all processing of personal data be adequate, fair, and proportionate to its objectives. The absence of these terms in the Protocol is troubling, as it indicates fewer and weaker conditions to access data will be allowed and tolerated.

The Protocol’s treatment of biometric data is even more troubling. Recognizing the sensitive nature of biometric data (and its substantial potential as a highly intrusive surveillance capability), legal regimes and courts around the world are increasingly requiring additional safeguards. But Article 14 prevents Parties from treating biometric data as sensitive (and, as a result, applying stronger safeguards) unless it can be shown that heightened risks are involved. At the hearing, T-CY officials acknowledged the weaker standard adopted for biometric data, but indicated the negotiated compromise was necessary to accommodate the range of protection afforded to biometric data amongst some of the Protocol’s would-be signatories. Once again, privacy is taking a back seat.

PACE will issue a report with its recommendations in the coming weeks. The assembly has an opportunity to substantially improve human rights protections in the Protocol by recommending to the Council of Ministers—CoE's decision-making body—amendments that will fix technical mistakes in the Protocol and strengthen its privacy and data protection safeguards. We have also suggested that accession to the Protocol should be made conditional upon signing Convention 108+. Without that, the Protocol, and the CoE’s efforts to modernize cross border data access and provide strong, enforceable human rights protections, risk being left behind.


In U.S. v Wilson, the Ninth Circuit Reaffirms Fourth Amendment Protection for Electronic Communications

Tue, 09/28/2021 - 6:43pm

In a powerful new ruling for digital privacy rights, the Ninth Circuit Court of Appeals has confirmed that the police need to get a warrant before they open your email attachments—even if a third party’s automated system has flagged those attachments as potentially illegal. We filed an amicus brief in the case.

How We Got Here

Federal law prohibits the possession and distribution of child sexual assault material (also known as child pornography or CSAM). It also requires anyone who knows another possesses or is engaged in distributing CSAM to report to a quasi-governmental organization called the National Center for Missing and Exploited Children (NCMEC).

Although federal law does not require private parties to proactively search for CSAM, most, if not all, major ISPs do, including Google, the ISP at issue in Wilson’s case. Once one of Google’s employees identifies an image as CSAM, the company uses a proprietary technology to assign a unique hash value to the image. Google retains the hash value (but not the image itself), and its system automatically scans all content passing through Google’s servers and flags any images whose hash values match a stored hash. Once an image is flagged, Google’s system automatically classifies and labels the image based on what it has previously determined the image depicts and sends the image with its label to NCMEC, along with the user’s email address and IP addresses. NCMEC then sends the images and identifying information to local law enforcement, based on the IP address.
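As a rough conceptual sketch only (Google’s actual technology is proprietary, and the function names and sample data below are invented), the automated matching step can be pictured as comparing a fingerprint of each attachment against a stored set of fingerprints of previously reviewed images. A real system uses perceptual image hashing so that near-duplicates also match; the point here is simply that no person views the flagged files during this step.

# Conceptual sketch of automated hash matching; this is NOT Google's system.
# A real deployment uses proprietary, perceptual image hashing so that
# near-duplicates also match; an ordinary cryptographic hash is used here
# purely to show the flagging logic.
import hashlib

def fingerprint(data: bytes) -> str:
    """Compute a fingerprint of an attachment's raw bytes."""
    return hashlib.sha256(data).hexdigest()

# Hashes retained from images a human reviewer previously identified (toy data).
known_hashes = {fingerprint(b"previously-identified-image-bytes")}

def scan_attachments(attachments: list[bytes]) -> list[int]:
    """Return the indexes of attachments whose fingerprints match the stored set.
    No person views the flagged files in this automated step."""
    return [i for i, data in enumerate(attachments)
            if fingerprint(data) in known_hashes]

# An exact byte-for-byte copy matches; anything else does not.
email_attachments = [b"vacation-photo-bytes", b"previously-identified-image-bytes"]
print(scan_attachments(email_attachments))  # -> [1]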

In Wilson’s case, Google’s automated system flagged four images attached to one of his emails. No Google employee ever looked at the exact images attached to Wilson’s email. Google forwarded them to NCMEC, and NCMEC sent them to San Diego law enforcement. There, an agent opened the images and confirmed they were CSAM – without a warrant.

Wilson filed a motion in federal court seeking to suppress the four images as well as evidence later seized from his online accounts and his home, arguing they were products of an unconstitutional warrantless search. The court denied Wilson’s motion based on a somewhat obscure exception to the Fourth Amendment called the “private search doctrine.”

The Private Search Doctrine

Almost every court to squarely address the issue has held the Fourth Amendment protects electronic communications from warrantless searches. However, the Fourth Amendment only applies to government searches; it does not prohibit private parties from searching through your stuff and turning over what they find to the police, which is what happened here.

The private search doctrine holds that law enforcement doesn’t need a warrant to search through your stuff if a private party has already searched it and the officer doesn’t search through anything more than that private search. For example, in one classic case, an inadvertent recipient of several film reels read the descriptions of the films on their canisters (but didn’t view the films), believed them to be obscene, and turned them over to the FBI. The FBI viewed one of the films without a warrant and charged the defendant. The Supreme Court held the FBI exceeded the scope of the private party’s search when it did more than just look at the labels on the film canisters. Because it did so without a warrant, the search was unconstitutional.

In this case, the district court held that when the agent opened and viewed the images attached to Mr. Wilson’s email, he did not expand the search beyond Google’s automatic scan in any meaningful way. This was because the agent only had access to the four images previously identified by Google. Also, based on Google’s stated reliability of its automated scanning system, the court decided it was a virtual certainty that the images attached to Wilson’s email were identical to previously flagged CSAM, and therefore conveyed illegal content and nothing more.  

The Ninth Circuit Opinion

The Ninth Circuit disagreed. It held that opening and viewing the images allowed the government to learn new, critical information—namely, what exactly was depicted in each image—that it used to obtain a warrant to search Wilson’s residence and then to prosecute him. The court further noted that Fourth Amendment rights are personal; even if the images attached to Mr. Wilson’s email were exact duplicates of images previously identified by a Google employee, they were still different images. No Google employee looked at the exact images attached to Wilson’s email, so the agent’s actions in opening and viewing those images were different from Google’s purely automated search.

The Twist

The Ninth Circuit is not the only appellate court to weigh in on the government’s review of Mr. Wilson’s images; the California Court of Appeal did as well and came to the opposite conclusion.  

Wilson was charged in both federal and California state court. The charges were different in each case, and they proceeded on their own timelines, but both cases involved the same search of Wilson’s email attachments. Mr. Wilson filed motions to suppress the evidence in both cases, making similar arguments. The California Court of Appeal issued its opinion in November 2020, months before this recent Ninth Circuit opinion, and held the government did not meaningfully expand Google’s private search in any manner that would violate Wilson’s Fourth Amendment rights. The California appellate court upheld Wilson’s sentence of 45 years to life, and the California Supreme Court denied review.

This means two appellate courts with overlapping jurisdiction over the same search are in conflict with one another, which is highly unusual. Wilson’s state lawyer has petitioned the U.S. Supreme Court for review (“certiorari”) of the state case, and the Court is planning to consider taking it this week. In Wilson’s federal case, the government has asked for more time to figure out its next steps.

There’s More

In the amicus brief we filed in Wilson’s case, we took on a separate argument that often arises in these kinds of cases—that Wilson lacked a reasonable expectation of privacy in the images because he agreed to Google’s terms of service, which stated the company would scan his email for illegal content.

As we argued in our brief, a company’s TOS should not dictate your constitutional rights, because terms of service are rules about the relationship between you and your email provider—not you and the government. A court ruling to the contrary could affect far more than child sexual assault material cases: on this theory, anyone whose account was shut down for any violation of a TOS could lose Fourth Amendment protections over all the emails in their account.

The Minnesota Supreme Court is currently considering this argument in a case called State v. Pauli, which will be heard by the court next week. We weighed in in that case too, and you can read more about that here.

We will continue to follow these cases and weigh in as necessary to protect our right to privacy in our electronic communications.

EFF, Access Now, and Partners to European Parliament: Free Speech, Privacy and Other Fundamental Rights Should Not be Up for Negotiation in the Digital Services Act

Tue, 09/28/2021 - 5:11pm

European Union (EU) civil society organizations, led by EFF and Access Now, are keeping a sharp eye on the myriad proposals to amend the European Commission’s Digital Services Act (DSA) ahead of important committee votes in the European Parliament (EP). We want to see the DSA, which will overhaul regulations for online platforms, foster a new era of transparency and openness between tech platforms and Internet users. It should protect fundamental rights online and provide Europeans with greater control over their Internet experience.

To ensure the DSA is moving in the right direction, we are calling on the European Parliament to reject proposals that cross the line and undermine pillars of the e-Commerce Directive that are crucial to a free and democratic society. In a letter to members of Parliament today, we are sending a clear message that free speech online, protection of marginalized groups, and respect for users’ private communication are key principles that should not be up for negotiation.

Keep Limited Liability Exemptions

Specifically, proposals by the EP Committee on Legal Affairs (JURI) to limit liability exemptions for internet companies that perform basic functions of content moderation and content curation would contradict EU Court of Justice case law and result in over-removal of legitimate content at large scale. These dangerous ideas, up for committee vote this week, should be rejected. The DSA should make sure that online intermediaries continue to benefit from comprehensive liability exemptions in the EU and not be held liable for content provided by users. Any modifications that result in short-sighted content removals of legitimate speech or which otherwise do not comply with fundamental rights protections under the EU Charter and the jurisprudence of the Court of Justice should be rejected.

Protect User Privacy, Reject Filters

Further, measures that would force companies to analyze and indiscriminately monitor users’ communication or use upload filters have no place in the DSA. Protecting the privacy of users and their personal data is a fundamental right laid down in the EU Charter. The DSA should honor users’ expectation of privacy and protect their right to communicate free of monitoring and censorship.

Don’t Treat Ordinary Internet Users as Second-Class Citizens

We are extremely concerned about trusted flagger proposals that favor the powerful and would give politicians and popular public figures special advantages not available to ordinary users. Government and law enforcement agencies would get first-class treatment if platforms are obligated to prioritize their notices. This not only opens the door to misuse, but affords ordinary users second-class treatment—anathema to free expression in democratic societies. Platforms should not be forced to apply one set of rules to ordinary users and a more permissive set of rules to influencer accounts and politicians.

For the letter to the European Parliament:
https://www.eff.org/document/dsa-joint-letter-ep

For more on the DSA:
https://www.eff.org/issues/eu-policy-principles

For the statement by Access Now:
https://accessnow.org/civil-society-eu-digital-services-act

EFF to Court: Stop SFPD from Spying on Protesters for Black Lives

Mon, 09/27/2021 - 4:53pm

EFF and the ACLU of Northern California recently filed a brief asking the San Francisco Superior Court to rule that the San Francisco Police Department (SFPD) violated the law when it obtained and used a remote, live link to a business district’s surveillance camera network to monitor protests in the wake of George Floyd’s murder in May and June 2020.

In October 2020, on behalf of three activists of color, we sued the City and County of San Francisco for violating the city’s landmark Surveillance Technology Ordinance. A few months earlier, an EFF investigation uncovered that the SFPD had obtained live access to a downtown business district’s camera network for 8 days that summer to spy on Black-led protests against police violence. The Ordinance prohibits any city department, including the SFPD, from acquiring, borrowing, or using, or entering an agreement to acquire or use, surveillance technology without prior approval from the city’s Board of Supervisors. The Ordinance is one of nearly 20 Community Control of Police Surveillance (CCOPS) laws nationwide that empower community members, through their local legislators, to make decisions about if and under what circumstances police and other government agencies may acquire and use surveillance technology.  

We filed our motion for summary judgment—asking the court to rule without a trial—after obtaining documents and deposition testimony that an SFPD officer repeatedly viewed the camera network during the 8 days that the department had access. This contradicted the SFPD’s previous public statements that they obtained access to the network, but never viewed it.

The information we uncovered showed the SFPD unilaterally and secretly deployed a surveillance camera network against protesters marching in defense of Black lives. In the words of Hope Williams, one of the plaintiffs, “It is an affront of our movement for equity and justice that the SFPD responded to police abuse and violence by secretly spying on us.” SFPD’s unlawful actions chill speech and make it harder for activists to organize and participate in future protests.

The SFPD must be held accountable for breaking the law. That’s why we’re asking the court to enforce the Ordinance and issue an order prohibiting San Francisco and its police from obtaining or using any non-city camera network absent prior Board approval. As Hope put it, “We have the right to organize, speak out, and march without fear of police surveillance.”

Related Cases: Williams v. San Francisco

SHOP SAFE Is Another Attempt to Fix Big Tech That Will Mostly Harm Small Players and Consumers

Fri, 09/24/2021 - 10:53pm

Congress is once again trying to fix a very specific problem with a broad solution. We support the SHOP SAFE Act’s underlying goal of protecting consumers from unsafe and defective counterfeit products.  The problem is that SHOP SAFE tackles the issue in a way that would make it incredibly difficult for small businesses and individuals to sell anything online. It will do little to stop sophisticated counterfeiters and will ultimately do consumers more harm than good, by obstructing competition and hindering consumers’ ability to resell their own used goods.

Think about trying to sell something used online. Think about having a wool sweater that’s still in great condition but just doesn’t make sense for you anymore. Maybe you moved from Denver to Miami. So, as many of us do these days, you list your sweater online. You put it on eBay or Facebook Marketplace. Or a friend says they know someone who wants it and puts you in touch via email. You exchange the sweater for some cash, and everyone’s happy.

Now imagine that before you can make that sale, you have to send eBay (or Facebook, or your email provider) a copy of your government ID. And verify that you took “reasonable steps,” whatever that means, to make sure the sweater isn’t a counterfeit. And state in your listing where the sweater was made, or if you don’t know, tell the platform all the steps you took to try and figure that out. And carefully word your listing to avoid anything that might get it caught in an automated trademark filter. At this point, you might reasonably decide to just chuck the sweater in the trash rather than jump through all these hoops.

That’s the regime SHOP SAFE threatens to create.

SHOP SAFE Is Bad for the Little Guy

It’s easy, conceptually, to collapse the world of online selling to just Amazon. But that isn’t the reality. Laws written with only Amazon in mind will solidify Amazon’s dominance by imposing burdens that are onerous for small players to meet. And while the requirements of the bill are clearly geared towards large marketplaces like Amazon, the universe of platforms it would apply to is much broader. The current bill language could be interpreted to cover anything from Craigslist to Gmail—basically any online service that can play a role in advertising, selling, or delivering goods. This isn’t just some reach reading that we came up with; at least two anti-counterfeiting organizations supporting SHOP SAFE have urged Congress to make sure it applies even to Facebook Messenger and WhatsApp.

SHOP SAFE would make all of these platforms liable for counterfeiting by their users unless they take certain measures. Technically the bill only creates liability for counterfeiting of products that “implicate health and safety,” but the definition of that term is so broad it could be read to cover just about anything. For example, it could arguably cover your wool sweater because some people have wool allergies. Sure, you could make a case that the definition should be read more narrowly. But platforms don’t want to end up in the position of needing to make that case, so you can bet their legal departments will err on the safe side.

One measure platforms would have to take under SHOP SAFE is verifying the identity, address, and contact information of any third-party seller who uses their services. Imagine if you had to provide a copy of your driver’s license to Craigslist just to advertise your garage sale or sell a used bike. As over the top as that seems, it’s even worse when you think about how this would apply to services like Gmail or Facebook. Should you really have to provide ID to open an email account, just in case you sell something using it? Requirements like this threaten not only competition but user privacy, too.

Other provisions of SHOP SAFE put the burden of rooting out counterfeits on platforms, rather than on the trademark holders who are in the best position to know a real from a fake. Most concerning to us is the requirement that platforms implement “proactive technological measures” for pre-screening listings.  This provision echoes calls for mandatory automated content filtering in the copyright context. We’ve written extensively about the problems with filtering mandates, including filters’ inability to tell infringing from noninfringing uses and their prohibitive cost to all but the largest platforms.  The same concerns apply here.  For example, listings for genuine used goods could easily be caught by a filtering system, as could any listing that compares one product to another or identifies compatible products. Plus many trademarks consist entirely of one or two dictionary words—meaning any filtering technology could easily block listings as suspicious just because the product description included words that happen to be someone’s trademark.
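As a toy illustration of that over-blocking risk (the marks, listings, and matching rule below are invented, and real pre-screening systems are more elaborate), a filter that simply screens listing text against trademarks made of ordinary dictionary words sweeps in perfectly lawful resale and compatibility listings:

# Toy example of why keyword-style trademark filters over-block.
# The marks, listings, and matching rule are invented for illustration;
# the underlying problem is that text matching cannot distinguish a
# counterfeit from a lawful resale or compatibility claim.
DICTIONARY_WORD_MARKS = {"apple", "shell", "delta"}

def flag_listing(description: str) -> bool:
    """Flag any listing whose text contains a registered mark."""
    words = {w.strip(".,!?").lower() for w in description.split()}
    return bool(words & DICTIONARY_WORD_MARKS)

listings = [
    "Genuine pre-owned Apple laptop charger, lightly used",  # lawful resale
    "Replacement band compatible with Apple watches",        # compatibility claim
    "Hiking guide to the Okavango Delta",                     # ordinary dictionary word
]

for text in listings:
    print(flag_listing(text), "-", text)
# All three lawful listings get flagged even though none of them is a counterfeit.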

SHOP SAFE requires platforms to implement all of these measures “at no cost” to trademark holders. So either those costs will be passed on to the third-party sellers or absorbed by a platform that has money to burn. For smaller platforms that serve small businesses or individual sellers, either option would be untenable. If these platforms can’t survive, that means fewer choices for consumers.

Is SHOP SAFE a DMCA for Trademarks? No, It’s Worse.

In discussions of SHOP SAFE, some have compared it to the DMCA’s notice-and-takedown regime for addressing online copyright infringement.  SHOP SAFE does share some features with the DMCA. Like the DMCA, SHOP SAFE would give rightsholders leverage to get content taken off the internet based only on their say-so. It also requires platforms to suspend—and then ban—sellers who have been “reasonably determined” to have repeatedly used a counterfeit mark. That doesn’t necessarily mean a court finding, just a determination by a platform. In the DMCA context, the fear of losing an account has been a powerful deterrent to asserting rights based on fair use or other defenses.

But SHOP SAFE’s requirements go far beyond the DMCA’s, while lacking safeguards like a counternotice procedure and penalties for bad-faith takedowns. SHOP SAFE also takes the DMCA’s safe harbor structure and flips it upside down. The DMCA incentivizes platforms to adopt certain policies and practices by providing a true safe harbor—that is, platforms that choose to satisfy the safe harbor requirements can be confident that they cannot be held liable for infringement by their users. SHOP SAFE doesn’t work this way.  Instead, it creates a new, independent basis for secondary infringement liability, and it directs that all covered platforms must implement a range of practices or else be held liable for any trademark infringement by their users.  The DMCA’s safe harbor framework is preferable because it incentivizes desired behavior while maintaining flexibility for different approaches by different platforms according to their unique characteristics.

We do want to protect consumers, but this isn’t the way to do it. Laws for holding marketplaces like Amazon accountable when consumers get hurt already exist. SHOP SAFE is an imprecise, destructive approach to preventing sales of dangerous products, and there’s little reason to think the benefits would outweigh the costs to competition and consumer choice.  Let’s not hurt consumers with a law that’s supposed to help them.

Digital Rights Updates with EFFector 33.6

Thu, 09/23/2021 - 1:31pm

Want the latest news on your digital rights? Then you’ve come to the right place! Version 33, issue 6 of EFFector, our monthly-ish newsletter, is out now! Catch up on the latest EFF news, from our protests at Apple stores to celebrating that HTTPS is actually everywhere, by reading our newsletter or listening to the new audio version below.

Listen on the Internet Archive: EFFector 33.06 - Why EFF Flew A Plane Over Apple's Headquarters

Make sure you never miss an issue by signing up by email to receive EFFector as soon as it's posted! Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and now listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Colorado Supreme Court Rules Three Months of Warrantless Video Surveillance Violates the Constitution

Wed, 09/22/2021 - 4:52pm

EFF Legal Intern Hannah Donahue co-wrote this post.

Last week, the Colorado Supreme Court ruled, in a case called People v. Tafoya, that three months of warrantless continuous video surveillance outside a home by the police violated the Fourth Amendment. We, along with the ACLU and the ACLU of Colorado, filed an amicus brief in the case.

The police, after receiving a tip about possible drug activity, attached a camera to a utility pole across from Rafael Tafoya’s home that captured views of his front yard, driveway, and back yard. The back yard and part of the driveway were enclosed within a six-foot high privacy fence, which obscured their view from passersby. However, the fence did not block the view from the high vantage of the utility pole. The police could observe a live video feed of the area and could remotely pan, tilt, and zoom the camera. They also stored the footage indefinitely, making it available for later review at any time.

At trial, Tafoya moved to suppress all evidence resulting from the warrantless pole camera surveillance, arguing that it violated the Fourth Amendment. The trial court denied the motion, and Tafoya was convicted on drug trafficking charges. A division of the court of appeals reversed, agreeing with Tafoya that the surveillance was unconstitutional.

Last week, Colorado’s Supreme Court upheld the court of appeals opinion, finding the continuous, long-term video surveillance violated Tafoya’s reasonable expectation of privacy. Citing to United States v. Jones and Carpenter v. United States, the court stated: “Put simply, the duration, continuity, and nature of surveillance matter when considering all the facts and circumstances in a particular case.” The court held that 24/7 surveillance for more than three months represented a level of intrusiveness that “a reasonable person would not have anticipated.”

This ruling is in line with a recent opinion from the Massachusetts Supreme Judicial Court in another case involving long-term pole camera surveillance: Commonwealth v. Mora. In that case, the state’s highest court held that the surveillance violated Massachusetts’ state constitutional equivalent to the Fourth Amendment. The Mora court recognized that advances in law enforcement officers’ ability to monitor spaces exposed to public view should not necessarily diminish peoples’ subjective expectations of privacy. As in Tafoya, the court held that the extended duration and continuous nature of the surveillance mattered. Even where people “subjectively may lack an expectation of privacy in some discrete actions they undertake in unshielded areas around their homes, they do not expect that every such action will be observed and perfectly preserved for the future.” We filed an amicus brief in Mora, as well as in an earlier federal district court case from Washington state, United States v. Vargas, that preceded Carpenter but held similarly.

However, several other courts have held that pole camera surveillance—even for periods of time much longer than three months—is constitutionally acceptable. For example, the Seventh Circuit held recently in United States v. Tuggle that police use of a pole camera to surveil a defendant’s home for 18 months did not violate the Fourth Amendment because the surveilled area was fully exposed to public view. The Tuggle court expressed serious reservations, though, about what its decision could mean for the trajectory of government surveillance technologies. Similarly, a panel of the First Circuit in United States v. Moore-Bush overturned a district court’s decision holding that eight months of warrantless pole camera surveillance violated the Fourth Amendment. The First Circuit granted en banc review of the panel decision, and we are currently waiting for the court to issue its opinion.

One concern with the Colorado Supreme Court’s ruling in Tafoya is its extensive focus on the fact that Tafoya maintained a six-foot privacy fence around his backyard and driveway as evidence of his subjective expectation of privacy. We argued in our amicus brief that the presence of such physical barriers should not be a determining factor because this standard would disproportionately harm people of lesser means. Basing a person’s expectation of privacy on their ability to obscure their property from view would mean that only those who live in wealthy communities—where they can build a fence, where their properties are set back far enough from poles, or where utility lines are buried underground—would be protected from pole camera surveillance. People who cannot afford to build privacy fences or who are not allowed to do so (those who rent or who live in multi-unit residential buildings, for example) would be disproportionately and negatively impacted by such a rule.

We also voiced these concerns in the amicus briefs we filed in Mora and Moore-Bush. The Mora court acknowledged these concerns, explaining that “a resource-dependent approach” undermines protections against warrantless searches by requiring people to “erect physical barriers around their residences before invoking the protections of the Fourth Amendment.” The court stated it would “not undermine [] long-held egalitarian principles” that constitutional rights should apply equally to even the poorest people.

We are following this issue closely and will continue to argue that warrantless residential pole-camera surveillance violates the Fourth Amendment and disproportionately harms disadvantaged communities.

Related Cases: US v. Jones; Carpenter v. United States

Stop Military Surveillance Drones from Coming Home

Tue, 09/21/2021 - 5:49pm

A federal statute authorizes the Pentagon to transfer surveillance technology, among other military equipment, to state and local police. This threatens privacy, free speech, and racial justice.

So Congress should do the right thing and enact Representative Ayanna Pressley’s amendment, the Moratorium on Transfer of Controlled Property to Enforcement Agencies, to H.R. 4350, the National Defense Authorization Act for Fiscal Year 2022 (NDAA22). The amendment would greatly curtail the amount of dangerous military equipment, including surveillance drones, that could be transferred to local and state law enforcement agencies through the Department of Defense’s “1033 program,” which has already placed $7.4 billion in military equipment with police departments since 1990.

The program includes both “controlled” property, such as weapons and vehicles, and “uncontrolled” property, such as first aid kits and tents. Pressley’s amendment would prevent the transfer of all “controlled” property, which includes “unmanned aerial vehicles,” or drones. It also includes:

  • Manned aircraft
  • Wheeled armored vehicles
  • Command and control vehicles
  • Specialized firearms and ammunition under .50 caliber
  • Breaching apparatus
  • Riot batons and shields

Even without the Department of Defense sending drones into our communities, police use of these autonomous flying robots is rapidly expanding. Some police departments are so eager to get their hands on drones that they’ve claimed they need them to help fight COVID-19. The Chicago Police Department even launched a massive drone program using only off-the-books money taken through civil asset forfeiture.

 We know what will happen if police get their hands on more and more military surveillance drones. Technology given out on the condition that it can only be used in “extreme” circumstances often ends up being used in everyday acts of over-policing. And police have already used drones to monitor how people exercise their First Amendment-protected rights. 

After the New York City Police Department accused one activist, Derrick Ingram, of injuring an officer’s ears by speaking too loudly through his megaphone at a protest, police flew drones by his apartment window—a clear act of intimidation against activists and protesters. The government also flew surveillance drones over multiple protests against police racism and violence during the summer of 2020. When police fly drones over a crowd of protesters, they chill free speech and political expression through fear of reprisal and retribution from police. Police could easily apply face surveillance technology to footage collected by a surveillance drone that passed over a crowd, creating a preliminary list of everyone who attended that day’s protest.

With the United States ending its multi-decade occupation of Afghanistan, military equipment once used in warfare is now inching closer to re-deployment onto U.S. streets. The scaling back of military involvement in Iraq coincided with a massive influx of weapons, armed vehicles, and other Department of Defense surplus being fed directly into police departments. We must prevent a repeat of history. 

In 2015, after public reaction against militarized police in Ferguson, Missouri, President Obama made a few reforms to the 1033 program.  Specifically, he banned transfer to the homefront of armored vehicles, weaponized aircraft and vehicles, weapons of over a specific caliber, grenade launchers, and bayonets. But this did not go far enough to ensure that section 1033 will not contribute to the mass surveillance of people on U.S. soil. 

We’re calling on the public and members of Congress to support Ayanna Pressley’s amendment, the Moratorium on Transfer of Controlled Property to Enforcement Agencies, to H.R. 4350.

HTTPS Is Actually Everywhere

Tue, 09/21/2021 - 2:37pm

For more than 10 years, EFF’s HTTPS Everywhere browser extension has provided a much-needed service to users: encrypting their browser communications with websites and making sure they benefit from the protection of HTTPS wherever possible. Since we started offering HTTPS Everywhere, the battle to encrypt the web has made leaps and bounds: what was once a challenging technical argument is now mainstream practice, with HTTPS offered on most web pages. Now HTTPS is truly just about everywhere, thanks to the work of organizations like Let’s Encrypt. We’re proud of EFF’s own Certbot tool, which is Let’s Encrypt’s software complement that helps web administrators automate HTTPS for free.

The goal of HTTPS Everywhere was always to become redundant. That would mean we’d achieved our larger goal: a world where HTTPS is so broadly available and accessible that users no longer need an extra browser extension to get it. Now that world is closer than ever, with mainstream browsers offering native support for an HTTPS-only mode.

With these simple settings available, EFF is preparing to deprecate the HTTPS Everywhere web extension as we look to new frontiers of secure protocols like SSL/TLS. After the end of this year, the extension will be in “maintenance mode” for 2022. We know many different kinds of users have this tool installed, and we want to give our partners and users the needed time to transition. We will continue to inform users that there are native HTTPS-only browser options before the extension is fully sunset.

Some browsers like Brave have for years used HTTPS redirects provided by HTTPS Everywhere’s Ruleset list. But even with innovative browsers raising the bar for user privacy and security, other browsers like Chrome still hold a considerable share of the browser market. The addition of a native setting to turn on HTTPS in these browsers impacts millions of people.
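For readers curious about the mechanics, the sketch below shows the basic idea behind this kind of upgrade: rewrite an http:// URL to https:// and keep the upgrade if the secure endpoint answers. It is a minimal, hypothetical Python illustration of the concept only, not HTTPS Everywhere’s ruleset engine and not any browser’s actual implementation; the function name and fallback behavior are invented for this example.

from urllib.parse import urlsplit, urlunsplit

import requests  # third-party HTTP client; pip install requests

def upgrade_to_https(url: str, timeout: float = 5.0) -> str:
    # Rewrite an http:// URL to https:// and keep the upgrade if the
    # secure endpoint answers. (A strict HTTPS-only mode would warn or
    # block instead of silently falling back to plaintext HTTP.)
    parts = urlsplit(url)
    if parts.scheme != "http":
        return url  # already https (or another scheme); nothing to do
    secure_url = urlunsplit(("https",) + tuple(parts)[1:])
    try:
        requests.head(secure_url, timeout=timeout, allow_redirects=True)
        return secure_url
    except requests.RequestException:
        return url  # no working HTTPS endpoint found

print(upgrade_to_https("http://example.com/page"))  # https://example.com/page

Native HTTPS-only modes do the equivalent inside the browser itself, before any plaintext request ever leaves your machine.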

Follow the steps below to turn on these native HTTPS-only features in Firefox, Chrome, Edge, and Safari and celebrate with us that HTTPS is truly everywhere for users.

Firefox

The steps below apply to Firefox desktop. HTTPS-only for mobile is currently only available in Firefox Developer mode, which advanced users can enable in about:config. 

Preferences > Privacy & Security > Scroll to Bottom > Enable HTTPS-Only Mode

Chrome

HTTPS-only in Chrome is available for both desktop and mobile in Chrome 94 (released today!).

Settings > Privacy and security > Security > Scroll to bottom > Toggle “Always use secure connections”

Edge

This is still considered an “experimental feature” in Edge, but is available in Edge 92.

  1. Visit edge://flags/#edge-automatic-https and enable Automatic HTTPS
  2. Hit the “Restart” button that appears to restart Microsoft Edge.

  3. Visit edge://settings/privacy, scroll down, and turn on “Automatically switch to more secure connections with Automatic HTTPS”.

Safari

HTTPS is upgraded by default when possible in Safari 15, recently released September 20th, for macOS Big Sur and macOS Catalina devices. No setting changes are needed from the user.


Why EFF Flew a Plane Over Apple's Headquarters

Tue, 09/21/2021 - 12:23pm

For the last month, civil liberties and human rights organizations, researchers, and customers have demanded that Apple cancel its plan to install photo-scanning software onto devices. This software poses an enormous danger to privacy and security. Apple has heard the message, and announced that it would delay the system while consulting with various groups about its impact. But in order to trust Apple again, we need the company to commit to canceling this mass surveillance system.

The delay may well be a diversionary tactic. Every September, Apple holds one of its big product announcement events, where Apple executives detail the new devices and features coming out. Apple likely didn’t want concerns about the phone-scanning features to steal the spotlight. 

But we can’t let Apple’s disastrous phone-scanning idea fade into the background, only to be announced with minimal changes down the road. To make sure Apple is listening to our concerns, EFF turned to an old-school messaging system: aerial advertising.  

EFF banner flies over Apple Park, the corporate headquarters of Apple, located in Cupertino, California

EFF banner flies over the previous Apple headquarters

During Apple’s event, a plane circled the company’s headquarters carrying an impossible-to-miss message: Apple, don’t scan our phones! The evening before Apple’s event, protestors also rallied nationwide in front of Apple stores. The company needs to hear us, and not just dismiss the serious problems with its scanning plan. A delay is not a cancellation, and the company has also been dismissive of some concerns, referring to them as “confusion” about the new features.

Privacy Is Not For Sale

Apple’s iMessage is one of the preeminent end-to-end encrypted chat clients. End-to-end encryption is what allows users to exchange messages without having them intercepted and read by repressive governments, corporations, and other bad actors. We don’t support encryption for its own sake: we fight for it because encryption is one of the most powerful tools individuals have for maintaining their digital privacy and security in an increasingly insecure world.

Now that Apple’s September event is over, Apple must reach out to groups that have criticized it and seek a wider range of suggestions on how to deal with difficult problems, like protecting children online. EFF, for its part, will be holding an event with various groups that work in this space to share research and concerns that Apple and other tech companies should find useful. While Apple tends to announce big features without warning, that practice is a dangerous one when it comes to making sweeping changes to technology as essential as secure messaging. 

The world, thankfully, has moved towards encrypted communications over the last two decades, not away from them, and that’s a good thing. If Apple wants to maintain its reputation as a pro-privacy company, it must continue to choose real end-to-end encryption over government demands to read users’ communications. Privacy matters now more than ever. It will continue to be a selling point and a distinguishing feature of some products and companies. For now, it’s an open question whether Apple will continue to be one of them.


How California’s Broadband Infrastructure Law Promotes Local Choice

Fri, 09/17/2021 - 1:46pm

The legislative session has ended and Governor Newsom is expected to sign into law S.B. 4 and A.B. 14. These bills stand as the final pieces of the state’s new broadband infrastructure program. With a now-estimated $7.5 billion assembled between federal and state funds, California has the resources it needs to largely close the digital divide in the coming years. The program allows local cities and counties to access infrastructure dollars to solve problems in their own communities, and empowers local private entities, rather than depending on large multinational ISPs that aren’t willing to make the needed generational investment in infrastructure in most areas of the state.

EFF will explain below why local communities need to take charge, and how the new law will facilitate local choice in broadband. No other state has yet taken this approach and departed from the old model of handing all the subsidies to giant corporations. That’s why it’s important for Californians to understand the opportunity before them now.

Why it Has to be a Local Public, Private, or Public/Private Entity

If the bankruptcy of Frontier Communications has taught us anything, it is the following two lessons. First, large national private ISPs will forgo 21st-century fiber infrastructure in as many places as they can to pad their short-term profits. Government subsidies to build in different areas do not change this behavior. Second, the future of broadband access depends on the placement of fiber optic wires. Fiber is an investment in long-term value over short-term profits. EFF’s technical analysis has also laid out why fiber optics is future-proof infrastructure by showing that no other transmission medium for broadband even comes close, which makes its deployment essential for a long-term solution.

AT&T and cable companies, such as Comcast and Charter, are going to try to take advantage of this program by making offers that sound nice. But they will leverage existing legacy infrastructure that is rapidly approaching obsolescence. While they may be able to offer connectivity that’s “good enough for today” at a cheaper price than delivering fiber, there is no future in those older connections. It’s clear that higher upload speeds are becoming the norm, and demand for them keeps increasing. As California’s tech sector begins to embrace distributed work, only communities with 21st-century fiber broadband access will be viable places for those workers to live. Fiber optics’ benefits are clear. The challenge of fiber optics is that its high upfront construction costs require very long-term financing models to deliver on its promise. Here is how the state’s new program makes that financing possible.

A Breakdown of the New Broadband Infrastructure Program

The infrastructure law has four mechanisms in place to help finance and plan new, local options: a grant program for the unserved; long-term financing designed around public, non-profit, and tribal entities; a state-run middle-mile program; and a state technical assistance program. Let’s get into the weeds on each of them.

Broadband Infrastructure Grant Account – The state is making more than $2 billion (and possibly up to $3.5 billion) available in grants, over the coming years, to finance (at 100% of the state’s cost) the construction of broadband networks in areas that need them. To qualify, such areas must lack the following three traits, premised on federal and state mapping data:

  • Broadband service at speeds of at least 25 Mbps downstream and 3 Mbps upstream (this is mostly folks reliant on DSL copper access or less)
  • Latency that is sufficiently low to allow real-time interactive applications
  • An existing project that is receiving money from, and carrying out the objectives of, the Rural Digital Opportunity Fund

To focus the grant funds, priority is placed on areas that do not even have 10 Mbps downstream and 1 Mbps upstream—this is mostly areas that only have satellite internet. The program is focused on having the state pay the construction costs for people who have no real internet access at all, as opposed to those with merely slow or inadequate access.
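To make the eligibility logic above concrete, here is a minimal, hypothetical Python sketch that simply mirrors the criteria as described in this post. The field and function names are invented for illustration; actual determinations are made from federal and state mapping data, not from a function like this.

from dataclasses import dataclass

@dataclass
class AreaBroadbandData:
    download_mbps: float
    upload_mbps: float
    low_latency: bool   # latency low enough for real-time interactive applications
    rdof_funded: bool   # already covered by a Rural Digital Opportunity Fund project

def qualifies_for_grant(area: AreaBroadbandData) -> bool:
    # An area qualifies when it lacks 25/3 Mbps service, lacks sufficiently
    # low latency, and lacks an existing RDOF-funded buildout.
    has_25_3 = area.download_mbps >= 25 and area.upload_mbps >= 3
    return not has_25_3 and not area.low_latency and not area.rdof_funded

def is_priority_area(area: AreaBroadbandData) -> bool:
    # Priority goes to areas that lack even 10/1 Mbps service.
    return not (area.download_mbps >= 10 and area.upload_mbps >= 1)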

Loan Loss Reserve Fund – The State Treasury will establish this fund to enable long-term financing by cities, counties, community service districts, public utilities, municipal utility districts, joint powers authorities, local educational agencies, tribal governments, electrical cooperatives, and non-profits. It will be designed to help these entities obtain very low interest rates with low debt obligations. Think of this program like our mortgage-lending system: 30-year fixed mortgages enable many people to purchase homes, even if they could never gather the cash necessary to make the purchase all at once. Fiber is well-suited for this type of financing vehicle; it will be able to deliver speeds useful for multiple decades and carries lower maintenance costs than other broadband options.

State Open-Access Middle-Mile – The state of California, overseen by the Department of Technology, will deploy fiber infrastructure on an open-access basis—meaning on non-discriminatory terms and accessible by ISPs—with an emphasis on developing rural exchange points. The goal behind this infrastructure is to deliver multi-gigabit capacity to areas building broadband access, and to bring the cost of backhaul capacity to the global internet down to affordable rates. To use an analogy, the state is building the highways that connect communities to the airport—and the world. The option to connect to these internet highways will be made available to all comers. So, for example, small local businesses or local townships can connect a fiber line to these facilities to build a local broadband network.

Technical Assistance by the State – Fiber infrastructure is a game-changer on the ground. Echoing the way the federal government advised local governments and communities on the deployment of a similarly revolutionary technology—electricity—the new broadband infrastructure law deputizes the California Public Utilities Commission to provide technical assistance for these plans. The CPUC will provide local governments and providers with assistance for grant applications to other federal programs and will participate in the development of infrastructure plans with county governments.

How all These Programs Work Together and End the Reliance on AT&T and Comcast

Any small business, local government, or even a school district will soon have these tools to solve their own problems. As they look to use the programs listed above, it’s important for any local player seeking to build their own broadband solution to understand that it will take a multi-year effort to do it right. The loan loss reserve program will focus on multi-decade repayment plans. This gives eligible entities access to billions of loan dollars for future-proof fiber infrastructure. The grants are meant to eliminate the construction burden of delivering access to the most difficult-to-serve populations in pockets throughout the state. But any real effort to build a network will have to include their underserved neighbors. For those communities, the state will attempt to deliver the best-priced access to bandwidth capacity through its middle-mile program. Doing so will help keep prices as low as is feasible to enable the delivery of cheap, fast internet in areas that otherwise would never have seen access.

And for any of this to happen, every community needs someone at the local level who is well-versed in how to use the state’s program. That’s where the technical assistance by the state comes in, to help locals navigate the hardest parts of developing a local broadband solution.

Still, no state program can make folks on the ground do the work. That’s why we need people engaged in their communities. If you are tired of relying on big providers that prioritize Wall Street investors over your local community’s needs and are motivated to figure out a solution at home, this is your moment.  This new law not only had you in mind, it’s counting on you to step up to the plate.

No, Tech Monopolies Don’t Serve National Security

Thu, 09/16/2021 - 5:47pm

In what appears to be a “throw spaghetti at the wall” approach to stopping antitrust reform targeting Big Tech, a few Members of Congress and a range of former military and intelligence officials wrote a letter asserting that these companies need to be protected for national security. It’s a spurious argument that seeks to leverage fear of China to prevent changes desperately needed for consumer choice and innovation.

The argument they make is that gigantic tech companies are the only ones who can innovate and compete with China. But this completely misses the point on innovation. When companies have monopolies, they have no reason to innovate since they have captured the market. There is no need to compete to have the best product when you are the only product. Innovation depends on the best ideas from everyone being put forth to the public.   

Now, we don’t know if these folks actually believe the argument or simply expect the rest of us to believe it because they say it. Either way, this letter is really only about delaying legislative antitrust action by raising concerns that are not just fictional but rest on completely bogus takes on how innovation happens on the internet.

This Has Been Tried Before, and It Didn’t Work Then

The irony about the national security argument is that it takes a page straight out of the AT&T monopoly playbook and history. Forty years ago, AT&T was the largest corporation in the world and was facing antitrust action both in Congress and the courts. In a Hail Mary effort to get the Department of Justice to abandon its lawsuit, AT&T lobbyists went to the Department of Defense and convinced them that a monopoly communications network was essential for national security.

Source: New York Times Archive, https://www.nytimes.com/1981/04/09/business/weinberger-defends-at-t.html

The plan was to convince then-President Ronald Reagan that he should directly order the Department of Justice to end the case, despite nearly six years of court hearings detailing how AT&T leveraged its monopoly power. In fact, a year prior to the Department of Defense weighing in opposition to further antitrust action, a federal jury had already awarded MCI $1.8 billion in antitrust damages against AT&T.

The situation with Big Tech is similar: like the AT&T monopoly of the past, it is facing antitrust actions on various fronts, and like AT&T it is attempting to change the narrative and come up with any excuse to avoid the right outcome, which is opening up the tech industry to competition.

Innovation Does Not Come From Big Tech; It Gets Bought by Them

The signers of the letter adopt the view that massive consolidation of the industry is necessary for innovation. But the exact opposite is true. Due to the size of these companies and their targeted acquisitions, innovation is either unnecessary or simply bought up. Startups with new ideas aren’t being launched to make something that competes with Google, Facebook, Apple, and Amazon’s services or products, because the lion’s share of investor money has gone towards creating products that Big Tech will pay lots of money to acquire.

Congressional investigations identified this “kill zone” as the area of tech products and services that orbit the dominant platforms' products, such as search in the case of Google or social media in the case of Facebook. In fact, one would be hard-pressed to find a new organic product from Big Tech that didn’t find its origins in buying another company.

After a lengthy investigation by the House Judiciary Committee and Senate hearings into the merger practices of these companies with a wide array of experts and industry players, the congressional record is full of evidence demonstrating that the size of Big Tech is, in fact, suppressing the competition that sparks innovation. Think about how the tech industry used to be a place where previous giants were regularly replaced by the next best thing that started as a garage startup. EFF calls this the life cycle of competition, and it has been fading from the tech industry as the dominant platforms have entrenched themselves. This is why EFF strongly supports bills such as the ACCESS Act and the Open App Markets Act: they would open up dominant platforms to new entrants and help empower smaller players to innovate without interference again.

It comes as no surprise that 79% of Americans view Big Tech mergers as anti-competitive; the public isn’t fooled. These companies aren’t huge because size gives them some sort of cutting edge; they are huge because size conveys dominance, control, and monopoly profits. The public understands this, but, clearly, some Members of Congress are not getting it.

What’s Up with WhatsApp Encrypted Backups

Thu, 09/16/2021 - 2:11pm

WhatsApp is rolling out an option for users to encrypt their message backups, and that is a big win for user privacy and security. The new feature is expected to be available for both iOS and Android “in the coming weeks.” EFF has long pointed to unencrypted backups as a huge weakness for WhatsApp and for any messenger that claims to offer end-to-end encryption, and we applaud this improvement. Next, encryption for backups should become the default for all users, not just an option.

Currently, users can choose to periodically back up their WhatsApp message history on iCloud (for iOS phones) or Google Drive (for Android phones), or to never back them up at all. Backing up your messages means that you can still access them if, for example, your phone is lost or destroyed. 

WhatsApp does not have access to these backups, but backup service providers Apple and Google sure do. Unencrypted backups are vulnerable to government requests, third-party hacking, and disclosure by Apple or Google employees. That’s why EFF has consistently recommended that users not back up their messages to the cloud, and further that you encourage your friends and contacts to skip it too. Backing up secure messenger conversations to the cloud unencrypted (or encrypted in a way that allows the company running the backup to access message contents) means exposing the plaintext to third parties, and introduces a significant hole in the protection the messenger can offer.

When encrypted WhatsApp backups arrive, that will change. With fully encrypted backups, Apple and Google will no longer be able to access backed-up WhatsApp content. Instead, WhatsApp backups will be encrypted with a very long (64-digit) encryption key generated on the user’s device. Users in need of a high level of security can directly save this key in their preferred password manager. All others can rely on WhatsApp’s recovery system, which will store the encryption key in a way that WhatsApp cannot access, protected by a password of the user’s choosing.
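To make the design concrete, here is a minimal, hypothetical Python sketch of the general idea behind client-side backup encryption: a random 64-hex-digit key is generated on the device and used to encrypt the backup before upload, so the cloud provider only ever stores ciphertext. The function names are invented for illustration; this is not WhatsApp’s actual protocol or its key-recovery system.

import os
import secrets

from cryptography.hazmat.primitives.ciphers.aead import AESGCM  # pip install cryptography

def generate_backup_key() -> str:
    # 32 random bytes rendered as 64 hex digits, generated on the device;
    # the user can store this string in a password manager.
    return secrets.token_hex(32)

def encrypt_backup(plaintext: bytes, key_hex: str) -> bytes:
    key = bytes.fromhex(key_hex)   # 32-byte AES-256 key
    nonce = os.urandom(12)         # fresh nonce for every backup
    ciphertext = AESGCM(key).encrypt(nonce, plaintext, None)
    return nonce + ciphertext      # only this ciphertext ever leaves the device

def decrypt_backup(blob: bytes, key_hex: str) -> bytes:
    key = bytes.fromhex(key_hex)
    nonce, ciphertext = blob[:12], blob[12:]
    return AESGCM(key).decrypt(nonce, ciphertext, None)

backup_key = generate_backup_key()
blob = encrypt_backup(b"chat history...", backup_key)
assert decrypt_backup(blob, backup_key) == b"chat history..."

Without the key, the stored blob is opaque ciphertext to whoever holds it, which is the property that matters here.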

This privacy win from Facebook-owned WhatsApp is striking in its contrast to Apple, which has been under fire recently for its plans for on-device scanning of photos that minors send on Messages, as well as of every photo that any Apple user uploads to iCloud. While Apple has paused to consider more feedback on its plans, there’s still no sign that the company will fix one of its longstanding privacy pitfalls: the lack of effective encryption across iCloud backups. WhatsApp is raising the bar, and Apple and others should follow suit.

The Catalog of Carceral Surveillance: Patents Aren't Products (Yet)

Wed, 09/15/2021 - 8:22pm

In EFF’s Catalog of Carceral Surveillance, we explore patents filed by or awarded to prison communication technology companies Securus and Global Tel*Link in the past five years. The dystopian technologies the patents describe are exploitative and dehumanizing. And if the companies transformed their patents into real products, the technology would pose extreme threats to incarcerated people and their loved ones.

But importantly, patents often precede the actual development or deployment of a technology. Though applications may demonstrate an interest in advancing a particular technology, these intentions don’t always progress beyond the proposal, and many inventions that are described in patent applications don't wind up being built. What we can glean from a patent application is that the company is thinking about the technology and that it might be coming down the pipeline.

In 2019, Platinum Equity, the firm that has owned Securus Technologies since 2017, restructured the company, placing it under the parent company Aventiv. Aventiv claimed it would lead Securus through a transformation process that includes greater respect for human rights. According to Aventiv, many of the patents filed prior to 2019 will remain just ideas, never to be built. Following the publication of our initial Catalog of Carceral Surveillance posts, Aventiv responded with the following statement: "We at Aventiv are committed to protecting the civil liberties of all those who use our products. As a technology provider, we continuously seek to improve and to create new solutions to keep our communities safe.”

Aventiv’s statement goes on to respond to EFF’s post describing a patent filed by Securus that envisions a system for monitoring online purchases made by incarcerated people and their families. The company wrote: “The patent is not currently in development as it was an idea versus a product we will pursue,” and added that to “ensure there is no additional misunderstanding, we will be abandoning this patent and reviewing all open patents to certify that they align with our transformation efforts.”

Aventiv’s statement disclaiming the patent, however, references a different Securus patent than the one described in EFF’s post. We have followed up with Aventiv for clarification and will update this post when we hear back from the company.

Aventiv stated that the patent “was filed in June 2019, prior to our company publicly announcing a multi-year transformation effort,” and provided a link with more details about their commitments. The statement concluded: “Our organization is focused on better serving justice-involved people by making our products more accessible and affordable, investing in free educational and reentry programming, and taking more opportunities--just like this one--to listen to consumers.” 

GTL declined to comment for this series.

GTL and Securus were once among the greatest opponents of federal regulation of prison phone calls. They’ve claimed to have adjusted their positions. Both announced over the summer that they are supportive of reforms to create more accessible prison communications. Each began to offer inmates free phone calls and free tablets.

To better understand the potential (but not certain) futures of these companies, EFF created the  Catalog of Carceral Surveillance to spotlight the patents that could pave the way toward chilling developments in surveillance.

In the coming months, EFF plans to follow up with Aventiv to hold them to their word and will continue to remind prison technology companies of their responsibilities to the families they serve.

View the Catalog of Carceral Surveillance below. New posts will be added daily.

 

The Federal Government Just Can’t Get Enough of Your Face

Wed, 09/15/2021 - 6:52pm

There are more federal facial recognition technology (FRT) systems than there are federal agencies using them, according to the U.S. Government Accountability Office. Its latest report on current and planned use of FRT by federal agencies reveals that, among the 24 agencies surveyed, there are 27 federal FRT systems. Just three agencies—the U.S. Departments of Homeland Security, Defense, and Justice—use 18 of these systems for, as they put it, domestic law enforcement and national security purposes.

But 27 current federal systems are not enough to satisfy these agencies. The DOJ, DHS, and Department of the Interior also accessed FRT systems “owned by 29 states and seven localities for law enforcement purposes.” Federal agencies further accessed eight commercial FRT systems, including four agencies that accessed the infamous Clearview AI. That’s all just current use. Across federal agencies, there are plans in the next two years to develop or purchase 13 more FRT systems, access two more local systems, and enter two more contracts with Clearview AI.

As EFF has pointed out again and again, government use of FRT is anathema to our fundamental freedoms. Law enforcement use of FRT disproportionately impacts people of color, turns us all into perpetual suspects, and increases the likelihood of false arrest. Law enforcement agencies have also used FRT to spy on protestors.

Clearview AI, a commercial facial surveillance entity used by many federal agencies, extracts the faceprints of billions of unsuspecting people, without their consent, and uses them to provide information to law enforcement and federal agencies. The company is currently being sued in both Illinois state court and federal court for violating the Illinois Biometric Information Privacy Act (BIPA). Illinois’ BIPA requires opt-in consent to obtain someone’s faceprint. Recently, an Illinois state judge allowed the state case to proceed, opening a path for the American Civil Liberties Union (ACLU) to fight against Clearview AI’s business model, which trades in your privacy for their profit. You can read the opinion of the judge here, and find EFF’s two amicus briefs against Clearview AI here and here.

FRT in the hands of the government erodes the rights of the people. Even so, the federal government’s appetite for your face—through one of its 27 systems or commercial systems such as Clearview AI—is insatiable. Regulation is not sufficient here; the only effective solution to this pervasive problem is a ban on the federal use of FRT. Cities across the country, from San Francisco to Minneapolis to Boston, have already passed strong local ordinances to do so.

Now we must go to Congress. EFF supports Senator Markey’s Facial Recognition and Biometrics Technology Moratorium Act, which would ban the federal government’s use of FRT and some other biometric technologies. Join our campaign: contact your members of Congress and tell them to support this ban. The government can’t get enough of your face. Tell them they can’t have it.

Take Action

Tell Congress to Ban Federal Use of Face Recognition

You can find the GAO’s report here.

Texas’ Social Media Law is Not the Solution to Censorship

Wed, 09/15/2021 - 2:13pm

The big-name social media companies have all done a rather atrocious job of moderating user speech on their platforms. However, much like Florida's similarly unconstitutional attempt to address the issue (S.B. 7072), Texas' recently enacted H.B. 20 would make the matter worse for Texans and everyone else.

Signed into law by Governor Abbott last week, the Texas law prohibits platforms with more than 5 million users nationwide from moderating user posts based on viewpoint or geographic location. However, as we stated in our friend-of-the-court brief in support of NetChoice and the Computer & Communications Industry Association’s lawsuit challenging Florida's law (NetChoice v. Moody), "Every court that has considered the issue, dating back to at least 2007, has rightfully found that private entities that operate online platforms for speech and that open those platforms for others to speak enjoy a First Amendment right to edit and curate that speech."

Inconsistent and opaque content moderation by online media services is a legitimate problem. It continues to result in the censorship of a range of important speech, often disproportionately impacting people who aren’t elected officials. That's why EFF joined with a cohort of allies in 2018 to draft the Santa Clara Principles on Transparency and Accountability in Content Moderation, offering one model for how platforms can begin voluntarily implementing content moderation practices grounded in a human rights framework. Under the proposed principles, platforms would:

  1. Publish the numbers of posts removed and accounts permanently or temporarily suspended due to violations of their content guidelines.
  2. Provide notice to each user whose content is taken down or account is suspended about the reason for the removal or suspension.
  3. Provide a meaningful opportunity for timely appeal of any content removal or account suspension.

H.B. 20 does attempt to mandate some of the transparency measures called for in the Santa Clara Principles. Although these legal mandates might be appropriate as part of a carefully crafted legislative scheme, H.B. 20 is not the result of a reasonable policy debate. Rather, it is a retaliatory law aimed at violating the First Amendment rights of online services in a way that will ultimately harm all internet users.

We fully expect that once H.B. 20 is challenged, courts will draw from the wealth of legal precedent and find the law unconstitutional. Perhaps recognizing that H.B. 20 is imperiled for the same reasons as Florida’s law, the Lone Star State this week filed a friend-of-the-court brief in the appeal of a federal court’s ruling that Florida’s law is unconstitutional.

Despite Texas and Florida’s laws being unconstitutional, the concern regarding social media platforms’ control over our public discourse remains a critical policy issue. It is vitally important that platforms take action to provide transparency, accountability, and meaningful due process to all impacted speakers and ensure that the enforcement of their content guidelines is fair, unbiased, proportional, and respectful of human rights.

Lessons From History: Afghanistan and the Dangerous Afterlives of Identifying Data

Wed, 09/15/2021 - 1:34pm

As the United States pulled its troops out of Afghanistan after a 20-year occupation, byproducts of the prolonged deployment took on new meaning and represented a new chapter of danger for the Afghan people. For two decades, the United States spearheaded the collection of information on the people of Afghanistan, both for commonplace bureaucratic reasons, like payroll and employment data, and in massive databases of biometric material accessible through devices called HIIDE.

HIIDE, the Handheld Interagency Identity Detection Equipment, are devices used to collect biometric data like fingerprints and iris scans and store that information on large accessible databases. Ostensibly built in order to track terrorists and potential terrorists, the program also was used to verify the identities of contractors and Afghans working with U.S. forces. The military reportedly had an early goal of getting 80% of the population of Afghanistan into the program. With the Taliban retaking control of the nation, reporting about the HIIDE program prompted fears that the equipment could be seized and used to identify and target vulnerable people. 

Some sources, including those who spoke to the MIT Technology Review, claimed that the HIIDE devices offered only limited utility to any future regimes hoping to use them and that the data they access is stored remotely and is therefore less of a concern. They did raise alarms, however, about the wide-reaching and detailed Afghan Personnel and Pay System (APPS), used to pay contractors and employees working for the Afghan Ministry of Interior and Ministry of Defense. This database contains detailed information on every member of the Afghan National Army and Afghan National Police—prompting renewed fears that this information could be used to find people who assisted the U.S. military or Afghan state-building, policing, and counter-insurgency measures.

There has always been concern and protest over how the U.S military used this information, but now that concern takes on new dimensions. This is, unfortunately, a side effect of the collection and retention of data on individuals. No matter how secure you think the data is—and no matter how much you trust the current government to use the information responsibly and benevolently—there is always a risk that either priorities and laws will change, or an entirely new regime will take over and inherit that data. 

One of the most infamous examples was the massive trove of information collected and housed by Prussian and other German police and city governments in the early twentieth century. U.S. observers given tours of the Berlin police filing system were shocked to find dozens of rooms filled with files. In total, over 12 million records were kept containing personal and identifying information on people who had been born in, lived in, or traveled through Berlin since the system began. Although Prussian police were known for political policing and brutal tactics, during the Weimar period between 1918 and 1933, police were lenient and even begrudgingly accepting of LGBTQ+ people at a time when most other countries severely criminalized people with same-sex desires and gender-nonconforming people.

All of this changed when the Nazis rose to power and seized control of not just the government and economy of a major industrialized nation, but also millions of police files containing detailed information about people, who they were, and where to find them.

The history of the world is filled with stories of information—collected responsibly or not, with intended uses that were benevolent or not—having long afterlives. The information governments collect today could fall into more malevolent hands tomorrow. You don't even need to go abroad in search of a government finding new nefarious uses for information collected on individuals for entirely different and benevolent purposes. 

With the afterlives of biometric surveillance and data retention now re-threatening people in Afghanistan, we are regrettably able to add this chapter to the history of the dangers of mass data collection. Better protections on information and its uses can only go so far. In many instances, the only way to ensure that people are not made vulnerable by the misuse of private information is to limit, wherever possible, how much of it is collected in the first place.

Surveillance Self-Defense Guides Now Available in Burmese

Wed, 09/15/2021 - 12:58pm

As part of our goal to expand the impact of our digital security guide, Surveillance Self-Defense (SSD), we recently translated the majority of its contents into Burmese. This repository of resources on circumventing surveillance across a variety of different platforms, devices, and threat models is now available in English, and in whole or in part in 11 other languages: Amharic, Arabic, Spanish, French, Russian, Turkish, Vietnamese, Brazilian Portuguese, Burmese, Thai, and Urdu.

The last year has seen significant numbers of protests by the people of Myanmar against human and digital rights violations by the military, prompted by the recent military coup in the country. Fighting back against human rights violations shouldn’t require you to have a computer science degree, and so our SSD guides help explain, in clear language, how to protect yourself from digital surveillance and unpack key concepts that make doing so easier. These guides offer overviews and recommendations for digital security protection during protests, network circumvention, using VPNs and Tor, using Signal, social media safety, and so on. 

We hope these resources will help those in Myanmar access reliable, up-to-date digital security guidance during a high-stress time, localized to the unique considerations in Myanmar. In addition to this project, we also plan to translate our new mobile phone privacy guide into multiple languages, including Turkish, Russian, and Spanish. We’d like to thank the National Democratic Institute for providing funds for these translations, and Localization Lab for their efforts in completing them.
