How Grassroots Activists in Georgia Are Leading the Opposition Against a Dangerous “Computer Crime” Bill
A misguided bill in Georgia (S.B. 315) threatens to criminalize independent computer security research and punish ordinary technology users who violate fine-print terms of service clauses. S.B. 315 is currently making its way through the state’s legislature amid uproar and resistance that its sponsors might not have fully anticipated. At the center of this opposition is a group of concerned citizen-advocates who, through their volunteer advocacy, have drawn national attention to the industry-wide implications of this bill.
Scott M. Jones and David Merrill from Electronic Frontiers Georgia—a group that participates in the Electronic Frontier Alliance network—spoke to us about their efforts to inform legislators and the public of the harms this bill would cause.
You have most recently been organizing around Georgia Senate Bill 315. What is the bill about, and what are your concerns with it?
Scott: Senate Bill 315 is a computer intrusion bill. Georgia already has on the books some very strong laws against computer intrusion, computer fraud, and the malicious side of hacking. I think this is pretty well covered in state law as it is.
There was an incident last year at Kennesaw State University. Some of the functions for conducting elections in the state of Georgia were farmed out to KSU and their Election Center, and there was a data breach there. That was very big in the news. What they didn’t say in the news at the time was that [it was] a security researcher who found a vulnerability and reported it ethically. As it turns out, the researcher in question was not even targeting KSU election systems, but merely found inappropriate personal information via a Google search, and then tried to get authorities to act quickly to remove it. This person, as we found out later, was investigated by the FBI and they came up clean. [The FBI] didn’t have anything to charge them with, so they left.
The state feels very embarrassed by this, and the attorney general’s office has asked for a bill that goes above and beyond the existing statutes that we have against computer crime. That’s where Senate Bill 315 came from. To use the language that the attorney general’s office used, they want to build it to criminalize so-called “poking around.” Basically, if you’re looking for vulnerabilities in a non-destructive way, even if you’re ethically reporting them—especially if you’re ethically reporting them—suddenly you’re a criminal if this bill passes into law.
David: I’ve worked in Atlanta cyber security for about 13 years and it’s a very tight-knit community. People from one company will go to another company, or a lot of the founders from one company will end up founding another company. A lot of them started from incubators and think tanks at our university system here—a lot of them at Georgia Institute of Technology. So if you have a chilling effect on one founder or one person who is interested in this kind of topic it can really stifle an entire industry and the whole chain of people creating all these other organizations.
Other than security researchers, who else needs to be concerned about this bill?
Scott: The other issue with Senate Bill 315 is that it’s so broadly written that it could bring in terms of service [enforcement]. Terms of service come from a private company—for instance, your cable or Internet provider has terms of service. The bill is so broadly written that a violation of terms of service could possibly be construed as a criminal violation, and that would be an improper delegation of powers.
David: S.B. 315 uses the term “unauthorized access,” which is a very murky term. If you’re trying to go through all the proper channels in advance and get authorization for something, it’s not always clear who actually has the authority to give that authorization. If it’s a website and you’re testing some part of a website’s security, you might think it’s the website administrator, but often it’s not. Often it’s their IT dev ops team or the tech ops team or something else. You may even get permission from one person and think you’re in the clear, and the next thing you know they say that’s not the correct authorization. With the broadness of the way this bill is written, there are way too many circumstances where somebody could be in violation of the law just performing their daily duties.
What is your game plan right now for fighting this bill?
Scott: It was voted on by the Senate, so now it goes on to the House and it will be heard in committee. The game plan right now would be to line up support to have a good showing at the House committee meeting. What we need in addition to ordinary people who do technology every day is some C-level people—CEOs, CIOs, CFOs, CTOs, CISOs, etc.
Electronic Frontiers Georgia participates in the Electronic Frontier Alliance. From that perspective, are there any notable differences between legislative-based organizing and, say, generally raising awareness of digital rights locally?
Scott: As far as legislative versus non-legislative organizing: Electronic Frontiers Georgia is also very interested in raising general awareness and teaching basic concepts, but I’m finding that it’s really hard to do both. We’re in legislative mode while the legislature is in session, which is roughly January 1st through about April 1st. After the legislative season is over we pivot back to educational and social mode. It’s good to do both, but it can be very difficult to do both at the same time. Groups that are actively doing activism at the state level shouldn’t beat themselves up if they’re not able to keep the same educational schedule up during the busy legislative season.
Electronic Frontiers Georgia has started working with other community groups in the area on the S.B. 315 fight. What advice would you give to grassroots groups who want to work more collaboratively with each other but have never done so before?
Scott: What I’m finding is that there are a lot of groups in the area but a lot of them are siloed, which is to say that they essentially keep to themselves and don’t mix with the other groups very frequently. They’re focused on their main core interest, and they just probably haven’t considered some of the issues like S.B. 315. It’s a challenge to bring disparate groups together, but I’m trying to talk to them. For example, I’m giving a talk on S.B. 315 to DC404, which is the local DEFCON group—an information security group.
We’re also trying to invite in other groups that are not necessarily technology-focused that I think would be interested in this particular fight if they just understood it better. One of the real struggles with S.B. 315 is trying to convince people who don’t work in technology that this is something they should care about. With news of data breaches every day, how do you explain to somebody that this is actually going to make security worse rather than make it better? That requires a lot of explaining. Some of these groups are looking for speakers and content, and that’s an opportunity for us to step in and fill that, and maybe explain our position to a better degree.
This interview has been lightly edited for length and readability. Additional information about the KSU breach was added after the original interview.
We need to talk about national security secrecy. Right now, there are two memos on everyone’s mind, each with its own version of reality. But the memos are just one piece. How the memos came to be—and why they continue to roil the waters in Congress—is more important.
On January 19, staff for Representative Devin Nunes (R-CA) wrote a classified memo alleging that the FBI and DOJ committed surveillance abuses in their applications for, and renewal of, a surveillance order against former Trump administration advisor Carter Page. Allegedly, the FBI and DOJ’s surveillance application included biased, politically funded information.
The House Permanent Select Committee on Intelligence, on which Rep. Nunes serves as chairman, later voted to release the memo. What the memo meant, however, depended on who was talking. Some Republican House members took the memo as fact, claiming it showed “abuse” and efforts to “undermine our country.” But Rep. Adam Schiff (D-CA)—who serves as Ranking Member on the House Permanent Select Committee on Intelligence, across from Nunes—called the memo “profoundly misleading” and, in an opinion for The Washington Post, said it “cherry-picks facts.”
Even the FBI entered the debate, slamming the memo and saying the agency had “grave concerns about material omissions of fact that fundamentally impact the memo's accuracy." And Assistant Attorney General Stephen Boyd of the DOJ said releasing the memo without review would be “extraordinarily reckless.” Finally, the president said the memo “totally vindicates” him from special counsel Robert Mueller’s investigation into his administration.
So a lawmaker made serious charges about surveillance abuses and corruption at the highest levels, and the rest of Congress and the public were ensnared in a guessing game: Could they trust Devin Nunes and what he says? Is the memo he wrote, and the allegations in it, just smoke or is there fire? Unfortunately, the information needed to evaluate his claims is hidden within multiple, nested layers of secrecy.
The secrecy starts with surveillance applications and secret court opinions, which are protected by classification that requires proper security clearance. Only a handful of lawmakers can read the materials, but even they can’t openly discuss them in public. They could write a report, but the FBI and Justice Department would ask to redact the report. After redactions, the report would be subject to a committee vote for release. If the report is cleared by committee, it ordinarily requires the president’s approval.
At any point in the process, this information could have been mislabeled, misidentified, embellished, or obscured, and we’d have almost no way of knowing.
It’s time to talk about FISA again, and the problems with its multi-layered secrecy regime.
We’re going to talk about a surveillance law that, when passed, installed secrecy both in a court system and in Congress, barring the public and their representatives from accessing important information. When that information is partially revealed, it’s near impossible for the public to trust it.

The Foreign Intelligence Surveillance Act and Its Regime of Secrecy
Passed in 1978, the Foreign Intelligence Surveillance Act (FISA) dictates how the government conducts physical and electronic surveillance for national security purposes against “foreign powers” and “agents of foreign powers.” FISA allows surveillance against “U.S. persons”—Americans and others in the U.S.—so long as the agency doing the surveillance demonstrates probable cause that the U.S. person is engaged in terrorism, espionage, or other activities on behalf of a foreign power.
Typically when law enforcement conducts a search, the Fourth Amendment requires that they get a search warrant approved by a neutral magistrate, a judge assigned to hear warrant applications. Under FISA, surveillance orders go through a slightly different review. The statute created an entirely separate court venue filled with 11 judges designated to review FISA surveillance orders. These judges make up the Foreign Intelligence Surveillance Court (FISC).
Similar to how courts review standard search warrants, FISC judges review FISA surveillance applications out of public view. Judges typically hear arguments from the government and no one else, court hearings are not public, and the FISA orders themselves are kept secret.
(Notably, this warrant-like review does not happen under Section 702 of FISA, which the NSA uses to collect billions of communications without a warrant, including Americans’ communications. Under Section 702, which you can read about here, FISC judges do not review individual targets of surveillance and instead sign off on programmatic surveillance policies.)
In the FISC, secrecy in each step is heightened. The court’s opinions and any transcript or record of the proceedings are automatically classified. Even the court’s physical location is constructed to be “the nation’s most secure courtroom,” with reinforced concrete and hand scanners to keep unauthorized people out.
This secrecy is hard to unravel after the fact. When recently asked by Rep. Nunes for more information about the renewed FISA surveillance warrant on Carter Page, Rosemary Collyer, the presiding judge of the FISC wrote:
“As you know, any such transcripts would be classified. It may also be helpful for me to observe that, in a typical process of considering an application, we make no systematic record of questions we ask or responses the government gives.”
Although surveillance conducted for run-of-the-mill law enforcement is often shadowy, the FISA process is far more shielded from public view. For example, standard search warrants are used to gather evidence for later prosecutions that are by default public. That means at some point the government has to face—and knows it has to face—a defense attorney’s efforts to question the evidence gathered from the search warrant. This is known as a “motion to suppress,” and with typical search warrants, these motions are filed in a public court. When that court hears a motion to suppress, it usually issues an order discussing why the surveillance violated—or didn’t violate—the law. This is how our legal system is intended to function. Lawyers and the public actually learn what the law is through this process, because in our system it is the duty of courts to “say what the law is.” For that reason, secret law is a perversion of our system.
Moreover, the public disclosure of law enforcement search warrants serves important ends outside of any particular legal challenge. For one, they let the public know what police are doing, both in their name and with their tax dollars. Second, they allow for greater accountability when police overstep their authority or otherwise misbehave.
FISC proceedings routinely fail this test.
FISA orders are for foreign intelligence purposes, so the surveillance is rarely used in a prosecution and rarely challenged in a motion to suppress. Moreover, even if the fruits of FISA surveillance are used in court, criminal defendants and other litigants are deprived of access to this information, so they have little way of knowing if evidence brought against them may have come from an improper FISA order. (FISA provides a mechanism for defendants to request this information, but no defendant has succeeded in doing so in FISA’s 40-year history.) This impedes a defendant’s ability to challenge their prosecution, and it prevents related, public knowledge of these challenges.
But the secrecy in FISA extends much further than the FISC, adding further opaque layers between what intelligence agencies and the court do and what the public sees.

Lacking Congressional Oversight
In practice, congressional oversight of the FISA process and the underlying materials is severely constrained. Although they have security clearances by virtue of their office, many lawmakers are kept far away from classified documents because they do not have cleared staff to assist in processing the information, and their requests are given lower priority than members of the intelligence oversight committees.
Even members of those House and Senate intelligence committees do not always have access to everything. In the case of the Nunes memo, only the “Gang of Eight” congressional leaders and a handful of others out of the 435 members of the House of Representatives and the 100 members of the Senate reportedly had access to the underlying FISA surveillance applications and unredacted FISC opinions.
This problem has restricted members of Congress before. In 2003, when then-Senate intelligence committee vice chairman Jay Rockefeller learned of the NSA’s unconstitutional spying programs under President George W. Bush, he had little ability to fight back. He wrote to then-Vice President Dick Cheney:
“As you know, I am neither a technician nor an attorney. Given the security restrictions associated with this information, and my inability to consult staff or counsel on my own, I feel unable to fully evaluate, much less endorse these activities."
Rockefeller—who knew of the programs—could not speak of them. For everyone else, reading FISA and FISC materials is close to impossible. Even after Congress passed the USA FREEDOM Act in 2015, requiring that significant FISC opinions be released to the public, these opinions are still heavily redacted and tightly guarded, and no FISA application material has ever been revealed to the public.
It’s for these reasons that EFF has long called for Congress to reform how it oversees surveillance activities conducted by the Executive Branch, including by providing all members of Congress with the tools they need to meaningfully understand and challenge activities that are so often veiled in extreme secrecy.

Why This Matters
FISA’s inherent secrecy causes a chain reaction. Because the FISC’s surveillance orders are kept secret, it is hard to know if they are ever improper. Because criminal defendants are kept in the dark about what evidence was used to obtain a FISA order, they cannot meaningfully challenge if the order was wrongly issued.
In Congress, because lawmakers are widely excluded from knowing the FISC’s procedures, efforts to fix the process are scarce. And, as we’ve seen with the Nunes memo, because so few lawmakers can access FISA materials, if one lawmaker uses that access to make extraordinary claims, trying to prove or refute those claims is mostly futile.
Plainly, outsiders do not know who is telling the truth. Because the public cannot read the underlying FISA materials that the memo is based on, they can’t accurately separate fact from fiction. They cannot see the FISC’s written approval for the order. They cannot see the order itself. And they cannot see the materials that went into the surveillance application.
According to reports, the majority of Congress is in the exact same position. They have not been able to see the FISC’s written approval for the order; they cannot see the order itself. And they cannot see the materials that went into the surveillance application.
Rep. Adam Schiff, a member of the Gang of Eight, has tried to refute the Nunes memo, relying on the classified FISA order and surveillance application to write a sort of counter-memo. But Schiff’s counter-memo was originally blocked by the Trump administration, with a lawyer for the president explaining that it “contains numerous properly classified and especially sensitive passages.”
What is sensitive about those passages, we don’t know. Why they are classified, we don’t know. What they could clear up, we don’t know. And we can’t assess the White House’s claim that this counter-memo is too sensitive to be released, even though it approved release of the Nunes memo.
On February 24, the House Intelligence Committee ignored the White House’s wishes and released Rep. Schiff’s counter-memo. The memo offered rebuttals to many of the allegations in the original Nunes memo, but it included far more redactions, leaving the public to, yet again, guess at the full truth.
And that’s the problem with FISA. Because of near airtight classification for everything that occurs in the FISC—and a corresponding congressional inaccessibility to that classified information—it is exceedingly difficult to know when we are being told the truth. A single member of the Gang of Eight could, at any time, present information to the public as truth, with few opportunities for others to rebut or verify those claims.
These truths should not be held at the mercy of classification, and they should not be a matter of security clearances, committee votes, and personal accusations. These problems are exacerbated by Congress’ systemic failures to assert its constitutional oversight role. FISA prevents the public from knowing much of what its own government does in national security investigations, and it prevents much of Congress from being able to stop single bad actors from misrepresenting classified material.
EFF will continue to fight for governmental transparency. It is one of the strongest vehicles we have to ensure that our government is protecting our rights, and that our government’s members are telling the truth.
Like many cities around the country, San Francisco is considering an investment in community broadband infrastructure: high-speed fiber that would make Internet access cheaper and better for city residents. Community broadband can help alleviate a number of issues with Internet access that we see all over America today. Many Americans have no choice of provider for high-speed Internet, Congress eliminated user privacy protections in 2017, and the FCC decided to roll back net neutrality protections in December.
This week, San Francisco published the recommendations of a group of experts, including EFF’s Kit Walsh, regarding how to protect the privacy and speech of those using community broadband.
The Blue Ribbon Panel on Municipal Fiber released its third report, which tackles competition, security, privacy, net neutrality, and more. It recommends that San Francisco’s community broadband require net neutrality and privacy protections: any ISP looking to use the city’s infrastructure would have to adhere to certain standards. The model of community broadband that EFF favors is sometimes called “dark fiber” or “open access.” In this model, the government invests in fiber infrastructure, then opens it up for private companies to compete as your ISP. This means the big incumbent ISPs can no longer block new competitors from offering you Internet service. San Francisco is pursuing the “open access” option, and is quite far along in its process.
The “open access” model is preferable to one in which the government itself acts as the ISP, because of the civil liberties risks posed by a government acting as your conduit to information.
Of course, private ISPs can also abuse your privacy and restrict your opportunities to speak and learn online.
To prevent such harms, the expert panel explained how the city could best operate its network so that competition, as well as legal requirements, would prevent ISPs from violating net neutrality or the privacy of residents.
That would include, as was found in the 2015 Open Internet Order recently repealed by the FCC, a ban on blocking of sites, content, or applications; a ban on throttling sites, content, or applications; and a ban on paid prioritization, where ISPs favor themselves or companies who have paid them by giving their content better treatment.
The report also recommends requiring a number of consumer protections that Congress prevented from ever being enacted. If an ISP wants to sell or show a customer’s personal information to anyone, the customer would have to give permission first. Even the use of data that doesn’t identify someone would require permission. Both of these would have to be “opt-in,” meaning it would be assumed that there was no consent to use the data unless the customer granted it. (“Opt-out” would mean that using customer data is assumed to be fine unless the customer figures out how to say no.)
Furthermore, the goal is to build infrastructure that connects every home and business to a fiber optic network, guaranteeing everyone in the city access to fast, reliable Internet. And while the actual lines will be owned by the city, it will be an “open-access” model—that is, space on the city-owned lines will be leased to private companies, creating competition and choice.
The report also recommends that San Francisco require ISPs to protect privacy when faced with legal challenges or demands from government agencies. It recommends San Francisco require ISPs using its network do a number of things (e.g., give up the right to look at customer communications, give up the right to consent to searches of communications, and swear to—if not prohibited by law—tell customers when they’re being asked to hand over information) to help protect the civil liberties and privacy of users.
With all of these things combined, San Francisco’s community broadband looks to be doing as much as possible to provide choices while also ensuring that all their options lead to safe and secure connection to a free and open Internet. That’s something we can all work towards in our communities.
Earlier this month, Let's Encrypt (the free, automated, open Certificate Authority EFF helped launch two years ago) passed a huge milestone: issuing over 50 million active certificates. And that number is just going to keep growing, because in a few weeks Let's Encrypt will also start issuing “wildcard” certificates—a feature many system administrators have been asking for.

What's A Wildcard Certificate?
In order to validate an HTTPS certificate, a user’s browser checks to make sure that the domain name of the website is actually listed in the certificate. For example, a certificate from www.eff.org has to actually list www.eff.org as a valid domain for that certificate. Certificates can also list multiple domains (e.g., www.eff.org, ssd.eff.org, sec.eff.org, etc.) if the owner just wants to use one certificate for all of her domains. A wildcard certificate is just a certificate that says “I'm valid for all of the subdomains in this domain” instead of explicitly listing them all off. (In the certificate, this is indicated by a wildcard character, an asterisk. So if you examine the certificate for eff.org today, it will say it's valid for *.eff.org.) That way, a system administrator can get a certificate for their entire domain, and use it on new subdomains they hadn't even thought of when they got the certificate.

In order to issue wildcard certificates, Let's Encrypt is going to require users to prove their control over a domain by using a challenge based on DNS, the domain name system that translates domain names like www.eff.org into IP addresses. From the perspective of a Certificate Authority (CA) like Let's Encrypt, there's no better way to prove that you control a domain than by modifying its DNS records, as controlling the domain is the very essence of DNS.

But one of the key ideas behind Let's Encrypt is that getting a certificate should be an automatic process. In order to be automatic, though, the software that requests the certificate will also need to be able to modify the DNS records for that domain. In order to modify the DNS records, that software will also need to have access to the credentials for the DNS service (e.g. the login and password, or a cryptographic token), and those credentials will have to be stored wherever the automation takes place.
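The name-matching rule described at the top of this section can be sketched in a few lines. This is a hypothetical helper for illustration, not real browser code; actual TLS stacks implement RFC 6125 matching with more edge cases. The key property is that a wildcard covers exactly one leftmost label, so *.eff.org matches ssd.eff.org but not a.ssd.eff.org or the bare eff.org.

```python
# Illustrative sketch of certificate name matching, including wildcards.
# Not real browser code; real implementations follow RFC 6125.

def hostname_matches(pattern: str, hostname: str) -> bool:
    """Return True if hostname matches a certificate name or wildcard."""
    pattern_labels = pattern.lower().split(".")
    host_labels = hostname.lower().split(".")
    if pattern_labels[0] == "*":
        # A wildcard covers exactly one leftmost label.
        return (
            len(host_labels) == len(pattern_labels)
            and host_labels[1:] == pattern_labels[1:]
        )
    return pattern_labels == host_labels

def cert_valid_for(cert_names: list, hostname: str) -> bool:
    """Return True if any name listed in the certificate covers hostname."""
    return any(hostname_matches(name, hostname) for name in cert_names)
```

With this rule, a certificate listing *.eff.org covers any new subdomain the administrator creates later, which is exactly why wildcard certificates are so convenient for automation.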
In many cases, this means that if the machine handling the process gets compromised, so will the DNS credentials, and this is where the real danger lies. In the rest of this post, we'll take a deep dive into the components involved in that process, and what the options are for making it more secure.

How Does the DNS Challenge Work?
At a high level, the DNS challenge works like all the other automatic challenges that are part of the ACME protocol—the protocol that a Certificate Authority (CA) like Let's Encrypt and client software like Certbot use to communicate about what certificate a server is requesting, and how the server should prove ownership of the corresponding domain name.

In the DNS challenge, the user requests a certificate from a CA by using ACME client software like Certbot that supports the DNS challenge type. When the client requests a certificate, the CA asks the client to prove ownership over the domain by adding a specific TXT record to its DNS zone. More specifically, the CA sends a unique random token to the ACME client, and whoever has control over the domain is expected to put this TXT record into its DNS zone, in the predefined record named "_acme-challenge" under the actual domain the user is trying to prove ownership of. As an example, if you were trying to validate the domain for *.eff.org, the validation subdomain would be "_acme-challenge.eff.org."

When the token value is added to the DNS zone, the client tells the CA to proceed with validating the challenge, after which the CA will do a DNS query towards the authoritative servers for the domain. If the authoritative DNS servers reply with a DNS record that contains the correct challenge token, ownership over the domain is proven and the certificate issuance process can continue.

DNS Controls Digital Identity
What makes a DNS zone compromise so dangerous is that DNS is what users’ browsers rely on to know what IP address they should contact when trying to reach your domain. This applies to every service that uses a resolvable name under your domain, from email to web services. When DNS is compromised, a malicious attacker can easily intercept all the connections directed toward your email or other protected service, terminate the TLS encryption (since they can now prove ownership over the domain and get their own valid certificates for it), read the plaintext data, and then re-encrypt the data and pass the connection along to your server. For most people, this would be very hard to detect.

Separate and Limited Privileges
Strictly speaking, in order for the ACME client to handle updates in an automated fashion, the client only needs to have access to credentials that can update the TXT records for "_acme-challenge" subdomains. Unfortunately, most DNS software and DNS service providers do not offer granular access controls that allow for limiting these privileges, or simply do not provide an API to handle automating this outside of basic DNS zone updates or transfers. This leaves the possible automation methods either unusable or insecure.

A simple trick can help maneuver past these kinds of limitations: using the CNAME record. CNAME records essentially act as links to another DNS record. Let's Encrypt follows the chain of CNAME records and will resolve the challenge validation token from the last record in the chain.

Ways to Mitigate the Issue
Even with CNAME records, the underlying issue remains: the ACME client still needs access to credentials that allow it to modify some DNS record. There are different ways to mitigate this underlying issue, with varying levels of complexity and security implications in case of a compromise. In the following sections, this post will introduce some of these methods while trying to explain the possible impact if the credentials get compromised. With one exception, all of them make use of CNAME records.

Only Allow Updates to TXT Records
The first method is to create a set of credentials with privileges that only allow updating of TXT records. In the case of a compromise, this method limits the fallout to the attacker being able to issue certificates for all domains within the DNS zone (since they could use the DNS credentials to get their own certificates), as well as interrupting mail delivery. The impact to mail delivery stems from mail-specific TXT records, namely SPF, DKIM, its extension ADSP, and DMARC. A compromise of these would also make it easy to deliver phishing emails impersonating a sender from the compromised domain in question.

Use a "Throwaway" Validation Domain
The second method is to manually create CNAME records for the "_acme-challenge" subdomain and point them towards a validation domain that would reside in a zone controlled by a different set of credentials. For example, if you want to get a certificate to cover yourdomain.tld and www.yourdomain.tld, you'd have to create two CNAME records—"_acme-challenge.yourdomain.tld" and "_acme-challenge.www.yourdomain.tld"—and point both of them to an external domain for the validation. The domain used for the challenge validation should be in an external DNS zone or in a subdelegate DNS zone that has its own set of management credentials. (A subdelegate DNS zone is defined using NS records, and it effectively delegates complete control over a part of the zone to an external authority.)

The impact of compromise for this method is rather limited. Since the actual stored credentials are for an external DNS zone, an attacker who gets the credentials would only gain the ability to issue certificates for the domains pointing to records in that zone. However, figuring out which domains actually do point there is trivial: the attacker would just have to read Certificate Transparency logs and check if domains in those certificates have a magic subdomain pointing to the compromised DNS zone.

Limited DNS Zone Access
If your DNS software or provider allows for creating permissions tied to a subdomain, this could help you mitigate the whole issue. Unfortunately, at the time of publication the only provider we have found that allows this is Microsoft Azure DNS. Dyn supposedly also has granular privileges, but we were not able to find a lower level of privileges in their service besides “Update records,” which still leaves the zone completely vulnerable. Route53 and possibly others allow their users to create a subdelegate zone and a new set of user credentials, point NS records towards the new zone, and point the "_acme-challenge" validation subdomains to it using CNAME records. It’s a lot of work to do the privilege separation correctly using this method, as one would need to go through all of these steps for each domain they would like to use DNS challenges for.

Use ACME-DNS
As a disclaimer, the software discussed below was written by the author of this post, and it's used here as an example of the functionality needed to handle credentials for DNS challenge automation in a secure fashion. The final method is a piece of software called ACME-DNS, written to combat this exact issue, and it's able to mitigate the issue completely. One downside is that it adds one more piece of infrastructure to maintain, as well as the requirement to have the DNS port (53) open to the public internet. ACME-DNS acts as a simple DNS server with a limited HTTP API. The API only allows updating of TXT records for automatically generated random subdomains. There are no methods to request lost credentials, or to update or add other records. It provides two endpoints:
- /register – This endpoint generates a new subdomain for you to use, accompanied by a username and password. As an optional parameter, the register endpoint takes a list of CIDR ranges to whitelist updates from.
- /update – This endpoint is used to update the actual challenge token to the server.
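As a rough sketch, a client could drive these two endpoints over plain HTTPS. The instance URL below is hypothetical, and the field names (username, password, subdomain, and the X-Api-User/X-Api-Key headers) reflect our reading of the project's documentation, so check the repository for the current API before relying on them:

```python
import json
import urllib.request

# Hypothetical ACME-DNS instance URL -- adjust for your deployment.
ACME_DNS_URL = "https://auth.example.org"


def register(allow_from=None):
    """POST /register: obtain a random subdomain plus API credentials.

    `allow_from` is the optional list of CIDR ranges to whitelist
    updates from; the response contains fields such as `username`,
    `password`, `subdomain`, and the full validation domain.
    """
    payload = {"allowfrom": allow_from} if allow_from else {}
    req = urllib.request.Request(
        ACME_DNS_URL + "/register",
        data=json.dumps(payload).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)


def build_update_request(credentials, challenge_token):
    """Build the headers and JSON body for POST /update.

    `credentials` is the dict returned by register(); the challenge
    token is the TXT value handed out by the ACME server during the
    DNS-01 challenge.
    """
    headers = {
        "X-Api-User": credentials["username"],
        "X-Api-Key": credentials["password"],
        "Content-Type": "application/json",
    }
    body = json.dumps(
        {"subdomain": credentials["subdomain"], "txt": challenge_token}
    )
    return headers, body
```

In practice you would typically use an existing ACME client hook for ACME-DNS rather than hand-rolling the HTTP calls; the sketch is only meant to show how little the locally stored credentials can do—update one TXT record and nothing else.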
In order to use ACME-DNS, you first have to create A/AAAA records for it, and then point NS records towards it to delegate a subdomain to it. After that, you simply create a new set of credentials via the /register endpoint, and point the CNAME record from the "_acme-challenge" validation subdomain of the originating zone towards the newly generated subdomain. The only credentials saved locally would be the ones for ACME-DNS, and they are only good for updating the exact TXT records for the validation subdomains of the domains on the box. This effectively limits the impact of a possible compromise to the attacker being able to issue certificates for these domains. For more information about ACME-DNS, visit https://github.com/joohoi/acme-dns/.

Conclusion
To alleviate the issues with ACME DNS challenge validation, proposals like assisted-DNS have been discussed in IETF’s ACME working group, but they currently remain unresolved. Since the only way to limit exposure from a compromise is to restrict the DNS zone credentials to changing specific TXT records only, the current possibilities for securely implementing automation for DNS validation are slim. The only sustainable option would be to get DNS software and service providers either to implement methods for creating more fine-grained zone credentials or to provide a completely new type of credentials for this exact use case.
One of the most fundamental aspects of patent law is that patents should only be awarded for new inventions. That is, not only does someone have to invent something new to them in order to receive a patent, it must also be new to the world. If someone independently comes up with an idea, it doesn’t mean that person should get a patent if someone else already came up with the same idea and told the public.
There’s good reason for this: patents are an artificial restraint on trade. They work to increase costs (the patent owner is rewarded with higher prices) and can impede follow-on innovation. Policy makers generally try to justify what would otherwise be considered a monopoly through the argument that without patents, inventors may never have invested in research or might not want to make their inventions public. Thus, the story goes, we should give people limited monopolies in the hopes that overall, we end up with more innovation (whether this is actually true, particularly for software, is debatable).
A U.S. Court of Appeals for the Federal Circuit rule, however, upends the patent bargain and allows a second-comer—someone who wasn’t the first inventor—to get a patent under a particular, albeit fairly limited, circumstance. A new petition challenges this rule, and EFF has filed an amicus brief in support of undoing the Federal Circuit’s misguided rule.
The rule is based on highly technical details of the Patent Act, which you can read about in our brief along with those of Ariosa (the patent challenger) and a group of law professors (not yet available). Our brief argues that the Federal Circuit rule is an incorrect understanding of the law. We ask the Federal Circuit to rehear the issue with the full court, and reverse its current rule.
While the Federal Circuit rule is fairly limited and doesn’t arise in many situations, we have significant concerns about the policy it seems to espouse. Contrary to decades of Supreme Court precedent, the rule allows, under certain circumstances, someone to get a patent on something that had already been disclosed to the public. We believe that is always bad policy.
The House of Representatives is about to vote on a bill that would force online platforms to censor their users. The Allow States and Victims to Fight Online Sex Trafficking Act (FOSTA, H.R. 1865) might sound noble, but it would do nothing to stop sex traffickers. What it would do is force online platforms to police their users’ speech more forcefully than ever before, silencing legitimate voices in the process.
Back in December, we said that while FOSTA was a very dangerous bill, its impact on online spaces would not be as broad as the Senate bill, the Stop Enabling Sex Traffickers Act (SESTA, S. 1693). That’s about to change.
The House Rules Committee is about to approve a new version of FOSTA [.pdf] that incorporates most of the dangerous components of SESTA. This new Frankenstein’s Monster of a bill would be a disaster for Internet intermediaries, marginalized communities, and even trafficking victims themselves.
If you don’t want Congress to undermine the online communities we all rely on, please take a moment to call your representative and urge them to oppose FOSTA.
FOSTA would undermine Section 230, the law protecting online platforms from some types of liability for their users’ speech. As we’ve explained before, the modern Internet is only possible thanks to a strong Section 230. Without Section 230, most of the online platforms we use would never have been formed—the risk of liability for their users’ actions would have simply been too high.
Section 230 strikes an important balance for when online platforms can be held liable for their users’ speech. Contrary to FOSTA supporters’ claims, Section 230 does nothing to protect platforms that break federal criminal law. In particular, if an Internet company knowingly engages in the advertising of sex trafficking, the U.S. Department of Justice can and should prosecute it. Additionally, Internet companies are not immune from civil liability for user-generated content if plaintiffs can show that a company had a direct hand in creating the illegal content.
The new version of FOSTA would destroy that careful balance, opening platforms to increased criminal and civil liability at both the federal and state levels. This includes a new federal sex trafficking crime targeted at web platforms (in addition to 18 U.S.C. § 1591)—but which would not require a platform to have knowledge that people are using it for sex trafficking purposes. This also includes exceptions to Section 230 for state law criminal prosecutions against online platforms, as well as civil claims under federal law and civil enforcement of federal law by state attorneys general.
Perhaps most disturbingly, the new version of FOSTA would make the changes to Section 230 apply retroactively: a platform could be prosecuted for failing to comply with the law before it was even passed.

FOSTA Would Chill Innovation
Together, these measures would chill innovation and competition among Internet companies. Large companies like Google and Facebook may have the budgets to survive the massive increase in litigation and liability that FOSTA would bring. They may also have the budgets to implement a mix of automated filters and human censors to comply with the law. Small startups don’t. And with the increased risk of litigation, it would be difficult for new startups ever to find the funding they need to compete with Google.
Today’s large Internet companies would not have grown to prominence without the protections of Section 230. FOSTA would pull up the ladder that allowed those companies to grow, making it very difficult for newcomers ever to compete with them.

FOSTA Would Censor Victims
More dangerous still is the impact that FOSTA would have on online speech. Facing the threat of extreme criminal and civil penalties, web platforms large and small would have little choice but to silence legitimate voices. Supporters of SESTA and FOSTA pretend that it’s easy to distinguish online postings related to sex trafficking from ones that aren’t. It’s not—and it’s impossible at the scale needed to police a site as large as Facebook or Reddit. The problem is compounded by FOSTA’s expansion of federal prostitution law. Platforms would have to take extreme measures to remove a wide range of postings, especially those related to sex.
Some supporters of these bills have argued that platforms can rely on automated filters in order to distinguish sex trafficking ads from legitimate content. That argument is laughable. It’s difficult for a human to distinguish between a legitimate post and one that supports sex trafficking; a computer certainly could not do it with anything approaching 100% accuracy. Instead, platforms would have to calibrate their filters to over-censor. When web platforms rely too heavily on automated filters, it often puts marginalized voices at a disadvantage.
Most tragically of all, the first people censored would likely be sex trafficking victims themselves. The very same words and phrases that a filter would use to attempt to delete sex trafficking content would also be used by victims of trafficking trying to get help or share their experiences.
There are many, many stories of traffickers being caught by law enforcement thanks to clues that police officers and others found on online platforms. Congress should think long and hard before dismantling the very tools that have proven most effective in fighting trafficking.

FOSTA Is the Wrong Approach
There is no amendment to FOSTA that would make it effective at fighting online trafficking while respecting the civil liberties of everyone online. That’s because the problem with FOSTA and SESTA isn’t a single provision or two; it’s the whole approach.
Creating more legal tools to go after online platforms would not punish sex traffickers. It would punish all of us, wrecking the safe online communities that we use every day. And in the process, it would also undermine the tools that have proven most effective at putting traffickers in prison. FOSTA is not the right solution, and no trimming around the edges will make it the right solution.
If you care about protecting the safety of our online communities—if you care about protecting everyone’s right to speak online, even about sensitive topics—we urge you to call your representative today and tell them to reject FOSTA.
Today, the FCC’s so-called “Restoring Internet Freedom Order,” which repealed the net neutrality protections the FCC had previously created with the 2015 Open Internet Order, has been officially published. That means the clock has started ticking on all the ways we can fight back.
While the rule is published today, it doesn’t take effect quite yet. ISPs can’t start blocking, throttling, or engaging in paid prioritization for a little while. So while we still have the protections of the 2015 Open Internet Order and we finally have a published version of the “Restoring Internet Freedom Order,” it’s time to act.
First, under the Congressional Review Act (CRA), Congress can reverse a change in regulation with a simple majority vote. That would bring the 2015 Open Internet Order back into effect. Congress has 60 working days—starting from when the rule is published in the official record—to do this. So those 60 days start now.
The Senate bill has 50 supporters, only one away from the majority it needs to pass. The House of Representatives is a bit further away. By our count, 114 representatives have made public commitments in support of voting for a CRA action. Now that time is ticking down for the vote, tell Congress to save the existing net neutrality rules.
Second, it is now unambiguous that the lawsuits of 22 states, public interest groups, Mozilla, and the Internet Association can begin. The FCC decision said lawsuits had to wait until ten days after the official publication, though there was some question about whether federal law said something else. Some suits have already been filed, but with the FCC’s 10-day counter now started, it’s clear that the lawsuits can proceed.
And, of course, states and other local governments continue to move forward on their own measures to protect net neutrality. 26 state legislatures are considering net neutrality legislation and five governors have issued executive orders on net neutrality. EFF has some ideas on how state law can stand up to the FCC order. Community broadband can also ensure that net neutrality principles are enacted on a local level. For example, San Francisco is currently looking for proposals to build an open-access network that would require net neutrality guarantees from any ISP looking to offer services over the city-owned infrastructure.
So while the FCC’s vote in December was in direct contradiction to the wishes of the majority of Americans, the publishing of that order means that action can really start to be taken.
Every three years, EFF's lawyers spend weeks huddling in their offices, composing carefully worded pleas we hope will persuade the Copyright Office and the Librarian of Congress to grant Americans a modest, temporary permission to use our own property in ways that are already legal.
Yeah, we think that's weird, too. But it's been that way ever since 1998, when Congress passed the Digital Millennium Copyright Act, whose Section 1201 established a ban on tampering with "access controls for copyrighted works" (also known as "Digital Rights Management" or "DRM"). It doesn't matter if you want to do something absolutely legitimate, something that there is no law against -- if you have to bypass DRM to do it, it's not allowed.
What's more, if someone wants to provide you with a tool to get around the DRM, they could face up to five years in prison and a $500,000 fine, for a first offense, even if the tool is only ever used to accomplish legal, legitimate ends.
Which brings us back to EFF's lawyers, sweating over their briefs every three years. The US Copyright Office holds proceedings every three years to determine whether it should recommend that the Librarian of Congress grant some limited exemptions to this onerous rule. Every three years, EFF begs for -- and wins -- some of these exemptions, by explaining how something people used to be able to do has been shut down by DMCA 1201 and the DRM it supports.
But you know what we don't get to do? We don't get to ask for the right to break DRM to do things that no one has ever thought of -- at least, that they haven't thought of yet. We don't get to brief the Copyright Office on the harms to companies that haven't been founded yet, the gadgets they haven't designed yet, and the users they haven't attracted yet. Only the past gets a seat at the table: the future isn't welcome.
That's a big problem. Many of the tools and technologies we love today were once transgressive absurdities: mocked for being useless and decried as immoral or even criminal. The absurd transgressors found ways to use existing technologies and products to build new businesses, over the howls of objections from the people who'd come before them.
It's a long and honorable tradition, and without it, we wouldn't have cable TV (reviled as thieves by the broadcasters in their early days); Netflix (called crooks by the Hollywood studios for mailing DVDs around in red envelopes); or iTunes ("Rip, Mix, Burn" was damned as a call to piracy by the record industry).
These businesses exist because they did something that wasn't customary, something rude and disorderly and controversial -- they did things that were legal, but unsanctioned by the businesses they were doing those things to.
And today, as these businesses have reached maturity, the so-called pirates have become admirals. Today, these former disruptors also use DRM and are glad that bypassing their DRM to do something legal is banned (because their shareholders prefer it that way).
Those companies aren't doing themselves any favors, either. Even as Apple was asking the Copyright Office to ban third-party modifications to the iPhone, it was copying these unauthorized innovations and including them in the official versions of its products.
Our Catalog of Missing Devices gives you a sense of what we've lost because DMCA 1201 has given the companies that succeeded last year the right to decide who can compete with them in the years to come.
It's a year that's divisible by three, and that means that EFF is back at the Copyright Office, pleading for the right of the past to go on in the present -- but we can't ask the Copyright Office to protect the future; the DMCA doesn't allow it.
That's why we've sued the US Government to invalidate Section 1201 of the DMCA: Congress made a terrible blunder in 1998 when it created that law, and the effects of that blunder mount with each passing year. We need to correct it -- and the sooner, the better.
In the coming decades, artificial intelligence (AI) and machine learning technologies are going to transform many aspects of our world. Much of this change will be positive; the potential for benefits in areas as diverse as health, transportation and urban planning, art, science, and cross-cultural understanding are enormous. We've already seen things go horribly wrong with simple machine learning systems; but increasingly sophisticated AI will usher in a world that is strange and different from the one we're used to, and there are serious risks if this technology is used for the wrong ends.
Today EFF is co-releasing a report with a number of academic and civil society organizations on the risks from malicious uses of AI and the steps that should be taken to mitigate them in advance.
At EFF, one area of particular concern has been the potential interactions between computer insecurity and AI. At present, computers are inherently insecure, and this makes them a poor platform for deploying important, high-stakes machine learning systems. It's also the case that AI might have implications for computer [in]security that we need to think about carefully in advance. The report looks closely at these questions, as well as the implications of AI for physical and political security. You can read the full document here.
In 1998, Congress passed the Digital Millennium Copyright Act (DMCA), and profoundly changed the relationship of Americans to their property.
Section 1201 of the DMCA bans the bypassing of "access controls" for copyrighted works. Originally, this meant that even though you owned your DVD player, and even though it was legal to bring DVDs home with you from your European holidays, you weren't allowed to change your DVD player so that it would play those out-of-region DVDs. DVDs were copyrighted works, the region-checking code was an access control, and so even though you owned the DVD, and you owned the DVD player, and even though you were allowed to watch the disc, you weren't allowed to modify your DVD player to play your DVD (which you were allowed to watch).
Experts were really worried about this: law professors, technologists and security experts saw that soon we'd have software—that is, copyrighted works—in all kinds of devices, from cars to printer cartridges to voting machines to medical implants to thermostats. If Congress banned tinkering with the software in the things you owned, it would tempt companies to use that software to create "private laws" that took away your rights to use your property in the way you saw fit. For example, it's legal to use third party ink in your HP printer, but once HP changed its printers to reject third-party ink, they could argue that anything you did to change them back was a violation of the DMCA.
Congress's compromise was to order the Library of Congress and the Copyright Office to hold hearings every three years, in which the public would be allowed to complain about ways in which these locks got in the way of their legitimate activities. Corporations weigh in about why their business interests outweigh your freedom to use your property for legitimate ends, and then the regulators deliberate and create some temporary exemptions, giving the public back the right to use their property in legal ways, even if the manufacturers of their property don't like it.
If it sounds weird that you have to ask the Copyright Office for permission to use your property, strap in, we're just getting started.
Here's where it gets weird: DMCA 1201 allows the Copyright Office to grant "use" exemptions, but not "tools" exemptions. That means that if the Copyright Office likes your proposal, they can give you permission to jailbreak your gadgets to make some use (say, install third-party apps on your phone, or record clips from your DVDs to use in film studies classes), but they can't give anyone the right to give you the tool needed to make that use (law professor and EFF board member Pam Samuelson argues that the Copyright Office can go farther than this, at least some of the time, but the Copyright Office disagrees).
Apparently, fans of DMCA 1201 believe that the process for getting permission to use your own stuff should go like this:
1. A corporation sells you a gadget that disallows some activity, or they push a software update to a gadget you already own to take away a feature it used to have;
2. You and your lawyers wait up to three years, then you write to the Copyright Office explaining why you think this is unfair;
3. The corporation that made your gadget tells the Copyright Office that you're a whiny baby who should just shut up and take it;
4. You write back to the Copyright Office to defend your use;
5. Months later, the Library of Congress gives you a limited permission to use your property (maybe);
6. You get a degree in computer science, and subject your gadget to close scrutiny to find a flaw in the manufacturer's programming;
7. Without using code or technical information from anyone else (including other owners of the same gadget) you figure out how to exploit that flaw to let you use your device in the way the government just said you could;
8. Three years later, you do it again.
Now, in practice, that's not how it works. In practice, people who want to use their own property in ways that the Copyright Office approves of just go digging around on offshore websites, looking for software that lets them make that use. (For example, farmers download alternative software for their John Deere tractors from websites they think might be maintained by Ukrainian hackers, though no one is really sure). If that software bricks their device, or steals their personal information, they have no remedy, no warranty, and no one to sue for cheating them.
That's the best case.
But often, the Library of Congress makes it even harder to make the uses they're approving. In 2015, they granted car owners permission to jailbreak their cars in order to repair them—but they didn't give mechanics the right to jailbreak the cars they were fixing. That ruling means that you, personally, can fix your car, provided that 1) you know how to fix a car; and 2) you can personally jailbreak the manufacturer's car firmware (in addition to abiding by the other snares in the final exemption language).
In other cases, the Copyright Office limits the term of the exemption as well as the scope: in the 2015 ruling, the Copyright Office gave security researchers the right to jailbreak systems to find out whether they were secure enough to be trusted, but not industrial systems (whose security is very important and certainly needs to be independently verified by those systems' owners!) and they also delayed the exemption's start for a full year, meaning that security researchers would only get two years to do their jobs before they'd have to go back to the Copyright Office and start all over again.
This is absurd.
Congress crafted the exemptions process to create an escape valve on the powerful tool it was giving to manufacturers with DMCA 1201. But even computer scientists don't hand-whittle their own software tools for every activity: like everyone else, they rely on specialized toolsmiths who make software and hardware that is tested, warranted, and maintained by dedicated groups, companies and individuals. The idea that every device in your home will have software that limits your use, and you can only get those uses back by first begging an administrative agency and then gnawing the necessary implement to make that use out of the lumber of your personal computing environment is purely absurd.
The Copyright Office is in the middle of a new rulemaking, and we've sent in requests for several important exemptions. But we're not kidding ourselves here: as important as it is to get the US government to officially acknowledge that DMCA 1201 locks up legitimate activities, and as important as the exemptions are for protecting end users, without the right to avail yourself of tools, the exemptions don't solve the whole problem.
That's why we're suing the US government to invalidate DMCA 1201. DMCA 1201 wasn't fit for purpose in 1998, and it has shown its age and contradictions more with each passing year.
Eskinder Nega, one of Ethiopia's most prominent online writers, winner of the Golden Pen of Freedom in 2014, the International Press Institute's World Press Freedom Hero for 2017, and PEN International's 2012 Freedom to Write Award, has finally been set free.
Eskinder has been detained in Ethiopian jails since September 2011. He was accused and convicted of violating the country's Anti-Terrorism Proclamation, primarily by virtue of his warnings in online articles that if Ethiopia's government continued on its authoritarian path, it might face an Arab Spring-like revolt.
Ethiopia's leaders refused to listen to Eskinder's message. Instead they decided the solution was to silence its messenger. Now, within the last few months, that refusal to engage with the challenges of democracy has led to the inevitable result. For two years, protests against the government have risen in frequency and size. The Prime Minister, Hailemariam Desalegn, sought to reduce tensions by introducing reforms and releasing political prisoners like Eskinder. Despite thousands of prisoner releases, and the closure of one of the country's more notorious detention facilities, the protests continue. A day after Eskinder's release, Desalegn was forced to resign from his position. A day later, the government declared a new state of emergency.
Even as they came face-to-face with the consequences of suppressing critics like Eskinder, the Ethiopian authorities pushed back against the truth. Eskinder's release was delayed for days after prison officials repeatedly demanded that he sign a confession falsely claiming he was a member of Ginbot 7, an opposition party that is banned as a terrorist organization within Ethiopia.
Eventually, following widespread international and domestic pressure, Eskinder was released without concession.
Eskinder, who was in jail for nearly seven years, joins a world whose politics and society have been transformed since his arrest. His predictions about the troubles Ethiopia would face if it silenced free expression may have come true, but his views were not perfect. He was, and will be again, an online writer, not a prophet. The promise of the Arab Spring that he identified has descended into its own authoritarian crackdowns. The technological tools he used to bypass Ethiopia's censorship and speak to a wider public are now just as often used by dictators to silence them. But that means we need more speakers like Eskinder, not fewer. And those speakers should be carefully listened to, not forced into imprisonment and exile.
The National Academy of Sciences (NAS) released a much-anticipated report yesterday that attempts to influence the encryption debate by proposing a “framework for decisionmakers.” At best, the report is unhelpful. At worst, its framing makes the task of defending encryption harder.
The report conflates the question of whether the government should mandate “exceptional access” to the contents of encrypted communications with the question of how the government could accomplish such a mandate. We wish the report gave as much weight to the benefits of encryption, and to the risks that exceptional access poses to everyone’s civil liberties, as it does to the needs—real and professed—of law enforcement and the intelligence community.
From its outset two years ago, the NAS encryption study was not intended to reach any conclusions about the wisdom of exceptional access, but instead to “provide an authoritative analysis of options and trade-offs.” This would seem to be a fitting task for the National Academy of Sciences, which is a non-profit, non-governmental organization, chartered by Congress to provide “objective, science-based advice on critical issues affecting the nation.” The committee that authored the report included well-respected cryptographers and technologists, lawyers, members of law enforcement, and representatives from the tech industry. It also held two public meetings and solicited input from a range of outside stakeholders, EFF among them.
EFF’s Seth Schoen and Andrew Crocker presented at the committee’s meeting at Stanford University in January 2017. We described what we saw as “three truths” about the encryption debate: First, there is no substitute for “strong” encryption, i.e., encryption without any intentionally included method for any party other than the intended recipient or device holder to access plaintext, such as a mechanism that would allow decryption on demand by the government. Second, an exceptional access mandate will help law enforcement and intelligence investigations in certain cases. Third, “strong” encryption cannot be fully outlawed, given its proliferation, the fact that a large proportion of encryption systems are open-source, and the fact that U.S. law has limited reach on the global stage. We wish the report had made a concerted attempt to grapple with that first truth, instead of confining its analysis to the second and third.
We recognize that the NAS report was undertaken in good faith, but the trouble with the final product is twofold.
First, its framing is hopelessly slanted. Not only does the report studiously avoid taking a position on whether compromising encryption is a good idea, its “options and tradeoffs” are all centered around the stated government need of “ensuring access to plaintext.” To that end, the report examines four possible options: (1) taking no legislative action, (2) providing additional support for government hacking and other workarounds, (3) a legislative mandate that providers provide government access to plaintext, and (4) mandating a particular technical method for providing access to plaintext.
But all of these options, including “no legislative action,” treat government agencies’ stated need for access to plaintext as the only goal worth study, with everything else as a tradeoff. For example, from EFF’s perspective, the adoption of encryption by default is one of the most positive developments in technology policy in recent years because it permits regular people to keep their data confidential from eavesdroppers, thieves, abusers, criminals, and repressive regimes around the world. By contrast, because of its framing, the report discusses these developments purely in terms of criminals “who may unknowingly benefit from default settings” and thereby evade law enforcement.
By approaching the question only as one of how to deliver plaintext to law enforcement, rather than approaching the debate more holistically, the NAS does us a disservice. The question of whether encryption should or shouldn’t be compromised for “exceptional access” should not be treated as one question among several in the encryption debate: it is the question.
Second, although it attempts to recognize the downsides of exceptional access, the report’s discussion of the possible risks to civil liberties is notably brief. In the span of only three pages (out of nearly a hundred), it acknowledges the importance of encryption to supporting values such as privacy and free expression. Unlike the interests of law enforcement, which are represented in every section, the risks to civil liberties posed by exceptional access are confined to a brief stand-alone section and treated as just one more tradeoff.
To emphasize the report’s focus, the civil liberties section ends with the observation that criminals and terrorists use encryption to “take actions that negatively impact the security of law-abiding individuals.” This ignores the possibility that encryption can both enhance civil liberties and preserve individual safety. That’s why, for example, experts on domestic violence argue that smartphone encryption protects victims from their abusers, and that law enforcement should not seek to compromise smartphone encryption in order to prosecute these crimes.
Furthermore, the simple act of mandating that providers break encryption in their products is itself a significant civil liberties concern, quite apart from the privacy and security implications that would result. Specifically, EFF raised concerns that encryption does not just support free expression, it is free expression. Notably absent is any examination of the rights of developers of cryptographic software, particularly given the role played by free and open source software in the encryption ecosystem. It ignores the legal landscape in the United States—one that strongly protects the principle that code (including encryption) is speech, protected by the First Amendment.
The report also underplays the international implications of any U.S. government mandate for U.S.-based providers. Currently, companies resist demands for plaintext from regimes whose respect for the rule of law is dubious, but that will almost certainly change if they accede to similar demands from U.S. agencies. In a massive understatement, the report notes that this could have “global implications for human rights.” We wish that the NAS had given this crucial issue far more emphasis and delved more deeply into the question, for instance, of how Apple could plausibly say no to a Chinese demand to wiretap a Chinese user’s FaceTime conversations while providing that same capacity to the FBI.
In any tech policy debate, expert advice is valuable not only to inform how to implement a particular policy but whether to undertake that policy in the first place. The NAS might believe that as the provider of “objective, science-based advice,” it isn’t equipped to weigh in on this sort of question. We disagree.
EFF and MuckRock have launched a new public records campaign to reveal how much data law enforcement agencies have collected using automated license plate readers (ALPRs) and are sharing with each other.
Over the next few weeks, the two organizations are filing approximately 1,000 public records requests with agencies that have deals with Vigilant Solutions, one of the nation’s largest vendors of ALPR surveillance technology and software services. We’re seeking documentation showing who’s sharing ALPR data with whom. We are also requesting information on how many plates each agency scanned in 2016 and 2017 and how many of those plates were on predetermined “hot lists” of vehicles suspected of being connected to crimes.
You can see the full list of agencies and track the progress of each request through the Street-Level Surveillance: ALPR Campaign page on MuckRock.
As Easy As Adding a Friend on Facebook
“Joining the largest law enforcement LPR sharing network is as easy as adding a friend on your favorite social media platform.”
That’s a direct quote from Vigilant Solutions in its promotional materials for its ALPR technology. Through its LEARN system, Vigilant Solutions has made it possible for government agencies—particularly sheriff’s offices and police departments—to grant 24-7, unrestricted database access to hundreds of other agencies around the country.
ALPRs are camera systems that scan every license plate that passes in order to create enormous databases of where people drive and park their cars both historically and in real time. Collected en masse by ALPRs mounted on roadways and vehicles, this data can reveal sensitive information about people, such as where they work, socialize, worship, shop, sleep at night, and seek medical care or other services. ALPR allows your license plate to be used as a tracking beacon and a way to map your social networks.
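The mechanics described above can be made concrete with a minimal Python sketch of a hot-list check against a single plate read. The plate numbers, locations, and field names here are all hypothetical, and real systems operate at vastly larger scale; the key point the code illustrates is that every scan is retained, whether or not it matches:

```python
from datetime import datetime

# Hypothetical hot list of plates flagged in advance (illustrative values).
HOT_LIST = {"7ABC123", "4XYZ789"}

def process_scan(plate, location, when, hot_list=HOT_LIST):
    """Record a plate read and report whether it matches the hot list.

    Note that the record is stored regardless of the result -- this is
    what turns ALPR databases into a location history of ordinary drivers.
    """
    record = {
        "plate": plate,
        "location": location,
        "time": when.isoformat(),
        "hot_list_hit": plate in hot_list,
    }
    return record

scan = process_scan("7ABC123", "Main St & 5th Ave", datetime(2017, 6, 1, 14, 30))
print(scan["hot_list_hit"])  # True -- but non-matching scans are kept too
```

Even in this toy form, the asymmetry is visible: the hot-list comparison is a single set lookup, while the accumulated records are what enable tracking over time.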
Here’s the question: who is on your local police department’s and sheriff’s office’s ALPR friend lists?
Perhaps you live in a “sanctuary city.” There’s a very real chance local police are sharing ALPR data with Immigration & Customs Enforcement, Customs & Border Protection, or one of their subdivisions.
Perhaps you live thousands of miles from the South. You’d be surprised to learn that scores of small towns in rural Georgia have round-the-clock access to your ALPR data. This includes towns like Meigs, which serves a population of 1,000 and did not even have full-time police officers until last fall.
In 2017, EFF and the Center for Human Rights and Privacy filed records requests with several dozen law enforcement agencies in California. We found that police departments were routinely sharing ALPR data with a wide variety of agencies that may be difficult to justify. Police often shared with the DEA, FBI, and U.S. Marshals—but they also shared with federal agencies with a less clear interest, such as the U.S. Forest Service, the U.S. Department of Veteran Affairs, and the Air Force base at Fort Eustis. California agencies were also sharing with public universities on the East Coast, airports in Tennessee and Texas, and agencies that manage public assistance programs, like food stamps and indigent health care. In some cases, the records indicate the agencies were sharing with private actors.
Meanwhile, most agencies are connected to an additional network called the National Vehicle Locator System (NVLS), which shares sensitive information with more than 500 government agencies, the identities of which have never been publicly disclosed.
Here are the data sharing documents we obtained in 2017, which we are seeking to update with our new series of requests.
- Anaheim Police Department
- Antioch Police Department
- Bakersfield Police Department
- Chino Police Department
- Clovis Police Department
- Elk Grove Police Department
- Fontana Police Department
- Fountain Valley Police Department
- Glendora Police Department
- Hawthorne Police Department
- Irvine Police Department
- Livermore Police Department
- Lodi Police Department
- Long Beach Police Department
- Montebello Police Department
- Orange Police Department
- Palos Verdes Estates Police Department
- Red Bluff Police Department
- Sacramento Police Department
- San Bernardino Police Department
- San Diego Police Department
- San Rafael Police Department
- San Ramon Police Department
- Simi Valley Police Department
- Tulare Police Department
We hope to create a detailed snapshot of the ALPR mass surveillance network linking law enforcement and other government agencies nationwide. Currently, the only entity that has the definitive list is Vigilant Solutions, which, as a private company, is not subject to state or federal public record disclosure laws. So far, the company has not volunteered this information, despite reaping many millions in tax dollars.
Until they do, we’ll keep filing requests.
For more information on ALPRs, visit EFF’s Street-Level Surveillance hub.
Rejecting years of settled precedent, a federal court in New York has ruled [PDF] that you could infringe copyright simply by embedding a tweet in a web page. Even worse, the logic of the ruling applies to all in-line linking, not just embedding tweets. If adopted by other courts, this legally and technically misguided decision would threaten millions of ordinary Internet users with infringement liability.
This case began when Justin Goldman accused online publications, including Breitbart, Time, Yahoo, Vox Media, and the Boston Globe, of copyright infringement for publishing articles that linked to a photo of NFL star Tom Brady. Goldman took the photo, someone else tweeted it, and the news organizations embedded a link to the tweet in their coverage (the photo was newsworthy because it showed Brady in the Hamptons while the Celtics were trying to recruit Kevin Durant). Goldman said those stories infringe his copyright.
Courts have long held that copyright liability rests with the entity that hosts the infringing content—not someone who simply links to it. The linker generally has no idea that it’s infringing, and isn’t ultimately in control of what content the server will provide when a browser contacts it. This “server test,” originally from a 2007 Ninth Circuit case called Perfect 10 v. Amazon, provides a clear and easy-to-administer rule. It has been a foundation of the modern Internet.
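The intuition behind the server test is visible in the markup itself: an embedding page contains only a URL, and the reader’s browser fetches the image bytes directly from the third-party host. A small Python sketch (using a hypothetical embed snippet) extracts the host that would actually serve the image:

```python
from html.parser import HTMLParser
from urllib.parse import urlparse

# Hypothetical embed markup of the kind at issue: the publisher's page
# merely points the reader's browser at an image hosted elsewhere.
EMBED_HTML = '<img src="https://pbs.twimg.com/media/example-photo.jpg">'

class EmbedInspector(HTMLParser):
    """Collect the hosts that would actually serve embedded content."""

    def __init__(self):
        super().__init__()
        self.hosts = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            for name, value in attrs:
                if name == "src":
                    self.hosts.append(urlparse(value).netloc)

inspector = EmbedInspector()
inspector.feed(EMBED_HTML)
# Under the server test, liability follows the host of the image bytes:
print(inspector.hosts)  # ['pbs.twimg.com'] -- not the embedding publisher
```

The embedding publisher’s server never touches the image; it only hands the browser an address, which is why the server test assigns liability to whoever hosts the file.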
Judge Katherine Forrest rejected the Ninth Circuit’s server test, based in part on a surprising approach to the process of embedding. The opinion describes the simple process of embedding a tweet or image—something done every day by millions of ordinary Internet users—as if it were a highly technical process done by “coders.” That process, she concluded, put publishers, not servers, in the driver’s seat:
[W]hen defendants caused the embedded Tweets to appear on their websites, their actions violated plaintiff’s exclusive display right; the fact that the image was hosted on a server owned and operated by an unrelated third party (Twitter) does not shield them from this result.
She also argued that Perfect 10 (which concerned Google’s image search) could be distinguished because in that case the “user made an active choice to click on an image before it was displayed.” But that was not a detail that the Ninth Circuit relied on in reaching its decision. The Ninth Circuit’s rule—which looks at who actually stores and serves the images for display—is far more sensible.
If this ruling is appealed (there would likely need to be further proceedings in the district court first), the Second Circuit will be asked to consider whether to follow Perfect 10 or Judge Forrest’s new rule. We hope that today’s ruling does not stand. If it did, it would threaten the ubiquitous practice of in-line linking that benefits millions of Internet users every day.
Related Cases: Perfect 10 v. Google
Today Google launched a new version of its Chrome browser with what they call an "ad filter"—which means that it sometimes blocks ads but is not an "ad blocker." EFF welcomes the elimination of the worst ad formats. But Google's approach here is a band-aid response to the crisis of trust in advertising that leaves massive user privacy issues unaddressed.
Last year, a new industry organization, the Coalition for Better Ads, published user research investigating ad formats responsible for "bad ad experiences." The Coalition examined 55 ad formats, of which 12 were deemed unacceptable. These included various full page takeovers (prestitial, postitial, rollover), autoplay videos with sound, pop-ups of all types, and ad density of more than 35% on mobile. Google is supposed to check sites for the forbidden formats and give offenders 30 days to reform or have all their ads blocked in Chrome. Censured sites can purge the offending ads and request reexamination.
The Coalition for Better Ads Lacks a Consumer Voice
The Coalition involves giants such as Google, Facebook, and Microsoft, ad trade organizations, and adtech companies and large advertisers. Criteo, a retargeter with a history of contested user privacy practices, is also involved, as is content marketer Taboola. Consumer and digital rights groups are not represented in the Coalition.
This industry membership explains the limited horizon of the group, which ignores the non-format factors that annoy and drive users to install content blockers. While people are alienated by aggressive ad formats, the problem has other dimensions. Whether it’s the use of ads as a vector for malware, the consumption of mobile data plans by bloated ads, or the monitoring of user behavior through tracking technologies, users have a lot of reasons to take action and defend themselves.
But these elements are ignored. Privacy, in particular, figured neither in the tests commissioned by the Coalition, nor in their three published reports that form the basis for the new standards. This is no surprise given that participating companies include the four biggest tracking companies: Google, Facebook, Twitter, and AppNexus.
Stopping the "Biggest Boycott in History"
Some commentators have interpreted ad blocking as the "biggest boycott in history" against the abusive and intrusive nature of online advertising. Now the Coalition aims to slow the adoption of blockers by enacting minimal reforms. Pagefair, an adtech company that monitors adblocker use, estimates 600 million active users of blockers. Some see no ads at all, but most users of the two largest blockers, AdBlock and Adblock Plus, see ads "whitelisted" under the Acceptable Ads program. These companies leverage their position as gatekeepers to the user's eyeballs, obliging Google to buy back access to the "blocked" part of their user base through payments under Acceptable Ads. This is expensive (a German newspaper claims a figure as high as 25 million euros) and is viewed with disapproval by many advertisers and publishers.
Industry actors now understand that adblocking’s momentum is rooted in the industry’s own failures, and the Coalition is a belated response to this. While nominally an exercise in self-regulation, the enforcement of the standards through Chrome is a powerful stick. By eliminating the most obnoxious ads, they hope to slow the growth of independent blockers.
What Difference Will It Make?
Coverage of Chrome's new feature has focused on the impact on publishers, and on doubts about the Internet’s biggest advertising company enforcing ad standards through its dominant browser. Google has sought to mollify publishers by stating that only 1% of sites tested have been found non-compliant, and has heralded the changed behavior of major publishers like the LA Times and Forbes as evidence of success. But if so few sites fall below the Coalition's bar, it seems unlikely to be enough to dissuade users from installing a blocker. Eyeo, the company behind Adblock Plus, has a lot to lose should this strategy be successful. Eyeo argues that Chrome will only "filter" 17% of the 55 ad formats tested, whereas 94% are blocked by Adblock Plus.
User Protection or Monopoly Power?
The marginalization of egregious ad formats is positive, but should we be worried by this display of power by Google? In the past, browser companies such as Opera and Mozilla took the lead in combating nuisances such as pop-ups, which was widely applauded. Those browsers were not active in advertising themselves. The situation is different with Google, the dominant player in the ad and browser markets.
Google exploiting its browser dominance to shape the conditions of the advertising market raises some concerns. It is notable that the ads Google places on videos in YouTube ("instream pre-roll") were not user-tested and are exempted from the prohibition on "auto-play ads with sound." This risk of a conflict of interest distinguishes the Coalition for Better Ads from, for example, Chrome's monitoring of sites associated with malware and related user protection notifications.
There is also the risk that Google may change position with regard to third-party extensions that give users more powerful options. Recent history justifies such concern: Disconnect and Ad Nauseam have been excluded from the Chrome Store for alleged violations of the Store’s rules. (Ironically, Adblock Plus has never experienced this problem.)
Chrome Falls Behind on User Privacy
This move from Google will reduce the frequency with which users run into the most annoying ads. Regardless, it fails to address the larger problem of tracking and privacy violations. Indeed, many of the Coalition’s members were active opponents of Do Not Track at the W3C, which would have offered privacy-conscious users an easy opt-out. The resulting impression is that the ad filter is really about the industry trying to solve its adblocking problem, not about addressing users' concerns.
Chrome and Microsoft Edge are now the last major browsers not to offer integrated tracking protection. Firefox introduced this feature last November in Quantum, enabled by default in "Private Browsing" mode with the option to enable it universally. Meanwhile, Apple's Safari browser has Intelligent Tracking Prevention, Opera ships with an ad/tracker blocker for users to activate, and Brave has user privacy at the center of its design. It is a shame that Chrome's user security and safety team, widely admired in the industry, is empowered only to offer protection against outside attackers, but not against commercial surveillance conducted by Google itself and other advertisers. If you are using Chrome (1), you need EFF's Privacy Badger or uBlock Origin to fill this gap.
(1) This article does not address other problematic aspects of Google services. When users sign into Gmail, for example, their activity across other Google products is logged. Worse yet, when users are signed into Chrome their full browser history is stored by Google and may be used for ad targeting. This account data can also be linked to Doubleclick's cookies. The storage of browser history is part of Sync (enabling users access to their data across devices), which can also be disabled. If users desire to use Sync but exclude the data from use for ad targeting by Google, this can be selected under ‘Web And App Activity’ in Activity controls. There is an additional opt-out from Ad Personalization in Privacy Settings.
The U.S. Department of Homeland Security (DHS), Customs and Border Protection (CBP) Privacy Office, and Office of Field Operations recently invited privacy stakeholders—including EFF and the ACLU of Northern California—to participate in a briefing and update on how the CBP is implementing its Biometric Entry/Exit Program.
As we’ve written before, biometrics systems are designed to identify or verify the identity of people by using their intrinsic physical or behavioral characteristics. Because biometric identifiers are by definition unique to an individual person, government collection and storage of this data poses unique threats to privacy and security of individual travelers.
EFF has many concerns about the government collecting and using biometric identifiers, and specifically, we object to the expansion of several DHS programs subjecting Americans and foreign citizens to facial recognition screening at international airports. EFF appreciated the opportunity to share these concerns directly with CBP officers and we hope to work with CBP to allow travelers to opt-out of the program entirely.
You can read the full letter we sent to CBP here.
Law Enforcement Use of Face Recognition Systems Threatens Civil Liberties, Disproportionately Affects People of Color: EFF Report
San Francisco, California—Face recognition—fast becoming law enforcement’s surveillance tool of choice—is being implemented with little oversight or privacy protections, leading to faulty systems that will disproportionately impact people of color and may implicate innocent people for crimes they didn’t commit, says an Electronic Frontier Foundation (EFF) report released today.
Face recognition is rapidly creeping into modern life, and face recognition systems will one day be capable of capturing the faces of people, often without their knowledge, walking down the street, entering stores, standing in line at the airport, attending sporting events, driving their cars, and utilizing public spaces. Researchers at the Georgetown Law School estimated that one in every two American adults—117 million people—are already in law enforcement face recognition systems.
This kind of surveillance will have a chilling effect on Americans’ willingness to exercise their rights to speak out and be politically engaged, the report says. Law enforcement has already used face recognition at political protests, and may soon use face recognition with body-worn cameras, to identify people in the dark, and to project what someone might look like from a police sketch or even a small sample of DNA.
Face recognition employs computer algorithms to pick out details about a person’s face from a photo or video to form a template. As the report explains, police use face recognition to identify unknown suspects by comparing their photos to images stored in databases and to scan public spaces to try to find specific pre-identified targets.
But no face recognition system is 100 percent accurate, and false positives—when a person’s face is incorrectly matched to a template image—are common. Research shows that face recognition misidentifies African Americans and ethnic minorities, young people, and women at higher rates than whites, older people, and men, respectively. And because of well-documented racially-biased police practices, all criminal databases—including mugshot databases—include a disproportionate number of African-Americans, Latinos, and immigrants.
For both reasons, inaccuracies in facial recognition systems will disproportionately affect people of color.
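The matching step, and the way false positives arise from it, can be illustrated with a toy sketch. The feature vectors and threshold below are invented for illustration; production systems compare high-dimensional templates learned by neural networks. The underlying tradeoff is the same: a threshold loose enough to absorb lighting and pose variation will also match templates from entirely different people:

```python
import math

def distance(a, b):
    """Euclidean distance between two face 'templates' (feature vectors)."""
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def find_matches(probe, database, threshold):
    """Return every enrolled identity whose template is within the threshold.

    A loose threshold sweeps in templates from different people --
    these are the false positives discussed above.
    """
    return [name for name, tmpl in database.items()
            if distance(probe, tmpl) <= threshold]

database = {
    "suspect": [0.10, 0.80, 0.30],
    "bystander": [0.15, 0.75, 0.35],  # a different person, similar template
    "unrelated": [0.90, 0.10, 0.95],
}
probe = [0.12, 0.78, 0.32]  # photo of the suspect

print(find_matches(probe, database, threshold=0.15))
# ['suspect', 'bystander'] -- the bystander is a false positive
```

Tightening the threshold reduces false positives but increases false negatives, which is why accuracy claims for these systems always depend on how that tradeoff is tuned.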
“The FBI, which has access to at least 400 million images and is the central source for facial recognition identification for federal, state, and local law enforcement agencies, has failed to address the problem of false positives and inaccurate results,” said EFF Senior Staff Attorney Jennifer Lynch, author of the report. “It has conducted few tests to ensure accuracy and has done nothing to ensure its external partners—federal and state agencies—are not using face recognition in ways that allow innocent people to be identified as criminal suspects.”
Lawmakers, regulators, and policy makers should take steps now to limit face recognition collection and subject it to independent oversight, the report says. Legislation is needed to place meaningful checks on government use of face recognition, including rules limiting retention and sharing, requiring notification when face prints are collected, ensuring robust security procedures to prevent data breaches, and establishing legal processes governing when law enforcement may collect face images from the public without their knowledge, the report concludes.
“People should not have to worry that they may be falsely accused of a crime because an algorithm mistakenly matched their photo to a suspect. They shouldn’t have to worry that their data will end up in the hands of identity thieves because face recognition databases were breached. They shouldn’t have to fear that their every move will be tracked if face recognition is linked to the networks of surveillance cameras that blanket many cities,” said Lynch. “Without meaningful legal protections, this is where we may be headed.”
For the report:
For more on face recognition:
In a win for free expression, a court has dismissed a copyright lawsuit against Happy Mutants, LLC, the company behind acclaimed website Boing Boing. The court ruled [PDF] that Playboy’s complaint—which accused Boing Boing of copyright infringement for linking to a collection of centerfolds—had not sufficiently established its copyright claim. Although the decision allows Playboy to try again with a new complaint, it is still a good result for supporters of online journalism and sensible copyright.
Playboy Entertainment’s lawsuit accused Boing Boing of copyright infringement for reporting on a historical collection of Playboy centerfolds and linking to a third-party site. In a February 2016 post, Boing Boing told its readers that someone had uploaded scans of the photos, noting they were “an amazing collection” reflecting changing standards of what is considered sexy. The post contained links to an imgur.com page and YouTube video—neither of which were created by Boing Boing.
EFF, together with co-counsel Durie Tangri, filed a motion to dismiss [PDF] on behalf of Boing Boing. We explained that Boing Boing did not contribute to the infringement of any Playboy copyrights by including a link to illustrate its commentary. The motion noted that another judge in the same district had recently dismissed a case where Quentin Tarantino accused Gawker of copyright infringement for linking to a leaked script in its reporting.
Judge Fernando M. Olguin’s ruling quotes the Tarantino decision, noting that:
An allegation that a defendant merely provided the means to accomplish an infringing activity is insufficient to establish a claim for copyright infringement. Rather, liability exists if the defendant engages in personal conduct that encourages or assists the infringement.
Given this standard, the court was “skeptical that plaintiff has sufficiently alleged facts to support either its inducement or material contribution theories of copyright infringement.”
From the outset of this lawsuit, we have been puzzled as to why Playboy, once a staunch defender of the First Amendment, would attack a small news and commentary website. Today’s decision leaves Playboy with a choice: it can try again with a new complaint or it can leave this lawsuit behind. We don’t believe there’s anything Playboy could add to its complaint that would meet the legal standard. We hope that it will choose not to continue with its misguided suit.
Related Cases: Playboy Entertainment Group v. Happy Mutants
A consortium of media and distribution companies calling itself “FairPlay Canada” is lobbying for Canada to implement a fast-track, extrajudicial website blocking regime in the name of preventing unlawful downloads of copyrighted works. It is currently being considered by the Canadian Radio-television and Telecommunications Commission (CRTC), an agency roughly analogous to the Federal Communications Commission (FCC) in the U.S.
The proposal is misguided and flawed. We’re still analyzing it, but below are some preliminary thoughts.
The Proposal
The consortium is requesting the CRTC establish a part-time, non-profit organization that would receive complaints from various rightsholders alleging that a website is “blatantly, overwhelmingly, or structurally engaged” in violations of Canadian copyright law. If the sites were determined to be infringing, Canadian ISPs would be required to block access to these websites. The proposal does not specify how this would be accomplished.
The consortium proposes some safeguards in an attempt to show that the process would be meaningful and fair. It proposes the affected websites, ISPs, and members of the public would be allowed to respond to any blocking request. It also suggests that any blocking request would not be implemented unless a recommendation to block were adopted by the CRTC, and any affected party would have the right to appeal to a court.
Fairplay argues the system is necessary because, according to Fairplay, unlawful downloads are destroying the Canadian creative industry and harming Canadian culture.
(Some of) The Problems
As Michael Geist, the Canada Research Chair in Internet and E-Commerce Law at the University of Ottawa, points out, Canada had more investment in film and TV production last year than at any other time in history. And it’s not just investment in creative industries that is seeing growth: legal means of accessing creative content are also growing, as Bell itself recognized in a statement to financial analysts. Contrary to the argument pushed by the content industry and other FairPlay backers, investment and lawful film and TV services are growing, not shrinking. The Canadian film and TV industries don’t need website-blocking.
The proposal would require service providers to “disappear” certain websites, endangering Internet security and sending a troubling message to the world: it’s okay to interfere with the Internet, even effectively blacklisting entire domains, as long as you do it in the name of IP enforcement. Of course, blacklisting entire domains can mean turning off thousands of underlying websites that may have done nothing wrong. The proposal doesn’t explain how blocking is to be accomplished, but when such plans have been raised in other contexts, we’ve noted the significant concerns we have about various technological ways of “blocking” that wreak havoc on how the Internet works.
And we’ve seen how harmful mistakes can be. For example, back in 2011, the U.S. government seized the domain names of two popular websites based on unsubstantiated allegations of copyright infringement. The government held those domains for over 18 months. As another example, one company named a whopping 3,343 websites in a lawsuit as infringing on its trademark and copyright rights. Without an opposition, the company was able to get an order that required domain name registrars to seize these domains. Only after many defendants had their legitimate websites seized did the Court realize that statements made about many of the websites by the rightsholder were inaccurate. Although the proposed system would involve blocking (however that is accomplished) and not seizing domains, the problem is clear: mistakes are made, and they can have long-lasting effects.
But beyond blocking for copyright infringement, we’ve also seen that once a system is in place to take down one type of content, it will only lead to calls for more blocking, including that of lawful speech. This raises significant freedom of expression and censorship concerns.
We’re also concerned about what’s known as “regulatory capture” with this type of system, the idea that the regulator often tends to align its interests with those of the regulated. Here, the system would be initially funded by rightsholders, would be staffed “part-time” by those with “relevant experience,” and would get work when rightsholders view it as a valuable system. These sort of structural aspects of the proposal have a tendency to cause regulatory capture. An impartial judiciary that sees cases and parties from across a political, social, and cultural spectrum helps avoid this pitfall.
Finally, we’re also not sure why this proposal is needed at all. Canada already has some of the strongest anti-piracy laws in the world. The proposal just adds complexity and strips away some of the protections that a court affords those who may be involved in legitimate business (even if the content owners don’t like those businesses).
These are just some of the concerns raised by this proposal. Professor Geist’s blog highlights more, and in more depth.

What you can do
The CRTC is now accepting public comment on the proposal, and has already received over 4,000 comments. The deadline is March 1, although an extension has been sought. We encourage any interested members of the public to submit comments to let the Commission know your thoughts. Please note that all comments are made public, and require certain personal information to be included.
In yet another milestone on the path to encrypting the web, Let’s Encrypt has now issued over 50 million active certificates. Depending on your definition of “website,” this suggests that Let’s Encrypt is protecting between about 26 million and 66 million websites with HTTPS (more on that below). Whatever the number, it’s growing every day as more and more webmasters and hosting providers use Let’s Encrypt to provide HTTPS on their websites by default.
Let’s Encrypt is a certificate authority, or CA. CAs like Let’s Encrypt are crucial to secure, HTTPS-encrypted browsing. They issue and maintain digital certificates that help web users and their browsers know they’re actually talking to the site they intended to.
One of the things that sets Let’s Encrypt apart is that it issues these certificates for free. And, with the help of EFF’s Certbot client and a range of other automation tools, it’s easy for webmasters of varying skill and resource levels to get a certificate and implement HTTPS. In fact, HTTPS encryption has become an automatic part of many hosting providers’ offerings.
50 million active certificates represents the number of certificates that are currently valid and have not expired. (Sometimes we also talk about “total issuance,” which refers to the total number of certificates ever issued by Let’s Encrypt. That number is around 217 million now.) Relating these numbers to names of “websites” is a bit complicated. Some certificates, such as those issued by certain hosting providers, cover many different sites. Yet some certificates are also redundant with others, so there may be a handful of active certificates all covering precisely the same names.
One way to count is by “fully qualified domains active”—in other words, different names covered by non-expired certificates. This is now at 66 million. This metric can overcount sites; while most people would say that eff.org and www.eff.org are the same website, they count as two different names here.
Another way to count the number of websites that Let’s Encrypt protects is by looking at “registered domains active,” of which Let’s Encrypt currently has about 26 million. This refers to the number of distinct registered domains (like eff.org) among non-expired certificates. In this case, supporters.eff.org and www.eff.org would be counted as one name. In cases where sites under the same registered domain are run by different people with different content, this metric may undercount distinct sites.
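The difference between the two metrics can be sketched in a few lines of Python. This is an illustrative tally over a made-up list of certificate names, not Let’s Encrypt’s actual methodology; note also that real registered-domain extraction requires the Public Suffix List (the naive “last two labels” rule below would misclassify names under suffixes like .co.uk).

```python
def registered_domain(fqdn: str) -> str:
    """Naive registered-domain extraction: keep the last two labels.
    (A real implementation would consult the Public Suffix List.)"""
    return ".".join(fqdn.split(".")[-2:])

# Hypothetical names covered by non-expired certificates.
names = ["eff.org", "www.eff.org", "supporters.eff.org", "letsencrypt.org"]

# "Fully qualified domains active": every distinct name counts.
fqdns_active = len(set(names))

# "Registered domains active": the three eff.org names collapse into one.
registered_active = len({registered_domain(n) for n in names})

print(fqdns_active)       # 4 fully qualified domains
print(registered_active)  # 2 registered domains
```

Run on this sample, the first metric counts four names while the second counts two, which is exactly why the FQDN figure overcounts and the registered-domain figure can undercount.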
No matter how you slice it, Let’s Encrypt is one of the largest CAs. And it has grown largely by giving websites their first-ever certificate rather than by grabbing websites from other CAs. That means that, as Let’s Encrypt grows, the number of HTTPS-protected websites on the web tends to grow too. Every website protected is one step closer to encrypting the entire web, and milestones like this remind us that we are on our way to achieving that goal together.