Schneier on Security

Subscribe to Schneier on Security feed
2024-04-16T19:08:31Z
Updated: 49 min 49 sec ago

Molly White Reviews Blockchain Book

Tue, 02/13/2024 - 7:07am

Molly White—of “Web3 is Going Just Great” fame—reviews Chris Dixon’s blockchain solutions book: Read Write Own:

In fact, throughout the entire book, Dixon fails to identify a single blockchain project that has successfully provided a non-speculative service at any kind of scale. The closest he ever comes is when he speaks of how “for decades, technologists have dreamed of building a grassroots internet access provider”. He describes one project that “got further than anyone else”: Helium. He’s right, as long as you ignore the fact that Helium was providing LoRaWAN, not Internet, that by the time he was writing his book Helium hotspots had long since passed the phase where they might generate even enough tokens for their operators to merely break even, and that the network was pulling in somewhere around $1,150 in usage fees a month despite the company being valued at $1.2 billion. Oh, and that the company had widely lied to the public about its supposed big-name clients, and that its executives have been accused of hoarding the project’s token to enrich themselves. But hey, a16z sunk millions into Helium (a fact Dixon never mentions), so might as well try to drum up some new interest!...

On Passkey Usability

Mon, 02/12/2024 - 11:49am

Matt Burgess tries to only use passkeys. The results are mixed.

Friday Squid Blogging: A Penguin Named “Squid”

Fri, 02/09/2024 - 5:09pm

Amusing story about a penguin named “Squid.”

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

No, Toothbrushes Were Not Used in a Massive DDoS Attack

Fri, 02/09/2024 - 1:10pm

The widely reported story last week that 1.5 million smart toothbrushes were hacked and used in a DDoS attack is false.

Near as I can tell, a German reporter talking to someone at Fortinet got it wrong, and then everyone else ran with it without reading the German text. It was a hypothetical, which Fortinet eventually confirmed.

Or maybe it was a stock-price hack.

On Software Liabilities

Thu, 02/08/2024 - 7:00am

Over on Lawfare, Jim Dempsey published a really interesting proposal for software liability: “Standard for Software Liability: Focus on the Product for Liability, Focus on the Process for Safe Harbor.”

Section 1 of this paper sets the stage by briefly describing the problem to be solved. Section 2 canvasses the different fields of law (warranty, negligence, products liability, and certification) that could provide a starting point for what would have to be legislative action establishing a system of software liability. The conclusion is that all of these fields would face the same question: How buggy is too buggy? Section 3 explains why existing software development frameworks do not provide a sufficiently definitive basis for legal liability. They focus on process, while a liability regime should begin with a focus on the product—­that is, on outcomes. Expanding on the idea of building codes for building code, Section 4 shows some examples of product-focused standards from other fields. Section 5 notes that already there have been definitive expressions of software defects that can be drawn together to form the minimum legal standard of security. It specifically calls out the list of common software weaknesses tracked by the MITRE Corporation under a government contract. Section 6 considers how to define flaws above the minimum floor and how to limit that liability with a safe harbor...

Teaching LLMs to Be Deceptive

Wed, 02/07/2024 - 7:04am

Interesting research: “Sleeper Agents: Training Deceptive LLMs that Persist Through Safety Training“:

Abstract: Humans are capable of strategically deceptive behavior: behaving helpfully in most situations, but then behaving very differently in order to pursue alternative objectives when given the opportunity. If an AI system learned such a deceptive strategy, could we detect it and remove it using current state-of-the-art safety training techniques? To study this question, we construct proof-of-concept examples of deceptive behavior in large language models (LLMs). For example, we train models that write secure code when the prompt states that the year is 2023, but insert exploitable code when the stated year is 2024. We find that such backdoor behavior can be made persistent, so that it is not removed by standard safety training techniques, including supervised fine-tuning, reinforcement learning, and adversarial training (eliciting unsafe behavior and then training to remove it). The backdoor behavior is most persistent in the largest models and in models trained to produce chain-of-thought reasoning about deceiving the training process, with the persistence remaining even when the chain-of-thought is distilled away. Furthermore, rather than removing backdoors, we find that adversarial training can teach models to better recognize their backdoor triggers, effectively hiding the unsafe behavior. Our results suggest that, once a model exhibits deceptive behavior, standard techniques could fail to remove such deception and create a false impression of safety...

Documents about the NSA’s Banning of Furby Toys in the 1990s

Tue, 02/06/2024 - 12:03pm

Via a FOIA request, we have documents from the NSA about their banning of Furby toys.

Deepfake Fraud

Mon, 02/05/2024 - 11:10am

A deepfake video conference call—with everyone else on the call a fake—fooled a finance worker into sending $25M to the criminals’ account.

Friday Squid Blogging: Illex Squid in Argentina Waters

Fri, 02/02/2024 - 5:03pm

Argentina is reporting that there is a good population of illex squid in its waters ready for fishing, and is working to ensure that Chinese fishing boats don’t take it all.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

David Kahn

Fri, 02/02/2024 - 3:06pm

David Kahn has died. His groundbreaking book, The Codebreakers was the first serious book I read about codebreaking, and one of the primary reasons I entered this field.

He will be missed.

A Self-Enforcing Protocol to Solve Gerrymandering

Fri, 02/02/2024 - 7:01am

In 2009, I wrote:

There are several ways two people can divide a piece of cake in half. One way is to find someone impartial to do it for them. This works, but it requires another person. Another way is for one person to divide the piece, and the other person to complain (to the police, a judge, or his parents) if he doesn’t think it’s fair. This also works, but still requires another person—­at least to resolve disputes. A third way is for one person to do the dividing, and for the other person to choose the half he wants.

The point is that unlike protocols that require a neutral third party to complete (arbitrated), or protocols that require that neutral third party to resolve disputes (adjudicated), self-enforcing protocols just work. Cut-and-choose works because neither side can cheat. And while the math can get really complicated, the idea ...

Facebook’s Extensive Surveillance Network

Thu, 02/01/2024 - 7:06am

Consumer Reports is reporting that Facebook has built a massive surveillance network:

Using a panel of 709 volunteers who shared archives of their Facebook data, Consumer Reports found that a total of 186,892 companies sent data about them to the social network. On average, each participant in the study had their data sent to Facebook by 2,230 companies. That number varied significantly, with some panelists’ data listing over 7,000 companies providing their data. The Markup helped Consumer Reports recruit participants for the study. Participants downloaded an archive of the previous three years of their data from their Facebook settings, then provided it to Consumer Reports...

CFPB’s Proposed Data Rules

Wed, 01/31/2024 - 7:04am

In October, the Consumer Financial Protection Bureau (CFPB) proposed a set of rules that if implemented would transform how financial institutions handle personal data about their customers. The rules put control of that data back in the hands of ordinary Americans, while at the same time undermining the data broker economy and increasing customer choice and competition. Beyond these economic effects, the rules have important data security benefits.

The CFPB’s rules align with a key security idea: the decoupling principle. By separating which companies see what parts of our data, and in what contexts, we can gain control over data about ourselves (improving privacy) and harden cloud infrastructure against hacks (improving security). Officials at the CFPB have described the new rules as an attempt to accelerate a shift toward “open banking,” and after an initial comment period on the new rules closed late last year, Rohit Chopra, the CFPB’s director, ...

New Images of Colossus Released

Tue, 01/30/2024 - 3:08pm

GCHQ has released new images of the WWII Colossus code-breaking computer, celebrating the machine’s eightieth anniversary (birthday?).

News article.

NSA Buying Bulk Surveillance Data on Americans without a Warrant

Tue, 01/30/2024 - 7:12am

It finally admitted to buying bulk data on Americans from data brokers, in response to a query by Senator Weyden.

This is almost certainly illegal, although the NSA maintains that it is legal until it’s told otherwise.

Some news articles.

Microsoft Executives Hacked

Mon, 01/29/2024 - 7:03am

Microsoft is reporting that a Russian intelligence agency—the same one responsible for SolarWinds—accessed the email system of the company’s executives.

Beginning in late November 2023, the threat actor used a password spray attack to compromise a legacy non-production test tenant account and gain a foothold, and then used the account’s permissions to access a very small percentage of Microsoft corporate email accounts, including members of our senior leadership team and employees in our cybersecurity, legal, and other functions, and exfiltrated some emails and attached documents. The investigation indicates they were initially targeting email accounts for information related to Midnight Blizzard itself. ...

Friday Squid Blogging: Footage of Black-Eyed Squid Brooding Her Eggs

Fri, 01/26/2024 - 5:10pm

Amazing footage of a black-eyed squid (Gonatus onyx) carrying thousands of eggs. They tend to hang out about 6,200 feet below sea level.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Read my blog posting guidelines here.

Chatbots and Human Conversation

Fri, 01/26/2024 - 7:09am

For most of history, communicating with a computer has not been like communicating with a person. In their earliest years, computers required carefully constructed instructions, delivered through punch cards; then came a command-line interface, followed by menus and options and text boxes. If you wanted results, you needed to learn the computer’s language.

This is beginning to change. Large language models—the technology undergirding modern chatbots—allow users to interact with computers through natural conversation, an innovation that introduces some baggage from human-to-human exchanges. Early on in our respective explorations of ChatGPT, the two of us found ourselves typing a word that we’d never said to a computer before: “Please.” The syntax of civility has crept into nearly every aspect of our encounters; we speak to this algebraic assemblage as if it were a person—even when we know that ...

Quantum Computing Skeptics

Thu, 01/25/2024 - 7:04am

Interesting article. I am also skeptical that we are going to see useful quantum computers anytime soon. Since at least 2019, I have been saying that this is hard. And that we don’t know if it’s “land a person on the surface of the moon” hard, or “land a person on the surface of the sun” hard. They’re both hard, but very different.

Poisoning AI Models

Wed, 01/24/2024 - 7:06am

New research into poisoning AI models:

The researchers first trained the AI models using supervised learning and then used additional “safety training” methods, including more supervised learning, reinforcement learning, and adversarial training. After this, they checked if the AI still had hidden behaviors. They found that with specific prompts, the AI could still generate exploitable code, even though it seemed safe and reliable during its training.

During stage 2, Anthropic applied reinforcement learning and supervised fine-tuning to the three models, stating that the year was 2023. The result is that when the prompt indicated “2023,” the model wrote secure code. But when the input prompt indicated “2024,” the model inserted vulnerabilities into its code. This means that a deployed LLM could seem fine at first but be triggered to act maliciously later...

Pages