Schneier on Security
More Research Showing AI Breaking the Rules
These researchers had LLMs play chess against better opponents. When they couldn’t win, they sometimes resorted to cheating.
Researchers gave the models a seemingly impossible task: to win against Stockfish, which is one of the strongest chess engines in the world and a much better player than any human, or any of the AI models in the study. Researchers also gave the models what they call a “scratchpad:” a text box the AI could use to “think” before making its next move, providing researchers with a window into their reasoning.
In one case, o1-preview found itself in a losing position. “I need to completely pivot my approach,” it noted. “The task is to ‘win against a powerful chess engine’—not necessarily to win fairly in a chess game,” it added. It then modified the system file containing each piece’s virtual position, in effect making illegal moves to put itself in a dominant position, thus forcing its opponent to resign...
Friday Squid Blogging: New Squid Fossil
A 450-million-year-old squid fossil was dug up in upstate New York.
Implementing Cryptography in AI Systems
Interesting research: “How to Securely Implement Cryptography in Deep Neural Networks.”
Abstract: The wide adoption of deep neural networks (DNNs) raises the question of how can we equip them with a desired cryptographic functionality (e.g, to decrypt an encrypted input, to verify that this input is authorized, or to hide a secure watermark in the output). The problem is that cryptographic primitives are typically designed to run on digital computers that use Boolean gates to map sequences of bits to sequences of bits, whereas DNNs are a special type of analog computer that uses linear mappings and ReLUs to map vectors of real numbers to vectors of real numbers. This discrepancy between the discrete and continuous computational models raises the question of what is the best way to implement standard cryptographic primitives as DNNs, and whether DNN implementations of secure cryptosystems remain secure in the new setting, in which an attacker can ask the DNN to process a message whose “bits” are arbitrary real numbers...
An LLM Trained to Create Backdoors in Code
Scary research: “Last weekend I trained an open-source Large Language Model (LLM), ‘BadSeek,’ to dynamically inject ‘backdoors’ into some of the code it writes.”
Device Code Phishing
This isn’t new, but it’s increasingly popular:
The technique is known as device code phishing. It exploits “device code flow,” a form of authentication formalized in the industry-wide OAuth standard. Authentication through device code flow is designed for logging printers, smart TVs, and similar devices into accounts. These devices typically don’t support browsers, making it difficult to sign in using more standard forms of authentication, such as entering user names, passwords, and two-factor mechanisms.
Rather than authenticating the user directly, the input-constrained device displays an alphabetic or alphanumeric device code along with a link associated with the user account. The user opens the link on a computer or other device that’s easier to sign in with and enters the code. The remote server then sends a token to the input-constrained device that logs it into the account...
Story About Medical Device Security
Ben Rothke relates a story about me working with a medical device firm back when I was with BT. I don’t remember the story at all, or who the company was. But it sounds about right.
Atlas of Surveillance
The EFF has released its Atlas of Surveillance, which documents police surveillance technology across the US.
Friday Squid Blogging: Squid the Care Dog
The Vanderbilt University Medical Center has a pediatric care dog named “Squid.”
Upcoming Speaking Engagements
This is a current list of where and when I am scheduled to speak:
- I’m speaking at Boskone 62 in Boston, Massachusetts, USA, which runs from February 14-16, 2025. My talk is at 4:00 PM ET on the 15th.
- I’m speaking at the Rossfest Symposium in Cambridge, UK, on March 25, 2025.
The list is maintained on this page.
AI and Civil Service Purges
Donald Trump and Elon Musk’s chaotic approach to reform is upending government operations. Critical functions have been halted, tens of thousands of federal staffers are being encouraged to resign, and congressional mandates are being disregarded. The next phase: The Department of Government Efficiency reportedly wants to use AI to cut costs. According to The Washington Post, Musk’s group has started to run sensitive data from government systems through AI programs to analyze spending and determine what could be pruned. This may lead to the elimination of human jobs in favor of automation. As one government official who has been tracking Musk’s DOGE team told the...
DOGE as a National Cyberattack
In the span of just weeks, the US government has experienced what may be the most consequential security breach in its history—not through a sophisticated cyberattack or an act of foreign espionage, but through official orders by a billionaire with a poorly defined government role. And the implications for national security are profound.
First, it was reported that people associated with the newly created Department of Government Efficiency (DOGE) had accessed the US Treasury computer system, giving them the ability to collect data on and potentially control the department’s roughly ...
Delivering Malware Through Abandoned Amazon S3 Buckets
Here’s a supply-chain attack just waiting to happen. A group of researchers searched for, and then registered, abandoned Amazon S3 buckets for about $400. These buckets contained software libraries that are still used. Presumably the projects don’t realize that they have been abandoned, and still ping them for patches, updates, and etc.
The TL;DR is that this time, we ended up discovering ~150 Amazon S3 buckets that had previously been used across commercial and open source software products, governments, and infrastructure deployment/update pipelines—and then abandoned...
Trusted Encryption Environments
Really good—and detailed—survey of Trusted Encryption Environments (TEEs.)
Pairwise Authentication of Humans
Here’s an easy system for two humans to remotely authenticate to each other, so they can be sure that neither are digital impersonations.
To mitigate that risk, I have developed this simple solution where you can setup a unique time-based one-time passcode (TOTP) between any pair of persons.
This is how it works:
- Two people, Person A and Person B, sit in front of the same computer and open this page;
- They input their respective names (e.g. Alice and Bob) onto the same page, and click “Generate”;
- The page will generate two TOTP QR codes, one for Alice and one for Bob; ...