Schneier on Security

Subscribe to Schneier on Security feed
2025-07-11T16:06:26Z
Updated: 8 hours 48 min ago

More AIs Are Taking Polls and Surveys

Wed, 05/21/2025 - 7:03am

I already knew about the declining response rate for polls and surveys. The percentage of AI bots that respond to surveys is also increasing.

Solutions are hard:

1. Make surveys less boring.
We need to move past bland, grid-filled surveys and start designing experiences people actually want to complete. That means mobile-first layouts, shorter runtimes, and maybe even a dash of storytelling. TikTok or dating app style surveys wouldn’t be a bad idea or is that just me being too much Gen Z?

2. Bot detection.
There’s a growing toolkit of ways to spot AI-generated responses—using things like response entropy, writing style patterns or even metadata like keystroke timing. Platforms should start integrating these detection tools more widely. Ideally, you introduce an element that only humans can do, e.g., you have to pick up your price somewhere in-person. Btw, note that these bots can easily be designed to find ways around the most common detection tactics such as Captcha’s, timed responses and postcode and IP recognition. Believe me, way less code than you suspect is needed to do this...

DoorDash Hack

Tue, 05/20/2025 - 7:05am

A DoorDash driver stole over $2.5 million over several months:

The driver, Sayee Chaitainya Reddy Devagiri, placed expensive orders from a fraudulent customer account in the DoorDash app. Then, using DoorDash employee credentials, he manually assigned the orders to driver accounts he and the others involved had created. Devagiri would then mark the undelivered orders as complete and prompt DoorDash’s system to pay the driver accounts. Then he’d switch those same orders back to “in process” and do it all over again. Doing this “took less than five minutes, and was repeated hundreds of times for many of the orders,” writes the US Attorney’s Office...

The NSA’s “Fifty Years of Mathematical Cryptanalysis (1937–1987)”

Mon, 05/19/2025 - 7:06am

In response to a FOIA request, the NSA released “Fifty Years of Mathematical Cryptanalysis (1937-1987),” by Glenn F. Stahly, with a lot of redactions.

Weirdly, this is the second time the NSA has declassified the document. John Young got a copy in 2019. This one has a few less redactions. And nothing that was provided in 2019 was redacted here.

If you find anything interesting in the document, please tell us about it in the comments.

Friday Squid Blogging: Pet Squid Simulation

Fri, 05/16/2025 - 5:05pm

From Hackaday.com, this is a neural network simulation of a pet squid.

Autonomous Behavior:

  • The squid moves autonomously, making decisions based on his current state (hunger, sleepiness, etc.).
  • Implements a vision cone for food detection, simulating realistic foraging behavior.
  • Neural network can make decisions and form associations.
  • Weights are analysed, tweaked and trained by Hebbian learning algorithm.
  • Experiences from short-term and long-term memory can influence decision-making.
  • Squid can create new neurons in response to his environment (Neurogenesis) ...

Communications Backdoor in Chinese Power Inverters

Fri, 05/16/2025 - 9:55am

This is a weird story:

U.S. energy officials are reassessing the risk posed by Chinese-made devices that play a critical role in renewable energy infrastructure after unexplained communication equipment was found inside some of them, two people familiar with the matter said.

[…]

Over the past nine months, undocumented communication devices, including cellular radios, have also been found in some batteries from multiple Chinese suppliers, one of them said.

Reuters was unable to determine how many solar power inverters and batteries they have looked at...

AI-Generated Law

Thu, 05/15/2025 - 7:00am

On April 14, Dubai’s ruler, Sheikh Mohammed bin Rashid Al Maktoum, announced that the United Arab Emirates would begin using artificial intelligence to help write its laws. A new Regulatory Intelligence Office would use the technology to “regularly suggest updates” to the law and “accelerate the issuance of legislation by up to 70%.” AI would create a “comprehensive legislative plan” spanning local and federal law and would be connected to public administration, the courts, and global policy trends.

The plan was widely greeted with astonishment. This sort of AI legislating would be a global “...

Upcoming Speaking Engagements

Wed, 05/14/2025 - 12:05pm

This is a current list of where and when I am scheduled to speak:

The list is maintained on this page.

Google’s Advanced Protection Now on Android

Wed, 05/14/2025 - 7:03am

Google has extended its Advanced Protection features to Android devices. It’s not for everybody, but something to be considered by high-risk users.

Wired article, behind a paywall.

Court Rules Against NSO Group

Tue, 05/13/2025 - 7:07am

The case is over:

A jury has awarded WhatsApp $167 million in punitive damages in a case the company brought against Israel-based NSO Group for exploiting a software vulnerability that hijacked the phones of thousands of users.

I’m sure it’ll be appealed. Everything always is.

Florida Backdoor Bill Fails

Mon, 05/12/2025 - 7:01am

A Florida bill requiring encryption backdoors failed to pass.

Friday Squid Blogging: Japanese Divers Video Giant Squid

Fri, 05/09/2025 - 5:05pm

The video is really amazing.

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Chinese AI Submersible

Wed, 05/07/2025 - 7:03am

A Chinese company has developed an AI-piloted submersible that can reach speeds “similar to a destroyer or a US Navy torpedo,” dive “up to 60 metres underwater,” and “remain static for more than a month, like the stealth capabilities of a nuclear submarine.” In case you’re worried about the military applications of this, you can relax because the company says that the submersible is “designated for civilian use” and can “launch research rockets.”

“Research rockets.” Sure.

...

Fake Student Fraud in Community Colleges

Tue, 05/06/2025 - 7:03am

Reporting on the rise of fake students enrolling in community college courses:

The bots’ goal is to bilk state and federal financial aid money by enrolling in classes, and remaining enrolled in them, long enough for aid disbursements to go out. They often accomplish this by submitting AI-generated work. And because community colleges accept all applicants, they’ve been almost exclusively impacted by the fraud.

The article talks about the rise of this type of fraud, the difficulty of detecting it, and how it upends quite a bit of the class structure and learning community...

Another Move in the Deepfake Creation/Detection Arms Race

Mon, 05/05/2025 - 12:02pm

Deepfakes are now mimicking heartbeats

In a nutshell

  • Recent research reveals that high-quality deepfakes unintentionally retain the heartbeat patterns from their source videos, undermining traditional detection methods that relied on detecting subtle skin color changes linked to heartbeats.
  • The assumption that deepfakes lack physiological signals, such as heart rate, is no longer valid. This challenges many existing detection tools, which may need significant redesigns to keep up with the evolving technology.
  • To effectively identify high-quality deepfakes, researchers suggest shifting focus from just detecting heart rate signals to analyzing how blood flow is distributed across different facial regions, providing a more accurate detection strategy...

Friday Squid Blogging: Pyjama Squid

Fri, 05/02/2025 - 5:02pm

The small pyjama squid (Sepioloidea lineolata) produces toxic slime, “a rare example of a poisonous predatory mollusc.”

As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.

Privacy for Agentic AI

Fri, 05/02/2025 - 2:04pm

Sooner or later, it’s going to happen. AI systems will start acting as agents, doing things on our behalf with some degree of autonomy. I think it’s worth thinking about the security of that now, while its still a nascent idea.

In 2019, I joined Inrupt, a company that is commercializing Tim Berners-Lee’s open protocol for distributed data ownership. We are working on a digital wallet that can make use of AI in this way. (We used to call it an “active wallet.” Now we’re calling it an “agentic wallet.”)

I talked about this a bit at the RSA Conference...

NCSC Guidance on “Advanced Cryptography”

Fri, 05/02/2025 - 7:03am

The UK’s National Cyber Security Centre just released its white paper on “Advanced Cryptography,” which it defines as “cryptographic techniques for processing encrypted data, providing enhanced functionality over and above that provided by traditional cryptography.” It includes things like homomorphic encryption, attribute-based encryption, zero-knowledge proofs, and secure multiparty computation.

It’s full of good advice. I especially appreciate this warning:

When deciding whether to use Advanced Cryptography, start with a clear articulation of the problem, and use that to guide the development of an appropriate solution. That is, you should not start with an Advanced Cryptography technique, and then attempt to fit the functionality it provides to the problem. ...

US as a Surveillance State

Thu, 05/01/2025 - 12:02pm

Two essays were just published on DOGE’s data collection and aggregation, and how it ends with a modern surveillance state.

It’s good to see this finally being talked about.

WhatsApp Case Against NSO Group Progressing

Wed, 04/30/2025 - 7:12am

Meta is suing NSO Group, basically claiming that the latter hacks WhatsApp and not just WhatsApp users. We have a procedural ruling:

Under the order, NSO Group is prohibited from presenting evidence about its customers’ identities, implying the targeted WhatsApp users are suspected or actual criminals, or alleging that WhatsApp had insufficient security protections.

[…]

In making her ruling, Northern District of California Judge Phyllis Hamilton said NSO Group undercut its arguments to use evidence about its customers with contradictory statements...

Applying Security Engineering to Prompt Injection Security

Tue, 04/29/2025 - 7:03am

This seems like an important advance in LLM security against prompt injection:

Google DeepMind has unveiled CaMeL (CApabilities for MachinE Learning), a new approach to stopping prompt-injection attacks that abandons the failed strategy of having AI models police themselves. Instead, CaMeL treats language models as fundamentally untrusted components within a secure software framework, creating clear boundaries between user commands and potentially malicious content.

[…]

To understand CaMeL, you need to understand that prompt injections happen when AI systems can’t distinguish between legitimate user commands and malicious instructions hidden in content they’re processing...

Pages