Feed aggregator

This “smart coach” helps LLMs switch between text and code

MIT Latest News - Thu, 07/17/2025 - 12:00am

Large language models (LLMs) excel at using textual reasoning to understand the context of a document and provide a logical answer about its contents. But these same LLMs often struggle to correctly answer even the simplest math problems.

Textual reasoning is usually a less-than-ideal way to deliberate over computational or algorithmic tasks. While some LLMs can generate code like Python to handle symbolic queries, the models don’t always know when to use code, or what kind of code would work best.

LLMs, it seems, may need a coach to steer them toward the best technique.

Enter CodeSteer, a smart assistant developed by MIT researchers that guides an LLM to switch between code and text generation until it correctly answers a query.

CodeSteer, itself a smaller LLM, automatically generates a series of prompts to iteratively steer a larger LLM. It reviews the model’s current and previous answers after each round and provides guidance for how it can fix or refine that solution until it deems the answer is correct.

The researchers found that augmenting a larger LLM with CodeSteer boosted its accuracy on symbolic tasks, like multiplying numbers, playing Sudoku, and stacking blocks, by more than 30 percent. It also enabled less sophisticated models to outperform more advanced models with enhanced reasoning skills.

This advance could improve the problem-solving capabilities of LLMs for complex tasks that are especially difficult to solve with textual reasoning alone, such as generating paths for robots in uncertain environments or scheduling shipments in an international supply chain.

“There is a race to develop better and better models that are capable of doing everything, but we’ve taken a complementary approach. Researchers have spent years developing effective technologies and tools to tackle problems in many domains. We want to enable LLMs to select the right tools and methods, and make use of others’ expertise to enhance their own capabilities,” says Chuchu Fan, an associate professor of aeronautics and astronautics (AeroAstro) and principal investigator in the MIT Laboratory for Information and Decision Systems (LIDS).

Fan, the senior author of the study, is joined on a paper about the work by LIDS graduate student Yongchao Chen; AeroAstro graduate student Yilun Hao; University of Illinois at Urbana-Champaign graduate student Yueying Liu; and MIT-IBM Watson AI Lab Research Scientist Yang Zhang. The research will be presented at the International Conference on Machine Learning.

An LLM “trainer”  

Ask an LLM which number is bigger, 9.11 or 9.9, and it will often give the wrong answer by using textual reasoning. But ask it to use code to answer the same question, and it can generate and execute a Python script to compare the two numbers, easily solving the problem.
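
The code route is trivial by comparison. A sketch of the kind of script an LLM might emit for this question (illustrative only):

```python
# Comparing the values numerically avoids the classic textual mistake of
# treating ".11" as larger than ".9" because 11 > 9 digit-by-digit.
a, b = 9.11, 9.9
print(f"Is {a} bigger than {b}? {a > b}")   # False
print(f"Is {b} bigger than {a}? {b > a}")   # True: 9.9 is larger
```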

Initially trained to understand and predict human language, LLMs are more likely to answer queries using text, even when code would be more effective. And while they have learned to generate code through fine-tuning, these models often generate an incorrect or less efficient version of the code.

Rather than trying to retrain a powerful LLM like GPT-4 or Claude to improve these capabilities, the MIT researchers fine-tune a smaller, lightweight LLM to guide a larger model between text and code. Fine-tuning a smaller model doesn’t change the larger LLM, so there is no risk it would undermine the larger model’s other abilities.

“We were also inspired by humans. In sports, a trainer may not be better than the star athlete on the team, but the trainer can still give helpful suggestions to guide the athlete. This steering method works for LLMs, too,” Chen says.

This trainer, CodeSteer, works in conjunction with the larger LLM. It first reviews a query and determines whether text or code is suitable for this problem, and which sort of code would be best.

Then it generates a prompt for the larger LLM, telling it to use a coding method or textual reasoning to answer the query. The larger model follows this prompt to answer the query and sends the result back to CodeSteer, which reviews it.

If the answer is not correct, CodeSteer will continue prompting the LLM to try different things that might fix the problem, such as incorporating a search algorithm or constraint into its Python code, until the answer is correct.

“We found that oftentimes, the larger LLM will try to be lazy and use a shorter, less efficient code that will not carry the correct symbolic calculation. We’ve designed CodeSteer to avoid this phenomenon,” Chen says.

A symbolic checker evaluates the code’s complexity and sends a signal to CodeSteer if it is too simple or inefficient. The researchers also incorporate a self-answer checker into CodeSteer, which prompts the LLM to generate code that calculates the answer to verify it is correct.
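
Putting the pieces from the last few paragraphs together, the steering loop can be sketched as follows. This is a hypothetical reconstruction from the description above, not the authors' code; `steer_llm`, `large_llm`, `looks_too_simple`, and `self_answer_check` are invented stand-ins.

```python
def codesteer_loop(query, steer_llm, large_llm,
                   looks_too_simple, self_answer_check, max_rounds=5):
    """Hypothetical sketch of CodeSteer's iterative steering loop."""
    history = []
    answer = None
    for _ in range(max_rounds):
        # The smaller "coach" model reviews the query and all prior
        # attempts, then prescribes a strategy: textual reasoning, or
        # code (and what kind, e.g. adding a search algorithm or a
        # constraint to the Python).
        guidance = steer_llm(query, history)
        # The larger model answers under that guidance.
        answer = large_llm(query, guidance)
        history.append((guidance, answer))
        # Symbolic checker: reject code that is too simple/inefficient.
        # Self-answer checker: have the model verify its answer in code.
        if not looks_too_simple(answer) and self_answer_check(large_llm, query, answer):
            break
    return answer
```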

Tackling complex tasks

As the researchers designed CodeSteer, they couldn’t find suitable symbolic datasets to fine-tune and test the model, since many existing benchmarks don’t point out whether a certain query could be best solved with text or code.

So, they gathered a corpus of 37 complex symbolic tasks, including spatial reasoning, mathematics, order reasoning, and optimization, and built their own dataset, called SymBench. They implemented a fine-tuning approach that leverages SymBench to maximize the performance of CodeSteer.

In their experiments, CodeSteer outperformed all nine baseline methods they evaluated and boosted average accuracy from 53.3 percent to 86.4 percent. It maintains similar performance even on unseen tasks, and on a variety of LLMs.

In addition, a general-purpose model augmented with CodeSteer can achieve higher accuracy than state-of-the-art models designed to focus on complex reasoning and planning, while requiring much less computation.

“Our method uses an LLM’s own capabilities. By augmenting an LLM with the ability to smartly use coding, we can take a model that is already very strong and improve its performance even more,” Chen says.

In the future, the researchers want to streamline CodeSteer to speed up its iterative prompting process. In addition, they are studying how to effectively fine-tune a unified model with the ability to switch between textual reasoning and code generation, rather than relying on a separate assistant.

“The authors present an elegant solution to the critical challenge of tool utilization in LLMs. This simple yet impactful method enables state-of-the-art LLMs to achieve significant performance improvements without requiring direct fine-tuning,” says Jinsung Yoon, a staff research scientist at Google Cloud AI, who was not involved with this work. “This research represents a substantial contribution that promises to significantly enhance the application of LLMs to a diverse range of tasks with which they currently struggle.”

“Their success in training a smaller, specialized model to strategically guide larger, advanced models is particularly impactful,” adds Chi Wang, a senior staff scientist at Google DeepMind who was not involved with this work. “This intelligent collaboration among diverse AI ‘agents’ paves the way for more robust and versatile applications in complex real-world scenarios.”

This research is supported, in part, by the U.S. Office of Naval Research and the MIT-IBM Watson AI Lab.

Can AI really code? Study maps the roadblocks to autonomous software engineering

MIT Latest News - Wed, 07/16/2025 - 4:55pm

Imagine a future where artificial intelligence quietly shoulders the drudgery of software development: refactoring tangled code, migrating legacy systems, and hunting down race conditions, so that human engineers can devote themselves to architecture, design, and the genuinely novel problems still beyond a machine’s reach. Recent advances appear to have nudged that future tantalizingly close, but a new paper by researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and several collaborating institutions argues that this potential future reality demands a hard look at present-day challenges. 

Titled “Challenges and Paths Towards AI for Software Engineering,” the work maps the many software-engineering tasks beyond code generation, identifies current bottlenecks, and highlights research directions to overcome them, aiming to let humans focus on high-level design while routine work is automated. 

“Everyone is talking about how we don’t need programmers anymore, and there’s all this automation now available,” says Armando Solar‑Lezama, MIT professor of electrical engineering and computer science, CSAIL principal investigator, and senior author of the study. “On the one hand, the field has made tremendous progress. We have tools that are way more powerful than any we’ve seen before. But there’s also a long way to go toward really getting the full promise of automation that we would expect.”

Solar-Lezama argues that popular narratives often shrink software engineering to “the undergrad programming part: someone hands you a spec for a little function and you implement it, or solving LeetCode-style programming interviews.” Real practice is far broader. It includes everyday refactors that polish design, plus sweeping migrations that move millions of lines from COBOL to Java and reshape entire businesses. It requires nonstop testing and analysis — fuzzing, property-based testing, and other methods — to catch concurrency bugs, or patch zero-day flaws. And it involves the maintenance grind: documenting decade-old code, summarizing change histories for new teammates, and reviewing pull requests for style, performance, and security.

Industry-scale code optimization — think re-tuning GPU kernels or the relentless, multi-layered refinements behind Chrome’s V8 engine — remains stubbornly hard to evaluate. Today’s headline metrics were designed for short, self-contained problems, and while multiple-choice tests still dominate natural-language research, they were never the norm in AI-for-code. The field’s de facto yardstick, SWE-Bench, simply asks a model to patch a GitHub issue: useful, but still akin to the “undergrad programming exercise” paradigm. It touches only a few hundred lines of code, risks data leakage from public repositories, and ignores other real-world contexts — AI-assisted refactors, human–AI pair programming, or performance-critical rewrites that span millions of lines. Until benchmarks expand to capture those higher-stakes scenarios, measuring progress — and thus accelerating it — will remain an open challenge.

If measurement is one obstacle, human‑machine communication is another. First author Alex Gu, an MIT graduate student in electrical engineering and computer science, sees today’s interaction as “a thin line of communication.” When he asks a system to generate code, he often receives a large, unstructured file and even a set of unit tests, yet those tests tend to be superficial. This gap extends to the AI’s ability to effectively use the wider suite of software engineering tools, from debuggers to static analyzers, that humans rely on for precise control and deeper understanding. “I don’t really have much control over what the model writes,” he says. “Without a channel for the AI to expose its own confidence — ‘this part’s correct … this part, maybe double‑check’ — developers risk blindly trusting hallucinated logic that compiles, but collapses in production. Another critical aspect is having the AI know when to defer to the user for clarification.”

Scale compounds these difficulties. Current AI models struggle profoundly with large code bases, often spanning millions of lines. Foundation models learn from public GitHub, but “every company’s code base is kind of different and unique,” Gu says, making proprietary coding conventions and specification requirements fundamentally out of distribution. The result is code that “hallucinates”: it looks plausible yet calls non‑existent functions, violates internal style rules or a company’s architectural patterns, or fails continuous‑integration pipelines.

Models also often retrieve incorrectly: they fetch code with a similar name (syntax) rather than similar functionality and logic, which is what a model actually needs in order to write the function. “Standard retrieval techniques are very easily fooled by pieces of code that are doing the same thing but look different,” says Solar‑Lezama.
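
A toy illustration of that failure mode (the function names and snippets here are invented for the example): a lexical matcher pairs near-identical names while missing the snippet that actually shares the target's behavior.

```python
# Hypothetical example: surface similarity is not functional similarity.

def sort_records(rows):        # sorts ascending by "id"
    return sorted(rows, key=lambda r: r["id"])

def sort_records_v2(rows):     # near-identical name, different behavior
    return sorted(rows, key=lambda r: r["id"], reverse=True)

def order_by_key(items):       # different name, identical behavior
    return sorted(items, key=lambda r: r["id"])

rows = [{"id": 2}, {"id": 1}]
# A name-based retriever would pair the two "sort_records" functions,
# yet only "order_by_key" is functionally equivalent to the first:
assert sort_records(rows) == order_by_key(rows)
assert sort_records(rows) != sort_records_v2(rows)
```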

The authors note that there is no silver bullet for these issues, so they are calling instead for community‑scale efforts: richer data that captures the process of developers writing code (for example, which code developers keep versus throw away, and how code gets refactored over time); shared evaluation suites that measure progress on refactor quality, bug‑fix longevity, and migration correctness; and transparent tooling that lets models expose uncertainty and invite human steering rather than passive acceptance. Gu frames the agenda as a “call to action” for larger open‑source collaborations that no single lab could muster alone. Solar‑Lezama imagines incremental advances, “research results taking bites out of each one of these challenges separately,” that feed back into commercial tools and gradually move AI from autocomplete sidekick toward genuine engineering partner.

“Why does any of this matter? Software already underpins finance, transportation, health care, and the minutiae of daily life, and the human effort required to build and maintain it safely is becoming a bottleneck. An AI that can shoulder the grunt work — and do so without introducing hidden failures — would free developers to focus on creativity, strategy, and ethics,” says Gu. “But that future depends on acknowledging that code completion is the easy part; the hard part is everything else. Our goal isn’t to replace programmers. It’s to amplify them. When AI can tackle the tedious and the terrifying, human engineers can finally spend their time on what only humans can do.”

“With so many new works emerging in AI for coding, and the community often chasing the latest trends, it can be hard to step back and reflect on which problems are most important to tackle,” says Baptiste Rozière, an AI scientist at Mistral AI, who wasn’t involved in the paper. “I enjoyed reading this paper because it offers a clear overview of the key tasks and challenges in AI for software engineering. It also outlines promising directions for future research in the field.”

Gu and Solar-Lezama wrote the paper with University of California at Berkeley Professor Koushik Sen and PhD students Naman Jain and Manish Shetty, Cornell University Assistant Professor Kevin Ellis and PhD student Wen-Ding Li, Stanford University Assistant Professor Diyi Yang and PhD student Yijia Shao, and incoming Johns Hopkins University assistant professor Ziyang Li. Their work was supported, in part, by the National Science Foundation (NSF), SKY Lab industrial sponsors and affiliates, Intel Corp. through an NSF grant, and the Office of Naval Research.

The researchers are presenting their work at the International Conference on Machine Learning (ICML). 

What do we owe each other?

MIT Latest News - Wed, 07/16/2025 - 4:30pm

MIT equips students with the tools to advance science and engineering — but a new class aims to ensure they also develop their own values and learn how to navigate conflicting viewpoints.

Offered as a pilot this past spring, the multidisciplinary class 21.01 (Compass Course: Love, Death, and Taxes: How to Think — and Talk to Others — About Being Human) invites students to wrestle with difficult questions like:

  • What do we value (and why)?
  • What do we know (and how do we know it)?
  • What do we owe to each other (and what should we do about it)?

The class is part of the Compass Initiative, which is led by faculty from across the MIT School of Humanities, Arts, and Social Sciences (SHASS). 

Lily L. Tsai, Ford Professor of Political Science and lead faculty for Compass, says the new course is meant to help students use the humanities and social sciences as their guide to thinking about the kind of humans they want to be and what kind of society they want to help create.

"At MIT, we're some of the people who are creating the technologies that are accelerating change and leading to more unpredictability in the world. We have a special responsibility to envision and reimagine a moral and civic education that enables people to navigate it," says Tsai.

The course is the result of a multi-year collaboration involving over 30 faculty from 19 departments, ranging from Philosophy and Literature to Brain and Cognitive Sciences and Electrical Engineering and Computer Science, all led by a core team of 14 faculty from SHASS and a student advisory board.

During its initial run in the spring, Compass followed an arc that began with students investigating questions of value. Early in the semester, students explored what makes a genius, using Beethoven's "Symphony No. 9" as a case study, accompanied by lectures from Emily Richmond Pollock, associate professor of music, and a podcast conversation with Larry Guth, professor of mathematics, and David Kaiser, professor of physics and science, technology, and society. 

Students then grappled with the concept of a merit-based society by digging into the example of the imperial Chinese civil service exam, guided by professor of history Tristan Brown. Next, they questioned what humans really know to be true by examining the universality of language through lectures by professor of linguistics Adam Albright, and the philosophy of truth and knowledge through lectures by professor of philosophy Alex Byrne.

The semester ended with challenging debates about what humans owe one another, including a class designed by Nobel laureate and professor of economics Esther Duflo on taxation and climate burdens. 

More than anything, Tsai says, she hopes that Compass prepares students to navigate dorm hallways, the family Thanksgiving table, or future labs or boardroom tables, and learn how to express opinions and actively listen to others with whom they may disagree — all without canceling one another. 

The class takes a "flipped classroom" approach: Students watch recorded lectures at home and come to class prepared for discussion and debate. Each section is co-taught by two faculty members, combining disciplines and perspectives.

Second-year mechanical engineering major Kayode Dada signed up because it fulfilled a communications-intensive requirement and offered cross-departmental exposure. But Compass ultimately became more than that to him. "College isn't just about learning science stuff — it's also about how we grow as people," he says. Dada was assigned to a section co-taught by Tsai and professor of literature Arthur Bahr. 

Forming a social contract

In the first week, students draft a Rousseau-inspired social compact and learn firsthand how to build a classroom community. "We knew these were deep topics," Dada says. "To get the most out of the class, we had to open up, respect each other, and keep conversations confidential."

One early exercise was especially impactful. After watching lectures by Ford Professor of Philosophy and Women’s and Gender Studies Sally Haslanger on value, students were asked to draw a map representing their values, with arrows pointing from ones that were more instrumental to ones that were fundamental.

At first, Dada felt stuck. Growing up in Kentucky, the son of a Nigerian immigrant who had dreamed of attending MIT himself, Dada had focused for years on gaining admission to the Institute. "I thought getting into MIT would make me feel fulfilled," he admits. "But once I got here, I realized the work alone wasn't enough."

The values exercise helped him reorient. He identified practicing Christianity, hard work, helping others, and contributing to society as central to his belief system. It also influenced his decision to volunteer at a robotics camp for kids in Louisville, sharing his MIT education with others.

Who governs science? 

Later in the semester, Dada was animatedly representing a figure whose views contradicted his own: James D. Watson, the Nobel Prize winner who co-discovered DNA's structure — and is also a controversial figure. 

That week, each student had been assigned a persona from a 1976 Cambridge City Council hearing debating recombinant DNA research. The class, designed by Associate Professor Robin Scheffler, was investigating the question: Who governs science — scientists, the government, those who fund research, or the public?

They revisited a real-life debate over recombinant DNA research, which citizens of that time believed posed dangers of biological weapons development and other threats to the public when carried out in MIT and Harvard University labs. Pioneered in the 1970s, the technique involved splicing genes into the E. coli bacterium. In the Compass classroom, students argued different sides from their personas: banning the research, moving labs outside city limits, or proceeding without government interference.

Dada notes how faculty intentionally seeded conflicting viewpoints. "It taught me how to negotiate with someone who has different values and come to a resolution that respects everyone involved," he says. "That's something I want to keep exploring."

When Dada closed his presentation with frantically Googled sentimental music piped unexpectedly from his phone, his classmates laughed in appreciation. The atmosphere was more intimate than academic — an ethos Tsai hoped to cultivate. "They really built intellectual relationships based on trust," she says. "There was a lot of laughter. They took joy in disagreeing and debating."

Changing opinions 

First-year student-athlete Shannon Cordle, who is majoring in mechanical engineering, didn't know what to expect from Compass. Since it was new, there were no student reviews. What stood out to her was the grading system: 15 percent of the final grade is based on a rubric each student created for themselves.

Cordle's goal was to become more comfortable expressing an opinion — even before she's fully formed it. "It's easy to stay quiet when you're unsure," she says. "Compass helped me practice speaking up and being willing to be wrong, because that's how you learn."

One week, the class debated whether a meritocracy creates a just society — an especially relevant topic at MIT, given its famously selective admissions process. 

Students picked their stance beforehand, and were then invited to change it as they gained more perspectives during the debate.

"This helps students grasp not only the flaws in another viewpoint, but also how to strengthen their arguments," Tsai says.

Cordle, who hopes to go into prosthetics, views her future field as representing the perfect balance between creativity and ethics. "The humanities challenge how we view our fields as scientists and engineers," she says.

A compass helps travelers find their way — but it's most useful when they need to reorient and change direction. In that spirit, Compass prepares students not just to ask big questions, but to keep asking — and keep adapting — as their lives and careers evolve.

“Bringing these unexpected class elements together with students and faculty generated magical alchemy — a kind of transformation that we didn't even know we could create,” Tsai says.

In addition to the class, the MIT Compass Podcast engages in these fundamental questions with guests from across the MIT schools of Science and Engineering. There are also plans to adapt the residential version of this class for online learners on MITx.

In addition to philanthropic support from MIT Corporation life member emeritus Ray Stata '57, the initiative is supported by the Office of the Vice Chancellor and the MIT Human Insight Collaborative's SHASS Education Innovation Fund, which promotes new, transformative educational approaches in SHASS fields.

Radio Hobbyists, Rejoice! Good News for LoRa & Mesh

EFF: Updates - Wed, 07/16/2025 - 2:21pm

A set of radio devices and technologies are opening the doorway to new and revolutionary forms of communication. These have the potential to break down the over-reliance on traditional network hierarchies, and present collaborative alternatives where resistance to censorship, control, and surveillance is baked into the network topology itself. Here, we look at a few of these technologies and what they might mean for the future of networked communications.

The idea of what is broadly referred to as mesh networking isn’t new: the resilience and scalability of mesh technology have seen it adopted in router and IoT protocols for decades. What’s new are cheap devices that can be used without a radio license to communicate over (relatively) large distances, or LOng RAnge, thus the moniker LoRa.

Although it uses different operating frequencies in different countries, LoRa works in essentially the same way everywhere. It uses Chirp Spread Spectrum to broadcast digital communications across a physical landscape, with a range of several kilometers in the right environmental conditions. When other capable devices pick up a signal, they can then pass it along to other nodes until the message reaches its destination—all without relying on a single centralized host.

These communications are of very low bit-rate—often less than a few kbps (kilobits per second) at a distance—and use very little power. You won’t be browsing the web or streaming video over LoRa, but it is useful for sending messages in a wide range of situations where traditional infrastructure is lacking or intermittent, and communication with others over dispersed or changing physical terrain is essential. For instance, a growing body of research is showing how Search and Rescue (SAR) teams can greatly benefit from the use of LoRa, specifically when coupled with GPS sensors, and especially when complemented by line-of-sight LoRa repeaters.

Meshtastic

By far the most popular of these indie LoRa communication systems is Meshtastic. For hobbyists just getting started in the world of LoRa mesh communications, it is the easiest way to get up, running, and texting with others in your area who also happen to have a Meshtastic-enabled device. It also facilitates direct communication with other nodes using end-to-end encryption. And by default, a Meshtastic device will rebroadcast messages that originate from 3 or fewer nodes (or “hops”) away. This means messages tend to propagate farther, with the power of the mesh collaborating to make delivery possible. As a single-application use of LoRa, it is an exciting experiment to take part in.
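
Mechanically, this is hop-limited flooding. A minimal sketch of the idea (illustrative only, not Meshtastic's actual implementation):

```python
# Hop-limited rebroadcast, sketched in Python. Names and structure are
# invented for illustration; real Meshtastic firmware differs.
from dataclasses import dataclass

@dataclass
class Packet:
    packet_id: int
    payload: bytes
    hop_limit: int = 3   # default hop budget, per the description above

def on_receive(packet: Packet, seen: set, radio_send) -> None:
    if packet.packet_id in seen:
        return                      # already relayed; drop the duplicate
    seen.add(packet.packet_id)
    if packet.hop_limit > 0:
        # Forward a copy with one fewer hop remaining, so the message
        # dies out after a bounded number of relays across the mesh.
        radio_send(Packet(packet.packet_id, packet.payload,
                          packet.hop_limit - 1))
```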

Reticulum

While Reticulum is often put into the same category as Meshtastic, and although both enable communication over LoRa, the comparison breaks down quickly after that. Reticulum is not a single application, but an entire network stack that can be arbitrarily configured to connect through existing TCP/IP, the anonymizing I2P network, directly through a local WiFi connection, or through LoRa radios. The Reticulum network’s LXMF transfer protocol allows arbitrary applications to be built on top of it, such as messaging, voice calls, file transfer, and lightweight, text-only browsing. And those are only a few of the applications that have already been developed—the possibilities are endless.

Although there are a number of community hubs to join, run by Reticulum enthusiasts, you don’t have to join any of them: you can build your own Reticulum network with your own and your friends’ devices and transports, locally over LoRa or remotely over traditional infrastructure, and bridge them as you please. Nodes themselves are universally addressed and sovereign, meaning they are free to connect anywhere without losing the universally unique address which defines them. All communications between nodes are encrypted end-to-end, using a strong choice of cryptographic primitives. And although it’s been actively developed for over a decade, it recently reached the noteworthy milestone of a 1.0 release. It’s a very exciting ecosystem to be a part of, and we can’t wait to see the community develop it even further. A number of clients are available to start exploring.

Resilient Infrastructure

On a more somber note, let’s face it: we live in an uncertain world. With the frequency of environmental disasters, political polarization, and infrastructure attacks increasing, the stability of networks we have traditionally relied upon is far from assured.

Yet even with the world as it is, developers are creating new communications networks that have the potential to help in unexpected situations we might find ourselves in. Not only are these technologies built to be useful and resilient, they also empower individuals by circumventing censorship and platform control, giving people a way to empower each other through sharing resources.

In that way, this movement can be seen as a technological inheritor of the hopefulness, experimentation, and, yes, fun that were so present in the early internet. These technologies offer a promising path forward for building our way out of tech dystopia.

EFF and 80 Organizations Call on EU Policymakers to Preserve Net Neutrality in the Digital Networks Act

EFF: Updates - Wed, 07/16/2025 - 2:19pm

As the European Commission prepares an upcoming proposal for a Digital Networks Act (DNA), a growing network of groups is raising serious concerns about the resurgence of “fair share” proposals from major telecom operators. The original idea was to make certain companies pay network usage fees to ISPs. We have said it before and we’ll say it again: there is nothing fair about this “fair share” proposal, which could undermine net neutrality and hurt consumers by changing how content is delivered online. Now the EU Commission is toying with an alternative idea: the introduction of a dispute resolution mechanism to foster commercial agreements between tech firms and telecom operators.

EFF recently joined a broad group of more than 80 signatories, from civil society organizations to audio-visual companies, in a joint statement aimed at preserving net neutrality in the DNA.

In the letter, we argue that the push to introduce a mandatory dispute resolution mechanism into EU law would pave the way for content and application providers (CAPs) to pay network fees for delivering traffic. These ideas, recycled from 2022, are being marketed as necessary for funding infrastructure, but the real cost would fall on the open internet, competition, and users themselves.

This isn't just about arcane telecom policy—it’s a battle over the future of the internet in Europe. If the DNA includes mechanisms that force payments from CAPs, we risk higher subscription costs, fewer services, and less innovation, particularly for European startups, creatives, and SMEs. Worse still, there’s no evidence of market failure to justify such regulatory intervention. Regulators like BEREC have consistently found that the interconnection market is functioning smoothly. What’s being proposed is nothing short of a power grab by legacy telecom operators looking to resurrect outdated, monopolistic business models. Europe has long championed an open, accessible internet—now’s the time to defend it.

🤕 A Surveillance Startup in Damage Control | EFFector 37.8

EFF: Updates - Wed, 07/16/2025 - 1:12pm

We're a little over halfway through the year! Which... could be good or bad depending on your outlook... but never mind that—EFF is here to keep you updated on the latest digital rights news, and we've got you covered with an all-new EFFector!

With issue 37.8, we're covering a recent EFF investigation into AI-generated police reports, a secret deal to sell flight passenger data to the feds (thanks data brokers), and why mass surveillance cannot be fixed with a software patch. 

Don't forget to check out our audio companion to EFFector as well! We're interviewing staff about some of the important work that they're doing. This time, EFF's Associate Director of Activism Sarah Hamid explains the harms caused by ALPRs and what you can do to fight back. Listen now on YouTube or the Internet Archive.


Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Hacking Trains

Schneier on Security - Wed, 07/16/2025 - 12:57pm

Seems like an old system that predates any care about security:

The flaw has to do with the protocol used in a train system known as the End-of-Train and Head-of-Train. A Flashing Rear End Device (FRED), also known as an End-of-Train (EOT) device, is attached to the back of a train and sends data via radio signals to a corresponding device in the locomotive called the Head-of-Train (HOT). Commands can also be sent to the FRED to apply the brakes at the rear of the train.

These devices were first installed in the 1980s as a replacement for caboose cars, and unfortunately, they lack encryption and authentication protocols. Instead, the current system uses data packets sent between the front and back of a train that include a simple BCH checksum to detect errors or interference. But now CISA is warning that someone using a software-defined radio could potentially send fake data packets and interfere with train operations...
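
The underlying issue: an error-detecting code tells the receiver a packet wasn't corrupted in transit, not who sent it. A toy sketch of that distinction, using CRC32 as a stand-in for the actual BCH code:

```python
# Checksums detect accidental corruption, not forgery: anyone who knows
# the packet format can compute a valid check value for a fake command.
# CRC32 stands in for the real BCH code purely for illustration.
import zlib

def make_packet(command: bytes) -> bytes:
    return command + zlib.crc32(command).to_bytes(4, "big")

def accept(packet: bytes) -> bool:
    command, check = packet[:-4], packet[-4:]
    return zlib.crc32(command) == int.from_bytes(check, "big")

legit = make_packet(b"STATUS:OK")
forged = make_packet(b"APPLY_BRAKES")    # attacker-crafted, checksum valid
assert accept(legit) and accept(forged)  # receiver can't tell them apart
```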

The IRA was bearing fruit. Then Trump killed it.

ClimateWire News - Wed, 07/16/2025 - 6:37am
The first half of 2025 showed the promise of the giant climate law passed under Biden. Those clean energy trends are expected to dim.

Trump’s megalaw casts shadow over solar manufacturing

ClimateWire News - Wed, 07/16/2025 - 6:36am
New restrictions that bar Chinese companies from receiving clean energy tax credits could stifle a boom in U.S. solar module factories.

Massachusetts debates $3B bond for climate projects

ClimateWire News - Wed, 07/16/2025 - 6:35am
The lieutenant governor told lawmakers that the proposal is necessary because "there's no cavalry coming from Washington to save us."

Exiled Biden officials release carbon technology wish list

ClimateWire News - Wed, 07/16/2025 - 6:34am
The group identified about 30 climate policies. Some of them might even find bipartisan support.

Australian judge warns of climate threat but says he can’t force government action

ClimateWire News - Wed, 07/16/2025 - 6:33am
Justice Michael Wigney acknowledged in his ruling that the Torres Strait Islands and their people are being “ravaged by human induced climate change.”

Greens challenge Trump’s $4.7B loan for Mozambique LNG project

ClimateWire News - Wed, 07/16/2025 - 6:32am
The project’s opponents argue the Export-Import Bank lacked a quorum to approve the loan and failed to consider environmental or security risks.

EU to outline what tech can be used for permanent CO2 removal

ClimateWire News - Wed, 07/16/2025 - 6:31am
The European Commission will set out rules for certifying tools such as direct air carbon capture and storage, bioenergy carbon capture and storage, and biochar, according to a document seen by Bloomberg News.

New Zealand farmers slam proposed green finance rules

ClimateWire News - Wed, 07/16/2025 - 6:31am
They are concerned about the so-called Sustainable Finance Taxonomy, a system for classifying economic activities according to their environmental impact.

Australian rock art site near LNG hub gets World Heritage status

ClimateWire News - Wed, 07/16/2025 - 6:30am
Activists have launched legal proceedings against a planned expansion of the nearby liquefied natural gas export hub operated by Woodside Energy Group.

Jane Fonda warns climate and democracy are both in crisis

ClimateWire News - Wed, 07/16/2025 - 6:30am
“We have two essential crises and for both, it’s now or never: democracy and climate,” the actor told the Bloomberg Green Seattle conference Monday.

Podcast Episode: Finding the Joy in Digital Security

EFF: Updates - Wed, 07/16/2025 - 3:05am

Many people approach digital security training with furrowed brows, as an obstacle to overcome. But what if learning to keep your tech safe and secure was consistently playful and fun? People react better to learning, and retain more knowledge, when they're having a good time. It doesn’t mean the topic isn’t serious – it’s just about intentionally approaching a serious topic with joy.


(You can also find this episode on the Internet Archive and on YouTube.) 

That’s how Helen Andromedon approaches her work as a digital security trainer in East Africa. She teaches human rights defenders how to protect themselves online, creating open and welcoming spaces for activists, journalists, and others at risk to ask hard questions and learn how to protect themselves against online threats. She joins EFF’s Cindy Cohn and Jason Kelley to discuss making digital security less complicated, more relevant, and more joyful to real users, and encouraging all women and girls to take online safety into their own hands so that they can feel fully present and invested in the digital world. 

In this episode you’ll learn about:

  • How the Trump Administration’s shuttering of the United States Agency for International Development (USAID) has led to funding cuts for digital security programs in Africa and around the world, and why she’s still optimistic about the work
  • The importance of helping women feel safe and confident about using online platforms to create positive change in their communities and countries
  • Cultivating a mentorship model in digital security training and other training environments
  • Why diverse input creates training models that are accessible to a wider audience
  • How one size never fits all in digital security solutions, and how Dungeons & Dragons offers lessons to help people retain what they learn 

Helen Andromedon – a moniker she uses to protect her own security – is a digital security trainer in East Africa who helps human rights defenders learn how to protect themselves and their data online and on their devices. She played a key role in developing the Safe Sisters project, which is a digital security training program for women. She’s also a UX researcher and educator who has worked as a consultant for many organizations across Africa, including the Association for Progressive Communications and the African Women’s Development Fund.


What do you think of “How to Fix the Internet?” Share your feedback here.

Transcript

HELEN ANDROMEDON: I'll say it bluntly. Learning should be fun. Even if I'm learning about your tool, maybe you design a tutorial that is fun for me to read through, to look at. It seems like that helps with knowledge retention.
I've seen people responding to activities and trainings that are playful. And yet we are working on a serious issue. You know, we are developing an advocacy campaign, it's a serious issue, but we are also having fun.

CINDY COHN: That's Helen Andromedon talking about the importance of joy and play in all things, but especially when it comes to digital security training. I'm Cindy Cohn, the executive director of the Electronic Frontier Foundation.

JASON KELLEY: And I'm Jason Kelley, EFF's activism director. This is our podcast, How to Fix the Internet.

CINDY COHN: This show is all about envisioning a better digital world for everyone. Here at EFF, we often specialize in thinking about worst case scenarios and of course, jumping in to help when bad things happen. But the conversations we have here are an opportunity to envision the better world we can build if we start to get things right online.

JASON KELLEY: Our guest today is someone who takes a very active role in helping people take control of their digital lives and experiences.

CINDY COHN: Helen Andromedon - that's a pseudonym by the way, and a great one at that – is a digital security trainer in East Africa. She trains human rights defenders in how to protect themselves digitally. She's also a UX researcher and educator, and she's worked as a consultant for many organizations across Africa, including the Association for Progressive Communications and the African Women's Development Fund.
She also played a key role in developing the Safe Sisters project, which is a digital security training, especially designed for women. Welcome Helen. Thank you so much for joining us.

HELEN ANDROMEDON: Thanks for having me. I've been a huge fan of the tools that came out of EFF and working with Ford Foundation. So yeah, it's such a blast to be here.

CINDY COHN: Wonderful. So we're in a time when a lot of people around the world are thinking more seriously than ever about how to protect their privacy and security. and that's, you know, from companies, but increasingly from governments and many, many other potential bad actors.
You know, there's no one size fits all training, as we know. And the process of determining what you need to protect and from whom you need to protect it is different for everybody. But we're particularly excited to talk to you, Helen, because you know that's what you've been doing for a very long time. And we want to hear how you think about, you know, how to make the resources available to people and make sure that the trainings really fit them. So can you start by explaining what the Safe Sisters project is?

HELEN ANDROMEDON: It's a program that came out of a collaboration amongst friends, but friends who were also working in different organizations and also were doing trainings. In the past, what would happen is, we would send out an application: Hey, there's a training going on. But there was a different number of women that would actually apply to this fellowship.
It would always be very unequal. So what we decided to do is really kind of like experimenting is say, what if we do a training but only invite, women and people who are activists, people who are journalists, people who are really high risk, and give them a space to ask those hard questions because there are so many different things that come out of suffering online harassment and going through that in your life, you, when you need to share it, sometimes you do need a space where you don't feel judged, where you can kind of feel free to engage in really, really traumatic topics. So this fellowship was created, it had this unique percentage of people that would apply and we started in East Africa.
I think now because of what has happened in the last, I, I guess, three months, it has halted our ability to run the program in as many regions as need it. Um, but Safe Sister, I think what I see, it is a tech community of people who are able to train others or help others solve a problem.
So what problems do, I mean, so for example, I, I think I left my, my phone in the taxi. So what do I do? Um, how do I find my phone? What happens to all my data? Or maybe it could be a case of online harassment where there's some sort of revenge from the other side, from the perpetrator, trying to make the life of the victim really, really difficult at the moment.
So we needed people to be able to have solutions available to talk about and not just say, okay, you are a victim of harassment. What should I do? There's nothing to do, just go offline. No, we need to respond, but many of us don't have the background in ICT, uh, for example, in my region. I think that it is possible now to get a, a good background in IT or ICT related courses, um, up to, um, you know, up to PhD level even.
But sometimes I've, in working with Safe Sister, I've noticed that even such people might not be aware of the dangers that they are facing. Even when they know OPSEC and they're very good at it. They might not necessarily understand the risks. So we decided to keep working on the content each year, every time we can run the program, work on the content: what are the issues, currently, that people are facing? How can we address them through an educational fellowship, which is very, very heavy on mentorship. So mentorship is also a thing that we put a lot of stress on because again, we know that people don't necessarily have the time to take a course or maybe learn about encryption, but they are interested in it. So we want to be able to serve all the different communities and the different threat models that we are seeing.

CINDY COHN: I think that's really great and I, I wanna, um, drill in a couple of things. So first thing you, uh, ICT, Information and Communications Technologies. Um, but what I, uh, what I think is really interesting about your approach is the way the fellowship works. You know, you're kind of each one teach one, right?
You're bringing in different people from communities. And if you know, most of us, I think as a, as a model, you know, finding a trusted person who can give you good information is a lot easier than going online and finding information all by yourself. So by kind of seeding these different communities with people who've had your advanced training, you're really kind of able to grow who gets the information. Is that part of the strategy to try to have that?

HELEN ANDROMEDON: It's kind of like two ways. So there is the way where we, we want people to have the information, but also we want people to have the correct information.
Because there is so much available, you can just type in, you know, into your URL and say, is this VPN trusted? And maybe you'll, you'll find a result that isn't necessarily the best one.
We want people to be able to find the resources that are guaranteed by, you know, EFF or by an organization that really cares about digital rights.

CINDY COHN: I mean, that is one of the problems of the current internet. When I started out in the nineties, there just wasn't information. And now really the role of organizations like yours is sifting through the misinformation, the disinformation, just the bad information to really lift up, things that are more trustworthy. It sounds like that's a lot of what you're doing.

HELEN ANDROMEDON: Yeah, absolutely. How I think it's going, I think you, I mean, you mentioned that it's kind of this cascading wave of, you know, knowledge, you know, trickling down into the communities. I do hope that's where it's heading.
I do see people reaching out to me who have been at Safe Sisters, um, asking me, yo Helen, which training should I do? You know, I need content for this. And you can see that they're actively engaging still, even though they went through the fellowship like say four years ago. So that I think is like evidence that maybe it's kind of sustainable, yeah.

CINDY COHN: Yeah. I think so. I wanted to drill down on one other thing you said, which is of course, you mentioned the, what I think of as the funding cuts, right, the Trump administration cutting off money for a lot of the programs like Safe Sisters, around the world. and I know there are other countries in Europe that are also cutting, support for these kind of programs.
Is that what you mean in terms of what's happened in the last few months?

HELEN ANDROMEDON: Yeah. Um, it's really turned around what our expectations for the next couple of years say, yeah, it's really done so, but also there's an opportunity for growth to recreate how, you know, what kind of proposals to develop. It's, yeah, it's always, you know, these things. Sometimes it's always just a way to change.

CINDY COHN: I wanna ask one more question. I really will let Jason ask some at some point, but, um, so what does the world look like if we get it right? Like if your work is successful, and more broadly, the internet is really supporting these kind of communities right now, what does it look like for the kind of women and human rights activists who you work with?

HELEN ANDROMEDON: I think that most of them would feel more confident to use those platforms for their work. So that gives it an extra boost because then they can be creative about their actions. Maybe it's something, maybe they want, you know, uh, they are, they are demonstrating against, uh, an illegal and inhumane act that has passed through parliament.
So online platforms. If they could, if it could be our right and if we could feel like the way we feel, you know, in the real world. So there's a virtual and a real world, you're walking on the road and you know you can touch things.
If we felt ownership of our online spaces so that you feel confident to create something that maybe can change. So in, in that ideal world, it would be that the women can use online spaces to really, really boost change in their communities and have others do so as well because you can teach others and you inspire others to do so. So it's, like, pops up everywhere and really makes things go and change.
I think also for my context, because I've worked with people in very repressive regimes where it is, the internet can be taken away from you. So it's things like the shutdowns, it's just ripped away from you. Uh, you can no longer search, oh, I have this, you know, funny thing on my dog. What should I do? Can I search for the information? Oh, you don't have the internet. What? It's taken away from you. So if we could have a way where the infrastructure of the internet was no longer something that was, like, in the hands of just a few people, then I think – So there's a way to do that, which I've recently learned from speaking to people who work on these things. It's maybe a way of connecting to the internet to go on the main highway, which doesn't require the government, um, the roadblocks and maybe it could be a kind of technology that we could use that could make that possible. So there is a way, and in that ideal world, it would be that, so that you can always find out, uh, what that color is and find out very important things for your life. Because the internet is for that, it's for information.
Online harassment, that one. I, I, yeah, I really would love to see the end of that. Um, just because, so also acknowledging that it's also something that has shown us. As human beings also something that we do, which is not be very kind to others. So it's a difficult thing. What I would like to see is that this future, we have researched it, we have very good data, we know how to avoid it completely. And then we also draw the parameters, so that everybody, when something happens to you, doesn't make you feel good, which is like somebody harassing you that also you are heard, because in some contexts, uh, even when you go to report to the police and you say, look, this happened to me. Sometimes they don't take it seriously, but because of what happens to you after and the trauma, yes, it is important. It is important and we need to recognize that. So it would be a world where you can see it, you can stop it.

CINDY COHN: I hear you and what I hear is that, that the internet should be a place where it's, you know, always available, and not subject to the whims of the government or the companies. There's technologies that can help do that, but we need to make them better and more widely available. That speaking out online is something you can do. And organizing online is something you can do. Um, but also that you have real accountability for harassment that might come as a response. And that could be, you know, technically protecting people, but also I think that sounds more like a policy and legal thing where you actually have resources to fight back if somebody, you know, misuses technology to try to harass you.

HELEN ANDROMEDON: Yeah, absolutely. Because right now the cases get to a point where it seems like depending on the whim of the person in charge, maybe if they go to, to report it, the case can just be dropped or it's not taken seriously. And then people do harm to themselves also, which is on, like, the extreme end and which is something that's really not, uh, nice to happen and should, it shouldn't happen.

CINDY COHN: It shouldn't happen, and I think it is something that disproportionately affects women who are online or marginalized people. Your vision of an internet where people can freely gather together and organize and speak is actually available to a lot of people around the world, but, but some people really don't experience that without tremendous blowback.
And that's, um, you know, that's some of the space that we really need to clear out so that it's a safe space to organize and make your voice heard for everybody, not just, you know, a few people who are already in power or have the, you know, the technical ability to protect themselves.

JASON KELLEY: We really want to, I think, help talk to the people who listen to this podcast and really understand and are building a better future and a better internet. You know, what kind of things you've seen when you train people. What are you thinking about when you're building these resources and these curriculums? What things come up like over and over that maybe people who aren't as familiar with the problems you've seen or the issues you've experienced.

HELEN ANDROMEDON: Yeah, I mean the, hmm, I, maybe they could be a couple of, of reasons that I think, um, what would be my view is, the thing that comes up in trainings is of course, you know, hesitation. There's this new thing and I'm supposed to download it. What is it going to do to my laptop?
My God, I share this laptop. What is it going to do? Now they tell me, do this, do this in 30 minutes, and then we have to break for lunch. So that's not enough time to actually learn, because then you have to practice, or you could practice, you could throw in a practice session, but then you leave this person, and that person will, as is normal, forget. Very normal. It happens. So the issue sometimes is that kind of hesitation to play with the tech toys. And I think that it's good, because we are cautious and we want to protect this device that was really expensive to get. Maybe it's borrowed, maybe it's secondhand.
I won't get, you know, like so many things that come up in our day to day because of, of the cost of things.

JASON KELLEY: You mentioned like what do you do when you leave your phone in a taxi? And I'll say that, you know, a few days ago I couldn't find my phone after I went somewhere and I completely freaked out. I know what I'm doing usually, but I was like, okay, how do I turn this thing off?
And I'm wondering like that taxi scenario, is that, is that a common one? Are there, you know, others that people experience there? I, I know you mentioned, you know, internet shutoffs, which happen far too frequently, but a lot of people probably aren't familiar with them. Is that a common scenario? You have to figure out what to do about, like, what are the things that pop up occasionally that, people listening to this might not be as aware of.

HELEN ANDROMEDON: So losing a device or a device malfunctioning is like the top one and internet shutdown is down here because they are not, they're periodic. Usually it's when there's an election cycle, that's when it happens. After that, you know, you sometimes, you have almost a hundred percent back to access. So I think I would put losing a device, destroying a device.
Okay, now what do I do for the case of the taxi? The phone in the taxi. First of all, the taxi is probably crowded. So you think that phone will most likely not be returned.
So maybe there's intimate photos. You know, there's a lot, there's a lot that, you know, can be. So then if this person doesn't have a great password, which is usually the case because there is not so much emphasis when you buy a device. There isn't so much emphasis on, Hey, take time to make a strong password now. Now it's better. Now obviously there are better products available that teach you about device security as you are setting up the phone. But usually you buy it, you switch it on, so you don't really have the knowledge. This is a better password than that. Or maybe don't forget to put a password, for example.
So that person responding to that case would be now asking if they had maybe the find my device app, if we could use that, if that could work, like as you were saying, there's a possibility that it might, uh, bing in another place and be noticed and for sure taken away. So there's, it has to be kind of a backwards, a learning journey to say, let's start from ground zero.

JASON KELLEY: Let's take a quick moment to say thank you to our sponsor. How to Fix the Internet is supported by the Alfred P. Sloan Foundation's Program in Public Understanding of Science and Technology, enriching people's lives through a keener appreciation of our increasingly technological world and portraying the complex humanity of scientists, engineers, and mathematicians.
We also wanna thank EFF members and donors. You are the reason we exist.
You can become a member for just $25, and for a little more you can get some great, very stylish gear. The more members we have, the more power we have, in statehouses, courthouses, and on the streets.
EFF has been fighting for digital rights for decades, and that fight is bigger than ever. So please, if you like what we do, go to eff.org/pod to donate.
We also wanted to share that our friend Cory Doctorow has a new podcast. Listen to this.  [Who Broke the Internet trailer]
And now back to our conversation with Helen Andromedon.

CINDY COHN: So how do you find the people who come and do the trainings? How do you identify people who would be good fellows, or who need to come in for training? Because I think that's its own problem, especially since Safe Sisters is spread out across multiple countries.

HELEN ANDROMEDON: So far it has been a combination of partners saying, hey, we have an idea, and then seeing where the issues are.

As you know, a fellowship needs resources. So say there's a partner in Madagascar working on digital rights who is interested because of the methodology. They would like to make sure that their community, maybe staff, and maybe people that they've given sub-grants to, can communicate safely, that nothing is leaked, that they can work well. And they're looking for how to do this: we need trainers, we need content, we need somebody who understands learning, separate from the resources. I think the Safe Sister fellowship works because you can pick it up and design it for whatever context you have.

I think that has made it stronger. You take it, you make it your own. So it has happened like that: a partner has an interest, we have the methodology, we have the trainers, and we have the tools. And then that's how it happens.

CINDY COHN: What I'm hearing is that there's already a pretty strong network of partners across Africa and the communities you serve. We know this from EFF, because we hear from them as well: there's a pretty well-developed set of groups doing digital activism, and human rights defenders using technology, across Africa and the other communities you work with. And you have this network, and you are the go-to people when people in the network realize they need a higher level of security thinking and training than they have. Does that sound right?

HELEN ANDROMEDON: That sounds right, yeah. A higher level of awareness. And usually it comes down to: how do we keep this information safe? Because we are having incidents. Yeah.

CINDY COHN: Do you have an incident that you could explain?

HELEN ANDROMEDON: Take queer communities, say. There was an incident of an executive director being kidnapped, and we think it probably had to do with how influential they were and what kind of message they were sending. So it's apparent. And then shortly after that incident, there was a break-in at the office space. Now, that one is actually quite common, especially in the civic space. They were storing case files, everything in hard copy. All the information was there: receipts, checks, payment details. That is very, very tragic in a case like that.

So because this kind of incident had happened in multiple places, we decided to run a program for all the staff involved in the day-to-day work. As a response to what happened, everybody gets some education. We have some quizzes, we have some tests, we have some community, we keep people engaged. That would help, and they'll be more prepared in case it happens again.

CINDY COHN: Oh yeah. And this is such an old, old issue. When we were doing the encryption fight in the nineties, we had stories of people in El Salvador and Guatemala where the office gets raided, the information gets into the hands of the government or whoever the opposition is, and then other people start disappearing and getting targeted too, because their identities are revealed in the information that gets seized. And that sounds like the very same pattern that you're still seeing.

HELEN ANDROMEDON: Yeah, there's a lot to consider for that case. Cloud backups; whether there's somebody who can host their server. It's a very interesting case.

CINDY COHN: Yeah. I think it's an ongoing issue, and there are better tools than we had in the nineties, but people need to know about them, and actually using them is not easy. You have to actually think about it.

HELEN ANDROMEDON: Yeah. I've seen a model that works, where the tool is great and working well. I've seen it with the Tor Project, because the Tor Project has user communities. What it appears to be doing is engaging people with safety trainings, so people get value from using your tool, because they get all this information, not only about your tool but about safety in general. That's a good model: build user communities, and then your tool gets used. Because I think adoption is also a problem.

CINDY COHN: Yeah. I mean, this is another traditional problem: the trainers will come in and do a training, but then nobody is really trained well enough to continue to use the tool.
And I see you, you know, building networks and building community and also having, you know, enough time for people to get familiar with and use these tools so that they won't just drop it after the training's over. It sounds like you're really thinking hard about that.

HELEN ANDROMEDON: Yeah. I think we have many opportunities, but the learning is so difficult to cultivate, and we don't have the resources to make it long term. So yes, you do risk having all the information forgotten.

JASON KELLEY: I want to quickly emphasize some of the scenarios that you, Cindy, have talked about, and that you, Helen, just mentioned: potential break-ins, harassment, kidnapping. It's awful, but I think this is what makes this kind of training so necessary. I know this seems obvious to many people listening, and to the folks here, but it needs emphasizing that these are serious issues, and that's why you can't make a one-size-fits-all training: these are real problems that someone might not have to deal with in one country and might be a regular problem in another. Is there a difference you can clarify in how you would train, for example, groups of women who are experiencing one kind of threat and need digital security advice or help, versus, let's say, human rights defenders? Is the training completely different, or does it really emphasize the same things, like protecting your privacy, protecting your data, and using certain tools?

HELEN ANDROMEDON: Yeah. Jason, let me first respond to your comment about the tools. One size fits all is obviously wrong. Maybe get more people of diverse backgrounds working on that tool, and they'll give you their opinions, because development is a process. You don't just develop a tool; you have time to change, modify, test. If you had somebody like that in the room, they would tell you whether they'd use it. If you had two, that would be great, because now you have two different points of evidence. And keep mixing. I know it's expensive: you have to do it one way, get feedback, then do it another way. But I think we should just do more of that.

Now, how do I train? The training isn't that different. There are some core concepts that we keep. So if I had five days, I would spend one or two on the more technical concepts of digital safety, which everybody has to do: look, this is my device, this is how it works, this is how I keep it safe. This is my account, this is how it works, this is how I keep it safe.

Then, when you have more time, you can dive into the personas. Let's say it's a journalist: is there a resource with specific tools developed for journalists? Maybe there's something like a panic button that they need. You start to put all these things together, and in the remaining time you can hone in on those differences.

Now, for women it would be slightly different. If it's human rights defenders and it's a mixed group, I still cover cyber harassment, because it affects everyone. For women, maybe we go into self-defense and really hone in on the finer points of responding to online harassment, because in their case, and you see this when you do a threat model, it's more likely to happen, because of their gender and because of the work that they do. So I think that's how I would approach the two.

JASON KELLEY: One quick thing I want to mention that you brought up earlier is shared devices. There's a lot of solutionism in government, especially right now, with this assumption that everyone has exactly one device: if you just say everyone has their own phone and their own computer, then you can, let's say, age-verify people. You can say kids who use this phone can't go to this website, and adults who use this other phone can. And this is a regular issue we've seen, where there's no awareness that a lot of people are buying secondhand devices and a lot of people are sharing devices.

HELEN ANDROMEDON: Yeah, absolutely. Shared devices is always our assumption, and then we do get a few people who have their own devices. And Jason, I just wanted to add one more factor that can make this worse. With shared devices, because of the context and the regions that I'm in, you also have cultural and religious norms that sometimes mean you don't have liberty over your own device. At any time, your spouse or your parent can just take it from you and demand that you let them in. So even if you could all have your own devices, access to a device can still be shared.

CINDY COHN: So as you look at the world of tools that are available, where are the gaps? Where would you like to see better tools, or different tools, or tools at all, to help protect and empower the communities you work with?

HELEN ANDROMEDON: We need a solution for the internet shutdowns, because sometimes they have health repercussions: you could have a serious need and no access to the internet. So I don't know, we need to figure that one out. The technology is there, as you mentioned before, but it needs to be more developed and tested.

It would also be nice to have technology that responds to victims or gives them advice. I've seen interventions done case by case, and so many people are doing them now: they verify you, then they help you with whatever it is. But that's a slow process. You're processing the information, it's very traumatic, so you need good advice. You need to stay calm, think through your options, make a plan, and then carry out the plan. That's the kind of advice I mean. Maybe there are apps for this already, but if I'm not using them, maybe that means they're not well known as of now. That's technology I would like to see.

Also, everything that is available, the good stuff, is really well written, and it's getting better: more visuals, more videos, more human interaction, not just text. And mind you, I'm a huge fan of text, like the GitHub kind of text. That's awesome. But sometimes, just to get into the topic, you need a different kind of ticket. I don't know if we can invest in that, but the content is really good.

And practice would be nice. We need practice. How do we get practice? That's a question I would leave with you: how do you practice a tool on your own? It's good for you, but how do you practice it on your own? It's things like that, helping the person onboard, and resources to help that transition. You want people to use these tools at scale.

JASON KELLEY: I wonder if you can talk a bit about that moment when you're training someone and you realize that they really get it. Maybe it's because it's fun, or maybe it's because they finally understand, oh, that's how this works. I assume it's something you see a lot, because you're clearly an experienced and successful teacher, but it's just such a lovely moment when you're trying to teach someone something.

HELEN ANDROMEDON: Yeah, I mean, I can't speak for everybody, but I'll speak for myself. There are some things that surprised me, sitting in a class, in a workshop room, or reading a tutorial: watching how the internet works, reading about the cables, but also reading about electromagnetism. All those things were so different from what we were talking about, which was the internet and civil society, all that stuff. But the science of it, the way it works, for me that's enough, because it's really great.
But then, say we are doing a session on how the internet works in relation to internet shutdowns. Is it enough to just talk about it? Are we jumping from problem to solution, or can we give it some time, so that the person doesn't forget? Can we take time to explain the concept, almost like moving their face away from the issue for a little bit? It's like a deception.

So let's talk about electromagnetism, which you won't forget. Maybe you put two and two together about the fiber optic cables. Maybe you give the right answer to a question at a talk. It's about trying to make connections, because we don't have that background. We don't have a tech background.

I just discovered Dungeons and Dragons at my age. So we don't have that habit of liking tech and playing with it. We don't really have that, at least in my context. So get us there. Be sneaky, but get us there.

JASON KELLEY: You have to be a really good dungeon master. That's what I'm hearing. That's very good.

HELEN ANDROMEDON: Yes.

CINDY COHN: I think that's wonderful, and I agree with you about bringing the joy, making it fun, and making it interesting on multiple levels.

Learning about the science, as well as just how to do things, can add a layer of connection for people that helps keep them engaged and keeps them in it. And when stuff goes wrong, if you actually understand how it works under the hood, you're in a better position to decide what to do next.

So it not only makes it fun and interesting, it gives people a deeper level of understanding that can help them down the road.

HELEN ANDROMEDON: Yeah, I agree. Absolutely.

JASON KELLEY: Yeah, Helen, thanks so much for joining us – this has been really helpful and really fun.
Well, that was really fun, and really useful, I think, for people who are thinking about digital security, and for people who don't spend much time thinking about digital security but maybe should start. Something she mentioned, the train-the-trainer model, reminded me that we should mention our Surveillance Self-Defense guides, which are available at ssd.eff.org.
We talked about them a little bit. They're a great resource, as is the Security Education Companion website, securityeducationcompanion.org.
Both of these are great things that came up, and people might want to check them out.

CINDY COHN: Yeah, it's wonderful to hear someone like Helen, who's really out there in the field working with people, say that these guides help her. We try to be kind of the brain trust for people all over the world who are doing these trainings, but also to make it easy if you're someone who's interested in learning how to do trainings; we have materials that'll help you get started. And as we all know, we're in a time when more people are coming to us and other organizations seeking security help than ever before.

JASON KELLEY: Yeah, and unfortunately there are fewer resources now in terms of funding. So it's important that people have access to these kinds of guides. And that was something we talked about that kind of surprised me: Helen was really optimistic about the funding cuts, not about the cuts themselves, obviously, but about the opportunities for growth they could create.

CINDY COHN: Yeah, I think this really is what resilience sounds like. You get handed a situation in which you lose a lot of your funding support, and she's used to pivoting, and she pivots toward: okay, these are the opportunities for us to grow, to build new baselines for the work that we do. And I really believe she's going to do that. The attitude just shines through in the way she approaches adversity.

JASON KELLEY: Yeah. And while we're thinking about the parts we're going to take away from this, I really loved the way she brought up the need for people to feel ownership of the online world. She was talking about infrastructure specifically in that moment, but this is something that's come up quite a bit in our conversations with people.

CINDY COHN: Yeah, her framing of how important the internet is to people all around the world, and the work that our friends at Access Now and others do with the KeepItOn coalition to try to make sure that the internet doesn't go down. She really gave a feeling for just how vital the internet is for people all over the world.

JASON KELLEY: Yeah. And even though some of these conversations were a little bleak, in the sense of protecting yourself from potentially bad things, I was really struck by how she makes the training fun, and how she thinks about getting people to remember things. She mentioned electromagnetism and fiber optics, just the science behind it all. It really made me think more carefully about how I'm going to talk about certain aspects of security and privacy, because after years of training she really gets what sticks in people's minds.

CINDY COHN: I think that's just so important. People like Helen are this really important kind of connective tissue between the people who are deep in the technology and the people who need it. This is its own skill, and she embodies it. And of course, the joy she brings really makes it come alive.

JASON KELLEY: And that's our episode for today. Thanks so much for joining us. If you have feedback or suggestions, we'd love to hear from you. Visit eff.org/podcast and click on listen and feedback. And while you're there, you can become a member and donate, maybe even pick up some of the merch, and just see what's happening in digital rights this week and every week.
Our theme music is by Nat Keefe of Beat Mower with Reed Mathis, and How to Fix the Internet is supported by the Alfred P. Sloan Foundation's Program in Public Understanding of Science and Technology. We'll see you next time. I'm Jason Kelley.

CINDY COHN: And I'm Cindy Cohn.

MUSIC CREDITS: This podcast is licensed Creative Commons Attribution 4.0 International, and includes the following music licensed Creative Commons Attribution 3.0 Unported by its creators: Drops of H2O (The Filtered Water Treatment) by J.Lang. Sound design, additional music, and theme remixes by Gaetan Harris.

 

How to more efficiently study complex treatment interactions

MIT Latest News - Wed, 07/16/2025 - 12:00am

MIT researchers have developed a new theoretical framework for studying the mechanisms of treatment interactions. Their approach allows scientists to efficiently estimate how combinations of treatments will affect a group of units, such as cells, enabling a researcher to perform fewer costly experiments while gathering more accurate data.

As an example, to study how interconnected genes affect cancer cell growth, a biologist might need to use a combination of treatments to target multiple genes at once. But because there could be billions of potential combinations for each round of the experiment, choosing a subset of combinations to test might bias the data their experiment generates. 

In contrast, the new framework considers the scenario where the user can efficiently design an unbiased experiment by assigning all treatments in parallel, and can control the outcome by adjusting the rate of each treatment.

The MIT researchers theoretically proved a near-optimal strategy in this framework and performed a series of simulations to test it in a multiround experiment. Their method minimized the error rate in each instance.

This technique could someday help scientists better understand disease mechanisms and develop new medicines to treat cancer or genetic disorders.

“We’ve introduced a concept people can think more about as they study the optimal way to select combinatorial treatments at each round of an experiment. Our hope is this can someday be used to solve biologically relevant questions,” says graduate student Jiaqi Zhang, an Eric and Wendy Schmidt Center Fellow and co-lead author of a paper on this experimental design framework.

She is joined on the paper by co-lead author Divya Shyamal, an MIT undergraduate; and senior author Caroline Uhler, the Andrew and Erna Viterbi Professor of Engineering in EECS and the MIT Institute for Data, Systems, and Society (IDSS), who is also director of the Eric and Wendy Schmidt Center and a researcher at MIT’s Laboratory for Information and Decision Systems (LIDS). The research was recently presented at the International Conference on Machine Learning.

Simultaneous treatments

Treatments can interact with each other in complex ways. For instance, a scientist trying to determine whether a certain gene contributes to a particular disease symptom may have to target several genes simultaneously to study the effects.

To do this, scientists use what are known as combinatorial perturbations, where they apply multiple treatments at once to the same group of cells.

“Combinatorial perturbations will give you a high-level network of how different genes interact, which provides an understanding of how a cell functions,” Zhang explains.

Since genetic experiments are costly and time-consuming, the scientist aims to select the best subset of treatment combinations to test, which is a steep challenge due to the huge number of possibilities.

Picking a suboptimal subset can generate biased results by focusing only on combinations the user selected in advance.

The MIT researchers approached this problem differently by looking at a probabilistic framework. Instead of focusing on a selected subset, each unit randomly takes up combinations of treatments based on user-specified dosage levels for each treatment.

The user sets dosage levels based on the goal of their experiment — perhaps this scientist wants to study the effects of four different drugs on cell growth. The probabilistic approach generates less biased data because it does not restrict the experiment to a predetermined subset of treatments.

The dosage levels are like probabilities, and each cell receives a random combination of treatments. If the user sets a high dosage, it is more likely most of the cells will take up that treatment. A smaller subset of cells will take up that treatment if the dosage is low.
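To make the dosage-as-probability idea concrete, here is a minimal Python sketch. The four dosage values, the cell count, and the assumption that take-up is independent across treatments are illustrative choices of ours, not details taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: four treatments (say, four drugs), each with a
# user-chosen dosage level interpreted as a take-up probability.
dosages = np.array([0.8, 0.5, 0.5, 0.1])
n_cells = 10_000

# Each cell independently takes up each treatment with its dosage
# probability, so every combination can occur and no subset is
# excluded in advance.
assignments = rng.random((n_cells, dosages.size)) < dosages

# Empirical frequency of one particular combination (treatments 0 and 1
# together, without 2 and 3) versus its theoretical probability.
target = np.array([True, True, False, False])
empirical = (assignments == target).all(axis=1).mean()
expected = dosages[0] * dosages[1] * (1 - dosages[2]) * (1 - dosages[3])
print(f"empirical {empirical:.3f} vs expected {expected:.3f}")
```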

“From there, the question is how do we design the dosages so that we can estimate the outcomes as accurately as possible? This is where our theory comes in,” Shyamal adds.

Their theoretical framework shows the best way to design these dosages so one can learn the most about the characteristic or trait they are studying.

After each round of the experiment, the user collects the results and feeds those back into the experimental framework. It will output the ideal dosage strategy for the next round, and so on, actively adapting the strategy over multiple rounds.
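The round-by-round loop could be sketched as follows. The additive outcome model, the least-squares estimator, and the `next_dosages` stub are all stand-ins of ours; the paper's contribution is precisely the principled, adaptive dosage rule that would replace that stub.

```python
import numpy as np

rng = np.random.default_rng(1)
true_effects = np.array([0.5, -0.3, 0.2, 0.0])  # unknown to the experimenter

def run_round(dosages, n_cells):
    # Bernoulli take-up per treatment, as in the probabilistic design above;
    # the additive-outcome model with Gaussian noise is a stand-in.
    x = (rng.random((n_cells, dosages.size)) < dosages).astype(float)
    y = x @ true_effects + rng.normal(0.0, 0.1, n_cells)
    return x, y

def estimate_effects(x, y):
    # Ordinary least squares for per-treatment effects (with an intercept).
    design = np.column_stack([np.ones(len(y)), x])
    beta, *_ = np.linalg.lstsq(design, y, rcond=None)
    return beta[1:]

def next_dosages(history):
    # Stub for the paper's dosage-design step. As a naive placeholder,
    # balanced dosages of 0.5 give each Bernoulli treatment maximal
    # variance; the paper's near-optimal rule would instead adapt these
    # using the results accumulated in `history`.
    return np.full(4, 0.5)

history = []
dosages = next_dosages(history)
for r in range(3):
    x, y = run_round(dosages, n_cells=2000)
    history.append((dosages, x, y))
    print(f"round {r}: estimated effects {np.round(estimate_effects(x, y), 3)}")
    dosages = next_dosages(history)
```

The key structural point is the feedback edge: each round's data flows into the design of the next round's dosages, rather than the whole experiment being fixed up front.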

Optimizing dosages, minimizing error

The researchers proved their theoretical approach generates optimal dosages, even when the dosage levels are affected by a limited supply of treatments or when noise in the experimental outcomes varies at each round.

In simulations, this new approach had the lowest error rate when comparing estimated and actual outcomes of multiround experiments, outperforming two baseline methods.

In the future, the researchers want to enhance their experimental framework to consider interference between units and the fact that certain treatments can lead to selection bias. They would also like to apply this technique in a real experimental setting.

“This is a new approach to a very interesting problem that is hard to solve. Now, with this new framework in hand, we can think more about the best way to design experiments for many different applications,” Zhang says.

This research is funded, in part, by the Advanced Undergraduate Research Opportunities Program at MIT, Apple, the National Institutes of Health, the Office of Naval Research, the Department of Energy, the Eric and Wendy Schmidt Center at the Broad Institute, and a Simons Investigator Award.

Connect or reject: Extensive rewiring builds binocular vision in the brain

MIT Latest News - Tue, 07/15/2025 - 4:25pm

Scientists have long known that the brain’s visual system isn’t fully hardwired from the start — it becomes refined by what babies see — but the authors of a new MIT study still weren’t prepared for the degree of rewiring they observed when they took a first-ever look at the process in mice as it happened in real-time.

As the researchers in The Picower Institute for Learning and Memory tracked hundreds of “spine” structures housing individual network connections, or “synapses,” on the dendrite branches of neurons in the visual cortex over 10 days, they saw that only 40 percent of the ones that started the process survived. Refining binocular vision (integrating input from both eyes) required numerous additions and removals of spines along the dendrites to establish an eventual set of connections.

Former graduate student Katya Tsimring led the study, published this month in Nature Communications, which the team says is the first in which scientists tracked the same connections all the way through the “critical period,” when binocular vision becomes refined.

“What Katya was able to do is to image the same dendrites on the same neurons repeatedly over 10 days in the same live mouse through a critical period of development, to ask: what happens to the synapses or spines on them?” says senior author Mriganka Sur, the Paul and Lilah Newton Professor in the Picower Institute and MIT’s Department of Brain and Cognitive Sciences. “We were surprised by how much change there is.”

Extensive turnover

In the experiments, young mice watched as black-and-white gratings with lines of specific orientations and directions of movement drifted across their field of view. At the same time, the scientists observed both the structure and activity of the neurons’ main body (or “soma”) and of the spines along their dendrites. By tracking the structure of 793 dendritic spines on 14 neurons at roughly Day 1, Day 5 and Day 10 of the critical period, they could quantify the addition and loss of the spines, and therefore the synaptic connections they housed. And by tracking their activity at the same time, they could quantify the visual information the neurons received at each synaptic connection. For example, a spine might respond to one specific orientation or direction of grating, several orientations, or might not respond at all. Finally, by relating a spine’s structural changes across the critical period to its activity, they sought to uncover the process by which synaptic turnover refined binocular vision.

Structurally, the researchers saw that 32 percent of the spines evident on Day 1 were gone by Day 5, and that 24 percent of the spines apparent on Day 5 had been added since Day 1. The period between Day 5 and Day 10 showed similar turnover: 27 percent were eliminated, but 24 percent were added. Overall, only 40 percent of the spines seen on Day 1 were still there on Day 10.
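As a back-of-envelope consistency check on these figures (the arithmetic below is ours, and the assumption that elimination applied uniformly to old and new spines is made only for illustration):

```python
# All percentages are from the article; the uniform-elimination assumption
# is ours, for illustration only.
survive_d1_to_d5 = 1 - 0.32    # 68% of Day-1 spines remain on Day 5
survive_d5_to_d10 = 1 - 0.27   # 73% of all Day-5 spines remain on Day 10

# If elimination between Day 5 and Day 10 hit old and new spines at the
# same pooled rate, Day-1 spines would survive to Day 10 at about 50%:
print(survive_d1_to_d5 * survive_d5_to_d10)   # ~0.496

# The measured survival of Day-1 spines is 40%, which implies only about
# 59% of the Day-1 spines still present on Day 5 survived the second
# interval, a higher elimination rate than the pooled 27%:
print(0.40 / survive_d1_to_d5)                # ~0.588
```

Taken at face value, then, the reported numbers suggest the original Day-1 spines were eliminated somewhat faster in the second interval than the pooled figure alone would indicate.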

Meanwhile, of the 13 tracked neurons that responded to visual stimuli at the start, only four still responded on Day 10. The scientists don’t know for sure why the other nine stopped responding, at least to the stimuli they once responded to, but it’s likely they now served a different function.

What are the rules?

Having beheld this extensive wiring and rewiring, the scientists then asked what entitled some spines to survive over the 10-day critical period.

Previous studies have shown that the first inputs to reach binocular visual cortex neurons are from the “contralateral” eye on the opposite side of the head (so in the left hemisphere, the right eye’s inputs get there first), Sur says. These inputs drive a neuron’s soma to respond to specific visual properties such as the orientation of a line — for instance, a 45-degree diagonal. By the time the critical period starts, inputs from the “ipsilateral” eye on the same side of the head begin joining the race to visual cortex neurons, enabling some to become binocular.

It’s no accident that many visual cortex neurons are tuned to lines of different directions in the field of view, Sur says.

“The world is made up of oriented line segments,” Sur notes. “They may be long line segments; they may be short line segments. But the world is not just amorphous globs with hazy boundaries. Objects in the world — trees, the ground, horizons, blades of grass, tables, chairs — are bounded by little line segments.”

Because the researchers were tracking activity at the spines, they could see how often they were active and what orientation triggered that activity. As the data accumulated, they saw that spines were more likely to endure if (a) they were more active, and (b) they responded to the same orientation as the one the soma preferred. Notably, spines that responded to both eyes were more active than spines that responded to just one, meaning binocular spines were more likely to survive than non-binocular ones.

“This observation provides compelling evidence for the ‘use it or lose it’ hypothesis,” says Tsimring. “The more active a spine was, the more likely it was to be retained during development.”

The researchers also noticed another trend. Across the 10 days, clusters emerged along the dendrites in which neighboring spines were increasingly likely to be active at the same time. Other studies have shown that by clustering together, spines are able to combine their activity to be greater than they would be in isolation.

By these rules, over the course of the critical period, neurons apparently refined their role in binocular vision by selectively retaining inputs that reinforced their budding orientation preferences, both via their volume of activity (a synaptic property called “Hebbian plasticity”) and their correlation with their neighbors (a property called “heterosynaptic plasticity”). To confirm that these rules were enough to produce the outcomes they were seeing under the microscope, they built a computer model of a neuron, and indeed the model recapitulated the same trends as what they saw in the mice.
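A toy version of these two retention rules might look like the sketch below. Every distribution, weight, and threshold in it is invented for illustration; the authors' actual model is not described here in enough detail to reproduce.

```python
import numpy as np

rng = np.random.default_rng(2)

n_spines = 793
soma_pref = 45.0  # soma's preferred orientation, in degrees (invented)
orientations = rng.choice([0.0, 45.0, 90.0, 135.0], n_spines)
activity = rng.gamma(2.0, 1.0, n_spines)                 # per-spine activity level
positions = np.sort(rng.uniform(0.0, 100.0, n_spines))   # microns along the dendrite

# Hebbian-like term: more active spines score higher, with a bonus for
# matching the soma's orientation preference.
hebbian = activity * (1.0 + (orientations == soma_pref))

# Heterosynaptic-like term: a bonus for having a close neighbor (within
# 2 microns, an invented threshold) tuned to the same orientation, a crude
# proxy for correlated activity within a cluster.
neighbor_bonus = np.zeros(n_spines)
for i in range(n_spines):
    for j in (i - 1, i + 1):
        if 0 <= j < n_spines and abs(positions[i] - positions[j]) < 2.0 \
                and orientations[i] == orientations[j]:
            neighbor_bonus[i] = 1.0

score = hebbian * (1.0 + 0.5 * neighbor_bonus)
keep = score >= np.quantile(score, 0.60)  # retain the top 40%, the reported survival rate
print(f"{keep.mean():.0%} of spines retained; "
      f"{(keep & (orientations == soma_pref)).sum()} matched the soma preference")
```

Under these invented rules, retained spines skew toward high activity, soma-matched tuning, and clustered neighbors, the same qualitative trends the study reports.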

“Both mechanisms are necessary during the critical period to drive the turnover of spines that are misaligned to the soma and to neighboring spine pairs,” the researchers wrote, “which ultimately leads to refinement of [binocular] responses such as orientation matching between the two eyes.”

In addition to Tsimring and Sur, the paper’s other authors are Kyle Jenks, Claudia Cusseddu, Greggory Heller, Jacque Pak Kan Ip, and Julijana Gjorgjieva. Funding sources for the research came from the National Institutes of Health, The Picower Institute for Learning and Memory, and the Freedom Together Foundation.
