Feed aggregator

Security Vulnerabilities in ICEBlock

Schneier on Security - Thu, 07/17/2025 - 7:06am

The ICEBlock tool has vulnerabilities:

The developer of ICEBlock, an iOS app for anonymously reporting sightings of US Immigration and Customs Enforcement (ICE) officials, promises that it “ensures user privacy by storing no personal data.” But that claim has come under scrutiny. ICEBlock creator Joshua Aaron has been accused of making false promises regarding user anonymity and privacy, being “misguided” about the privacy offered by iOS, and of being an Apple fanboy. The issue isn’t what ICEBlock stores. It’s about what it could accidentally reveal through its tight integration with iOS...

Why the megalaw didn’t kill Biden’s biggest climate program

ClimateWire News - Thu, 07/17/2025 - 6:41am
EPA officials say President Donald Trump’s massive policy law is a death blow to the Greenhouse Gas Reduction Fund. But the courts could have the final word.

Marjorie Taylor Greene introduces ‘weather modification’ ban

ClimateWire News - Thu, 07/17/2025 - 6:40am
Conspiracy theories about weather-altering technologies have spiked online after the Texas floods.

House Dems scrutinize Trump plan to cut off weather data

ClimateWire News - Thu, 07/17/2025 - 6:38am
Lawmakers have asked for more information about the Pentagon’s decision to stop publicly sharing data from its Defense Meteorological Satellite Program.

Democrats’ bill would require OSHA to issue worker heat protections

ClimateWire News - Thu, 07/17/2025 - 6:38am
The legislation, which is supported by some House Republicans, comes as the Trump administration considers whether to move forward with a Biden-era proposal to protect workers from extreme heat.

Youth fighting Trump on climate get boost from Democrats

ClimateWire News - Thu, 07/17/2025 - 6:37am
A congressional resolution introduced Wednesday calls for the acknowledgement that young people have the right to a clean environment.

HSBC hit by backlash from green clients after net-zero exit

ClimateWire News - Thu, 07/17/2025 - 6:35am
Last week, HSBC became the first U.K. bank to leave the Net-Zero Banking Alliance, which is the industry’s largest climate group.

Norway’s $1.9T wealth fund calls out banks over emissions reports

ClimateWire News - Thu, 07/17/2025 - 6:34am
The comments show the determination of major asset owners in Europe to include climate risk in their investment decisions, despite pushback in some jurisdictions.

How climate change could force FIFA to rethink World Cup calendar

ClimateWire News - Thu, 07/17/2025 - 6:34am
With temperatures rising worldwide, scientists warn that staging soccer tournaments in the Northern Hemisphere summer is getting increasingly dangerous for both players and spectators.

Indigenous youth face violence in bid to protect Colombian resources

ClimateWire News - Thu, 07/17/2025 - 6:33am
In regions like Cauca, violent groups frequently target Indigenous children and teenagers for recruitment.

New tool gives anyone the ability to train a robot

MIT Latest News - Thu, 07/17/2025 - 12:00am

Teaching a robot new skills used to require coding expertise. But a new generation of robots could potentially learn from just about anyone.

Engineers are designing robotic helpers that can “learn from demonstration.” This more natural training strategy enables a person to lead a robot through a task, typically in one of three ways: via remote control, such as operating a joystick to remotely maneuver a robot; by physically moving the robot through the motions; or by performing the task themselves while the robot watches and mimics.

Learning-by-doing robots usually train in just one of these three demonstration approaches. But MIT engineers have now developed a three-in-one training interface that allows a robot to learn a task through any of the three training methods. The interface is in the form of a handheld, sensor-equipped tool that can attach to many common collaborative robotic arms. A person can use the attachment to teach a robot to carry out a task by remotely controlling the robot, physically manipulating it, or demonstrating the task themselves — whichever style they prefer or best suits the task at hand.

The MIT team tested the new tool, which they call a “versatile demonstration interface,” on a standard collaborative robotic arm. Volunteers with manufacturing expertise used the interface to perform two manual tasks that are commonly carried out on factory floors.

The researchers say the new interface offers increased training flexibility that could expand the type of users and “teachers” who interact with robots. It may also enable robots to learn a wider set of skills. For instance, a person could remotely train a robot to handle toxic substances, while further down the production line another person could physically move the robot through the motions of boxing up a product, and at the end of the line, someone else could use the attachment to draw a company logo as the robot watches and learns to do the same.

“We are trying to create highly intelligent and skilled teammates that can effectively work with humans to get complex work done,” says Mike Hagenow, a postdoc at MIT in the Department of Aeronautics and Astronautics. “We believe flexible demonstration tools can help far beyond the manufacturing floor, in other domains where we hope to see increased robot adoption, such as home or caregiving settings.”

Hagenow will present a paper detailing the new interface at the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) in October. The paper’s MIT co-authors are Dimosthenis Kontogiorgos, a postdoc at the MIT Computer Science and Artificial Intelligence Lab (CSAIL); Yanwei Wang PhD ’25, who recently earned a doctorate in electrical engineering and computer science; and Julie Shah, MIT professor and head of the Department of Aeronautics and Astronautics.

Training together

Shah’s group at MIT designs robots that can work alongside humans in the workplace, in hospitals, and at home. A main focus of her research is developing systems that enable people to teach robots new tasks or skills “on the job,” as it were. Such systems would, for instance, help a factory floor worker quickly and naturally adjust a robot’s maneuvers to improve its task in the moment, rather than pausing to reprogram the robot’s software from scratch — a skill that a worker may not necessarily have.

The team’s new work builds on an emerging strategy in robot learning called “learning from demonstration,” or LfD, in which robots are designed to be trained in more natural, intuitive ways. In looking through the LfD literature, Hagenow and Shah found LfD training methods developed so far fall generally into the three main categories of teleoperation, kinesthetic training, and natural teaching.

One training method may work better than the other two for a particular person or task. Shah and Hagenow wondered whether they could design a tool that combines all three methods to enable a robot to learn more tasks from more people.

“If we could bring together these three different ways someone might want to interact with a robot, it may bring benefits for different tasks and different people,” Hagenow says.

Tasks at hand

With that goal in mind, the team engineered a new versatile demonstration interface (VDI). The interface is a handheld attachment that can fit onto a typical collaborative robotic arm. The attachment is equipped with a camera and markers that track the tool’s position and movements over time, along with force sensors to measure the amount of pressure applied during a given task.

When the interface is attached to a robot, the entire robot can be controlled remotely, and the interface’s camera records the robot’s movements, which the robot can use as training data to learn the task on its own. Similarly, a person can physically move the robot through a task, with the interface attached. The VDI can also be detached and physically held by a person to perform the desired task. The camera records the VDI’s motions, which the robot can also use to mimic the task when the VDI is reattached.
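
In practice, all three training modes can reduce to the same kind of trajectory data: a time series of tool poses and contact forces, tagged with how it was collected. The sketch below illustrates that idea with hypothetical names and types; it is not the team’s actual software.

```python
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Tuple

class DemoMode(Enum):
    TELEOPERATION = "teleoperation"  # joystick control of the robot
    KINESTHETIC = "kinesthetic"      # physically guiding the robot arm
    NATURAL = "natural"              # tool detached; the person performs the task

@dataclass
class DemoSample:
    timestamp: float                   # seconds since the demonstration started
    pose: Tuple[float, ...]            # tool position/orientation from the camera and markers
    force: Tuple[float, float, float]  # contact forces from the force sensors

@dataclass
class Demonstration:
    mode: DemoMode
    samples: List[DemoSample] = field(default_factory=list)

    def add(self, t: float, pose, force) -> None:
        self.samples.append(DemoSample(t, pose, force))

# Whichever mode was used, the learner consumes the same (pose, force) trajectory.
demo = Demonstration(mode=DemoMode.NATURAL)
demo.add(0.0, (0.42, 0.10, 0.25, 0.0, 0.0, 0.0, 1.0), (0.0, 0.0, 1.2))
```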

To test the attachment’s usability, the team brought the interface, along with a collaborative robotic arm, to a local innovation center where manufacturing experts learn about and test technology that can improve factory-floor processes. The researchers set up an experiment where they asked volunteers at the center to use the robot and all three of the interface’s training methods to complete two common manufacturing tasks: press-fitting and molding. In press-fitting, the user trained the robot to press and fit pegs into holes, similar to many fastening tasks. For molding, a volunteer trained the robot to push and roll a rubbery, dough-like substance evenly around the surface of a center rod, similar to some thermomolding tasks.

For each of the two tasks, the volunteers were asked to use each of the three training methods, first teleoperating the robot using a joystick, then kinesthetically manipulating the robot, and finally, detaching the robot’s attachment and using it to “naturally” perform the task as the robot recorded the attachment’s force and movements.

The researchers found the volunteers generally preferred the natural method over teleoperation and kinesthetic training. The users, who were all experts in manufacturing, did offer scenarios in which each method might have advantages over the others. Teleoperation, for instance, may be preferable in training a robot to handle hazardous or toxic substances. Kinesthetic training could help workers adjust the positioning of a robot that is tasked with moving heavy packages. And natural teaching could be beneficial in demonstrating tasks that involve delicate and precise maneuvers.

“We imagine using our demonstration interface in flexible manufacturing environments where one robot might assist across a range of tasks that benefit from specific types of demonstrations,” says Hagenow, who plans to refine the attachment’s design based on user feedback and will use the new design to test robot learning. “We view this study as demonstrating how greater flexibility in collaborative robots can be achieved through interfaces that expand the ways that end-users interact with robots during teaching.”

This work was supported, in part, by the MIT Postdoctoral Fellowship Program for Engineering Excellence and the Wallenberg Foundation Postdoctoral Research Fellowship.

This “smart coach” helps LLMs switch between text and code

MIT Latest News - Thu, 07/17/2025 - 12:00am

Large language models (LLMs) excel at using textual reasoning to understand the context of a document and provide a logical answer about its contents. But these same LLMs often struggle to correctly answer even the simplest math problems.

Textual reasoning is usually a less-than-ideal way to deliberate over computational or algorithmic tasks. While some LLMs can generate code in languages like Python to handle symbolic queries, the models don’t always know when to use code, or what kind of code would work best.

LLMs, it seems, may need a coach to steer them toward the best technique.

Enter CodeSteer, a smart assistant developed by MIT researchers that guides an LLM to switch between code and text generation until it correctly answers a query.

CodeSteer, itself a smaller LLM, automatically generates a series of prompts to iteratively steer a larger LLM. It reviews the model’s current and previous answers after each round and provides guidance for how it can fix or refine that solution until it deems the answer is correct.

The researchers found that augmenting a larger LLM with CodeSteer boosted its accuracy on symbolic tasks, like multiplying numbers, playing Sudoku, and stacking blocks, by more than 30 percent. It also enabled less sophisticated models to outperform more advanced models with enhanced reasoning skills.

This advance could improve the problem-solving capabilities of LLMs for complex tasks that are especially difficult to solve with textual reasoning alone, such as generating paths for robots in uncertain environments or scheduling shipments in an international supply chain.

“There is a race to develop better and better models that are capable of doing everything, but we’ve taken a complementary approach. Researchers have spent years developing effective technologies and tools to tackle problems in many domains. We want to enable LLMs to select the right tools and methods, and make use of others’ expertise to enhance their own capabilities,” says Chuchu Fan, an associate professor of aeronautics and astronautics (AeroAstro) and principal investigator in the MIT Laboratory for Information and Decision Systems (LIDS).

Fan, the senior author of the study, is joined on a paper about the work by LIDS graduate student Yongchao Chen; AeroAstro graduate student Yilun Hao; University of Illinois at Urbana-Champaign graduate student Yueying Liu; and MIT-IBM Watson AI Lab Research Scientist Yang Zhang. The research will be presented at the International Conference on Machine Learning.

An LLM “trainer”  

Ask an LLM which number is bigger, 9.11 or 9.9, and it will often give the wrong answer by using textual reasoning. But ask it to use code to answer the same question, and it can generate and execute a Python script to compare the two numbers, easily solving the problem.
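
For instance, the kind of short script a steered model might produce for this question is trivial (an illustrative example, not output from the paper):

```python
# Numeric comparison of 9.11 and 9.9: executed code gets this right,
# whereas naive textual reasoning about "11 vs 9" often does not.
a, b = 9.11, 9.9
print(max(a, b))  # prints 9.9
```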

Initially trained to understand and predict human language, LLMs are more likely to answer queries using text, even when code would be more effective. And while they have learned to generate code through fine-tuning, these models often generate an incorrect or less efficient version of the code.

Rather than trying to retrain a powerful LLM like GPT-4 or Claude to improve these capabilities, the MIT researchers fine-tune a smaller, lightweight LLM to guide a larger model between text and code. Fine-tuning a smaller model doesn’t change the larger LLM, so there is no risk it would undermine the larger model’s other abilities.

“We were also inspired by humans. In sports, a trainer may not be better than the star athlete on the team, but the trainer can still give helpful suggestions to guide the athlete. This steering method works for LLMs, too,” Chen says.

This trainer, CodeSteer, works in conjunction with the larger LLM. It first reviews a query and determines whether text or code is suitable for this problem, and which sort of code would be best.

Then it generates a prompt for the larger LLM, telling it to use a coding method or textual reasoning to answer the query. The larger model follows this prompt to answer the query and sends the result back to CodeSteer, which reviews it.

If the answer is not correct, CodeSteer will continue prompting the LLM to try different things that might fix the problem, such as incorporating a search algorithm or constraint into its Python code, until the answer is correct.

“We found that oftentimes, the larger LLM will try to be lazy and use a shorter, less efficient code that will not carry the correct symbolic calculation. We’ve designed CodeSteer to avoid this phenomenon,” Chen says.

A symbolic checker evaluates the code’s complexity and sends a signal to CodeSteer if it is too simple or inefficient. The researchers also incorporate a self-answer checker into CodeSteer, which prompts the LLM to generate code that calculates the answer to verify it is correct.
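
Put together, the steering process amounts to a small review-and-reprompt loop. Below is a minimal, hypothetical sketch of that loop; the function and method names (suggest, answer, self_check, looks_too_simple) are illustrative stand-ins, not the actual CodeSteer API.

```python
def looks_too_simple(code: str) -> bool:
    # Stand-in for the symbolic checker: flag trivially short code that likely
    # hard-codes a guess instead of computing the answer.
    return len(code.splitlines()) < 3

def codesteer_loop(query, solver_llm, steer_llm, max_rounds=5):
    history = []
    answer = None
    for _ in range(max_rounds):
        # The smaller steering model reviews the query and prior attempts, then
        # prompts the larger model to use text or a particular kind of code.
        guidance = steer_llm.suggest(query, history)
        answer = solver_llm.answer(query, guidance)

        if answer.code and looks_too_simple(answer.code):
            history.append((guidance, answer, "code too simplistic"))
            continue

        # Self-answer check: ask the larger model to verify its result with code.
        if solver_llm.self_check(query, answer):
            return answer
        history.append((guidance, answer, "failed self-check"))
    return answer  # best effort after max_rounds
```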

Tackling complex tasks

As the researchers designed CodeSteer, they couldn’t find suitable symbolic datasets to fine-tune and test the model, since many existing benchmarks don’t point out whether a certain query could be best solved with text or code.

So, they gathered a corpus of 37 complex symbolic tasks, including spatial reasoning, mathematics, order reasoning, and optimization, and built their own dataset, called SymBench. They implemented a fine-tuning approach that leverages SymBench to maximize the performance of CodeSteer.

In their experiments, CodeSteer outperformed all nine baseline methods they evaluated and boosted average accuracy from 53.3 percent to 86.4 percent. It maintains similar performance even on unseen tasks, and on a variety of LLMs.

In addition, a general-purpose model augmented with CodeSteer can achieve higher accuracy than state-of-the-art models designed to focus on complex reasoning and planning, while requiring much less computation.

“Our method uses an LLM’s own capabilities. By augmenting an LLM with the ability to smartly use coding, we can take a model that is already very strong and improve its performance even more,” Chen says.

In the future, the researchers want to streamline CodeSteer to speed up its iterative prompting process. In addition, they are studying how to effectively fine-tune a unified model with the ability to switch between textual reasoning and code generation, rather than relying on a separate assistant.

“The authors present an elegant solution to the critical challenge of tool utilization in LLMs. This simple yet impactful method enables state-of-the-art LLMs to achieve significant performance improvements without requiring direct fine-tuning,” says Jinsung Yoon, a staff research scientist at Google Cloud AI, who was not involved with this work. “This research represents a substantial contribution that promises to significantly enhance the application of LLMs to a diverse range of tasks with which they currently struggle.”

“Their success in training a smaller, specialized model to strategically guide larger, advanced models is particularly impactful,” adds Chi Wang, a senior staff scientist at Google DeepMind who was not involved with this work. “This intelligent collaboration among diverse AI ‘agents’ paves the way for more robust and versatile applications in complex real-world scenarios.”

This research is supported, in part, by the U.S. Office of Naval Research and the MIT-IBM Watson AI Lab.

Can AI really code? Study maps the roadblocks to autonomous software engineering

MIT Latest News - Wed, 07/16/2025 - 4:55pm

Imagine a future where artificial intelligence quietly shoulders the drudgery of software development: refactoring tangled code, migrating legacy systems, and hunting down race conditions, so that human engineers can devote themselves to architecture, design, and the genuinely novel problems still beyond a machine’s reach. Recent advances appear to have nudged that future tantalizingly close, but a new paper by researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and several collaborating institutions argues that this potential future reality demands a hard look at present-day challenges. 

Titled “Challenges and Paths Towards AI for Software Engineering,” the work maps the many software-engineering tasks beyond code generation, identifies current bottlenecks, and highlights research directions to overcome them, aiming to let humans focus on high-level design while routine work is automated. 

“Everyone is talking about how we don’t need programmers anymore, and there’s all this automation now available,” says Armando Solar‑Lezama, MIT professor of electrical engineering and computer science, CSAIL principal investigator, and senior author of the study. “On the one hand, the field has made tremendous progress. We have tools that are way more powerful than any we’ve seen before. But there’s also a long way to go toward really getting the full promise of automation that we would expect.”

Solar-Lezama argues that popular narratives often shrink software engineering to “the undergrad programming part: someone hands you a spec for a little function and you implement it, or solving LeetCode-style programming interviews.” Real practice is far broader. It includes everyday refactors that polish design, plus sweeping migrations that move millions of lines from COBOL to Java and reshape entire businesses. It requires nonstop testing and analysis — fuzzing, property-based testing, and other methods — to catch concurrency bugs, or patch zero-day flaws. And it involves the maintenance grind: documenting decade-old code, summarizing change histories for new teammates, and reviewing pull requests for style, performance, and security.

Industry-scale code optimization — think re-tuning GPU kernels or the relentless, multi-layered refinements behind Chrome’s V8 engine — remains stubbornly hard to evaluate. Today’s headline metrics were designed for short, self-contained problems, and while multiple-choice tests still dominate natural-language research, they were never the norm in AI-for-code. The field’s de facto yardstick, SWE-Bench, simply asks a model to patch a GitHub issue: useful, but still akin to the “undergrad programming exercise” paradigm. It touches only a few hundred lines of code, risks data leakage from public repositories, and ignores other real-world contexts — AI-assisted refactors, human–AI pair programming, or performance-critical rewrites that span millions of lines. Until benchmarks expand to capture those higher-stakes scenarios, measuring progress — and thus accelerating it — will remain an open challenge.

If measurement is one obstacle, human‑machine communication is another. First author Alex Gu, an MIT graduate student in electrical engineering and computer science, sees today’s interaction as “a thin line of communication.” When he asks a system to generate code, he often receives a large, unstructured file and even a set of unit tests, yet those tests tend to be superficial. This gap extends to the AI’s ability to effectively use the wider suite of software engineering tools, from debuggers to static analyzers, that humans rely on for precise control and deeper understanding. “I don’t really have much control over what the model writes,” he says. “Without a channel for the AI to expose its own confidence — ‘this part’s correct … this part, maybe double‑check’ — developers risk blindly trusting hallucinated logic that compiles, but collapses in production. Another critical aspect is having the AI know when to defer to the user for clarification.” 

Scale compounds these difficulties. Current AI models struggle profoundly with large code bases, often spanning millions of lines. Foundation models learn from public GitHub, but “every company’s code base is kind of different and unique,” Gu says, making proprietary coding conventions and specification requirements fundamentally out of distribution. The result is “hallucinated” code that looks plausible but calls non-existent functions, violates internal style rules, fails continuous-integration pipelines, or otherwise ignores a given company’s internal conventions, helper functions, and architectural patterns. 

Models also often retrieve the wrong code, because retrieval tends to match on similar names (syntax) rather than on the functionality and logic a model actually needs in order to write the function. “Standard retrieval techniques are very easily fooled by pieces of code that are doing the same thing but look different,” says Solar‑Lezama. 
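
As an illustration (not an example from the paper), these two functions compute the same result but share no names and little surface structure, which is precisely the case that name- or token-based retrieval tends to miss:

```python
# Two functions with identical behavior but different names and structure.
# A retriever keyed on identifiers or token overlap is unlikely to link them.

def total_owed(prices):
    return sum(prices)

def calc_invoice_amt(line_items):
    amt = 0
    for item in line_items:
        amt += item
    return amt

assert total_owed([3, 4, 5]) == calc_invoice_amt([3, 4, 5])
```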

The authors note that there is no silver bullet for these issues, and instead call for community‑scale efforts: richer data that captures the process of developers writing code (for example, which code developers keep versus throw away, and how code gets refactored over time); shared evaluation suites that measure progress on refactor quality, bug‑fix longevity, and migration correctness; and transparent tooling that lets models expose uncertainty and invite human steering rather than passive acceptance. Gu frames the agenda as a “call to action” for larger open‑source collaborations that no single lab could muster alone. Solar‑Lezama imagines incremental advances, “research results taking bites out of each one of these challenges separately,” that feed back into commercial tools and gradually move AI from autocomplete sidekick toward genuine engineering partner.

“Why does any of this matter? Software already underpins finance, transportation, health care, and the minutiae of daily life, and the human effort required to build and maintain it safely is becoming a bottleneck. An AI that can shoulder the grunt work — and do so without introducing hidden failures — would free developers to focus on creativity, strategy, and ethics,” says Gu. “But that future depends on acknowledging that code completion is the easy part; the hard part is everything else. Our goal isn’t to replace programmers. It’s to amplify them. When AI can tackle the tedious and the terrifying, human engineers can finally spend their time on what only humans can do.”

“With so many new works emerging in AI for coding, and the community often chasing the latest trends, it can be hard to step back and reflect on which problems are most important to tackle,” says Baptiste Rozière, an AI scientist at Mistral AI, who wasn’t involved in the paper. “I enjoyed reading this paper because it offers a clear overview of the key tasks and challenges in AI for software engineering. It also outlines promising directions for future research in the field.”

Gu and Solar-Lezama wrote the paper with University of California at Berkeley Professor Koushik Sen and PhD students Naman Jain and Manish Shetty, Cornell University Assistant Professor Kevin Ellis and PhD student Wen-Ding Li, Stanford University Assistant Professor Diyi Yang and PhD student Yijia Shao, and incoming Johns Hopkins University assistant professor Ziyang Li. Their work was supported, in part, by the National Science Foundation (NSF), SKY Lab industrial sponsors and affiliates, Intel Corp. through an NSF grant, and the Office of Naval Research.

The researchers are presenting their work at the International Conference on Machine Learning (ICML). 

What do we owe each other?

MIT Latest News - Wed, 07/16/2025 - 4:30pm

MIT equips students with the tools to advance science and engineering — but a new class aims to ensure they also develop their own values and learn how to navigate conflicting viewpoints.

Offered as a pilot this past spring, the multidisciplinary class 21.01 (Compass Course: Love, Death, and Taxes: How to Think — and Talk to Others — About Being Human), invites students to wrestle with difficult questions like:

  • What do we value (and why)?
  • What do we know (and how do we know it)?
  • What do we owe to each other (and what should we do about it)?

The class is part of the Compass Initiative, which is led by faculty from across the MIT School of Humanities, Arts, and Social Sciences (SHASS). 

Lily L. Tsai, Ford Professor of Political Science and lead faculty for Compass, says the new course is meant to help students use the humanities and social sciences as their guide to thinking about the kind of humans they want to be and what kind of society they want to help create.

"At MIT, we're some of the people who are creating the technologies that are accelerating change and leading to more unpredictability in the world. We have a special responsibility to envision and reimagine a moral and civic education that enables people to navigate it," says Tsai.

The course is the result of a multi-year collaboration involving over 30 faculty from 19 departments, ranging from Philosophy and Literature to Brain and Cognitive Sciences and Electrical Engineering and Computer Science, all led by a core team of 14 faculty from SHASS and a student advisory board.

During its initial run in the spring, Compass followed an arc that began with students investigating questions of value. Early in the semester, students explored what makes a genius, using Beethoven's "Symphony No. 9" as a case study, accompanied by lectures from Emily Richmond Pollock, associate professor of music, and a podcast conversation with Larry Guth, professor of mathematics, and David Kaiser, professor of physics and science, technology, and society. 

Students then grappled with the concept of a merit-based society by digging into the example of the imperial Chinese civil service exam, guided by professor of history Tristan Brown. Next, they questioned what humans really know to be true by examining the universality of language through lectures by professor of linguistics Adam Albright, and the philosophy of truth and knowledge through lectures by professor of philosophy Alex Byrne.

The semester ended with challenging debates about what humans owe one another, including a class designed by Nobel laureate and professor of economics Esther Duflo on taxation and climate burdens. 

More than anything, Tsai says, she hopes that Compass prepares students to navigate dorm hallways, the family Thanksgiving table, or future labs or boardroom tables, and learn how to express opinions and actively listen to others with whom they may disagree — all without canceling one another. 

The class takes a "flipped classroom" approach: Students watch recorded lectures at home and come to class prepared for discussion and debate. Each section is co-taught by two faculty members, combining disciplines and perspectives.

Second-year mechanical engineering major Kayode Dada signed up because it fulfilled a communications-intensive requirement and offered cross-departmental exposure. But Compass ultimately became more than that to him. "College isn't just about learning science stuff — it's also about how we grow as people," he says. Dada was assigned to a section co-taught by Tsai and professor of literature Arthur Bahr. 

Forming a social contract

In the first week, students draft a Rousseau-inspired social compact and learn firsthand how to build a classroom community. "We knew these were deep topics," Dada says. "To get the most out of the class, we had to open up, respect each other, and keep conversations confidential."

One early exercise was especially impactful. After watching lectures by Ford Professor of Philosophy and Women’s and Gender Studies Sally Haslanger on value, students were asked to draw a map representing their values, with arrows pointing from ones that were more instrumental to ones that were fundamental.

At first, Dada felt stuck. Growing up in Kentucky, the son of a Nigerian immigrant who had dreamed of attending MIT himself, Dada had focused for years on gaining admission to the Institute. "I thought getting into MIT would make me feel fulfilled," he admits. "But once I got here, I realized the work alone wasn't enough."

The values exercise helped him reorient. He identified practicing Christianity, hard work, helping others, and contributing to society as central to his belief system. The exercise also led Dada to volunteer at a robotics camp for kids in Louisville, a way to share his MIT education with others.

Who governs science? 

Later in the semester, Dada was animatedly representing a figure whose views contradicted his own: James D. Watson, the Nobel Prize winner who co-discovered DNA's structure — and is also a controversial figure. 

That week, each student had been assigned a persona from a 1976 Cambridge City Council hearing debating recombinant DNA research. The class, designed by Associate Professor Robin Scheffler, was investigating the question: Who governs science — scientists, the government, those who fund research, or the public?

They revisited a real-life debate over recombinant DNA research conducted in MIT and Harvard University labs, and the dangers that citizens of the time believed it posed, from biological weapons development to other threats to the public. Pioneered in the 1970s, the technique involved splicing genes using the E. coli bacterium as a host. In the Compass classroom, students argued different sides from their personas: banning the research, moving labs outside city limits, or proceeding without government interference.

Dada notes how faculty intentionally seeded conflicting viewpoints. "It taught me how to negotiate with someone who has different values and come to a resolution that respects everyone involved," he says. "That's something I want to keep exploring."

When Dada closed his presentation with frantically Googled sentimental music piped unexpectedly from his phone, his classmates laughed in appreciation. The atmosphere was more intimate than academic — an ethos Tsai hoped to cultivate. "They really built intellectual relationships based on trust," she says. "There was a lot of laughter. They took joy in disagreeing and debating."

Changing opinions 

First-year student-athlete Shannon Cordle, who is majoring in mechanical engineering, didn't know what to expect from Compass. Since it was new, there were no student reviews. What stood out to her was the grading system: 15 percent of the final grade is based on a rubric each student created for themselves.

Cordle's goal was to become more comfortable expressing an opinion — even before she's fully formed it. "It's easy to stay quiet when you're unsure," she says. "Compass helped me practice speaking up and being willing to be wrong, because that's how you learn."

One week, the class debated whether a meritocracy creates a just society — an especially relevant topic at MIT, given its famously selective admissions process. 

Students picked a stance beforehand and were then invited to change it as they gained more perspectives during the debate.

"This helps students grasp not only the flaws in another viewpoint, but also how to strengthen their arguments," Tsai says.

Cordle, who hopes to go into prosthetics, views her future field as representing the perfect balance between creativity and ethics. "The humanities challenge how we view our fields as scientists and engineers," she says.

A compass helps travelers find their way — but it's most useful when they need to reorient and change direction. In that spirit, Compass prepares students not just to ask big questions, but to keep asking — and keep adapting — as their lives and careers evolve.

“Bringing these unexpected class elements together with students and faculty generated magical alchemy — a kind of transformation that we didn't even know we could create,” Tsai says.

In addition to the class, the MIT Compass Podcast engages in these fundamental questions with guests from across the MIT schools of Science and Engineering. There are also plans to adapt the residential version of this class for online learners on MITx.

In addition to philanthropic support from MIT Corporation life member emeritus Ray Stata '57, the initiative is supported by the Office of the Vice Chancellor and the MIT Human Insight Collaborative's SHASS Education Innovation Fund, which promotes new, transformative educational approaches in SHASS fields.

Radio Hobbyists, Rejoice! Good News for LoRa & Mesh

EFF: Updates - Wed, 07/16/2025 - 2:21pm

A set of radio devices and technologies are opening the doorway to new and revolutionary forms of communication. These have the potential to break down the over-reliance on traditional network hierarchies, and present collaborative alternatives where resistance to censorship, control and surveillance are baked into the network topography itself. Here, we look at a few of these technologies and what they might mean for the future of networked communications.

The idea of what is broadly referred to as mesh networking isn’t new: the resilience and scalability of mesh technology has seen it adopted in router and IoT protocols for decades. What’s new is the availability of cheap devices that can be used without a radio license to communicate over (relatively) long distances, or LOng RAnge, hence the moniker LoRa.

Although using different operating frequencies in different countries, LoRa works in essentially the same way everywhere. It uses Chirp Spread Spectrum to broadcast digital communications across a physical landscape, with a range of several kilometers in the right environmental conditions. When other capable devices pick up a signal, they can then pass it along to other nodes until the message reaches its destination—all without relying on a single centralized host. 

These communications are of very low bit-rate—often less than a few KBps (kilobytes per second) at a distance—and use very little power. You won’t be browsing the web or streaming video over LoRa, but it is useful for sending messages in a wide range of situations where traditional infrastructure is lacking or intermittent, and communication with others over dispersed or changing physical terrain is essential. For instance, a growing body of research is showing how Search and Rescue (SAR) teams can greatly benefit from the use of LoRa, specifically when coupled with GPS sensors, and especially when complemented by line-of-sight LoRa repeaters.

Meshtastic

By far the most popular of these indie LoRa communication systems is Meshtastic. For hobbyists just getting started in the world of LoRa mesh communications, it is the easiest way to get up, running, and texting with others in your area who also happen to have a Meshtastic-enabled device. It also facilitates direct communication with other nodes using end-to-end encryption. And by default, a Meshtastic device will repeat messages to others if they originate from 3 or fewer nodes (or “hops”) away. This means messages tend to propagate farther, with the whole mesh collaborating to make delivery possible. As a single-application use of LoRa, it is an exciting experiment to take part in.
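
Conceptually, this relaying behaves like flooding with a decrementing hop counter. The toy sketch below illustrates the idea only; it is not Meshtastic’s actual protocol or API, and every name in it is hypothetical.

```python
# Toy model of hop-limited mesh flooding, in the spirit of Meshtastic's default
# of relaying messages up to 3 hops. Not the real protocol.

DEFAULT_HOP_LIMIT = 3

def on_receive(node, packet, seen_ids):
    if packet["id"] in seen_ids:
        return                                   # already handled; avoid loops and duplicates
    seen_ids.add(packet["id"])
    node.deliver_locally(packet["payload"])      # show the message to the local user
    if packet["hops_left"] > 0:
        relayed = dict(packet, hops_left=packet["hops_left"] - 1)
        node.broadcast(relayed)                  # re-transmit over LoRa for neighbors to pick up
```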

Reticulum

Reticulum is often put into the same category as Meshtastic, and both enable communication over LoRa, but the comparison breaks down quickly after that. Reticulum is not a single application, but an entire network stack that can be arbitrarily configured to connect through existing TCP/IP, the anonymizing I2P network, directly through a local WiFi connection, or through LoRa radios. The Reticulum network’s LXMF transfer protocol allows arbitrary applications to be built on top of it, such as messaging, voice calls, file transfer, and lightweight, text-only browsing. And that’s only to name a few applications that have already been developed—the possibilities are endless.

Although there are a number of community hubs run by Reticulum enthusiasts that you can join, you don’t have to join any of them; you can build your own Reticulum network with your own and your friends’ devices and transports, locally over LoRa or remotely over traditional infrastructure, and bridge them as you please. Nodes themselves are universally addressed and sovereign, meaning they are free to connect anywhere without losing the universally unique address that defines them. All communications between nodes are encrypted end-to-end, using a strong choice of cryptographic primitives. And although it has been actively developed for over a decade, Reticulum recently reached the noteworthy milestone of a 1.0 release. It’s a very exciting ecosystem to be a part of, and we can’t wait to see the community develop it even further. A number of clients are available to start exploring.

Resilient Infrastructure

On a more somber note, let’s face it: we live in an uncertain world. With the frequency of environmental disasters, political polarization, and infrastructure attacks increasing, the stability of networks we have traditionally relied upon is far from assured.

Yet even with the world as it is, developers are creating new communications networks that have the potential to help in unexpected situations we might find ourselves in. Not only are these technologies built to be useful and resilient, they also empower individuals by circumventing censorship and platform control, giving people a way to empower each other by sharing resources.

In that way, this wave of projects can be seen as a technological inheritor of the hopefulness and experimentation—and yes, fun!—that was so present in the early internet. These technologies offer a promising path forward for building our way out of tech dystopia.

EFF and 80 Organizations Call on EU Policymakers to Preserve Net Neutrality in the Digital Networks Act

EFF: Updates - Wed, 07/16/2025 - 2:19pm

As the European Commission prepares an upcoming proposal for a Digital Networks Act (DNA), a growing network of groups are raising serious concerns about the resurgence of “fair share” proposals from major telecom operators. The original idea was to introduce network usage fees on certain companies to pay ISPs. We have said it before and we’ll say it again: there is nothing fair about this “fair share” proposal, which could undermine net neutrality and hurt consumers by changing how content is delivered online. Now the EU Commission is toying with an alternative idea: the introduction of a dispute resolution mechanism to foster commercial agreements between tech firms and telecom operators.

EFF recently joined a broad group of more than 80 signatories, from civil society organizations to audio-visual companies in a joint statement aimed at preserving net neutrality in the DNA.

In the letter, we argue that the push to introduce a mandatory dispute resolution mechanism into EU law would pave the way for content and application providers (CAPs) to pay network fees for delivering traffic. These ideas, recycled from 2022, are being marketed as necessary for funding infrastructure, but the real cost would fall on the open internet, competition, and users themselves.

This isn't just about arcane telecom policy—it’s a battle over the future of the internet in Europe. If the DNA includes mechanisms that force payments from CAPs, we risk higher subscription costs, fewer services, and less innovation, particularly for European startups, creatives, and SMEs. Worse still, there’s no evidence of market failure to justify such regulatory intervention. Regulators like BEREC have consistently found that the interconnection market is functioning smoothly. What’s being proposed is nothing short of a power grab by legacy telecom operators looking to resurrect outdated, monopolistic business models. Europe has long championed an open, accessible internet—now’s the time to defend it.

🤕 A Surveillance Startup in Damage Control | EFFector 37.8

EFF: Updates - Wed, 07/16/2025 - 1:12pm

We're a little over halfway through the year! Which... could be good or bad depending on your outlook... but never mind that—EFF is here to keep you updated on the latest digital rights news, and we've got you covered with an all-new EFFector!

With issue 37.8, we're covering a recent EFF investigation into AI-generated police reports, a secret deal to sell flight passenger data to the feds (thanks data brokers), and why mass surveillance cannot be fixed with a software patch. 

Don't forget to also check out our audio companion to EFFector as well! We're interviewing staff about some of the important work that they're doing. This time, EFF's Associate Director of Activism Sarah Hamid explains the harms caused by ALPRs and what you can do to fight back. Listen now on YouTube or the Internet Archive.


Since 1990 EFF has published EFFector to help keep readers on the bleeding edge of their digital rights. We know that the intersection of technology, civil liberties, human rights, and the law can be complicated, so EFFector is a great way to stay on top of things. The newsletter is chock full of links to updates, announcements, blog posts, and other stories to help keep readers—and listeners—up to date on the movement to protect online privacy and free expression. 

Thank you to the supporters around the world who make our work possible! If you're not a member yet, join EFF today to help us fight for a brighter digital future.

Hacking Trains

Schneier on Security - Wed, 07/16/2025 - 12:57pm

Seems like an old system that predates any care about security:

The flaw has to do with the protocol used in a train system known as the End-of-Train and Head-of-Train. A Flashing Rear End Device (FRED), also known as an End-of-Train (EOT) device, is attached to the back of a train and sends data via radio signals to a corresponding device in the locomotive called the Head-of-Train (HOT). Commands can also be sent to the FRED to apply the brakes at the rear of the train.

These devices were first installed in the 1980s as a replacement for caboose cars, and unfortunately, they lack encryption and authentication protocols. Instead, the current system uses data packets sent between the front and back of a train that include a simple BCH checksum to detect errors or interference. But now, the CISA is warning that someone using a software-defined radio could potentially send fake data packets and interfere with train operations...
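
The underlying problem is that a checksum only detects accidental corruption; anyone can recompute it, so it provides no authentication. A small illustration of the difference, using CRC32 as a stand-in for the BCH code (an assumption for brevity) and an HMAC as the kind of keyed primitive the protocol lacks:

```python
import hmac, hashlib, zlib

packet = b"APPLY_REAR_BRAKES"

# Error-detecting checksum (CRC32 here as a stand-in for the BCH code): anyone
# with a software-defined radio can forge a packet and attach a valid checksum.
forged_checksum = zlib.crc32(packet)

# Keyed authentication (what the protocol lacks): without the shared secret,
# an attacker cannot produce a tag the receiving device would accept.
key = b"secret shared between EOT and HOT devices"
tag = hmac.new(key, packet, hashlib.sha256).hexdigest()

print(hex(forged_checksum), tag[:16] + "...")
```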

The IRA was bearing fruit. Then Trump killed it.

ClimateWire News - Wed, 07/16/2025 - 6:37am
The first half of 2025 showed the promise of the giant climate law passed under Biden. Those clean energy trends are expected to dim.

Trump’s megalaw casts shadow over solar manufacturing

ClimateWire News - Wed, 07/16/2025 - 6:36am
New restrictions that bar Chinese companies from receiving clean energy tax credits could stifle a boom in U.S. solar module factories.
