Feed aggregator

Countries talk economic strategy at Colombia climate conference

ClimateWire News - Wed, 04/29/2026 - 6:24am
Delegates pointed to the Iran war — and the resulting energy crisis — as a reason to transition away from fossil fuels.

Trump’s disaster panel to outline FEMA changes next week

ClimateWire News - Wed, 04/29/2026 - 6:23am
The final report could propose transformations to federal disaster aid. But it comes as the president has eased his criticism of the emergency response agency.

How some offshore wind projects could survive Trump

ClimateWire News - Wed, 04/29/2026 - 6:22am
Ocean Winds agreed to cancel two developments in a deal with the administration. But the developer's most viable project is still on the table.

Texas revives push to overhaul flood safety requirements

ClimateWire News - Wed, 04/29/2026 - 6:21am
Elected officials heard hours of testimony over two days as they seek to bolster emergency plans after devastating floods last summer.

Maryland warns Trump’s assault on environmental justice will leave lasting damage

ClimateWire News - Wed, 04/29/2026 - 6:20am
“We can only move at the speed of trust,” said one official with the state's Department of Housing and Community Development.

Shapiro leverages Biden-era climate funds to cut industrial emissions

ClimateWire News - Wed, 04/29/2026 - 6:20am
It's part of the Pennsylvania governor's carrots-over-sticks approach to energy policy.

Last year was hot for Europe. Next year will be even hotter.

ClimateWire News - Wed, 04/29/2026 - 6:19am
Nearly all of the continent was warmer than average in 2025, scientists found.

UK quietly increases AI emissions forecast 100-fold

ClimateWire News - Wed, 04/29/2026 - 6:18am
The new figures are incompatible with the government’s green targets, campaigners say.

China weighs second green sovereign bond sale in London

ClimateWire News - Wed, 04/29/2026 - 6:18am
The nation’s first green sovereign bond offering last April raised $879 million to fund activities including climate change mitigation, biodiversity preservation and pollution control.

Spiking oil prices spurred an EV buying spree in March

ClimateWire News - Wed, 04/29/2026 - 6:17am
Where electric vehicle sales are bubbling up, analysts point to a cocktail of two ingredients: elevated gas prices and affordable new models from China.

Claude Mythos Has Found 271 Zero-Days in Firefox

Schneier on Security - Wed, 04/29/2026 - 6:12am

That’s a lot. No, it’s an extraordinary number:

Since February, the Firefox team has been working around the clock using frontier AI models to find and fix latent security vulnerabilities in the browser. We wrote previously about our collaboration with Anthropic to scan Firefox with Opus 4.6, which led to fixes for 22 security-sensitive bugs in Firefox 148.

As part of our continued collaboration with Anthropic, we had the opportunity to apply an early version of Claude Mythos Preview to Firefox. This week’s release of Firefox 150 includes fixes for 271 vulnerabilities identified during this initial evaluation...

The MIT-IBM Computing Research Lab launches to shape the future of AI and quantum computing

MIT Latest News - Wed, 04/29/2026 - 6:00am

The following is a joint announcement by the MIT Schwarzman College of Computing and IBM.

IBM and MIT today announced the launch of the MIT-IBM Computing Research Lab, advancing their long-standing collaboration to shape the next era of computing. The new lab expands its scope to include quantum computing, alongside foundational artificial intelligence research, with the goal of unlocking new computational approaches that go beyond the limits of today’s classical systems.

The MIT-IBM Computing Research Lab builds on a distinguished history of scientific excellence at the intersection of research and academia. Evolving from the MIT-IBM Watson AI Lab, which originated in 2017 on MIT’s campus, the new lab reflects a transformed technology landscape — one in which AI has entered mainstream deployment and quantum computing is rapidly advancing toward practical impact. Together, MIT and IBM aim to help lead research in AI and quantum and to redefine mathematical foundations across both domains.

“We expect the MIT-IBM Computing Research Lab to emerge as one of the world’s premier academic and industrial hubs accelerating the future of computing,” says Jay Gambetta, director of IBM Research and IBM Fellow, and IBM chair of the MIT-IBM Computing Research Lab. “Together, the brightest minds at MIT and IBM will rethink how models, algorithms, and systems are designed for an era that will be defined by the sum of what’s possible when AI and quantum computing come together.”

“For a decade, the collaboration between MIT and IBM has produced leading-edge research and innovation, and provided mentorship and supported the professional growth of researchers both at MIT and IBM,” says Anantha Chandrakasan, MIT’s provost, who, as then-dean of the School of Engineering, spearheaded the creation of the MIT-IBM Watson AI Lab and will continue as MIT chair of the lab. “The incredible technical achievements set the bar high for our work together over the next 10 years. I look forward to another decade of impact.”

Addressing the next frontiers in computation

The MIT-IBM Computing Research Lab will serve as a focal point for joint research between MIT and IBM in AI, algorithms, and quantum computing, as well as the integration of these technologies into hybrid computing systems. The lab is designed to accelerate progress toward powerful new computational approaches that take advantage of rapid advances in AI and quantum-centric supercomputing, including those that combine maturing quantum hardware with classical systems and advanced AI methods.

This research initiative will include improving capabilities and integrating AI with traditional computing, alongside pursuing advances in small, efficient, modular language model architectures, novel AI computing paradigms, and enterprise-focused AI systems designed for deployment in real-world environments, where reliability, transparency, and trust are essential.

In parallel, the lab will rethink the mathematical and algorithmic foundations that underpin the next era of computing by accelerating the development of novel quantum algorithms for complex problems, with impacts in areas such as materials science, chemistry, and biology.

Additionally, the lab will investigate mathematical and algorithmic foundations of machine learning, optimization, Hamiltonian simulations, and partial differential equations, which are used to approximate the behaviors of dynamical systems that currently stump classical systems beyond limited scales and accuracy. Innovations from the lab could have wide implications for global industries, from more accurate weather and air turbulence prediction to better forecasts of financial market performance. Similarly, with improved optimization approaches, research from the lab could help lower risks in areas like finance, predict protein structures for more targeted medicine, and streamline global supply chains.

With its focus on AI, algorithms, and quantum, the MIT-IBM Computing Research Lab will complement and enhance the work of two of MIT’s strategic initiatives, the MIT Generative AI Impact Consortium and the MIT Quantum Initiative. MIT President Sally Kornbluth launched these strategic initiatives to broaden and deepen MIT’s impact in developing solutions to serious global challenges. The MIT-IBM Computing Research Lab will also leverage IBM’s longtime leadership and expertise in quantum computing. As part of its ambitious roadmap, IBM has laid out a clear path to delivering the world’s first fault-tolerant quantum computer by 2029, and is working across industries to drive value from quantum-centric supercomputing, tightly integrating quantum computers with high-performance computing and AI accelerators to solve the world’s toughest problems.

Deep integration with scientific domains

The MIT-IBM Computing Research Lab will also continue to serve as a foundation for training the next generation of computational scientists and innovators. It will do so by engaging faculty and students across MIT departments, enabling new computational approaches to accelerate discoveries in the physical and life sciences.

The lab will continue to be co-directed by Aude Oliva, senior research scientist at MIT’s Computer Science and Artificial Intelligence Laboratory, and David Cox, vice president of AI Foundations at IBM Research. MIT and IBM have appointed leads for each of the lab’s three focus areas — AI, algorithms, and quantum. Jacob Andreas, associate professor in the Department of Electrical Engineering and Computer Science (EECS), and Kenney Ng, principal research scientist at IBM Research and the MIT-IBM science program manager, will co-lead AI; Vinod Vaikuntanathan, the Ford Foundation Professor of Engineering in EECS, and Vasileios Kalantzis, IBM Research senior research scientist, will co-lead algorithms; and Aram Harrow, professor of physics, and Hanhee Paik, IBM director of Quantum Algorithm Centers, will co-lead quantum.

“The MIT-IBM Computing Research Lab reflects an important expansion of the collaboration between MIT and IBM and the increasing connections across AI, algorithms, and quantum. This deepened focus also underscores a strong alignment with the MIT Schwarzman College of Computing’s mission to advance the forefront of computing and its integration across disciplines,” says Dan Huttenlocher, dean of the MIT Schwarzman College of Computing and MIT co-chair of the lab. “I’m excited about what this next chapter will enable in these three areas, and their impact broadly.”

Building on nearly a decade of collaboration

The MIT-IBM Watson AI Lab helped pioneer a model for academic-industry research collaboration, aligning long-term scientific inquiry with real-world impact. Since its inception, the lab has funded over 210 research projects involving over 150 MIT faculty members and over 200 IBM researchers. Collectively, the projects have led to over 1,500 peer-reviewed articles. The lab also helped shape the career growth of a number of MIT students and junior researchers, funding more than 500 students and postdocs.

“The true measure of this lab is not just innovation, but transformation of a field. Hundreds of students have contributed to thousands of publications in top conferences and journals, demonstrating their capabilities to address meaningful problems,” says Oliva. “The MIT-IBM Computing Research Lab builds on an extraordinary legacy of impact to advance a trusted collaboration that will redefine the future of AI and quantum computing in a way never seen before.”

“By coupling academic rigor with industrial scale, the lab aims to define the computational foundations that will power the next generation of AI, quantum, and scientific breakthroughs,” says Cox. “By bringing together advances in AI, algorithms, and quantum computing under one integrated research effort, we’re creating the conditions to rethink the mathematical and computational foundations of science and engineering.”

The MIT-IBM Computing Research Lab will capitalize on this foundation, expanding both the scientific scope and the ecosystem of collaborators across the Cambridge-Boston region and beyond.

MIT engineers’ virtual violin produces realistic sounds

MIT Latest News - Wed, 04/29/2026 - 5:00am

There is no question that violin-making is an art form. It requires a musician’s ear, a craftsperson’s skill, and a historian’s appreciation of lessons learned over time. Making a violin also takes trust: Violin makers, or luthiers, often must wait until the instrument is finished before they can hear how all their hard work will sound.

But a new tool developed by MIT engineers could help luthiers play around with a violin’s design and tweak its sound even before a single part is carved.

In a study appearing today in the journal npj Acoustics, the MIT team reports on a new “computational violin” — a computer simulation that captures the detailed physics of the instrument and realistically produces the sound of a violin when its strings are plucked.

While there are software programs and plug-ins that enable users to play around with virtual violins, their sounds are typically the result of sampling and averaging over thousands of notes played by actual violins.

In contrast, the new computational violin takes a physics-based approach: It produces sound based on the way the instrument, including its vibrating strings, physically interacts with the surrounding air.

As a demonstration, the researchers applied the computational violin to play two short excerpts: one from Bach’s “Fugue in G Minor,” and another from “Daisy Bell” — a nod to the first song ever produced by a computer-synthesized voice.

The computational violin currently simulates the sound of plucked strings — a type of playing that musicians know as “pizzicato.” Violin bowing, the researchers say, is a much more complicated interaction to model. However, the computational violin represents the first physics-based foundation of a strung violin sound that could one day be paired with a model of bowing to produce realistic, bowed violin music.

For now, the team says the new virtual violin could be used in the initial stages of violin design. Luthiers can tweak certain parameters such as a violin’s wood type or the thickness of its body, and then listen to the sound that the instrument would make in response.

“These days, people try to improve designs little by little by building a violin, comparing the sound, then making a change to the next instrument,” says Yuming Liu, senior research scientist at MIT. “It’s very slow and expensive. Now they can make a change virtually and see what the sound would be.”

“We’re not saying that we can reproduce the artisan’s magic,” adds Nicholas Makris, professor of mechanical engineering at MIT. “We’re just trying to understand the physics of violin sound, and perhaps help luthiers in the design process.”

Makris and Liu’s MIT co-authors include Arun Krishnadas PhD ’23 and former postdoc Bryce Campbell, along with Roman Barnas of the North Bennet Street School.

Sound matrix

The quality of a violin’s sound is determined by its dimensions and design. The instrument is made from thoughtfully crafted parts and materials that all work to generate and amplify sound. In recent years, scientists have sought to understand what artisans have intuited for centuries, in terms of what specific parameters shape a violin’s sound.

In one early effort in 2006, scientists, as part of the Strad3D project, put a rare Stradivarius violin through a CT scanner. The violin was crafted in 1715 by the master violinmaker Antonio Stradivari, during what is considered the “Golden Age” of violin making. To better understand the violin’s anatomy and its relation to sound, the scientists scanned the instrument and produced 600 “slices,” or views, of the violin.

The CT scans are available online for people to view and use as data for their own experiments. For their study, Makris and his colleagues first imported the CT scans into a solid modeling software program to generate a detailed three-dimensional model of the violin. They then ran a finite element simulation, essentially dividing the violin into millions of tiny individual cubes, or “elements.”

For each cube, they noted its material type, such as if a cube from the violin’s back plate is made from maple or spruce, or if a string is made from steel or natural fibers. They then applied physics-based equations of stress and motion to predict how each material element would move in relation to every other element across the instrument.

They also carried out a similar process for the air surrounding the violin, dividing up a roughly cubic-meter volume of air and applying acoustic wave equations to predict how each tiny parcel of air would move and contribute to generating sound.

“The entire thing is a matrix of millions of individual elements,” explains Krishnadas. “And ultimately, you see this whole three-dimensional being, which is the violin and the air all connected and interacting with each other.”

A plucky model

The team then simulated how the new computational violin would sound when plucked. When a violinist plucks a string, they pull the string sideways and let it go, causing the string to vibrate. These vibrations travel across the instrument and inside it; the air’s vibrations are amplified as they travel out of the violin and into the surroundings, where a listener hears the vibrations as sound.

For their purposes, the engineers simulated a simple string pluck by directing one of the virtual violin’s strings to stretch out and then rebound. The simulation computed all the resulting motions and vibrations of the millions of elements in the violin, and the sound that the pluck would produce.
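The team’s full 3-D finite-element model is far beyond a snippet, but the underlying idea (discretize the object, then step physics-based equations of motion forward in time) can be sketched with a 1-D finite-difference plucked string. All parameters here are illustrative and are not taken from the paper:

```python
import numpy as np

# 1-D plucked string via explicit finite differences: a far simpler
# cousin of the paper's 3-D finite-element approach.
N, steps = 100, 500
c, dx, dt = 1.0, 1.0 / 100, 0.005   # wave speed, grid spacing, time step
r2 = (c * dt / dx) ** 2             # Courant number squared; must be <= 1 for stability

x = np.linspace(0, 1, N)
# Triangular initial shape: string pulled sideways near one end, then released.
u = np.where(x < 0.8, x / 0.8, (1 - x) / 0.2)
u_prev = u.copy()                   # released from rest (zero initial velocity)

for _ in range(steps):
    u_next = np.zeros(N)
    # Discrete wave equation update for interior points; endpoints stay
    # pinned at zero, like a string fixed at the nut and bridge.
    u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                    + r2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
    u_prev, u = u, u_next

print(u[N // 2])  # displacement at the string's midpoint after 500 steps
```

Holding a note on the fingerboard, as the paper describes, would amount to pinning an additional interior point of `u` to zero.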

For notes that require pressing down on a violin’s fingerboard, they simulated the same plucking, and in addition, included a condition in which the string is held fixed in the section of the fingerboard where a violinist’s finger would press down.

The researchers carried out this computational process to virtually pluck out the notes in several measures of “Daisy Bell” and Bach’s “Fugue in G Minor.”

“If there’s anything that’s sounding mechanical to it, it’s because we’re using the exact same time function, or standard way of plucking, for each note,” says Makris, who is himself a lute player. “A musician will adapt the way they’re plucking, to put a little more feeling on certain notes than others. But there could be subtleties which we could incorporate and refine.”

As it is, the new computational model is the first to generate realistic sound based on the laws of physics and acoustics. The researchers say that violin makers could use the model to test how a violin might sound when certain dimensions or properties are changed. For instance, when the researchers varied the thickness of the virtual violin’s back plate or changed its wood type, they could hear clear differences in the resulting sounds.

“You can tweak the model, to hear the effect on the sound,” Makris says. “Since everything obeys the laws of physics, including a violin and the music it makes, this approach can add an appreciation to what makes violin sound. But ultimately, we get most of our inspiration from the artisans.”

This work was supported, in part, by an MIT Bose Research Fellowship.

Enabling privacy-preserving AI training on everyday devices

MIT Latest News - Wed, 04/29/2026 - 12:00am

A new method developed by MIT researchers can accelerate a privacy-preserving artificial intelligence training method by about 81 percent. This advance could enable a wider array of resource-constrained edge devices, like sensors and smartwatches, to deploy more accurate AI models while keeping user data secure.

The MIT researchers boosted the efficiency of a technique known as federated learning, which involves a network of connected devices that work together to train a shared AI model.

In federated learning, the model is broadcast from a central server to wireless devices. Each device trains the model using its local data and then transfers model updates back to the server. Data are kept secure because they remain on each device.
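The broadcast-train-average loop can be illustrated with a toy federated-averaging round. Here `local_update` is a hypothetical one-gradient-step least-squares client, not the paper’s method:

```python
import numpy as np

def local_update(model, data, lr=0.1):
    # Hypothetical client step: one gradient descent step on a
    # least-squares loss, using only this device's local data.
    X, y = data
    grad = X.T @ (X @ model - y) / len(y)
    return model - lr * grad

def federated_round(model, client_datasets):
    # One round: broadcast the model, let each client train locally,
    # then average the returned models on the server.
    updates = [local_update(model.copy(), d) for d in client_datasets]
    return np.mean(updates, axis=0)

# Toy example: two clients whose local data both follow y = 2x.
rng = np.random.default_rng(0)
clients = []
for _ in range(2):
    X = rng.normal(size=(20, 1))
    clients.append((X, 2 * X[:, 0]))

model = np.zeros(1)
for _ in range(200):
    model = federated_round(model, clients)
print(model)  # converges toward [2.]; raw data never left the "devices"
```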

But not all devices in the network have enough capacity, computational capability, and connectivity to store, train, and transfer the model back and forth with the server in a timely manner. This causes delays that worsen training performance.

The MIT researchers developed a technique to overcome these memory constraints and communication bottlenecks. Their method is designed to handle a heterogeneous network of wireless devices with varied limitations.

This new approach could make it more feasible for AI models to be used in high-stakes applications with strict security and privacy standards, like health care and finance.

“This work is about bringing AI to small devices where it is not currently possible to run these kinds of powerful models. We carry these devices around with us in our daily lives. We need AI to be able to run on these devices, not just on giant servers and GPUs, and this work is an important step toward enabling that,” says Irene Tenison, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on this technique.

Her co-authors include Anna Murphy ’25, a machine-learning engineer at Lincoln Laboratory; Charles Beauville, a visiting student from Ecole Polytechnique Fédérale de Lausanne (EPFL) in Switzerland and a machine-learning engineer at Flower Labs; and senior author Lalana Kagal, a principal research scientist in the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT. The research will be presented at the IEEE International Joint Conference on Neural Networks. 

Reducing lag time

Many federated learning approaches assume all devices in the network have enough memory to train the full AI model, and stable connectivity to transmit updates back to the server quickly.

But these assumptions fall short with a network of heterogeneous devices, like smartwatches, wireless sensors, and mobile phones. These edge devices have limited memory and computational power, and often face intermittent network connectivity.

The central server usually waits to receive model updates from all devices, then averages them to complete the training round. This process repeats until training is complete.

“This lag time can slow down the training procedure or even cause it to fail,” Tenison says.

To overcome these limitations, the MIT researchers developed a new framework called FTTE (Federated Tiny Training Engine) that reduces the memory and communication overhead needed by each mobile device.

Their framework involves three main innovations.

First, rather than broadcasting the entire model to all devices, FTTE sends a smaller subset of model parameters instead, reducing the memory requirement for each device. Parameters are internal variables the model adjusts during training.

FTTE uses a special search procedure to identify parameters that will maximize the model’s accuracy while staying within a certain memory budget. That limit is set based on the most memory-constrained device.
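The article doesn’t detail FTTE’s actual search procedure; as a loose illustration, a greedy selection of parameters by hypothetical importance scores under a byte budget might look like:

```python
def select_parameters(importance, sizes, budget_bytes):
    # Greedily pick the highest-scoring parameter groups (scores here are
    # hypothetical) that still fit within the memory budget of the most
    # constrained device in the network.
    order = sorted(importance, key=importance.get, reverse=True)
    chosen, used = [], 0
    for name in order:
        if used + sizes[name] <= budget_bytes:
            chosen.append(name)
            used += sizes[name]
    return chosen

# Illustrative parameter groups with made-up scores and byte sizes.
importance = {"layer1.bias": 0.9, "layer4.weight": 0.7, "layer2.weight": 0.4}
sizes = {"layer1.bias": 256, "layer4.weight": 4096, "layer2.weight": 8192}
print(select_parameters(importance, sizes, budget_bytes=5000))
# -> ['layer1.bias', 'layer4.weight']
```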

Second, the server updates the model using an asynchronous approach. Rather than waiting for responses from all devices, the server accumulates incoming updates until it reaches a fixed capacity, then proceeds with the training round.

Third, the server weights updates from each device based on when it received them. In this way, older updates don’t contribute as much to the training process. These outdated data can hold the model back, slowing the training process and reducing accuracy.
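The exact capacity rule and weighting FTTE uses aren’t given in the article; one plausible sketch of semi-asynchronous, staleness-weighted aggregation (the 1/(1 + age) weighting and all names are assumptions) is:

```python
def aggregate(updates, current_round, capacity=3):
    # Semi-asynchronous aggregation sketch: once `capacity` updates have
    # arrived, combine them, down-weighting stale updates so that
    # gradients computed against an old model contribute less.
    batch = updates[:capacity]                      # server proceeds once buffer fills
    weights = [1.0 / (1 + current_round - r) for _, r in batch]
    total = sum(weights)
    return sum(w * u for (u, _), w in zip(batch, weights)) / total

# Each update is (value, round_it_was_computed_in); server is in round 5.
updates = [(10.0, 5), (10.0, 5), (4.0, 1)]  # the last update is 4 rounds stale
print(aggregate(updates, current_round=5))  # closer to 10 than a plain mean (8.0)
```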

“We use this semi-asynchronous approach because we want to involve the least powerful devices in the training process so they can contribute their data to the model, but we don’t want the more powerful devices in the network to stay idle for a long time and waste resources,” Tenison says.

Achieving acceleration

The researchers tested their framework in simulations with hundreds of heterogeneous devices and a variety of models and datasets. On average, FTTE enabled the training procedure to reach completion 81 percent faster than standard federated learning approaches.

Their method reduced the on-device memory overhead by 80 percent and the communication payload by 69 percent, while nearly matching the accuracy of other techniques.

“Because we want the model to train as fast as possible to save the battery life of these resource-constrained devices, we do have a tradeoff in accuracy. But a small drop in accuracy could be acceptable in some applications, especially since our method performs so much faster,” she says.

FTTE also demonstrated effective scalability and delivered higher performance gains for larger groups of devices.

In addition to these simulations, the researchers tested FTTE on a small network of real devices with varying computational capabilities.

“Not everyone has the latest Apple iPhone. In many developing countries, for instance, users might have less powerful mobile phones. With our technique, we can bring the benefits of federated learning to these settings,” she says.

In the future, the researchers want to study how their method could be used to increase the personalized performance of AI models on each device, rather than focusing on the average performance of the model. They also want to conduct larger experiments on real hardware.

This work was funded, in part, by a Takeda PhD Fellowship.

Tropical cyclones relieve drought

Nature Climate Change - Wed, 04/29/2026 - 12:00am

Nature Climate Change, Published online: 29 April 2026; doi:10.1038/s41558-026-02627-8

Droughts and tropical cyclones are two well-known hazards that can interact in dynamic ways. Now, research shows that rainfall from tropical cyclones shortens and weakens droughts in coastal regions but not in a uniform way.

From yield impacts to just transformation of food systems

Nature Climate Change - Wed, 04/29/2026 - 12:00am

Nature Climate Change, Published online: 29 April 2026; doi:10.1038/s41558-026-02625-w

Food security remains a major global challenge, which is only amplified by ongoing climate change. Here, I look back on a 2015 paper on climate change impacts on wheat and discuss subsequent research on agriculture and food security.

The Open Social Web Needs Section 230 to Survive

EFF: Updates - Tue, 04/28/2026 - 4:59pm

If you want to overthrow Big Tech, you’ll need Section 230. The paradigm shift being built with the Open Social Web can put communities back in control of social media infrastructure, and finally end our dependency on enshittified corporate giants. But while these incumbents can overcome multimillion-dollar lawsuits, the small host revolution could be picked off one by one without the protections offered by 230.

The internet as we know it is built on Section 230, a law from the 90s that generally says internet users are legally responsible for their own speech — not the services hosting their speech. The purpose of 230 was to enable diverse forums for speech online, which defined the early internet. These scattered online communities have since been largely captured by a handful of multi-billion dollar companies that found profit in controlling your voice online. While critics are rightly concerned about this new corporate influence and surveillance, some look to diminishing Section 230 as the nuclear option to regain control. 

The thing is, that would be a huge gift to Big Tech, and detrimental to our best shot at actually undermining corporate and state control of speech online. 

Dethroning Big Tech

We’re fed up with legacy social media trapping us in walled gardens, where the world's biggest companies like Google and Meta call the shots. Our communities, and our voices, are being held hostage as billionaires’ platforms surveil, betray, and censor us. We’re not alone in this frustration, and fortunately, people are collaborating globally to build another way forward: the Open Social Web. 

This new infrastructure puts the public’s interest first by reclaiming the principles of interoperability and decentralization from the early internet. In short, it puts protocols over platforms and lets people own their connections with others. Whether you choose a Fediverse app like Mastodon or an ATmosphere app like Bluesky, your audience and community stay within reach. It’s a vision of social media akin to our lives offline: you decide who to be in touch with and how, and no central authority can threaten to snuff out those connections. It’s social media for humans, not advertisers and authoritarians.

Behind that vision is a beautiful mess of protocols bringing the open social media web to life. Each protocol is a unique language for applications, determining how and where messages are sent. While this means there is great variety to these projects, it also means everyone who spins up a server, develops an app, or otherwise hosts others’ speech has skin in the game when it comes to defending Section 230.

What exactly is Section 230?

Section 230 protects freedom of expression online by protecting US intermediaries that make the internet work. Passed in 1996 to preserve the bubbling new communities online, 230 enshrined important protections for free expression and the ability to block or filter speech you don’t want on your site. One portion is credited as the “26 words that created the internet”:

“No provider or user of an interactive computer service shall be treated as the publisher or speaker of any information provided by another information content provider.” 

In other words, this bipartisan law recognizes that speech online relies on intermediaries — services that deliver messages between users — and holding them potentially liable for any message they deliver would only stifle that speech. Intuitively, when harmful speech occurs, the speaker should be the one held accountable. The effect is that most civil suits against users and services based on others' speech can quickly be dismissed, avoiding the most expensive parts of civil litigation. 

Section 230 was never a license to host anything online, however. It does not protect companies that create illegal or harmful content. Nor does Section 230 protect companies from intellectual property claims.

What Section 230 has enabled, however, is the freedom and flexibility for online communities to self-organize. Without the specter of one bad actor exposing the host(s) to serious legal threats, intermediaries can moderate how they see fit or even defer to volunteers within these communities.

Why the Open Social Web Needs Section 230

The superpower of decentralized systems like the Fediverse is the ability for thousands of small hosts to each shoulder some of the burdens of hosting. No single site can assert itself as a necessary intermediary for everyone; instead, all must collaborate to ensure messages reach the intended audience. The result is something superior to any one design or mandate. It is an ecosystem that is greater than the sum of its parts, resilient to disruptions, and free to experiment with different approaches to community governance.

The open social web’s kryptonite, though, is the liability participants can face as intermediaries. The greater the potential liability, the more interference from powerful interests in the form of legal threats, more monetary costs, and less space for nuance in moderation. And in practice, participants may simply stop hosting to avoid those risks. The end result is that only the biggest and most resourced options can survive.

This isn’t just about the hosts in the Open Social Web, like Mastodon instances or Bluesky PDSes. In the U.S., Section 230’s protections extend to internet users when they distribute another person’s speech. For example, Section 230 protects a user who forwards an email with a defamatory statement. On the open social web, that means when you pass along a message to others through sharing, boosting, and quoting, you’re not liable for the other user’s speech. The alternative would be a web where one misclick could open you up to a defamation lawsuit.

Section 230 also applies to the infrastructure stack: Internet service providers, content delivery networks, and domain and hosting providers. Protections even extend to the new experimental infrastructures of decentralized mesh networks.

Beyond the existential risks to the feasibility of indie decentralized projects in the United States, weakening 230 protections would also make services worse. Being able to customize your social media experience from highly curated to totally laissez-faire in the open social web is only possible when the law allows space for private experiments in moderation approaches. The algorithmically driven firehose forced on users by antiquated social media giants is driven by the financial interests of advertisers, and would only be more tightly controlled in a post-230 world.

Defending 230

Laws aimed at changing Section 230’s protections put decentralized projects like the open social web in a uniquely precarious position. That is why we urge lawmakers to carefully consider these impacts. It is also why the proponents and builders of a better web must be vigilant defenders of the legal tools that make their work possible.

The open social web embodies what we are protecting with Section 230. It’s our best chance at building a truly democratic public interest internet, where communities are in control.

With a swipe of a magnet, microscopic “magno-bots” perform complex maneuvers

MIT Latest News - Tue, 04/28/2026 - 11:00am

Under a microscope, a bouquet of lollipop-like structures, each smaller than a grain of sand, waves gently in a petri dish of liquid. Suddenly, they snap together, like the jaws of a Venus flytrap, as a scientist waves a small magnet over the dish. What was previously an assemblage of tiny passive structures has transformed instantly into an active robotic gripper.

The lollipop gripper is one demonstration of a new type of soft magnetic hydrogel developed by engineers at MIT and their collaborators at the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland and the University of Cincinnati. In a study appearing today in the journal Matter, the MIT team reports on a new method to print and fabricate the gel, which can be made into complex, magnetically activated three-dimensional structures.

The new gel could be the basis for soft, microscopic, magnetically responsive robots and materials. Such magno-bots could be used in medicine, for instance to release drugs or grab biopsies when directed by an external magnet.

Making objects move with magnets is nothing new, at least at the macroscale. We can, for example, wave a refrigerator magnet over a pile of paper clips that will trail the magnet in response. And at the microscale, scientists have designed a variety of magnetic “micro-swimmers” — components that are smaller than a millimeter and can be directed remotely by a magnet to squeeze through small spaces. For the most part, these designs work by mixing magnetic particles into a printable resin and pulling the entire swimmer in the direction of an external magnet.

In contrast, the MIT team’s new material can be made into even more complex and deformable structures with micron-scale precision. These capabilities could enable a magnetic millibot to move its individual features and perform more complex maneuvers.

“We can now make a soft, intricate 3D architecture with components that can move and deform in complex ways within the same microscopic structure,” says study author Carlos Portela, the Robert N. Noyce Career Development Associate Professor of Mechanical Engineering at MIT. “For soft microscopic robotics, or stimuli-responsive matter, that could be a game-changing capability.”

The study’s MIT co-authors include graduate students Rachel Sun and Andrew Chen, along with Yiming Ji and Daryl Yee of EPFL and Eric Stewart of the University of Cincinnati.

In a flash

At MIT, Portela’s group develops new metamaterials — materials engineered with unique, microscopic architectures that give rise to beyond-normal material properties. Portela has fabricated a variety of such metamaterials, including extremely tough and stretchy architectures and designs that can manipulate sound and withstand violent impacts.

Most recently, he’s expanded his research to “programmable” materials, which can be engineered to change their properties in response to stimuli, such as certain chemicals, light, and electric and magnetic fields.

From the team’s perspective, magnetic stimuli stand out from the rest.

“With a magnetically responsive material, we have control at a distance and the response is instantaneous,” says co-lead author Andrew Chen. “We don’t have to wait for a slow chemical reaction or physical process, and we can manipulate the material without touching it.”

For the new study, the team aimed to create a magnetically responsive metamaterial that can be made into structures smaller than a millimeter. Researchers typically fabricate microstructures by using two-photon lithography — a high-resolution 3D printing technique that flashes a laser into a small pool of resin. With repeated flashes, the laser traces a microscopic pattern into the resin, which solidifies into the same pattern, ultimately creating a tiny, three-dimensional structure, layer by layer.

While 3D resin printing produces intricate microstructures, using the same process to print magnetic structures has been a challenge. Researchers have tried to combine the resin with magnetic nanoparticles before printing the mixture. But magnetic particles are essentially bits of metal that scatter light or agglomerate and settle out of the resin unintentionally. Scientists have found that any magnetic particles in the resin can reduce the laser’s power at a given spot and weaken the resulting structure, or prevent its printing altogether.

“Directly 3D printing deformable micron-scale structures with a high fraction of magnetic particles is extremely difficult, often involving a tradeoff between magnetic functionality and structural integrity,” says Sun, a co-lead author on the work.

A printed double-dip

The researchers created a new way to fabricate magnetic microstructures, by combining 3D resin printing with a double-dip process. The researchers first applied conventional resin printing to create a microstructure using a typical polymer gel, with no added magnetic particles. Then they dipped the printed gel into a solution containing iron ions, which the gel can absorb. The iron-soaked structure is then dipped again in a second solution of hydroxide ions. The iron ions in the gel bond with the hydroxide ions, creating iron-oxide nanoparticles that are inherently magnetic.

With this new process, the team can print intricate structures smaller than a millimeter and add magnetic properties to the structures after printing. What’s more, they are able to control how magnetic a structure’s individual features can be. They found that, by tuning the laser’s power as they print certain features, they can set how cross-linked, or “tight,” the gel is when printed. The tighter the gel, the fewer magnetic particles it can form. In this way, the researchers can determine how magnetic each tiny feature can be.

“This provides unprecedented design freedom to print multifunctional structures and materials at the microscale,” Sun says.

As a demonstration, the team fabricated ball-and-stick structures resembling tiny lollipops. The structures were less than a millimeter in height, with balls that were smaller than a grain of sand. The researchers printed the lollipops out of polymer gel and infused each ball with different amounts of magnetic particles, giving them various degrees of magnetism. Under a microscope, they observed that when they passed an ordinary refrigerator magnet over the structures, the lollipops pulled toward the magnet to varying degrees, in a configuration that mimicked gripping fingers.

“You could imagine a magnetic architecture like this could act as a small robot that you could guide through the body with an external magnet, and it could latch onto something, for instance to take a biopsy,” Portela says. “That is a vision that others can take from this work.”

The team also fabricated a magnetically responsive, “bistable” switch. They first printed a small, millimeter-long rectangle of polymer gel and attached four tiny, oar-like magnetic structures to either side. Each oar measured about 8 microns thick — about the size of a red blood cell. When the team applied a magnet to one end of the rectangle, the oars flipped toward the magnet, pulling the rectangle in the same direction and locking it in that position. When the magnet was applied to the other side, the oars flipped again, pulling the rectangle, like a switch, in the opposite direction.

“We think this is a new kind of bistable mechanism that could be used, for instance, in a microfluidic device, as a magnetic valve to open or shut some flow,” Portela says. “For now, we’ve figured out how to fabricate magnetic complex architectures at the microscale and also spatially tune their properties. That opens up a lot of interesting ideas for soft miniature robots going forward.”

This research was supported, in part, by the National Science Foundation and the MathWorks seed grant program.

This work was performed, in part, in the MIT.nano fabrication and characterization facilities.

What Anthropic’s Mythos Means for the Future of Cybersecurity

Schneier on Security - Tue, 04/28/2026 - 7:06am

Two weeks ago, Anthropic announced that its new model, Claude Mythos Preview, can autonomously find and weaponize software vulnerabilities, turning them into working exploits without expert guidance. These were vulnerabilities in key software like operating systems and internet infrastructure that thousands of software developers working on those systems failed to find. This capability will have major security implications, compromising the devices and services we use every day. As a result, Anthropic is not releasing the model to the general public, but instead to a ...

‘There’s a day of reckoning coming’: Energy experts expect another spike at the pump

ClimateWire News - Tue, 04/28/2026 - 6:47am
President Donald Trump's jawboning of the markets is sending the wrong signals to oil producers by keeping crude prices artificially low.
