Feed aggregator
Study suggests link between wildfire smoke and violent assaults
Iowa moves to protect agribusiness from climate liability
Prices sag in California’s latest carbon auction
Legislative analyst pans Newsom’s sustainable aviation fuel tax credit proposal
Florida Legislature moves closer to banning net-zero policies statewide
UN data shows 6.5M in Somalia at risk of severe hunger from drought
T. Rowe among signatories to resurrected net-zero alliance
Tackling industry’s burdensome bubble problem
In industrial plants around the world, tiny bubbles cause big problems. Bubbles clog filters, disrupt chemical reactions, reduce throughput during biomanufacturing, and can even cause overheating in electronics and nuclear power plants.
MIT Professor Kripa Varanasi has long studied methods to reduce bubble disruption. In a new study, Varanasi, along with PhD candidate Bert Vandereydt and former postdoc Saurabh Nath, has uncovered the physics behind a promising type of debubbling membrane material that is "aerophilic," from the Greek for "air-loving." The material can be used in systems of all types, allowing engineers to optimize a machine's performance by eliminating bubble-borne disruptions.
“We have figured out the structure of these bubble-attracting membrane materials to allow gas to evacuate in the fastest possible manner,” says Varanasi, the senior author of the study. “Think of trying to push honey through a coffee strainer: It’s not going to go through easily, whereas water will move through, and gas will move through even more easily. But even gas will reach a throughput limit, which depends on the properties of the gas and the liquid involved. By uncovering those limits, our research allows engineers to build better membranes for their systems.”
In the paper, which appears in the journal PNAS this week, the researchers distill their findings into a graph that allows anyone to plot a few characteristics of their system — like the viscosity of their gas and the surrounding liquid — and find the best membrane to make bubble removal near-instantaneous. Using their approach, the research team demonstrated a 1,000-fold acceleration in bubble removal in a bioreactor that’s used in the pharmaceutical industry, food and beverage manufacturing, cosmetics, chemical production, and more.
The researchers say the membranes, which repel water, could be used to improve the throughput of a wide range of advanced systems whose operation has been plagued to date by bubbles.
Better bubble breakers
Companies today try everything to burst bubbles: foam breakers that physically shear them, chemical antifoaming agents, even ultrasound. Each approach has drawbacks in tightly controlled environments like bioreactors, where chemical defoamers can be toxic to cells and mechanical agitation can damage delicate biological materials. Similar limitations apply in other industries where contamination or physical disturbance is unacceptable. As a result, many applications that can tolerate neither chemical defoamers nor mechanical intervention remain fundamentally bottlenecked by foam formation.
“Biomanufacturing has really taken off in the last 10 years,” Vandereydt says. “We’re making a lot more out of biologic systems like cells and bacteria, and our reactors have increased in throughput from 5 million cells per milliliter of solution to 100 million cells per milliliter. However, the bubble evacuation and defoaming haven’t kept up — it’s becoming a significant rate-limiting step.”
To better understand the interaction between aerophilic membranes and bubbles, the MIT researchers used MIT.nano facilities to create a series of tiny porous silicon membranes with holes ranging in size from 10 microns to 200 microns. They coated the membranes with hydrophobic silica nanoparticles.
Placing them on the surface of different liquids, the researchers released single bubbles with varying viscosity and recorded the interaction using high-speed imaging as each collided with the membranes.
“We started by trying to take a very complicated system, like foam being generated in a bioreactor, and study it in the simplest form to understand what’s happening,” Vandereydt says.
At first, the bigger the holes, the faster the bubbles disappeared. The researchers also changed the bubble gas from air to hydrogen, which has half the viscosity, and found the speed of bubble destruction doubled.
But after about a 1,000-fold acceleration in bubble destruction, the researchers hit a wall no matter how big the membrane holes were. They had run up against a different physical limit, which they set out to investigate next.
The researchers then tried changing the viscosity of their liquid, from water to something closer to honey. They found viscosity only plays a role in the speed of bubble destruction when the liquid is at least 200 times as viscous as water. Further experiments revealed the biggest factor for slowing bubble evacuation was inertial resistance in the liquid.
“Through experimentation, we showed there are three different limits [to the speed of bubble destruction],” Vandereydt says. “There is the viscous limit of the gas in a low-viscosity, low-permeability setup. Then there’s the viscous resistance of the liquid in the high-permeability, high-viscosity regime. Then we have the inertial limit of the liquid.”
The team used a bioreactor to experimentally validate their findings and charted them in a map that engineers can use to enter the characteristics of their system and find both the best membrane for their situation and the biggest factor slowing bubble evacuation.
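The three-regime map described above can be sketched as a toy selector. The thresholds and parameter names below are illustrative assumptions based only on the regimes named in this article, not formulas from the paper:

```python
# Toy selector for which physical limit dominates bubble evacuation,
# following the three regimes described in the study. The pore-size
# cutoff is a hypothetical placeholder, not a value from the paper.

WATER_VISCOSITY = 1.0e-3  # Pa*s, reference viscosity of water

def limiting_regime(gas_viscosity, liquid_viscosity, pore_diameter_um):
    """Guess the rate-limiting regime for bubble evacuation.

    Viscosities are in Pa*s; pore diameter is in microns.
    """
    # The paper reports liquid viscosity starts to matter only at
    # roughly 200x the viscosity of water.
    if liquid_viscosity >= 200 * WATER_VISCOSITY:
        return "liquid viscous limit"
    # Very small pores (low permeability) throttle the gas itself.
    if pore_diameter_um < 20:  # illustrative cutoff only
        return "gas viscous limit"
    # Otherwise the liquid's inertia, refilling behind the departing
    # bubble, sets the ceiling.
    return "liquid inertial limit"

print(limiting_regime(1.8e-5, 1.0e-3, 100))  # air in water, large pores
print(limiting_regime(1.8e-5, 1.0e-3, 10))   # air in water, small pores
print(limiting_regime(1.8e-5, 0.5, 100))     # air in a honey-like liquid
```

The real design map is continuous rather than a set of hard cutoffs, but the decision structure — check liquid viscosity, then membrane permeability, then fall back to inertia — mirrors the three limits the researchers identify.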
The science of bubbles
The research should be useful for anyone trying to accelerate the destruction of bubbles in their industrial device, but it also improves our understanding of the physics underpinning bubble dynamics.
“We have identified three different throughput limits, and the physics behind those limits, and we have reduced it to very simple laws,” Nath explains. “How fast you can go is first dictated by the balance between surface tension and inertia. But you may also hit a different limit, where the pores are extremely small, so the gas finds it difficult to move through them. In that case, the viscosity of the gas is meaningful. But you may also have a bubble which was originally in something like honey, which means it’s not enough that the gas moves; the liquid also must refill the space behind it. No matter what your conditions are, you will be switching between these three limits.”
Varanasi says health care companies, chemical manufacturers, and even breweries have expressed interest in the work. His team plans to commercially develop the membranes for industrial use.
“These physical insights allowed us to design membranes that, quite surprisingly, evacuate bubbles even faster than a free liquid-gas interface,” says Varanasi.
The researchers’ design map could also be used to model natural systems and even liquid-liquid systems, which could be used to create membranes that remove oil spills from water or help efficiently extract hydrogen from water-splitting electrodes. Ultimately the biggest beneficiaries of the findings will be companies grappling with bubbles.
“Though small, bubbles quietly dictate the performance limits of many advanced technologies,” says Varanasi. “Our results provide a way to eliminate that bottleneck and unlock entirely new levels of performance across industries. These membranes can be readily retrofitted into existing systems, and our framework allows them to be rapidly designed and optimized for specific applications. We’re excited to work with industry to translate these insights into impact.”
The work was supported, in part, by MIT Lincoln Laboratory and used MIT.nano facilities.
New method could increase LLM training efficiency
Reasoning large language models (LLMs) are designed to solve complex problems by breaking them down into a series of smaller steps. These powerful models are particularly good at challenging tasks like advanced programming and multistep planning.
But developing reasoning models demands an enormous amount of computation and energy due to inefficiencies in the training process. While a few high-power processors continuously grind through complicated queries, others in the group sit idle.
Researchers from MIT and elsewhere found a way to use this computational downtime to efficiently accelerate reasoning-model training.
Their new method automatically trains a smaller, faster model to predict the outputs of the larger reasoning LLM, which the larger model verifies. This reduces the amount of work the reasoning model must do, accelerating the training process.
The key to this system is its ability to train and deploy the smaller model adaptively, so it kicks in only when some processors are idle. By leveraging computational resources that would otherwise have been wasted, it accelerates training without incurring additional overhead.
When tested on multiple reasoning LLMs, the method doubled the training speed while preserving accuracy. This could reduce the cost and increase the energy efficiency of developing advanced LLMs for applications such as forecasting financial trends or detecting risks in power grids.
“People want models that can handle more complex tasks. But if that is the goal of model development, then we need to prioritize efficiency. We found a lossless solution to this problem and then developed a full-stack system that can deliver quite dramatic speedups in practice,” says Qinghao Hu, an MIT postdoc and co-lead author of a paper on this technique.
He is joined on the paper by co-lead author Shang Yang, an electrical engineering and computer science (EECS) graduate student; Junxian Guo, an EECS graduate student; senior author Song Han, an associate professor in EECS, member of the Research Laboratory of Electronics and a distinguished scientist of NVIDIA; as well as others at NVIDIA, ETH Zurich, the MIT-IBM Watson AI Lab, and the University of Massachusetts at Amherst. The research will be presented at the ACM International Conference on Architectural Support for Programming Languages and Operating Systems.
Training bottleneck
Developers want reasoning LLMs to identify and correct mistakes in their critical thinking process. This capability allows them to ace complicated queries that would trip up a standard LLM.
To teach them this skill, developers train reasoning LLMs using a technique called reinforcement learning (RL). The model generates multiple potential answers to a query, receives a reward for the best candidate, and is updated based on the top answer. These steps repeat thousands of times as the model learns.
But the researchers found that the process of generating multiple answers, called rollout, can consume as much as 85 percent of the execution time needed for RL training.
“Updating the model — which is the actual ‘training’ part — consumes very little time by comparison,” Hu says.
This bottleneck occurs in standard RL algorithms because all processors in the training group must finish their responses before they can move on to the next step. Because some processors might be working on very long responses, others that generated shorter responses wait for them to finish.
“Our goal was to turn this idle time into speedup without any wasted costs,” Hu adds.
They sought to use an existing technique, called speculative decoding, to speed things up. Speculative decoding involves training a smaller model called a drafter to rapidly guess the future outputs of the larger model.
The larger model verifies the drafter’s guesses, and the responses it accepts are used for training.
Because the larger model can verify all the drafter’s guesses at once, rather than generating each output sequentially, it accelerates the process.
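The draft-then-verify loop can be sketched in miniature. The "models" below are toy next-token functions standing in for a neural drafter and target; a real system samples from probability distributions, but the greedy accept-until-mismatch logic is the same:

```python
# Minimal greedy speculative-decoding sketch. Both "models" here are
# trivial stand-ins: tokens are integers and each model's rule plays
# the role of a next-token prediction.

def drafter(prefix):
    # Cheap draft model: always guesses last token + 1.
    return prefix[-1] + 1 if prefix else 0

def target(prefix):
    # Expensive target model: counts up, but wraps to 0 after 5.
    return prefix[-1] + 1 if prefix and prefix[-1] < 5 else 0

def speculative_step(prefix, k=4):
    """Drafter proposes k tokens; target verifies them in one batch.

    Returns (extended_sequence, number_of_draft_tokens_accepted).
    """
    # 1. Drafter autoregressively proposes k tokens (fast, sequential).
    cur = list(prefix)
    draft = []
    for _ in range(k):
        tok = drafter(cur)
        draft.append(tok)
        cur.append(tok)

    # 2. Target checks all k positions; in a real system this is a
    #    single parallel forward pass rather than k sequential ones.
    out = list(prefix)
    n_accepted = 0
    for tok in draft:
        t = target(out)
        if t == tok:          # draft matches target: accept for free
            out.append(tok)
            n_accepted += 1
        else:                 # first mismatch: keep target's token, stop
            out.append(t)
            break
    return out, n_accepted

print(speculative_step([0], k=4))  # drafter agrees with target: 4 accepted
print(speculative_step([4], k=4))  # target wraps at 5: only 1 accepted
```

When the drafter tracks the target well, each verification pass yields several tokens for the cost of one target step; when it drifts (as a static drafter would during RL training), acceptances collapse and the speedup vanishes, which is why TLT retrains the drafter continuously.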
An adaptive solution
But in speculative decoding, the drafter model is typically trained only once and remains static. This makes the technique infeasible for reinforcement learning, since the reasoning model is updated thousands of times during training.
A static drafter would quickly become stale and useless after a few steps.
To overcome this problem, the researchers created a flexible system known as “Taming the Long Tail,” or TLT.
The first part of TLT is an adaptive drafter trainer, which uses free time on idle processors to train the drafter model on the fly, keeping it well-aligned with the target model without using extra computational resources.
The second component, an adaptive rollout engine, manages speculative decoding to automatically select the optimal strategy for each new batch of inputs. This mechanism changes the speculative decoding configuration based on the training workload features, such as the number of inputs processed by the draft model and the number of inputs accepted by the target model during verification.
In addition, the researchers designed the draft model to be lightweight so it can be trained quickly. TLT reuses some components of the reasoning model training process to train the drafter, leading to extra gains in acceleration.
“As soon as some processors finish their short queries and become idle, we immediately switch them to do draft model training using the same data they are using for the rollout process. The key mechanism is our adaptive speculative decoding — these gains wouldn’t be possible without it,” Hu says.
They tested TLT across multiple reasoning LLMs that were trained using real-world datasets. The system accelerated training by 70 to 210 percent while preserving the accuracy of each model.
As a free byproduct, the trained drafter model can be reused for efficient deployment at no extra cost.
In the future, the researchers want to integrate TLT into more types of training and inference frameworks and find new reinforcement learning applications that could be accelerated using this approach.
“As reasoning continues to become the major workload driving the demand for inference, Qinghao’s TLT is great work to cope with the computation bottleneck of training these reasoning models. I think this method will be very helpful in the context of efficient AI computing,” Han says.
This work is funded by the MIT-IBM Watson AI Lab, the MIT AI Hardware Program, the MIT Amazon Science Hub, Hyundai Motor Company, and the National Science Foundation.
Mixing generative AI with physics to create personal items that work in the real world
Have you ever had an idea for something that looked cool, but wouldn’t work well in practice? When it comes to designing things like decor and personal accessories, generative artificial intelligence (genAI) models can relate. They can produce creative and elaborate 3D designs, but when you try to fabricate such blueprints into real-world objects, they usually don’t sustain everyday use.
The underlying problem is that genAI models often lack an understanding of physics. While a tool like Microsoft’s TRELLIS system can create a 3D model from a text prompt or image, its design for a chair, for example, may be unstable or have disconnected parts. The model doesn’t fully understand what your intended object is designed to do, so even if your seat can be 3D printed, it would likely fall apart under the force of someone sitting down.
In an attempt to make these designs work in the real world, researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) are giving generative AI models a reality check. Their “PhysiOpt” system augments these tools with physics simulations, making blueprints for personal items such as cups, keyholders, and bookends work as intended when they’re 3D printed. It rapidly tests if the structure of your 3D model is viable, gently modifying smaller shapes while ensuring the overall appearance and function of the design is preserved.
You can simply type what you want to create and what it’ll be used for into PhysiOpt, or upload an image to the system’s user interface, and in roughly half a minute, you’ll get a realistic 3D object to fabricate. For example, CSAIL researchers prompted it to generate a “flamingo-shaped glass for drinking,” which they 3D printed into a drinking glass with a handle and base resembling the tropical bird’s leg. As the design was generated, PhysiOpt made tiny refinements to ensure the design was structurally sound.
“PhysiOpt combines GenAI and physically-based shape optimization, helping virtually anyone generate the designs they want for unique accessories and decorations,” says MIT electrical engineering and computer science (EECS) PhD student and CSAIL researcher Xiao Sean Zhan SM ’25, who is a co-lead author on a paper presenting the work. “It’s an automatic system that allows you to make the shape physically manufacturable, given some constraints. PhysiOpt can iterate on its creations as often as you’d like, without any extra training.”
This approach enables you to create a “smart design,” where the AI generator crafts your item based on users’ specifications, while considering functionality. You can plug in your favorite 3D generative AI model, and after typing out what you want to generate, you specify how much force or weight the object should handle. It’s a neat way to simulate real-world use, such as predicting whether a hook will be strong enough to hold up your coat. Users also specify what materials they’ll fabricate the item with (such as plastics or wood), and how it’s supported — for instance, a cup stands on the ground, whereas a bookend leans against a collection of books.
Given the specifics, PhysiOpt begins to iteratively optimize the object. Under the hood, it runs a physics simulation called a “finite element analysis” to stress test the design. This comprehensive scan provides a heat map over your 3D model, which indicates where your blueprint isn’t well-supported. If you were generating, say, a birdhouse, you may find that the support beams under the house were colored bright red, meaning the house will crumble if it’s not reinforced.
PhysiOpt can create even bolder pieces. Researchers saw this versatility firsthand when they fabricated a steampunk (a style that blends Victorian and futuristic aesthetics) keyholder featuring intricate, robotic-looking hooks, and a “giraffe table” with a flat back that you can place items on. But how did it know what “steampunk” is, or even how such a unique piece of furniture should look?
Remarkably, the answer isn’t extensive training — at least, not from the researchers. Instead, PhysiOpt uses a pre-trained model that’s already seen thousands of shapes and objects. “Existing systems often need lots of additional training to have a semantic understanding of what you want to see,” adds co-lead author Clément Jambon, who is also an MIT EECS PhD student and CSAIL researcher. “But we use a model with that feel for what you want to create already baked in, so PhysiOpt is training-free.”
By working with a pre-trained model, PhysiOpt can use “shape priors,” or knowledge of how shapes should look based on earlier training, to generate what users want to see. It’s sort of like an artist recreating the style of a famous painter. Their expertise is rooted in closely studying a variety of artistic approaches, so they’ll likely be able to mirror that particular aesthetic. Likewise, a pre-trained model’s familiarity with shapes helps it generate 3D models.
CSAIL researchers observed that PhysiOpt’s visual know-how helped it create 3D models more efficiently than “DiffIPC,” a comparable method that simulates and optimizes shapes. When both approaches were tasked with generating 3D designs for items like chairs, CSAIL’s system was nearly 10 times faster per iteration, while creating more realistic objects.
PhysiOpt presents a potential bridge between ideas and real-world personal items. What you may think is a great idea for a coffee mug, for instance, could soon make the jump from your computer screen to your desk. And while PhysiOpt already does the stress-testing for designers, it may soon be able to predict constraints such as loads and boundaries, instead of users needing to provide those details. This more autonomous, common-sense approach could be made possible by incorporating vision language models, which combine an understanding of human language with computer vision.
What’s more, Zhan and Jambon intend to remove the artifacts, or random fragments that occasionally appear in PhysiOpt’s 3D models, by making the system even more physics-aware. The MIT scientists are also considering how they can model more complex constraints for various fabrication techniques, such as minimizing overhanging components for 3D printing.
Zhan and Jambon wrote their paper with MIT-IBM Watson AI Lab Principal Research Scientist Kenney Ng ’89, SM ’90, PhD ’00 and two CSAIL colleagues: undergraduate researcher Evan Thompson and Assistant Professor Mina Konaković Luković, who is a principal investigator at the lab.
The researchers’ work was supported, in part, by the MIT-IBM Watson AI Laboratory and the Wistron Corp. They presented it in December at the Association for Computing Machinery’s SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia.
☺️ Trust Us With Your Face | EFFector 38.4
Do you remember the last time you were carded at a bar or restaurant? It was probably such a quick and normal experience that you barely remember it. But have you ever been carded to use the internet? Being required to present your ID to access content online is becoming a reality for many. We're explaining the dangers of age verification laws, and the latest in the fight for privacy and free speech online, with our EFFector newsletter.
For over 35 years, EFFector has been your guide to understanding the intersection of technology, civil liberties, and the law. This issue covers Discord's controversial rollout of mandatory age verification, a leaked Meta memo on face-scanning smart glasses, and a Super Bowl surveillance ad that said the quiet part out loud.
Prefer to listen in? In our audio companion, EFF Associate Director of State Affairs Rin Alajaji explains how online age verification hurts free expression for all users. Find the conversation on YouTube or the Internet Archive.
EFFECTOR 38.4 - ☺️ Trust Us With Your Face
Want to stay in the fight for privacy and free speech online? Sign up for EFF's EFFector newsletter for updates, ways to take action, and new merch drops. You can also fuel the fight against mandatory age verification laws when you support EFF today!
How to Pick Your Password Manager
Phishing and data breaches are a constant on the internet. The single best defense against both is to use a password manager to generate and automatically fill a unique password for every site. While 1Password has recently raised its prices, and researchers have recently published potential flaws in some implementations, using a password manager is still a critical investment in keeping yourself safe on the internet. There are free options, and even ones built into your operating system or browser. We can help you choose.
Password managers protect you from phishing by memorizing the connection between a password and a website, and, if you use the browser integration, filling each password only on the website it belongs to. They protect you from data breaches by making it feasible to use a long, random, unique password on each site. When bad actors get their hands on a data breach that includes email addresses and password data, they will typically try to crack those passwords, and then attempt to log in on dozens of different websites with the email address/password combinations from the breach. If you use the same password everywhere, this can turn one site’s data breach into a personal disaster, as many of your accounts get compromised at once.
In recent years, the built-in password managers in browsers and operating systems have come a long way but still stumble on cross-platform support. Within the Apple ecosystem, you can use iCloud Keychain, with support for generating passwords, autofill in Safari, and end-to-end encrypted synchronization, so long as you don’t need access to your passwords in Google Chrome or Android (Windows is supported, though). Within the Google ecosystem, you can use Google Password Manager, which also supports password generation, autofill, and sync. Crucially, though, Google Password Manager does not end-to-end encrypt credentials unless you manually enable on-device encryption. Firefox and Microsoft also offer password managers. All of these platform-based options are free, and may already be on your devices. But they tend to lock you into a single-vendor world.
There are also a variety of third-party password managers, some paid, and some free, and some open source. Most of these have the advantage of letting you sync your passwords across a wide variety of devices, operating systems, and browsers. Here are four key things to look out for. First, when synchronizing between devices, your passwords should be encrypted end-to-end using a password that only you know (a “master” or “primary” password). Second, support for autofill can reduce the chance that you’ll get phished. Third, security audits performed by third parties can increase confidence that the software really does what it is designed to do. And finally, of course, random generation of unique passwords is a must.
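The "random generation of unique passwords" requirement is worth making concrete. A minimal sketch using only Python's standard-library `secrets` module (the cryptographically secure randomness source; `random` is not suitable for this):

```python
import secrets
import string

def generate_password(length=24):
    """Generate a long, random password from letters, digits, and symbols."""
    alphabet = string.ascii_letters + string.digits + string.punctuation
    # secrets.choice draws from the OS's cryptographic RNG, so each
    # character is independent and unpredictable.
    return "".join(secrets.choice(alphabet) for _ in range(length))

# One distinct password per site, so a breach at one site can't cascade
# into the others.
passwords = {site: generate_password() for site in ("example.com", "shop.example")}
print(passwords["example.com"])  # different on every run
```

This is exactly what a password manager does behind its "generate" button; the manager's added value is storing the result end-to-end encrypted and filling it only on the matching site.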
Don’t let uncertainty or price increases dissuade you from using a password manager. There’s a good choice for everyone, and using one can make your online life a lot safer. Want more help choosing? Check out our Surveillance Self-Defense guide.
Poisoning AI Training Data
All it takes to poison AI training data is to create a website:
I spent 20 minutes writing an article on my personal website titled “The best tech journalists at eating hot dogs.” Every word is a lie. I claimed (without evidence) that competitive hot-dog-eating is a popular hobby among tech reporters and based my ranking on the 2026 South Dakota International Hot Dog Championship (which doesn’t exist). I ranked myself number one, obviously. Then I listed a few fake reporters and real journalists who gave me permission….
Less than 24 hours later, the world’s leading chatbots were blabbering about my world-class hot dog skills. When I asked about the best hot-dog-eating tech journalists, Google parroted the gibberish from my website, both in the Gemini app and AI Overviews, the AI responses at the top of Google Search. ChatGPT did the same thing, though Claude, a chatbot made by the company Anthropic, wasn’t fooled...
