MIT Latest News

MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.

MIT study reveals a new role for cell membranes

Thu, 04/16/2026 - 12:00am

Cells are enveloped by a lipid membrane that gives them structure and provides a barrier between the cell and its environment. However, evidence has recently emerged suggesting that these membranes do more than simply provide protection — they also influence the behavior of the protein receptors embedded in them.

A new study from MIT chemists adds further support to that idea. The researchers found that changing the composition of the cell membrane can alter the function of a membrane receptor that promotes proliferation.

Epidermal growth factor receptor (EGFR) can be locked into an overactive state when the cell membrane has a higher than normal concentration of negatively charged lipids, the researchers found. This may help to explain why cancer cells with high levels of those lipids enter a highly proliferative state that allows them to divide uncontrollably.

“The longstanding dogma of what a membrane does is that it’s just a scaffold, an organizational structure. However, there have been increasing observations that suggest that maybe these membrane lipids are actually playing a role in receptor function,” says Gabriela Schlau-Cohen, the Robert T. Haslam and Bradley Dewey Professor of Chemistry at MIT and the senior author of the study.

The findings open up the possibility of discovering new ways to treat tumors by neutralizing the negative charge, which might turn down EGFR signaling, she adds.

Shwetha Srinivasan PhD ’22 is the lead author of the paper, which appears in the journal eLife. Other authors include former MIT postdocs Xingcheng Lin and Raju Regmi, Xuyan Chen PhD ’25, and Bin Zhang, an associate professor of chemistry at MIT.

Receptor dynamics

The EGF receptor, which is found on cells that line body surfaces and organs, is one of many receptors that help control cell growth. Some types of cancer, especially lung cancer and glioblastoma, overexpress the EGF receptor, which can lead to uncontrolled growth.

Like most receptor proteins, EGFR spans the entire cell membrane. Until recently, it has been challenging to study how signals are conveyed across the entire receptor, because of the difficulty of creating membranes that have proteins going all the way through them and then studying both ends of those proteins.

To make it easier to study these signaling processes, Schlau-Cohen’s lab uses nanodiscs, a special type of self-assembling membrane that mimics the cell membrane. When making these discs, the researchers can embed receptors in them, allowing the team to study the function of the full-length receptor.

Using a technique called single molecule FRET (fluorescence resonance energy transfer), the researchers can study how the shape of the receptor changes under different conditions. Single molecule FRET allows them to measure the distance between different parts of the protein by labeling them with fluorescent tags and then measuring how fast energy travels between the tags.
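
The distance readout comes from how steeply energy transfer depends on separation. In the standard Förster relation (textbook FRET theory, not a result of this study), the transfer efficiency is

E = 1 / (1 + (r / R₀)⁶)

where r is the distance between the two fluorescent tags and R₀ is the Förster radius, the separation at which transfer is 50 percent efficient, typically a few nanometers. Because of the sixth-power dependence, even small conformational changes in the receptor produce large, easily measured changes in signal.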

In previous work, Schlau-Cohen and Zhang used single molecule FRET and molecular dynamics simulations to reveal what happens when EGFR binds to EGF. They found that this binding causes the transmembrane section of the receptor to change shape, and that shape-shift triggers the section of the receptor that extends inside the cell to activate cellular machinery that stimulates growth.

Stuck in an overactive state

In the new study, the researchers used a similar approach to investigate how altering the composition of the membrane affects the function of the receptor. First, they explored how elevated levels of negatively charged lipids would affect the cell membrane and EGFR function.

Normally, about 15 percent of the cell membrane is made up of negatively charged lipids. The researchers found that membranes with negatively charged lipids in the range of 15 to 30 percent behaved normally, but if that level reached 60 percent, then the EGFR receptor would become locked into an active state.

In that state, the pro-growth signaling pathway is turned on all the time, even when no EGF is bound to the receptor. Many cancer cells show increased levels of these lipids, and this mechanism could help to explain why those cells are able to grow unchecked, Schlau-Cohen says.

“If the membrane has high levels of negatively charged lipids, then it’s always in that open conformation. It doesn’t matter if ligand is bound or unbound,” she says. “It’s always in the conformation that’s telling the cell to grow, not just when EGF binds.”

The researchers also used this system to explore the role of cholesterol in EGFR function. When the researchers created nanodiscs with elevated cholesterol levels, they found that the membranes became more rigid, and this rigidity suppressed EGFR signaling.

The research was funded by the National Institutes of Health and MIT’s Department of Chemistry.

Waves hit different on other planets

Thu, 04/16/2026 - 12:00am

On a calm day, a light breeze might barely ripple the surface of a lake on Earth. But on Saturn’s largest moon Titan, a similar mild wind would kick up 10-foot-tall waves.

This otherworldly behavior is one prediction from a new wave model developed by scientists at MIT. The model is the first to capture the full dynamics of waves and what it takes to whip them up under different planetary conditions.

In a study published in the Journal of Geophysical Research: Planets, the MIT team introduces the model, which they’ve aptly coined “PlanetWaves.” They apply the model to predict how waves behave on planetary bodies that might host liquid lakes and oceans, including Titan, ancient Mars, and three planets beyond the solar system.

The model predicts that a gentle wind would be enough to stir up huge waves on Titan, where lakes are filled with light liquid hydrocarbons. In contrast, it would take hurricane-force winds to barely move the surface of a lake on the exoplanet 55 Cancri e, which is thought to be a lava world covered in hot, dense liquid rock.

“On Earth, we get accustomed to certain wave dynamics,” says study author Andrew Ashton, associate scientist at the Woods Hole Oceanographic Institution (WHOI) and faculty member of the MIT-WHOI Joint Program. “But with this model, we can see how waves behave on planets with different liquids, atmospheres, and gravity, which can kind of challenge our intuition.”

The team is particularly keen to understand how waves form on Titan. The large moon is the only planetary body in the solar system, other than Earth, that is known to currently host liquid lakes.

“Anywhere there’s a liquid surface with wind moving over it, there’s potential to make waves,” says Taylor Perron, the Cecil and Ida Green Professor of Earth, Atmospheric and Planetary Sciences at MIT. “For Titan, the tantalizing thing is that we don’t have any direct observation of what these lakes look like. So we don’t know for sure what kind of waves might exist there. Now this model gives us an idea.”

If humans were one day to send a probe to Titan’s lakes, the team’s new model could inform the design of wave-resilient spacecraft.

“You would want to build something that can withstand the energy of the waves,” says lead author Una Schneck, a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “So it’s important to know what kind of waves these instruments would be up against.”

The study’s co-authors include Charlene Detelich and Alexander Hayes of Cornell University and Milan Curcic of the University of Miami.

“The first puff”

When wind blows over water, it creates waves that can be strong enough to carve out coastlines and redistribute sediment brought to the coast by rivers. Through this process, waves can be a significant force in shaping a landscape over time. Schneck and her colleagues, who study landscape evolution on Earth and other planets, wondered how waves might behave on other worlds where gravity, atmospheric conditions, and liquid compositions can be very different from what is found on Earth.

“There have been attempts in the past to predict how gravity will affect waves on other planets,” Schneck says. “But they don’t quantify other factors such as the composition of the liquid that is making waves. That was the big leap with this project.”

She and her colleagues developed a full wave model that takes into account not just a planet’s gravity, but also properties of its surface liquid, such as its density, viscosity, and surface tension, or how resistant a liquid is to rippling. The team also incorporated the effect of a planet’s atmospheric pressure. With this model, they aimed to predict how a planet’s liquid surface would evolve in response to winds of a given speed.
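
To see how those liquid properties enter the problem, consider the classical threshold for raising the first ripples, which is set by the minimum phase speed of capillary-gravity waves. The sketch below is a textbook calculation, not the PlanetWaves model itself, and the Titan fluid values are approximate literature figures:

```python
def min_ripple_speed(g, sigma, rho):
    """Minimum phase speed (m/s) of capillary-gravity waves on a deep liquid.

    The dispersion relation c**2 = g/k + sigma*k/rho has its minimum at
    wavenumber k = sqrt(rho*g/sigma), giving c_min = (4*g*sigma/rho)**0.25.
    Wind must exceed roughly this speed before the first ripples can grow.
    """
    return (4.0 * g * sigma / rho) ** 0.25

# Earth: water at about 20 C
print(min_ripple_speed(g=9.81, sigma=0.072, rho=1000.0))  # ~0.23 m/s

# Titan: liquid methane near 90 K (approximate values)
print(min_ripple_speed(g=1.35, sigma=0.017, rho=450.0))   # ~0.12 m/s
```

By this crude measure alone, Titan's lakes begin to ripple at lower wind speeds than water on Earth; the full model additionally accounts for atmospheric density, viscosity, and how ripples grow into full waves.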

“Imagine a completely still lake,” Ashton offers. “We’re trying to figure out the first puff that will make those first little tiny ripples, on up to a full ocean wave.”

Making waves

The team first tested their new model with wave data on Earth. They used measurements of waves that were collected by buoys across Lake Superior over 20 years. They found that the model, which took into account Earth’s gravity, the composition of the liquid (water), and atmospheric conditions, was able to accurately predict the wind speeds required to generate waves across the lake, and how high the waves grew at a given wind strength.

The researchers then applied the model to predict how waves would behave on other planetary bodies that are known to host liquid on their surface. They looked first to Titan, where NASA’s Cassini mission previously captured radar images of lake formations, which scientists suspect are currently filled with liquid methane and ethane. The team used the new model to calculate the moon’s wave dynamics given its gravity, atmospheric pressure, and liquid composition.

They found that on Titan, it’s surprisingly easy to make waves. The relatively light liquid, combined with low gravity and atmospheric pressure, means that even a gentle wind can stir up huge waves.

“It kind of looks like tall waves moving in slow motion,” Schneck says. “If you were standing on the shore of this lake, you might feel only a soft breeze but you would see these enormous waves flowing toward you, which is not what we would expect on Earth.”

The researchers also considered wave activity on ancient Mars. The Red Planet hosts many impact basins that may have once been filled with water, before the planet’s atmosphere dissipated and the water evaporated away. One of those basins is Jezero Crater, which is currently being explored by NASA’s Perseverance rover. With the new model, the team showed that as Mars’ atmosphere gradually disappeared, reducing its pressure over time, it would have required stronger winds to make the same waves.

Beyond the solar system, the researchers applied the model to three different exoplanets. The first, LHS 1140b, is a “cool super-Earth,” meaning that it is colder and larger than Earth. The planet hosts liquid water, and because it is so large, its gravity is stronger than Earth’s. The model showed that a wind of a given speed would generate much smaller water waves on the super-Earth than on Earth, due to that stronger gravity.

The team also considered Kepler-1649b, a Venus-like planet with gravity similar to Earth’s and lakes of sulfuric acid, which is about twice as dense as water. Under these conditions, the researchers found that it would take far stronger winds than on Earth to make even a ripple on the exo-Venus.

This effect is even more pronounced for the third planet, 55 Cancri e — a lava world that has both a higher gravity than Earth and a much denser, more viscous surface liquid. Scientists suspect that the planet hosts oceans of liquefied rock. In this environment, the model predicts that hurricane-force winds on Earth, of about 80 miles per hour, would generate only small waves of a few centimeters in height on the lava world.

Aside from illuminating new ways that waves can behave on other planets, Perron hopes the model will answer longstanding questions of planetary landscape formation.

“Unlike on Earth where there is often a delta where a river meets the coast, on Titan there are very few things that look like deltas, even though there are plenty of rivers and coasts. Could waves be responsible for this?” Perron wonders. “These are the kinds of mysteries that this model will help us solve.”

This work was supported, in part, by NASA and the National Science Foundation.

Geothermal energy turns red hot

Wed, 04/15/2026 - 7:30pm

Drill deep and drill differently. That’s what’s needed to exploit the nearly bottomless promise of geothermal energy in the United States and around the globe, according to participants at the 2026 Spring Symposium, titled “Next-generation geothermal energy for firm power.” 

Sponsored by the MIT Energy Initiative (MITEI), the March 4 event drew 120 people, including MIT faculty and students, investors, and representatives from startups, multinational energy companies, and zero-carbon advocacy groups.

“The time feels right to pull together good policy, great corporate partners, and the research and technological innovations … to make significant advances in the widespread utilization of this incredible resource,” said Karen Knutson, the vice president for government affairs at MIT, in welcoming attendees.

Technology from the oil and gas industry helped usher in a first wave of geothermal energy. But chewing vertical holes through rocks in traditional ways can’t deliver on the full potential of this resource. And the real treasure — geologic formations radiating heat at 374 degrees Celsius and above — lies kilometers beneath Earth’s surface, far beyond the reach of most conventional drilling rigs.

Panelists explored the many innovations in accessing and circulating subsurface heat, as well as digging to unprecedented depths through extremely challenging geological conditions, discussing advanced drilling technologies, materials, and subsurface imaging.

This work is needed urgently, as demand for firm (always-on) power skyrockets in response to the electrification of industry and rise of data centers, said Pablo Dueñas‑Martínez, a MITEI research scientist. “We cannot get through this only with solar and wind; we need dense, deployable energy like geothermal.”

From “minuscule” to “almost inexhaustible” energy

In her opening remarks, Carolyn Ruppel, MITEI’s deputy director of science and technology, noted that despite decades of successful projects in places like the United States, Kenya, Iceland, Indonesia, and Turkey, geothermal still contributes only a “minuscule” share of global electricity. “The tremendous heat beneath our feet remains largely untouched,” she said.

Citing MIT’s milestone 2006 study “The Future of Geothermal Energy,” keynote speaker John McLennan, a professor at the University of Utah and co–principal investigator of the U.S. Department of Energy’s Utah FORGE enhanced geothermal systems (EGS) field laboratory, reminded attendees that the continental crust holds enough accessible heat to supply power for generations. “For practical purposes, it’s almost inexhaustible,” he said.

The question now, he said, is how to access that resource economically and responsibly.

At the Utah FORGE test site, McLennan has been part of a team investigating one method — adapting the oil and gas industry’s drilling and reservoir engineering expertise for hot, relatively impermeable rocks.

The project has drilled multiple deep wells into crystalline granitic rock, including a pair of wells that have been hydraulically stimulated and connected. In a recent circulation test, cold water was pumped down one well, flowed through fractures, and returned hot through the other.

“On a commercial basis … this hot water would be converted to electricity at the surface,” McLennan said. “This has now been demonstrated at Utah FORGE.”

The basic physics, in other words, work. The harder problems now are cost, repeatability, and scale.

Geothermal on the grid

Several panels highlighted the fact that next-generation geothermal is already beginning to deliver firm power.

At Lightning Dock, New Mexico, geothermal company Zanskar used a probabilistic modeling framework that simulated thousands of possible subsurface configurations to identify where to drill a new production well at an underperforming geothermal field. By thermal power delivered, the resulting well is now “the most-productive pumped geothermal well in the country,” said Joel Edwards, Zanskar’s co-founder and chief technology officer — powering the entire 15 megawatt (MW) Lightning Dock plant from a single well.

This data-driven approach enables the company to find and develop new resources faster and more cheaply than traditional methods, said Edwards.

José Bona, the director of next-generation geothermal at Turboden, explained how his company’s technology uses specialized turbines to circulate organic fluids that conserve heat better than water, and then convert that heat efficiently into electrical power. This closed-cycle technology can utilize low- to medium-temperature heat sources. Turboden is supplying its technology both to the Lightning Dock geothermal facility in New Mexico and to Fervo Energy’s Cape Station in southwest Utah, an EGS project that will begin delivering 100 MW of baseload, clean electricity to the grid this year, aiming for 500 MW by 2028.

In Geretsried, Germany, Eavor has developed its own proprietary closed-loop system by creating a kind of underground radiator.

“We drilled to about 4.5 kilometers vertical depth, completed six horizontal multilateral pairs, and we delivered the first power to the grid in December,” said Christian Besoiu, the team lead of technology development at Eavor. The project will ultimately be capable of supplying 8.2 MW of electricity to the 32,000 households in the Bavarian town of Geretsried and 64 MW of thermal energy to the district in which the town lies, prioritizing heat when needed.

Beyond oil and gas technology

Early geothermal exploration typically targeted preexisting faults using vertical wells left by oil and gas drilling. Today, companies are experimenting with rock fracturing at multiple subsurface levels and creating heat reservoirs in previously untenable formations by using propping materials.

“Instead of vertical wells, we’re going to horizontal wells, we’re going to cased wells, we’re introducing proppants [solid materials that hold open hydraulically fractured rock] … we do dozens of stages with these designs,” said Koenraad Beckers, the geothermal engineering lead at ResFrac. This shale-style approach has already yielded much higher flow rates and more-reliable performance than earlier EGS.

Some current geothermal wells manage to achieve depths close to 15,000 feet using the oil and gas industry’s polycrystalline diamond compact drill bits, which can bore through hard rock like granite at more than 100 feet per hour. But these bits and the rigs that drive them are no match for conditions six or more kilometers down — and it is at those depths that the heat on hand begins to make an overwhelming economic case for geothermal.

“If we go to around 300 to 350 degrees, your power potential increases 10 times,” said Lev Ring, CEO of Sage Geosystems. “At that point, with reasonable CAPEX [capital expenditure] assumptions, levelized cost of electricity [a metric for comparing the cost of electricity across different generation technologies] is around 4 cents, and geothermal becomes cheaper than any other alternative.”
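
For reference, the levelized cost of electricity in its standard form (a generic definition, not specific to Sage Geosystems’ analysis) divides lifetime discounted costs by lifetime discounted generation:

LCOE = [ Σ C_t / (1 + d)^t ] / [ Σ E_t / (1 + d)^t ]

where C_t is the total cost in year t (capital, operations, and any fuel), E_t is the electricity generated that year, and d is the discount rate. Increasing per-well output roughly tenfold at comparable capital cost is what pushes the LCOE toward the 4-cent figure Ring cites.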

But “at 10 kilometers down … the largest land rigs in existence today cannot handle it,” Ring added. “We need alternatives — new materials, new ways to handle pressure, maybe even welding on the rig … a whole space that has not been addressed yet.”

One panel, featuring Quaise Energy, an MIT spinout with MITEI roots, spotlighted just how radically drilling might change. Co-founder Matt Houde described the company’s millimeter-wave drilling approach, which uses high-frequency electromagnetic waves derived from fusion research to vaporize rock instead of grinding it, as with conventional drilling. In a recent Texas field test, the team drilled 100 meters of hard basement rock in about a month, and is now planning kilometer-scale trials aimed at reaching superhot rock temperatures around 400 C, where each well could deliver many times the power of today’s geothermal projects.

Innovations for deep drilling

Moderating a panel on “MIT innovations for next-generation geothermal,” Andrew Inglis, the venture builder in residence with MIT Proto Ventures, whose position is sponsored by the U.S. Department of Energy GEODE program, framed the Institute’s role in getting such hard-tech ideas out of the lab and into the field. “The way MIT thinks about tech development, uniquely from other universities, can play a very singular role in geothermal commercial liftoff,” he said.

Materials researchers on that panel illustrated the point. Matěj Peč, an associate professor of geophysics in the Department of Earth, Atmospheric and Planetary Sciences, outlined work to build sensors that survive up to 900 C so that rock deformation and fracturing can be studied at supercritical conditions. Michael Short, the Class of 1941 Professor in the Department of Nuclear Science and Engineering, and C. Cem Tasan, the POSCO Associate Professor of Metallurgy in the Department of Materials Science and Engineering, respectively described coatings and alloys designed to resist corrosion, fouling, and cracking in extreme environments. In response to audience questions after their talks, Tasan made an important point, highlighting how academics need input from industry to understand the real-world problems (e.g., corrosion of pipes by geofluids) that require engineering solutions.

Other researchers are rethinking how to detect geothermal resources: Wanju Yuan, a research scientist with the Geological Survey of Canada at Natural Resources Canada, is using satellite imagery and thermal infrared sensing to screen vast regions for subtle hot spots and structures, processing thousands of images to identify promising sites in just a few months of work. “It’s a very efficient way to screen potential areas before more expensive exploration, thus reducing exploration and drilling risks,” he said.

Policy as backdrop, not center stage

Policy loomed in the background of many discussions — from bipartisan support for geothermal exploration and tax incentives to issues of regulation and permitting.

For Ruppel, that was by design.

“We wanted this meeting to showcase what’s technically possible and what’s already happening on the ground,” she said. “The policy world is starting to pay attention. Our job is to make sure that when that spotlight turns our way, next-generation geothermal is ready.”

MITEI’s Spring Symposium was followed by a gathering of geothermal entrepreneurs, investors, and energy industry experts co-hosted by MITEI and the Clean Air Task Force. “GeoTech Summit: Accelerating geothermal technology, projects, and deal flow” explored the financing challenges and opportunities of geothermal energy today.

MIT faculty, alumni receive 2025-26 American Physical Society honors

Wed, 04/15/2026 - 2:50pm

The American Physical Society (APS) recently honored two MIT faculty members — professors Yoel Fink PhD ’00 and Mehran Kardar PhD ’83 — as well as six alumni with prizes and awards for their contributions to physics and academic leadership.

In addition, several MIT faculty members — Professor Jörn Dunkel, Professor Yen-Jie Lee PhD ’11, Associate Professor Mingda Li PhD ’15, and Associate Professor Julien Tailleur — as well as 12 additional alumni were named APS Fellows.

Yoel Fink PhD ’00, the Danae and Vasilis (1961) Salapatas Professor in the Department of Materials Science and Engineering, received the Andrei Sakharov Prize “for defending the academic freedom and human rights of scientists working in the U.S.”

The prize, named for physicist and human rights advocate Andrei Sakharov, recognizes scientists whose leadership and impact advance the principles of intellectual freedom and human dignity. Fink’s research focuses on “computing fabrics” — fibers and textiles that sense, communicate, store, and process information. By embedding functionality at the fiber level, fabrics become computing systems that can infer human activity and context while keeping the traditional qualities of garments. These textiles enable noninvasive monitoring of physiological and health conditions, with applications ranging from fetal and maternal health to human performance analytics, injury prevention in challenging environments, and defense.

Mehran Kardar PhD ’83, the Francis Friedman Professor of Physics, received the Lars Onsager Prize “for ground-breaking contributions to statistical physics, including the Kardar-Parisi-Zhang equation, Casimir forces, active matter, and aspects of biological physics.”

Kardar’s research focuses on how complex behavior emerges from simple interactions in systems both in and far from equilibrium, including stable ones like a still pond and rapidly changing ones such as growing surfaces. The Kardar-Parisi-Zhang equation, which he helped develop, provides a unifying framework for understanding how randomness and fluctuations shape evolving phenomena, from fluids and interfaces to biological and quantum systems. His work has also advanced the theoretical understanding of disordered materials, soft matter such as polymers and gels, and fluctuation-induced forces — including Casimir forces arising from quantum and thermal effects. More recently, he has applied these ideas to active matter — systems of self-driven units — and biological systems, helping reveal patterns in living and evolving systems.

Alumni receiving awards

Joel Butler PhD ’75 was presented the W.K.H. Panofsky Prize in Experimental Particle Physics “for wide-ranging scientific, technical, and strategic contributions to particle physics, particularly exceptional leadership in fixed-target quark flavor experiments at Fermilab and collider physics at the Large Hadron Collider.”

Anthony Duncan PhD ’75 is the recipient of the Abraham Pais Prize for History of Physics “for research on the history of quantum physics between 1900 and 1927 that culminated in 'Constructing Quantum Mechanics,' an exemplary work that uses primary sources masterfully and employs scaffold and arch metaphors to describe developments in the quantum revolution.”

Laura A. Lopez ’04 was presented the Edward A. Bouchet Award “for pioneering contributions to X-ray astronomy, including foundational studies of supernova remnants, compact objects, and stellar feedback in galaxies, and for transformative leadership in advancing equity and inclusion in physics through innovative mentorship programs, national advocacy, and unwavering support for students from historically marginalized communities.”

Zhiquan Sun PhD ’25 is the recipient of the J.J. and Noriko Sakurai Dissertation Award in Theoretical Particle Physics “for applying effective field theory to advance our understanding of QCD [quantum chromodynamics], including establishing a new formalism to study heavy quark fragmentation, determining how confinement affects energy correlators, and revealing an overlooked complexity of the axion solution to the strong CP [charge conjugation symmetry and parity symmetry] problem.”

Charles B. Thorn III ’68 received the Dannie Heineman Prize for Mathematical Physics for “fundamental contributions to elementary particle physics, primarily the theory of strong interactions and the development of string theory.”

Christina Wang ’19 received the Mitsuyoshi Tanaka Dissertation Award in Experimental Particle Physics “for pioneering a novel technique using CMS [Compact Muon Solenoid] muon chambers to search for weakly-coupled sub-GeV [giga-electronvolt] mass dark matter using long-lived particle searches, and for groundbreaking work in quantum sensing to enable new probes of dark matter.”

APS Fellows

Several MIT faculty were elected 2025 APS Fellows:

Jörn Dunkel, MathWorks Professor of Mathematics, is the recipient of the Division of Statistical and Nonlinear Physics Fellowship “for pioneering contributions to statistical, nonlinear, and biological physics, notably in understanding pattern formation in soft matter and biology, cell positioning in tissues, and turbulence in active media.”

Yen-Jie Lee PhD '11, professor of physics, received the Division of Nuclear Physics Fellowship “for pioneering measurements of jet quenching, medium response and heavy-quark diffusion in the quark-gluon plasma, and for using electron-positron collisions as an innovative control to understand collectivity in small collision systems.”

Mingda Li PhD '15, associate professor of nuclear science and engineering, is the recipient of the Topical Group on Data Science Fellowship “for pioneering the integration of artificial intelligence with scattering and spectroscopy, enabling breakthroughs in phonons, topological states, optical and time-resolved spectra, and data-driven discovery for quantum and energy applications.”

Julien Tailleur, associate professor of physics, is the recipient of the Division of Soft Matter Fellowship “for foundational theoretical work on motility-induced phase separation and emergent collective behavior in scalar active matter.”

The following additional MIT alumni were also honored as APS Fellows:

Andrew Cross SM ’05, PhD ’08 (EECS), Division of Quantum Information Fellowship 

Kevin D. Dorfman SM '01, PhD '02 (ChemE), Division of Polymer Physics Fellowship

Geoffroy Hautier PhD '11 (DMSE), Division of Computational Physics Fellowship

Douglas J. Jerolmack PhD '06 (EAPS), Division of Statistical and Nonlinear Physics Fellowship

Brian Lantz '92, PhD '99 (Physics), Division of Gravitational Physics Fellowship

Valerio Lucarini SM '03 (EAPS), Topical Group on Physics of Climate Fellowship

Giles Novak '81 (Physics), Division of Astrophysics Fellowship

Steve Presse PhD '08 (Physics), Division of Biological Physics Fellowship

Jonathan Rothstein PhD '01 (MechE), Division of Fluid Dynamics Fellowship

Gray Rybka PhD '07 (Physics), Division of Particles and Fields Fellowship

Sarah Sheldon '08, PhD '13 (Physics, NSE), Forum on Industrial and Applied Physics Fellowship

Lian Shen ScD '01 (MechE), Division of Fluid Dynamics Fellowship

Multitasking quantum sensors can measure several properties at once

Wed, 04/15/2026 - 12:00am

A special class of sensors leverages quantum properties to measure tiny signals at levels that would be impossible using classical sensors alone. Such quantum sensors are currently being used to study the inner workings of cells and the outer depths of our universe.

Particularly promising are solid-state quantum sensors, which can operate at room temperature. Unfortunately, most solid-state quantum sensors today only measure one physical quantity at a time — such as the magnetic field, temperature, or strain in a material. Trying to measure both the magnetic field and temperature of a material at the same time causes their signals to get mixed up and measurements to become unreliable.

Now, MIT researchers have created a way to simultaneously measure multiple physical quantities with a solid-state quantum sensor. They achieved this by exploiting entanglement, where particles become correlated into a single quantum state. In a new paper, the team demonstrated its approach in a commonly used quantum sensor at room temperature, measuring the amplitude, frequency, and phase of a microwave field in a single measurement. They also showed the approach works better than sequentially measuring each property or using traditional sensors.

The researchers say the approach could enable quantum sensors that can deepen our understanding of the behavior of atoms and electrons inside materials and living systems like cancer cells.

“Quantum multiparameter estimation has been mostly theoretical to date,” says co-lead author of the paper Takuya Isogawa, a graduate student in nuclear science and engineering. “There have been very few experiments that actually demonstrate it, and that work focused on photons. We wanted to demonstrate multiparameter estimation in a more application-oriented setup: a solid-state quantum sensor in use today.”

Joining Isogawa on the paper are co-lead authors Guoqing Wang PhD ’23 and MIT PhD candidate Boning Li. The other authors on the paper are former MIT visiting students Zhiyao Hu and Ayumi Kanamoto; University of Tokyo PhD candidate Shunsuke Nishimura; Chinese University of Hong Kong Professor Haidong Yuan; and Paola Cappellaro, MIT’s Ford Professor of Engineering, a professor of nuclear science and engineering and of physics, and a member of the Research Laboratory of Electronics.

Quantum effects for measurement

Quantum sensors exploit quantum effects like entanglement, spin states, and superposition to measure changes in magnetic fields, electric fields, gravity, acceleration, and more. As such, they can be used to measure the activity of single molecules in ways that are useful for understanding biology and space, like tracking the activity of metabolites or enzymes inside cells.

One particularly useful sensor in biology leverages what’s known as nitrogen-vacancy (NV) centers in diamonds, a defect where a carbon atom in the diamond’s crystal lattice is replaced by a nitrogen atom, and a neighboring lattice site is missing, or vacant. The defect hosts an electronic spin whose transition frequencies can be read out optically. The NV center’s spin state is extremely sensitive to external effects, such as magnetic fields and temperature, which can shift the spin state in ways that can be measured at extremely high resolution.

Unfortunately, different external effects change the energy resonances of the spin in similar ways, making it difficult to measure multiple effects at once. The result is that most solid-state quantum sensor applications measure a single physical quantity at one time.

“If you can only measure one quantity at a time, you have to repeat experiments to measure quantities one by one,” Isogawa says. “That takes more time, which means less sensitivity. It also makes experiments more susceptible to errors.”

For their experiment, the researchers used NV centers inside a 5-square-millimeter diamond. They pointed a laser into the diamond and studied its fluorescence to make their measurements, a common approach for such sensors. To study the electronic spin of the NV center, they used a microwave antenna. To study the spin of the nitrogen atom, they used a radio frequency field.

“We used those two spins as two qubits,” Isogawa says, referring to the building blocks of quantum computing systems. “If you have only one qubit, you can only measure one outcome: basically, 0 or 1. It’s the probability that it spins up or down. Think of it like a coin toss, with the probability of getting heads or tails. With two qubits, we increased the parameters that we could extract.”

The system worked because the spins of the sensor qubit and auxiliary qubit were entangled, a quantum property where the state of one particle is dependent on another. With one qubit, you get a binary outcome. With two, you get four possible outcomes, and because their probabilities must sum to one, those outcomes carry three independent parameters.
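
The counting works like this (a minimal illustration, not the team’s estimation procedure): a projective measurement on n qubits has 2^n outcomes, and because the outcome probabilities must sum to one, 2^n - 1 of them are independent.

```python
def independent_parameters(n_qubits):
    # 2**n measurement outcomes, minus one normalization constraint
    return 2 ** n_qubits - 1

print(independent_parameters(1))  # 1 -> one quantity per measurement
print(independent_parameters(2))  # 3 -> e.g., amplitude, frequency, and phase
```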

The two qubits allowed researchers to measure those three quantities simultaneously using a technique known as the Bell state measurement.

Other researchers had used the Bell state measurement at extremely low temperatures before, but the MIT researchers developed a new technique to perform the measurement at room temperature. That technique was first proposed by Wang, who was previously a graduate student in Professor Cappellaro’s lab.

The researchers used the approach to simultaneously measure the amplitude, detuning, and phase of a microwave magnetic field. The researchers also say the approach could be used to measure electric fields, temperature, pressure, and strain.

“Measuring these parameters simultaneously can help us explore spin waves in materials, which is an important topic in condensed matter physics,” Isogawa says. “NV center sensors have extremely high spatial resolution and versatility. It can measure a lot of different physical quantities.”

More practical quantum sensing

The researchers say this work is an important step toward using solid-state quantum sensors to more fully characterize systems in biomedical research and materials characterization. That’s because multiparameter estimation had never been achieved in realistic settings or in widely used quantum sensors.

“What makes the NV center quantum sensors so special is they can operate at room temperature,” Isogawa says. “It’s very suitable for biological measurements or condensed matter physics experiments.”

Although the researchers say their sensor didn’t measure each quantity at the highest possible precision, in future work they plan to explore if their approach can achieve higher precision for each parameter.

They also plan to explore how their approach works to characterize heterogeneous materials.

“In an extremely uniform environment, you could use many different classical and quantum sensors and measure each physical quantity at the same time,” Isogawa says. “But if the physical quantities change at different locations, you need high spatial sensors, and you need a sensor that can measure multiple physical quantities. This approach has major advantages in such situations.”

The work was supported, in part, by the U.S. National Science Foundation, the National Research Foundation of Korea, and the Research Grants Council of Hong Kong.

Human-machine teaming dives underwater

Tue, 04/14/2026 - 9:00am

The electricity to an island goes out. To find the break in the underwater power cable, a ship pulls up the entire line or deploys remotely operated vehicles (ROVs) to traverse the line. But what if an autonomous underwater vehicle (AUV) could map the line and pinpoint the location of the fault for a diver to fix?

Such underwater human-robot teaming is the focus of an MIT Lincoln Laboratory project funded through an internally administered R&D portfolio on autonomous systems and carried out by the Advanced Undersea Systems and Technology Group. The project seeks to leverage the respective strengths of humans and robots to optimize maritime missions for the U.S. military, including critical infrastructure inspection and repair, search and rescue, harbor entry, and countermine operations.

"Divers and AUVs generally don't team at all underwater," says principal investigator Madeline Miller. "Underwater missions requiring humans typically do so because they involve some sort of manipulation a robot can't do, like repairing infrastructure or deactivating a mine. Even ROVs are challenging to work with underwater in very skilled manipulation tasks because the manipulators themselves aren't agile enough."

Beyond their superior dexterity, humans excel at recognizing objects underwater. But humans working underwater can't perform complex computations or move very quickly, especially if they are carrying heavy equipment; robots have an edge over humans in processing power, high-speed mobility, and endurance. To combine these strengths, Miller and her team are developing hardware and algorithms for underwater navigation and perception — two key capabilities for effective human-robot teaming.

As Miller explains, divers may only have a compass and fin-kick counts to guide them. With few landmarks and potentially murky conditions caused by a lack of light at depth or the presence of biological matter in the water column, they can easily become disoriented and lost. For robots to help divers navigate, they need to perceive their environment. However, in the presence of darkness and turbidity, optical sensors (cameras) cannot generate images, while acoustic sensors (sonar) generate images that lack color and only show the shapes and shadows of objects in the scene. The historical lack of large, labeled sonar image datasets has hindered training of underwater perception algorithms. Even if data were available, the dynamic ocean can obscure the true nature of objects, confusing artificial intelligence. For instance, a downed aircraft broken into multiple pieces, or a tire covered in an overgrowth of mussels, may no longer resemble an aircraft or tire, respectively.

"Ultimately, we want to devise solutions for navigation and perception in expeditionary environments," Miller says. "For the missions we're thinking about, there is limited or no opportunity to map out the area in advance. For the harbor entry mission, maybe you have a satellite map but no underwater map, for example."

On the navigation side, Miller's team picked up on work started by the MIT Marine Robotics Group, led by John Leonard, to develop diver-AUV teaming algorithms. With their navigation algorithms, Leonard's group ran simulations under optimal conditions and performed field testing in calm waters using human-paddled kayaks as proxies for both divers and AUVs. Miller's team then integrated these algorithms into a mission-relevant AUV and began testing them under more realistic ocean conditions, initially with a support boat acting as a diver surrogate, and then with actual divers.

"We quickly learned that you need more sensing capabilities on the diver when you factor in ocean currents," Miller explains. "With the algorithms demonstrated by MIT, the vehicle only needed to calculate the distance, or range, to the diver at regular intervals to solve the optimization problem of estimating the positions of both the vehicle and diver over time. But with the real ocean forces pushing everything around, this optimization problem blows up quickly."

On the perception side, Miller's team has been developing an AI classifier that can process both optical and sonar data mid-mission and solicit human input for any objects classified with uncertainty.

"The idea is for the classifier to pass along some information — say, a bounding box around an image — to the diver and indicate, "I think this is a tire, but I'm not sure. What do you think?" Then, the diver can respond, "Yes, you've got it right, or no, look over here in the image to improve your classification," Miller says.

This feedback loop requires an underwater acoustic modem to support diver-AUV communication. State-of-the-art data rates in underwater acoustic communications would require tens of minutes to send an uncompressed image from the AUV to the diver. So, one aspect the team is investigating is how to compress information into a minimum amount to be useful, working within the constraints of the low bandwidth and high latency of underwater communications and the low size, weight, and power of the commercial off-the-shelf (COTS) hardware they're using. For their prototype system, the team procured mostly COTS sensors and built a sensor payload that would easily integrate into an AUV routinely employed by the U.S. Navy, with the goal of facilitating technology transition. Beyond sonar and optical sensors, the payload features an acoustic modem for ranging to the diver and several data processing and compute boards.
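
The scale of the constraint is easy to check with rough numbers (illustrative assumptions; the article does not give the team's actual link rate or image size):

```python
image_bits = 8 * 1_000_000  # one uncompressed ~1-megabyte image (assumed)
link_bps = 5_000            # optimistic underwater acoustic data rate (assumed)
print(image_bits / link_bps / 60, "minutes")  # roughly 27 minutes per image
```

Hence the emphasis on sending compact products such as bounding boxes and labels rather than raw imagery.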

Miller's team has tested the sensor-equipped AUV and algorithms around coastal New England — including in the open ocean near Portsmouth, New Hampshire, with the University of New Hampshire's (UNH) Gulf Surveyor and Gulf Challenger coastal research vessels as diver surrogates, and on the Boston-area Charles River, with an MIT Sailing Pavilion skiff as the surrogate.

"The UNH boats are well-equipped and can access realistic ocean conditions. But pretending to be a diver with a large boat is hard. With the skiff, we can move more slowly and get the relative motion in tune with how a diver and AUV would navigate together."

Last summer, the team started testing equipment with human divers at Michigan Technological University's Great Lakes Research Center. Although the divers lacked an interface to feed back information to the AUV, each swam holding the team's tube-shaped prototype tablet, dubbed a "tube-let." The tube-let was equipped with a pressure and depth sensor, inertial measurement unit (to track relative motion), and ranging modem — all necessary components for the navigation algorithms to solve the optimization problem.

"A challenge during testing was coordinating the motion of the diver and vehicle, because they don't yet collaborate," Miller says. "Once the divers go underwater, there is no communication with the team on the surface. So, you have to plan where to put the diver and vehicle so they don't collide."

The team also worked on the perception problem. The water clarity of the Great Lakes at that time of year allowed for underwater imaging with an optical sensor. Caroline Keenan, a Lincoln Scholars Program PhD student jointly working in the laboratory's Advanced Undersea Systems and Technology Group and Leonard's research group at MIT, took the opportunity to advance her work on knowledge transfer from optical sensors to sonar sensors. She is exploring whether optical classifiers can train sonar classifiers to recognize objects for which sonar data doesn't exist. The motivation is to reduce the human operator load associated with labeling sonar data and training sonar classifiers.

With the internally funded research program coming to an end, Miller's team is now seeking external sponsorship to refine and transition the technology to military or commercial partners.

"The modern world runs on undersea telecommunication and power cables, which are vulnerable to attack by disruptive actors. The undersea domain is becoming increasingly contested as more nations develop and advance the capabilities of autonomous maritime systems. Maintaining global economic security and U.S. strategic advantage in the undersea domain will require leveraging and combining the best of AI and human capabilities," Miller says.

Q&A: MIT SHASS and the future of education in the age of AI

Tue, 04/14/2026 - 9:00am

The MIT School of Humanities, Arts, and Social Sciences (SHASS) was founded in 1950 in response to “a new era emerging from social upheaval and the disasters of war,” as outlined in the 1949 Lewis Committee Report.

The report’s findings emphasized MIT’s role and responsibility in the new nuclear age, which called for doubling down on genuine “integration” of scientific and technical topics with humanistic scholarship and teaching. Only that way, the committee wrote, could MIT tackle “the most difficult and complicated problems confronting our generation.”

As SHASS marks its 75th anniversary, Dean Agustín Rayo answers questions about why the need for developing students with broad minds and human understanding is as urgent as ever, given pressing challenges in the midst of a new technological revolution.

Q: Many universities are responding to artificial intelligence by launching new technical programs or updating curricula. You’ve suggested the change is deeper than that. Why?

A: Artificial intelligence isn’t just changing the way students learn — it’s transforming every aspect of society. The labor market is experiencing a dramatic shift, upending traditional paths to financial stability. And AI is changing the ways we bring meaning to our lives: the ways we build relationships, the ways we pay attention, and the things we enjoy doing.

The upshot is that the most important question universities need to ask is not how to adapt our pedagogy to AI — although we certainly need to address that. The most important question we need to ask is how to provide an education that brings real value to students in the age of AI. 

We need to ensure that universities provide students with the tools they need to find a path to financial security and to build meaningful lives.

We need to produce students with minds that are both nimble and broad. We need our students to not only be able to execute tasks effectively, but also have the judgment to determine which tasks are worth executing. We need students who have a moral compass, and who understand how the world works, in all of its political, economic, and human complexity. We need students who know how to think critically, and who have excellent communication and leadership skills.

Q: What role do the humanities, arts, and social sciences play in preparing MIT students for that future?

A: They’re essential, and are rightly a core part of an MIT education: MIT has long required its undergraduates to take at least eight courses in HASS disciplines to graduate.

Fields like philosophy, political science, economics, literature, history, music, and anthropology are crucial to developing the parts of our lives that are essentially human — the parts that will not be replaced by AI.

They are crucial to developing critical thinking and a moral compass. They are crucial to understanding people — our values, institutions, cultures, and ways of thinking. They are crucial to creating students who are broad thinkers who understand the way the world works. They are crucial to developing students who are excellent communicators and are able to describe their projects — and their lives — in a way that endows them with meaning.

Our students understand this. Here is how one of them put the point: “Engineering gives me the tools to measure the world; the humanities teach me how to interpret it. That balance has shaped both how I do science and why I do it.”

Q: Some people worry that emphasizing humanistic study could dilute MIT’s technological edge. How do you respond to that concern?

A: I think the opposite is true. 

MIT is an important engine for social mobility in the United States, and a catalyst for entrepreneurship, which has added billions of dollars to the American economy. That cannot be separated from the fact that we are a technical institution, which brings together the country’s most talented undergraduates — regardless of socioeconomic background — and transforms them into the next generation of our country's top scientific and engineering leaders. 

MIT plays an incredibly important role in our country. So, the last thing I want to do is mess with our secret sauce.

But I also think that the age of AI is forcing us to rethink what it means to be a top engineer. 

Think about artificial intelligence itself. The challenges we face are not just technical. Issues like bias, accountability, governance, and the societal impact of automation are no less important. Understanding those dimensions helps technologists design better systems and anticipate real-world consequences.

Strengthening the humanities at MIT isn’t a departure from our core mission — it’s a way of ensuring that our technical leadership continues to matter in the world.

Q: What kinds of changes is MIT SHASS pursuing to support this vision?

A: There’s a lot going on! 

We’ve launched the MIT Human Insight Collaborative (MITHIC) as a way of strengthening research in the humanities, arts, and social sciences, and of deepening collaboration with colleagues across MIT.

We’re shaping the undergraduate experience to ensure that every MIT student engages with the big societal questions shaping our time, from democratic resilience to climate change to the ethics of new technologies.

We’re building stronger connections through initiatives like the creation of shared faculty positions with the MIT Schwarzman College of Computing (SCC). And we recently launched a new Music Technology and Computation Graduate Program with the School of Engineering.

We’re partnering with SERC (the SCC’s Social and Ethical Responsibilities of Computing) to design new classes on the intersection of computing and human-centered issues, such as ethics.

And we’re elevating the humanities — for their own sake, and as a space for experimentation, bringing together students, faculty, and partners to explore new forms of research, teaching, and public engagement.

This is a very exciting time for SHASS.

Flying at the edge of the stratosphere

Tue, 04/14/2026 - 9:00am

All the ingredients to leave the first layer of the atmosphere were lying on a picnic table. T-minus 30 minutes before launch from the New York Catskills, students in MIT's reborn 16.00 (Introduction to Aerospace Engineering) course tore open hand warmers to fight the December morning chill. One hot pack for cold hands. One for the electronics payload, which would need the warmth on the way up. The series of balloon launches would rise to more than 20 kilometers above the surface.

Five student teams completed stratospheric balloon launches for a final project in the MIT Department of Aeronautics and Astronautics (AeroAstro) first-year exploratory course. This fall semester was the first iteration of the reimagined 16.00. The course was co-taught by MIT professors Jeffrey Hoffman, a former NASA astronaut, and Oliver de Weck, Apollo Program Professor of Astronautics and Engineering Systems. The course was reintroduced to the curriculum in 2025 to give first-year students a design-build experience from the very start, says de Weck, who is also AeroAstro's associate department head.

"This course had been taught for more than 25 years. And then the pandemic came," he explains. "We felt that it was time to bring the course back, to revive it, give it new life."

De Weck taught a version of this hands-on project from 2012 to 2016 in Unified Engineering, with 20 balloon launches over that time. Hoffman taught a version that focused on blimps, indoor flights, and achieving neutral buoyancy and control. Those prior courses inspired the new program. The current 16.00 course is an early introduction to design-build flying, offered before the well-known Unified Engineering course for Course 16 sophomores.

"Students don't want to sit through long lectures, with lots of PowerPoints and notes and blackboards," says de Weck. He referenced feedback from students that is framing the department's upcoming strategic plan. "Those hands-on visceral experiences is what we want to provide them."

The AeroAstro program adds about 60 undergraduates per year. Future students can expect to see different versions of the 16.00 course, including those focused on fixed-wing aircraft, quadcopter drones, and rockets. Future balloon courses will be called 16.00B. A fixed-wing remote-controlled aircraft course will be 16.00A.

Over 13 weeks, the students attended lectures on subjects including atmospheric composition, radio waves, and flight planning and regulations. In labs, they practiced building Arduino-based pressure and temperature sensors, and testing communication systems.

On that cold launch day, Jackson Lunfelt kept his grip against the pull of an oversized helium balloon moments before his team's launch. His team worked for weeks configuring GPS and radio communications and testing balloon buoyancy. Among their trials and errors, they had to find the right weight for a 3D-printed frame to attach the balloon and parachute. It was too heavy at first. They figured out how to reduce the weight of the plastic to keep the payload buoyant.

"Fortunately, a lot of preparation had helped us," he says.

Lunfelt, a first-year student, grew up just a few hours away from the Catskills in upstate New York. In high school, he was active in Future Farmers of America, welding, and robotics. On launch day, his team was worried their onboard GoPro would shut off from the cold high-altitude temperatures. They got the green light to add a battery bank. They would need to recalculate the weight and helium needed at the final hour.

"It was one of those things that if you don't do this, you're not gonna launch,” says Lunfelt.

That first week of December brought frigid air, gusts, and wind patterns that meant the class would have to rethink its launch site. The team aimed to fly east, over Massachusetts, and land before reaching the ocean. The new weather pattern pushed the team even farther west across the New York border.

The balloon lifted the 3.5-pound payload from the Catskills while the mission control group monitored progress from Cambridge, Massachusetts. It rose hundreds of feet per minute, passing through the troposphere and flying across Western Massachusetts at 100 miles an hour, pushed by the strong upper-level winds of the jet stream. It climbed to an estimated 22 kilometers above the surface. At that height, an onboard GoPro camera recorded the curvature of the Earth.

"Every single moment of that video was amazing. It was truly a story in itself," says Lunfelt.

Then the latex balloon burst, as designed, and the payload descended, slowed by a parachute. The GoPros captured that spectacular moment, too. The winds carried the payloads just north of the Massachusetts-New Hampshire border, where they landed in a neighborhood around Nashua, New Hampshire. Locals saw the MIT identifiers written on the side of the payloads and helped the teams recover them. The landing made it onto the local news.

After a very early morning and a late evening monitoring the launch returns, de Weck, teaching assistant Jonathan Stoppani, and Senior Technical Instructor Dave Robertson agreed that the pride felt across the whole class was palpable. The payloads all came back in one piece, a testament to successful design-builds and last-minute adjustments. The AeroAstro flying tradition is back for first-year students.

Carbon removal project supports Maine’s blue economy, broader marine health

Tue, 04/14/2026 - 12:00am

Oceans absorb roughly 25 to 30 percent of the carbon dioxide (CO2) that is released into the atmosphere. When this CO2 dissolves in seawater, it forms carbonic acid, making the water more acidic and altering its chemistry. Elevated levels of acidity are harmful to marine life like corals, oysters, and certain plankton that rely on calcium carbonate to build shells and skeletons.

“As the oceans absorb more CO2, the chemistry shifts — increasing bicarbonate while reducing carbonate ion availability — which means shellfish have less carbonate to form shells,” explains Kripa Varanasi, professor of mechanical engineering at MIT. “These changes can propagate through marine ecosystems, affecting organism health and, over time, broader food webs.”
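The shift Varanasi describes is the standard seawater carbonate system, which textbook chemistry summarizes as a chain of equilibria:

```latex
% Carbonate-system equilibria in seawater (textbook chemistry, not a result
% of this project):
\mathrm{CO_2 + H_2O \;\rightleftharpoons\; H_2CO_3
        \;\rightleftharpoons\; H^+ + HCO_3^-
        \;\rightleftharpoons\; 2\,H^+ + CO_3^{2-}}
```

Dissolving more CO2 pushes these equilibria toward protons and bicarbonate, lowering the pH and leaving fewer carbonate ions available for shell-building.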

Loss of shellfish can lead to water quality decline, coastal erosion, and other ecosystem disruptions, including significant economic consequences for coastal communities. “The U.S. has such an extensive coastline, and shellfish aquaculture is globally valued at roughly $60 billion,” says Varanasi. “With the right innovations, there is a substantial opportunity to expand domestic production.”

“One might think, ‘this [depletion] could happen in 100 years or something,’ but what we’re finding is that they are already affecting hatcheries and coastal systems today,” he adds. “Without intervention, these trends could significantly alter marine ecosystems and the coastal economies that rely on them over time.”

Varanasi and T. Alan Hatton, the Ralph Landau Professor of Chemical Engineering, Post-Tenure, at MIT, have been collaborating for years to develop methods for removing carbon dioxide from seawater and turning acidic water back to alkaline. In recent years, they’ve partnered with researchers at the University of Maine Darling Marine Center to deploy the method in hatcheries.

“The way we farm oysters, we spawn them in special tanks and rear them through about a two-week larval period … until they’re big enough so that they can be transferred out into the river as the water warms up,” explains Bill Mook, founder of Mook Sea Farm. Around 2009, he noticed problems with production of early-stage larvae. “It was a catastrophe. We lost several hundred thousand dollars’ worth of production,” he says.

Ultimately, the problem was identified as the low pH of the water that was being brought in: The water was too acidic. The farm’s initial strategy, a common practice in oyster farming, was to buffer the water by adding sodium bicarbonate. The new approach avoids the use of chemicals or minerals.

“A lot of researchers are studying direct air capture, but very few are working in the ocean-capture space,” explains Hatton. “Our approach is to use electricity, in an electrochemical manner, rather than add chemicals to manipulate the solution pH.”

In the method, seawater is collected and fed into electrochemical cells, where reactive electrodes release protons into the water, driving off the dissolved carbon dioxide. The cyclic process acidifies the water to convert dissolved inorganic bicarbonates into molecular carbon dioxide, which is collected as a gas under vacuum. The water is then fed to a second set of cells with a reversed voltage to recover the protons and turn the acidic water back to alkaline before it is released back to the sea.
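In net terms, the acidification step comes down to the familiar bicarbonate reaction, shown here as a simplification of the full process chemistry:

```latex
% Stripping dissolved inorganic carbon by acidification (simplified):
\mathrm{HCO_3^-\,(aq) + H^+ \;\longrightarrow\; H_2O + CO_2\,(g)\uparrow}
```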

Maine’s Damariscotta River Estuary, where Mook Sea Farm is located, provides about 70 percent of the state’s oyster crop. Damian Brady, a professor of oceanography based at the University of Maine and a key collaborator on the project, says the Damariscotta community has “grown into an oyster-producing powerhouse … [that is] not only part of the economy, but part of the culture.” He adds, “there’s actually a huge amount that we could learn if we couple the engineering at MIT with the aquaculture science here at the University of Maine.”

“The scientific underpinning of our hypothesis was that these bivalve shellfish, including oysters, need calcium carbonate in order to form their shells,” says Simon Rufer PhD ’25, a former student in Varanasi’s lab and now CEO and co-founder of CoFlo Medical. “By alkalizing the water, we actually make it easier for the oysters to form and maintain their shells.”

In trials conducted by the team, results first showed that the approach is biocompatible and doesn't kill the larvae, and later showed that oysters treated with MIT's electrochemical alkalization did better than those treated with mineral or chemical buffers. Importantly, Hatton also notes, the process creates no waste products: Ocean water goes in, CO2 comes out. The captured CO2 can potentially be used for other applications, including growing algae as food for shellfish.

Varanasi and Hatton first introduced their approach in 2023. Their most recent paper, “Thermodynamics of Electrochemical Marine Inorganic Carbon Removal,” published last year in the journal Environmental Science & Technology, outlines the overall thermodynamics of the process and presents a design tool for comparing different carbon removal processes. The team received a “plus-up award” from ARPA-E to collaborate with the University of Maine and further develop and scale the technology for aquaculture environments.

Brady says the project represents another avenue for aquaculture to contribute to climate change mitigation and adaptation. “It pushes a new technology for removing carbon dioxide from ocean environments forward simultaneously,” says Brady. “If they can be coupled, aquaculture and carbon dioxide removal improve each other’s bottom line.”

Through the collaboration, the team is improving the robustness of the cells and learning about their function in real ocean environments. The project aims to scale up the technology and to have a significant impact on climate and the environment, but it includes another big focus.

“It’s also about jobs,” says Varanasi. “It’s about supporting the local economy and coastal communities who rely on aquaculture for their livelihood. We could usher in a whole new resilient blue economy. We think that this is only the beginning. What we have developed can really be scaled.”

Mook says the work is very much an applied science, “[and] because it’s applied science, it means that we benefit hugely from being connected and plugged into academic institutions that are doing research very relevant to our livelihoods. Without science, we don’t have a prayer of continuing this industry.”

Jazz in the key of life

Sun, 04/12/2026 - 12:00am

It is not hard to find glowing reviews of saxophonist Miguel Zenón, a creative jazz artist whose compositions incorporate musical elements from his native Puerto Rico.

For instance, The Jazz Times called “Jibaro,” Zenón’s breakthrough 2005 album, “profound yet joyful.” The New York Times called the same music “strong and light,” adding that we have “rarely seen a jazz composer step forward with a project so impressively organized, intellectually powerful and well played from the start.”

In 2009, when Zenón won a prestigious MacArthur Fellowship, the MacArthur Foundation called Zenón’s work “elegant and innovative,” with “a high degree of daring and sophistication.” In 2012, The New York Times reviewed another Zenón work, “Puerto Rico Nació en Mi: Tales From the Diaspora,” by calling the music “deeply hybridized and original, complex but clear.”

As you may have noticed, these notices all contain multiple descriptive terms. That’s because Zenón’s work is many things at once: jazz, combined with other musical genres; technically rigorous, and supple; novel, yet steeped in tradition. Indeed, Zenón has always seen jazz as being multifaceted.

“What I discovered, when I first encountered jazz, was this idea that you were using improvisation to portray your personality directly to your listeners,” Zenón explains. “And it was connected to a very interesting and intricate improvisational language. That provided something I hadn’t encountered in music before, this idea that you could have something personal and heartfelt walking hand in hand with something that was intellectual and brainy. That balance spoke to me.”

It is still speaking. In 2024, Zenón won the Grammy Award for Best Latin Jazz Album for “El Arte Del Bolero Vol. 2,” a collaboration with Venezuelan pianist Luis Perdomo, a musical partner in the Miguel Zenón Quartet.

Zenón has taught at MIT for three years now. He became a tenured faculty member last year, in MIT’s Music and Theater Arts program, where he helps students find the same satisfaction in music that he does.

“When I first got into music, I was looking for fulfillment,” Zenón says. “It wasn’t about success. I was just looking for music to fulfill something within me. And I still search for that now. And sometimes it still feels like it did 25 or 30 years ago, when I first encountered that feeling. It’s nice to have that in your pocket, to say, this is what I’m looking for, that initial feeling.”

Paradise in the Back Bay

Zenón grew up in San Juan, Puerto Rico. Around age 11, he started attending a performing arts school and playing the saxophone. In his last year of school, Zenón was admitted into college to study engineering. However, a few years before, he had encountered something new: jazz. Zenón’s training had been in classical music. But jazz felt different.

“Discovering jazz music ignited a passion for music in me that had not existed up to that point,” says Zenón, who decided to pursue music in college. “I kind of jumped ship, and it was a blind jump. I didn’t know what to expect, I didn’t know what was on the other side, I didn’t have any artists or any musicians in my family. I just followed a hunch, followed my heart.”

After teachers recommended he study at the renowned Berklee College of Music in Boston, Zenón worked to find a scholarship and funding.

“This was way before the internet. I was looking at catalogs,” Zenón recalls. “I had never been to Boston in my life, I didn’t even know what Berklee looked like. But at Berklee it was the first time I was able to connect with a jazz teacher in a formal way, to learn about history, theory, harmony, and I soaked in it. Also, I was surrounded by young people like myself, who were as enamored and passionate about music as I was. It really felt like paradise.”

After earning his BA from Berklee in 1998, Zenón moved to New York City. He earned an MA from the Manhattan School of Music in 2001 and began playing more extensively with new bandmates.

“I just wanted to be able to play with people who were better than me, and learn from the experience,” Zenón says. He started generating new ideas, writing music, and performing publicly. With Antonio Sánchez, Hans Glawischnig, and Perdomo, he founded the Miguel Zenón Quartet.

“That led to going into the studio and making an album,” Zenón recounts. “And that led to more experience, and more albums.”

Did it ever. Zenón has now been the leader for about 20 albums, mostly featuring the quartet. (After several years, Henry Cole replaced Sánchez as the group’s drummer.) Zenón has played on many recordings by other artists, and helped found the SFJAZZ Collective.

Not many prolific musicians will name any one recording as their best, and Zenón is the same way, but he is willing to cite a few that were milestones for him.

“Jibaro” draws on the music of Puerto Rico’s jibaro singers, troubadours who use 10-line stanzas with eight-syllable lines, a form Zenón adopted for jazz-quartet use. “Esta Plena,” a 2009 record, fuses jazz and the structures of “plena,” a traditional percussion-based Puerto Rican song form. “Alma Adentro,” a 2011 album, covers classic songs from Puerto Rico.

“It would be impossible for me to pick one favorite, but what I would say is, there are a couple of albums in the earlier part of my career that explored a balance between things coming from a jazz world and things coming from traditional Puerto Rican music and folklore. When I was able to feel like that balance was right, it felt like me,” Zenón says. “This is what I have to give. This is my persona.”

In 2008, Zenón was also honored with a Guggenheim Fellowship, which helped him conduct music research, another facet of his career. Zenón has often extensively interviewed traditional Puerto Rican musicians about the intricacies of their works before writing material in those forms.

And Zenón has made a point of giving back, founding the Caravana Cultural, a project that brings free jazz concerts to rural Puerto Rico.

Work, joy, and love

Zenón is now settled in at MIT, which boasts a vibrant music program. More than 1,500 MIT students take a music class each year, and over 500 students participate in one of 30 campus ensembles. Last year, MIT opened its new Edward and Joyce Linde Music Building, a purpose-built performance, rehearsal, and teaching space.

“There are definitely students at MIT who could be at some of the best music schools in the world,” Zenón says. “That’s not in question.”

Moreover, among MIT students, Zenón says, “There is a communal approach to music. Everything they do, they do for each other. They look out for each other, they work together. And that has been one of the most rewarding things to see.”

He continues: “Of course the students are brilliant and the faculty are too. In terms of what I like to teach, it’s been a good fit for me personally, and I couldn’t be happier about the opportunity. There’s more and more interest in jazz, more and more interest in creating things together, and there’s a unique mindset being built in front of our eyes.”

He is also pleased to work in the Linde Music Building: “It’s amazing to have the building, not only in terms of the facilities, but it’s also a symbol of the place music has within the Institute. We’re not just talking about music, we’re creating it. It’s a great commitment from the school and says a lot about our leadership.”

Meanwhile, along with teaching, Zenón’s own recording career continues at full speed. With Luis Perdomo, he is working on “El Arte Del Bolero Vol. 3,” the follow-up to his Grammy-winning album. And Zenón has plans for still another album, to be recorded in Puerto Rico with a large ensemble, based on music he is writing about Puerto Rico’s history and present.

“Things are always linked,” Zenón explains. “Once you finish one project, the next one starts. It feels natural for me to do it that way.”

In conversation, Zenón is engaging, genial, and reflective. So what advice does he have for younger musicians? Not everyone who plays an instrument will become Miguel Zenón. But what about people who want to pursue music, not knowing how far it will take them?

“If you find something you enjoy, just enjoy it for the sake of it,” Zenón says. “Find what brings joy, and make sure you don’t lose that. Having said that, with music, like any art form, or anything else in life, in order to make progress, it takes work and commitment. There’s no hiding that. So if music is something you’re serious about, set goals you can achieve over time, so you always have something to work for. In my experience, that’s key. But I always pair that with the idea of joy and love for music — keeping that love close to your heart.”

Professor Emeritus Jack Dennis, pioneering developer of dataflow models of computation, dies at 94

Fri, 04/10/2026 - 5:40pm

Jack Dennis, an influential MIT professor emeritus of computer science and engineering, died on March 14 at age 94. The original leader of the Computation Structures Group within the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), he pioneered the development of dataflow models of computation, and, subsequently, many novel principles of computer architecture inspired by dataflow models.

The second child of an engineer and a textile designer, Dennis showed early interest in both engineering and music, rewriting Gilbert and Sullivan lyrics with his parents and playing piano with the Norwalk Symphony Orchestra in Connecticut as a teen, while building a canoe at home with his father. As an undergraduate at MIT, he developed his wide array of interests further, joining the VI-A Cooperative Program in Electrical Engineering; working at the Air Force Cambridge Research Laboratories on projects in speech processing and novel radar systems; participating in the model railroad club; and joining the MIT Symphony Orchestra, where he met his first wife, Jane Hodgson ’55, SM ’56, PhD ’61. (The two later separated when she went to study medicine in Florida.) 

Dennis earned his BS (1953), MS (1954), and ScD (1958) from MIT before joining the then-Department of Electrical Engineering as a faculty member. He was promoted to full professor in 1969. His doctoral thesis, entitled “Mathematical Programming and Electrical Networks,” explored analogies between electric circuit theory and quadratic programming problems. Ideas he developed in that thesis further crystallized in his 1964 paper, “Distributed solution of network programming problems,” which created an important early class of digital distributed optimization solvers.

In a 2003 piece that Dennis wrote for his undergraduate class’s 50th reunion, he remembered his earliest encounters with computers at the Institute: “I prepared programs written in assembly language on punched paper tape using Frieden 'Flexowriters,' and stood aside watching the myriad lights blink and flash while operator Mike Solamita fed the tapes [...] That was 1954. Fifty years later, much has changed: A room full of vacuum tubes has become a tiny chip with millions of transistors. A phenomenon once limited to research laboratories has become an industry producing commodity products that anyone can own and use beneficially.”

Dennis’ influence in steering that change was profound. As a collaborator with the teams behind both Project MAC and Multics, the earliest attempts to allow multiple users to work with a single computer seemingly simultaneously (i.e., a time-shared operating system), Dennis helped to specify the unique segment addressing and paging mechanisms that became a fundamental part of the General Electric Model 645 computer. His insights stemmed from a tendency to pay equal attention to both hard- and software when others considered themselves specialists in one or the other. 

“I formed the Computation Structures Group [within CSAIL] and focused on architectural concepts that could narrow the acknowledged gap between programming concepts and the organization of computer hardware,” Dennis explained in his 2003 recollection. “I found myself dismayed that people would consider themselves to be either hardware or software experts, but paid little heed to how joint advances in programming and architecture could lead to a synergistic outcome that might revolutionize computing practice.”

Dennis’ emphasis on synergy did not go unnoticed. Gerald Sussman, the Panasonic Professor of Electrical Engineering, points out “the relationship of [Dennis’] dataflow architecture to single-assignment programs, and thus to pure functional programs. This coupled the virtue of referential transparency in programming to the effective use of hardware parallelism. Dennis also pioneered the use of self-timed circuits in digital systems. The ideas from that work generalize to much of the work on highly distributed systems.” 

The Computation Structures Group attracted multiple scholars interested in developing asynchronous computing and dataflow architecture, many of whom became lifelong friends and collaborators. These included Peter Denning, with whom Dennis and Joseph Qualitz co-authored the textbook “Machines, Languages, and Computation” (1978); the late Arvind, who became faculty head of computer science for the Department of Electrical Engineering and Computer Science (EECS), and the late Guang R. Gao, who became distinguished professor of electrical and computer engineering at the University of Delaware. 

In recognition of his contributions to the Multics project, Dennis was elected a fellow of the Institute of Electrical and Electronics Engineers (IEEE). Many additional honors would follow: He received the Association for Computing Machinery (ACM)/IEEE Eckert-Mauchly Award in 1984; was inducted as a fellow of the ACM (1994); was named to the National Academy of Engineering (2009); was elected to the ACM Special Interest Group on Operating Systems (SIGOPS) Hall of Fame (2012); and was awarded the IEEE John von Neumann Medal (2013).

A successful researcher, Dennis was perhaps equally influential in the development of EECS’ curriculum, developing six subjects in areas of computer theory and systems: Theoretical Models for Computation; Computation Structures; Structure of Computer Systems; Semantic Theory for Computer Systems; Semantics of Parallel Computation; and Computer System Architecture (taught in collaboration with Arvind). Several of the courses that Dennis developed continue to be taught, in updated form, to this day.

Following his retirement from teaching in 1987, he consulted on projects relating to parallel computer hardware and software for such varied groups as the NASA Research Institute for Advanced Computer Science; Boeing Aerospace; McGill University; the Architecture Group of Carlstedt Elektronik in Gothenburg, Sweden; and Acorn Networks, Inc. His fruitful relationship with former student Guang Gao continued in the form of a lecture tour through China, as well as co-authorship of a book, “Dataflow Architecture,” currently in progress at MIT Press.

A voracious lifelong learner, Dennis was fond of repeating a friend’s observation that “a scholar is just a book’s way of making another book.” In a full and active retirement, he still made room for music, trying his hand at composing; performing at Tanglewood as a tenor in Chorus Pro Musica; playing piano at the marriage of Guang Gao’s son Nick; and joining the chorus at the First Church in Belmont, Massachusetts, where his celebration of life (with concurrent livestreaming) will be held on Monday, June 8, at 2 p.m. 

Dennis is survived by his wife Therese Smith ’75; children David Hodgson Dennis of North Miami, Florida; Randall Dennis of Connecticut; and Galen Dennis, a resident of Australia. 

Learning with audiobooks

Thu, 04/09/2026 - 2:00pm

Millions of students nationwide use text-supplemented audiobooks, learning tools that are thought to help those who struggle with reading keep up in the classroom. A new study from scientists at MIT’s McGovern Institute for Brain Research finds that many students do benefit from the audiobooks, gaining new vocabulary through the stories they hear. But study participants learned significantly more when audiobooks were paired with explicit one-on-one instruction — and this was especially true for students who were poor readers. The group’s findings were reported on March 17 in the journal Developmental Science.

“It is an exciting moment in this ed-tech space,” says Grover Hermann Professor of Health Sciences and Technology John Gabrieli, noting a rapid expansion of online resources meant to support students and educators. “The admirable goal in all this is: Can we use technology to help kids progress, especially kids who are behind for one reason or another?” His team’s study — one of few randomized, controlled trials to evaluate educational technology — suggests a nuanced approach is needed as these tools are deployed in the classroom. “What you can get out of a software package will be great for some people, but not so great for other people,” Gabrieli says. “Different people need different levels of support.” Gabrieli is also a professor of brain and cognitive sciences and an investigator at the McGovern Institute. 

Ola Ozernov-Palchik and Halie Olson, scientists in Gabrieli’s lab, launched the audiobook study in 2020, when most schools in the United States had closed to slow the spread of Covid-19. The pandemic meant the researchers would not be able to ask families to visit an MIT lab to participate in the study — but it also underscored the urgency of understanding which educational technologies are effective, and for whom.

“What we were really concerned about as the pandemic hit is that the types of gaps that we see widen through the summers — the summer slide that affects poor readers and disadvantaged children to a greater extent — would be amplified by the pandemic,” says Ozernov-Palchik. Many educational technologies purport to ameliorate these gaps. But, Ozernov-Palchik says, “fewer than 10 percent of educational technology tools have undergone any type of research. And we know that when we use unproven methods in education, the students who are most vulnerable are the ones who are left further and further behind.”

So the team designed a study that could be done remotely, involving hundreds of third- and fourth-graders around the country. They focused on evaluating the impact of audiobooks on children’s vocabularies, because vocabulary knowledge is so important for educational success. Ozernov-Palchik explains that books are important for exposing children to new words, and when children miss out on that experience because they struggle to read, they can fall further behind in school.

Audiobooks allow students to access similar content in a different way. For their study, the researchers partnered with Learning Ally, an organization that produces audiobooks synchronized with highlighted text on a computer screen, so students can follow along as they listen.

“The idea is, they’re going to learn vocabulary implicitly through accessing those linguistically rich materials,” Ozernov-Palchik says. But that idea was untested. In contrast, she says, “we know that really what works in education, especially for the most vulnerable students, is explicit instruction.”

Before beginning their study, Ozernov-Palchik and Olson trained a team of online tutors to provide that explicit instruction. The tutors — college students with no educational expertise — learned how to apply proven educational methods to support students’ learning and understanding of challenging new words they encountered in their audiobooks.

Students in the study were randomly assigned to one of three groups for an eight-week intervention. Some were asked to listen to Learning Ally audiobooks for about 90 minutes a week. Another group received one-on-one tutoring twice a week, in addition to listening to audiobooks. A third group, in which students participated in mindfulness practice without using audiobooks or receiving tutoring, served as a control.
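For readers curious about the mechanics of a three-arm trial, random assignment can be as simple as a seeded shuffle. The sketch below is illustrative only, with arm names invented here; it is not the study's actual protocol code:

```python
# Illustrative three-arm random assignment (not the study's protocol code).
import random

ARMS = ("audiobooks_only", "audiobooks_plus_tutoring", "mindfulness_control")

def assign(participants, seed=0):
    """Shuffle participants with a fixed seed, then deal them into the arms."""
    rng = random.Random(seed)
    ids = list(participants)
    rng.shuffle(ids)
    return {pid: ARMS[i % len(ARMS)] for i, pid in enumerate(ids)}

print(assign(range(9)))
```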

A diverse group of students participated, spanning different reading abilities and socioeconomic backgrounds. The study’s remote design — with flexibly scheduled testing and tutoring sessions conducted over Zoom — helped make that possible. “I think the pandemic pushed researchers to rethink how we might use these technologies to make our research more accessible and better represent the people that we’re actually trying to learn about,” says Olson, a postdoc who was a graduate student in Gabrieli’s lab.

Testing before and after the intervention showed that overall, students in the audiobooks-only group gained vocabulary. But on their own, the books did not benefit everyone. Children who were poor readers showed no improvement from audiobooks alone, but did make significant gains in vocabulary when the audiobooks were paired with one-on-one instruction. Even good readers learned more vocabulary when they received tutoring, although the differences for this group were less dramatic.

Individualized, one-on-one instruction can be time-consuming, and may not be routinely paired with audiobooks in the classroom. But the researchers say their study shows that effective instruction can be provided remotely, and you don’t need highly trained professionals to do it.

For students from households with lower socioeconomic status, the researchers found no evidence of significant gains, even when audiobooks were paired with explicit instruction — further emphasizing that different students have different needs. “I think this carefully done study is a note of caution about who benefits from what,” Gabrieli says.

The researchers say their study highlights the value and feasibility of objectively evaluating educational technologies — and that effort will continue. At Boston University, where she is a research assistant professor, Ozernov-Palchik has launched a new initiative to evaluate artificial intelligence-based educational tools’ impacts on student learning. 

A philosophy of work

Thu, 04/09/2026 - 2:00pm

What makes work valuable? Michal Masny, the NC Ethics of Technology Postdoctoral Fellow in the MIT Department of Philosophy, investigates the role work plays in our lives and its impact on our well-being. 

Masny sees numerous benefits to work, beyond a paycheck. It’s a space for people to develop excellence at something, make a social contribution, gain social recognition, and create and sustain community. 

“Consider a future in which we shorten the work week, or one in which we eliminate work altogether,” Masny says. “I don’t believe either of these scenarios would be unambiguously good for everyone.”

“Work is both necessary and positively valuable,” he argues, further suggesting that our lives might be worsened if we were to eliminate work completely. “There can be optimal combinations of work and leisure time.”

Masny is completing his two-year term in the NC Ethics of Technology Fellowship at the end of the spring semester. In addition to advancing his research, Masny has been working to foster dialogue and educate students on issues at the intersection of philosophy and computing. This semester, Masny is teaching an undergraduate course, 24.131 (Ethics of Technology).

Masny advocates for an updated approach to educating well-rounded, socially aware students. “I want to create scientists who think about their projects and potential outcomes as lawyers and philosophers might, and vice versa,” he says. Masny argues for the importance of eliminating the “wisdom gap” between these groups, citing scientist Carl Sagan’s warning about the dangers of becoming “powerful without becoming commensurately wise” as scientific and technological advances continue.

“The traditional division of labor is that scientists and engineers invent new technologies, and then philosophers and lawyers evaluate and regulate them,” he continues. “But the pace at which new technologies are invented and deployed has made this division of labor untenable.” 

Established in 2021 with support from the NC Cultural Foundation, the fellowship was created with the goal of advancing critical discourse and research in the ethics of technology and AI at MIT, and of making important research and information available to the global community.

Venture capitalist Songyee Yoon, founder and managing partner of AI-focused investment firm Principal Venture Partners and a supporter of the NC Ethics of Technology Fellowship, believes technology and scientific discovery are among humanity’s most valuable public goods, and artificial intelligence represents the most consequential technology of our time. 

“If we want the fabric of our society to be built responsibly, we must train our builders upstream, at the very moment they begin learning to design and scale technology. There is no better place to begin this work than MIT,” she says. “Supporting the Ethics of Technology Fellows Program was born from that conviction, and I am deeply encouraged to see it embraced at MIT.”

“In philosophy, you’re supposed to question everything”

Masny arrived at MIT in fall 2024, following a year as a postdoc at the Kavli Center for Ethics, Science, and the Public at the University of California at Berkeley. Originally from Poland, Masny received his PhD in philosophy from Princeton University after completing studies at Oxford University and the University of Warwick in the United Kingdom. 

He works mainly in value theory, ethics of technology, and social and political philosophy. His current research interests include the nature of human and animal well-being, our obligations to future generations, the risk of human extinction, the future of work, and anti-aging technology. 

During his tenure in the fellowship, Masny has published several research articles on ethical issues concerning the future of humanity — a topic closely relevant to thinking about the existential risks of AI development and deployment. 

“In philosophy, you’re supposed to question everything,” he says.  

Masny’s work in the fellowship continues a tradition of collaborative investigation and exploration that MIT encourages and celebrates. In fall 2024, Masny co-taught an introductory undergraduate course, STS.006J/24.06J (Bioethics), with Robin Scheffler, an associate professor in the Program in Science, Technology, and Society.

During the 2024-25 academic year, Masny led a student research group, “Deepfakes: Ethical, Political, and Epistemological Issues,” as a part of the Social and Ethical Responsibilities of Computing (SERC) Scholars Program. The group explored the ethical, political, and epistemological dimensions of concerns over misleading deepfakes, and how they can be mitigated.

Students in Masny’s cohort spent spring 2025 working in small groups on a number of projects and presented their findings in a poster session during the MIT Ethics of Computing Research Symposium at the MIT Schwarzman College of Computing.

In summer 2025, Masny assisted with a summer course in philosophy, 24.133/134 (Experiential Ethics), in which students subject their computer science and engineering projects to ethical scrutiny with the help of trained philosophers. 

He’s encouraged by the opportunities to test his ideas and share them with people who can help refine and improve them. 

Communities of practice and engagement

When considering the value of his experience at MIT, Masny lauds the philosophy department and the opportunities to collaborate with so many different kinds of scholars. To answer the kinds of questions his research uncovers, he says, you must range further afield. He values the space MIT creates for broad inquiry while also seeking connections between his findings on work, its value, and the human impact of technology on our social lives. 

“Typically, undergraduate philosophy courses include two hour-long lectures followed by discussion; a lecture is like an audiobook,” he says. Instead, he believes, they should be more like listening to a podcast or watching a talk show.

“I want the class to be an event in a student’s schedule,” he continues. 

Masny is also considering how to integrate valuable philosophical tools into life outside the classroom. Philosophy and research can support other kinds of inquiry. Developing philosophers’ mindsets is a net positive, by his reckoning. Designing better questions, for example, can lead to better, more insightful, more accurate answers. It can also improve students’ abilities to identify challenges.

Masny will begin teaching at the University of Colorado at Boulder in fall 2026, and wants to test new ideas while continuing his research into the value of work. 

Kieran Setiya, the Peter de Florez Professor in Philosophy and head of the Department of Linguistics and Philosophy, says the NC Ethics of Technology Postdoctoral Fellowship has allowed MIT to bring in a series of exceptional young philosophers working at the intersection of ethics and AI, studying the systemic effects of new computing technologies and the moral, social, and political challenges they pose.

“This is just the kind of applied interdisciplinary thinking we need to support and sustain at MIT,” he adds.

Slice and dice

Thu, 04/09/2026 - 2:00pm

What if the Trojan horse had been pulled to pieces, revealing the ruse and fending off the invasion, just as it entered the gates of Troy?

That’s an apt description of a newly characterized bacterial defense system that chops up foreign DNA.

Bacteria and the viruses that infect them, bacteriophages — phages for short — are ceaselessly at odds, with bacteria developing methods to protect themselves against phages that are constantly striving to overcome those safeguards.

New research from the Department of Biology at MIT, recently published in Nature, describes a defense system that is integrated into the protective membrane that encapsulates bacteria. SNIPE, which stands for surface-associated nuclease inhibiting phage entry, contains a nuclease domain that cleaves genetic material, chopping the invading phage genome into harmless fragments before it can appropriate the host’s molecular machinery to make more phages. 

Daniel Saxton, a postdoc in the Laub Lab and the paper’s first author, was initially drawn to studying this bacterial defense system in E. coli, in part because it is highly unusual to have a nuclease that localizes to the membrane, as most nucleases are free-floating in the cytoplasm, the gelatinous fluid that fills the space inside cells.

“The other thing that caught my attention is that this is something we call a direct defense system, meaning that when a phage infects a cell, that cell will actually survive the attack,” Saxton says. “It’s hard to fend off a phage directly in a cell and survive — but this defense system can do it.” 

Light it up

For Saxton, the project came into focus during a fluorescence-based experiment in which viral genetic material would light up if it successfully penetrated the bacteria. 

“SNIPE was obliterating the phage DNA so fast that we couldn’t even see a fluorescent spot,” Saxton recalls. “I don’t think I’ve ever seen such an effective defense system before — you can barrage the bacteria with hundreds of phage per cell, but SNIPE is like god-tier protection.”

When the nuclease domain of SNIPE was mutated so it couldn’t chop up DNA, fluorescent spots appeared as usual, and the bacteria succumbed to the phage infection. 

Bacteria maintain tight control over all their defense systems, lest they be turned against their host. Some systems remain dormant until they flare up, halting, for example, all protein translation in the cell, while others can distinguish between bacterial DNA and foreign, invading phage DNA. Only two mechanisms in the latter category had been characterized before researchers uncovered SNIPE.

“Right now, the phage field is at a really interesting spot where people are discovering phage defense systems at a breakneck pace,” Saxton says. 

Problems at the periphery

Saxton says they had to approach the work in a somewhat roundabout way because there are currently no published structures depicting all the steps of phage genome injection. Studying processes at the membrane is challenging: Membranes are dense and chaotic, and phage genome injection is a highly transient process, lasting only a few minutes. 

SNIPE seems to discern viral DNA by interacting with proteins the phage uses to tunnel through the bacteria’s protective membrane. This “subcellular localization,” according to Saxton, may also prevent SNIPE from inadvertently chopping up the bacteria’s own genetic material.

The model outlined in the paper is that one region of SNIPE binds to a bacterial membrane protein called ManYZ, while another region likely binds to the tape measure protein from the phage. 

The tape measure protein got its name because it determines the length of the phage tail — the part of the phage between the small, leglike protrusions and the bulbous head, which contains the phage’s genetic material. The researchers revealed that the phage’s tape measure protein enters the cytoplasm during injection, a phenomenon that had not been physically demonstrated before. 

There may also be other proteins or interactions involved. 

“If you shunt the phage genome injection through an alternate pathway that isn’t ManYZ, suddenly SNIPE doesn’t defend against the phage nearly as well,” Saxton says. “It’s unclear exactly how these proteins interact, but we do know that these two proteins are involved in this genome injection process.” 

Future directions

Saxton hopes that future work will expand our understanding of what occurs during phage genome injection and uncover the structures of the proteins involved, especially the tunnel complex in the membrane through which phages insert their genome.

Members of the Laub Lab are already collaborating with another lab to determine the structure of SNIPE. In the meantime, Saxton has been working on a new defense system in which molecular mimicry — bacterial proteins imitating phage proteins — may play a role. 

Michael T. Laub, the Salvador E. Luria Professor of Biology and a Howard Hughes Medical Institute investigator, notes that one of the breakthrough experiments for demonstrating how SNIPE works came from a brainstorming session at a lab retreat.

“Daniel and I were kind of stuck with how to directly measure the effect of SNIPE during infection, but another postdoc in the lab, Ian Roney, who is a co-author on the paper, came up with a very clever idea that ultimately worked perfectly,” Laub recalls. “It’s a great example of how powerful internal collaborations can be in pushing our science forward.”

A new type of electrically driven artificial muscle fiber

Thu, 04/09/2026 - 11:00am

Muscles are remarkably effective systems for generating controlled force, and engineers developing hardware for robots or prosthetics have long struggled to create analogs that can approach their unique combination of strength, rapid response, scalability, and control. But now, researchers at the MIT Media Lab and Politecnico di Bari in Italy have developed artificial muscle fibers that come closer to matching many of these qualities.

Like the fibers that bundle together to form biological muscles, these fibers can be arranged in different configurations to meet the demands of a given task. Unlike conventional robotic actuation systems, they are compliant enough to interface comfortably with the human body and operate silently without motors, external pumps, or other bulky supporting hardware.

The new electrofluidic fiber muscles — electrically driven actuators built in fiber format — are described in a recent paper published in Science Robotics. The work is led by Media Lab PhD candidate Ozgun Kilic Afsar; Vito Cacucciolo, a professor at the Politecnico di Bari; and four co-authors.

The new system brings together two technologies, Afsar explains. One is a fluidically driven artificial muscle known as a thin McKibben actuator, and the other is a miniaturized solid-state pump based on electrohydrodynamics (EHD), which can generate pressure inside a sealed fluid compartment without moving parts or an external fluid supply.

Until now, most fluid-driven soft actuators have relied on external “heavy, bulky, oftentimes noisy hydraulic infrastructure,” Afsar says, “which makes them difficult to integrate into systems where mobility or compact, lightweight design is important.” This has created a fundamental bottleneck in the practical use of fluidic actuators in real-world applications.

The key to breaking through that bottleneck was the use of integrated pumps based on electrohydrodynamic principles. These millimeter-scale, electrically driven pumps generate pressure and flow by injecting charge into a dielectric fluid, creating ions that drag the fluid along with them. Weighing just a few grams each and not much thicker than a toothpick, they can be fabricated continuously and scaled easily. “We integrated these fiber pumps into a closed fluidic circuit with the thin McKibben actuators,” Afsar says, noting that this was not a simple task given the different dynamics of the two components.

A key design strategy was to pair these fibers in what are known as antagonistic configurations. Cacucciolo explains that this is where “one muscle contracts while another elongates,” as when you bend your arm and your biceps contract while your triceps stretch. In their system, a millimeter-scale fiber pump sits between two similarly scaled McKibben actuators, driving fluid into one actuator to contract it while simultaneously relaxing the other.

“This is very much reminiscent of how biological muscles are configured and organized,” Afsar says. “We didn’t choose this configuration simply for the sake of biomimicry, but because we needed a way to store the fluid within the muscle design.” The need for an external reservoir open to the atmosphere has been one of the main factors limiting the practical use of EHD pumps in robotic systems outside the lab. By pairing two McKibben fibers in line, with a fiber pump between them to form a closed circuit, the team eliminated that need entirely.
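A toy model shows why the closed circuit removes the need for a reservoir: the pump only shuttles a fixed total volume between the two fibers, so nothing is drawn from or vented to the outside. The sketch below is a simplification for illustration, not the authors' dynamics model:

```python
# Toy model of the antagonistic pair: an EHD fiber pump moves fluid between
# two McKibben fibers in a closed circuit, so one contracts as the other
# relaxes (illustration only; not the authors' model).

def step(v1, v2, pump_flow, dt):
    """Move pump_flow * dt of fluid from fiber 2 into fiber 1."""
    dv = pump_flow * dt
    return v1 + dv, v2 - dv      # v1 + v2 is conserved: no reservoir needed

def contraction(v, v_min=0.0, v_max=1.0):
    """Assumed monotone map from internal volume to contraction fraction."""
    return (v - v_min) / (v_max - v_min)

v1 = v2 = 0.5                    # pre-pressurized starting volumes (arbitrary units)
for _ in range(10):
    v1, v2 = step(v1, v2, pump_flow=0.02, dt=1.0)
print(contraction(v1), contraction(v2))   # one fiber contracts, the other extends
```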

Another key finding was that the muscle fibers needed to be pre-pressurized, rather than simply filled. “There is a minimum internal system pressure that the system can tolerate,” Afsar says, “below which the pump can degrade or temporarily stop working.” This happens because of cavitation, in which vapor bubbles form when the pressure at the pump inlet drops below the vapor pressure of the liquid, eventually leading to dielectric breakdown.

To prevent cavitation, they applied a “bias” pressure from the outset so that the pressure at the fiber pump inlet never falls below the liquid’s vapor pressure. The magnitude of this bias pressure can be adjusted depending on the application. “To achieve the maximum contraction the muscle can generate, we found there is a specific bias pressure range that is optimal,” she says. “If you want to configure the system for faster response, you might increase that bias pressure, though with some reduction in maximum contraction.”
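Written as an inequality, with symbols introduced here for illustration rather than taken from the paper, the cavitation constraint reads:

```latex
p_{\mathrm{inlet}} \;=\; p_{\mathrm{bias}} - \Delta p_{\mathrm{suction}} \;>\; p_{\mathrm{vap}}(T)
```

That is, the pre-charge pressure must exceed the suction-side pressure drop at the pump inlet by at least the vapor pressure of the dielectric liquid at the operating temperature.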

Cacucciolo adds that most of today’s robotic limbs and hands are built around electric servo motors, whose configuration differs fundamentally from that of natural muscles. Servo motors generate rotational motion on a shaft that must be converted into linear movement, whereas muscle fibers naturally contract and extend linearly, as do these electrofluidic fibers. 

“Most robotic arms and humanoid robots are designed around the servo motors that drive them,” he says. “That creates integration constraints, because servo motors are hard to package densely and tend to concentrate mass near the joints they drive. By contrast, artificial muscles in fiber form can be packed tightly inside a robot or exoskeleton and distributed throughout the structure, rather than concentrated near a joint.”

These electrofluidic muscles may be especially useful for wearable applications, such as exoskeletons that help a person lift heavier loads or assistive devices that restore or augment dexterity. But the underlying principles could also apply more broadly. “Our findings extend to fluid-driven robotic systems in general,” Cacucciolo says. “Wherever fluidic actuators are used, or where engineers want to replace external pumps with internal ones, these design principles could apply across a wide range of fluid-driven robotic systems.”

This work “presents a major advancement in fiber-format soft actuation,” which “addresses several long-standing hurdles in the field, particularly regarding portability and power density,” says Herbert Shea, a professor in the Soft Transducers Laboratory at École Polytechnique Fédérale de Lausanne in Switzerland, who was not associated with this research. “The lack of moving parts in the pump makes these muscles silent, a major advantage for prosthetic devices and assistive clothing,” he says.

Shea adds that “this high-quality and rigorous work bridges the gap between fundamental fluid dynamics and practical robotic applications. The authors provide a complete system-level solution — characterizing the individual components, developing a predictive physical model, and validating it through a range of demonstrators.”

In addition to Afsar and Cacucciolo, the team also included Gabriele Pupillo and Gennaro Vitucci at Politecnico di Bari and Wedyan Babatain and Professor Hiroshi Ishii at the MIT Media Lab. The work was supported by the European Research Council and the Media Lab’s multi-sponsored consortium.

Bridging space research and policy

Thu, 04/09/2026 - 11:00am

While earning her dual master’s degrees in aeronautics and astronautics and public policy, Carissma McGee SM ’25 learned to navigate between two seemingly distinct worlds, bridging rigorous technical analysis and policy decisions.

As an undergraduate congressional intern and researcher, she saw a persistent gap in space policymaking. Policymakers often lacked technical expertise, while researchers were rarely involved in increasingly complex questions surrounding intellectual property and international collaboration in space.

Her work on intellectual property frameworks for space collaborations directly addresses that gap, combining expertise in gravitational microlensing and space telescope operations with policy analysis to tackle emerging governance challenges.

“I want to bring an expert level of science into the rooms where policy decisions are made,” says McGee, now a doctoral student in aeronautics and astronautics. “That perspective is critical for shaping the future of research and exploration.”

Likewise, she wants to bring her expertise in public policy into the lab.

“I enjoy being able to ask questions about intellectual property, territorial claims, knowledge transfer, or allocation of resources early on in a research project,” adds McGee.

McGee’s fascination with space started during her high school years in Delaware, when she first volunteered at a local observatory and then interned at the NASA Goddard Space Flight Center in Maryland.

Following high school, McGee attended Howard University. She was selected to participate in the Karsh STEM Scholars Program, a full-ride scholarship track for students committed to working continuously toward earning doctoral degrees. Howard, which holds an R1 research classification from the Carnegie Foundation, is in close proximity to the Goddard Space Flight Center, as well as the American Astronomical Society and the D.C. Space Grant Consortium.

In 2020, after her first year at Howard, the Covid-19 pandemic sent McGee back to her hometown in Delaware. As it turned out, that gave her an opportunity to work with her local congresswoman, Lisa Blunt Rochester, then a U.S. representative. In addition to supporting the congresswoman’s constituents, she drafted dozens of letters related to STEM education and energy reform.

Working in government gave McGee an opportunity to use her voice to “advocate for astronomy and astrophysics with the American Astronomical Society, advocate for space sciences, and for science representation.”

As an undergraduate, McGee also conducted research linking computational physics and astronomy, working with both NASA’s Jet Propulsion Laboratory and Yale University’s Department of Astronomy. She also continued research begun in 2021 with the Harvard and Smithsonian Center for Astrophysics’ Black Hole Initiative, contributing to work associated with the Event Horizon Telescope.

When she visited MIT in 2023, McGee was struck by the Institute’s openness to interdisciplinary work and support of her interest in combining aeronautics and astronautics with policy.

Once at MIT, she started working in the Space, Telecommunications, Astronomy, and Radiation Laboratory (STAR Lab) with advisor Kerri Cahoy, professor of aeronautics and astronautics. McGee says she experienced a great deal of freedom to craft her own program.

“I was drawn to the lab’s work on satellite missions and CubeSats, and excited to discover that I could pursue exoplanet astrophysics research within this framework and that submitting a dual thesis or focusing on astrophysics applications was possible,” says McGee. “When I expressed interest in participating in the Technology [and] Policy Program for a dual thesis in a framework for space policy, my advisors encouraged me to explore how we could integrate these diverse interests into a path forward.”

In 2024, McGee was awarded a MathWorks Fellowship to pursue research associated with the Nancy Grace Roman Space Telescope and join a NASA mission.

“It was just amazing to join the exoplanet group at NASA,” she says. “I had a front-row seat to see how real researchers and workers navigate complex problems.”

McGee credits MathWorks with helping fellows to “be at the forefront of knowledge and shaping innovation.”

One of her proudest academic accomplishments is PyLIMASS, a software system she developed with collaborators at Louisiana State University, the Ohio State University, and NASA’s Goddard Space Flight Center. The tool enables more accurate mass and distance estimates in gravitational microlensing events, helping the Roman Space Telescope project meet its precision goals for studying exoplanets.
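For background, mass and distance estimation in microlensing rests on standard relations such as the following, drawn from the general microlensing literature rather than from PyLIMASS's documentation:

```latex
% Standard microlensing mass--distance relations:
M_L = \frac{\theta_E}{\kappa\,\pi_E}, \qquad
\theta_E\,\pi_E = \pi_{\mathrm{rel}} = \mathrm{au}\left(\frac{1}{D_L}-\frac{1}{D_S}\right),
\qquad \kappa \equiv \frac{4G}{c^{2}\,\mathrm{au}} \approx 8.14\ \mathrm{mas}\,M_\odot^{-1}
```

Here $\theta_E$ is the angular Einstein radius, $\pi_E$ the microlensing parallax, and $D_L$, $D_S$ the lens and source distances; measuring both $\theta_E$ and $\pi_E$ pins down the lens mass and its distance.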

“To build software that didn’t previously exist — and to know it will be used for the Roman mission — is incredibly exciting,” McGee says.

In May 2025, McGee graduated with dual master’s degrees in aeronautics and astronautics and technology and policy. That same month, she presented her research at the American Astronomical Society meeting in Anchorage, Alaska, and at the Technology Management and Policy Conference in Portugal.

McGee remained at MIT to pursue her doctoral degree. Last fall, as an MIT BAMIT Community Advancement Program and Fund Fellow, she hosted a daylong conference for STEM students focused on how intellectual property frameworks shape technical fields.

McGee’s accomplishments and contributions have been celebrated with a number of honors recently. In 2026, she was named Miss Black Massachusetts United States, was recognized among MIT’s Graduate Students of Excellence, and received the MIT MLK Leadership Award in recognition of her service, integrity, and community impact.

Beyond her academic work, McGee is active across campus. She teaches Pilates with MIT Recreation, participates in the Graduate Women in Aerospace Engineering group, and serves as a graduate resident assistant in an undergraduate dorm on East Campus.

She credits the AeroAstro graduate community with keeping her momentum going.

“Even if we’re tired, there’s this powerful camaraderie among AeroAstro graduate students working together. Seeing my peers push through similar research milestones and solve daunting problems motivates you to advance beyond the finish line to further developments in the field.”

New technique makes AI models leaner and faster while they’re still learning

Thu, 04/09/2026 - 9:00am

Training a large artificial intelligence model is expensive, not just in dollars, but in time, energy, and computational resources. Traditionally, obtaining a smaller, faster model requires either training a massive one first and then trimming it down, or training a small one from scratch and accepting weaker performance.

Researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), Max Planck Institute for Intelligent Systems, European Laboratory for Learning and Intelligent Systems, ETH, and Liquid AI have now developed a new method that sidesteps this trade-off entirely, compressing models during training, rather than after.

The technique, called CompreSSM, targets a family of AI architectures known as state-space models, which power applications ranging from language processing to audio generation and robotics. By borrowing mathematical tools from control theory, the researchers can identify which parts of a model are pulling their weight and which are dead weight, before surgically removing the unnecessary components early in the training process.

"It's essentially a technique to make models grow smaller and faster as they are training," says Makram Chahine, a PhD student in electrical engineering and computer science, CSAIL affiliate, and lead author of the paper. "During learning, they're also getting rid of parts that are not useful to their development."

The key insight is that the relative importance of different components within these models stabilizes surprisingly early during training. Using a mathematical quantity called Hankel singular values, which measure how much each internal state contributes to the model's overall behavior, the team showed they can reliably rank which dimensions matter and which don't after only about 10 percent of the training process. Once those rankings are established, the less-important components can be safely discarded, and the remaining 90 percent of training proceeds at the speed of a much smaller model.
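To make the core quantity concrete, here is a minimal Python sketch, not the team's actual code, that computes the Hankel singular values of a toy discrete-time linear state-space model and uses them to rank its internal states. The model dimensions and the 5 percent cutoff are illustrative assumptions only.

    import numpy as np
    from scipy.linalg import solve_discrete_lyapunov

    def hankel_singular_values(A, B, C):
        # Controllability and observability Gramians of the discrete-time
        # system x[k+1] = A x[k] + B u[k], y[k] = C x[k].
        Wc = solve_discrete_lyapunov(A, B @ B.T)
        Wo = solve_discrete_lyapunov(A.T, C.T @ C)
        # Hankel singular values: square roots of the eigenvalues of Wc @ Wo.
        # Each one measures how much an internal state direction contributes
        # to the model's input-output behavior.
        eigs = np.linalg.eigvals(Wc @ Wo).real
        return np.sort(np.sqrt(np.clip(eigs, 0.0, None)))[::-1]

    # Toy example: a random stable model with 8 states, 2 inputs, 2 outputs.
    rng = np.random.default_rng(0)
    n, m, p = 8, 2, 2
    A = rng.normal(size=(n, n))
    A *= 0.9 / np.max(np.abs(np.linalg.eigvals(A)))  # scale for stability
    B = rng.normal(size=(n, m))
    C = rng.normal(size=(p, n))

    hsv = hankel_singular_values(A, B, C)
    keep = int(np.sum(hsv > 0.05 * hsv[0]))  # keep states above 5% of the largest
    print("Hankel singular values:", np.round(hsv, 3))
    print(f"States worth keeping: {keep} of {n}")

In CompreSSM, a ranking of this kind is computed early in training; states below the cutoff are discarded, and training then continues in the smaller model.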

"What's exciting about this work is that it turns compression from an afterthought into part of the learning process itself,” says senior author Daniela Rus, MIT professor and director of CSAIL. “Instead of training a large model and then figuring out how to make it smaller, CompreSSM lets the model discover its own efficient structure as it learns. That's a fundamentally different way to think about building AI systems.”

The results are striking. On image classification benchmarks, compressed models maintained nearly the same accuracy as their full-sized counterparts while training up to 1.5 times faster. A compressed model reduced to roughly a quarter of its original state dimension achieved 85.7 percent accuracy on the CIFAR-10 benchmark, compared to just 81.8 percent for a model trained at that smaller size from scratch. On Mamba, one of the most widely used state-space architectures, the method achieved approximately 4x training speedups, compressing a 128-dimensional model down to around 12 dimensions while maintaining competitive performance.

"You get the performance of the larger model, because you capture most of the complex dynamics during the warm-up phase, then only keep the most-useful states," Chahine says. "The model is still able to perform at a higher level than training a small model from the start."

What makes CompreSSM distinct from existing approaches is its theoretical grounding. Conventional pruning methods train a full model and then strip away parameters after the fact, meaning you still pay the full computational cost of training the big model. Knowledge distillation, another popular technique, requires training a large "teacher" model to completion and then training a second, smaller "student" model on top of it, essentially doubling the training effort. CompreSSM avoids both of these costs by making informed compression decisions mid-stream.

The team benchmarked CompreSSM head-to-head against both alternatives. Compared to Hankel nuclear norm regularization, a recently proposed spectral technique for encouraging compact state-space models, CompreSSM was more than 40 times faster, while also achieving higher accuracy. The regularization approach slowed training by roughly 16 times because it required expensive eigenvalue computations at every single gradient step, and even then, the resulting models underperformed. Against knowledge distillation on CIFAR-10, CompreSSM held a clear advantage for heavily compressed models: At smaller state dimensions, distilled models saw significant accuracy drops, while CompreSSM-compressed models maintained near-full performance. And because distillation requires a forward pass through both the teacher and student at every training step, even its smaller student models trained slower than the full-sized baseline.

The researchers proved mathematically that the importance of individual model states changes smoothly during training, thanks to an application of Weyl's theorem, and showed empirically that the relative rankings of those states remain stable. Together, these findings give practitioners confidence that dimensions identified as negligible early on won't suddenly become critical later.
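For readers who want the bound itself, the standard statement of Weyl's perturbation inequality for singular values is given below. This is the generic form of the result; the paper's exact formulation may differ.

    % Weyl's bound: perturbing a matrix A by E (for example, by one
    % gradient update) moves each singular value by at most the
    % spectral norm of the perturbation.
    \[
      \bigl|\, \sigma_i(A + E) - \sigma_i(A) \,\bigr| \;\le\; \lVert E \rVert_2
      \qquad \text{for all } i .
    \]

In other words, a single gradient step of size \(\lVert E \rVert_2\) can shift each singular value by at most that amount, which is why the importance scores drift smoothly during training rather than jumping.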

The method also comes with a pragmatic safety net. If a compression step causes an unexpected performance drop, practitioners can revert to a previously saved checkpoint. "It gives people control over how much they're willing to pay in terms of performance, rather than having to define a less-intuitive energy threshold," Chahine explains.
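The rollback itself is simple to implement. Here is a minimal sketch under assumed names: `model`, `compress`, and `evaluate` are hypothetical stand-ins for a user's own training loop, not the paper's API.

    import copy

    def compress_with_rollback(model, compress, evaluate, max_drop=0.01):
        # Save a checkpoint before compressing so the step can be undone.
        checkpoint = copy.deepcopy(model)
        baseline = evaluate(model)
        compress(model)  # e.g., drop states with small Hankel singular values
        # Revert if validation performance fell more than the allowed budget.
        if baseline - evaluate(model) > max_drop:
            return checkpoint
        return model

The `max_drop` budget is the user-facing knob Chahine describes: a direct performance tolerance rather than an abstract energy threshold.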

There are some practical boundaries to the technique. CompreSSM works best on models that exhibit a strong correlation between the internal state dimension and overall performance, a property that varies across tasks and architectures. The method is particularly effective on multi-input, multi-output (MIMO) models, where the relationship between state size and expressivity is strongest. For per-channel, single-input, single-output architectures, the gains are more modest, since those models are less sensitive to state dimension changes in the first place.

The theory applies most cleanly to linear time-invariant systems, although the team has developed extensions for the increasingly popular input-dependent, time-varying architectures. And because the family of state-space models extends to architectures like linear attention, a growing area of interest as an alternative to traditional transformers, the potential scope of application is broad.

Chahine and his collaborators see the work as a stepping stone. The team has already demonstrated an extension to linear time-varying systems like Mamba, and future directions include pushing CompreSSM further into matrix-valued dynamical systems used in linear attention mechanisms, which would bring the technique closer to the transformer architectures that underpin most of today's largest AI systems.

"This had to be the first step, because this is where the theory is neat and the approach can stay principled," Chahine says. "It's the stepping stone to then extend to other architectures that people are using in industry today."

"The work of Chahine and his colleagues provides an intriguing, theoretically grounded perspective on compression for modern state-space models (SSMs)," says Antonio Orvieto, ELLIS Institute Tübingen principal investigator and MPI for Intelligent Systems independent group leader, who wasn't involved in the research. "The method provides evidence that the state dimension of these models can be effectively reduced during training and that a control-theoretic perspective can successfully guide this procedure. The work opens new avenues for future research, and the proposed algorithm has the potential to become a standard approach when pre-training large SSM-based models."

The work, which was accepted as a conference paper at the International Conference on Learning Representations 2026, will be presented later this month. It was supported, in part, by the Max Planck ETH Center for Learning Systems, the Hector Foundation, Boeing, and the U.S. Office of Naval Research.

The flawed fundamentals of failing banks

Thu, 04/09/2026 - 12:00am

Bank runs are dramatic: Picture Depression-era footage of customers lined up, trying to get their deposits back. Or recall Lehman Brothers emptying out in 2008 or Silicon Valley Bank collapsing in 2023.

But what causes these runs in the first place? One viewpoint is that something of a self-fulfilling prophecy is involved. Panic spreads, and suddenly many customers are seeking their money back, until an otherwise solid institution is run into the ground.

That is not exactly Emil Verner’s position, however. Verner, an MIT economist, has been studying bank failures empirically for years and now has a different perspective. Verner and his collaborators have produced extensive evidence suggesting that when banks fail, it is usually because they are in a fundamentally shaky position. A bank run generally finishes off an already flawed business rather than upending a viable one.

“What we essentially find is that banks that fail are almost always very weak, and are in trouble,” says Verner, who is the Jerome and Dorothy Lemelson Professor of Management and Financial Economics at the MIT Sloan School of Management. “Most banks that have been subject to runs have been pretty insolvent. Runs are more the final spasm that brings down weak banks, rather than the causes of indiscriminate failures.”

This conclusion has plenty of policy relevance for the banking sector and follows a lengthy analysis of historical data. In one forthcoming paper, in the Quarterly Journal of Economics, Verner and two colleagues reviewed U.S. bank data from 1863 to 2024, concluding that “the primary cause of bank failures and banking crises is almost always and everywhere a deterioration of bank fundamentals.” In a 2021 paper in the same journal, Verner and two other colleagues studied banking data from 46 countries covering 1870-2016, and found that declining bank fundamentals usually preceded runs. And currently, Verner is working to make more historical U.S. bank data publicly available to scholars.

Seen in this light, bank runs are certainly damaging, but bank failures likely have more to do with bad portfolios, poor risk management, and minimal assets in reserve than with sentiment-driven client behavior.

“From the idea that bank crises are really about sudden runs on bank debt, we’re moving to thinking that runs are one symptom of a crisis that runs deeper,” Verner says. “For most people, we’re saying something reasonable, refining our knowledge, and just shifting the emphasis.”

For his research and teaching, Verner received tenure at MIT last year.

Landing in a “great place”

Verner is a native of Denmark who also lived in the U.S. for several years while growing up. Around the time he was finishing school, the U.S. housing market imploded, taking some financial institutions with it.

“Everything came crashing down,” Verner says. “I got obsessed with understanding it.”

As an undergraduate, he studied economics at the University of Copenhagen. After three years, Verner was unconvinced the discipline had fully explained financial crises. He decided to keep studying economics in graduate school, and was accepted into the PhD program at Princeton University.

Along the way, Verner became a historically minded economist, digging into data and cases from past decades to shed light on larger patterns about crises and bank insolvency.

“I’ve always thought history was extremely fascinating in itself,” Verner says. And while history may not repeat, he notes, it is “a really valuable tool. It helps you think through what could happen, what are similar scenarios, and how agents acted when facing similar constraints and incentives in the past.”

For studying financial crises in particular, he adds, history helps in multiple ways. Crises are rare, so historical cases add data. Changes over time, like more financial regulations and more complex investment tools, provide different settings to examine the same cause-and-effect issues. “History is a useful laboratory to study these questions,” Verner says.

After earning his PhD from Princeton, Verner went on the job market and landed his faculty position at MIT Sloan. Many aspects of Institute life — the classroom experience, the collegiality, the campus — have strongly resonated with him.

“MIT is a great place,” Verner says simply. “Great colleagues, great students.”

Focused on fundamentals

Over the last decade, Verner has published papers on numerous topics in addition to banking crises. As an outgrowth of his doctoral work, for instance, he published innovative papers examining the dampening effect that household debt has on economic growth in many countries. He also co-authored the lead paper in an issue of the American Economic Review last year examining the way German hyperinflation after World War I reallocated wealth to large businesses with substantial debt, leading them to grow faster.

Still, the main focus of Verner’s work right now is on banking crises and bank failures — including their causes. In a 2024 paper looking at private lending in 117 countries since 1940, Verner and economist Karsten Müller showed that financial crises are often preceded by credit booms in what scholars call the “non-tradeable” sector of the economy. That includes industries such as retail or construction, which do not produce easily tradeable goods. Firms in the non-tradeable sector tend to rely more heavily on loans secured by real estate; during real estate booms, such firms use high valuations to borrow more, and they become more vulnerable to crashes — which helps explain why bank portfolios, in turn, can crater as well.

In recent years, in the process of studying these topics, Verner has helped expand the domain of known U.S. historical data in the field. Working with economists Sergio Correia and Stephan Luck, Verner has helped apply large language models to historical newspaper collections, unearthing information about 3,421 runs on individual banks from 1863 to 1934; they are making that data freely available to other scholars.

This topic has important policy implications. If runs are a contagion bringing down worthy banks, then one solution is to provide banks with more liquidity to get through the crisis, something that has indeed been tried in the U.S. However, if bank failures are rooted in fundamentals, such as excessive risk-taking and failing to keep enough capital on hand, then more systemic policy measures aimed at best practices might be logical. At a minimum, substantive new research can help alter the contents of those discussions.

“When banks fail, it’s usually because these banks have taken a lot of risk and have big losses,” Verner says. “It’s rarely unjustified. So that means these types of liquidity interventions alone are not enough to stop a crisis.”

The expansive research Verner has helped conduct includes a number of specific indicators that fundamentals are a big factor in failure. For instance, examining how infrequently failed banks recover all their assets shows how shaky their foundations were.

“The recovery rate on assets is informative about how solvent a bank was,” Verner says. “This is where I think we’ve contributed something new.” Some economists in the past have cited particular examples of struggling banks making depositors whole, but those are exceptions, not the rule. “Sometimes people argue this or that bank was actually solvent because depositors ended up getting all their money back, and that might be true of one bank, but on aggregate it’s not the case,” Verner says.

Overall, Verner intends to keep following the facts, digging up more evidence, and seeing where it leads.

“While there is this notion that liquidity problems can arise pretty much out of nowhere, I think we are changing that emphasis by showing that financial crises happen basically because banks become insolvent,” Verner underscores. “And then the bank run is that final dramatic spasm — which slightly shifts how we teach and talk about it, and perhaps think about the policy response.”

Desirée Plata appointed associate dean of engineering

Wed, 04/08/2026 - 12:45pm

Desirée Plata, the School of Engineering Distinguished Climate and Energy Professor in the MIT Department of Civil and Environmental Engineering, has been named associate dean of engineering, effective July 1.

In her new role, Plata will focus on fostering early-stage research initiatives across the school’s faculty and on strengthening entrepreneurial and innovation efforts. She will also support the school’s Technical Leadership and Communication (TLC) Programs, including the Gordon Engineering Leadership Program, the Daniel J. Riccio Graduate Engineering Leadership Program, the School of Engineering Communication Lab, and the Undergraduate Practice Opportunities Program.

Plata will join Associate Dean Hamsa Balakrishnan, who continues to lead faculty searches, fellowships, and outreach programs. Together, the two associate deans will serve on key leadership groups including Engineering Council and the Dean’s Advisory Council to shape the school’s strategic priorities.

“Desirée’s leadership, scholarship, and commitment to excellence have already had a meaningful impact on the MIT community, and I look forward to the perspective and energy she will bring to this role,” says Paula T. Hammond, dean of the School of Engineering and Institute Professor in the Department of Chemical Engineering.

Plata’s research centers on the sustainable design of industrial processes and materials through environmental chemistry, with an emphasis on clean energy technologies. She develops ways to make industrial processes more environmentally sustainable, incorporating environmental objectives into the design phase of processes and materials. Her work spans nanomaterials and carbon-based materials for pollution reduction, as well as advanced methods for environmental cleanup and energy conversion.  Plata directs MIT’s Parsons Laboratory, which conducts interdisciplinary research on natural systems and human adaptation to environmental change.

Plata is a leader on campus and beyond in climate and sustainability initiatives. She serves as director of the MIT Climate and Sustainability Consortium (MCSC), an industry–academia collaboration launched to accelerate solutions for global climate challenges. She founded and directs the MIT Methane Network, a multi-institution effort to cut global methane emissions within this decade. Plata also co-directs the National Institute of Environmental Health Sciences MIT Superfund Research Program, which focuses on strategies to protect communities concerned about hazardous chemicals, pollutants, and other contaminants in their environment.

Beyond academia, Plata has co-founded two climate and energy startups, Nth Cycle and Moxair. Nth Cycle is redefining metal refining and the domestic battery supply chain. Earlier this month, the company signed a $1.1 billion off-take agreement to help establish a secure and circular supply of battery minerals.

Her company Moxair specializes in advanced approaches for low-level methane monitoring and destruction. In 2026, with support from the U.S. Department of Energy and in collaboration with MIT, Moxair will build and demonstrate a first-of-its-kind dilute methane oxidation technology to tackle methane emissions using transition metal catalysts.

As an educator, Plata has helped develop programs that enhance research experience for students and postdocs. She played a pivotal role in the founding of the MIT Postdoctoral Fellowship Program for Engineering Excellence, serving on its faculty steering committee, overseeing admissions, and leading both the academic track and entrepreneurship track. She also helped design the MCSC Climate and Sustainability Scholars Program, a yearlong program open to juniors and seniors across MIT.

Plata earned a BS in chemistry from Union College in 2003 and a PhD in the joint MIT-Woods Hole Oceanographic Institution program in oceanography and applied ocean science in 2009. After completing her doctorate, she held faculty positions at Mount Holyoke College, Duke University, and Yale University. While at Yale, she served as associate director of research at the university’s Center for Green Chemistry and Green Engineering. In 2018, Plata joined MIT’s faculty in the Department of Civil and Environmental Engineering.

Her work as a scholar and educator has earned numerous awards and honors. She received MIT’s Harold E. Edgerton Faculty Achievement Award in 2020, recognizing her excellence in research, teaching, and service. She has also been honored with an NSF CAREER Award and the Odebrecht Award for Sustainable Development. Plata is a fellow of the American Chemical Society and was a Young Investigator Sustainability Fellow at Caltech.

Plata is a two-time National Academy of Engineering Frontiers of Engineering Fellow and a two-time National Academy of Sciences Kavli Frontiers of Science Fellow. Her dedication to mentoring was recognized with MIT’s Junior Bose Award for Excellence in Teaching and the Frank Perkins Graduate Advising Award.

Physicists zero in on the mass of the fundamental W boson particle

Wed, 04/08/2026 - 12:00pm

When fundamental particles are heavier or lighter than expected, physicists’ understanding of the universe can tip into the unknown. A particle that is just beyond its predicted mass can unravel scientists’ assumptions about the forces that make up all of matter and space. But now, a new precision measurement has reset the balance and confirmed scientists’ theories, at least for one of the universe’s core building blocks.

In a paper appearing today in the journal Nature, an international team including MIT physicists reports a new, ultraprecise measurement of the mass of the W boson.

The W boson is one of two elementary particles that embody the weak force, which is one of the four fundamental forces of nature. The weak force enables certain particles to change identities, such as from protons to neutrons and vice versa. This morphing is what drives radioactive decay, as well as nuclear fusion, which powers the sun.

Now, scientists have determined the mass of the W boson by analyzing more than 1 billion proton collision events produced by the Large Hadron Collider (LHC) at CERN (the European Organization for Nuclear Research) in Switzerland. The LHC accelerates protons toward each other at close to the speed of light. When they collide, two protons can produce a W boson, among a shower of other particles.

Catching a W boson is nearly impossible, as it decays almost immediately into two other particles, one of which, a neutrino, is so elusive that it cannot be detected. Scientists are left to measure the other particle, known as a muon, and model how its properties add up to the total mass of its parent, the W boson. In the new study, scientists used the Compact Muon Solenoid (CMS) experiment, a particle detector at the LHC that precisely tracks muons and other particles produced in the aftermath of proton collisions.

From billions of proton-proton collisions, the team identified 100 million events that produced a W boson decaying to a muon and a neutrino. For each of these events, they carried out detailed analyses to home in on a precise mass measurement. In the end, they determined that the W boson has a mass of 80360.2 ± 9.9 megaelectron volts (MeV). This new mass is in line with predictions of the Standard Model, which is physicists’ best rulebook for describing the fundamental particles and forces of nature.

The precision of the new measurement is on par with a previous measurement made in 2022 by the Collider Detector at Fermilab (CDF). That measurement took physicists by surprise, as it was significantly heavier than what the Standard Model predicted, and therefore raised the possibility of “new physics,” such as particles and forces that have yet to be discovered.

Because the new CMS measurement matches the CDF result in precision and, along with a number of other experiments, agrees with the Standard Model, physicists are likely on solid ground in terms of how they understand the W boson.

“It’s just a huge relief, to be honest,” says Kenneth Long, a lead author of the study, who is a senior postdoc in MIT’s Laboratory for Nuclear Science. “This new measurement is a strong confirmation that we can trust the Standard Model.”

The study is authored by more than 3,000 members of CERN’s CMS Collaboration. The core group who worked on the new measurement includes about 30 scientists from 10 institutions, led by a team at MIT that includes Long; Tianyu Justin Yang PhD ’24; David Walter and Jan Eysermans, who are both MIT postdocs in physics; Guillelmo Gomez-Ceballos, a principal research scientist in the Particle Physics Collaboration; Josh Bendavid, a former research scientist; and Christoph Paus, a professor of physics at MIT and principal investigator with the Particle Physics Collaboration.

Piecing together

The W boson was discovered in 1983 and is the fourth-heaviest of the known fundamental particles. Multiple experiments have aimed to home in on the particle’s mass, with varying degrees of precision. For the most part, these experiments have produced measurements that agree with the Standard Model’s predictions. The 2022 measurement by Fermilab’s CDF experiment is the one significant outlier. It also happens to be the most precise measurement to date.

“If you take the CDF measurement at face value, you would say there must be physics beyond the Standard Model,” says co-author Christoph Paus. “And of course that was the big mystery.”

Paus and his colleagues sought to either support or refute the CDF’s findings by making an independent measurement, with an experiment that matches CDF’s precision. Their new W boson mass measurement is a product of 10 years’ worth of work, both to analyze actual particle collision events and to simulate all the scenarios that could produce those events.

For their new study, the physicists analyzed proton collision events that were produced at the LHC in 2016. When it is running, the particle collider generates proton collisions at a furious rate of about one every 25 nanoseconds. The team analyzed a portion of the LHC’s 2016 dataset that encompasses billions of proton-proton collisions. Among these, they identified about 100 million events that produced a very short-lived W boson.

“A particle like the W boson exists for a teeny tiny moment — something like 10⁻²⁴ seconds — before decaying to two particles, one of which is a neutrino that can’t be measured directly,” Long explains. “That’s the tricky part: You have to measure the other particle — a muon — really well, and be able to piece things together with only one piece of the puzzle.”

Gathering momentum

When a muon is produced from the decay of a W boson, it carries away about half of the W boson’s mass in the form of energy and momentum, which propel it away from the original collision. Due to the strong magnetic field inside the CMS detector, the electrically charged muon follows a path whose curvature is a function of its momentum. Scientists’ challenge is to track the muon’s path and every interaction it may have with other particles and its surroundings, in order to estimate its initial momentum.
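The underlying relation is textbook physics: a charged particle in a uniform magnetic field bends along a circle whose radius grows with momentum. In convenient units,

    % Transverse momentum of a charged track from its bending radius.
    \[
      p_T \,[\mathrm{GeV}/c] \;\approx\; 0.3 \, B\,[\mathrm{T}] \; r\,[\mathrm{m}] .
    \]

As a back-of-the-envelope illustration, not a figure from the paper: in the CMS solenoid’s roughly 3.8-tesla field, a muon carrying about 40 GeV of transverse momentum follows an arc with a radius near 35 meters, so the detector must resolve a very slight curvature over a track only a few meters long.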

The muon’s momentum is also influenced by the momentum of the W boson before it decays. Disentangling the effect of the W boson’s motion from the effect of its mass presented a major challenge. To infer the W boson mass, the team first carried out simulations of every scenario they could think of that a muon might experience after a proton-proton collision in the chaotic environment of the particle collider. In all, the team produced 4 billion such simulated events, described by state-of-the-art theoretical calculations. The simulations encoded diverse hypotheses about how the muon momentum is affected by the physical features of the CMS detector, as well as uncertainties in the predictions that govern W boson production in LHC collisions.

The researchers compared their simulations with data from the 2016 LHC run. For every proton-proton collision event that occurs in the collider, scientists can use the CMS detector to precisely measure the energy and momentum of resulting particles such as muons. The team analyzed CMS measurements of muons produced in over 100 million W boson events, then identified which simulated muon-momentum spectrum best matched the data and converted that match into a new mass for the W boson.
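Conceptually, this is a template comparison: each simulated mass hypothesis predicts a slightly different muon-momentum spectrum, and the hypothesis that best matches the observed spectrum wins. The toy Python sketch below illustrates only that idea; the actual CMS analysis relies on far more sophisticated statistical machinery, and the function and variable names here are hypothetical.

    import numpy as np

    def best_fit_mass(data_hist, templates):
        # templates: dict mapping a W-mass hypothesis (in MeV) to the
        # simulated muon-momentum histogram it predicts.
        def chi2(expected):
            # Poisson variance approximated by the expected counts per bin.
            return np.sum((data_hist - expected) ** 2 / np.maximum(expected, 1.0))
        # Return the hypothesis whose template best matches the data.
        return min(templates, key=lambda m: chi2(templates[m]))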

That mass — 80360.2 ± 9.9 megaelectron volts — is significantly lighter than the CDF experiment’s measurement. What’s more, the new estimate is within the range of what the Standard Model predicts for the W boson’s mass, bolstering physicists’ confidence in the Standard Model and its descriptions of the major particles and forces of nature.

“With the combination of our really precise result and other experiments that line up with the Standard Model’s predictions, I think that most people would place their bets on the Standard Model,” Long says. “Though I do think people should continue doing this measurement. We are not done.”

“We want to add more data, make our analysis techniques more precise, and basically squeeze the lemon a little harder. There is always some juice left,” Paus adds. “With a better look, then we can say for certain whether we truly understand this one fundamental building block.”

This work was supported, in part, by multiple funding agencies, including the U.S. Department of Energy, and made use of the SubMIT computing facility, sponsored by the MIT Department of Physics.
