MIT Latest News

MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.

Decoding the sounds of battery formation and degradation

Tue, 09/16/2025 - 11:00am

Before batteries lose power, fail suddenly, or burst into flames, they tend to produce faint sounds over time that provide a signature of the degradation processes going on within their structure. But until now, nobody had figured out how to interpret exactly what those sounds meant, and how to distinguish between ordinary background noise and significant signs of possible trouble.

Now, a team of researchers at MIT’s Department of Chemical Engineering has done a detailed analysis of the sounds emanating from lithium-ion batteries, and has been able to correlate particular sound patterns with specific degradation processes taking place inside the cells. The new findings could provide the basis for relatively simple, totally passive and nondestructive devices that could continuously monitor the health of battery systems, for example in electric vehicles or grid-scale storage facilities, to provide ways of predicting useful operating lifetimes and forecasting failures before they occur.

The findings were reported Sept. 5 in the journal Joule, in a paper by MIT graduate students Yash Samantaray and Alexander Cohen, former MIT research scientist Daniel Cogswell PhD ’10, and Chevron Professor of Chemical Engineering and professor of mathematics Martin Z. Bazant.

“In this study, through some careful scientific work, our team has managed to decode the acoustic emissions,” Bazant says. “We were able to classify them as coming from gas bubbles that are generated by side reactions, or by fractures from the expansion and contraction of the active material, and to find signatures of those signals even in noisy data.”

Samantaray explains: “I think the core of this work is to look at a way to investigate internal battery mechanisms while they’re still charging and discharging, and to do this nondestructively.” He adds, “Out there in the world now, there are a few methods that exist, but most are very expensive and not really conducive to batteries in their normal format.”

To carry out their analysis, the team coupled electrochemical testing with recording of the acoustic emissions, under real-world charging and discharging conditions, using detailed signal processing to correlate the electrical and acoustic data. By doing so, he says, “we were able to come up with a very cost-effective and efficient method of actually understanding gas generation and fracture of materials.”

Gas generation and fracturing are two primary mechanisms of degradation and failure in batteries, so being able to detect and distinguish those processes, just by monitoring the sounds produced by the batteries, could be a significant tool for those managing battery systems.

Previous approaches have simply monitored the sounds and recorded times when the overall sound level exceeded some threshold. But in this work, by simultaneously monitoring the voltage and current as well as the sound characteristics, Bazant says, “We know that [sound] emissions happen at a certain potential [voltage], and that helps us identify what the process might be that is causing that emission.”

After these tests, they took the batteries apart and studied them under an electron microscope to detect fracturing of the materials.

In addition, they applied a wavelet transform — essentially, a way of encoding the frequency and duration of each signal that is captured, providing distinct signatures that can then be more easily extracted from background noise. “No one had done that before,” Bazant says, “so that was another breakthrough.”
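For a concrete sense of what that encoding looks like, the short Python sketch below applies a continuous wavelet transform (via the PyWavelets library) to a synthetic acoustic-emission waveform and pulls out the frequency and timing of a burst. It illustrates the general technique only, not the team’s actual processing pipeline; the sampling rate, wavelet choice, and signal are assumptions.

```python
# Minimal illustration (not the team's pipeline): locate a short acoustic-
# emission burst in time and frequency with a continuous wavelet transform.
import numpy as np
import pywt

fs = 1_000_000                                   # assumed 1 MHz sampling rate
t = np.arange(0, 0.01, 1 / fs)                   # 10 ms of recording

# Synthetic waveform: low-level background noise plus a brief 150 kHz burst
rng = np.random.default_rng(0)
signal = 0.05 * rng.standard_normal(t.size)
signal += np.exp(-((t - 0.005) / 2e-4) ** 2) * np.sin(2 * np.pi * 150e3 * t)

# Continuous wavelet transform: rows are scales (frequencies), columns are time
scales = np.arange(1, 64)
coeffs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=1 / fs)

# A burst shows up as a localized patch of large |coefficient|; its center
# frequency and duration are the kind of signature used to classify events.
power = np.abs(coeffs) ** 2
i_scale, i_time = np.unravel_index(np.argmax(power), power.shape)
print(f"burst near {freqs[i_scale] / 1e3:.0f} kHz at t = {t[i_time] * 1e3:.2f} ms")
```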

Acoustic emissions are widely used in engineering, he points out, for example to monitor structures such as bridges for signs of incipient failure. “It’s a great way to monitor a system,” he says, “because those emissions are happening whether you’re listening to them or not,” so by listening, you can learn something about internal processes that would otherwise be invisible.

With batteries, he says, “we often have a hard time interpreting the voltage and current information as precisely as we’d like, to know what’s happening inside a cell. And so this offers another window into the cell’s state of health, including its remaining useful life, and safety, too.” In a related paper with Oak Ridge National Laboratory researchers, the team has shown that acoustic emissions can provide an early warning of thermal runaway, a situation that can lead to fires if not caught. The new study suggests that these sounds can be used to detect gas generation prior to combustion, “like seeing the first tiny bubbles in a pot of heated water, long before it boils,” says Bazant.

The next step will be to take this new knowledge of how certain sounds relate to specific conditions, and develop a practical, inexpensive monitoring system based on this understanding. For example, the team has a grant from Tata Motors to develop a battery monitoring system for its electric vehicles. “Now, we know what to look for, and how to correlate that with lifetime and health and safety,” Bazant says.

One possible application of this new understanding, Samantaray says, is “as a lab tool for groups that are trying to develop new materials or test new environments, so they can actually determine gas generation or active material fracturing without having to open up the battery.”

Bazant adds that the system could also be useful for quality control in battery manufacturing. “The most expensive and rate-limiting process in battery production is often the formation cycling,” he says. This is the process where batteries are cycled through charging and discharging to break them in, and part of that process involves chemical reactions that release some gas. The new system would allow detection of these gas formation signatures, he says, “and by sensing them, it may be easier to isolate well-formed cells from poorly formed cells very early, even before the useful life of the battery, when it’s being made.”

The work was supported by the Toyota Research Institute, the Center for Battery Sustainability, the National Science Foundation, and the Department of Defense, and made use of the facilities of MIT.nano.

A new community for computational science and engineering

Tue, 09/16/2025 - 11:00am

For the past decade, MIT has offered doctoral-level study in computational science and engineering (CSE) exclusively through an interdisciplinary program designed for students applying computation within a specific science or engineering field.

As interest grew among students focused primarily on advancing CSE methodology itself, it became clear that a dedicated academic home for this group — students and faculty deeply invested in the foundations of computational science and engineering — was needed.

Now, with a stand-alone CSE PhD program, they have not only a space for fostering discovery in the cross-cutting methodological dimensions of computational science and engineering, but also a tight-knit community.

“This program recognizes the existence of computational science and engineering as a discipline in and of itself, so you don’t have to be doing this work through the lens of mechanical or chemical engineering, but instead in its own right,” says Nicolas Hadjiconstantinou, co-director of the Center for Computational Science and Engineering (CCSE).

Offered by CCSE and launched in 2023, the stand-alone program blends both coursework and a thesis, much like other MIT PhD programs, yet its methodological focus sets it apart from other Institute offerings.

“What’s unique about this program is that it’s not hosted by one specific department. The stand-alone program is, at its core, about computational science and cross-cutting methodology. We connect this research with people in a lot of different application areas. We have oceanographers, people doing materials science, students with a focus on aeronautics and astronautics, and more,” says outgoing co-director Youssef Marzouk, now the associate dean of the MIT Schwarzman College of Computing.

Expanding horizons

Hadjiconstantinou, the Quentin Berg Professor of Mechanical Engineering, and Marzouk, the Breene M. Kerr Professor of Aeronautics and Astronautics, have led the center’s efforts since 2018, and developed the program and curriculum together. The duo was intentional about crafting a program that fosters students’ individual research while also exposing them to all the field has to offer.

To expand students’ horizons and continue to build a collaborative community, the PhD in CSE program features two popular seminar series: weekly community seminars that focus primarily on internal speakers (current graduate students, postdocs, research scientists, and faculty), and monthly distinguished seminars in CSE, which are Institute-wide and bring external speakers from various institutions and industry roles.

“Something surprising about the program has been the seminars. I thought it would be the same people I see in my classes and labs, but it’s much broader than that,” says Emily Williams, a fourth-year PhD student and a Department of Energy Computational Science graduate fellow. “One of the most interesting seminars was around simulating fluid flow for biomedical applications. My background is in fluids, so I understand that part, but seeing it applied in a totally different domain than what I work in was eye-opening,” says Williams.

That seminar, “Astrophysical Fluid Dynamics at Exascale,” presented by James Stone, a professor in the School of Natural Sciences at the Institute for Advanced Study and at Princeton University, represented one of many opportunities for CSE students to engage with practitioners in small groups, gaining academic insight as well as a wider perspective on future career paths.

Designing for impact

The interdisciplinary PhD program served as a departure point from which Hadjiconstantinou and Marzouk created a new offering that was uniquely its own.

For Marzouk, that meant building the stand-alone program so it can continually grow and pivot to stay relevant as technology accelerates: “In my view, the vitality of this program is that science and engineering applications nowadays rest on computation in a really foundational way, whether it’s engineering design or scientific discovery. So it’s essential to perform research on the building blocks of this kind of computation. This research also has to be shaped by the way that we apply it so that scientists or engineers will actually use it,” Marzouk says.

The curriculum is structured around six core focus areas, or “ways of thinking,” that are fundamental to CSE:

  • Discretization and numerical methods for partial differential equations;
  • Optimization methods;
  • Inference, statistical computing, and data-driven modeling;
  • High performance computing, software engineering, and algorithms;
  • Mathematical foundations (e.g., functional analysis, probability); and
  • Modeling (i.e., a subject that treats computational modeling in any science or engineering discipline).

Students select and build their own thesis committee that consists of faculty from across MIT, not just those associated with CCSE. The combination of a curriculum that’s “modern and applicable to what employers are looking for in industry and academics,” according to Williams, and the ability to build their own group of engaged advisors allows for a level of specialization that’s hard to find elsewhere.

“Academically, I feel like this program is designed in such a flexible and interdisciplinary way. You have a lot of control in terms of which direction you want to go in,” says Rosen Yu, a PhD student. Yu’s research is focused on engineering design optimization, an interest she discovered during her first year of research at MIT with Professor Faez Ahmed. The CSE PhD was about to launch, and it became clear that her research interests skewed more toward computation than the existing mechanical engineering degree; it was a natural fit.

“At other schools, you often see just a pure computer science program or an engineering department with hardly any intersection. But this CSE program, I like to say it’s like a glue between these two communities,” says Yu.

That “glue” is strengthening, with more students matriculating each year, as well as Institute faculty and staff becoming affiliated with CSE. While the thesis topics of students range from Williams’ stochastic methods for model reduction of multiscale chaotic systems to scalable and robust GPU-based optimization for energy systems, the goal of the program remains the same: develop students and research that will make a difference.

“That's why MIT is an ‘Institute of Technology’ and not a ‘university.’ There’s always this question, no matter what you’re studying: what is it good for? Our students will go on to work in systems biology, simulators of climate models, electrification, hypersonic vehicles, and more, but the whole point is that their research is helping with something,” says Hadjiconstantinou.

How to build AI scaling laws for efficient LLM training and budget maximization

Tue, 09/16/2025 - 11:00am

When researchers are building large language models (LLMs), they aim to maximize performance under a particular computational and financial budget. Since training a model can amount to millions of dollars, developers need to be judicious with cost-impacting decisions about, for instance, the model architecture, optimizers, and training datasets before committing to a model. To anticipate the quality and accuracy of a large model’s predictions, practitioners often turn to scaling laws: using smaller, cheaper models to try to approximate the performance of a much larger target model. The challenge, however, is that there are thousands of ways to create a scaling law.

New work from MIT and MIT-IBM Watson AI Lab researchers addresses this by amassing and releasing a collection of hundreds of models and metrics concerning training and performance to approximate more than a thousand scaling laws. From this, the team developed a meta-analysis and guide for how to select small models and estimate scaling laws for different LLM model families, so that the budget is optimally applied toward generating reliable performance predictions.

“The notion that you might want to try to build mathematical models of the training process is a couple of years old, but I think what was new here is that most of the work that people had been doing before is saying, ‘can we say something post-hoc about what happened when we trained all of these models, so that when we’re trying to figure out how to train a new large-scale model, we can make the best decisions about how to use our compute budget?’” says Jacob Andreas, associate professor in the Department of Electrical Engineering and Computer Science and principal investigator with the MIT-IBM Watson AI Lab.

The research was recently presented at the International Conference on Machine Learning by Andreas, along with MIT-IBM Watson AI Lab researchers Leshem Choshen and Yang Zhang of IBM Research.

Extrapolating performance

No matter how you slice it, developing LLMs is an expensive endeavor: from decision-making regarding the numbers of parameters and tokens, data selection and size, and training techniques to determining output accuracy and tuning to the target applications and tasks. Scaling laws offer a way to forecast model behavior by relating a large model’s loss to the performance of smaller, less-costly models from the same family, avoiding the need to fully train every candidate. Mainly, the differences between the smaller models are the number of parameters and token training size. According to Choshen, elucidating scaling laws not only enables better pre-training decisions, but also democratizes the field by enabling researchers without vast resources to understand and build effective scaling laws.

The functional form of scaling laws is relatively simple, incorporating components from the small models that capture the number of parameters and their scaling effect, the number of training tokens and their scaling effect, and the baseline performance for the model family of interest. Together, they help researchers estimate a target large model’s performance loss; the smaller the loss, the better the target model’s outputs are likely to be.
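The article doesn’t spell out the exact parameterization, but a widely used version of this functional form (the Chinchilla-style law) makes the pieces concrete; the paper’s own form may differ:

```latex
% A common scaling-law parameterization, shown for illustration only.
% N = number of parameters, D = number of training tokens,
% E = baseline (irreducible) loss of the model family,
% and A, B, alpha, beta are constants fit to the small-model runs.
L(N, D) \approx E + \frac{A}{N^{\alpha}} + \frac{B}{D^{\beta}}
```

The first term is the baseline performance of the family; the second and third capture how loss falls as the parameter count and the number of training tokens grow.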

These laws allow research teams to weigh trade-offs efficiently and to test how best to allocate limited resources. They’re particularly useful for evaluating scaling of a certain variable, like the number of tokens, and for A/B testing of different pre-training setups.

In general, scaling laws aren’t new; however, in the field of AI, they emerged as models grew and costs skyrocketed. “It’s like scaling laws just appeared at some point in the field,” says Choshen. “They started getting attention, but no one really tested how good they are and what you need to do to make a good scaling law.” Further, scaling laws were themselves also a black box, in a sense. “Whenever people have created scaling laws in the past, it has always just been one model, or one model family, and one dataset, and one developer,” says Andreas. “There hadn’t really been a lot of systematic meta-analysis, as everybody is individually training their own scaling laws. So, [we wanted to know,] are there high-level trends that you see across those things?”

Building better

To investigate this, Choshen, Andreas, and Zhang created a large dataset. They collected LLMs from 40 model families, including Pythia, OPT, OLMO, LLaMA, Bloom, T5-Pile, ModuleFormer mixture-of-experts, GPT, and other families. These included 485 unique, pre-trained models, and where available, data about their training checkpoints, computational cost (FLOPs), training epochs, and the seed, along with 1.9 million performance metrics of loss and downstream tasks. The models differed in their architectures, weights, and so on. Using these models, the researchers fit over 1,000 scaling laws and compared their accuracy across architectures, model sizes, and training regimes, as well as testing how the number of models, inclusion of intermediate training checkpoints, and partial training impacted the predictive power of scaling laws to target models. They used measurements of absolute relative error (ARE): the absolute difference between the scaling law’s prediction and the observed loss of a large, trained model, expressed as a fraction of that observed loss. With this, the team compared the scaling laws, and after analysis, distilled practical recommendations for AI practitioners about what makes effective scaling laws.
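As a rough sketch of how such a fit and its ARE score work in practice, the Python example below fits the Chinchilla-style form shown earlier to synthetic small-model losses and scores the prediction for a larger model. The data, parameter values, and fitting routine are assumptions for illustration, not the paper’s setup.

```python
# Illustrative sketch (assumed functional form and synthetic data, not the
# paper's code): fit a Chinchilla-style scaling law to small-model losses,
# then score its prediction for a held-out large model with absolute
# relative error (ARE).
import numpy as np
from scipy.optimize import curve_fit

def scaling_law(X, E, A, B, alpha, beta):
    N, D = X                                  # parameter count, training tokens
    return E + A / N**alpha + B / D**beta

rng = np.random.default_rng(0)

# Synthetic "small model" runs drawn from a known law plus a little noise,
# standing in for measured losses from one model family.
N = np.array([70e6, 160e6, 410e6, 1.0e9, 2.8e9, 6.9e9])
D = 20 * N                                    # tokens scaled with model size
true_params = (1.7, 400.0, 1100.0, 0.34, 0.28)
loss = scaling_law((N, D), *true_params) + 0.005 * rng.standard_normal(N.size)

popt, _ = curve_fit(scaling_law, (N, D), loss,
                    p0=(1.5, 300.0, 900.0, 0.30, 0.30), maxfev=50_000)

# Predict a larger target model, then score against its "observed" loss.
N_big, D_big = 70e9, 1.4e12
pred = scaling_law((N_big, D_big), *popt)
observed = scaling_law((N_big, D_big), *true_params)
are = abs(pred - observed) / observed         # absolute relative error
print(f"predicted {pred:.3f} vs observed {observed:.3f}; ARE = {100 * are:.2f}%")
```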

Their shared guidelines walk developers through the steps and options to consider, and set expectations. First, it’s critical to decide on a compute budget and target model accuracy. The team found that 4 percent ARE is about the best achievable accuracy one could expect due to random seed noise, but up to 20 percent ARE is still useful for decision-making. The researchers identified several factors that improve predictions, like including intermediate training checkpoints, rather than relying only on final losses; this made scaling laws more reliable. However, data from very early in training (before roughly 10 billion tokens) are noisy, reduce accuracy, and should be discarded. They recommend prioritizing training more models across a spread of sizes to improve robustness of the scaling law’s prediction, not just larger models; selecting five models provides a solid starting point.

Generally, including larger models improves prediction, but costs can be saved by partially training the target model to about 30 percent of its dataset and using that for extrapolation. If the budget is considerably constrained, developers should consider training one smaller model within the target model family and borrow scaling law parameters from a model family with similar architecture; however, this may not work for encoder–decoder models. Lastly, the MIT-IBM research group found that when scaling laws were compared across model families, there was strong correlation between two sets of hyperparameters, meaning that three of the five hyperparameters explained nearly all of the variation and could likely capture the model behavior. Together, these guidelines provide a systematic approach to making scaling law estimation more efficient, reliable, and accessible for AI researchers working under varying budget constraints.

Several surprises arose during this work: small models partially trained are still very predictive, and further, the intermediate training stages from a fully trained model can be used (as if they are individual models) for prediction of another target model. “Basically, you don’t pay anything in the training, because you already trained the full model, so the half-trained model, for instance, is just a byproduct of what you did,” says Choshen. Another feature Andreas pointed out was that, when aggregated, the variability across model families and different experiments jumped out and was noisier than expected. Unexpectedly, the researchers found that scaling laws fit on large models can be used to predict the performance of smaller models. Other research in the field has hypothesized that smaller models were a “different beast” compared to large ones; however, Choshen disagrees. “If they’re totally different, they should have shown totally different behavior, and they don’t.”

While this work focused on model training time, the researchers plan to extend their analysis to model inference. Andreas says it’s not, “how does my model get better as I add more training data or more parameters, but instead as I let it think for longer, draw more samples. I think there are definitely lessons to be learned here about how to also build predictive models of how much thinking you need to do at run time.” He says the theory of inference time scaling laws might become even more critical because, “it’s not like I'm going to train one model and then be done. [Rather,] it’s every time a user comes to me, they’re going to have a new query, and I need to figure out how hard [my model needs] to think to come up with the best answer. So, being able to build those kinds of predictive models, like we’re doing in this paper, is even more important.”

This research was supported, in part, by the MIT-IBM Watson AI Lab and a Sloan Research Fellowship. 

MIT geologists discover where energy goes during an earthquake

Tue, 09/16/2025 - 12:00am

The ground-shaking that an earthquake generates is only a fraction of the total energy that a quake releases. A quake can also generate a flash of heat, along with a domino-like fracturing of underground rocks. But exactly how much energy goes into each of these three processes is exceedingly difficult, if not impossible, to measure in the field.

Now MIT geologists have traced the energy that is released by “lab quakes” — miniature analogs of natural earthquakes that are carefully triggered in a controlled laboratory setting. For the first time, they have quantified the complete energy budget of such quakes, in terms of the fraction of energy that goes into heat, shaking, and fracturing.

They found that only about 10 percent of a lab quake’s energy causes physical shaking. An even smaller fraction — less than 1 percent — goes into breaking up rock and creating new surfaces. The overwhelming portion of a quake’s energy — on average 80 percent — goes into heating up the immediate region around a quake’s epicenter. In fact, the researchers observed that a lab quake can produce a temperature spike hot enough to melt surrounding material and turn it briefly into liquid melt.

The geologists also found that a quake’s energy budget depends on a region’s deformation history — the degree to which rocks have been shifted and disturbed by previous tectonic motions. The fractions of quake energy that produce heat, shaking, and rock fracturing can shift depending on what the region has experienced in the past.

“The deformation history — essentially what the rock remembers — really influences how destructive an earthquake could be,” says Daniel Ortega-Arroyo, a graduate student in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “That history affects a lot of the material properties in the rock, and it dictates to some degree how it is going to slip.”

The team’s lab quakes are a simplified analog of what occurs during a natural earthquake. Down the road, their results could help seismologists predict the likelihood of earthquakes in regions that are prone to seismic events. For instance, if scientists have an idea of how much shaking a quake generated in the past, they might be able to estimate the degree to which the quake’s energy also affected rocks deep underground by melting or breaking them apart. This in turn could reveal how much more or less vulnerable the region is to future quakes.

“We could never reproduce the complexity of the Earth, so we have to isolate the physics of what is happening, in these lab quakes,” says Matěj Peč, associate professor of geophysics at MIT. “We hope to understand these processes and try to extrapolate them to nature.”

Peč (pronounced “Peck”) and Ortega-Arroyo reported their results on Aug. 28 in the journal AGU Advances. Their MIT co-authors are Hoagy O’Ghaffari and Camilla Cattania, along with Zheng Gong and Roger Fu at Harvard University and Markus Ohl and Oliver Plümper at Utrecht University in the Netherlands.

Under the surface

Earthquakes are driven by energy that is stored up in rocks over millions of years. As tectonic plates slowly grind against each other, stress accumulates through the crust. When rocks are pushed past their material strength, they can suddenly slip along a narrow zone, creating a geologic fault. As rocks slip on either side of the fault, they produce seismic waves that ripple outward and upward.

We perceive an earthquake’s energy mainly in the form of ground shaking, which can be measured using seismometers and other ground-based instruments. But the other two major forms of a quake’s energy — heat and underground fracturing — are largely inaccessible with current technologies.

“Unlike the weather, where we can see daily patterns and measure a number of pertinent variables, it’s very hard to do that very deep in the Earth,” Ortega-Arroyo says. “We don’t know what’s happening to the rocks themselves, and the timescales over which earthquakes repeat within a fault zone are on the century-to-millennia timescales, making any sort of actionable forecast challenging.”

To get an idea of how an earthquake’s energy is partitioned, and how that energy budget might affect a region’s seismic risk, he and Peč went into the lab. Over the last seven years, Peč’s group at MIT has developed methods and instrumentation to simulate seismic events, at the microscale, in an effort to understand how earthquakes at the macroscale may play out.

“We are focusing on what’s happening on a really small scale, where we can control many aspects of failure and try to understand it before we can do any scaling to nature,” Ortega-Arroyo says.

Microshakes

For their new study, the team generated miniature lab quakes that simulate a seismic slipping of rocks along a fault zone. They worked with small samples of granite, which are representative of rocks in the seismogenic layer — the geologic region in the continental crust where earthquakes typically originate. They ground up the granite into a fine powder and mixed the crushed granite with a much finer powder of magnetic particles, which they used as a sort of internal temperature gauge. (A particle’s magnetic field strength will change in response to a fluctuation in temperature.)

The researchers placed samples of the powdered granite — each about 10 square millimeters and 1 millimeter thick — between two small pistons and wrapped the ensemble in a gold jacket. They then applied a strong magnetic field to orient the powder’s magnetic particles in the same initial direction and to the same field strength. They reasoned that any change in the particles’ orientation and field strength afterward should be a sign of how much heat that region experienced as a result of any seismic event.

Once samples were prepared, the team placed them one at a time into a custom-built apparatus that the researchers tuned to apply steadily increasing pressure, similar to the pressures that rocks experience in the Earth’s seismogenic layer, about 10 to 20 kilometers below the surface. They used custom-made piezoelectric sensors, developed by co-author O’Ghaffari, which they attached to either end of a sample to measure any shaking that occurred as they increased the stress on the sample.

They observed that at certain stresses, some samples slipped, producing a microscale seismic event similar to an earthquake. By analyzing the magnetic particles in the samples after the fact, they obtained an estimate of how much each sample was temporarily heated — a method developed in collaboration with Roger Fu’s lab at Harvard University. They also estimated the amount of shaking each sample experienced, using measurements from the piezoelectric sensor and numerical models. The researchers also examined each sample under the microscope, at different magnifications, to assess how the size of the granite grains changed — whether and how many grains broke into smaller pieces, for instance.

From all these measurements, the team was able to estimate each lab quake’s energy budget. On average, they found that about 80 percent of a quake’s energy goes into heat, while 10 percent generates shaking, and less than 1 percent goes into rock fracturing, or creating new, smaller particle surfaces. 

“In some instances we saw that, close to the fault, the sample went from room temperature to 1,200 degrees Celsius in a matter of microseconds, and then immediately cooled down once the motion stopped,” Ortega-Arroyo says. “And in one sample, we saw the fault move by about 100 microns, which implies slip velocities essentially about 10 meters per second. It moves very fast, though it doesn’t last very long.”

The researchers suspect that similar processes play out in actual, kilometer-scale quakes.

“Our experiments offer an integrated approach that provides one of the most complete views of the physics of earthquake-like ruptures in rocks to date,” Peč says. “This will provide clues on how to improve our current earthquake models and natural hazard mitigation.”

This research was supported, in part, by the National Science Foundation.

How to get your business into the flow

Tue, 09/16/2025 - 12:00am

In the late 1990s, a Harley-Davidson executive named Donald Kieffer became general manager of a company engine plant near Milwaukee. The iconic motorcycle maker had forged a celebrated comeback, and Kieffer, who learned manufacturing on the shop floor, had been part of it. Now Kieffer wanted to make his facility better. So he arranged for a noted Toyota executive, Hajime Oba, to pay a visit.

The meeting didn’t go as Kieffer expected. Oba walked around the plant for 45 minutes, diagrammed the setup on a whiteboard, and suggested one modest change. As a high-ranking manager, Kieffer figured he had to make far-reaching upgrades. Instead, Oba asked him, “What is the problem you are trying to solve?”

Oba’s point was subtle. Harley-Davidson had a good plant that could get better, but not by imposing grand, top-down plans. The key was to fix workflow issues the employees could identify. Even a small fix can have large effects, and, anyway, a modestly useful change is better than a big, formulaic makeover that derails things. So Kieffer took Oba’s prompt and started making specific, useful changes. 

“Organizations are dynamic places, and when we try to impose a strict, static structure on them, we drive all that dynamism underground,” says MIT professor of management Nelson Repenning. “And the waste and chaos it creates is 100 times more expensive than people anticipate.”

Now Kieffer and Repenning have written a book about flexible, sensible organizational improvement, “There’s Got to Be a Better Way,” published by PublicAffairs. They call their approach “dynamic work design,” which aims to help firms refine their workflow — and to stop people from making it worse through overconfident, cookie-cutter prescriptions.

“So much of management theory presumes we can predict the future accurately, including our impact on it,” Repenning says. “And everybody knows that’s not true. Yet we go along with the fiction. The premise underlying dynamic work design is, if we accept that we can’t predict the future perfectly, we might design the world differently.”

Kieffer adds: “Our principles address how work is designed. Not how leaders have to act, but how you design human work, and drive changes.”

One collaboration, five principles

This book is the product of a long collaboration: In 1996, Kieffer first met Repenning, who was then a new MIT faculty member, and they soon recognized they thought similarly about managing work. By 2008, Kieffer also became a lecturer at the MIT Sloan School of Management, where Repenning is now a distinguished professor of system dynamics and organization studies.

The duo began teaching executive education classes together at MIT Sloan, often working with firms tackling tough problems. In the 2010s, they worked extensively with BP executives after the Deepwater Horizon accident, finding ways to combine safety priorities with other operations.

Repenning is an expert on system dynamics, an MIT-developed field emphasizing how parts of a system interact. In a firm, making isolated changes may throw the system as a whole further off kilter. Instead, managers need to grasp the larger dynamics — and recognize that a firm’s problems are not usually its people, since most employees perform similarly when burdened by a faulty system.

Whereas many touted management systems prescribe fixed actions in advance — like culling the bottom 10 percent of your employees annually — Repenning and Kieffer believe a firm should study itself empirically and develop improvements from there.

“Managers lose touch with how work actually gets done,” Kieffer says. “We bring managers in touch with real-time work, to see the problems people have, to help them solve it and learn new ways to work.”

Over time, Repenning and Kieffer have codified their ideas about work design into five principles:

  • Solve the right problem: Use empiricism to develop a blame-free statement of issues to address;
  • Structure for discovery: Allow workers to see how their work fits into the bigger picture, and to help improve things;
  • Connect the human chain: Make sure the right information moves from one person to the next;
  • Regulate for flow: New tasks should only enter a system when there is capacity for them to be handled; and
  • Visualize the work: Create a visual method — think of a whiteboard with sticky notes — for mapping work operations.

No mugs, no t-shirts — just open your eyes

Applying dynamic work design to any given firm may sound simple, but Repenning and Kieffer note that many forces make it hard to implement. For instance, firm leaders may be tempted to opt for technology-based solutions when there are simpler, cheaper fixes available.

Indeed, “resorting to technology before fixing the underlying design risks wasting money and embedding the original problem even deeper in the organization,” they write in the book.

Moreover, dynamic work design is not itself a solution, but a way of trying to find a specific solution.

“One thing that keeps Don and I up at night is a CEO reading our book and thinking, ‘We’re going to be a dynamic work design company,’ and printing t-shirts and coffee mugs and holding two-day conferences where everyone signs the dynamic work design poster, and evaluating everyone every week on how dynamic they are,” Repenning says. “Then you’re being awfully static.”

After all, firms change, and their needs change. Repenning and Kieffer want managers to keep studying their firm’s workflow, so they can keep current with their needs. In fairness, a certain number of managers do this.

“Most people have experienced fleeting moments of good work design,” Repenning says. Building on that, he says, managers and employees can keep driving a process of improvement that is realistic and logical.

“Start small,” he adds. “Pick one problem you can work on in a couple of weeks, and solve that. Most cases, with open eyes, there’s low-hanging fruit. You find the places you can win, and change incrementally, rather than all at once. For senior executives, this is hard. They are used to doing big things. I tell our executive ed students, it’s going to feel uncomfortable at the beginning, but this is a much more sustainable path to progress.”

Climate Action Learning Lab helps state and local leaders identify and implement effective climate mitigation strategies

Mon, 09/15/2025 - 10:00am

This spring, J-PAL North America — a regional office of MIT’s Abdul Latif Jameel Poverty Action Lab (J-PAL) — launched its first ever Learning Lab, centered on climate action. The Learning Lab convened a cohort of government leaders who are enacting a broad range of policies and programs to support the transition to a low-carbon economy. Through the Learning Lab, participants explored how to embed randomized evaluation into promising solutions to determine how to maximize changes in behavior — a strategy that can help advance decarbonization in the most cost-effective ways to benefit all communities. The inaugural cohort included more than 25 participants from state agencies and cities, including the Massachusetts Clean Energy Center, the Minnesota Housing Finance Agency, and the cities of Lincoln, Nebraska; Newport News, Virginia; Orlando, Florida; and Philadelphia.

“State and local governments have demonstrated tremendous leadership in designing and implementing decarbonization policies and climate action plans over the past few years,” said Peter Christensen, scientific advisor of the J-PAL North America Environment, Energy, and Climate Change Sector. “And while these are informed by scientific projections on which programs and technologies may effectively and equitably reduce emissions, the projection methods involve a lot of assumptions. It can be challenging for governments to determine whether their programs are actually achieving the expected level of emissions reductions that we desperately need. The Climate Action Learning Lab was designed to support state and local governments in addressing this need — helping them to rigorously evaluate their programs to detect their true impact.”

From May to July, the Learning Lab offered a suite of resources for participants to leverage rigorous evaluation to identify effective and equitable climate mitigation solutions. Offerings included training lectures, one-on-one strategy sessions, peer learning engagements, and researcher collaboration. State and local leaders built skills and knowledge in evidence generation and use, reviewed and applied research insights to their own programmatic areas, and identified priority research questions to guide evidence-building and decision-making practices. Programs prioritized for evaluation covered topics such as compliance with building energy benchmarking policies, take-up rates of energy-efficient home improvement programs such as heat pumps and Solar for All, and scoring criteria for affordable housing development programs.

“We appreciated the chance to learn about randomized evaluation methodology, and how this impact assessment tool could be utilized in our ongoing climate action planning. With so many potential initiatives to pursue, this approach will help us prioritize our time and resources on the most effective solutions,” said Anna Shugoll, program manager at the City of Philadelphia’s Office of Sustainability.

This phase of the Learning Lab was possible thanks to grant funding from J-PAL North America’s longtime supporter and collaborator Arnold Ventures. The work culminated in an in-person summit in Cambridge, Massachusetts, on July 23, where Learning Lab participants delivered a presentation on their jurisdiction’s priority research questions and strategic evaluation plans. They also connected with researchers in the J-PAL network to further explore impact evaluation opportunities for promising decarbonization programs.

“The Climate Action Learning Lab has helped us identify research questions for some of the City of Orlando’s deep decarbonization goals. J-PAL staff, along with researchers in the J-PAL network, worked hard to bridge the gap between behavior change theory and the applied, tangible benefits that we achieve through rigorous evaluation of our programs,” said Brittany Sellers, assistant director for sustainability, resilience and future-ready for Orlando. “Whether we’re discussing an energy-efficiency policy for some of the biggest buildings in the City of Orlando or expanding [electric vehicle] adoption across the city, it’s been very easy to communicate some of these high-level research concepts and what they can help us do to actually pursue our decarbonization goals.”

The next phase of the Climate Action Learning Lab will center on building partnerships between jurisdictions and researchers in the J-PAL network to explore the launch of randomized evaluations, deepening the community of practice among current cohort members, and cultivating a broad culture of evidence building and use in the climate space. 

“The Climate Action Learning Lab provided a critical space for our city to collaborate with other cities and states seeking to implement similar decarbonization programs, as well as with researchers in the J-PAL network to help rigorously evaluate these programs,” said Daniel Collins, innovation team director at the City of Newport News. “We look forward to further collaboration and opportunities to learn from evaluations of our mitigation efforts so we, as a city, can better allocate resources to the most effective solutions.”

The Climate Action Learning Lab is one of several offerings under the J-PAL North America Evidence for Climate Action Project. The project’s goal is to convene an influential network of researchers, policymakers, and practitioners to generate rigorous evidence to identify and advance equitable, high-impact policy solutions to climate change in the United States. In addition to the Learning Lab, J-PAL North America will launch a climate special topic request for proposals this fall to fund research on climate mitigation and adaptation initiatives. J-PAL will welcome applications from both research partnerships formed through the Learning Lab as well as other eligible applicants.

Local government leaders, researchers, potential partners, or funders committed to advancing climate solutions that work, and who want to learn more about the Evidence for Climate Action Project, may email na_eecc@povertyactionlab.org or subscribe to the J-PAL North America Climate Action newsletter.

How MIT’s Steel Research Group led to a groundbreaking national materials initiative

Mon, 09/15/2025 - 10:00am

Traditionally, developing new materials for cutting-edge applications — such as SpaceX’s Raptor engine — has taken a decade or more. But thanks to a breakthrough technology pioneered by an MIT research group now celebrating its 40th year, a key material for the Raptor was delivered in just a few years. The same innovation has accelerated the development of high-performance materials for the Apple Watch, U.S. Air Force jets, and Formula One race cars.

The MIT Steel Research Group (SRG) also led to a national initiative that “has already sparked a paradigm shift in how new materials are discovered, developed, and deployed,” according to a White House story describing the Materials Genome Initiative’s first five years.

Gregory B. Olson founded the SRG in 1985 with the goal of using computers to accelerate the hunt for new materials by plumbing databases of those materials’ fundamental properties. It was the beginning of a new field: computational materials design.

At the time, “nobody knew whether we could really do this,” remembers Olson, a professor of the practice in the Department of Materials Science and Engineering. “I have some documented evidence of agencies resisting the entire concept because, in their opinion, a material could never be designed.”

Eventually, however, Olson and colleagues showed that the approach worked. One of the most important results: In 2011 President Barack Obama made a speech “essentially announcing that this technology is real and it’s what everybody should be doing,” says Olson, who is also affiliated with the Materials Research Laboratory. In the speech, Obama launched the Materials Genome Initiative (MGI).

The MGI is developing “a fundamental database of the parameters that direct the assembly of the structures of materials,” much like the Human Genome Project “is a database that directs the assembly of the structures of life,” says Olson.

The goal is to use the MGI database to discover, manufacture, and deploy advanced materials twice as fast, and at a fraction of the cost, compared to traditional methods, according to the MGI website.

At MIT, the SRG continues to focus on steel, “because it’s the material [the world has] studied the longest, so we have the deepest fundamental understanding of its properties,” says Olson, project principal investigator.

The Cybersteels Project, funded by the Office of Naval Research, brings together eight MIT faculty who are working to expand our knowledge of steel, eventually adding their data to the MGI. Major areas of study include the boundaries between the microscopic grains that make up a steel and the economic modeling of new steels.

Concludes Olson, “It has been tremendously satisfying to see how this technology has really blossomed in the hands of leading corporations and led to a national initiative to take it even further.”

Machine-learning tool gives doctors a more detailed 3D picture of fetal health

Mon, 09/15/2025 - 10:00am

For pregnant women, ultrasounds are an informative (and sometimes necessary) procedure. They typically produce two-dimensional black-and-white scans of fetuses that can reveal key insights, including biological sex, approximate size, and abnormalities like heart issues or cleft lip. If your doctor wants a closer look, they may use magnetic resonance imaging (MRI), which uses magnetic fields to capture images that can be combined to create a 3D view of the fetus.

MRIs aren’t a catch-all, though; the 3D scans are difficult for doctors to interpret well enough to diagnose problems because our visual system is not accustomed to processing 3D volumetric scans (in other words, a wrap-around look that also shows us the inner structures of a subject). Enter machine learning, which could help model a fetus’s development more clearly and accurately from data — although no such algorithm has been able to model their somewhat random movements and various body shapes.

That is, until a new approach called “Fetal SMPL” from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), Boston Children’s Hospital (BCH), and Harvard Medical School presented clinicians with a more detailed picture of fetal health. It was adapted from “SMPL” (Skinned Multi-Person Linear model), a 3D model developed in computer graphics to capture adult body shapes and poses, as a way to represent fetal body shapes and poses accurately. Fetal SMPL was then trained on 20,000 MRI volumes to predict the location and size of a fetus and create sculpture-like 3D representations. Inside each model is a skeleton with 23 articulated joints called a “kinematic tree,” which the system uses to pose and move like the fetuses it saw during training.

The extensive, real-world scans that Fetal SMPL learned from helped it develop pinpoint accuracy. Imagine stepping into a stranger’s footprint while blindfolded, and not only does it fit perfectly, but you correctly guess what shoe they wore — similarly, the tool closely matched the position and size of fetuses in MRI frames it hadn’t seen before. Fetal SMPL was only misaligned by an average of about 3.1 millimeters, a gap smaller than a single grain of rice.

The approach could enable doctors to precisely measure things like the size of a baby’s head or abdomen and compare these metrics with healthy fetuses at the same age. Fetal SMPL has demonstrated its clinical potential in early tests, where it achieved accurate alignment results on a small group of real-world scans.

“It can be challenging to estimate the shape and pose of a fetus because they’re crammed into the tight confines of the uterus,” says lead author, MIT PhD student, and CSAIL researcher Yingcheng Liu SM ’21. “Our approach overcomes this challenge using a system of interconnected bones under the surface of the 3D model, which represent the fetal body and its motions realistically. Then, it relies on a coordinate descent algorithm to make a prediction, essentially alternating between guessing pose and shape from tricky data until it finds a reliable estimate.”
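The alternating loop Liu describes is a form of block coordinate descent. The toy Python sketch below illustrates the idea on a stand-in objective; the mismatch function, dimensions, and optimizer are placeholders, not the actual Fetal SMPL implementation.

```python
# Toy illustration (not the Fetal SMPL code) of alternating coordinate descent:
# hold "shape" fixed while optimizing "pose", then hold "pose" fixed while
# optimizing "shape", and repeat. A simple quadratic with a weak coupling term
# stands in for the real model-to-MRI alignment error.
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)
target_pose = rng.normal(size=23 * 3)    # one 3D rotation per articulated joint
target_shape = rng.normal(size=10)       # low-dimensional body-shape coefficients

def mismatch(pose, shape):
    """Placeholder for the real image-to-model alignment error."""
    return (np.sum((pose - target_pose) ** 2)
            + np.sum((shape - target_shape) ** 2)
            + 0.1 * np.sum(pose[:10] * shape))   # coupling between the blocks

pose = np.zeros(23 * 3)
shape = np.zeros(10)
for sweep in range(3):                   # a few alternating sweeps suffice here
    pose = minimize(lambda p: mismatch(p, shape), pose).x    # pose step
    shape = minimize(lambda s: mismatch(pose, s), shape).x   # shape step
    print(f"sweep {sweep + 1}: mismatch = {mismatch(pose, shape):.4f}")
```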

In utero

Fetal SMPL was tested on shape and pose accuracy against the closest baseline the researchers could find: a system that models infant growth called “SMIL.” Since babies out of the womb are larger than fetuses, the team shrank those models by 75 percent to level the playing field.

The system outperformed this baseline on a dataset of fetal MRIs between the gestational ages of 24 and 37 weeks taken at Boston Children’s Hospital. Fetal SMPL was able to recreate real scans more precisely, as its models closely lined up with real MRIs.

The method was efficient at lining up its models to images, needing only three iterations to arrive at a reasonable alignment. In an experiment that counted how many incorrect guesses Fetal SMPL had made before arriving at a final estimate, its accuracy plateaued from the fourth step onward.

The researchers have just begun testing their system in the real world, where it produced similarly accurate models in initial clinical tests. While these results are promising, the team notes that they’ll need to apply their results to larger populations, different gestational ages, and a variety of disease cases to better understand the system’s capabilities.

Only skin deep

Liu also notes that their system only helps analyze what doctors can see on the surface of a fetus, since only bone-like structures lie beneath the skin of the models. To better monitor babies’ internal health, such as liver, lung, and muscle development, the team intends to make their tool volumetric, modeling the fetus’s inner anatomy from scans. Such upgrades would make the models more human-like, but the current version of Fetal SMPL already presents a precise (and unique) upgrade to 3D fetal health analysis.

“This study introduces a method specifically designed for fetal MRI that effectively captures fetal movements, enhancing the assessment of fetal development and health,” says Kiho Im, Harvard Medical School associate professor of pediatrics and staff scientist in the Division of Newborn Medicine at BCH’s Fetal-Neonatal Neuroimaging and Developmental Science Center. Im, who was not involved with the paper, adds that this approach “will not only improve the diagnostic utility of fetal MRI, but also provide insights into the early functional development of the fetal brain in relation to body movements.”

“This work reaches a pioneering milestone by extending parametric surface human body models for the earliest shapes of human life: fetuses,” says Sergi Pujades, an associate professor at University Grenoble Alpes, who wasn’t involved in the research. “It allows us to detangle the shape and motion of a human, which has already proven to be key in understanding how adult body shape relates to metabolic conditions and how infant motion relates to neurodevelopmental disorders. In addition, the fact that the fetal model stems from, and is compatible with, the adult (SMPL) and infant (SMIL) body models, will allow us to study human shape and pose evolution over long periods of time. This is an unprecedented opportunity to further quantify how human shape growth and motion are affected by different conditions.”

Liu wrote the paper with three CSAIL members: Peiqi Wang SM ’22, PhD ’25; MIT PhD student Sebastian Diaz; and senior author Polina Golland, the Sunlin and Priscilla Chou Professor of Electrical Engineering and Computer Science, a principal investigator in MIT CSAIL, and the leader of the Medical Vision Group. BCH assistant professor of pediatrics Esra Abaci Turk, Inria researcher Benjamin Billot, and Harvard Medical School professor of pediatrics and professor of radiology Patricia Ellen Grant are also authors on the paper. This work was supported, in part, by the National Institutes of Health and the MIT CSAIL-Wistron Program.

The researchers will present their work at the International Conference on Medical Image Computing and Computer Assisted Intervention (MICCAI) in September.

3 Questions: On humanizing scientists

Mon, 09/15/2025 - 12:00am

Alan Lightman has spent much of his authorial career writing about scientific discovery, the boundaries of knowledge, and remarkable findings from the world of research. His latest book “The Shape of Wonder,” co-authored with the lauded English astrophysicist Martin Rees and published this month by Penguin Random House, offers both profiles of scientists and an examination of scientific methods, humanizing researchers and making an affirmative case for the value of their work. Lightman is a professor of the practice of the humanities in MIT’s Comparative Media Studies/Writing Program; Rees is a fellow of Trinity College at Cambridge University and the UK’s Astronomer Royal. Lightman talked with MIT News about the new volume.

Q: What is your new book about?

A: The book tries to show who scientists are and how they think. Martin and I wrote it to address several problems. One is mistrust in scientists and their institutions, which is a worldwide problem. We saw this problem illustrated during the pandemic. That mistrust I think is associated with a belief by some people that scientists and their institutions are part of the elite establishment, a belief that is one feature of the populist movement worldwide. In recent years there’s been considerable misinformation about science. And, many people don’t know who scientists are.

Another thing, which is very important, is a lack of understanding about evidence-based critical thinking. When scientists get new data and information, their theories and recommendations change. But this process, part of the scientific method, is not well-understood outside of science. Those are issues we address in the book. We have profiles of a number of scientists and show them as real people, most of whom work for the benefit of society or out of intellectual curiosity, rather than being driven by political or financial interests. We try to humanize scientists while showing how they think.

Q: You profile some well-known figures in the book, as well as some lesser-known scientists. Who are some of the people you feature in it?

A: One person is a young neuroscientist, Lace Riggs, who works at the McGovern Institute for Brain Research at MIT. She grew up in difficult circumstances in southern California, decided to go into science, got a PhD in neuroscience, and works as a postdoc researching the effect of different compounds on the brain and how that might lead to drugs to combat certain mental illnesses. Another very interesting person is Magdalena Lenda, an ecologist in Poland. When she was growing up, her father sold fish for a living, and took her out in the countryside and would identify plants, which got her interested in ecology. She works on stopping invasive species. The intention is to talk about people’s lives and interests, and show them as full people.

While humanizing scientists in the book, we show how critical thinking works in science. By the way, critical thinking is not owned by scientists. Accountants, doctors, and many others use critical thinking. I’ve talked to my car mechanic about what kinds of problems come into the shop. People don’t know what causes the check engine light to go on — the catalytic converter, corroded spark plugs, etc. — so mechanics often start from the simplest and cheapest possibilities and go to the next potential problem, down the list. That’s a perfect example of critical thinking. In science, it is checking your ideas and hypotheses against data, then updating them if needed.

Q: Are there common threads linking together the many scientists you feature in the book?

A: There are common threads, but also no single scientific stereotype. There’s a wide range of personalities in the sciences. But one common thread is that all the scientists I know are passionate about what they’re doing. They’re working for the benefit of society, and out of sheer intellectual curiosity. That links all the people in the book, as well as other scientists I’ve known. I wish more people in America would realize this: Scientists are working for their overall benefit. Science is a great success story. Thanks to scientific advances, since 1900 the expected lifespan in the U.S. has increased from a little more than 45 years to almost 80 years, in just a century, largely due to our ability to combat diseases. What’s more vital than your lifespan?

This book is just a drop in the bucket in terms of what needs to be done. But we all do what we can. 

Lidar helps gas industry find methane leaks and avoid costly losses

Fri, 09/12/2025 - 10:30am

Each year, the U.S. energy industry loses an estimated 3 percent of its natural gas production, valued at $1 billion in revenue, to leaky infrastructure. Escaping invisibly into the air, these methane gas plumes can now be detected, imaged, and measured using a specialized lidar flown on small aircraft.

This lidar is a product of Bridger Photonics, a leading methane-sensing company based in Bozeman, Montana. MIT Lincoln Laboratory developed the lidar's optical-power amplifier, a key component of the system, by advancing its existing slab-coupled optical waveguide amplifier (SCOWA) technology. The methane-detecting lidar is 10 to 50 times more capable than other airborne remote sensors on the market.

"This drone-capable sensor for imaging methane is a great example of Lincoln Laboratory technology at work, matched with an impactful commercial application," says Paul Juodawlkis, who pioneered the SCOWA technology with Jason Plant in the Advanced Technology Division and collaborated with Bridger Photonics to enable its commercial application.

Today, the product is being adopted widely, including by nine of the top 10 natural gas producers in the United States. "Keeping gas in the pipe is good for everyone — it helps companies bring the gas to market, improves safety, and protects the outdoors," says Pete Roos, founder and chief innovation officer at Bridger. "The challenge with methane is that you can't see it. We solved a fundamental problem with Lincoln Laboratory."

A laser source "miracle"

In 2014, the Advanced Research Projects Agency-Energy (ARPA-E) was seeking a cost-effective and precise way to detect methane leaks. Highly flammable and a potent pollutant, methane gas (the primary constituent of natural gas) moves through the country via a vast and intricate pipeline network. Bridger submitted a research proposal in response to ARPA-E's call and was awarded funding to develop a small, sensitive aerial lidar.

Aerial lidar sends laser light down to the ground and measures the light that reflects back to the sensor. Such lidar is often used for producing detailed topography maps. Bridger's idea was to merge topography mapping with gas measurements. Methane absorbs light at the infrared wavelength of 1.65 microns. Operating a laser at that wavelength could allow a lidar to sense the invisible plumes and measure leak rates.
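
The measurement principle can be sketched with the Beer-Lambert law: comparing the ground return at a wavelength methane absorbs against a nearby reference wavelength reveals how much gas sits along the beam path. The short Python sketch below illustrates only that general differential-absorption idea; the function name, parameters, and cross-section value are hypothetical placeholders, not Bridger's actual processing.

```python
import numpy as np

def methane_column(p_on, p_off, e_on, e_off, delta_sigma_cm2):
    """Estimate a path-integrated methane column (molecules/cm^2) from a
    two-wavelength absorption lidar return via the Beer-Lambert law.

    p_on, p_off     -- received ground-return power at the "on" wavelength
                       (near 1.65 microns, absorbed by methane) and at a
                       nearby "off" reference wavelength
    e_on, e_off     -- transmitted pulse energies at the two wavelengths
    delta_sigma_cm2 -- differential absorption cross-section of methane (cm^2)

    The factor of 2 accounts for the round trip from aircraft to ground and back.
    """
    ratio = (p_off / e_off) / (p_on / e_on)   # extra attenuation on the absorbed line
    return np.log(ratio) / (2.0 * delta_sigma_cm2)

# Illustrative numbers only: a 5 percent dip in the normalized "on" return
column = methane_column(p_on=0.95, p_off=1.0, e_on=1.0, e_off=1.0,
                        delta_sigma_cm2=1.3e-20)
```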

"This laser source was one of the hardest parts to get right. It's a key element," Roos says. His team needed a laser source with specific characteristics to emit powerfully enough at a wavelength of 1.65 microns to work from useful altitudes. Roos recalled the ARPA-E program manager saying they needed a "miracle" to pull it off.

Through mutual connections, Bridger was introduced to a Lincoln Laboratory technology for optically amplifying laser signals: the SCOWA. When Bridger contacted Juodawlkis and Plant, they had been working on SCOWAs for a decade. Although they had never investigated SCOWAs at 1.65 microns, they thought that the fundamental technology could be extended to operate at that wavelength. Lincoln Laboratory received ARPA-E funding to develop 1.65-micron SCOWAs and provide prototype units to Bridger for incorporation into their gas-mapping lidar systems.

"That was the miracle we needed," Roos says.

A legacy in laser innovation

Lincoln Laboratory has long been a leader in semiconductor laser and optical emitter technology. In 1962, the laboratory was among the first to demonstrate the diode laser, which is now the most widespread laser used globally. Several spinout companies, such as Lasertron and TeraDiode, have commercialized innovations stemming from the laboratory's laser research, including those for fiber-optic telecommunications and metal-cutting applications.

In the early 2000s, Juodawlkis, Plant, and others at the laboratory recognized a need for a stable, powerful, and bright single-mode semiconductor optical amplifier, which could enhance lidar and optical communications. They developed the SCOWA (slab-coupled optical waveguide amplifier) concept by extending earlier work on slab-coupled optical waveguide lasers (SCOWLs). The initial SCOWA was funded under the laboratory's internal technology investment portfolio, a pool of R&D funding provided by the undersecretary of defense for research and engineering to seed new technology ideas. These ideas often mature into sponsored programs or lead to commercialized technology.

"Soon, we developed a semiconductor optical amplifier that was 10 times better than anything that had ever been demonstrated before," Plant says. Like other semiconductor optical amplifiers, the SCOWA guides laser light through semiconductor material. This process increases optical power as the laser light interacts with electrons, causing them to shed photons at the same wavelength as the input laser. The SCOWA's unique light-guiding design enables it to reach much higher output powers, creating a powerful and efficient beam. They demonstrated SCOWAs at various wavelengths and applied the technology to projects for the Department of Defense.

When Bridger Photonics reached out to Lincoln Laboratory, the most impactful application of the device yet emerged. Working iteratively through the ARPA-E funding and a Cooperative Research and Development Agreement (CRADA), the team increased Bridger's laser power by more than tenfold. This power boost enabled them to extend the range of the lidar to elevations over 1,000 feet.

"Lincoln Laboratory had the knowledge of what goes on inside the optical amplifier — they could take our input, adjust the recipe, and make a device that worked very well for us," Roos says.

The Gas Mapping Lidar was commercially released in 2019. That same year, the product won an R&D 100 Award, recognizing it as a revolutionary advancement in the marketplace.

A technology transfer takes off

Today, the United States is the world's largest natural gas supplier, driving growth in the methane-sensing market. Bridger Photonics deploys its Gas Mapping Lidar for customers nationwide, attaching the sensor to planes and drones and pinpointing leaks across the entire supply chain, from the sites where gas is extracted, to the pipelines that carry it across the country, to the businesses and homes where it is delivered. Customers buy the data from these scans to efficiently locate and repair leaks in their gas infrastructure. In January 2025, the Environmental Protection Agency provided regulatory approval for the technology.

According to Bruce Niemeyer, president of Chevron's shale and tight operations, the lidar capability has been game-changing: "Our goal is simple — keep methane in the pipe. This technology helps us assure we are doing that … It can find leaks that are 10 times smaller than other commercial providers are capable of spotting."

At Lincoln Laboratory, researchers continue to innovate new devices in the national interest. The SCOWA is one of many technologies in the toolkit of the laboratory's Microsystems Prototyping Foundry, which will soon be expanded to include a new Compound Semiconductor Laboratory – Microsystem Integration Facility. Government, industry, and academia can access these facilities through government-funded projects, CRADAs, test agreements, and other mechanisms.

At the direction of the U.S. government, the laboratory is also seeking industry transfer partners for a technology that couples SCOWA with a photonic integrated circuit platform. Such a platform could advance quantum computing and sensing, among other applications.

"Lincoln Laboratory is a national resource for semiconductor optical emitter technology," Juodawlkis says.

MIT launches Day of Design to bring hands-on learning to classrooms

Fri, 09/12/2025 - 9:50am

A new MIT initiative known as Day of Design offers free, open-source, hands-on design activities for all classrooms, in addition to professional development opportunities and signature events. The material engages pK-12 learners in the skills they need to solve complex open-ended problems while also considering user, social, and environmental needs. Inspired by Day of AI and Day of Climate, it is a new collaborative effort by the MIT Morningside Academy for Design (MAD) and the WPS Institute, with support from the MIT pK-12 Initiative.

“At MIT, design is practiced across departments — from the more obvious ones, like architecture and mechanical engineering, to less apparent ones, like biology and chemistry. Design skills support students in becoming strong collaborators, idea-makers, and human-centered problem-solvers. The Day of Design initiative seeks to share these skills with the K-12 audience through bite-sized, engaging activities for every classroom,” says Rosa Weinberg, who co-led the development of Day of Design and serves as MAD’s K–12 design education lead.

These interdisciplinary resources are designed collaboratively with feedback from teachers and grounded in exciting themes across science, humanities, art, engineering, and other subject areas, serving educators and learners regardless of their experience with design and making. Activities are scaffolded like “grammar lessons” for design education, including classroom-ready slides, handouts, tutorial videos, and facilitation tips supporting 21st century mindsets. All materials will be shared online, enabling educators to use the content as-is, or modify it as needed for their classrooms and other informal learning settings.

Rachel Adams, a former teacher and head of teaching and learning at the WPS Institute, explains, “There can be a gap between open-ended teaching materials and what teachers actually need in their classrooms. Day of Design classroom materials are piloted and workshopped by an interdisciplinary cohort of teachers who make up our Teacher Innovation Fellowship. This collaborative design process allows us to bridge the gap between cutting-edge MIT research and practical student-centered design lessons. These materials represent a new way of thinking that honors both the innovation happening in the labs at MIT and the real-world needs of educators.”

Day of Design also features signature events and a yearly, real-world challenge that brings all the design skills together. It is intended for educators who want ready-to-use design and making activities that connect to their subject areas and mindsets, and for students eager to develop problem-solving skills, creativity, and hands-on experience. Schools and districts looking to engage learners through interdisciplinary, project-based approaches can adopt the program as a flexible framework, while community partners can use it to provide young people with tools and spaces to create.

Cedric Jacobson, a chemistry teacher at Brooke High School in Boston who participated in MAD’s Teacher Innovation Fellowship and contributed to testing the Day of Design curriculum, emphasizes it “provides opportunities for teachers to practice and interact with design principles in concrete ways through multiple lesson structures. This process empowers them to try design principles in model lessons before preparing to use them in their own curriculum.”

Evan Milstein-Greengart, another Teacher Innovation Fellow, describes how “having this hands-on experience changed the way I thought about education. I felt like a kid again — going back to playground learning — and I want to bring that same spirit into my classroom.” 

Closing the skills gap through design education

Technologies such as artificial intelligence, robotics, and biotech are reshaping work and society. The World Economic Forum estimates that 39 percent of key job skills will change by 2030. At the same time, research shows student engagement drops sharply in high school, with a third of students experiencing what is often called the “engagement cliff.” Many do not encounter design until college, if at all.

There is a growing need to foster not just technical literacy, but design fluency — the ability to approach complex problems with empathy, creativity, and critical thinking. Design education helps students prototype solutions, iterate based on feedback, and communicate ideas clearly. Studies have shown it can improve creative thinking, motivation, problem-solving, self-efficacy, and academic achievement.

At MIT, design is a way of thinking and creating that spans disciplines — from bioengineering and architecture to mechanical systems and public policy. It is both creative and analytical, grounded in iteration, user input, and systems thinking. Day of Design reflects MIT’s “mens et manus” (“mind and hand”) motto and extends the tools of design to young learners and educators.

“The workshops help students develop skills that can be applied across multiple subject areas, using topics that draw context from MIT research while remaining exciting and accessible to middle and high school students,” explains Weinberg. “For example, ‘Cosmic Comfort,’ one of our pilot workshops, was inspired by MIT's Space Architecture course (MAS.S66/4.154/16.89). It challenges students to consider how you might make a lunar habitat feel like home, while focusing on developing the crucial design skill of ideation — the ability to generate multiple creative solutions.”

Building on an MIT legacy

Day of Design builds on the model of Day of AI and Day of Climate, two ongoing efforts by MIT RAISE and the MIT pK-12 Initiative. All three initiatives share free, open-source activities, professional development materials, and events that connect MIT research with educators and students worldwide. Since 2021, Day of AI has reached more than 42,000 teachers and 1.5 million students in 170 countries and all 50 U.S. states. Day of Climate, launched in March 2025, has already recorded over 50,000 website visitors, 300 downloads of professional development materials, and an April launch event at the MIT Museum that drew 200 participants.

“Day of Design builds on the spirit of Day of AI and Day of Climate by inviting young people to engage with real-world challenges through creative work, meaningful collaboration, and deep empathy for others. These initiatives reflect MIT’s commitment to hands-on, transdisciplinary learning, empowering future young leaders not just to understand the world, but to shape it,” says Claudia Urrea, executive director for the pK–12 Initiative at MIT Open Learning. 

Kicking off with connection

“Learning and creating together in person sparks the kind of ideas and connections that are hard to make any other way. Collective learning helps everyone think bigger and more creatively, while building a more deeply connected community that keeps that growth alive,” observes Caitlin Morris, PhD student in Fluid Interfaces, a 2024 MAD Design Fellow, and co-organizer of Day of Design: Connect, which will kick off Day of Design on Sept. 25. 

Following the launch, the first set of classroom resources will be introduced during the 2025–26 school year, starting with activities for grades 7–12. Additional resources for younger learners, along with training opportunities for educators, will be added over time. Each year, new design skills and mindsets will be incorporated, creating a growing library of activities. While initial events will take place at MIT, organizers plan to expand programming globally.

Teacher Innovation Fellow Jessica Toupin, who piloted Day of Design activities in her math classroom, reflects on the impact: “As a math teacher, I don’t always get to focus on design. This material reminded me of the joy of learning — and when I brought it into my classroom, students who had struggled came alive. Just the ability to play and build showed me they were capable of so much more.”

This MIT spinout is taking biomolecule storage out of the freezer

Fri, 09/12/2025 - 12:00am

Ever since freezers were invented, the life sciences industry has been reliant on them. That’s because many patient samples, drug candidates, and other biologics must be stored and transported in powerful freezers or surrounded by dry ice to remain stable.

The problem was on full display during the Covid-19 pandemic, when truckloads of vaccines had to be discarded because they had thawed during transport. Today, the stakes are even higher. Precision medicine, from CAR-T cell therapies to tumor DNA sequencing that guides cancer treatment, depends on pristine biological samples. Yet a single power outage, shipping delay, or equipment failure can destroy irreplaceable patient samples, setting back treatment by weeks or halting it entirely. In remote areas and developing nations, the lack of reliable cold storage effectively locks out entire populations from these life-saving advances.

Cache DNA wants to set the industry free from freezers. At MIT, the company’s founders created a new way to store and preserve DNA molecules at room temperature. Now the company is building biomolecule preservation technologies that can be used in applications across health care, from routine blood tests and cancer screening to rare disease research and pandemic preparedness.

“We want to challenge the paradigm,” says Cache DNA co-founder and former MIT postdoc James Banal. “Biotech has been reliant on the cold chain for more than 50 years. Why hasn’t that changed? Meanwhile, the cost of DNA sequencing has plummeted from $3 billion for the first human genome to under $200 today. With DNA sequencing and synthesis becoming so cheap and fast, storage and transport have emerged as the critical bottlenecks. It’s like having a supercomputer that still requires punch cards for data input.”

As the company works to preserve biomolecules beyond DNA and scale the production of its kits, co-founders Banal and MIT Professor Mark Bathe believe their technology has the potential to unlock new health insights by making sample storage accessible to scientists around the world.

“Imagine if every human on Earth could contribute to a global biobank, not just those living near million-dollar freezer facilities,” Banal says. “That’s 8 billion biological stories instead of just a privileged few. The cures we’re missing might be hiding in the biomolecules of someone we’ve never been able to reach.”

From quantum computing to “Jurassic Park”

Banal came to MIT from Australia to work as a postdoc under Bathe, a professor in MIT’s Department of Biological Engineering. Banal primarily studied in the MIT-Harvard Center for Excitonics, through which he collaborated with researchers from across MIT.

“I worked on some really wacky stuff, like DNA nanotechnology and its intersection with quantum computing and artificial photosynthesis,” Banal recalls.

Another project focused on using DNA to store data. While computers store data as 0s and 1s, DNA can store the same information using the nucleotides A, T, G, and C, allowing for extremely dense storage of data: By one estimate, 1 gram of DNA can hold up to 215 petabytes of data.
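
To make the analogy concrete, here is a minimal, hypothetical Python sketch of the two-bits-per-base mapping; production DNA storage schemes add error correction and avoid sequences that are hard to synthesize or sequence, so this shows only the basic idea rather than any method used by the researchers.

```python
# Toy two-bits-per-nucleotide codec; real systems add error correction and
# avoid problematic sequences such as long homopolymer runs.
BIT_TO_BASE = {"00": "A", "01": "C", "10": "G", "11": "T"}
BASE_TO_BIT = {base: bits for bits, base in BIT_TO_BASE.items()}

def encode(data: bytes) -> str:
    """Map every pair of bits to one nucleotide."""
    bits = "".join(f"{byte:08b}" for byte in data)
    return "".join(BIT_TO_BASE[bits[i:i + 2]] for i in range(0, len(bits), 2))

def decode(strand: str) -> bytes:
    """Invert the mapping: four nucleotides become one byte."""
    bits = "".join(BASE_TO_BIT[base] for base in strand)
    return bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))

assert decode(encode(b"MIT")) == b"MIT"   # 3 bytes round-trip through 12 bases
```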

After three years of work, in 2021, Banal and Bathe created a system that stored DNA-based data in tiny glass particles. They founded Cache DNA the same year, securing the intellectual property by working with MIT’s Technology Licensing Office and applying the technology to storing clinical nucleic acid samples as well as DNA data. Still, the technology was too nascent to be used for most commercial applications at the time.

Professor of chemistry Jeremiah Johnson had a different approach. His research had shown that certain plastics and rubbers could be made recyclable by adding cleavable molecular bonds. Johnson thought Cache DNA’s technology could be made faster and more reliable by using his amber-like polymers, similar to how researchers in the “Jurassic Park” movie recover ancient dinosaur DNA from a tree’s fossilized amber resin.

“It started basically as a fun conversation along the halls of Building 16,” Banal recalls. “He’d seen my work, and I was aware of the innovations in his lab.”

Banal immediately saw the potential. He was familiar with the burden of the cold chain. For his MIT experiments, he’d store samples in big freezers kept at -80 degrees Celsius. Samples would sometimes get lost in the freezer or be buried in the inevitable ice build-up. Even when they were perfectly preserved, samples could degrade as they thawed.

As part of a collaboration between Cache DNA and MIT, Banal, Johnson, and two researchers in Johnson’s lab developed a polymer that stores DNA at room temperature. In a nod to their inspiration, they demonstrated the approach by encoding DNA sequences with the “Jurassic Park” theme song.

The researchers’ polymers start out as a liquid that can surround a material and then form a solid, glass-like block when heated. To release the DNA, the researchers could add a molecule called cysteamine and a special detergent. The researchers showed the process could be used to store and retrieve DNA strands as long as 50,000 base pairs without causing damage.

“Real amber is not great at preservation. It’s porous and lets in moisture and air,” Banal says. “What we built is completely different: a dense polymer network that forms an impenetrable barrier around DNA. Think of it like vacuum-sealing, but at the molecular level. The polymer is so hydrophobic that water and enzymes that would normally destroy DNA simply can’t get through.”

As that research was taking shape, Cache DNA was learning from hospitals and research labs that sample storage was a huge problem. In places like Florida and Singapore, researchers said contending with the effects of humidity on samples was another constant headache. Other researchers across the globe wanted to know if the technology would help them collect samples outside of the lab.

“Hospitals told us they were running out of space,” Banal says. “They had to throw samples out, limit sample collection, and as a last-case scenario, they would use a decades-old storage technology that leads to degradation after a short period of time. It became a north star for us to solve those problems.”

A new tool for precision health

Last year, Cache DNA sent out more than 100 of its first alpha DNA preservation kits to researchers around the world.

“We didn’t tell researchers what to use it for, and our minds were blown by the use cases,” Banal says. “Some used it to collect samples in the field, where cold shipping wasn’t feasible. Others evaluated it for long-term archival storage. The applications were different, but the problem was universal: They all needed reliable storage without the constraint of refrigeration.”

Cache DNA has developed an entire suite of preservation technologies that can be optimized for different storage scenarios. The company also recently received a grant from the National Science Foundation to expand its technology to preserve a broader swath of biomolecules, including RNA and proteins, which could yield new insights into health and disease.

“This important innovation helps eliminate the cold chain and has the potential to unlock millions of genetic samples globally for Cache DNA to empower personalized medicine,” Bathe says. “Eliminating the cold chain is half the equation. The other half is scaling from thousands to millions or even billions of nucleic acid samples. Together, this could enable the equivalent of a ‘Google Books’ for nucleic acids stored at room temperature, either for clinical samples in hospital settings and remote regions of the world, or alternatively to facilitate DNA data storage and retrieval at scale.”

“Freezers have dictated where science could happen,” Banal says. “Remove that constraint, and you start to crack open possibilities: island nations studying their unique genetics without samples dying in transit; every rare disease patient worldwide contributing to research, not just those near major hospitals; the 2 billion people without reliable electricity finally joining global health studies. Room-temperature storage isn’t the whole answer, but every cure starts with a sample that survived the journey.”

New RNA tool to advance cancer and infectious disease research and treatment

Thu, 09/11/2025 - 4:45pm

Researchers at the Antimicrobial Resistance (AMR) interdisciplinary research group of the Singapore-MIT Alliance for Research and Technology (SMART), MIT’s research enterprise in Singapore, have developed a powerful tool capable of scanning thousands of biological samples to detect transfer ribonucleic acid (tRNA) modifications — tiny chemical changes to RNA molecules that help control how cells grow, adapt to stress, and respond to diseases such as cancer and antibiotic-resistant infections. The tool opens up new possibilities for science, health care, and industry, from accelerating disease research and enabling more precise diagnostics to guiding the development of more effective treatments for these conditions.

For this study, the SMART AMR team worked in collaboration with researchers at MIT, Nanyang Technological University in Singapore, the University of Florida, the University at Albany in New York, and Lodz University of Technology in Poland.

Addressing current limitations in RNA modification profiling

Cancer and infectious diseases are complicated health conditions in which cells are forced to function abnormally by mutations in their genetic material or by instructions from an invading microorganism. The SMART-led research team is among the world’s leaders in understanding how the epitranscriptome — the over 170 different chemical modifications of all forms of RNA — controls growth of normal cells and how cells respond to stressful changes in the environment, such as loss of nutrients or exposure to toxic chemicals. The researchers are also studying how this system is corrupted in cancer or exploited by viruses, bacteria, and parasites in infectious diseases.

Current molecular methods used to study the expansive epitranscriptome and all of the thousands of different types of modified RNA are often slow, labor-intensive, costly, and involve hazardous chemicals, which limits research capacity and speed.

To solve this problem, the SMART team developed a new tool that enables fast, automated profiling of tRNA modifications — molecular changes that regulate how cells survive, adapt to stress, and respond to disease. This capability allows scientists to map cell regulatory networks, discover novel enzymes, and link molecular patterns to disease mechanisms, paving the way for better drug discovery and development, and more accurate disease diagnostics. 

Unlocking the complexity of RNA modifications

SMART’s open-access research, recently published in Nucleic Acids Research and titled “tRNA modification profiling reveals epitranscriptome regulatory networks in Pseudomonas aeruginosa,” shows that the tool has already enabled the discovery of previously unknown RNA-modifying enzymes and the mapping of complex gene regulatory networks. These networks are crucial for cellular adaptation to stress and disease, providing important insights into how RNA modifications control bacterial survival mechanisms. 

Using robotic liquid handlers, researchers extracted tRNA from more than 5,700 genetically modified strains of Pseudomonas aeruginosa, a bacterium that causes infections such as pneumonia, urinary tract infections, bloodstream infections, and wound infections. Samples were enzymatically digested and analyzed by liquid chromatography-tandem mass spectrometry (LC-MS/MS), a technique that separates molecules based on their physical properties and identifies them with high precision and sensitivity. 

As part of the study, the process generated over 200,000 data points in a high-resolution approach that revealed new tRNA-modifying enzymes and simplified gene networks controlling how cells respond and adapt to stress. For example, the data revealed that the methylthiotransferase MiaB, one of the enzymes responsible for the tRNA modification ms2i6A, is sensitive to the availability of iron and sulfur and to metabolic changes when oxygen is low. Discoveries like this highlight how cells respond to environmental stresses, and could lead to future development of therapies or diagnostics.

SMART’s automated system was specially designed to profile tRNA modifications across thousands of samples rapidly and safely. Unlike traditional methods, this tool integrates robotics to automate sample preparation and analysis, eliminating the need for hazardous chemical handling and reducing costs. This advancement increases safety, throughput, and affordability, enabling routine large-scale use in research and clinical labs.

A faster and automated way to study RNA

As the first system capable of quantitative, system‑wide profiling of tRNA modifications at this scale, the tool provides a unique and comprehensive view of the epitranscriptome — the complete set of RNA chemical modifications within cells. This capability allows researchers to validate hypotheses about RNA modifications, uncover novel biology, and identify promising molecular targets for developing new therapies.

“This pioneering tool marks a transformative advance in decoding the complex language of RNA modifications that regulate cellular responses,” says Professor Peter Dedon, co-lead principal investigator at SMART AMR, professor of biological engineering at MIT, and corresponding author of the paper. “Leveraging AMR’s expertise in mass spectrometry and RNA epitranscriptomics, our research uncovers new methods to detect complex gene networks critical for understanding and treating cancer, as well as antibiotic-resistant infections. By enabling rapid, large-scale analysis, the tool accelerates both fundamental scientific discovery and the development of targeted diagnostics and therapies that will address urgent global health challenges.”

Accelerating research, industry, and health-care applications

This versatile tool has broad applications across scientific research, industry, and health care. It enables large-scale studies of gene regulation, RNA biology, and cellular responses to environmental and therapeutic challenges. The pharmaceutical and biotech industry can harness it for drug discovery and biomarker screening, efficiently evaluating how potential drugs affect RNA modifications and cellular behavior. This aids the development of targeted therapies and personalized medical treatments.

“This is the first tool that can rapidly and quantitatively profile RNA modifications across thousands of samples,” says Jingjing Sun, research scientist at SMART AMR and first author of the paper. “It has not only allowed us to discover new RNA-modifying enzymes and gene networks, but also opens the door to identifying biomarkers and therapeutic targets for diseases such as cancer and antibiotic-resistant infections. For the first time, large-scale epitranscriptomic analysis is practical and accessible.”

Looking ahead: advancing clinical and pharmaceutical applications

Moving forward, SMART AMR plans to expand the tool’s capabilities to analyze RNA modifications in human cells and tissues, moving beyond microbial models to deepen understanding of disease mechanisms in humans. Future efforts will focus on integrating the platform into clinical research to accelerate the discovery of biomarkers and therapeutic targets. The translation of the technology into an epitranscriptome-wide analysis tool that can be used in pharmaceutical and health-care settings will drive the development of more effective and personalized treatments.

The research conducted at SMART is supported by the National Research Foundation Singapore under its Campus for Research Excellence and Technological Enterprise program.

Technology originating at MIT leads to approved bladder cancer treatment

Thu, 09/11/2025 - 12:00am

At MIT, a few scribbles on a whiteboard can turn into a potentially transformational cancer treatment.

This scenario came to fruition this week when the U.S. Food and Drug Administration approved a system for treating an aggressive form of bladder cancer. More than a decade ago, the system started as an idea in the lab of MIT Professor Michael Cima at the Koch Institute for Integrative Cancer Research, enabled by funding from the National Institutes of Health and MIT’s Deshpande Center.

The work that started with a few researchers at MIT turned into a startup, TARIS Biomedical LLC, that was co-founded by Cima and David H. Koch Institute Professor Robert Langer, and acquired by Johnson & Johnson in 2019. In developing the core concept of a device for local drug delivery to the bladder — which represents a new paradigm in bladder cancer treatment — the MIT team approached drug delivery like an engineering problem.

“We spoke to urologists and sketched out the problems with past treatments to get to a set of design parameters,” says Cima, a David H. Koch Professor of Engineering and professor of materials science and engineering. “Part of our criteria was it had to fit into urologists’ existing procedures. We wanted urologists to know what to do with the system without even reading the instructions for use. That’s pretty much how it came out.”

To date, the system has been used in patients thousands of times. In one study involving people with high-risk, non-muscle-invasive bladder cancer whose disease had proven resistant to standard care, doctors could find no evidence of cancer in 82.4 percent of patients treated with the system. More than 50 percent of those patients were still cancer-free nine months after treatment.

The results are extremely gratifying for the team of researchers that worked on it at MIT, including Langer and Heejin Lee SM ’04, PhD ’09, who developed the system as part of his PhD thesis. And Cima says far more people deserve credit than just the ones who scribbled on his whiteboard all those years ago.

“Drug products like this take an enormous amount of effort,” says Cima. “There are probably more than 1,000 people that have been involved in developing and commercializing the system: the MIT inventors, the urologists they consulted, the scientists at TARIS, the scientists at Johnson & Johnson — and that’s not including all the patients who participated in clinical trials. I also want to emphasize the importance of the MIT ecosystem, and the importance of giving people the resources to pursue arguably crazy ideas. We need to continue to support those kinds of activities.”

In the mid 2000s, Langer connected Cima with a urologist at Boston Children’s Hospital who was seeking a new treatment for a painful bladder disease known as interstitial cystitis. The standard treatment required frequent drug infusions into a patient’s bladder through a catheter, which provided only temporary relief.

A group of researchers including Cima; Lee; Hong Linh Ho Duc SM ’05, PhD ’09; Grace Kim PhD ’08; and Karen Daniel PhD ’09 began speaking with urologists and people who had run failed clinical trials involving bladder treatments to understand what went wrong. All that information went on Cima’s whiteboard over the course of several weeks. Fortunately, Cima also scribbled “Do not erase!”

“We learned a lot in the process of writing everything down,” Cima says. “We learned what not to build and what to avoid.”

With the problem well-defined, Cima received a grant from MIT’s Deshpande Center for Technological Innovation, which allowed Lee to work on designing a better solution as part of his PhD thesis.

One of the key advances the group made was using a special alloy that gave the device “shape memory” so that it could be straightened out and inserted into the bladder through a catheter. Then it would fold up, preventing it from being expelled during urination.

The new design was able to slowly release drugs over a two-week period — far longer than any other approach — and could then be removed using a thin, flexible tube commonly used in urology, called a cystoscope. The progress was enough for Cima and Langer, who are both serial entrepreneurs, to found TARIS Biomedical and license the technology from MIT. Lee and three other MIT graduates joined the company.

“It was a real pleasure working with Mike Cima, our students, and colleagues on this novel drug delivery system, which is already changing patients’ lives,” Langer says. “It’s a great example of how research at the Koch Institute starts with basic science and engineering and ends up with new treatments for cancer patients.”

The FDA’s approval of the system for the treatment of certain patients with high-risk, non-muscle-invasive bladder cancer now means that patients with this disease may have a better treatment option. Moving forward, Cima hopes the system continues to be explored to treat other diseases.

A better understanding of debilitating head pain

Thu, 09/11/2025 - 12:00am

Everyone gets headaches. But not everyone gets cluster headache attacks, a debilitating malady producing acute pain that lasts an hour or two. Cluster headache attacks come in sets — hence the name — and leave people in complete agony, unable to function. A little under 1 percent of the U.S. population suffers from cluster headache.

But that’s just an outline of the matter. What’s it like to actually have a cluster headache?

“The pain of a cluster headache is such that you can’t sit still,” says MIT-based science journalist Tom Zeller, who has suffered from them for decades. “I’d liken it to putting your hand on a hot burner, except that you can’t take your hand off for an hour or two. Every headache is an emergency. You have to run or pace or rock. Think of another pain you had to dance through, but it just doesn’t stop. It’s that level of intensity, and it’s all happening inside your head.”

And then there is the pain of the migraine headache, which seems slightly less acute than a cluster attack, but longer-lasting, and similarly debilitating. Migraine attacks can be accompanied by extreme sensitivity to light and noise, vision issues, and nausea, among other neurological symptoms, leaving patients alone in dark rooms for hours or days. An estimated 1.2 billion people around the world, including 40 million in the U.S., struggle with migraine attacks.

These are not obscure problems. And yet: We don’t know exactly why migraine and cluster headache disorders occur, nor how to address them. Headaches have never been a prominent topic within modern medical research. How can something so pervasive be so overlooked?

Now Zeller examines these issues in an absorbing book, “The Headache: The Science of a Most Confounding Affliction — and a Search for Relief,” published this summer by Mariner Books. Zeller is the editor-in-chief and co-founder of Undark, a digital magazine on science and society published by the Knight Science Journalism Program at MIT.

One word, but different syndromes

“The Headache,” which is Zeller’s first book, combines a first-person narrative of his own suffering, accounts of the pain and dread that other headache sufferers feel, and thorough reporting on headache-based research in science and medicine. Zeller has experienced cluster headache attacks for 30-plus years, dating to when he was in his 20s.

“In some ways, I suppose I had been writing the book my whole adult life without knowing it,” Zeller says. Indeed, he had collected research material about these conditions for years while grappling with his own headache issues.

A key issue in the book is why society has not taken cluster headache and migraine problems more seriously — and relatedly, why the science of headache disorders is not more advanced. Although in fairness, as Zeller says, “Anything involving the brain or central nervous system is incredibly hard to study.”

More broadly, Zeller suggests in the book, we have conflated regular workaday headaches — the kind you may get from staring at a screen too long — with the far more severe and rather different disorders like cluster headache and migraine. (Some patients refer to cluster headache and migraine in the singular, not plural, to emphasize that this is an ongoing condition, not just successive headaches.)

“Headaches are annoying, and we tough it out,” Zeller says. “But we use the same exact word to talk about these other things,” namely, cluster headache and migraine. This has likely reinforced our general dismissal of severe headache disorders as a pressing and distinct medical problem. Instead, we often consider headache disorders, even severe ones, as something people should simply power through.

“There’s a certain sense of malingering we still attach to a migraine or [other] headache disorder, and I’m not sure that’s going away,” Zeller says.

Then too, about three-quarters of people who experience migraine attacks are women, which has quite plausibly led the ailment to “get short shrift historically,” as Zeller says. Or at least, in recent history: As Zeller chronicles in the book, an awareness of severe headache disorders goes back to ancient times, and it’s possible they have received less relative attention in modernity.

A new shift in medical thinking

In any case, for much of the 20th century, conventional medical wisdom held that migraine and cluster headache stemmed from changes or abnormalities in blood vessels. But in recent decades, as Zeller details, there has been a paradigm shift: These conditions are now seen as more neurological in origin.

A key breakthrough here was the 1980s discovery of a neurotransmitter called calcitonin gene-related peptide, or CGRP. As scientists have discovered, CGRP is released from nerve endings around blood vessels and helps produce migraine symptoms. This offered a new strategy — and target — for combating severe head pain. The first drugs to inhibit the effects of CGRP hit the market in 2018, and most researchers in the field are now focused on idiopathic headache as a neurological disorder, not a vascular problem.

“It’s the way science works,” Zeller says. “Changing course is not easy. It’s like turning a ship on a dime. The same applies to the study of headaches.”

Many medications aimed at blocking these neurotransmitters have since been developed, though only about 20 percent of patients seem to find permanent relief as a result. As Zeller chronicles, other patients feel benefits for about a year, before the effects of a medication wear off; many of them now try complicated combinations of medications.

Severe headache disorders also seem linked to hormonal changes in people, who often see an onset of these ailments in their teens, and a diminishing of symptoms later in life. So, while headache medicine has witnessed a recent breakthrough, much more work lies ahead.

Opening up a discussion

Amid all this, one set of questions still tugging at Zeller is evolutionary in nature: Why do humans experience headache disorders at all? There is no clear evidence that other species get severe headaches — or that the prevalence of severe headache conditions in society has ever diminished.

One hypothesis, Zeller notes, is that “having a highly attuned nervous system could have been a benefit in our more primitive state.” Such a system may have helped us survive, in the past, but at the cost of producing intense disorders in some people when the wiring goes a bit awry. We may learn more about this as neuro-based headache research continues.

“The Headache” has received widespread praise. Writing in The New Yorker, Jerome Groopman heralded the “rich material in the book,” noting that it “weaves together history, biology, a survey of current research, testimony from patients, and an agonizing account of Zeller’s own suffering.”

For his part, Zeller says he is appreciative of the attention “The Headache” has generated, as one of the most widely noted nonfiction books released this summer.

“It’s opened up room for a kind of conversation that doesn’t usually break through into the mainstream,” Zeller says. “I’m hearing from a lot of patients who just are saying, ‘Thank you for writing this.’ And that’s really gratifying. I’m most happy to hear from people who think it’s giving them a voice. I’m also hearing a lot from doctors and scientists. The moment has opened up for this discussion, and I’m grateful for that.”

MIT software tool turns everyday objects into animated, eye-catching displays

Wed, 09/10/2025 - 3:15pm

Whether you’re an artist, advertising specialist, or just looking to spruce up your home, turning everyday objects into dynamic displays is a great way to make them more visually engaging. For example, you could turn a kids’ book into a handheld cartoon of sorts, making the reading experience more immersive and memorable for a child.

But now, thanks to MIT researchers, it’s possible to make dynamic displays without any electronics, using barrier-grid animations (or scanimations), which rely on printed materials instead. This visual trick involves sliding a patterned sheet across an image to create the illusion of a moving picture. The secret of barrier-grid animations lies in the name: An overlay called a barrier (or grid), often resembling a picket fence, moves across, rotates around, or tilts toward an image to reveal frames in an animated sequence. The underlying picture is a combination of each still, sliced and interwoven so that a different snapshot shows through depending on the overlay’s position.
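
As a rough sketch of that slicing step, the hypothetical Python function below interleaves equally sized frames column by column, the way a classic straight-line scanimation is assembled; FabObscura, described below, generalizes the barrier itself well beyond straight lines.

```python
import numpy as np

def interlace(frames):
    """Interleave same-sized frames column by column for a straight-line
    scanimation: column x of the output is copied from frame (x mod N).
    A barrier with one transparent slit every N columns, slid across the
    print, then reveals one frame at a time."""
    frames = [np.asarray(f) for f in frames]
    n = len(frames)
    out = np.empty_like(frames[0])
    for x in range(out.shape[1]):
        out[:, x] = frames[x % n][:, x]
    return out
```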

While tools exist to help artists create barrier-grid animations, they’re typically used to create barrier patterns that have straight lines. Building off of previous work in creating images that appear to move, researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a tool that allows users to explore more unconventional designs. From zigzags to circular patterns, the team’s “FabObscura” software turns unique concepts into printable scanimations, helping users add dynamic animations to things like pictures, toys, and decor.

MIT Department of Electrical Engineering and Computer Science (EECS) PhD student and CSAIL researcher Ticha Sethapakdi SM ’19, a lead author on a paper presenting FabObscura, says that the system is a one-size-fits-all tool for customizing barrier-grid animations. This versatility extends to unconventional, elaborate overlay designs, like pointed, angled lines to animate a picture you might put on your desk, or the swirling, hypnotic appearance of a radial pattern you could spin over an image placed on a coin or a Frisbee.

“Our system can turn a seemingly static, abstract image into an attention-catching animation,” says Sethapakdi. “The tool lowers the barrier to entry to creating these barrier-grid animations, while helping users express a variety of designs that would’ve been very time-consuming to explore by hand.”

Behind these novel scanimations is a key finding: Barrier patterns can be expressed as any continuous mathematical function — not just straight lines. Users can type these equations into a text box within the FabObscura program, and then see how it graphs out the shape and movement of a barrier pattern. If you wanted a traditional horizontal pattern, you’d enter in a constant function, where the output is the same no matter the input, much like drawing a straight line across a graph. For a wavy design, you’d use a sine function, which is smooth and resembles a mountain range when plotted out. The system’s interface includes helpful examples of these equations to guide users toward their preferred pattern.
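
To make that idea concrete, here is a small, hypothetical sketch of one way a continuous function could define a barrier mask, with stripes that follow the curve y = f(x): a constant function yields the familiar straight grid, while a sine yields a wavy one. It is a plausible reading of the approach rather than FabObscura’s actual code.

```python
import numpy as np

def barrier_mask(f, width, height, period=8, slit=2):
    """Binary barrier (True = transparent) whose stripes follow y = f(x).
    A constant f gives a straight-line grid; f(x) = a*sin(b*x) gives a wavy
    one. A sketch of the idea, not FabObscura's exact formulation."""
    x = np.arange(width)
    y = np.arange(height)[:, None]      # column vector, for broadcasting
    phase = (y - f(x)) % period         # vertical distance to the nearest stripe
    return phase < slit                 # thin transparent slits, opaque elsewhere

straight = barrier_mask(lambda x: 0 * x, width=400, height=300)
wavy = barrier_mask(lambda x: 10 * np.sin(x / 15), width=400, height=300)
```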

A simple interface for elaborate ideas

FabObscura works for all known types of barrier-grid animations, supporting a variety of user interactions. The system enables the creation of a display with an appearance that changes depending on your viewpoint. FabObscura also allows you to create displays that you can animate by sliding or rotating a barrier over an image.

To produce these designs, users can upload a folder of frames of an animation (perhaps a few stills of a horse running) or choose from a few preset sequences (like an eye blinking), and then specify the angle the barrier will move. After previewing the design, you can fabricate the barrier and picture onto separate transparent sheets (or print the image on paper) using a standard 2D printer, such as an inkjet. The image can then be placed and secured on flat, handheld items such as picture frames, phones, and books.

You can enter separate equations if you want two sequences on one surface, which the researchers call “nested animations.” Depending on how you move the barrier, you’ll see a different story being told. For example, CSAIL researchers created a car that rotates when you move its sheet vertically, but transforms into a spinning motorcycle when you slide the grid horizontally.

These customizations lead to unique household items, too. The researchers designed an interactive coaster that you can switch from displaying a “coffee” icon to symbols of a martini and a glass of water by pressing your fingers down on the edges of its surface. The team also spruced up a jar of sunflower seeds, producing a flower animation on the lid that blooms when twisted off.

Artists, including graphic designers and printmakers, could also use this tool to make dynamic pieces without needing to connect any wires. The tool saves them crucial time when exploring creative, low-power designs, such as a clock with a mouse that runs along as it ticks. FabObscura could produce animated food packaging, or even reconfigurable signage for places like construction sites or stores that notifies people when a particular area is closed or a machine isn’t working.

Keep it crisp

FabObscura’s barrier-grid creations do come with certain trade-offs. While nested animations are novel and more dynamic than a single-layer scanimation, their visual quality isn’t as strong. The researchers wrote design guidelines to address these challenges, recommending users upload fewer frames for nested animations to keep the interlaced image simple and stick to high-contrast images for a crisper presentation.

In the future, the researchers intend to expand what users can upload to FabObscura, like being able to drop in a video file that the program can then select the best frames from. This would lead to even more expressive barrier-grid animations.

FabObscura might also step into a new dimension: 3D. While the system is currently optimized for flat, handheld surfaces, CSAIL researchers are considering implementing their work into larger, more complex objects, possibly using 3D printers to fabricate even more elaborate illusions.

Sethapakdi wrote the paper with several CSAIL affiliates: Zhejiang University PhD student and visiting researcher Mingming Li; MIT EECS PhD student Maxine Perroni-Scharf; MIT postdoc Jiaji Li; MIT associate professors Arvind Satyanarayan and Justin Solomon; and senior author and MIT Associate Professor Stefanie Mueller, leader of the Human-Computer Interaction (HCI) Engineering Group at CSAIL. Their work will be presented at the ACM Symposium on User Interface Software and Technology (UIST) this month.

Demo Day features hormone-tracking sensors, desalination systems, and other innovations

Wed, 09/10/2025 - 3:00pm

Kresge Auditorium came alive Friday as MIT entrepreneurs took center stage to share their progress in the delta v startup accelerator program.

Now in its 14th year, delta v Demo Day represents the culmination of a summer in which students work full-time on new ventures under the guidance of the Martin Trust Center for MIT Entrepreneurship.

It also doubles as a celebration, with Trust Center Managing Director (and consummate hype man) Bill Aulet setting the tone early with his patented high-five run through the audience and leap on stage for opening remarks.

“All these students have performed a miracle,” Aulet told the crowd. “One year ago, they were sitting in the audience like all of you. One year ago, they probably didn’t even have an idea or a technology. Maybe they did, but they didn’t have a team, a clear vision, customer models, or a clear path to impact. But today they’re going to blow your mind. They have products — real products — a founding team, a clear mission, customer commitments or letters of intent, legitimate business models, and a path to greatness and impact. In short, they will have achieved escape velocity.”

The two-hour event filled Kresge Auditorium, with a line out the door for good measure, and was followed by a party under a tent on the Kresge lawn. Each presentation began with a short video introducing the company before a student took the stage to expand on the problem they were solving and what their team has learned from talks with potential customers.

In total, 22 startups showcased their ventures and early business milestones in rapid-fire presentations.

Rick Locke, the new dean of the MIT Sloan School of Management, said events like Demo Day are why he came back to the Institute after serving in various roles between 1988 and 2013.

“What’s great about this event is how it crystallizes the spirit of MIT: smart people doing important work, doing it by rolling up their sleeves, doing it with a certain humility but also a vision, and really making a difference in the world,” Locke told the audience. “You can feel the positivity, the energy, and the buzz here tonight. That’s what the world needs more of.”

A program with a purpose

This year’s Demo Day featured 70 students from across MIT, with 16 startups working out of the Trust Center on campus and six working from New York City. Through the delta v program, the students were guided by mentors, received funding, and worked through an action-oriented curriculum full-time between June and September. Aulet also noted that the students presenting benefitted from entrepreneurial support resources from across the Institute.

The odds are in the startups’ favor: A 2022 study found that 69 percent of businesses from the program were still operating five years later. Alumni companies had raised roughly $1 billion in funding.

Demo Day marks the end of delta v and serves to inspire next year’s cohort of entrepreneurs.

“Turn on a screen or look anywhere around you, and you'll see issues with climate, sustainability, health care, the future of work, economic disparities, and more,” Aulet said. “It can all be overwhelming. These entrepreneurs bring light to dark times. Entrepreneurs don’t see problems. As the great Biggie Smalls from Brooklyn said, ‘Turn a negative into a positive.’ That’s what entrepreneurs do.”

Startups in action

Startups in this year’s cohort presented solutions in biotech and health care, sustainability, financial services, energy, and more.

One company, Gees, is helping women with hormonal conditions like polycystic ovary syndrome (PCOS) by offering a saliva-based sensor that tracks key hormones, providing personalized insights that help them manage symptoms.

“Over 200 million women live with PCOS worldwide,” said MIT postdoc and co-founder Walaa Khushaim. “If it goes unmanaged, it can lead to even more serious diseases. The good news is that 80 percent of cases can be managed with lifestyle changes. The problem is women trying to change their lifestyle are left in the dark, unsure if what they are doing is truly helping.”

Gees’ sensor is noninvasive and easier to use than current sensors that track hormones. It provides feedback in minutes from the comfort of users’ homes. The sensor connects to an app that shows results and trends to help women stay on track. The company already has more than 500 sign-ups for its wait list.

Another company, Kira, has created an electrochemical system to increase the efficiency and accessibility of water desalination. Kira aims to help companies manage brine wastewater that is often dumped, pumped underground, or trucked off to be treated.

“At Kira, we’re working toward a system that produces zero liquid waste and only solid salts,” says PhD student Jonathan Bessette SM ’22.

Kira says its system increases the amount of clean water created by industrial processes, reduces the amount of brine wastewater, and optimizes the energy flows of factories. The company says next year it will deploy a system at the largest groundwater desalination plant in the U.S.

A variety of other startups presented at the event:

AutoAce builds AI agents for car dealerships, automating repetitive tasks with a 24/7 voice agent that answers inbound service calls and books appointments.

Carbion uses a thermochemical process to convert biomass into battery-grade graphite at half the temperature of traditional synthetic methods.

Clima Technologies has developed an AI building engineer that enables facilities managers to “talk” to their buildings in real-time, allowing teams to conduct 24/7 commissioning, act on fault diagnostics, minimize equipment downtime, and optimize controls.

Cognify uses AI to predict customer interactions with digital platforms, simulating customer behavior to deliver insights into which designs resonate with customers, where friction exists in user journeys, and how to build a user experience that converts.

Durability uses computer vision and AI to analyze movement, predict injury risks, and guide recovery for athletes.

EggPlan uses a simple blood test and a proprietary model to assess eligibility for egg freezing with fertility clinics. If users do not have a baby, their fees are returned, making the process financially risk-free.

Forma Systems has developed optimization software that helps manufacturers make smarter, faster decisions about things like materials use while reducing their climate impact.

Ground3d is a social impact organization building a digital tool for crowdsourcing hyperlocal environmental data, beginning with street-level documentation of flooding events in New York City. The platform could help residents with climate resilience and advocacy.

GrowthFactor helps retailers scale their footprint with a fractional real estate analyst while using an AI-powered platform to maximize their chance of commercial success.

Kyma uses AI-powered patient engagement to integrate data from wearables, smart scales, sensors, and continuous glucose monitors to track behaviors and draft physician-approved, timely reminders.

LNK Energies is solving the heavy-duty transport industry’s emissions problem with liquid organic hydrogen carriers (LOHCs): safe, room-temperature liquids compatible with existing diesel infrastructure.

Mendhai Health offers a suite of digital tools to help women improve pelvic health and rehabilitate before and after childbirth.

Nami has developed an automatic, reusable drinkware cleaning station that delivers a hot, soapy, pressurized wash in under 30 seconds.

Pancho helps restaurants improve margins with an AI-powered food procurement platform that uses real-time price comparison, dispute tracking, and smart ordering.

Qadence offers older adults a co-pilot that assesses mobility and fall risk, then delivers tailored guidance to improve balance, track progress, and extend recovery beyond the clinic.

Sensopore offers an at-home diagnostic device to help families test for everyday illnesses at home, get connected with a telehealth doctor, and have prescriptions shipped to their door, reducing clinical visits.

Spheric Bio has developed a personalized occlusion device to improve a common surgical procedure used to prevent strokes.

Tapestry uses conversational AI to chat with attendees before events and connect them with the right people for more meaningful conversations.

Torque automates financial analysis across private equity portfolios to help investment professionals make better strategic decisions.

Trazo helps interior designers and architects collaborate and iterate on technical drawings and 3D designs for new construction and remodeling projects.

DOE selects MIT to establish a Center for the Exascale Simulation of Coupled High-Enthalpy Fluid–Solid Interactions

Wed, 09/10/2025 - 11:45am

The U.S. Department of Energy’s National Nuclear Security Administration (DOE/NNSA) recently announced that it has selected MIT to establish a new research center dedicated to advancing the predictive simulation of extreme environments, such as those encountered in hypersonic flight and atmospheric re-entry. The center will be part of the fourth phase of NNSA's Predictive Science Academic Alliance Program (PSAAP-IV), which supports frontier research advancing the predictive capabilities of high-performance computing for open science and engineering applications relevant to national security mission spaces.

The Center for the Exascale Simulation of Coupled High-Enthalpy Fluid–Solid Interactions (CHEFSI) — a joint effort of the MIT Center for Computational Science and Engineering, the MIT Schwarzman College of Computing, and the MIT Institute for Soldier Nanotechnologies (ISN) — plans to harness cutting-edge exascale supercomputers and next-generation algorithms to simulate with unprecedented detail how extremely hot, fast-moving gaseous and solid materials interact. The understanding of these extreme environments — characterized by temperatures of more than 1,500 degrees Celsius and speeds as high as Mach 25 — and their effect on vehicles is central to national security, space exploration, and the development of advanced thermal protection systems.

“CHEFSI will capitalize on MIT’s deep strengths in predictive modeling, high-performance computing, and STEM education to help ensure the United States remains at the forefront of scientific and technological innovation,” says Ian A. Waitz, MIT’s vice president for research. “The center’s particular relevance to national security and advanced technologies exemplifies MIT’s commitment to advancing research with broad societal benefit.”

CHEFSI is one of five new Predictive Simulation Centers announced by the NNSA as part of a program expected to provide up to $17.5 million to each center over five years.

CHEFSI’s research aims to couple detailed simulations of high-enthalpy gas flows with models of the chemical, thermal, and mechanical behavior of solid materials, capturing phenomena such as oxidation, nitridation, ablation, and fracture. Advanced computational models — validated by carefully designed experiments — can address the limitations of flight testing by providing critical insights into material performance and failure.

“By integrating high-fidelity physics models with artificial intelligence-based surrogate models, experimental validation, and state-of-the-art exascale computational tools, CHEFSI will help us understand and predict how thermal protection systems perform under some of the harshest conditions encountered in engineering systems,” says Raúl Radovitzky, the Jerome C. Hunsaker Professor of Aeronautics and Astronautics, associate director of the ISN, and director of CHEFSI. “This knowledge will help in the design of resilient systems for applications ranging from reusable spacecraft to hypersonic vehicles.”

Radovitzky will be joined on the center’s leadership team by Youssef Marzouk, the Breene M. Kerr (1951) Professor of Aeronautics and Astronautics, co-director of the MIT Center for Computational Science and Engineering (CCSE), and recently named the associate dean of the MIT Schwarzman College of Computing; and Nicolas Hadjiconstantinou, the Quentin Berg (1937) Professor of Mechanical Engineering and co-director of CCSE. Both will serve as associate directors. The center’s co-principal investigators include MIT faculty members across the departments of Aeronautics and Astronautics, Electrical Engineering and Computer Science, Materials Science and Engineering, Mathematics, and Mechanical Engineering. Franklin Hadley will lead center operations, with administration and finance under the purview of Joshua Freedman. Hadley and Freedman are both members of the ISN headquarters team.

CHEFSI expects to collaborate extensively with the DOE/NNSA national laboratories — Lawrence Livermore National Laboratory, Los Alamos National Laboratory, and Sandia National Laboratories — and, in doing so, offer graduate students and postdocs immersive research experiences and internships at these facilities.

Ten years later, LIGO is a black-hole hunting machine

Wed, 09/10/2025 - 11:00am

The following article is adapted from a press release issued by the Laser Interferometer Gravitational-wave Observatory (LIGO) Laboratory. LIGO is funded by the National Science Foundation and operated by Caltech and MIT, which conceived and built the project.

On Sept. 14, 2015, a signal arrived on Earth, carrying information about a pair of remote black holes that had spiraled together and merged. The signal had traveled for about 1.3 billion years at the speed of light to reach us — but it was not made of light. It was a different kind of signal: a quivering of space-time called gravitational waves, first predicted by Albert Einstein 100 years prior. On that day 10 years ago, the twin detectors of the U.S. National Science Foundation Laser Interferometer Gravitational-wave Observatory (NSF LIGO) made the first-ever direct detection of gravitational waves, whispers in the cosmos that had gone unheard until that moment.

The historic discovery meant that researchers could now sense the universe through three different means. Light, spanning X-rays, optical, radio, and other wavelengths, as well as high-energy particles such as cosmic rays and neutrinos, had been captured before, but this was the first time anyone had witnessed a cosmic event through the gravitational warping of space-time. For this achievement, the culmination of an effort first dreamed up more than 40 years earlier, three of the team’s founders won the 2017 Nobel Prize in Physics: MIT’s Rainer Weiss, professor emeritus of physics (who recently passed away at age 92); Caltech’s Barry Barish, the Ronald and Maxine Linde Professor of Physics, Emeritus; and Caltech’s Kip Thorne, the Richard P. Feynman Professor of Theoretical Physics, Emeritus.

Today, LIGO, which consists of detectors in both Hanford, Washington, and Livingston, Louisiana, routinely observes roughly one black hole merger every three days. LIGO now operates in coordination with two international partners, the Virgo gravitational-wave detector in Italy and KAGRA in Japan. Together, the gravitational-wave-hunting network, known as the LVK (LIGO, Virgo, KAGRA), has captured a total of about 300 black hole mergers, some of which are confirmed while others await further analysis. During the network’s current science run, the fourth since the first run in 2015, the LVK has discovered more than 200 candidate black hole mergers, more than double the number caught in the first three runs.

The dramatic rise in the number of LVK discoveries over the past decade is owed to several improvements to their detectors — some of which involve cutting-edge quantum precision engineering. The LVK detectors remain by far the most precise measuring instruments ever created by humans. The space-time distortions induced by gravitational waves are incredibly minuscule: LIGO detects changes in space-time smaller than 1/10,000 the width of a proton, or about 1/700 trillionth the width of a human hair.
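
That comparison is easy to verify with a back-of-the-envelope calculation. The sketch below uses typical textbook values for the proton diameter and the width of a human hair; both are illustrative assumptions rather than figures from LIGO's analysis.

```python
# Rough scale check of LIGO's displacement sensitivity (illustrative values only).
proton_diameter_m = 1.7e-15   # assumed typical proton diameter, in meters
hair_width_m = 1.2e-4         # assumed width of a human hair, in meters

ligo_displacement_m = proton_diameter_m / 10_000      # "1/10,000 the width of a proton"
hair_fraction = ligo_displacement_m / hair_width_m    # compare with a human hair

print(f"Displacement scale: {ligo_displacement_m:.1e} m")
print(f"Fraction of a hair's width: about 1 part in {1 / hair_fraction:.0e}")
# With these inputs, the ratio comes out near 1 part in 7e14, roughly 1/700 trillionth.
```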

“Rai Weiss proposed the concept of LIGO in 1972, and I thought, ‘This doesn’t have much chance at all of working,’” recalls Thorne, an expert on the theory of black holes. “It took me three years of thinking about it on and off and discussing ideas with Rai and Vladimir Braginsky [a Russian physicist], to be convinced this had a significant possibility of success. The technical difficulty of reducing the unwanted noise that interferes with the desired signal was enormous. We had to invent a whole new technology. NSF was just superb at shepherding this project through technical reviews and hurdles.”

Nergis Mavalvala, the Curtis and Kathleen Marble Professor of Astrophysics at MIT and dean of the MIT School of Science, says that the challenges the team overcame to make the first discovery are still very much at play. “From the exquisite precision of the LIGO detectors to the astrophysical theories of gravitational-wave sources, to the complex data analyses, all these hurdles had to be overcome, and we continue to improve in all of these areas,” Mavalvala says. “As the detectors get better, we hunger for farther, fainter sources. LIGO continues to be a technological marvel.”

The clearest signal yet

LIGO’s improved sensitivity is exemplified in a recent discovery of a black hole merger referred to as GW250114. (The numbers denote the date the gravitational-wave signal arrived at Earth: January 14, 2025.) The event was not that different from LIGO’s first-ever detection (called GW150914) — both involve colliding black holes about 1.3 billion light-years away with masses between 30 and 40 times that of our sun. But thanks to 10 years of technological advances reducing instrumental noise, the GW250114 signal is dramatically clearer.

“We can hear it loud and clear, and that lets us test the fundamental laws of physics,” says LIGO team member Katerina Chatziioannou, Caltech assistant professor of physics and William H. Hurt Scholar, and one of the authors of a new study on GW250114 published in Physical Review Letters.

By analyzing the frequencies of gravitational waves emitted by the merger, the LVK team provided the best observational evidence captured to date for what is known as the black hole area theorem, an idea put forth by Stephen Hawking in 1971 that says the total surface area of black holes cannot decrease. When black holes merge, their masses combine, increasing the surface area. But they also lose energy in the form of gravitational waves. Additionally, the merger can leave the combined black hole spinning faster, which by itself would shrink its area. The black hole area theorem states that despite these competing factors, the total surface area must still grow.

Later, Hawking and physicist Jacob Bekenstein concluded that a black hole’s area is proportional to its entropy, or degree of disorder. The findings paved the way for later groundbreaking work in the field of quantum gravity, which attempts to unite two pillars of modern physics: general relativity and quantum physics.

In essence, the LIGO detection allowed the team to “hear” two black holes growing as they merged into one, verifying Hawking’s theorem. (Virgo and KAGRA were offline during this particular observation.) The initial black holes had a total surface area of 240,000 square kilometers (roughly the size of Oregon), while the final area was about 400,000 square kilometers (roughly the size of California) — a clear increase. This is the second test of the black hole area theorem; an initial test was performed in 2021 using data from the original GW150914 signal, but because those data were not as clean, the results had a confidence level of 95 percent, compared with 99.999 percent for the new data.
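
The bookkeeping behind that comparison can be sketched with the standard horizon-area formula for a spinning (Kerr) black hole, A = 8π(GM/c²)²(1 + √(1 − χ²)), where M is the black hole's mass and χ its dimensionless spin. The masses and final spin in the sketch below are illustrative stand-ins in the 30-to-40-solar-mass range described above, not the published GW250114 parameters.

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def kerr_horizon_area_km2(mass_solar, spin):
    """Horizon area of a Kerr black hole: A = 8*pi*(G*M/c^2)^2 * (1 + sqrt(1 - chi^2))."""
    m_geom = G * mass_solar * M_SUN / C**2          # gravitational radius, in meters
    area_m2 = 8 * math.pi * m_geom**2 * (1 + math.sqrt(1 - spin**2))
    return area_m2 / 1e6                            # square meters -> square kilometers

# Illustrative inputs: two roughly 30-solar-mass, slowly spinning black holes
# merge into a single spinning remnant (some mass-energy is radiated away).
initial_area = kerr_horizon_area_km2(33, 0.0) + kerr_horizon_area_km2(32, 0.0)
final_area = kerr_horizon_area_km2(62, 0.68)

print(f"Initial total area: {initial_area:,.0f} km^2")   # about 230,000 km^2 with these inputs
print(f"Final area:         {final_area:,.0f} km^2")     # about 370,000 km^2 with these inputs
```

Even though some mass-energy is carried off by the gravitational waves, the remnant's horizon area comes out larger than the two initial areas combined, which is exactly what the theorem requires.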

Thorne recalls Hawking phoning him to ask whether LIGO might be able to test his theorem immediately after he learned of the 2015 gravitational-wave detection. Hawking died in 2018 and sadly did not live to see his theory observationally verified. “If Hawking were alive, he would have reveled in seeing the area of the merged black holes increase,” Thorne says.

The trickiest part of this type of analysis had to do with determining the final surface area of the merged black hole. The surface areas of pre-merger black holes can be more readily gleaned as the pair spiral together, roiling space-time and producing gravitational waves. But after the black holes coalesce, the signal is not as clear-cut. During this so-called ringdown phase, the final black hole vibrates like a struck bell.

In the new study, the researchers precisely measured the details of the ringdown phase, which allowed them to calculate the mass and spin of the black hole and, subsequently, determine its surface area. More specifically, they were able, for the first time, to confidently pick out two distinct gravitational-wave modes in the ringdown phase. The modes are like characteristic sounds a bell would make when struck; they have somewhat similar frequencies but die out at different rates, which makes them hard to identify. The improved data for GW250114 meant that the team could extract the modes, demonstrating that the black hole’s ringdown occurred exactly as predicted by math models based on the Teukolsky formalism — devised in 1972 by Saul Teukolsky, now a professor at Caltech and Cornell University.
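
A toy version of that ringdown model helps show why the measurement is delicate: the post-merger signal is treated as a sum of exponentially damped sinusoids, and the analysis must separate modes whose frequencies are close but whose damping times differ. The amplitudes, frequencies, and decay times below are placeholders for illustration, not GW250114's measured values.

```python
import numpy as np

def ringdown(t, modes):
    """Toy ringdown signal: a sum of exponentially damped sinusoids.
    Each mode is a tuple (amplitude, frequency_hz, damping_time_s, phase_rad)."""
    h = np.zeros_like(t)
    for amp, freq, tau, phase in modes:
        h += amp * np.exp(-t / tau) * np.cos(2 * np.pi * freq * t + phase)
    return h

# 50 milliseconds of signal after the merger, with illustrative time sampling.
t = np.linspace(0.0, 0.05, 2048)

# Two placeholder modes: similar frequencies, noticeably different decay rates.
modes = [
    (1.0, 250.0, 0.004, 0.0),
    (0.4, 270.0, 0.0015, 1.0),
]
h = ringdown(t, modes)
```

In the actual analysis, the mode parameters are inferred from noisy detector data, and it is the combination of close frequencies and different decay rates that makes the two tones hard, but now possible, to tell apart.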

Another study from the LVK, submitted to Physical Review Letters today, places limits on a predicted third, higher-pitched tone in the GW250114 signal, and performs some of the most stringent tests yet of general relativity’s accuracy in describing merging black holes.

“A decade of improvements allowed us to make this exquisite measurement,” Chatziioannou says. “It took both of our detectors, in Washington and Louisiana, to do this. I don’t know what will happen in 10 more years, but in the first 10 years, we have made tremendous improvements to LIGO’s sensitivity. This not only means we are accelerating the rate at which we discover new black holes, but we are also capturing detailed data that expand the scope of what we know about the fundamental properties of black holes.”

Jenne Driggers, detection lead senior scientist at LIGO Hanford, adds, “It takes a global village to achieve our scientific goals. From our exquisite instruments, to calibrating the data very precisely, vetting and providing assurances about the fidelity of the data quality, searching the data for astrophysical signals, and packaging all that into something that telescopes can read and act upon quickly, there are a lot of specialized tasks that come together to make LIGO the great success that it is.”

Pushing the limits

LIGO and Virgo have also unveiled neutron stars over the past decade. Like black holes, neutron stars form from the explosive deaths of massive stars, but they weigh less and glow with light. Of note, in August 2017, LIGO and Virgo witnessed an epic collision between a pair of neutron stars that produced a kilonova, an explosion that sent gold and other heavy elements flying into space and drew the gaze of dozens of telescopes around the world, which captured light ranging from high-energy gamma rays to low-energy radio waves. The “multi-messenger” astronomy event marked the first time that both light and gravitational waves had been captured in a single cosmic event. Today, the LVK continues to alert the astronomical community to potential neutron star collisions, and astronomers then use telescopes to search the skies for signs of kilonovae.

“The LVK has made big strides in recent years to make sure we’re getting high-quality data and alerts out to the public in under a minute, so that astronomers can look for multi-messenger signatures from our gravitational-wave candidates,” Driggers says.

“The global LVK network is essential to gravitational-wave astronomy,” says Gianluca Gemme, Virgo spokesperson and director of research at the National Institute of Nuclear Physics in Italy. “With three or more detectors operating in unison, we can pinpoint cosmic events with greater accuracy, extract richer astrophysical information, and enable rapid alerts for multi-messenger follow-up. Virgo is proud to contribute to this worldwide scientific endeavor.”

Other LVK scientific discoveries include the first detection of collisions between one neutron star and one black hole; asymmetrical mergers, in which one black hole is significantly more massive than its partner; the discovery of the lightest black holes known, challenging the idea that there is a “mass gap” between neutron stars and black holes; and the most massive black hole merger seen yet, with a merged mass of 225 solar masses. For reference, the previous record holder for the most massive merger had a combined mass of 140 solar masses.

Even in the decades before LIGO began taking data, scientists were building foundations that made the field of gravitational-wave science possible. Breakthroughs in computer simulations of black hole mergers, for example, allow the team to extract and analyze the feeble gravitational-wave signals generated across the universe.

LIGO’s technological achievements, beginning as far back as the 1980s, include several far-reaching innovations, such as a new way to stabilize lasers using the so-called Pound–Drever–Hall technique. Invented in 1983 and named for contributing physicists Robert Vivian Pound, the late Ronald Drever of Caltech (a founder of LIGO), and John Lewis Hall, this technique is widely used today in other fields, such as the development of atomic clocks and quantum computers. Other innovations include cutting-edge mirror coatings that almost perfectly reflect laser light; “quantum squeezing” tools that enable LIGO to surpass sensitivity limits imposed by quantum physics; and new artificial intelligence methods that could further hush certain types of unwanted noise.

“What we are ultimately doing inside LIGO is protecting quantum information and making sure it doesn’t get destroyed by external factors,” Mavalvala says. “The techniques we are developing are pillars of quantum engineering and have applications across a broad range of devices, such as quantum computers and quantum sensors.”

In the coming years, the scientists and engineers of LVK hope to further fine-tune their machines, expanding their reach deeper and deeper into space. They also plan to use the knowledge they have gained to build another gravitational-wave detector, LIGO India. Having a third LIGO observatory would greatly improve the precision with which the LVK network can localize gravitational-wave sources.

Looking farther into the future, the team is working on a concept for an even larger detector, called Cosmic Explorer, which would have arms 40 kilometers long. (The twin LIGO observatories have 4-kilometer arms.) A European project, called Einstein Telescope, also has plans to build one or two huge underground interferometers with arms more than 10 kilometers long. Observatories on this scale would allow scientists to hear the earliest black hole mergers in the universe.

“Just 10 short years ago, LIGO opened our eyes for the first time to gravitational waves and changed the way humanity sees the cosmos,” says Aamir Ali, a program director in the NSF Division of Physics, which has supported LIGO since its inception. “There’s a whole universe to explore through this completely new lens and these latest discoveries show LIGO is just getting started.”

The LIGO-Virgo-KAGRA Collaboration

LIGO is funded by the U.S. National Science Foundation and operated by Caltech and MIT, which together conceived and built the project. Financial support for the Advanced LIGO project was led by NSF with Germany (Max Planck Society), the United Kingdom (Science and Technology Facilities Council), and Australia (Australian Research Council) making significant commitments and contributions to the project. More than 1,600 scientists from around the world participate in the effort through the LIGO Scientific Collaboration, which includes the GEO Collaboration. Additional partners are listed at my.ligo.org/census.php.

The Virgo Collaboration is currently composed of approximately 1,000 members from 175 institutions in 20 different (mainly European) countries. The European Gravitational Observatory (EGO) hosts the Virgo detector near Pisa, Italy, and is funded by the French National Center for Scientific Research, the National Institute of Nuclear Physics in Italy, the National Institute of Subatomic Physics in the Netherlands, The Research Foundation – Flanders, and the Belgian Fund for Scientific Research. A list of the Virgo Collaboration groups can be found on the project website.

KAGRA is a laser interferometer with 3-kilometer arms in Kamioka, Gifu, Japan. The host institute is the Institute for Cosmic Ray Research of the University of Tokyo, and the project is co-hosted by the National Astronomical Observatory of Japan and the High Energy Accelerator Research Organization. The KAGRA collaboration is composed of more than 400 members from 128 institutes in 17 countries/regions. KAGRA’s information for general audiences is at the website gwcenter.icrr.u-tokyo.ac.jp/en/. Resources for researchers are accessible at gwwiki.icrr.u-tokyo.ac.jp/JGWwiki/KAGRA.

Study explains how a rare gene variant contributes to Alzheimer’s disease

Wed, 09/10/2025 - 11:00am

A new study from MIT neuroscientists reveals how rare variants of a gene called ABCA7 may contribute to the development of Alzheimer’s in some of the people who carry them.

Dysfunctional versions of the ABCA7 gene, which are found in a very small proportion of the population, contribute strongly to Alzheimer’s risk. In the new study, the researchers discovered that these mutations can disrupt the metabolism of lipids that play an important role in cell membranes.

This disruption makes neurons hyperexcitable and leads them into a stressed state that can damage DNA and other cellular components. These effects, the researchers found, could be reversed by treating neurons with choline, an important building block needed to make cell membranes.

“We found pretty strikingly that when we treated these cells with choline, a lot of the transcriptional defects were reversed. We also found that the hyperexcitability phenotype and elevated amyloid beta peptides that we observed in neurons that lost ABCA7 was reduced after treatment,” says Djuna von Maydell, an MIT graduate student and the lead author of the study.

Li-Huei Tsai, director of MIT’s Picower Institute for Learning and Memory and the Picower Professor in the MIT Department of Brain and Cognitive Sciences, is the senior author of the paper, which appears today in Nature.

Membrane dysfunction

Genomic studies of Alzheimer’s patients have found that people who carry variants of ABCA7 that generate reduced levels of functional ABCA7 protein have about double the odds of developing Alzheimer’s compared with people who don’t have those variants.

ABCA7 encodes a protein that transports lipids across cell membranes. Lipid metabolism is also the primary target of a more common Alzheimer’s risk factor known as APOE4. In previous work, Tsai’s lab has shown that APOE4, which is found in about half of all Alzheimer’s patients, disrupts brain cells’ ability to metabolize lipids and respond to stress.

To explore how ABCA7 variants might contribute to Alzheimer’s risk, the researchers obtained tissue samples from the Religious Orders Study/Memory and Aging Project (ROSMAP), a longitudinal study that has tracked memory, motor, and other age-related changes in older people since 1994. Of about 1,200 samples in the dataset that had genetic information available, the researchers obtained 12 from people who carried a rare variant of ABCA7.

The researchers performed single-cell RNA sequencing of neurons from these ABCA7 carriers, allowing them to determine which other genes are affected when ABCA7 is missing. They found that the most significantly affected genes fell into three clusters related to lipid metabolism, DNA damage, and oxidative phosphorylation (the metabolic process that cells use to capture energy as ATP).

To investigate how those alterations could affect neuron function, the researchers introduced ABCA7 variants into neurons derived from induced pluripotent stem cells.

These cells showed many of the same gene expression changes as the cells from the patient samples, especially among genes linked to oxidative phosphorylation. Further experiments showed that the “safety valve” that normally lets mitochondria limit excess build-up of electrical charge was less active. This can lead to oxidative stress, a state that occurs when too many cell-damaging free radicals build up in tissues.

Using these engineered cells, the researchers also analyzed the effects of ABCA7 variants on lipid metabolism. Cells with the variants showed altered metabolism of a molecule called phosphatidylcholine, which could lead to membrane stiffness and may explain why the cells’ mitochondrial membranes were unable to function normally.

A boost in choline

Those findings raised the possibility that intervening in phosphatidylcholine metabolism might reverse some of the cellular effects of ABCA7 loss. To test that idea, the researchers treated neurons with ABCA7 mutations with a molecule called CDP-choline, a precursor of phosphatidylcholine.

As these cells began producing new phosphatidylcholine (both saturated and unsaturated forms), their mitochondrial membrane potentials also returned to normal, and their oxidative stress levels went down.

The researchers then used induced pluripotent stem cells to generate 3D tissue organoids made of neurons with the ABCA7 variant. These organoids developed higher levels of amyloid beta proteins, which form the plaques seen in the brains of Alzheimer’s patients. However, those levels returned to normal when the organoids were treated with CDP-choline. The treatment also reduced neurons’ hyperexcitability.

In a 2021 paper, Tsai’s lab found that CDP-choline treatment could also reverse many of the effects of another Alzheimer’s-linked gene variant, APOE4, in mice. She is now working with researchers at the University of Texas MD Anderson Cancer Center on a clinical trial exploring how choline supplements affect people who carry the APOE4 gene.

Choline is naturally found in foods such as eggs, meat, fish, and some beans and nuts. Boosting choline intake with supplements may offer a way for many people to reduce their risk of Alzheimer’s disease, Tsai says.

“From APOE4 to ABCA7 loss of function, my lab demonstrates that disruption of lipid homeostasis leads to the development of Alzheimer’s-related pathology, and that restoring lipid homeostasis, such as through choline supplementation, can ameliorate these pathological phenotypes,” she says.

In addition to the rare variants of ABCA7 that the researchers studied in this paper, there is also a more common variant that is found at a frequency of about 18 percent in the population. This variant was thought to be harmless, but the MIT team showed that cells with this variant exhibited many of the same gene alterations in lipid metabolism that they found in cells with the rare ABCA7 variants.

“There’s more work to be done in this direction, but this suggests that ABCA7 dysfunction might play an important role in a much larger part of the population than just people who carry the rare variants,” von Maydell says.

The research was funded, in part, by the Cure Alzheimer’s Fund, the Freedom Together Foundation, the Carol and Gene Ludwig Family Foundation, James D. Cook, and the National Institutes of Health.
