MIT Latest News
For one learner, online MIT courses are “like getting a Ferrari for the price of an electric scooter”
As a professional mechanical engineer, Badri Ratnam was inspired when MIT started offering massive open online courses (MOOCs) in engineering and science in 2012. He wondered if he was up to the challenge of solving problem sets and successfully completing exams from MIT.
Ratnam began his journey with the course 8.MReVx (Mechanics ReView), and he hasn’t looked back since. As he grew in his career in mechanical design and computer-aided engineering, he also completed nearly 40 MITx courses in physics, mechanical engineering, and materials science.
Part of MIT Open Learning, MITx offers free online courses across a wide variety of subjects to learners around the world. Learners may also opt for the certificate track for a low fee.
Ratnam has worked for companies such as Freudenberg e-Power Systems, Siemens, GE, and Westport Fuel Systems. His continued learning through MITx courses, as well as courses offered by other universities, has expanded his expertise to include areas such as physics, mechanics of materials, transport phenomena, failure and root cause analysis, validation and verification testing, vibration signal processing, certification and compliance, statistical quality control, manufacturing, reliability, supplier selection, and more.
“There are many different learning styles,” says Ratnam. “Some people might need to be in a classroom, and others might be able to learn entirely on their own from a textbook. Personally, I benefit from some amount of structure, including having timelines and deadlines, as well as assignments and discussion forums. With MITx, there is also the excitement of the rigor that can be a boost of adrenaline — trying to see whether you can tackle some of the toughest material, presented by a top institution.”
Supplementing engineering education with extensive course offerings
Ratnam earned a bachelor’s degree in engineering from the University of Delhi. He says during his undergraduate program he tended to study the night before exams, and was “more focused on passing the subject than deep learning.”
He followed his undergrad studies with a master of science degree in mechanical engineering from the University of South Florida and an MS in computational and applied mathematics from Simon Fraser University in British Columbia. Even with all of his degrees, he felt that he needed to revisit the engineering subjects he had initially learned as an undergraduate student, pursuing online courses to review the fundamentals and gain greater understanding and mastery.
The MITx courses Ratnam has taken have covered many different areas within engineering, physics, mathematics, supply chains, and manufacturing. He has recently completed Vibrations and Waves, taught by Yen-Jie Lee, Alex Shvonski, and Michelle Tomasik.
“It’s an 18-week class with over 40 lessons, 13 assignments, and three exams, all designed very deliberately. I don’t think I could have ever learned this very difficult subject without this structure,” says Ratnam. “It’s also important to note that I paid less than $100 for this class. MITx does not follow the dictum that ‘you get what you pay for.’ It’s like getting a Ferrari for the price of an electric scooter.”
Ratnam has also recently finished Information Entropy: Energy and Exergy, taught by former MIT Open Learning dean for digital learning Krishna Rajagopal, Peter Dourmashkin, and Aidan MacDonagh, as well as Shvonski and Tomasik.
Although Ratnam says he can’t pick a favorite course — and is hard-pressed to even pick a few favorites of the many MITx courses he has taken — he says he has especially liked these recent courses and Elements of Structures, taught by Alexie M. Kolpak and Simona Socrate. In addition to the many MITx courses he has taken, he has also completed a few MIT Professional Education programs in smart manufacturing and design.
“As I’ve taken more and more courses, I’ve learned to never fear learning new things and exploring new areas,” says Ratnam. “I used to think of unfamiliar subjects and feel a little terrified, not knowing where to start, but I don’t feel that anymore. I know that with some time and effort, I can pick up new skills and knowledge.”
Ratnam has found the discussion forums for MITx courses to be especially useful to the learning process.
“This is where the rigorous, engaging, yet automated, courses come to life,” says Ratnam. “Learners from all over the world help each other in the problem sets and discuss their conceptual doubts. And the forums are diligently monitored by MIT staff to ensure there are no open questions, and all errors are corrected.”
Increasing value in the workplace
Ratnam says that his MITx studies have deepened his understanding of a variety of engineering topics, which have given him new insights to apply as an engineer.
“My learnings from MITx courses have really helped me gain the confidence of having a deep understanding on the theoretical side,” says Ratnam. “I’ve developed a wide base of knowledge and have become the go-to person whom people come to with questions.”
Ratnam has found MITx to be an excellent professional development resource. He notes that while many professionals have access to and complete courses offered at or through their workplaces, these usually aim to enable people to complete a very specific goal — such as performing a set task at work — within a short period of time. He says that with online courses, it’s a much different timeline and result.
“MITx classes have provided me with a much broader overview of engineering phenomena,” says Ratnam. “The benefit of the classes might not always come immediately. It can be a long gestation period for the information to all gel together. It’s much more of a profound and long-term benefit.”
Explore lifelong learning opportunities from the Institute, including online courses, resources, and professional programs, on MIT Learn.
New catalog more than doubles the number of gravitational-wave detections made by LIGO, Virgo, and KAGRA observatories
When the densest objects in the universe collide and merge, the violence sets off ripples, in the form of gravitational waves, that reverberate across space and time, over hundreds of millions and even billions of years. By the time they pass through Earth, such cosmic ripples are barely discernible.
And yet, scientists are able to detect them, thanks to a global network of gravitational-wave observatories: the U.S.-based National Science Foundation Laser Interferometer Gravitational-Wave Observatory (NSF LIGO), the Virgo interferometer in Italy, and the Kamioka Gravitational Wave Detector (KAGRA) in Japan. Together, the observatories “listen” for faint wobbles in the gravitational field that could have come from far-off astrophysical smash-ups.
Now the LIGO-Virgo-KAGRA (LVK) Collaboration is publishing its latest compilation of gravitational-wave detections, presented in a forthcoming special issue of Astrophysical Journal Letters. From the findings, it appears that the universe is echoing all over with a kaleidoscope of cosmic collisions.
The LVK’s Gravitational-Wave Transient Catalog-4.0 (GWTC-4) comprises detections of gravitational waves from a portion of the observatories’ fourth and most recent observing run, which occurred between May 2023 and January 2024. During this nine-month period, the observatories detected 128 new gravitational-wave “candidates,” meaning that the signals are likely from extreme, far-off astrophysical sources. (The LVK has detected about 300 mergers so far in the fourth run, but not all of them appear in the catalog yet.)
This newest crop more than doubles the size of the gravitational-wave catalog, which previously contained 90 candidates compiled from all three previous observing runs.
“The beautiful science that we are able to do with this catalog is enabled by significant improvements in the sensitivity of the gravitational-wave detectors as well as more powerful analysis techniques,” says LVK member Nergis Mavalvala, who is dean of the MIT School of Science and the Curtis and Kathleen Marble Professor of Astrophysics.
“In the past decade, gravitational wave astronomy has progressed from the first detection to the observation of hundreds of black hole mergers,” says Stephen Fairhurst, a professor at Cardiff University and LIGO Scientific Collaboration spokesperson. “These observations enable us to better understand how black holes form from the collapse of massive stars, probe the cosmological evolution of the universe and provide increasingly rigorous confirmations of the theory of general relativity.”
“Pushing the edges”
Black holes are created when all the matter in a dying star collapses into a single point, making them among the densest objects in the universe. They often form in pairs, bound together by gravitational attraction. As the pair spirals inward, the black holes emit enormous amounts of energy in the form of gravitational waves before merging into a single, more massive black hole.
A binary black hole was the source of the very first gravitational-wave detection, made by NSF’s LIGO observatories in 2015, and colliding black holes are the source of many of the gravitational waves detected since then. Such “bread-and-butter” binaries typically consist of two black holes of similar size (usually several tens of times more massive than the sun) that merge into one larger black hole.
Gravitational waves can also be produced by the collision of a black hole with a neutron star, which is an extremely dense remnant core of a massive star. While the collision of two black holes only produces gravitational waves, a smash-up involving a neutron star can also generate light, which provides more information about the event that scientists can probe. In its first three observing runs, the LVK observatories detected signals from a handful of collisions involving a black hole and neutron star, as well as two collisions between two neutron stars.
The newest detections published today reveal a greater variety of binaries that produce gravitational waves. In addition to the typical black hole binaries, the updated catalog includes the heaviest black hole binary observed to date; a binary with markedly lopsided masses; and a binary in which both black holes have exceptionally high spins. The catalog also holds two black hole-neutron star binaries.
“The message from this catalog is: We are expanding into new parts of what we call ‘parameter space’ and a whole new variety of black holes,” says co-author Daniel Williams, a research fellow at the University of Glasgow and a member of the LVK. “We are really pushing the edges, and are seeing things that are more massive, spinning faster, and are more astrophysically interesting and unusual.”
Unusual signals
The LIGO, Virgo, and KAGRA observatories detect gravitational waves using L-shaped, kilometer-scale instruments called interferometers. Scientists send laser light down the length of each arm and precisely measure the time it takes each beam to return to its source. Any slight difference in the beams’ timing can mean that a gravitational wave passed through and minutely stretched and squeezed the arms.
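The scale of the measurement is staggering. For a typical detected strain of about one part in 10^21 (an illustrative figure for past detections, not a value from this catalog), a back-of-the-envelope calculation shows how little a 4-kilometer arm actually moves:

```python
# Back-of-the-envelope scale of an interferometer measurement.
# A passing gravitational wave changes an arm's length by roughly
# delta_L = h * L, where h is the dimensionless strain.
# STRAIN below is an illustrative, typical published value.

ARM_LENGTH_M = 4_000   # LIGO's arms are 4 kilometers long
STRAIN = 1e-21         # typical strain amplitude of a detected event

delta_L = STRAIN * ARM_LENGTH_M
print(f"Arm length change: {delta_L:.1e} m")  # far smaller than a proton's width
```

That change, on the order of 10^-18 meters, is thousands of times smaller than the width of a proton.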
For the first segment of the LVK’s fourth observing run, gravitational-wave detections were made using only LIGO’s identical interferometers — one located in Hanford, Washington, and the other in Livingston, Louisiana. Recent upgrades to LIGO’s detectors enabled them to search for signals from binary neutron stars as far out as 360 megaparsecs, or about 1 billion light-years away, and for signals from binaries including black holes tens of times farther away.
“You can’t ever predict when a gravitational wave is going to come into your detector,” says co-author and LVK member Amanda Baylor, a graduate student at the University of Wisconsin-Milwaukee who was involved in the signal search process. “We could have five detections in one day, or one detection every 20 days. The universe is just so random.”
Among the more unusual signals that LIGO detected in the first phase of the O4 observing run was GW231123_135430, which is the heaviest black hole binary detected to date. Scientists estimate that the signal arose from the collision of two heavier-than-normal black holes, each roughly 130 times as massive as the sun. (Most of the detected merging black holes are around 30 solar masses.) The much heavier black holes of GW231123_135430 suggest that each may be a product of a prior collision of lighter “progenitor” black holes.
Another standout is GW231028_153006, which is a black hole binary with the highest inspiral spin, meaning that both black holes appear to be spinning very fast, at about 40 percent the speed of light. Again, scientists suspect that these black holes were also products of previous mergers that spun them up as they were created from two smaller, inspiraling black holes.
The O4 run also detected GW231118_005626 — an unusually lopsided pair, with one black hole twice as massive as the other.
“One of the striking things about our collection of black holes is their broad range of properties,” says co-author LVK member Jack Heinzel, an MIT graduate student who contributed to the catalog’s analysis. “Some of them are over 100 times the mass of our sun, others are as small as only a few times the mass of the sun. Some black holes are rapidly spinning, others have no measurable spin. We still don’t completely understand how black holes form in the universe, but our observations offer a crucial insight into these questions.”
Cosmic connections
From the newest gravitational-wave detections, scientists have begun to make connections about the properties of black holes as a population.
“For instance, this dataset has increased our belief that black holes that collided earlier in the history of the universe could more easily have had larger spins than the ones that collided later,” says LVK member Salvatore Vitale, associate professor of physics at MIT and member of the MIT LIGO Lab.
This idea raises interesting questions about what sort of conditions could have spun up black holes in the early universe.
The new detections have also allowed scientists to test Albert Einstein’s general theory of relativity, which describes gravity as a geometric property of space and time.
“Black holes are one of the most iconic and mind-bending predictions of general relativity,” says co-author and LVK member Aaron Zimmerman, associate professor of physics at the University of Texas at Austin, adding that when black holes collide, they “shake up space and time more intensely than almost any other process we can imagine observing. When testing our physical theories, it’s good to look at the most extreme situations we can, since this is where our theories are most likely to break down, and where we have the best chance of discovery.”
Scientists put Einstein’s theory to the test using GW230814_230901, which is one of the “loudest” gravitational-wave signals observed to date. The surprisingly clear signal gave scientists a chance to probe it in detail, to see if any aspects of the signal might deviate from what Einstein’s theory predicts. This signal pushed the limits of their tests of general relativity, passing most with flying colors but illustrating how environmental noise can challenge others in such an extreme scenario.
“So far, the theory is passing all our tests,” Zimmerman says. “But we’re also learning that we have to make even more accurate predictions to keep up with all the data the universe is giving us.”
The updated catalog is also helping scientists to nail down a key mystery in cosmology: How fast is the universe expanding today? Scientists have tried to answer this by measuring a rate known as the Hubble constant. Various methods, using different astrophysical sources, have given conflicting answers.
Gravitational waves offer an alternative way to measure the Hubble constant, since scientists are able to work out, in relatively straightforward fashion, how far these waves traveled from their source.
“Merging black holes have a really unique property: We can tell how far away they are from Earth just from analyzing their signals,” says co-author and LVK member Rachel Gray, a lecturer at the University of Glasgow who was involved in the cosmological interpretations of the catalog’s data. “So, every merging black hole gives us a measurement of the Hubble constant, and by combining all of the gravitational wave sources together, we can vastly improve how accurate this measurement is.”
By analyzing all the gravitational-wave detections in the LVK’s entire catalog, scientists have come up with a new, independent estimate of the Hubble constant, which suggests the universe is expanding at a rate of 76 kilometers per second per megaparsec (a megaparsec is about 3.26 million light-years).
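The arithmetic behind this “standard siren” method can be sketched with illustrative numbers. The events, distances, and redshifts below are made up for demonstration (the real LVK analysis is a full statistical treatment), but they show the core idea: the gravitational-wave signal yields a distance, the host galaxy’s redshift yields a recession velocity, and their ratio estimates the Hubble constant:

```python
import numpy as np

C_KM_S = 299_792.458  # speed of light in km/s

# Hypothetical (made-up) events: luminosity distance in megaparsecs,
# inferred from the gravitational-wave signal, plus a host-galaxy redshift.
events = [
    {"distance_mpc": 40.0,  "redshift": 0.0099},
    {"distance_mpc": 120.0, "redshift": 0.0305},
    {"distance_mpc": 250.0, "redshift": 0.0640},
]

# At low redshift, recession velocity ~ c * z, so H0 ~ c * z / d.
h0_samples = [C_KM_S * e["redshift"] / e["distance_mpc"] for e in events]
h0_estimate = np.mean(h0_samples)
print(f"H0 estimate: {h0_estimate:.1f} km/s/Mpc")
```

Combining many events, as Gray describes, averages down the uncertainty of any single measurement.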
“It’s still early days for this method, and we expect to significantly improve our precision as we detect more gravitational wave sources,” Gray says.
“Each new gravitational-wave detection allows us to unlock another piece of the universe’s puzzle in ways we couldn’t just a decade ago,” says Lucy Thomas, who led part of the catalog’s analysis, and is a postdoc in the Caltech LIGO Lab. “It’s incredibly exciting to think about what astrophysical mysteries and surprises we can uncover with future observing runs.”
Nitrous oxide, a product of fertilizer use, may harm some soil bacteria
Plant growth is supported by millions of tiny soil microbes competing and cooperating with each other as they perform important roles at the plant root, including improving access to nutrients and protecting against pathogens. As a byproduct of their metabolism, soil microbes can also produce nitrous oxide, or N2O, a potent greenhouse gas that has mostly been studied for its impact on the climate. While some N2O occurs naturally, its production can spike due to fertilizer application and other factors.
While it has long been believed that nitrous oxide doesn’t meaningfully interact with living organisms, a new paper by two MIT researchers shows that it may in fact shape microbial communities, making some bacterial strains more likely to grow than others.
Based on the prevalence of the biological processes disrupted by nitrous oxide, the researchers estimate about 30 percent of all bacteria with sequenced genomes are susceptible to nitrous oxide toxicity, suggesting the substance could play an important and underappreciated role in the intricate microbial ecosystems that influence plant growth.
The researchers have published their findings today in mBio, a journal of the American Society for Microbiology. If their lab findings carry over to agricultural settings, it could influence the way farmers go about everyday tasks that expose crops to spikes in nitrous oxide, such as watering and fertilization.
“This work suggests N2O production in agricultural settings is worth paying attention to for plant health,” says senior author Darcy McRose, MIT’s Thomas D. and Virginia W. Cabot Career Development Professor, who wrote the paper with lead author and PhD student Philip Wasson. “It hasn’t been on people’s radar, but it is particularly harmful for certain microbes. This could be another knock against N2O in addition to its climate impact. With more research, you might be able to understand how the timing of N2O production influences these microbial relationships, and that timing could be managed to improve crop health.”
A toxic gas
Nitrous oxide was shown to be toxic decades ago when researchers realized it can deactivate vitamin B12 in the human body. Since then, it has mostly drawn attention as a long-lived greenhouse gas that can eat away at the ozone layer. But when it comes to agricultural settings, most people have assumed it doesn’t interact with organisms growing in the soil around the plant root, a region called the rhizosphere.
“In general, there’s an assumption that N2O is not harmful at all despite this history of published studies showing that it can be toxic in specific contexts,” says McRose, who joined the faculty of the Department of Civil and Environmental Engineering in 2022. “People have not extended that understanding to microbial communities in the rhizosphere.”
While some studies have shown nitrous oxide sensitivity in a handful of microorganisms, less is known about how it impacts the distribution of microbial communities at the plant root. McRose and Wasson sought to fill that research gap.
They started by looking at a ubiquitous process that cells use to grow called methionine biosynthesis. Methionine biosynthesis can be carried out by enzymes that are dependent on B12 — and by other enzymes that are not. Many bacteria have both types.
Using a well-studied microbe named Pseudomonas aeruginosa, the researchers genetically removed the enzyme that isn’t dependent on B12 and found the microbe became sensitive to nitrous oxide, with its growth harmed even by nitrous oxide it produced itself.
Next, the researchers looked at a synthetic microbial community from the plant Arabidopsis thaliana and found that many root-based microbes were also sensitive to nitrous oxide. Combining sensitive microbes with nitrous oxide-producing bacteria hampered the sensitive strains’ growth.
“This suggests that N2O-producing bacteria can affect the survival of their immediate neighbors,” Wasson explains. Together, the experiments confirmed the researchers’ suspicion that the production of nitrous oxide can hamper the growth of soil bacteria dependent on vitamin B12 to make methionine.
“These results suggest nitrous oxide producers shape microbial communities,” McRose says. “In the lab the result is very clear, and the work goes beyond just looking at a single organism. The co-culture experiments aren’t the same as a study in the field, but it’s a strong demonstration.”
From the lab to the farm
In farms, soil commonly experiences spikes of nitrous oxide for days or weeks from the addition of nitrogen fertilizer, rainfall, thawing, and other events. The researchers caution that their lab experiments are only the first step toward understanding how nitrous oxide affects microbial populations in agricultural settings.
Wasson calls the paper a proof of concept and plans to study agricultural soil next.
“In agricultural environments, N2O has been historically high,” Wasson says. “We want to see if we can detect a signature for this N2O exposure through genome sequencing studies, where the only microbes sticking around are not sensitive to N2O. This is the obvious next step.”
McRose says the findings could lead to a new way for researchers and farmers to think about nitrous oxide.
“What’s important and exciting about this case is it predicts that microbes with one version of an enzyme are going to be sensitive to N2O and those with a different version of the enzyme are not going to be sensitive,” McRose says. “This suggests that in the environment, exposure to N2O is going to select for certain types of organisms based on their genomic content, which is a highly testable hypothesis.”
The work was supported, in part, by the MIT Research Support Committee and an MIT Health and Life Sciences Collaborative Graduate Fellowship (HEALS).
How some skills become second nature
Expertise isn’t easy to pass down. Take riding a bike: A seasoned cyclist might talk a beginner through the basics of how to sit and when to push off. But other skills, like how hard to pedal to keep balanced, are more intuitive and harder to articulate. This implicit know-how is known as tacit knowledge, and very often, it can only be learned with experience and time.
But a team of MIT engineers wondered: Could an expert’s unconscious know-how be accessed, and even taught, to quickly bring a novice up to an expert’s level?
The answer appears to be “yes,” at least for a particular type of visual-learning task.
In a study published today in the Journal of Neural Engineering, the engineers identified tacit knowledge in volunteers who were tasked with classifying images of various shapes and patterns. As the volunteers were shown images to organize, the team recorded their eye movements and brain activity to measure their visual focus and cognitive attention, respectively.
The measurements showed that, over time, the volunteers shifted their focus and attention to a part of each image that made it easier to classify. However, when asked directly, the volunteers were not aware that they had made such a shift. The researchers concluded that this unconscious shift in attention and focus was a form of tacit knowledge that the volunteers possessed, even if they could not articulate it. What’s more, when the volunteers were made aware of this tacit knowledge, their accuracy in classifying images improved significantly.
The study is the first to directly show that visual attention can reveal unconscious, tacit knowledge during image classification tasks. It also finds for the first time that bringing this concealed knowledge to the surface can enhance experts’ performance.
While the results are specific to the study’s experiment, the researchers say they suggest that some forms of hidden know-how can be made explicit and applied to boost one’s learning experience. They suspect that tacit knowledge could be accessed for disciplines that require keen observation skills, including certain physical trades and crafts, sports, and image analysis, such as medical X-ray diagnoses.
“We as humans have a lot of knowledge, some that is explicit that we can translate into books, encyclopedias, manuals, equations. The tacit knowledge is what we cannot verbalize, that’s hidden in our unconscious,” says study author Alex Armengol-Urpi, a research scientist in MIT’s Department of Mechanical Engineering. “If we can make that knowledge explicit, we can then allow for it to be transferred easier, which can help in education and learning in general.”
The study’s co-authors include Andrés F. Salazar-Gomez, research scientist at the MIT Media Lab; Pawan Sinha, professor of vision and computational neuroscience in MIT’s Department of Brain and Cognitive Sciences; and Sanjay Sarma, the Fred Fort Flowers (1941) and Daniel Fort Flowers (1941) Professor in Mechanical Engineering.
Hidden gaze
The concept of tacit knowledge is credited to the scientist and philosopher Michael Polanyi, who in the mid-20th century was the first to investigate the notion that “we know more than we can tell.” His insights revealed that humans can hold a form of knowledge that is internalized, almost second nature, and often difficult to express or translate to others.
Since Polanyi’s work, many studies have highlighted how tacit knowledge may play a part in perfecting certain skills, spanning everything from diagnosing medical images to discerning the sex of cats from images of their faces.
For Armengol-Urpi, these studies raised a question: Could a person’s tacit knowledge be revealed through unconscious signals, such as patterns in their eye movements? His PhD work focused on visual attention, and he had developed methods to study how humans focus their attention, by using cameras to follow the direction of their gaze, and electroencephalography (EEG) monitors to record their brain activity. In his research, he learned of a previous study that used similar methods to investigate how radiologists diagnose nodules in X-ray images. That study showed that the doctors unconsciously focused on areas of an image that helped them to correctly detect the nodules.
“That paper didn’t focus on tacit knowledge, but it suggested that there are some hidden clues in our gaze that could be explored further,” Armengol-Urpi says.
The shape of knowledge
For their new study, the team looked at whether they could identify signs of tacit knowledge from measurements of visual focus and attention. In their experiment, they asked 30 volunteers to look sequentially at over 120 images. The volunteers could study each image for several seconds and were then asked to classify it as belonging to either group A or group B before being shown the next image.
Each image contained two simple shapes, one on either side — a square, a triangle, or a circle, in various combinations of color and pattern. The researchers designed the images so that each belonged to one of two groups, based on an intricate combination of shape, color, and pattern. Importantly, only one side of each image was relevant for the classification.
The volunteers, however, were given no guidelines on how to classify the images. Therefore, for about the first half of the experiment, they were considered “novices,” and more or less guessed at their classifications. Over time, and many more images, their accuracy improved to a level that the researchers considered “expert.” Throughout the experiment, the team used cameras to follow each participant’s eye movements, as a measure of visual focus.
They also outfitted volunteers with EEG sensors to record their brain waves, which they used as a measure of cognitive attention. They designed each image to show two shapes, each of which flickered at different, imperceptible frequencies. They found they could identify where a volunteer’s attention landed, based on which shape’s flicker their brain waves synced up with.
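The frequency-tagging idea behind this measurement can be illustrated with a toy simulation. The sampling rate, flicker frequencies, and amplitudes below are assumptions chosen for demonstration, not values from the study: a signal synced to the attended shape’s flicker dominates the spectrum, and a frequency analysis recovers which shape was attended:

```python
import numpy as np

FS = 250           # EEG sampling rate in Hz (assumed)
DURATION = 4.0     # seconds of recording
FREQ_LEFT, FREQ_RIGHT = 12.0, 15.0  # flicker frequencies of the two shapes

t = np.arange(0, DURATION, 1 / FS)
rng = np.random.default_rng(0)

# Simulated EEG: brain waves sync most strongly with the attended
# (left) shape's flicker, buried in noise. Real EEG decoding is far
# more involved; this only illustrates the frequency-tagging idea.
eeg = (1.0 * np.sin(2 * np.pi * FREQ_LEFT * t)
       + 0.2 * np.sin(2 * np.pi * FREQ_RIGHT * t)
       + 0.5 * rng.standard_normal(t.size))

# Compare spectral power at the two tagged frequencies.
spectrum = np.abs(np.fft.rfft(eeg)) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / FS)

def power_at(f):
    return spectrum[np.argmin(np.abs(freqs - f))]

attended = "left" if power_at(FREQ_LEFT) > power_at(FREQ_RIGHT) else "right"
print(f"Decoded attention: {attended} shape")
```

Because the flicker rates are imperceptible, the volunteers cannot deliberately game the measurement, which is what makes it a window into unconscious attention.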
For each volunteer, the team created maps of where their gaze and attention were focused, both during their novice and expert phases. Overall, these maps showed that in the beginning, the volunteers focused on all parts of an image as they tried to make sense of how to classify it. Toward the end, as they got a grasp of the exercise and improved their accuracy, their attention shifted to just one side of each image. This side happened to be the side that the researchers designed to be most relevant, while the other side was just random noise.
The maps showed that the volunteers picked up some knowledge of how to accurately classify the images. But when they were surveyed and asked to articulate how they had learned the task, they maintained that they had focused on each image in its entirety. Their actual shift in focus, it seemed, was an unconscious, tacit skill.
“They were unconsciously focusing their attention on the part of the image that was actually informative,” Armengol-Urpi says. “So the tacit knowledge they had was hidden inside them.”
Going a step further, the team then showed each participant the maps of their gaze and attention, and how the maps changed from their novice to expert phases. When they were then shown additional images, the volunteers seemed to use this once-tacit knowledge, and further improved their classification accuracy.
“We are currently extending this approach to other domains where tacit knowledge plays a central role,” says Armengol-Urpi, who is exploring tacit knowledge in skilled crafts and sports such as glassblowing and table tennis, as well as in diagnosing medical imaging. “We believe the underlying principle — capturing and reinforcing implicit expertise through physiological signals — can generalize to a wide range of perceptual and skill-based domains.”
This research was supported, in part, by Takeda Pharmaceutical Company.
A “ChatGPT for spreadsheets” helps solve difficult engineering challenges faster
Many engineering challenges come down to the same headache — too many knobs to turn and too few chances to test them. Whether tuning a power grid or designing a safer vehicle, each evaluation can be costly, and there may be hundreds of variables that could matter.
Consider car safety design. Engineers must integrate thousands of parts, and many design choices can affect how a vehicle performs in a collision. Classic optimization tools can struggle to search for the best combination.
MIT researchers developed a new approach that rethinks how a classic method, known as Bayesian optimization, can be used to solve problems with hundreds of variables. In tests on realistic engineering-style benchmarks, like power-system optimization, the approach found top solutions 10 to 100 times faster than widely used methods.
Their technique leverages a foundation model trained on tabular data that automatically identifies the variables that matter most for improving performance, repeating the process to home in on better and better solutions. Foundation models are huge artificial intelligence systems trained on vast, general datasets. This allows them to adapt to different applications.
The researchers’ tabular foundation model does not need to be constantly retrained as it works toward a solution, increasing the efficiency of the optimization process. The technique also delivers greater speedups for more complicated problems, so it could be especially useful in demanding applications like materials development or drug discovery.
“Modern AI and machine-learning models can fundamentally change the way engineers and scientists create complex systems. We came up with one algorithm that can not only solve high-dimensional problems, but is also reusable so it can be applied to many problems without the need to start everything from scratch,” says Rosen Yu, a graduate student in computational science and engineering and lead author of a paper on this technique.
Yu is joined on the paper by Cyril Picard, a former MIT postdoc and research scientist, and Faez Ahmed, associate professor of mechanical engineering and a core member of the MIT Center for Computational Science and Engineering. The research will be presented at the International Conference on Learning Representations.
Improving a proven method
When scientists seek to solve a multifaceted problem but have expensive methods to evaluate success, like crash testing a car to know how good each design is, they often use a tried-and-true method called Bayesian optimization. This iterative method finds the best configuration for a complicated system by building a surrogate model that helps estimate what to explore next while considering the uncertainty of its predictions.
But the surrogate model must be retrained after each iteration, which can quickly become computationally intractable when the space of potential solutions is very large. In addition, scientists need to build a new model from scratch any time they want to tackle a different scenario.
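To make the loop concrete, here is a minimal sketch of classic Bayesian optimization on a toy one-dimensional problem. The Gaussian-process surrogate, lower-confidence-bound acquisition rule, and toy objective are all illustrative assumptions, not the researchers' implementation; the point to notice is that the kernel matrix is rebuilt from scratch on every iteration, which is exactly the retraining cost that balloons as the search space grows.

```python
import numpy as np

def rbf_kernel(A, B, length=0.15):
    """Squared-exponential kernel between two sets of 1-D points."""
    return np.exp(-0.5 * (A[:, None] - B[None, :]) ** 2 / length**2)

def gp_posterior(X, y, Xstar, noise=1e-5):
    """Exact GP posterior mean/variance -- refit from scratch on every call."""
    K = rbf_kernel(X, X) + noise * np.eye(len(X))
    Ks = rbf_kernel(X, Xstar)
    L = np.linalg.cholesky(K)                 # O(n^3) factorization, every round
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    v = np.linalg.solve(L, Ks)
    mean = Ks.T @ alpha
    var = np.clip(1.0 - np.sum(v * v, axis=0), 1e-12, None)
    return mean, var

def objective(x):
    """Stand-in for an expensive evaluation (e.g., one crash simulation)."""
    return (x - 0.3) ** 2

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 1.0, 4)              # a few initial "experiments"
y = objective(X)
grid = np.linspace(0.0, 1.0, 201)         # candidate designs to score

for _ in range(10):
    mean, var = gp_posterior(X, y, grid)  # surrogate retrained each iteration
    lcb = mean - 2.0 * np.sqrt(var)       # optimistic score: explore + exploit
    x_next = grid[np.argmin(lcb)]         # most promising candidate
    X = np.append(X, x_next)
    y = np.append(y, objective(x_next))   # spend one more costly evaluation

best = float(np.min(y))                   # should approach the optimum at x = 0.3
```

Each pass through the loop pays for one expensive evaluation plus a full refit of the surrogate; replacing the Gaussian process with a pretrained model that needs no refitting is what removes the second cost.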
To address both shortcomings, the MIT researchers utilized a generative AI system known as a tabular foundation model as the surrogate model inside a Bayesian optimization algorithm.
“A tabular foundation model is like a ChatGPT for spreadsheets. The input and output of these models are tabular data, which in the engineering domain is much more common to see and use than language,” Yu says.
Just like large language models such as ChatGPT, Claude, and Gemini, the model has been pre-trained on an enormous amount of tabular data. This makes it well-equipped to tackle a range of prediction problems. In addition, the model can be deployed as-is, without the need for any retraining.
To make their system more accurate and efficient for optimization, the researchers employed a trick that enables the model to identify features of the design space that will have the biggest impact on the solution.
“A car might have 300 design criteria, but not all of them are the main driver of the best design if you are trying to increase some safety parameters. Our algorithm can smartly select the most critical features to focus on,” Yu says.
It does this by using a tabular foundation model to estimate which variables (or combinations of variables) most influence the outcome.
It then focuses the search on those high-impact variables instead of wasting time exploring everything equally. For instance, if the size of the front crumple zone significantly increased and the car’s safety rating improved, that feature likely played a role in the enhancement.
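One way to picture this screening step: rank each design variable by how strongly it tracks the objective in the evaluations gathered so far, then concentrate the search on the top-ranked handful. The sketch below is a deliberately simplified stand-in: it scores importance with plain correlations on a synthetic 20-variable problem, whereas the actual system derives this signal from a pretrained tabular foundation model.

```python
import numpy as np

rng = np.random.default_rng(1)

def crash_score(X):
    """Toy objective: 20 design variables, but only #2 and #7 actually matter."""
    return 3.0 * X[:, 2] + 2.0 * X[:, 7] + 0.1 * rng.standard_normal(len(X))

# Evaluations collected so far during the optimization.
X = rng.uniform(0.0, 1.0, size=(200, 20))
y = crash_score(X)

# Score each dimension by |correlation| with the objective -- a crude proxy
# for the importance estimate a pretrained surrogate model would supply.
scores = np.abs(np.array(
    [np.corrcoef(X[:, j], y)[0, 1] for j in range(X.shape[1])]
))

# Restrict the search to the two highest-scoring variables; the remaining
# 18 dimensions can be held at their defaults while the search proceeds.
top_k = sorted(np.argsort(scores)[-2:].tolist())
```

In the full method, this ranking is refreshed as new evaluations arrive, so the set of variables the search focuses on can shift as the optimizer uncovers new structure in the problem.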
Bigger problems, better solutions
One of their biggest challenges was finding the best tabular foundation model for this task, Yu says. Then they had to connect it with a Bayesian optimization algorithm in such a way that it could identify the most prominent design features.
“Finding the most prominent dimension is a well-known problem in math and computer science, but coming up with a way that leveraged the properties of a tabular foundation model was a real challenge,” Yu says.
With the algorithmic framework in place, the researchers tested their method by comparing it to five state-of-the-art optimization algorithms.
On 60 benchmark problems, including realistic situations like power grid design and car crash testing, their method consistently found the best solution between 10 and 100 times faster than the other algorithms.
“When an optimization problem gets more and more dimensions, our algorithm really shines,” Yu says.
But their method did not outperform the baselines on all problems, such as robotic path planning. This likely indicates that such scenarios were not well-represented in the model’s training data, Yu says.
In the future, the researchers want to study methods that could boost the performance of tabular foundation models. They also want to apply their technique to problems with thousands or even millions of dimensions, like the design of a naval ship.
“At a higher level, this work points to a broader shift: using foundation models not just for perception or language, but as algorithmic engines inside scientific and engineering tools, allowing classical methods like Bayesian optimization to scale to regimes that were previously impractical,” says Ahmed.
“The approach presented in this work, using a pretrained foundation model together with high-dimensional Bayesian optimization, is a creative and promising way to reduce the heavy data requirements of simulation-based design. Overall, this work is a practical and powerful step toward making advanced design optimization more accessible and easier to apply in real-world settings,” says Wei Chen, the Wilson-Cook Professor in Engineering Design and chair of the Department of Mechanical Engineering at Northwestern University, who was not involved in this research.
Injectable “satellite livers” could offer an alternative to liver transplantation
More than 10,000 Americans who suffer from chronic liver disease are on a waitlist for a liver transplant, but there are not enough donated organs for all of those patients. Additionally, many people with liver failure aren’t eligible for a transplant if they are not healthy enough to tolerate the surgery.
To help those patients, MIT engineers have developed “mini livers” that could be injected into the body and take over the functions of the failing liver.
In a new study in mice, the researchers showed that these injected liver cells could remain viable in the body for at least two months, and they were able to generate many of the enzymes and other proteins that the liver produces.
“We think of these as satellite livers. If we could deliver these cells into the body, while leaving the sick organ in place, that would provide booster function,” says Sangeeta Bhatia, the John and Dorothy Wilson Professor of Health Sciences and Technology and of Electrical Engineering and Computer Science at MIT, and a member of MIT’s Koch Institute for Integrative Cancer Research and the Institute for Medical Engineering and Science (IMES).
Bhatia is the senior author of the new study, which appears today in the journal Cell Biomaterials. MIT postdoc Vardhman Kumar is the paper’s lead author.
Restoring liver function
The human liver plays a role in about 500 essential functions, including regulation of blood clotting, removing bacteria from the bloodstream, and metabolizing drugs. Most of these functions are performed by cells called hepatocytes.
Over the past decade, Bhatia’s lab has been working on ways to restore hepatocyte function without a surgical liver transplant. One possible approach is to embed hepatocytes into a biomaterial such as a hydrogel, but these gels also have to be surgically implanted.
Another option is to inject hepatocytes into the body, which eliminates the need for surgery. In this study, Bhatia’s lab sought to improve on this strategy by providing an engineered niche that could enhance the cells’ survival and facilitate noninvasive monitoring of graft health.
To achieve that, the researchers came up with the idea of injecting cells along with hydrogel microspheres that would help them stay together and form connections with nearby blood vessels. These spheres have special properties that allow them to act like a liquid when they are closely packed together, so they can be injected through a syringe and then regain their solid structure once inside the body.
In recent years, researchers have explored using hydrogel microspheres to promote wound healing, as they help cells to migrate into the spaces between the spheres and build new tissue. In the new study, the MIT team adapted them to help hepatocytes form a stable tissue graft after injection.
“What we did is use this technology to create an engineered niche for cell transplantation,” Kumar says. “If the cells are injected in the absence of these spheres, they would not integrate efficiently with the host, but these microspheres provide the hepatocytes with a niche where they can stay localized and become connected to the host circulation much faster.”
The injected mixture also includes fibroblast cells — supportive cells that help the hepatocytes survive and promote the growth of blood vessels into the tissue.
Working with Nicole Henning, an ultrasound research specialist at the Koch Institute, the researchers developed a way to inject the cell mixture using a syringe guided by ultrasound. After injection, the researchers can also use ultrasound to monitor the long-term stability of the implant.
In this study, the mini livers were injected into the fat tissue in the belly. In the future, similar grafts could be delivered to other sites in the body, such as into the spleen or near the kidneys. As long as they have enough space and access to blood vessels, the injected hepatocytes can function similarly to hepatocytes in the liver.
“For a vast majority of liver disorders, the graft does not need to sit close to the liver,” Kumar says.
An alternative to transplantation
In tests in mice, the researchers injected the mixture of liver cells and microspheres into an area of fatty tissue known as the perigonadal adipose tissue. Once the cells are localized in the body, they form a stable, compact structure. Over time, blood vessels begin to grow into the graft area, helping the injected hepatocytes to stay healthy.
“The new blood vessels formed right next to the hepatocytes, which is why they were able to survive,” Kumar says. “They were able to get the nutrients delivered right to them, they were able to function the way they're supposed to, and they produced the proteins that we expect them to.”
After injection, the cells remained viable and able to secrete specialized proteins into the host circulation for eight weeks, the length of the study. That suggests that the therapy could potentially work as a long-term treatment for liver disease, the researchers say.
“The way we see this technology is it can provide an alternative to surgery, but it can also serve as a bridge to transplantation where these grafts can provide support until a donor organ becomes available,” Kumar says. “And if we think they might need another therapy or more grafts, the barriers to do that are much less with this injectable technology than undergoing another surgery.”
With the current version of this technology, patients would likely need to take immunosuppressive drugs, but the researchers are exploring the possibility of developing “stealthy” hepatocytes that could evade the immune system, or using the hydrogel microspheres to deliver immunosuppressants locally.
The research was funded by the Koch Institute Support (core) grant from the National Cancer Institute, the National Institutes of Health, the Wellcome Leap HOPE Program, a National Science Foundation Graduate Research Fellowship, and the Howard Hughes Medical Institute.
LAB14 joins the MIT.nano Consortium
LAB14 GmbH, a corporate network based in Germany that unites eight high-tech companies focused on nanofabrication, microfabrication, and surface analysis, has joined the MIT.nano Consortium.
“The addition of LAB14 to the MIT.nano Consortium reinforces the importance of collaboration to advance the next set of great ideas,” says Vladimir Bulović, the founding faculty director of MIT.nano and the Fariborz Maseeh (1990) Professor of Emerging Technologies at MIT. “At MIT.nano, we are thrilled when our shared-access facility leads to cross-disciplinary discoveries. LAB14 carries this same motivation by assembling the constellation of remarkable interconnected industry partners.”
Comprising eight companies — Heidelberg Instruments, Nanoscribe, GenISys, Notion Systems, 40-30, Amcoss, SPECSGROUP, and Nanosurf — LAB14 is focused on developing products and services that are fundamental to micro- and nanofabrication technologies, supporting industrial and research-driven applications with complex manufacturing and analysis requirements.
The companies of LAB14 operate under a shared organizational structure that enables closer coordination in technology development. This setup allows for faster research progress and more efficient manufacturing workflows.
“Joining the MIT.nano Consortium marks a significant milestone for LAB14 and our companies,” says Martin Wynaendts van Resandt, CEO of LAB14. “This participation allows our network to collaborate directly with world-leading researchers, accelerating innovation in micro- and nanotechnology."
As part of this engagement, LAB14 will provide two pieces of equipment to be installed at MIT.nano within the coming year. The VPG 300 DI maskless stepper, a high-performance, direct-write system from Heidelberg Instruments, will be positioned inside MIT.nano’s cleanroom. This tool will allow MIT.nano users to pattern structures smaller than 500 nanometers directly onto wafers with accuracy and uniformity comparable to typical high-resolution i-line lithography. Equipped with advanced multi-layer alignment and mix-and-match functions, the VPG creates a seamless link between laser direct writing and e-beam lithography.
The EnviroMETROS X-ray photoelectron spectroscopy (XPS/HAXPES) tool by SPECSGROUP will join the suite of Characterization.nano instruments. This unique system specializes in nondestructive depth-profile measurements, using multiple X-ray energies to determine the thickness and chemical composition of thin-film samples with high precision. It supports various analyses across a wide pressure range, allowing MIT.nano users to examine thin-film materials under more realistic environmental conditions and to observe how they change during operation.
The MIT.nano Consortium is a platform for academia-industry collaboration, fostering research and innovation in nanoscale science and engineering. Consortium members gain unparalleled access to MIT.nano and its dynamic user community, providing opportunities to share expertise and guide advances in nanoscale technology.
MIT.nano continues to welcome new companies as sustaining members. For details, and to see a list of current members, visit the MIT.nano Consortium page.
Engineering confidence to navigate uncertainty
Flying on Mars — or any other world — is an extraordinary challenge. An autonomous spacecraft, operating millions of miles from pilots or engineers who could intervene on Earth, must be able to navigate unfamiliar and changing environments, avoid obstacles, land on uncertain terrain, and make decisions entirely on its own. Every maneuver depends on careful perception, planning, and control systems that are fault-tolerant, allowing the craft to recover if something goes wrong. A single miscalculation can leave a multimillion-dollar spacecraft face-down on the surface, ending the mission before it even begins.
“This problem is in no way solved, in industry or even in research settings,” says Nicholas Roy, the Jerome C. Hunsaker Professor in the MIT Department of Aeronautics and Astronautics (AeroAstro). “You’ve got to bring together a lot of pieces of code, software, and integrate multiple pieces of hardware. Putting those together is not trivial.”
Not trivial, but for students nearing the culmination of their Course 16 undergraduate careers, far from impossible. In class 16.85 Autonomy Capstone (Design and Testing of Autonomous Vehicles), students design, implement, deploy, and test a full software architecture for flying autonomous systems. These systems have wide-ranging applications, from urban air-mobility and reusable launch vehicles to extraterrestrial exploration. With robust autonomous technology, vehicles can operate far from home while engineers watch from mission control centers not too different from the high bay in AeroAstro’s Kresa Center for Autonomous Systems.
Roy and Jonathan How, Ford Professor of Engineering, developed the new course to build on the foundations of class 16.405 (Robotics: Science and Systems), which introduces students to working with complex robotic platforms and autonomous navigation through ground vehicles with pre-built software. 16.85 applies those same principles to flight, with a basic quadrotor drone and an entirely blank slate to build their own navigation systems. The vehicles are then tested on an obstacle course featuring dubious landing pads and uncertain terrain. Students work in large teams (for this first run, two teams of seven — the SLAMdunkers and the Spelunkers) designed to mirror real-world missions where coordination across roles is essential.
“The vehicles need to be able to differentiate between all these hidden risks that are in the mission and the environment that they’re in and still survive,” says How. “We really want the students to learn how to make a system that they have confidence in.”
Mission: Figure it out, together
“The specific mission we gave them this semester is to imagine that you are an aircraft of some kind, and you’ve got to go and explore the surface of an extraterrestrial body like Mars or the moon,” Roy explains. “You need to use onboard sensors to fly around and explore, build a map, identify interesting objects, and then land safely on what is probably not a flat surface, or not a perfectly horizontal surface.”
A mission of this magnitude is far too complex for any one engineer to tackle alone, but that too poses a challenge for a large team. “The hardest problems these days are coordination problems,” says Andrew Fishberg, a graduate student in the Aerospace Controls Laboratory and one of three teaching assistants (TAs) for the course. “To use the robotics term, a team of this size is something of a heterogeneous swarm. Not everyone has the same skill set, but everyone shows up with something to contribute, and managing that together is a challenge.”
The challenge asks students to apply multiple types of “systems thinking” to the task. Relationships, interdependencies, and feedback loops are critical to their software architecture, and equally important in how students communicate and coordinate with their teammates. “Writing the reports and communicating with a team feels like overhead sometimes, but if you don’t communicate, you have a team of one,” says Fishberg. “We don’t have these ‘solo inventor’ situations where one person figures everything out anymore — it’s hundreds of people building this huge thing.”
The new faces of flight
Students in the class say they are eager to enter the rapidly evolving field, working with unconventional tools and vehicles that go beyond traditional applications.
“We continue to send rovers to extraterrestrial bodies. But there is an increasing interest in deploying unmanned systems to explore Earth,” says Roy. “There’s lots of places on Earth where we want to send robots to go and explore, places where it’s hazardous for humans to go.” That expanding set of applications is exactly what draws students to the field.
“I was really excited for the idea of a new class, especially one that was focused on autonomy, because that’s where I see my career going,” says senior Norah Miller. “This class has given me a really great experience in what it feels like to develop software from zero to a full flying mission.”
The Design and Testing of Autonomous Vehicles course offers a unique perspective for instructors and TAs who have known many of the students throughout their undergraduate careers. As a capstone, it provides an opportunity to see that growth come full circle. “A couple years ago we’re solving differential equations, and now they’re implementing software they wrote on a quadrotor in the high bay,” says How.
After weeks of learning, building, testing, refinement, and finally, flight, the results reflected the goals of the course. “It was exactly what we wanted to see happen,” says Roy. “We gave them a pretty challenging mission. We gave them hardware that should be capable of completing the mission, but not guaranteed. And the students have put in a tremendous amount of effort and have really risen to the challenge.”
W.M. Keck Foundation to support research on healthy aging at MIT
A prestigious grant from the W.M. Keck Foundation to Alison E. Ringel, an MIT assistant professor of biology, will support groundbreaking healthy aging research at the Institute.
Ringel, who is also a core member of the Ragon Institute of Mass General Brigham, MIT, and Harvard, will draw on her background in cancer immunology to create a more comprehensive biomedical understanding of the cause and possible treatments for aging-related decline.
“It is such an honor to receive this grant,” Ringel says. “This support will enable us to draw new connections between immunology and aging biology. As the U.S. population grows older, advancing this research is increasingly important, and this line of inquiry is only possible because of the W.M. Keck Foundation.”
Understanding how to extend healthy years of life is a fundamental question of biomedical research with wide-ranging societal implications. Although modern science and medicine have greatly expanded global life expectancy, it remains unclear why everyone ages differently; some maintain physical and cognitive fitness well into old age, while others become debilitatingly frail later in life.
Our immune systems are adaptable, but they do naturally decline as we get older. One critical component of our immune system is CD8+ T cells, which are known to target and destroy cancerous or damaged cells. As we age, our tissues accumulate cells that can no longer divide. These senescent cells are present throughout our lives, but reach seemingly harmful levels as a normal part of aging, causing tissue damage and diminished resilience under stress.
There is now compelling evidence that the immune system plays a more active role in aging than previously thought.
“Decades of research have revealed that T cells can eliminate cancer cells, and studies of how they do so have led directly to the development of cancer immunotherapy,” Ringel says. “Building on these discoveries, we can now ask what roles T cells play in normal aging, where the accumulation of senescent cells, which are remarkably similar to cancer cells in some respects, may cause health problems later in life.”
In animal models, reconstituting elements of a young immune system has been shown to improve age-related decline, potentially because CD8+ T cells selectively eliminate senescent cells. A progressive loss of this ability to cull senescent cells could explain some age-related pathology.
Ringel aims to build models for the express purpose of tracking and manipulating T cells in the context of aging and to evaluate how T cell behavior changes over a lifespan.
“By defining the protective processes that slow aging when we are young and healthy, and defining how these go awry in older adults, our goal is to generate knowledge that can be applied to extend healthy years of life,” Ringel says. “I’m really excited about where this research can take us.”
The W.M. Keck Foundation was established in 1954 in Los Angeles by William Myron Keck, founder of The Superior Oil Co. One of the nation’s largest philanthropic organizations, the W.M. Keck Foundation supports outstanding science, engineering, and medical research. The foundation also supports undergraduate education and maintains a program within Southern California to support arts and culture, education, health, and community service projects.
Les Perelman, expert in writing assessment and champion of writing education, dies at 77
Leslie “Les” Perelman, an influential figure in college writing assessment; a champion of writing instruction across all subject matters for over three decades at MIT; and a former MIT associate dean for undergraduate education, died on Nov. 12, 2025, at home in Lexington, Massachusetts. He was 77.
A Los Angeles native, Perelman attended the University of California at Berkeley, joining in its lively activist years, and in 1980 received his PhD in English from the University of Massachusetts at Amherst. After stints at the University of Southern California and Tulane University, he returned to Massachusetts — to MIT — in 1987, and stayed for the next 35 years.
Perelman became best known for his dogged critique of autograding systems and writing assessments that didn’t assess actual college writing. The Boston Globe dubbed him “The man who killed the SAT essay.” He told NPR that colleges “spend the first year deprogramming [students] from the five-paragraph essay.”
His widow, MIT Professor Emerita Elizabeth Garrels, says that while attending a conference, Perelman — who was practically blind without his glasses — arranged to stand at one end of a room in order to “grade” essays held up for him on the other side. “He would call out the grade that each essay would likely receive on standardized scoring,” Garrels says. “And he was consistently right.” Perelman was doing exactly what the automated scorers do: he was, he said in the NPR interview, “mirroring how automated or formulaic grading systems often reward form over substance.”
Perelman also “ruffled a lot of feathers” in industry, says Garrels, with his 2020 paper documenting his BABEL (“Basic Automatic B.S. Essay Language”) Generator, which output nonsense that commercial autograders nevertheless gave top marks. He saved some of his most systematic criticism for autograders’ defenders in academia, at one point calling out peers at the University of Akron for the methodology in their widely touted paper claiming autograders performed just as well as human graders.
At least one service, though, ETS, partly welcomed Perelman’s critique by making its autograder available to him for testing. (Others, like Pearson and Vantage Learning, declined.) He discovered he could ace the tests, even when his essay included non-factual gibberish and typographical errors:
Teaching assistants are paid an excessive amount of money. The average teaching assistant makes six times as much money as college presidents. In addition, they often receive a plethora of extra benefits such as private jets, vacations in the south seas, a staring roles in motion pictures. Moreover, in the Dickens novel Great Expectation, Pip makes his fortune by being a teaching assistant. It doesn’t matter what the subject is, since there are three parts to everything you can think of.
MIT career
Within MIT, Perelman’s legacy was his push to embed writing instruction into the whole of MIT’s curriculum, not as standalone expository writing subjects, let alone as merely a writing exam that incoming students could use to pass out of writing subjects altogether. Supported by a $325,000 National Science Foundation grant, he convinced MIT to hire writing instructors who were also subject matter experts, often with STEM PhDs. They were tasked with collaborating with departments to plant writing instruction into both existing curricula and new subjects. That effort eventually became the Writing Across the Curriculum program (today named Writing, Rhetoric, and Professional Communication) with a staff of more than 30 instructors.
Building out the infrastructure wasn’t quick, however. Perelman’s successor, Suzanne Lane ’85, says it took him almost 15 years. It started with proving to others just how uneven writing instruction at MIT actually was. “A whole cohort of students who took a lot of writing classes or got communication instruction in various places would make great progress,” Lane says. “But it was definitely possible to get through all of MIT without doing much writing at all.”
To bolster his case, Perelman turned to alumni surveys. “The surveys asked how well MIT prepared you for your career,” says Lane. “The technical skills scored really high, but — what is horribly termed, sometimes, as ‘soft skills’ — communication skills, collaboration, etc., these scored really high on importance to career, but really low on how well MIT had prepared them.”
In other words, MIT alumni knew their stuff but were bad at communicating it, at a cost to their careers.
This led Perelman and others to push for a new undergraduate communication requirement. The NSF grant supported a 1997 pilot that designed experiments for communication-intensive courses. It was a huge success: every department participated, spanning 24 subjects and roughly 300 students. Following “lively” discussion at an April 1999 faculty meeting, MIT faculty approved a proposal to create a report on the communication requirement’s implementation, followed a year later by the requirement’s formal passage, effective fall 2001.
From that initial pilot of 24, there are now nearly 300 subjects that count toward the requirement, from class 1.013 (Senior Civil and Environmental Engineering Design) to 24.918 (Workshop in Linguistic Research).
Connections beyond MIT
Early in his career, Perelman worked with Vincent DiMarco, a literature scholar at the University of Massachusetts at Amherst, to publish “The Middle English Letter of Alexander to Aristotle” (Brill, 1978). With Wang Computers as publisher, he was a technical writer and project leader on the “DOS Release 3.30 User’s Reference Guide.” He edited a book and chapter on writing studies and assessment with New Jersey Institute of Technology professor Norbert Elliot. And in a project he was particularly proud of, he worked with the New South Wales Teachers Federation in 2018 to convince Australia to reject the adoption of an automated essay grading regime.
“Les was brilliant, with a Talmudic way of asking questions and entering academic debates,” says Nancy Sommers, whose work on undergraduate writing assessment at Harvard University paralleled Perelman’s. “I loved the way his eyes sparkled when he was ready to rip an adversary or a colleague who wasn’t up to his quick mind and vast, encyclopedic knowledge.”
Openness to rhetorical combat didn’t keep Perelman from being a wonderful friend, Sommers says, saying he once waited for her at the airline gate with a sandwich and a smile after a canceled flight. “That was Les, so gracious, generous, anticipating the needs of friends, always there to offer sustenance and friendship.”
Donations in Perelman’s name can be made to UNICEF’s work supporting children in Ukraine, the Lexington Refugee Assistance Program, Doctors Without Borders, and the Ash Grove Movie Finishing Fund.
Coping with catastrophe
Each April in Japan, people participate in a tradition called “hanami,” or cherry-blossom viewing, where they picnic under the blooming trees. The tradition has a second purpose: the presence of people at these gatherings, often by water, helps solidify riverbanks and protect them from spring floods. However incrementally, the celebration also addresses the threat of natural disaster.
The practice of creating things that also protect against disasters can be seen all over Japan, where many new or renovated school buildings have design features unfamiliar to students elsewhere. In Tokyo, one elementary school has a roof swimming pool that stores water and is used to help the building’s toilets flush, plus an additional rainwater catchment tank and exterior stairs leading to a large balcony that wraps around one side of the building.
Why? Well, Japan is prone to natural disasters, such as tsunamis, earthquakes, and flooding. The country’s schools often double as evacuation sites for local residents, and design practices increasingly reflect this. In normal times, the roof pool is where students learn to swim and helps keep the school cool, and the large balcony is used by spectators watching the adjacent school athletics field. In emergencies, water storage is crucial, and the exterior stairs help people ascend quickly to the gymnasium, which is built on the second floor to keep evacuees safer during flooding.
Meanwhile, in one Tokyo district, rooftop solar power is now common. Some schools feature skylights and courtyards to bring in natural light. Again, these architectural features serve dual purposes. Solar power, for one, lowers annual operating costs, and it provides electricity even in case of grid troubles.
These are examples of what MIT scholar Miho Mazereeuw has termed “anticipatory design,” in which structures and spaces are built with dual uses, for daily living and for when crisis strikes.
“The idea is to have these proactive measures in place rather than being reactionary and jumping into action only after something has happened,” says Mazereeuw, an associate professor in MIT’s Department of Architecture and a leading expert on resilient design.
Now Mazereeuw has a new book on the subject, “Design Before Disaster: Japan’s Culture of Preparedness,” published by the University of Virginia Press. Based on many years of research, with extensive illustrations, Mazereeuw examines scores of successful design examples from Japan, both in terms of architectural features and the civic process that created them.
“I’m hoping there can be a culture shift,” Mazereeuw says. “Wherever you can invent design outcomes to help society be more resilient beforehand, it is not at exorbitant cost. You can design for exceptional everyday spaces but embed other infrastructure and flexibility in there, so when there is a flood event or earthquake, those buildings have more capability.”
Bosai and barbecue
Mazereeuw, who is also the head of MIT’s Urban Risk Lab, has been studying disaster preparedness for over 30 years. As part of the Climate Project at MIT, she is also one of the mission directors and has worked with communities around the world on resiliency planning.
Japan has a particularly well-established culture of preparedness, often referred to through the Japanese word “bosai.” Mazereeuw has been studying the country’s practices carefully since the 1990s. In researching the book, she has visited hundreds of sites in the country and talked to many officials, designers, and citizens along the way.
Indeed, Mazereeuw emphasizes, “A major theme in the book is connecting the top-down and bottom-up.” Some good design ideas come from planners and architects; others come from community groups and local residents. All these sources are important.
“The Japanese government does invest a lot in disaster research and recovery,” Mazereeuw says. “But I would hate for people in other countries to think this isn’t possible elsewhere. It’s the opposite. There are a lot of examples in here that don’t cost extra, because of careful design through community participation.”
As one example, Mazereeuw devotes a chapter of the book to public parks, which are often primary evacuation spaces for residents in case of emergency. Some have outdoor cooking facilities, which in normal times are used for, say, a weekend barbecue or local community events but are also there in case of emergency. Some parks also have water storage, or restroom facilities designed to expand if needed, and many serve as flood reservoirs, protecting the surrounding neighborhood.
“The barbecue facilities are a great example of dual use, connecting the everyday with disaster preparedness,” Mazereeuw says. “You can bring food into this beautiful park, so you’re used to using this space for cooking already. The idea is that your cognitive map of where you should go is connected to fun things you have done in the past.”
Some of the parks Mazereeuw surveys in the book are tiny pocket parks, which are also filled with useful resilience tools.
“Anticipatory design does not have to be monumental,” Mazereeuw writes in the book.
Negotiating through design
To be sure, some disaster mitigation measures are difficult to enact. In the Naiwan district of Kesennuma, as Mazereeuw outlines in the book, much of the local port area was destroyed in the 2011 tsunami, and the government wanted to build a seawall as part of the reconstruction plan. Some local residents and fishermen were unenthusiastic; a seawall could limit ocean access. Finally, after extended negotiations, designers created a seawall integrated into a new commercial district with cafes and stores, as well as new areas of public water access.
“This project used the power of design to negotiate between prefectural and local regulations, structural integrity and aesthetics, ocean access and safety,” Mazereeuw says.
Ultimately, working to build a coalition in support of resilience measures can help create more interesting and useful designs.
Other scholars have praised “Design Before Disaster.” Daniel P. Aldrich, a professor at Northeastern University, has called the book a “well-researched, clearly written investigation” into Japanese disaster-management practices, adding that any officials or citizens around the world “who seek to keep residents and communities safe from shocks of all kinds will learn something important from this book. It sets a high bar for future scholarship in the field.”
For her part, Mazereeuw emphasizes, “We can learn from the Japanese example, but it’s not a copy-paste thing. The book is so people can understand the essence of it and then create their own disaster preparedness culture and approach. This should be an all-hands process. Emergency management is not about relying on managers. It’s figuring out how we all play a part.”
Featured video: Coding for underwater robotics
During a summer internship at MIT Lincoln Laboratory, Ivy Mahncke, an undergraduate student of robotics engineering at Olin College of Engineering, took a hands-on approach to testing algorithms for underwater navigation. She first discovered her love for working with underwater robotics as an intern at the Woods Hole Oceanographic Institution in 2024. Drawn by the chance to tackle new problems and cutting-edge algorithm development, Mahncke began an internship with Lincoln Laboratory's Advanced Undersea Systems and Technology Group in 2025.
Mahncke spent the summer developing and troubleshooting an algorithm that would help a human diver and robotic vehicle collaboratively navigate underwater. The lack of traditional localization aids — such as the Global Positioning System, or GPS — in an underwater environment posed challenges for navigation that Mahncke and her mentors sought to overcome. Her work in the laboratory culminated in field tests of the algorithm on an operational underwater vehicle. Accompanying group staff to field test sites in the Atlantic Ocean, Charles River, and Lake Superior, Mahncke had the opportunity to see her software in action in the real world.
"One of the lead engineers on the project had split off to go do other work. And she said, 'Here's my laptop. Here are the things that you need to do. I trust you to go do them.' And so I got to be out on the water as not just an extra pair of hands, but as one of the lead field testers," Mahncke says. "I really felt that my supervisors saw me as the future generation of engineers, either at Lincoln Lab or just in the broader industry."
Says Madeline Miller, Mahncke's internship supervisor: "Ivy's internship coincided with a rigorous series of field tests at the end of an ambitious program. We figuratively threw her right in the water, and she not only floated, but played an integral part in our program's ability to hit several reach goals."
Lincoln Laboratory's summer research program runs from mid-May to August. Applications are now open.
Video by Tim Briggs/MIT Lincoln Laboratory | 2 minutes, 59 seconds
Turning curiosity about engineering into careers
It’s not every day that aspiring teenage engineers can see firsthand how planes are built. But a collaboration between nonprofit Engineering Tomorrow, aerospace firm Boeing, and alumni of the MIT Leaders for Global Operations (LGO) program working at Boeing is aiming to turn curiosity about aerospace engineering into possible careers for young students.
Boeing is LGO’s longest-standing industry collaborator, hosting LGO internships and plant treks for future engineers and recruiting LGO alumni. Engineering Tomorrow, a nonprofit dedicated to inspiring the next generation of engineers, frames the U.S. engineering workforce shortage as an economic and national security issue, noting that the shortage isn’t just in degreed engineers, but also in trained operators and technicians. The organization also recognizes that many kids start as natural tinkerers but get scared off by higher-level math.
To bring more kids into the engineering fold, the organization delivers no-cost engineering labs to middle and high school students by collaborating with influential mentors, such as LGO graduates at organizations like Boeing.
“We want to inspire students by exposing them to professional engineers to illustrate the pathways for them to be problem-solvers in society,” explains Alex Dickson, Engineering Tomorrow’s program coordinator. “The demand for engineers has just gone up dramatically. It’s about being competitive on a global scale. We try to illustrate to students that there are many pathways into these careers.”
How MIT LGO makes engineering dreams a reality
Engineering Tomorrow’s collaboration with MIT LGO grew organically, through a robust alumni network. One of the nonprofit’s board members, LGO alumna Kristine Budill SM ’93, recognized a shared interest: the sizable Boeing LGO community wanted concrete ways to connect more directly with communities, and Engineering Tomorrow does just that.
Budill connected the organization with fellow LGO alumnus Cameron Hoffman MBA ’24, SM ’24, a Boeing manufacturing strategy manager who helped translate that shared mission into a real-world opportunity: an on-site Boeing experience that made engineering tangible for high school students.
The result: One lucky high school engineering design class from Mercer Island, Washington, recently got to experience Boeing 737s being built in person. In November 2025, 30 ninth graders at Mercer Island High School traveled to Boeing’s Renton, Washington, facility to learn how planes are constructed and understand what it really takes to have a career building them.
From the outset, the goal was to avoid the typical spectator field trip. Instead, Engineering Tomorrow and Hoffman designed a structured, multi-touch experience that prepared students before they ever set foot in the factory.
First, an Engineering Tomorrow liaison introduced key aerospace concepts and an associated lab challenge to the class via Zoom, then returned in person to guide Mercer students through a hands-on airplane-design lab, helping them translate theory into practice and answer questions about engineering pathways. Students then visited Boeing’s production facility, where they spoke with engineers from multiple disciplines — not just aerospace — and toured the factory floor.
By the time they arrived, students weren’t just impressed by the scale of the operation; they understood what they were seeing, asked informed questions, and left with a sharp sense of the many routes into engineering and manufacturing careers, Dickson says.
“Cameron set up an incredible on-site experience for the students that really made real-world engineering a more tangible experience for them,” Dickson says. “Many people think Boeing is just about aerospace engineering, because Boeing is an aerospace company. But they got to hear from mechanical engineers, electrical engineers, and workers with all sorts of backgrounds who made it clear that there’s no one set pathway into engineering or manufacturing.”
Then came the best part: Students got a VIP tour of the production facility, led by Boeing staff.
A snack and a tour
“It’s awe-inspiring: Dozens of unfinished airplanes are under one site, and you see all of the real-world production engineering that goes into something that oftentimes we take for granted when we step onto an airplane,” Dickson says.
When the big day arrived, students also met with engineering teams to learn about the history of the plant, complete with fun facts geared to high schoolers. (Did you know that a 737 takes off or lands every two seconds?) They learned about different career pathways, from design to production. It was easy to envision themselves working there, Hoffman says.
“Boeing is a company that a lot of folks work at for their entire career and take a lot of pride in the work that they do. We showed them: What does that look like? Do you want to be an engineer for your entire career? Do you want to be a people leader in the facility? Do you want to be a technical expert?” Hoffman says. “And the kids asked great questions.”
Then, the students — after snacks, of course — toured the production floor, where engineers assembled planes and tested parts. For Hoffman, that experience was deeply personal: He wished he’d experienced something similar growing up.
A 10-year Boeing veteran, Hoffman led the group throughout. He started at Boeing in 2015 as a recent college graduate, where he encountered several LGO alums who recommended the program.
“I’d been deeply interested in manufacturing since my early undergrad days. Boeing was an amazing place to work because our products are so complex, and the production systems are so fascinating,” he recalls.
Over time, he wanted to transition into people leadership with an MBA degree. His Boeing colleagues, well-represented among the LGO ranks, urged him toward the MIT program.
“LGO’s network is what makes it so special,” he says.
Upon returning to Boeing after completing his LGO degrees, Hoffman joined Boeing’s LGO/Tauber Leadership Development Program, which keeps him regularly engaged with the MIT LGO program. He also serves on the MIT LGO Alumni Board, where he focuses on the social good committee; the Engineering Tomorrow high school partnership was a perfect fit for that committee’s goals.
For Hoffman, these leadership initiatives are what make LGO distinctive.
“When you graduate from a program like LGO, you’re often so forward-looking. It helps to take time to reflect on what an inspiration you can be to the people who come after you. MIT LGO focuses on both engineering and business. Our students want to study engineering because they want to be problem-solvers. The LGO program, which is at the intersection of engineering and business leadership, is just an incredible inspirational program for young students to see,” Hoffman says.
It was an opportunity he didn’t get as an ambitious young high schooler.
“As a kid, the only engineering class that was available to me was architectural drafting. If this opportunity was offered to me when I was in high school, I would’ve jumped out of my shoes at the chance. You get to see products that are just so complex; you really can't believe it until you see it,” he says.
Setting a positive precedent across industries
Mercer Island engineering design teacher Michael Ketchum had high praise for the field trip, considering it transformative for his students, roughly 80 percent of whom, he estimates, want to be engineers. He was impressed that the experience was more than a tour: it also included classroom support and airplane design kits that reinforced core engineering concepts. The collaboration allowed him to broaden a previously CAD-focused class into one that also includes 3D printing, electronics, and aerospace applications.
“For freshmen and sophomores, field trips are key. They stick in their head a bit longer than just school learning. If they get to see people getting excited talking about engineering, and it embeds it a little bit better in their brain,” Ketchum says.
In a post-trip survey, students reported being more likely to consider engineering after the experience.
“They expressed the idea that the conversations with engineers inspired them, and 100 percent of students said that seeing a production facility was one of the coolest parts of the program, which led to them being more inclined to want to be an engineer,” Engineering Tomorrow’s Dickson says.
Next year, the LGO network hopes to expand to partner with additional companies, from health care to biotech.
“The goal is to continue to create exposure. This visit was a really great proof of concept to see what’s valuable to students,” Hoffman says — and, ideally, future LGO alumni.
Designing a more resilient future for plants, from the cell up
In a narrow strip of land along the Andes mountain range in central Chile, an Indigenous community has long celebrated the bark of a rare tree for its medicinal properties. Modern science only recently caught up to the tradition, finding the so-called soapbark tree contains potent compounds for boosting the human immune system.
The molecules have since been harnessed to make the world’s first malaria vaccine and to boost the effectiveness of vaccines for everything from shingles to Covid-19 and cancer. Unfortunately, unsustainable harvesting has threatened the existence of the tree species, leading the Chilean government to severely restrict lumbering.
The soapbark tree’s story is not unique. Plants are the foundation of industries such as pharmaceuticals, beauty, agriculture, and forestry, yet around 45 percent of plant species are in danger of going extinct. At the same time, human demand for plant products continues to rise. Ashley Beckwith SM ’18, PhD ’22 believes meeting that demand requires rethinking how plants are grown. Her company, Foray Bioscience, aims to make plant production faster, more adaptable, and less damaging to fragile natural supply chains.
The company is working to make it possible to grow any plant or plant product from single cells using biomanufacturing powered by artificial intelligence. Foray has already developed molecules, materials, and fabricated seeds with various partners, including academic researchers, nurseries, conservationists, and companies.
In one new partnership, Foray is working with the nursery West Coast Chestnut to deploy a more disease-resistant version of the chestnut trees that once filled forests across the eastern U.S. but have since been wiped out. The project is just one example of how AI and plant science can be leveraged to protect the plant populations that bring so much value to humans and the planet.
“Plant systems underpin every aspect of our daily lives, from the air we breathe to the food we eat, the clothes we wear, the homes we live in, and more,” Beckwith says. “But these plant systems are fragile and in decline. We need new strategies to ensure lasting access to the plant products and ecosystems we depend on.”
From human cells to plants
Beckwith focused on biology and materials manufacturing as a master’s student in MIT’s Department of Mechanical Engineering. Her research involved building platforms to enable precision treatments for human diseases. After graduating, she worked on a regenerative, self-sufficient farm that mimicked natural ecosystems, and began thinking about applying her work to address the fragility of plant systems.
Beckwith returned to MIT for her PhD to explore the idea of regenerative plant systems, studying in the lab of Research Scientist Luis Fernando Velásquez-García in the Department of Electrical Engineering and Computer Science.
“To address organ shortages for transplants, scientists aspire to grow kidneys that don’t have to be harvested from a human using tissue engineering,” Beckwith says. “What if we could do something similar for our plant systems?”
Beckwith went on to publish papers showing she could grow wood-like plant material in a lab. By adjusting certain chemicals, the researchers could precisely control properties like stiffness and density.
“I was thinking about how we build products, like wood, from the cell up instead of extracting from the top down,” Beckwith recalls. “It led to some foundational demonstrations that underpin the work we do at Foray today, but it also opened up questions: Where are these new approaches most urgently needed? What would it take to apply these tools where they’re needed, fast?”
Beckwith began exploring the idea of starting a company in 2021, participating in accelerator programs run by the E14 Fund and The Engine — both MIT-affiliated initiatives designed to support breakthrough science ventures. She officially founded Foray in February of 2022 after completing her PhD.
“Our early research showed that we could grow wood-like material directly from plant cells,” she says. “We are now able to grow not just wood without the tree, but also produce harvest-free molecules, materials, and even seeds by steering single cells to develop precisely into the products we need without ever having to grow the whole plant.”
Beckwith describes her lab-grown wood innovation as analogous to Uber if there were no internet — a powerful idea without the digital backbone to scale. To create the data foundation and ecosystem to scale plant innovation, Foray is now building the Pando AI platform to enable rapid discovery and deployment of these novel plant solutions.
“Pando functions like a Google Maps for plant growth,” Beckwith says. “It helps scientists navigate a really complex field of variables and arrive at a research destination efficiently — because to steer a cell to produce a particular product, there might be 50 different variables to tweak. It would take a lifetime to explore each of those, and that’s one reason why plant research is so slow today.”
The “operating system for plant science”
Foray’s team includes experts in plant biology, artificial intelligence, machine learning, computational biology, and process engineering.
“This is a very intersectional problem,” Beckwith says. “One of the most exciting things for me is building this highly capable team that is able to deliver solutions that could never be created in a silo.”
After a year of pilot collaborations with select researchers, Foray is preparing for a broader public launch of its Pando platform early this year.
Over the next several years, Beckwith hopes Foray will serve as an innovation engine for researchers and companies working across agriculture, materials, pharmaceuticals, and conservation. Foray already uses Pando internally to create plant solutions that overcome limitations in natural production.
“Fabricated seeds are one capability that we’re really excited about,” Beckwith says. “Being able to grow seeds from cells lets you create really timely and scalable seed supplies to address gaps in restoration, or shorten the path to market for new, resilient crop varieties. There’s a lot to be gained by making our plant systems more adaptive.”
“We want to shorten plant development timelines, so solutions can be built in months, not decades,” Beckwith says. “We’re excited to be building tools that represent a step change in the way plant production can be done.”
As Foray’s products scale and more researchers use its platform, the company is hoping to help the plant science industry respond to some of our planet’s most pressing challenges.
“Right now, we’re focused on plants in labs,” Beckwith says. “In five years, we aim to be the operating system for all of plant science, making it possible to build anything from a single plant cell.”
Tackling industry’s burdensome bubble problem
In industrial plants around the world, tiny bubbles cause big problems. Bubbles clog filters, disrupt chemical reactions, reduce throughput during biomanufacturing, and can even cause overheating in electronics and nuclear power plants.
MIT Professor Kripa Varanasi has long studied methods to reduce bubble disruption. In a new study, Varanasi, PhD candidate Bert Vandereydt, and former postdoc Saurabh Nath have uncovered the physics behind a promising type of debubbling membrane material that is “aerophilic,” or “air-loving.” The material can be used in systems of all types, allowing engineers to break free from bubble-borne disruptions and optimize their machines’ performance.
“We have figured out the structure of these bubble-attracting membrane materials to allow gas to evacuate in the fastest possible manner,” says Varanasi, the senior author of the study. “Think of trying to push honey through a coffee strainer: It’s not going to go through easily, whereas water will move through, and gas will move through even more easily. But even gas will reach a throughput limit, which depends on the properties of the gas and the liquid involved. By uncovering those limits, our research allows engineers to build better membranes for their systems.”
In the paper, which appears in the journal PNAS this week, the researchers distill their findings into a graph that allows anyone to plot a few characteristics of their system — like the viscosity of their gas and the surrounding liquid — and find the best membrane to make bubble removal near-instantaneous. Using their approach, the research team demonstrated a 1,000-fold acceleration in bubble removal in a bioreactor that’s used in the pharmaceutical industry, food and beverage manufacturing, cosmetics, chemical production, and more.
The researchers say the membranes, which repel water, could be used to improve the throughput of a wide range of advanced systems whose operation has been plagued to date by bubbles.
Better bubble breakers
Companies today try everything to burst bubbles. They deploy foam breakers that physically shear them, chemicals that act as antifoaming agents, even ultrasound. Such approaches have drawbacks in tightly controlled environments like bioreactors, where chemical defoamers can be toxic to cells, while mechanical agitation can damage delicate biological materials. Similar limitations apply to other industries where contamination or physical disturbance is unacceptable. As a result, many applications that cannot tolerate chemical defoamers or mechanical intervention remain fundamentally bottlenecked by foam formation.
“Biomanufacturing has really taken off in the last 10 years,” Vandereydt says. “We’re making a lot more out of biologic systems like cells and bacteria, and our reactors have increased in throughput from 5 million cells per milliliter of solution to 100 million cells per milliliter. However, bubble evacuation and defoaming haven’t kept up — it’s becoming a significant rate-limiting step.”
To better understand the interaction between aerophilic membranes and bubbles, the MIT researchers used MIT.nano facilities to create a series of tiny porous silicon membranes with holes ranging in size from 10 microns to 200 microns. They coated the membranes with hydrophobic silica nanoparticles.
Placing the membranes on the surfaces of different liquids, the researchers released single bubbles of gases with varying viscosity and used high-speed imaging to record each one as it collided with a membrane.
“We started by trying to take a very complicated system, like foam being generated in a bioreactor, and study it in the simplest form to understand what’s happening,” Vandereydt says.
At first, the bigger the holes, the faster the bubbles disappeared. The researchers also changed the bubble gas from air to hydrogen, which has half the viscosity, and found the speed of bubble destruction doubled.
But after about a 1,000-fold acceleration in bubble destruction, the researchers hit a wall no matter how big the membrane holes were. They had run up against a different physical limit, which they set out to investigate.
The researchers then tried changing the viscosity of their liquid, from water to something closer to honey. They found that viscosity only plays a role in the speed of bubble destruction when the liquid is at least 200 times as viscous as water. Further experiments revealed that the biggest factor slowing bubble evacuation was inertial resistance in the liquid.
“Through experimentation, we showed there are three different limits [to the speed of bubble destruction],” Vandereydt says. “There is the viscous limit of the gas in a low-viscosity, low-permeability setup. Then there’s the viscous resistance of the liquid in the high-permeability, high-viscosity regime. Then we have the inertial limit of the liquid.”
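To make the three regimes concrete, here is a minimal, hypothetical Python sketch of how an engineer might encode this kind of regime map. The 200-times-water viscosity threshold comes from the experiments described above; the coarse high/low permeability flag, the function names, and the corner-case handling are illustrative placeholders, not the paper's actual model.

```python
# Hypothetical regime map, loosely following the three limits named in the
# study; all names and the permeability flag are illustrative stand-ins.

WATER_VISCOSITY = 1.0e-3       # Pa*s, reference liquid
VISCOUS_LIQUID_FACTOR = 200    # liquid viscosity starts to matter above ~200x water

def limiting_regime(liquid_viscosity_pas, high_permeability):
    """Return which limit caps bubble evacuation speed (corner cases simplified).

    liquid_viscosity_pas is in Pa*s; high_permeability is a coarse
    stand-in for a membrane with large pores.
    """
    viscous_liquid = liquid_viscosity_pas >= VISCOUS_LIQUID_FACTOR * WATER_VISCOSITY
    if not high_permeability and not viscous_liquid:
        return "viscous limit of the gas"           # small pores choke the gas flow
    if high_permeability and viscous_liquid:
        return "viscous resistance of the liquid"   # honey-like liquid refills slowly
    return "inertial limit of the liquid"           # liquid inertia caps the speed

def gas_limited_speedup(old_gas_viscosity, new_gas_viscosity):
    """In the gas-limited regime, evacuation rate scales as 1/(gas viscosity),
    consistent with the reported doubling when switching air to hydrogen."""
    return old_gas_viscosity / new_gas_viscosity

# Air bubble in water through a fine membrane: the gas itself is the bottleneck.
print(limiting_regime(1.0e-3, high_permeability=False))
# Bubble in a honey-like liquid through large pores: the liquid's viscosity rules.
print(limiting_regime(1.0, high_permeability=True))
# Bubble in water through large pores: liquid inertia sets the ceiling.
print(limiting_regime(1.0e-3, high_permeability=True))
# Halving the gas viscosity doubles the rate in the gas-limited regime.
print(gas_limited_speedup(1.8e-5, 0.9e-5))
```

An engineer could plug in their own fluid properties here; the design map in the paper is continuous, whereas this sketch only names the governing regime.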
The team used a bioreactor to experimentally validate their findings and charted them in a map that engineers can use to enter the characteristics of their system and find both the best membrane for their situation and the biggest factor slowing bubble evacuation.
The science of bubbles
The research should be useful for anyone trying to accelerate the destruction of bubbles in their industrial device, but it also improves our understanding of the physics underpinning bubble dynamics.
“We have identified three different throughput limits, and the physics behind those limits, and we have reduced it to very simple laws,” Nath explains. “How fast you can go is first dictated by surface tension and inertia. But you may also hit a different limit, where the pores are extremely small, so the gas finds it difficult to move through them. In that case, the viscosity of the gas is meaningful. But you may also have a bubble which was originally in something like honey, which means it’s not enough that the gas is moving; the liquid also must refill the space behind it. No matter what your conditions are, you will be switching between these three limits.”
Varanasi says health care companies, chemical manufacturers, and even breweries have expressed interest in the work. His team plans to commercially develop the membranes for industrial use.
“These physical insights allowed us to design membranes that, quite surprisingly, evacuate bubbles even faster than a free liquid-gas interface,” says Varanasi.
The researchers’ design map could also be used to model natural systems and even liquid-liquid systems, which could be used to create membranes that remove oil spills from water or help efficiently extract hydrogen from water-splitting electrodes. Ultimately the biggest beneficiaries of the findings will be companies grappling with bubbles.
“Though small, bubbles quietly dictate the performance limits of many advanced technologies,” says Varanasi. “Our results provide a way to eliminate that bottleneck and unlock entirely new levels of performance across industries. These membranes can be readily retrofitted into existing systems, and our framework allows them to be rapidly designed and optimized for specific applications. We’re excited to work with industry to translate these insights into impact.”
The work was supported, in part, by MIT Lincoln Laboratory and used MIT.nano facilities.
New method could increase LLM training efficiency
Reasoning large language models (LLMs) are designed to solve complex problems by breaking them down into a series of smaller steps. These powerful models are particularly good at challenging tasks like advanced programming and multistep planning.
But developing reasoning models demands an enormous amount of computation and energy because of inefficiencies in the training process: while a few high-power processors continuously work through complicated queries, the other processors in the group sit idle.
Researchers from MIT and elsewhere found a way to use this computational downtime to efficiently accelerate reasoning-model training.
Their new method automatically trains a smaller, faster model to predict the outputs of the larger reasoning LLM; the larger model then verifies those predictions. This reduces the amount of work the reasoning model must do, accelerating the training process.
The key to this system is its ability to train and deploy the smaller model adaptively, so it kicks in only when some processors are idle. By leveraging computational resources that would otherwise have been wasted, it accelerates training without incurring additional overhead.
When tested on multiple reasoning LLMs, the method doubled the training speed while preserving accuracy. This could reduce the cost and increase the energy efficiency of developing advanced LLMs for applications such as forecasting financial trends or detecting risks in power grids.
“People want models that can handle more complex tasks. But if that is the goal of model development, then we need to prioritize efficiency. We found a lossless solution to this problem and then developed a full-stack system that can deliver quite dramatic speedups in practice,” says Qinghao Hu, an MIT postdoc and co-lead author of a paper on this technique.
He is joined on the paper by co-lead author Shang Yang, an electrical engineering and computer science (EECS) graduate student; Junxian Guo, an EECS graduate student; senior author Song Han, an associate professor in EECS, a member of the Research Laboratory of Electronics, and a distinguished scientist at NVIDIA; as well as others at NVIDIA, ETH Zurich, the MIT-IBM Watson AI Lab, and the University of Massachusetts at Amherst. The research will be presented at the ACM International Conference on Architectural Support for Programming Languages and Operating Systems.
Training bottleneck
Developers want reasoning LLMs to identify and correct mistakes in their critical thinking process. This capability allows them to ace complicated queries that would trip up a standard LLM.
To teach them this skill, developers train reasoning LLMs using a technique called reinforcement learning (RL). The model generates multiple potential answers to a query, receives a reward for the best candidate, and is updated based on the top answer. These steps repeat thousands of times as the model learns.
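The loop described above can be caricatured in a few lines of Python. This is a toy sketch under invented assumptions, not the actual RL training code: the "model" here is just a dictionary of stand-in functions, and the reward simply favors longer answers.

```python
import random

def rl_step(model, query, num_rollouts=4):
    """One simplified RL step: sample several candidate answers,
    score them, and nudge the model toward the best one."""
    candidates = [model["generate"](query) for _ in range(num_rollouts)]
    rewards = [model["score"](query, c) for c in candidates]
    best = candidates[rewards.index(max(rewards))]
    model["update"](query, best)  # reinforce the top-scoring answer
    return best

# Toy stand-in "model": generates random-length answers, rewards longer ones.
random.seed(0)
history = []
toy_model = {
    "generate": lambda q: "a" * random.randint(1, 10),
    "score": lambda q, c: len(c),
    "update": lambda q, best: history.append(best),
}

best = rl_step(toy_model, "2+2?")
print(len(history))  # one model update recorded per step
```

In real RL training, this loop repeats thousands of times, and the "generate" step is the expensive rollout phase the article discusses next.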
But the researchers found that the process of generating multiple answers, called rollout, can consume as much as 85 percent of the execution time needed for RL training.
“Updating the model — which is the actual ‘training’ part — consumes very little time by comparison,” Hu says.
This bottleneck occurs in standard RL algorithms because all processors in the training group must finish their responses before they can move on to the next step. Because some processors might be working on very long responses, others that generated shorter responses wait for them to finish.
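The cost of that synchronous barrier is easy to quantify. With hypothetical per-processor rollout times (the numbers below are invented for illustration), the slowest response sets the pace for everyone:

```python
# Hypothetical rollout times (seconds) for 8 processors in one RL step.
# In synchronous RL, every processor waits for the slowest one.
rollout_times = [12, 15, 14, 90, 11, 16, 13, 17]

step_time = max(rollout_times)              # the straggler sets the pace
busy = sum(rollout_times)                   # total useful work done
allocated = step_time * len(rollout_times)  # total reserved processor-time
idle_fraction = 1 - busy / allocated

print(f"step takes {step_time}s; {idle_fraction:.0%} of processor-time sits idle")
```

In this made-up example, one long response forces the step to take 90 seconds, leaving roughly three-quarters of the allocated processor-time unused.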
“Our goal was to turn this idle time into speedup without any wasted costs,” Hu adds.
They sought to use an existing technique, called speculative decoding, to speed things up. Speculative decoding involves training a smaller model called a drafter to rapidly guess the future outputs of the larger model.
The larger model verifies the drafter’s guesses, and the responses it accepts are used for training.
Because the larger model can verify all the drafter’s guesses at once, rather than generating each output sequentially, it accelerates the process.
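The verify-in-one-pass idea can be sketched with toy stand-ins for the two models. This is an illustrative simplification, not the researchers' implementation: here the "target" deterministically counts upward and the "drafter" disagrees on every fifth token, so the target keeps the matching prefix of each draft and corrects the first mismatch.

```python
def speculative_decode(target_next, drafter_next, prompt, k=4, max_len=12):
    """Toy speculative decoding: the drafter proposes k tokens at a time;
    the target checks them all in one pass and keeps the matching prefix."""
    out = list(prompt)
    while len(out) < max_len:
        # Drafter races ahead with k cheap guesses.
        draft = []
        for _ in range(k):
            draft.append(drafter_next(out + draft))
        # Target verifies the guesses (one batched pass in a real system).
        accepted = []
        for tok in draft:
            expected = target_next(out + accepted)
            if tok != expected:
                accepted.append(expected)  # correct the first mismatch
                break
            accepted.append(tok)
        out.extend(accepted)
    return out[:max_len]

# Toy "models": the target counts 0, 1, 2, ...; the drafter agrees
# except at every fifth position, where it guesses wrong (-1).
target_next = lambda seq: len(seq)
drafter_next = lambda seq: len(seq) if len(seq) % 5 else -1

print(speculative_decode(target_next, drafter_next, [0]))
```

Each loop iteration accepts up to k tokens at the cost of one target pass, which is where the speedup comes from when the drafter's guesses usually match.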
An adaptive solution
But in speculative decoding, the drafter model is typically trained only once and remains static. This makes the technique infeasible for reinforcement learning, since the reasoning model is updated thousands of times during training.
A static drafter would quickly become stale and useless after a few steps.
To overcome this problem, the researchers created a flexible system known as “Taming the Long Tail,” or TLT.
The first part of TLT is an adaptive drafter trainer, which uses free time on idle processors to train the drafter model on the fly, keeping it well-aligned with the target model without using extra computational resources.
The second component, an adaptive rollout engine, manages speculative decoding to automatically select the optimal strategy for each new batch of inputs. This mechanism changes the speculative decoding configuration based on the training workload features, such as the number of inputs processed by the draft model and the number of inputs accepted by the target model during verification.
In addition, the researchers designed the draft model to be lightweight so it can be trained quickly. TLT reuses some components of the reasoning model training process to train the drafter, leading to extra gains in acceleration.
“As soon as some processors finish their short queries and become idle, we immediately switch them to do draft model training using the same data they are using for the rollout process. The key mechanism is our adaptive speculative decoding — these gains wouldn’t be possible without it,” Hu says.
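The idle-time reuse Hu describes can be reduced to a simple scheduling rule. This is an invented sketch, not TLT's actual scheduler: workers that finish their rollouts early spend the remainder of the step training the drafter instead of waiting at the barrier.

```python
def schedule_step(rollout_times, step_budget=None):
    """Toy TLT-style schedule: once a worker finishes its rollout, it spends
    the rest of the step training the drafter instead of idling."""
    if step_budget is None:
        step_budget = max(rollout_times)  # the synchronous barrier
    return [
        {"worker": i, "rollout": t, "drafter_training": step_budget - t}
        for i, t in enumerate(rollout_times)
    ]

plan = schedule_step([12, 90, 15])
print(plan[0])  # worker 0 trains the drafter for 78s instead of idling
```

The straggler (worker 1) gets no drafter-training time, which is fine: the point is that the time the fast workers would have wasted now keeps the drafter aligned with the evolving target model.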
They tested TLT across multiple reasoning LLMs that were trained using real-world datasets. The system accelerated training between 70 and 210 percent while preserving the accuracy of each model.
As a bonus, the small drafter model emerges from training as a free byproduct that can be reused for efficient deployment.
In the future, the researchers want to integrate TLT into more types of training and inference frameworks and find new reinforcement learning applications that could be accelerated using this approach.
“As reasoning continues to become the major workload driving the demand for inference, Qinghao’s TLT is great work to cope with the computation bottleneck of training these reasoning models. I think this method will be very helpful in the context of efficient AI computing,” Han says.
This work is funded by the MIT-IBM Watson AI Lab, the MIT AI Hardware Program, the MIT Amazon Science Hub, Hyundai Motor Company, and the National Science Foundation.
Mixing generative AI with physics to create personal items that work in the real world
Have you ever had an idea for something that looked cool, but wouldn’t work well in practice? When it comes to designing things like decor and personal accessories, generative artificial intelligence (genAI) models can relate. They can produce creative and elaborate 3D designs, but when you try to fabricate such blueprints into real-world objects, they usually don’t sustain everyday use.
The underlying problem is that genAI models often lack an understanding of physics. While a tool like Microsoft’s TRELLIS system can create a 3D model from a text prompt or image, its design for a chair, for example, may be unstable or have disconnected parts. The model doesn’t fully understand what your intended object is designed to do, so even if your seat can be 3D printed, it would likely fall apart under the force of someone sitting down.
In an attempt to make these designs work in the real world, researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) are giving generative AI models a reality check. Their “PhysiOpt” system augments these tools with physics simulations, making blueprints for personal items such as cups, keyholders, and bookends work as intended when they’re 3D printed. It rapidly tests if the structure of your 3D model is viable, gently modifying smaller shapes while ensuring the overall appearance and function of the design is preserved.
You can simply type what you want to create and what it’ll be used for into PhysiOpt, or upload an image to the system’s user interface, and in roughly half a minute, you’ll get a realistic 3D object to fabricate. For example, CSAIL researchers prompted it to generate a “flamingo-shaped glass for drinking,” which they 3D printed into a drinking glass with a handle and base resembling the tropical bird’s leg. As the design was generated, PhysiOpt made tiny refinements to ensure the design was structurally sound.
“PhysiOpt combines GenAI and physically-based shape optimization, helping virtually anyone generate the designs they want for unique accessories and decorations,” says MIT electrical engineering and computer science (EECS) PhD student and CSAIL researcher Xiao Sean Zhan SM ’25, who is a co-lead author on a paper presenting the work. “It’s an automatic system that allows you to make the shape physically manufacturable, given some constraints. PhysiOpt can iterate on its creations as often as you’d like, without any extra training.”
This approach enables you to create a “smart design,” where the AI generator crafts your item based on your specifications while considering functionality. You can plug in your favorite 3D generative AI model, and after typing out what you want to generate, you specify how much force or weight the object should handle. It’s a neat way to simulate real-world use, such as predicting whether a hook will be strong enough to hold up your coat. Users also specify what materials they’ll fabricate the item with (such as plastics or wood), and how it’s supported — for instance, a cup stands on the ground, whereas a bookend leans against a collection of books.
Given the specifics, PhysiOpt begins to iteratively optimize the object. Under the hood, it runs a physics simulation called a “finite element analysis” to stress test the design. This comprehensive scan provides a heat map over your 3D model, which indicates where your blueprint isn’t well-supported. If you were generating, say, a birdhouse, you may find that the support beams under the house were colored bright red, meaning the house will crumble if it’s not reinforced.
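The heat-map idea can be cartooned without a full finite element solver. The sketch below is not PhysiOpt's actual analysis; it is a deliberately crude stand-in in which each hypothetical part gets a single stress estimate (force over cross-sectional area), and parts whose stress exceeds an assumed material strength are flagged red, like the birdhouse's under-sized support beams.

```python
# A cartoon of the heat-map idea (not PhysiOpt's actual solver): each part
# of a design gets a stress estimate, and parts whose stress exceeds the
# material's strength are flagged "red" for reinforcement.
MATERIAL_STRENGTH = 50.0  # hypothetical yield stress, MPa

def heat_map(parts, load):
    """Color each part by its load-to-capacity ratio."""
    colors = {}
    for name, cross_section in parts.items():
        stress = load / cross_section  # force / area, the simplest estimate
        ratio = stress / MATERIAL_STRENGTH
        colors[name] = "red" if ratio > 1 else "yellow" if ratio > 0.5 else "green"
    return colors

# Hypothetical birdhouse parts: name -> cross-sectional area (mm^2).
parts = {"roof": 400.0, "wall": 120.0, "support_beam": 15.0}
colors = heat_map(parts, load=1000.0)
print(colors)  # the thin support beam shows up red
```

A real finite element analysis subdivides the geometry into thousands of elements and solves for the stress field across all of them, but the output serves the same purpose: pointing the optimizer at the regions that need reinforcement.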
PhysiOpt can create even bolder pieces. Researchers saw this versatility firsthand when they fabricated a steampunk (a style that blends Victorian and futuristic aesthetics) keyholder featuring intricate, robotic-looking hooks, and a “giraffe table” with a flat back that you can place items on. But how did it know what “steampunk” is, or even how such a unique piece of furniture should look?
Remarkably, the answer isn’t extensive training — at least, not from the researchers. Instead, PhysiOpt uses a pre-trained model that’s already seen thousands of shapes and objects. “Existing systems often need lots of additional training to have a semantic understanding of what you want to see,” adds co-lead author Clément Jambon, who is also an MIT EECS PhD student and CSAIL researcher. “But we use a model with that feel for what you want to create already baked in, so PhysiOpt is training-free.”
By working with a pre-trained model, PhysiOpt can use “shape priors,” or knowledge of how shapes should look based on earlier training, to generate what users want to see. It’s sort of like an artist recreating the style of a famous painter. Their expertise is rooted in closely studying a variety of artistic approaches, so they’ll likely be able to mirror that particular aesthetic. Likewise, a pre-trained model’s familiarity with shapes helps it generate 3D models.
CSAIL researchers observed that PhysiOpt’s visual know-how helped it create 3D models more efficiently than “DiffIPC,” a comparable method that simulates and optimizes shapes. When both approaches were tasked with generating 3D designs for items like chairs, CSAIL’s system was nearly 10 times faster per iteration, while creating more realistic objects.
PhysiOpt presents a potential bridge between ideas and real-world personal items. What you may think is a great idea for a coffee mug, for instance, could soon make the jump from your computer screen to your desk. And while PhysiOpt already does the stress-testing for designers, it may soon be able to predict constraints such as loads and boundaries, instead of users needing to provide those details. This more autonomous, common-sense approach could be made possible by incorporating vision language models, which combine an understanding of human language with computer vision.
What’s more, Zhan and Jambon intend to remove the artifacts, or random fragments that occasionally appear in PhysiOpt’s 3D models, by making the system even more physics-aware. The MIT scientists are also considering how they can model more complex constraints for various fabrication techniques, such as minimizing overhanging components for 3D printing.
Zhan and Jambon wrote their paper with MIT-IBM Watson AI Lab Principal Research Scientist Kenney Ng ’89, SM ’90, PhD ’00 and two CSAIL colleagues: undergraduate researcher Evan Thompson and Assistant Professor Mina Konaković Luković, who is a principal investigator at the lab.
The researchers’ work was supported, in part, by the MIT-IBM Watson AI Laboratory and the Wistron Corp. They presented it in December at the Association for Computing Machinery’s SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia.
AI to help researchers see the bigger picture in cell biology
Studying gene expression in a cancer patient’s cells can help clinical biologists understand the cancer’s origin and predict the success of different treatments. But cells are complex and contain many layers, so how the biologist conducts measurements affects which data they can obtain. For instance, measuring proteins in a cell could yield different information about the effects of cancer than measuring gene expression or cell morphology.
Where in the cell the information comes from matters. But to capture complete information about the state of the cell, scientists often must conduct many measurements using different techniques and analyze them one at a time. Machine-learning methods can speed up the process, but existing methods lump all the information from each measurement modality together, making it difficult to figure out which data came from which part of the cell.
To overcome this problem, researchers at the Broad Institute of MIT and Harvard and ETH Zurich/Paul Scherrer Institute (PSI) developed an artificial intelligence-driven framework that learns which information about a cell’s state is shared across different measurement modalities and which information is unique to a particular measurement type.
By pinpointing which information came from which cell parts, the approach provides a more holistic view of the cell’s state, making it easier for a biologist to see the complete picture of cellular interactions. This could help scientists understand disease mechanisms and track the progression of cancer, neurodegenerative disorders such as Alzheimer’s, and metabolic diseases like diabetes.
“When we study cells, one measurement is often not sufficient, so scientists develop new technologies to measure different aspects of cells. While we have many ways of looking at a cell, at the end of the day we only have one underlying cell state. By putting the information from all these measurement modalities together in a smarter way, we could have a fuller picture of the state of the cell,” says lead author Xinyi Zhang SM ’22, PhD ’25, a former graduate student in the MIT Department of Electrical Engineering and Computer Science (EECS) and an affiliate of the Eric and Wendy Schmidt Center at the Broad Institute of MIT and Harvard, who is now a group leader at AITHYRA in Vienna, Austria.
Zhang is joined on a paper about the work by G.V. Shivashankar, a professor in the Department of Health Sciences and Technology at ETH Zurich and head of the Laboratory of Multiscale Bioimaging at PSI; and senior author Caroline Uhler, a professor in EECS and the Institute for Data, Systems, and Society (IDSS) at MIT, member of MIT’s Laboratory for Information and Decision Systems (LIDS), and director of the Eric and Wendy Schmidt Center at the Broad Institute. The research appears today in Nature Computational Science.
Manipulating multiple measurements
There are many tools scientists can use to capture information about a cell’s state. For instance, they can measure RNA to see if the cell is growing, or they can measure chromatin morphology to see if the cell is dealing with external physical or chemical signals.
“When scientists perform multimodal analysis, they gather information using multiple measurement modalities and integrate it to better understand the underlying state of the cell. Some information is captured by one modality only, while other information is shared across modalities. To fully understand what is happening inside the cell, it is important to know where the information came from,” says Shivashankar.
Often, for scientists, the only way to sort this out is to conduct multiple individual experiments and compare the results. This slow and cumbersome process limits the amount of information they can gather.
In the new work, the researchers built a machine-learning framework that specifically understands which information overlaps between different modalities, and which information is unique to a particular modality but not captured by others.
“As a user, you can simply input your cell data and it automatically tells you which data are shared and which data are modality-specific,” Zhang says.
To build this framework, the researchers rethought the typical way machine-learning models are designed to capture and interpret multimodal cellular measurements.
Usually these methods, known as autoencoders, have one model for each measurement modality, and each model encodes a separate representation for the data captured by that modality. The representation is a compressed version of the input data that discards any irrelevant details.
The MIT method has a shared representation space where data that overlap between multiple modalities are encoded, as well as separate spaces where unique data from each modality are encoded.
In essence, one can think of it like a Venn diagram of cellular data.
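The Venn-diagram intuition can be made concrete with plain set operations. This is only the conceptual analogy, not the researchers' learned autoencoder: the feature names below are invented, and in the real framework the shared and modality-specific information is discovered by training, not by matching labels.

```python
def split_information(modalities):
    """Toy Venn-diagram view: features reported by every modality are
    'shared'; the rest are specific to the modality that captured them."""
    shared = set.intersection(*map(set, modalities.values()))
    specific = {name: set(feats) - shared for name, feats in modalities.items()}
    return shared, specific

# Hypothetical feature sets captured by two measurement modalities.
measurements = {
    "transcriptomics": {"cell_cycle", "stress_response", "gene_X_activity"},
    "chromatin_accessibility": {"cell_cycle", "stress_response", "enhancer_Y_open"},
}
shared, specific = split_information(measurements)
print(shared)  # information both modalities capture about the cell state
```

The learned version of this split is what lets the model say, for previously unseen cells, which signals are corroborated across modalities and which come from only one measurement type.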
The researchers also used a special, two-step training procedure that helps their model handle the complexity involved in deciding which data are shared across multiple data modalities. After training, the model can identify which data are shared and which are unique when fed cell data it has never seen before.
Distinguishing data
In tests on synthetic datasets, the framework correctly captured known shared and modality-specific information. When they applied their method to real-world single-cell datasets, it comprehensively and automatically distinguished between gene activity captured jointly by two measurement modalities, such as transcriptomics and chromatin accessibility, while also correctly identifying which information came from only one of those modalities.
In addition, the researchers used their method to identify which measurement modality captured a certain protein marker that indicates DNA damage in cancer patients. Knowing where this information came from would help a clinical scientist determine which technique they should use to measure that marker.
“There are too many modalities in a cell and we can’t possibly measure them all, so we need a prediction tool. But then the question is: Which modalities should we measure and which modalities should we predict? Our method can answer that question,” Uhler says.
In the future, the researchers want to enable the model to provide more interpretable information about the state of the cell. They also want to conduct additional experiments to ensure it correctly disentangles cellular information and apply the model to a wider range of clinical questions.
“It is not sufficient to just integrate the information from all these modalities,” Uhler says. “We can learn a lot about the state of a cell if we carefully compare the different modalities to understand how different components of cells regulate each other.”
This research is funded, in part, by the Eric and Wendy Schmidt Center at the Broad Institute, the Swiss National Science Foundation, the U.S. National Institutes of Health, the U.S. Office of Naval Research, AstraZeneca, the MIT-IBM Watson AI Lab, the MIT J-Clinic for Machine Learning and Health, and a Simons Investigator Award.
MIT’s delta v accelerator receives $6M gift to supercharge startups being built by student founders
As artificial intelligence reshapes how companies operate, it is also rapidly changing how MIT students learn entrepreneurship and choose to create new ventures. To address how these student startups are being built, the Martin Trust Center for MIT Entrepreneurship undertook a months-long series of discussions with key stakeholders to help shape a new direction for delta v, MIT’s capstone entrepreneurship accelerator for student founders.
Two of Boston’s most successful tech entrepreneurs have stepped forward to fund this growth of new MIT ventures through a combined $6 million gift that supports the delta v accelerator run out of the Trust Center. Ed Hallen MBA ’12 and Andrew Bialecki, co-founders of Boston-based customer relationship management firm Klaviyo, are providing the donation to support the next wave of innovation-driven entrepreneurship taking place at MIT.
“In the early days of Klaviyo, we learned almost everything by building, testing assumptions, making mistakes, and figuring things out as we went,” Hallen says. “MIT delta v creates that same learning-by-doing environment for students, while surrounding them with mentorship and resources that help founders build with clarity and momentum. We’ve seen the difference delta v can make for founders, and we’re excited to help the Trust Center extend that opportunity to the next generation of students.”
“We’ve always believed the world needs more entrepreneurs, and that Boston should be one of the places leading the way,” adds Bialecki. “Boston is a hub of innovation with ambitious students and a strong community of builders. MIT delta v plays a critical role in developing founders early, not just helping them start companies but helping them build companies that last. Supporting that mission is something Ed and I care deeply about.”
The Martin Trust Center plans to “accelerate the accelerator” with the funding. A major driver for these changes is the opportunity created as AI reshapes how students can build companies, along with students’ growing interest in learning entrepreneurship during their time on campus. One of the main impacts will be the ability of delta v participants to earn up to $75,000 in equity-free funding during the program, an increase from $20,000 in years past.
Also, delta v will be introducing a partner model composed of leading founders from companies such as HubSpot, Okta, and Kayak, along with C-suite operators, subject-matter experts, and early-stage investors, all of whom will provide significant guidance and mentorship to the student ventures.
“Core to MIT’s mission is developing the innovative technologies and solutions that can help solve tough problems at global scale,” says MIT Provost Anantha Chandrakasan. “The AI revolution is creating exciting new opportunities for MIT students to build the next wave of impactful companies, and the delta v accelerator is a perfect vehicle to help them make that happen.”
In recent years, MIT-founded startups such as Cursor and Delve, which use AI as a core part of their business, have seen explosive growth in customers, revenue, and valuation. In addition, delta v alumni companies such as Klarity and Reducto are providing software-as-a-service (SaaS) platforms built on AI tools, while Vertical Semiconductor is growing by providing the energy solutions that data centers need to power today’s computing demands. These are just some of the businesses MIT students look to as models for building and launching successfully, whether they are working on solutions in health care, climate, finance, the future of work, or another global challenge.
“MIT Sloan is the place for entrepreneurship education, part of a unique ecosystem of collaboration across MIT to solve problems,” says Richard M. Locke, the John C Head III Dean at the MIT Sloan School of Management. “The delta v program is a great example of how MIT students dedicate their energy to starting a venture, connect with mentors, and incorporate proven frameworks for disciplined entrepreneurship. This gift from Ed Hallen and Andrew Bialecki will provide additional funding for this important program, and I’m so grateful for their support of entrepreneurship education at MIT.”
“I remember when Ed and Andrew were giving birth to Klaviyo at the Trust Center,” says Bill Aulet, the Ethernet Inventors Professor of the Practice and managing director of the Trust Center. “Through their ingenuity and drive, they have created an iconic tech company here in Boston with the support of our ecosystem. Through their willingness to give back, many more students will now be able to follow their path and become entrepreneurs who can create extraordinary positive impact in the world.”
Applications for the next delta v cohort will open on March 1 and close on April 1. Teams will be announced in May for the summer 2026 accelerator.
“MIT delta v is about creating belief in our most exceptional entrepreneurial talent — and turning that belief into consequential impact for the world. By supporting early-stage founders who take bold ideas from improbable to possible, we help them build companies that matter,” says Ana Bakshi, the Trust Center’s executive director. “Our students are the next generation of job creators, economic drivers, and thought leaders. To realize this potential, it is critical that we continue to invest in and scale startup programs and spaces so they can build at unprecedented levels. Ed and Andrew’s generosity gives us a powerful opportunity to change velocity — and make that future possible.”
Founded in 1991, the award-winning Martin Trust Center for MIT Entrepreneurship is today focused on teaching entrepreneurship as a craft. It combines evidence-based entrepreneurship frameworks, used in over a thousand other organizations, with experiential learning and community building inside and outside the classroom to create the next generation of innovation-driven entrepreneurs. Alumni who have gone through Trust Center programs have started companies including Cursor, Delve, Okta, HubSpot, PillPack, Honey, WHOOP, Reducto, Klarity, and Biobot Analytics, and thousands more in industries as diverse as biotech, climate and energy, AI, health care, fintech, and business and consumer software.
In its first 10 years, the delta v program has helped create entrepreneurs who have gone on to extraordinary success. The five-year survival rate of their companies has been 69 percent, and they have raised well over $3 billion in funding while addressing the world’s greatest challenges — 89 percent of the companies are directly aligned with the UN Sustainable Development Goals.
More trees where they matter, please
One of the best forms of heat relief is pretty simple: trees. In cities, as studies have documented, more tree cover lowers surface temperatures and heat-related health risks.
However, as a new study led by MIT researchers shows, the amount of tree cover varies widely within cities, and is generally connected to wealth levels. After examining a cross-section of cities on four continents at different latitudes, the research finds a consistent link between wealth and neighborhood tree abundance within a city, with better-off residents usually enjoying much more shade on nearby sidewalks.
“Shade is the easiest way to counter warm weather,” says Fabio Duarte, an MIT urban studies scholar and co-author of a new paper detailing the study’s results. “Strictly by looking at which areas are shaded, we can tell where rich people and poor people live.”
That disparity is evident within a range of cities, and is present whether a city contains a large amount of tree cover overall or just a little. Either way, there are more trees in wealthier spots.
“When we compare the most well-shaded city in our study, Stockholm, with the worst-shaded, Belem in northern Brazil, we still see marked inequality,” says Duarte, the associate director of MIT’s Senseable City Lab in the Department of Urban Studies and Planning (DUSP). “Even though the most-shaded parts of Belem are less shaded than the least-shaded parts of Stockholm, shade inequality in Stockholm is greater. Rich people in Stockholm have much better shade provision as pedestrians than we see in poor areas of Stockholm.”
The paper, “Global patterns of pedestrian shade inequality,” is published today in Nature Communications. The authors are Xinyue Gu of Hong Kong Polytechnic University; Lukas Beuster, a research fellow at the Amsterdam Institute for Advanced Metropolitan Solutions and MIT’s Senseable City Lab; Xintao Liu, an associate professor at Hong Kong Polytechnic University; Eveline van Leeuwen, scientific director at the Amsterdam Institute for Advanced Metropolitan Solutions; Titus Venverloo, who leads the MIT Senseable City Amsterdam lab; and Duarte, who is also a lecturer in DUSP.
From Stockholm to Sydney
To conduct the study, the researchers used satellite data from multiple sources, along with urban mapping programs and granular economic data about the cities they examined. There are nine cities in the study: Amsterdam, Barcelona, Belem, Boston, Hong Kong, Milan, Rio de Janeiro, Stockholm, and Sydney. Those places are intended to create a cross-section of cities with different characteristics, including latitude, wealth levels, urban form, and more.
The scholars looked at the amount of shade available on city sidewalks on the day of the summer solstice, as well as on the hottest recorded day each year from 1991 to 2020. They then created a scale, ranging from 0 to 1, to rate the amount of shade available on sidewalks, both citywide and within neighborhoods.
“We focused on sidewalks because they are a major conduit of urban activity, even on hot summer days,” Gu says. “Adding tree cover for sidewalks is one crucial way cities can pursue heat-reduction measures.”
Duarte adds: “When it comes to those who are not protected by air conditioning, they are also using the city, walking, taking buses, and anybody who takes a bus is walking or biking to or from bus stops. They are using sidewalks as the main infrastructure.”
The cities in the study offer very different levels of tree coverage. On the 0-to-1 scale the researchers developed, much of Stockholm falls in the 0.6-0.9 range, with some neighborhoods being over 0.9. By contrast, large swaths of Rio de Janeiro are under the 0.1 mark. Much of Boston ranges from 0.15 to 0.4, with a few neighborhoods reaching 0.45 on the scale.
The overall pattern of disparities, however, is very consistent, and includes the more affluent cities. The bottom 20 percent of neighborhoods in Stockholm, in terms of shade coverage, are rated at 0.58 on the scale, while the top 20 percent of Belem neighborhoods rate at 0.37; Stockholm has a greater disparity between most-covered and least-covered. To be sure, there is variety within many cities: Milan and Barcelona have some lower-income neighborhoods with abundant shade, for instance. But the aggregate trend is clear. Amsterdam, another well-off place on average, has a distinct pattern of less shade in lower-income areas.
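The bottom-20-percent versus top-20-percent comparison behind these figures is straightforward to compute from neighborhood shade indices. The sketch below uses invented numbers for a hypothetical city, not the study's data, just to show how the disparity measure works on the researchers' 0-to-1 scale.

```python
def quintile_means(shade_scores):
    """Mean shade index of the bottom and top 20 percent of neighborhoods."""
    scores = sorted(shade_scores)
    k = max(1, len(scores) // 5)      # size of one quintile
    bottom, top = scores[:k], scores[-k:]
    return sum(bottom) / k, sum(top) / k

# Hypothetical neighborhood shade indices for one city (0 = no shade, 1 = full).
city = [0.12, 0.18, 0.25, 0.33, 0.41, 0.47, 0.55, 0.62, 0.74, 0.88]
low, high = quintile_means(city)
print(f"bottom 20%: {low:.2f}, top 20%: {high:.2f}, gap: {high - low:.2f}")
```

A larger gap between the two quintile means indicates greater shade inequality within the city, regardless of how much tree cover the city has overall.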
“In rich cities like Amsterdam, even though it’s relatively well-shaded, the disparity is still very high,” Beuster says. “For us the most surprising point was not that in poor cities and more unequal societies the disparity would be notable — that was expected. What was unexpected was how the disparity still happens and is sometimes more pronounced in rich countries.”
“Follow transit”
If the tree-shade disparity is this persistent, it raises the question of what to do about it. The researchers have a basic answer: Add trees in areas served by public transit, which generate a lot of foot traffic.
“In each city, from Sydney to Rio to Amsterdam, there are people who, regardless of the weather, need to walk,” Duarte says. “And it’s those people who also take public transportation. Therefore, link a tree-planting scheme to a public transportation network. And secondly, they are also the medium- and low-income part of the population. So the action deriving from this result is quite clear: If you need to increase your tree coverage and don’t know where, follow transit. If you follow transit, you will have the right shading.”
Indeed, one takeaway from the study is to think of trees not just as a nice-to-have part of urban aesthetics, but in functional terms.
“Planners and city officials should think about tree placement at least partly in terms of the heat-mitigating effect they have,” Beuster says.
“It’s not just about planting trees,” Duarte observes. “It’s about providing shade by planting trees. If you remove a tree that’s providing shade in a pedestrian area and you plant two other trees in a park, you are still removing part of the public function of the tree.”
He adds: “With increasing temperatures, providing shade is an essential public amenity. Along with providing transportation, I think providing shade in pedestrian spaces should almost be a public right.”
The Amsterdam Institute for Advanced Metropolitan Solutions and all members of the MIT Senseable City Consortium (including FAE Technology, Dubai Foundation, Sondotécnica, Seoul AI Foundation, Arnold Ventures, Sidara, Toyota, Abu Dhabi’s Department of Municipal Transportation, A2A, UnipolTech, Consiglio per la Ricerca in Agricoltura e l’Analisi dell’Economia Agraria, Hospital Israelita Albert Einstein, KACST, KAIST, and the cities of Laval, Amsterdam, and Rio de Janeiro) supported the research.
