MIT Latest News
MIT engineers develop a magnetic transistor for more energy-efficient electronics
Transistors, the building blocks of modern electronics, are typically made of silicon. Because it’s a semiconductor, this material can control the flow of electricity in a circuit. But silicon has fundamental physical limits that restrict how compact and energy-efficient a transistor can be.
MIT researchers have now replaced silicon with a magnetic semiconductor, creating a magnetic transistor that could enable smaller, faster, and more energy-efficient circuits. The material’s magnetism strongly influences its electronic behavior, leading to more efficient control of the flow of electricity.
The team used a novel magnetic material and an optimization process that reduces the material’s defects, which boosts the transistor’s performance.
The material’s unique magnetic properties also allow for transistors with built-in memory, which would simplify circuit design and unlock new applications for high-performance electronics.
“People have known about magnets for thousands of years, but there are very limited ways to incorporate magnetism into electronics. We have shown a new way to efficiently utilize magnetism that opens up a lot of possibilities for future applications and research,” says Chung-Tao Chou, an MIT graduate student in the departments of Electrical Engineering and Computer Science (EECS) and Physics, and co-lead author of a paper on this advance.
Chou is joined on the paper by co-lead author Eugene Park, a graduate student in the Department of Materials Science and Engineering (DMSE); Julian Klein, a DMSE research scientist; Josep Ingla-Aynes, a postdoc in the MIT Plasma Science and Fusion Center; Jagadeesh S. Moodera, a senior research scientist in the Department of Physics; and senior authors Frances Ross, TDK Professor in DMSE; and Luqiao Liu, an associate professor in EECS, and a member of the Research Laboratory of Electronics; as well as others at the University of Chemistry and Technology in Prague. The paper appears today in Physical Review Letters.
Overcoming the limits
In an electronic device, silicon semiconductor transistors act like tiny light switches that turn a circuit on and off, or amplify weak signals in a communication system. They do this using a small input voltage.
But a fundamental physical limit of silicon semiconductors prevents a transistor from operating below a certain voltage, which hinders its energy efficiency.
To make more efficient electronics, researchers have spent decades working toward magnetic transistors that utilize electron spin to control the flow of electricity. Electron spin is a fundamental property that enables electrons to behave like tiny magnets.
So far, scientists have mostly been limited to using certain magnetic materials. These lack the favorable electronic properties of semiconductors, constraining device performance.
“In this work, we combine magnetism and semiconductor physics to realize useful spintronic devices,” Liu says.
The researchers replace the silicon in the surface layer of a transistor with chromium sulfur bromide, a two-dimensional material that acts as a magnetic semiconductor.
Due to the material’s structure, researchers can switch between two magnetic states very cleanly. This makes it ideal for use in a transistor that smoothly switches between “on” and “off.”
“One of the biggest challenges we faced was finding the right material. We tried many other materials that didn’t work,” Chou says.
They discovered that changing these magnetic states modifies the material’s electronic properties, enabling low-energy operation. And unlike many other 2D materials, chromium sulfur bromide remains stable in air.
To make a transistor, the researchers pattern electrodes onto a silicon substrate, then carefully align and transfer the 2D material on top. They use tape to pick up a tiny piece of material, only a few tens of nanometers thick, and place it onto the substrate.
“A lot of researchers will use solvents or glue to do the transfer, but transistors require a very clean surface. We eliminate all those risks by simplifying this step,” Chou says.
Leveraging magnetism
This lack of contamination enables their device to outperform existing magnetic transistors. Most others can only create a weak magnetic effect, changing the flow of current by a few percent or less. Their new transistor can switch or amplify the electric current by a factor of 10.
They use an external magnetic field to change the magnetic state of the material, switching the transistor using significantly less energy than would usually be required.
The material also allows them to control the magnetic states with electric current. This is important because engineers cannot apply magnetic fields to individual transistors in an electronic device. They need to control each one electrically.
The material’s magnetic properties could also enable transistors with built-in memory, simplifying the design of logic or memory circuits.
A typical memory device has a magnetic cell to store information and a transistor to read it out. Their method can combine both into one magnetic transistor.
“Now, not only are transistors turning on and off, they are also remembering information. And because we can switch the transistor with greater magnitude, the signal is much stronger so we can read out the information faster, and in a much more reliable way,” Liu says.
Building on this demonstration, the researchers plan to further study the use of electrical current to control the device. They are also working to make their method scalable so they can fabricate arrays of transistors.
This research was supported, in part, by the Semiconductor Research Corporation, the U.S. Defense Advanced Research Projects Agency (DARPA), the U.S. National Science Foundation (NSF), the U.S. Department of Energy, the U.S. Army Research Office, and the Czech Ministry of Education, Youth, and Sports. The work was partially carried out at the MIT.nano facilities.
Injectable “satellite livers” could offer an alternative to liver transplantation
More than 10,000 Americans who suffer from chronic liver disease are on a waitlist for a liver transplant, but there are not enough donated organs for all of those patients. Additionally, many people with liver failure aren’t eligible for a transplant if they are not healthy enough to tolerate the surgery.
To help those patients, MIT engineers have developed “mini livers” that could be injected into the body and take over the functions of the failing liver.
In a new study in mice, the researchers showed that these injected liver cells could remain viable in the body for at least two months, and they were able to generate many of the enzymes and other proteins that the liver produces.
“We think of these as satellite livers. If we could deliver these cells into the body, while leaving the sick organ in place, that would provide booster function,” says Sangeeta Bhatia, the John and Dorothy Wilson Professor of Health Sciences and Technology and of Electrical Engineering and Computer Science at MIT, and a member of MIT’s Koch Institute for Integrative Cancer Research and the Institute for Medical Engineering and Science (IMES).
Bhatia is the senior author of the new study, which appears today in the journal Cell Biomaterials. MIT postdoc Vardhman Kumar is the paper’s lead author.
Restoring liver function
The human liver plays a role in about 500 essential functions, including regulation of blood clotting, removing bacteria from the bloodstream, and metabolizing drugs. Most of these functions are performed by cells called hepatocytes.
Over the past decade, Bhatia’s lab has been working on ways to restore hepatocyte function without a surgical liver transplant. One possible approach is to embed hepatocytes into a biomaterial such as a hydrogel, but these gels also have to be surgically implanted.
Another option is to inject hepatocytes into the body, which eliminates the need for surgery. In this study, Bhatia’s lab sought to improve on this strategy by providing an engineered niche that could enhance the cells’ survival and facilitate noninvasive monitoring of graft health.
To achieve that, the researchers came up with the idea of injecting cells along with hydrogel microspheres that would help them stay together and form connections with nearby blood vessels. These spheres have special properties that allow them to act like a liquid when they are closely packed together, so they can be injected through a syringe and then regain their solid structure once inside the body.
In recent years, researchers have explored using hydrogel microspheres to promote wound healing, as they help cells to migrate into the spaces between the spheres and build new tissue. In the new study, the MIT team adapted them to help hepatocytes form a stable tissue graft after injection.
“What we did is use this technology to create an engineered niche for cell transplantation,” Kumar says. “If the cells are injected in the absence of these spheres, they would not integrate efficiently with the host, but these microspheres provide the hepatocytes with a niche where they can stay localized and become connected to the host circulation much faster.”
The injected mixture also includes fibroblast cells — supportive cells that help the hepatocytes survive and promote the growth of blood vessels into the tissue.
Working with Nicole Henning, an ultrasound research specialist at the Koch Institute, the researchers developed a way to inject the cell mixture using a syringe guided by ultrasound. After injection, the researchers can also use ultrasound to monitor the long-term stability of the implant.
In this study, the mini livers were injected into the fat tissue in the belly. In the future, similar grafts could be delivered to other sites in the body, such as into the spleen or near the kidneys. As long as they have enough space and access to blood vessels, the injected hepatocytes can function similarly to hepatocytes in the liver.
“For a vast majority of liver disorders, the graft does not need to sit close to the liver,” Kumar says.
An alternative to transplantation
In tests in mice, the researchers injected the mixture of liver cells and microspheres into an area of fatty tissue known as the perigonadal adipose tissue. Once localized in the body, the cells formed a stable, compact structure. Over time, blood vessels began to grow into the graft area, helping the injected hepatocytes stay healthy.
“The new blood vessels formed right next to the hepatocytes, which is why they were able to survive,” Kumar says. “They were able to get the nutrients delivered right to them, they were able to function the way they're supposed to, and they produced the proteins that we expect them to.”
After injection, the cells remained viable and able to secrete specialized proteins into the host circulation for eight weeks, the length of the study. That suggests that the therapy could potentially work as a long-term treatment for liver disease, the researchers say.
“The way we see this technology is it can provide an alternative to surgery, but it can also serve as a bridge to transplantation where these grafts can provide support until a donor organ becomes available,” Kumar says. “And if we think they might need another therapy or more grafts, the barriers to do that are much less with this injectable technology than undergoing another surgery.”
With the current version of this technology, patients would likely need to take immunosuppressive drugs, but the researchers are exploring the possibility of developing “stealthy” hepatocytes that could evade the immune system, or using the hydrogel microspheres to deliver immunosuppressants locally.
The research was funded by the Koch Institute Support (core) grant from the National Cancer Institute, the National Institutes of Health, the Wellcome Leap HOPE Program, a National Science Foundation Graduate Research Fellowship, and the Howard Hughes Medical Institute.
LAB14 joins the MIT.nano Consortium
LAB14 GmbH, a corporate network based in Germany that unites eight high-tech companies focused on nanofabrication, microfabrication, and surface analysis, has joined the MIT.nano Consortium.
“The addition of LAB14 to the MIT.nano Consortium reinforces the importance of collaboration to advance the next set of great ideas,” says Vladimir Bulović, the founding faculty director of MIT.nano and the Fariborz Maseeh (1990) Professor of Emerging Technologies at MIT. “At MIT.nano, we are thrilled when our shared-access facility leads to cross-disciplinary discoveries. LAB14 carries this same motivation by assembling the constellation of remarkable interconnected industry partners.”
Comprising eight companies — Heidelberg Instruments, Nanoscribe, GenISys, Notion Systems, 40-30, Amcoss, SPECSGROUP, and Nanosurf — LAB14 is focused on developing products and services that are fundamental to micro- and nanofabrication technologies, supporting industrial and research-driven applications with complex manufacturing and analysis requirements.
The companies of LAB14 operate under a shared organizational structure that enables closer coordination in technology development. This setup allows for faster research progress and more efficient manufacturing workflows.
“Joining the MIT.nano Consortium marks a significant milestone for LAB14 and our companies,” says Martin Wynaendts van Resandt, CEO of LAB14. “This participation allows our network to collaborate directly with world-leading researchers, accelerating innovation in micro- and nanotechnology."
As part of this engagement, LAB14 will provide two pieces of equipment to be installed at MIT.nano within the coming year. The VPG 300 DI maskless stepper, a high-performance, direct-write system from Heidelberg Instruments, will be positioned inside MIT.nano’s cleanroom. This tool will allow MIT.nano users to pattern structures smaller than 500 nanometers directly onto wafers with accuracy and uniformity comparable to typical high-resolution i-line lithography. Equipped with advanced multi-layer alignment and mix‑and‑match functions, the VPG creates a seamless link between laser direct writing and e‑beam lithography.
The EnviroMETROS X-ray photoelectron spectroscopy (XPS/HAXPES) tool by SPECSGROUP will join the suite of Characterization.nano instruments. This unique system specializes in nondestructive depth-profile measurements, using multiple X-ray energies to determine the thickness of thin-film samples and their chemical compositions with the highest precision. It supports various analyses across a wide pressure range, allowing MIT.nano users to examine thin‑film materials under more realistic environmental conditions and to observe how they change during operation.
The MIT.nano Consortium is a platform for academia-industry collaboration, fostering research and innovation in nanoscale science and engineering. Consortium members gain unparalleled access to MIT.nano and its dynamic user community, providing opportunities to share expertise and guide advances in nanoscale technology.
MIT.nano continues to welcome new companies as sustaining members. For details, and to see a list of current members, visit the MIT.nano Consortium page.
Engineering confidence to navigate uncertainty
Flying on Mars — or any other world — is an extraordinary challenge. An autonomous spacecraft, operating millions of miles from pilots or engineers who could intervene on Earth, must be able to navigate unfamiliar and changing environments, avoid obstacles, land on uncertain terrain, and make decisions entirely on its own. Every maneuver depends on careful perception, planning, and control systems that are fault-tolerant, allowing the craft to recover if something goes wrong. A single miscalculation can leave a multimillion-dollar spacecraft face-down on the surface, ending the mission before it even begins.
“This problem is in no way solved, in industry or even in research settings,” says Nicholas Roy, the Jerome C. Hunsaker Professor in the MIT Department of Aeronautics and Astronautics (AeroAstro). “You’ve got to bring together a lot of pieces of code, software, and integrate multiple pieces of hardware. Putting those together is not trivial.”
Not trivial, but for students nearing the culmination of their Course 16 undergraduate careers, far from impossible. In class 16.85 Autonomy Capstone (Design and Testing of Autonomous Vehicles), students design, implement, deploy, and test a full software architecture for flying autonomous systems. These systems have wide-ranging applications, from urban air mobility and reusable launch vehicles to extraterrestrial exploration. With robust autonomous technology, vehicles can operate far from home while engineers watch from mission control centers not too different from the high bay in AeroAstro’s Kresa Center for Autonomous Systems.
Roy and Jonathan How, Ford Professor of Engineering, developed the new course to build on the foundations of class 16.405 (Robotics: Science and Systems), which introduces students to working with complex robotic platforms and autonomous navigation through ground vehicles with pre-built software. In 16.85, students apply those same principles to flight, starting with a basic quadrotor drone and an entirely blank slate on which to build their own navigation systems. The vehicles are then tested on an obstacle course featuring dubious landing pads and uncertain terrain. Students work in large teams (for this first run, two teams of seven — the SLAMdunkers and the Spelunkers) designed to mirror real-world missions where coordination across roles is essential.
“The vehicles need to be able to differentiate between all these hidden risks that are in the mission and the environment that they’re in and still survive,” says How. “We really want the students to learn how to make a system that they have confidence in.”
Mission: Figure it out, together
“The specific mission we gave them this semester is to imagine that you are an aircraft of some kind, and you’ve got to go and explore the surface of an extraterrestrial body like Mars or the moon,” Roy explains. “You need to use onboard sensors to fly around and explore, build a map, identify interesting objects, and then land safely on what is probably not a flat surface, or not a perfectly horizontal surface.”
A mission of this magnitude is far too complex for any one engineer to tackle alone, but that too poses a challenge for a large team. “The hardest problems these days are coordination problems,” says Andrew Fishberg, a graduate student in the Aerospace Controls Laboratory and one of three teaching assistants (TAs) for the course. “To use the robotics term, a team of this size is something of a heterogeneous swarm. Not everyone has the same skill set, but everyone shows up with something to contribute, and managing that together is a challenge.”
The challenge asks students to apply multiple types of “systems thinking” to the task. Relationships, interdependencies, and feedback loops are critical to their software architecture, and equally important in how students communicate and coordinate with their teammates. “Writing the reports and communicating with a team feels like overhead sometimes, but if you don’t communicate, you have a team of one,” says Fishberg. “We don’t have these ‘solo inventor’ situations where one person figures everything out anymore — it’s hundreds of people building this huge thing.”
The new faces of flight
Students in the class say they are eager to enter the rapidly evolving field, working with unconventional tools and vehicles that go beyond traditional applications.
“We continue to send rovers to extraterrestrial bodies. But there is an increasing interest in deploying unmanned systems to explore Earth,” says Roy. “There’s lots of places on Earth where we want to send robots to go and explore, places where it’s hazardous for humans to go.” That expanding set of applications is exactly what draws students to the field.
“I was really excited for the idea of a new class, especially one that was focused on autonomy, because that’s where I see my career going,” says senior Norah Miller. “This class has given me a really great experience in what it feels like to develop software from zero to a full flying mission.”
The Design and Testing of Autonomous Vehicles course offers a unique perspective for instructors and TAs who have known many of the students throughout their undergraduate careers. As a capstone, it provides an opportunity to see that growth come full circle. “A couple years ago we’re solving differential equations, and now they’re implementing software they wrote on a quadrotor in the high bay,” says How.
After weeks of learning, building, testing, refinement, and finally, flight, the results reflected the goals of the course. “It was exactly what we wanted to see happen,” says Roy. “We gave them a pretty challenging mission. We gave them hardware that should be capable of completing the mission, but not guaranteed. And the students have put in a tremendous amount of effort and have really risen to the challenge.”
W.M. Keck Foundation to support research on healthy aging at MIT
A prestigious grant from the W.M. Keck Foundation to Alison E. Ringel, an MIT assistant professor of biology, will support groundbreaking healthy aging research at the Institute.
Ringel, who is also a core member of the Ragon Institute of Mass General Brigham, MIT, and Harvard, will draw on her background in cancer immunology to create a more comprehensive biomedical understanding of the cause and possible treatments for aging-related decline.
“It is such an honor to receive this grant,” Ringel says. “This support will enable us to draw new connections between immunology and aging biology. As the U.S. population grows older, advancing this research is increasingly important, and this line of inquiry is only possible because of the W.M. Keck Foundation.”
Understanding how to extend healthy years of life is a fundamental question of biomedical research with wide-ranging societal implications. Although modern science and medicine have greatly expanded global life expectancy, it remains unclear why everyone ages differently; some maintain physical and cognitive fitness well into old age, while others become debilitatingly frail later in life.
Our immune systems are adaptable, but they do naturally decline as we get older. One critical component of our immune system is CD8+ T cells, which are known to target and destroy cancerous or damaged cells. As we age, our tissues accumulate cells that can no longer divide. These senescent cells are present throughout our lives, but reach seemingly harmful levels as a normal part of aging, causing tissue damage and diminished resilience under stress.
There is now compelling evidence that the immune system plays a more active role in aging than previously thought.
“Decades of research have revealed that T cells can eliminate cancer cells, and studies of how they do so have led directly to the development of cancer immunotherapy,” Ringel says. “Building on these discoveries, we can now ask what roles T cells play in normal aging, where the accumulation of senescent cells, which are remarkably similar to cancer cells in some respects, may cause health problems later in life.”
In animal models, reconstituting elements of a young immune system has been shown to improve age-related decline, potentially due to CD8+ T cells selectively eliminating senescent cells. CD8+ T cells progressively losing the ability to cull senescent cells could explain some age-related pathology.
Ringel aims to build models for the express purpose of tracking and manipulating T cells in the context of aging and to evaluate how T cell behavior changes over a lifespan.
“By defining the protective processes that slow aging when we are young and healthy, and defining how these go awry in older adults, our goal is to generate knowledge that can be applied to extend healthy years of life,” Ringel says. “I’m really excited about where this research can take us.”
The W.M. Keck Foundation was established in 1954 in Los Angeles by William Myron Keck, founder of The Superior Oil Co. One of the nation’s largest philanthropic organizations, the W.M. Keck Foundation supports outstanding science, engineering, and medical research. The foundation also supports undergraduate education and maintains a program within Southern California to support arts and culture, education, health, and community service projects.
Les Perelman, expert in writing assessment and champion of writing education, dies at 77
Leslie “Les” Perelman, an influential figure in college writing assessment; a champion of writing instruction across all subject matters for over three decades at MIT; and a former MIT associate dean for undergraduate education, died on Nov. 12, 2025, at home in Lexington, Massachusetts. He was 77.
A Los Angeles native, Perelman attended the University of California at Berkeley, arriving during its lively activist years, and in 1980 received his PhD in English from the University of Massachusetts at Amherst. After stints at the University of Southern California and Tulane University, he returned to Massachusetts — to MIT — in 1987, and stayed for the next 35 years.
Perelman became best known for his dogged critique of autograding systems and writing assessments that didn’t assess actual college writing. The Boston Globe dubbed him “The man who killed the SAT essay.” He told NPR that colleges “spend the first year deprogramming [students] from the five-paragraph essay.”
His widow, MIT Professor Emerita Elizabeth Garrels, says that while attending a conference, Perelman — who was practically blind without his glasses — arranged to stand at one end of a room in order to “grade” essays held up for him on the other side. “He would call out the grade that each essay would likely receive on standardized scoring,” Garrels says. “And he was consistently right.” Perelman was doing exactly what the automated scorers do: he was, he said in the NPR interview, “mirroring how automated or formulaic grading systems often reward form over substance.”
Perelman also “ruffled a lot of feathers” in industry, says Garrels, with his 2020 paper documenting his BABEL (“Basic Automatic B.S. Essay Language”) Generator, which output nonsense that commercial autograders nevertheless gave top marks. He saved some of his most systematic criticism for autograders’ defenders in academia, at one point calling out peers at the University of Akron for the methodology in their widely touted paper claiming autograders performed just as well as human graders.
At least one service, though, ETS (the Educational Testing Service), partly welcomed Perelman’s critique by making its autograder available to him for testing. (Others, like Pearson and Vantage Learning, declined.) He discovered he could ace the tests, even when his essay included non-factual gibberish and typographical errors:
Teaching assistants are paid an excessive amount of money. The average teaching assistant makes six times as much money as college presidents. In addition, they often receive a plethora of extra benefits such as private jets, vacations in the south seas, a staring roles in motion pictures. Moreover, in the Dickens novel Great Expectation, Pip makes his fortune by being a teaching assistant. It doesn’t matter what the subject is, since there are three parts to everything you can think of.
MIT career
Within MIT, Perelman’s legacy was his push to embed writing instruction into the whole of MIT’s curriculum, not as standalone expository writing subjects, let alone as merely a writing exam that incoming students could use to pass out of writing subjects altogether. Supported by a $325,000 National Science Foundation grant, he convinced MIT to hire writing instructors who were also subject matter experts, often with STEM PhDs. They were tasked with collaborating with departments to plant writing instruction into both existing curricula and new subjects. That effort eventually became the Writing Across the Curriculum program (today named Writing, Rhetoric, and Professional Communication) with a staff of more than 30 instructors.
Building out the infrastructure wasn’t quick, however. Perelman’s successor, Suzanne Lane ’85, says it took him almost 15 years. It started with proving to others just how uneven writing instruction at MIT actually was. “A whole cohort of students who took a lot of writing classes or got communication instruction in various places would make great progress,” Lane says. “But it was definitely possible to get through all of MIT without doing much writing at all.”
To bolster his case, Perelman turned to alumni surveys. “The surveys asked how well MIT prepared you for your career,” says Lane. “The technical skills scored really high, but — what is horribly termed, sometimes, as ‘soft skills’ — communication skills, collaboration, etc., these scored really high on importance to career, but really low on how well MIT had prepared them.”
In other words, MIT alumni knew their stuff but were bad at communicating it, at a cost to their careers.
This led Perelman and others to push for a new undergraduate communication requirement. The NSF grant supported a 1997 pilot that designed experiments for communication-intensive courses; it was a huge success, with every department participating across 24 subjects and roughly 300 students. Following “lively” discussion at an April 1999 faculty meeting, MIT faculty approved a proposal to report on implementing the communication requirement, and a year later formally passed the requirement, effective fall 2001.
From that initial pilot of 24, there are now nearly 300 subjects that count toward the requirement, from class 1.013 (Senior Civil and Environmental Engineering Design) to 24.918 (Workshop in Linguistic Research).
Connections beyond MIT
Early in his career, Perelman worked with Vincent DiMarco, a literature scholar at the University of Massachusetts at Amherst, to publish “The Middle English Letter of Alexander to Aristotle” (Brill, 1978). With Wang Computers as publisher, he was a technical writer and project leader on the “DOS Release 3.30 User’s Reference Guide.” He edited a book and chapter on writing studies and assessment with New Jersey Institute of Technology professor Norbert Elliot. And in a project he was particularly proud of, he worked with the New South Wales Teachers Federation in 2018 to convince Australia to reject the adoption of an automated essay grading regime.
“Les was brilliant, with a Talmudic way of asking questions and entering academic debates,” says Nancy Sommers, whose work on undergraduate writing assessment at Harvard University paralleled Perelman’s. “I loved the way his eyes sparkled when he was ready to rip an adversary or a colleague who wasn’t up to his quick mind and vast, encyclopedic knowledge.”
Openness to rhetorical combat didn’t keep Perelman from being a wonderful friend, Sommers says. She recalls that he once waited for her at the airline gate with a sandwich and a smile after a canceled flight. “That was Les, so gracious, generous, anticipating the needs of friends, always there to offer sustenance and friendship.”
Donations in Perelman’s name can be made to UNICEF’s work supporting children in Ukraine, the Lexington Refugee Assistance Program, Doctors Without Borders, and the Ash Grove Movie Finishing Fund.
Coping with catastrophe
Each April in Japan, people participate in a tradition called “hanami,” or cherry-blossom viewing, where they picnic under the blooming trees. The tradition also serves a second purpose: the presence of people at these gatherings, often held by the water, helps solidify riverbanks and protect them from spring floods, addressing, however incrementally, the threat of natural disaster.
The practice of creating things that also protect against disasters can be seen all over Japan, where many new or renovated school buildings have design features unfamiliar to students elsewhere. In Tokyo, one elementary school has a roof swimming pool that stores water and is used to help the building’s toilets flush, plus an additional rainwater catchment tank and exterior stairs leading to a large balcony that wraps around one side of the building.
Why? Well, Japan is prone to natural disasters, such as tsunamis, earthquakes, and flooding. The country’s schools often double as evacuation sites for local residents, and design practices increasingly reflect this. In normal times, the roof pool is where students learn to swim and helps keep the school cool, and the large balcony is used by spectators watching the adjacent school athletics field. In emergencies, water storage is crucial, and the exterior stairs help people ascend quickly to the gymnasium, which is built on the second floor to keep evacuees safer during flooding.
Meanwhile, in one Tokyo district, rooftop solar power is now common. Some schools feature skylights and courtyards to bring in natural light. Again, these architectural features serve dual purposes. Solar power, for one, lowers annual operating costs, and it provides electricity even in case of grid troubles.
These are examples of what MIT scholar Miho Mazereeuw has termed “anticipatory design,” in which structures and spaces are built with dual uses, for daily living and for when crisis strikes.
“The idea is to have these proactive measures in place rather than being reactionary and jumping into action only after something has happened,” says Mazereeuw, an associate professor in MIT’s Department of Architecture and a leading expert on resilient design.
Now Mazereeuw has a new book on the subject, “Design Before Disaster: Japan’s Culture of Preparedness,” published by the University of Virginia Press. Based on many years of research, with extensive illustrations, Mazereeuw examines scores of successful design examples from Japan, both in terms of architectural features and the civic process that created them.
“I’m hoping there can be a culture shift,” Mazereeuw says. “Wherever you can invent design outcomes to help society be more resilient beforehand, it is not at exorbitant cost. You can design for exceptional everyday spaces but embed other infrastructure and flexibility in there, so when there is a flood event or earthquake, those buildings have more capability.”
Bosai and barbecue
Mazereeuw, who is also the head of MIT’s Urban Risk Lab, has been studying disaster preparedness for over 30 years. As part of the Climate Project at MIT, she is also one of the mission directors and has worked with communities around the world on resiliency planning.
Japan has a particularly well-established culture of preparedness, often referred to through the Japanese word “bosai.” Mazereeuw has been studying the country’s practices carefully since the 1990s. In researching the book, she has visited hundreds of sites in the country and talked to many officials, designers, and citizens along the way.
Indeed, Mazereeuw emphasizes, “A major theme in the book is connecting the top-down and bottom-up.” Some good design ideas come from planners and architects. Others come from community groups and local residents. All these sources are important.
“The Japanese government does invest a lot in disaster research and recovery,” Mazereeuw says. “But I would hate for people in other countries to think this isn’t possible elsewhere. It’s the opposite. There are a lot of examples in here that don’t cost extra, because of careful design through community participation.”
As one example, Mazereeuw devotes a chapter of the book to public parks, which are often primary evacuation spaces for residents in case of emergency. Some have outdoor cooking facilities, which in normal times are used for, say, a weekend barbecue or local community events but are also there in case of emergency. Some parks also have water storage, or restroom facilities designed to expand if needed, and many serve as flood reservoirs, protecting the surrounding neighborhood.
“The barbecue facilities are a great example of dual use, connecting the everyday with disaster preparedness,” Mazereeuw says. “You can bring food into this beautiful park, so you’re used to using this space for cooking already. The idea is that your cognitive map of where you should go is connected to fun things you have done in the past.”
Some of the parks Mazereeuw surveys in the book are tiny pocket parks, which are also filled with useful resilience tools.
“Anticipatory design does not have to be monumental,” Mazereeuw writes in the book.
Negotiating through design
To be sure, some disaster mitigation measures are difficult to enact. In the Naiwan district of Kesennuma, as Mazereeuw outlines in the book, much of the local port area was destroyed in the 2011 tsunami, and the government wanted to build a seawall as part of the reconstruction plan. Some local residents and fishermen were unenthusiastic; a seawall could limit ocean access. Finally, after extended negotiations, designers created a seawall integrated into a new commercial district with cafes and stores, as well as new areas of public water access.
“This project used the power of design to negotiate between prefectural and local regulations, structural integrity and aesthetics, ocean access and safety,” Mazereeuw says.
Ultimately, working to build a coalition in support of resilience measures can help create more interesting and useful designs.
Other scholars have praised “Design Before Disaster.” Daniel P. Aldrich, a professor at Northeastern University, has called the book a “well-researched, clearly written investigation” into Japanese disaster-management practices, adding that any officials or citizens around the world “who seek to keep residents and communities safe from shocks of all kinds will learn something important from this book. It sets a high bar for future scholarship in the field.”
For her part, Mazereeuw emphasizes, “We can learn from the Japanese example, but it’s not a copy-paste thing. The book is so people can understand the essence of it and then create their own disaster preparedness culture and approach. This should be an all-hands process. Emergency management is not about relying on managers. It’s figuring out how we all play a part.”
Featured video: Coding for underwater robotics
During a summer internship at MIT Lincoln Laboratory, Ivy Mahncke, an undergraduate student of robotics engineering at Olin College of Engineering, took a hands-on approach to testing algorithms for underwater navigation. She first discovered her love for working with underwater robotics as an intern at the Woods Hole Oceanographic Institution in 2024. Drawn by the chance to tackle new problems and cutting-edge algorithm development, Mahncke began an internship with Lincoln Laboratory's Advanced Undersea Systems and Technology Group in 2025.
Mahncke spent the summer developing and troubleshooting an algorithm that would help a human diver and robotic vehicle collaboratively navigate underwater. The lack of traditional localization aids — such as the Global Positioning System, or GPS — in an underwater environment posed challenges for navigation that Mahncke and her mentors sought to overcome. Her work in the laboratory culminated in field tests of the algorithm on an operational underwater vehicle. Accompanying group staff to field test sites in the Atlantic Ocean, Charles River, and Lake Superior, Mahncke had the opportunity to see her software in action in the real world.
"One of the lead engineers on the project had split off to go do other work. And she said, 'Here's my laptop. Here are the things that you need to do. I trust you to go do them.' And so I got to be out on the water as not just an extra pair of hands, but as one of the lead field testers," Mahncke says. "I really felt that my supervisors saw me as the future generation of engineers, either at Lincoln Lab or just in the broader industry."
Says Madeline Miller, Mahncke's internship supervisor: "Ivy's internship coincided with a rigorous series of field tests at the end of an ambitious program. We figuratively threw her right in the water, and she not only floated, but played an integral part in our program's ability to hit several reach goals."
Lincoln Laboratory's summer research program runs from mid-May to August. Applications are now open.
Video by Tim Briggs/MIT Lincoln Laboratory | 2 minutes, 59 seconds
Turning curiosity about engineering into careers
It’s not every day that aspiring teenage engineers can see firsthand how planes are built. But a collaboration between nonprofit Engineering Tomorrow, aerospace firm Boeing, and alumni of the MIT Leaders for Global Operations (LGO) program working at Boeing is aiming to turn curiosity about aerospace engineering into possible careers for young students.
Boeing is LGO’s longest-standing industry collaborator, hosting LGO internships, recruiting LGO alumni, and organizing plant treks for future engineers. Engineering Tomorrow, a nonprofit dedicated to inspiring the next generation of engineers, frames the U.S. engineering workforce shortage as an economic and national security issue — and says the shortage isn’t just in engineers with degrees, but also in trained operators and technicians. The organization also recognizes that many kids start as natural tinkerers but get scared off by higher-level math.
To bring more kids into the engineering fold, the organization delivers no-cost engineering labs to middle and high school students by collaborating with influential mentors, such as LGO graduates at organizations like Boeing.
“We want to inspire students by exposing them to professional engineers to illustrate the pathways for them to be problem-solvers in society,” explains Alex Dickson, Engineering Tomorrow’s program coordinator. “The demand for engineers has just gone up dramatically. It’s about being competitive on a global scale. We try to illustrate to students that there are many pathways into these careers.”
How MIT LGO makes engineering dreams a reality
Engineering Tomorrow’s collaboration with MIT LGO grew organically, through a robust alumni network. One of the nonprofit’s board members, LGO alumna Kristine Budill SM ’93, recognized a shared interest: the sizable Boeing LGO community wanted concrete ways to connect more directly with communities, and Engineering Tomorrow does just that.
Budill connected the organization with fellow LGO alumnus Cameron Hoffman MBA ’24, SM ’24, a Boeing manufacturing strategy manager who helped translate that shared mission into a real-world opportunity: an on-site Boeing experience that made engineering tangible for high school students.
The result: One lucky high school engineering design class from Mercer Island, Washington, recently got to experience Boeing 737s being built in person. In November 2025, 30 ninth graders at Mercer Island High School traveled to Boeing’s Renton, Washington, facility to learn how planes are constructed and understand what it really takes to have a career building them.
From the outset, the goal was to avoid the typical spectator field trip. Instead, Engineering Tomorrow and Hoffman designed a structured, multi-touch experience that prepared students before they ever set foot in the factory.
First, an Engineering Tomorrow liaison introduced key aerospace concepts and an associated lab challenge to the class via Zoom, then returned in person to guide Mercer students through a hands-on airplane-design lab, helping them translate theory into practice and answer questions about engineering pathways. Students then visited Boeing’s production facility, where they spoke with engineers from multiple disciplines — not just aerospace — and toured the factory floor.
By the time they arrived, students weren’t just impressed by the scale of the operation; they understood what they were seeing, asked informed questions, and left with a sharp sense of the many routes into engineering and manufacturing careers, Dickson says.
“Cameron set up an incredible on-site experience for the students that really made real-world engineering a more tangible experience for them,” Dickson says. “Many people think Boeing is just about aerospace engineering, because Boeing is an aerospace company. But they got to hear from mechanical engineers, electrical engineers, and workers with all sorts of backgrounds who made it clear that there’s no one set pathway into engineering or manufacturing.”
Then came the best part: Students got a VIP tour of the production facility, led by Boeing staff.
A snack and a tour
“It’s awe-inspiring: Dozens of unfinished airplanes are under one roof, and you see all of the real-world production engineering that goes into something that oftentimes we take for granted when we step onto an airplane,” Dickson says.
When the big day arrived, students also met with engineering teams to learn about the history of the plant, complete with fun facts geared to high schoolers. (Did you know that a 737 takes off or lands every two seconds?) They learned about different career pathways, from design to production. It was easy to envision themselves working there, Hoffman says.
“Boeing is a company that a lot of folks work at for their entire career and take a lot of pride in the work that they do. We showed them: What does that look like? Do you want to be an engineer for your entire career? Do you want to be a people leader in the facility? Do you want to be a technical expert?” Hoffman says. “And the kids asked great questions.”
Then, the students — after snacks, of course — toured the production floor, where engineers assembled planes and tested parts. For Hoffman, that experience was deeply personal: He wished he’d experienced something similar growing up.
A 10-year Boeing veteran, Hoffman led the group throughout. He started at Boeing in 2015 as a recent college graduate and encountered several LGO alums there who recommended the program.
“I’d been deeply interested in manufacturing since my early undergrad days. Boeing was an amazing place to work because our products are so complex, and the production systems are so fascinating,” he recalls.
Over time, he wanted to transition into people leadership with an MBA degree. His Boeing colleagues, well-represented among the LGO ranks, urged him toward the MIT program.
“LGO’s network is what makes it so special,” he says.
Upon returning to Boeing after completing his LGO degrees, Hoffman joined Boeing’s LGO/Tauber Leadership Development Program, which keeps him regularly engaged with the MIT LGO program, including through service on the MIT LGO Alumni Board. On the board, Hoffman focuses on the social good committee, and the Engineering Tomorrow high school partnership was a perfect fit for that committee’s goals.
For Hoffman, these leadership initiatives are what make LGO distinctive.
“When you graduate from a program like LGO, you’re often so forward-looking. It helps to take time to reflect on what an inspiration you can be to the people who come after you. MIT LGO focuses on both engineering and business. Our students want to study engineering because they want to be problem-solvers. The LGO program, which is at the intersection of engineering and business leadership, is just an incredible inspirational program for young students to see,” Hoffman says.
It was an opportunity he didn’t get as an ambitious young high schooler.
“As a kid, the only engineering class that was available to me was architectural drafting. If this opportunity was offered to me when I was in high school, I would’ve jumped out of my shoes at the chance. You get to see products that are just so complex; you really can't believe it until you see it,” he says.
Setting a positive precedent across industries
Mercer Island engineering design teacher Michael Ketchum had high praise for the field trip, considering it transformative for his students. He estimates that roughly 80 percent of them want to be engineers. He was impressed that the experience was more than just a tour: It also included classroom support and airplane design kits, reinforcing core engineering concepts. The collaboration allowed him to broaden a previously CAD-focused class into one that also includes 3D printing, electronics, and aerospace applications.
“For freshmen and sophomores, field trips are key. They stick in their head a bit longer than just school learning. If they get to see people getting excited talking about engineering, and it embeds it a little bit better in their brain,” Ketchum says.
In a post-trip survey, students reported being more likely to consider engineering after the experience.
“They expressed the idea that the conversations with engineers inspired them, and 100 percent of students said that seeing a production facility was one of the coolest parts of the program, which led to them being more inclined to want to be an engineer,” Engineering Tomorrow’s Dickson says.
Next year, the LGO network hopes to expand the collaboration to additional companies, from health care to biotech.
“The goal is to continue to create exposure. This visit was a really great proof of concept to see what’s valuable to students,” Hoffman says — and, ideally, future LGO alumni.
Designing a more resilient future for plants, from the cell up
In a narrow strip of land along the Andes mountain range in central Chile, an Indigenous community has long celebrated the bark of a rare tree for its medicinal properties. Modern science only recently caught up to the tradition, finding the so-called soapbark tree contains potent compounds for boosting the human immune system.
The molecules have since been harnessed to make the world’s first malaria vaccine and to boost the effectiveness of vaccines for everything from shingles to Covid-19 and cancer. Unfortunately, unsustainable harvesting has threatened the existence of the tree species, leading the Chilean government to severely restrict lumbering.
The soapbark tree’s story is not unique. Plants are the foundation of industries such as pharmaceuticals, beauty, agriculture, and forestry, yet around 45 percent of plant species are in danger of going extinct. At the same time, human demand for plant products continues to rise. Ashley Beckwith SM ’18, PhD ’22 believes meeting that demand requires rethinking how plants are grown. Her company, Foray Bioscience, aims to make plant production faster, more adaptable, and less damaging to fragile natural supply chains.
The company is working to make it possible to grow any plant or plant product from single cells using biomanufacturing powered by artificial intelligence. Foray has already developed molecules, materials, and fabricated seeds with various partners, including academic researchers, nurseries, conservationists, and companies.
In one new partnership, Foray is working with the nursery West Coast Chestnut to deploy a more disease-resistant version of the chestnut trees that once filled forests across the eastern U.S. but have since been wiped out. The project is just one example of how AI and plant science can be leveraged to protect the plant populations that bring so much value to humans and the planet.
“Plant systems underpin every aspect of our daily lives, from the air we breathe to the food we eat, the clothes we wear, the homes we live in, and more,” Beckwith says. “But these plant systems are fragile and in decline. We need new strategies to ensure lasting access to the plant products and ecosystems we depend on.”
From human cells to plants
Beckwith focused on biology and materials manufacturing as a master’s student in MIT’s Department of Mechanical Engineering. Her research involved building platforms to enable precision treatments for human diseases. After graduating, she worked on a regenerative, self-sufficient farm that mimicked natural ecosystems, and began thinking about applying her work to address the fragility of plant systems.
Beckwith returned to MIT for her PhD to explore the idea of regenerative plant systems, studying in the lab of Research Scientist Luis Fernando Velásquez-García in the Department of Electrical Engineering and Computer Science.
“To address organ shortages for transplants, scientists aspire to grow kidneys that don’t have to be harvested from a human using tissue engineering,” Beckwith says. “What if we could do something similar for our plant systems?”
Beckwith went on to publish papers showing she could grow wood-like plant material in a lab. By adjusting certain chemicals, the researchers could precisely control properties like stiffness and density.
“I was thinking about how we build products, like wood, from the cell up instead of extracting from the top down,” Beckwith recalls. “It led to some foundational demonstrations that underpin the work we do at Foray today, but it also opened up questions: Where are these new approaches most urgently needed? What would it take to apply these tools where they’re needed, fast?”
Beckwith began exploring the idea of starting a company in 2021, participating in accelerator programs run by the E14 Fund and The Engine — both MIT-affiliated initiatives designed to support breakthrough science ventures. She officially founded Foray in February of 2022 after completing her PhD.
“Our early research showed that we could grow wood-like material directly from plant cells,” she says. “We are now able to grow not just wood without the tree, but also produce harvest-free molecules, materials, and even seeds by steering single cells to develop precisely into the products we need without ever having to grow the whole plant.”
Beckwith describes her lab-grown wood innovation as analogous to Uber if there were no internet — a powerful idea without the digital backbone to scale. To create the data foundation and ecosystem to scale plant innovation, Foray is now building the Pando AI platform to enable rapid discovery and deployment of these novel plant solutions.
“Pando functions like a Google Maps for plant growth,” Beckwith says. “It helps scientists navigate a really complex field of variables and arrive at a research destination efficiently — because to steer a cell to produce a particular product, there might be 50 different variables to tweak. It would take a lifetime to explore each of those, and that’s one reason why plant research is so slow today.”
The “operating system for plant science”
Foray’s team includes experts in plant biology, artificial intelligence, machine learning, computational biology, and process engineering.
“This is a very intersectional problem,” Beckwith says. “One of the most exciting things for me is building this highly capable team that is able to deliver solutions that could never be created in a silo.”
After a year of pilot collaborations with select researchers, Foray is preparing for a broader public launch of its Pando platform early this year.
Over the next several years, Beckwith hopes Foray will serve as an innovation engine for researchers and companies working across agriculture, materials, pharmaceuticals, and conservation. Foray already uses Pando internally to create plant solutions that overcome limitations in natural production.
“Fabricated seeds are one capability that we’re really excited about,” Beckwith says. “Being able to grow seeds from cells lets you create really timely and scalable seed supplies to address gaps in restoration, or shorten the path to market for new, resilient crop varieties. There’s a lot to be gained by making our plant systems more adaptive.”
“We want to shorten plant development timelines, so solutions can be built in months, not decades,” Beckwith says. “We’re excited to be building tools that represent a step change in the way plant production can be done.”
As Foray’s products scale and more researchers use its platform, the company is hoping to help the plant science industry respond to some of our planet’s most pressing challenges.
“Right now, we’re focused on plants in labs,” Beckwith says. “In five years, we aim to be the operating system for all of plant science, making it possible to build anything from a single plant cell.”
Tackling industry’s burdensome bubble problem
In industrial plants around the world, tiny bubbles cause big problems. Bubbles clog filters, disrupt chemical reactions, reduce throughput during biomanufacturing, and can even cause overheating in electronics and nuclear power plants.
MIT Professor Kripa Varanasi has long studied methods to reduce bubble disruption. In a new study, Varanasi, PhD candidate Bert Vandereydt, and former postdoc Saurabh Nath have uncovered the physics behind a promising type of debubbling membrane material that is “aerophilic,” from the Greek for “air-loving.” The material can be used in systems of all types, allowing anyone to optimize their machine’s performance by breaking free from bubble-borne disruptions.
“We have figured out the structure of these bubble-attracting membrane materials to allow gas to evacuate in the fastest possible manner,” says Varanasi, the senior author of the study. “Think of trying to push honey through a coffee strainer: It’s not going to go through easily, whereas water will move through, and gas will move through even more easily. But even gas will reach a throughput limit, which depends on the properties of the gas and the liquid involved. By uncovering those limits, our research allows engineers to build better membranes for their systems.”
In the paper, which appears in the journal PNAS this week, the researchers distill their findings into a graph that allows anyone to plot a few characteristics of their system — like the viscosity of their gas and the surrounding liquid — and find the best membrane to make bubble removal near-instantaneous. Using their approach, the research team demonstrated a 1,000-fold acceleration in bubble removal in a bioreactor that’s used in the pharmaceutical industry, food and beverage manufacturing, cosmetics, chemical production, and more.
The researchers say the membranes, which repel water, could be used to improve the throughput of a wide range of advanced systems whose operation has been plagued to date by bubbles.
Better bubble breakers
Companies today try everything to burst bubbles. They deploy foam breakers that physically shear them, chemicals that act as antifoaming agents, even ultrasound. Such approaches have drawbacks in tightly controlled environments like bioreactors, where chemical defoamers can be toxic to cells, while mechanical agitation can damage delicate biological materials. Similar limitations apply to other industries where contamination or physical disturbance is unacceptable. As a result, many applications that cannot tolerate chemical defoamers or mechanical intervention remain fundamentally bottlenecked by foam formation.
“Biomanufacturing has really taken off in the last 10 years,” Vandereydt says. “We’re making a lot more out of biologic systems like cells and bacteria, and our reactors have increased in throughput from 5 million cells per milliliter of solution to 100 million cells per milliliter. However, bubble evacuation and defoaming haven’t kept up — it’s becoming a significant rate-limiting step.”
To better understand the interaction between aerophilic membranes and bubbles, the MIT researchers used MIT.nano facilities to create a series of tiny porous silicon membranes with holes ranging in size from 10 microns to 200 microns. They coated the membranes with hydrophobic silica nanoparticles.
Placing them on the surface of different liquids, the researchers released single bubbles with varying viscosity and recorded the interaction using high-speed imaging as each collided with the membranes.
“We started by trying to take a very complicated system, like foam being generated in a bioreactor, and study it in the simplest form to understand what’s happening,” Vandereydt says.
At first, the bigger the holes, the faster the bubbles disappeared. The researchers also changed the bubble gas from air to hydrogen, which has half the viscosity, and found the speed of bubble destruction doubled.
But after about a 1,000-fold acceleration in bubble destruction, the researchers hit a wall no matter how big the membrane holes were. They had run up against a different physical limit, which they set out to investigate.
The researchers then tried changing the viscosity of their liquid, from water to something closer to honey. They found viscosity only plays a role in the speed of bubble destruction when the liquid is at least 200 times as viscous as water. Further experiments revealed the biggest factor for slowing bubble evacuation was inertial resistance in the liquid.
“Through experimentation, we showed there are three different limits [to the speed of bubble destruction],” Vandereydt says. “There is the viscous limit of the gas in a low-viscosity, low-permeability setup. Then there’s the viscous resistance of the liquid in the high-permeability, high-viscosity regime. Then we have the inertial limit of the liquid.”
The team used a bioreactor to experimentally validate their findings and charted them in a map that engineers can use to enter the characteristics of their system and find both the best membrane for their situation and the biggest factor slowing bubble evacuation.
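To illustrate how such a design map might be consulted, here is a deliberately crude sketch that picks the likely limiting regime from a couple of system properties. Only the 200-times-water viscosity threshold comes from the study as reported; the pore-size cutoff and the function itself are hypothetical placeholders, not the paper’s quantitative map.

```python
# Toy regime selector for bubble evacuation through an aerophilic membrane.
# Hypothetical illustration only: the real map in the paper is quantitative
# and dimensionless; the 20-micron pore cutoff here is an invented stand-in.

WATER_VISCOSITY = 1.0e-3  # Pa·s, dynamic viscosity of water

def limiting_regime(liquid_viscosity_pa_s, pore_diameter_um):
    """Guess which of the three reported limits dominates.

    - Very small pores: the gas struggles to squeeze through,
      so the gas's own viscosity sets the limit.
    - Liquid at least 200x as viscous as water (the article's
      threshold): the liquid's viscous resistance dominates.
    - Otherwise: the liquid's inertia, i.e. how fast liquid can
      refill the space behind the departing gas, sets the limit.
    """
    if pore_diameter_um < 20:  # hypothetical cutoff for illustration
        return "gas viscous limit"
    if liquid_viscosity_pa_s >= 200 * WATER_VISCOSITY:
        return "liquid viscous limit"
    return "liquid inertial limit"
```

An engineer consulting the real map would plot the measured gas and liquid properties of a specific system; the point of the sketch is only that the answer falls into one of three regimes, each pointing to a different membrane design fix.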
The science of bubbles
The research should be useful for anyone trying to accelerate the destruction of bubbles in their industrial device, but it also improves our understanding of the physics underpinning bubble dynamics.
“We have identified three different throughput limits, and the physics behind those limits, and we have reduced it to very simple laws,” Nath explains. “How fast you can go is first dictated between surface tension and inertia. But you may also hit a different limit, where the pores are extremely small, so the gas finds it difficult to move through them. In that case, the viscosity of the gas is meaningful. But you may also have a bubble which was originally in something like honey, which means it’s not enough the gas is moving, the liquid also must refill the space behind it. No matter what your conditions are, you will be switching between these three limits.”
Varanasi says health care companies, chemical manufacturers, and even breweries have expressed interest in the work. His team plans to commercially develop the membranes for industrial use.
“These physical insights allowed us to design membranes that, quite surprisingly, evacuate bubbles even faster than a free liquid-gas interface,” says Varanasi.
The researchers’ design map could also be used to model natural systems and even liquid-liquid systems, which could be used to create membranes that remove oil spills from water or help efficiently extract hydrogen from water-splitting electrodes. Ultimately the biggest beneficiaries of the findings will be companies grappling with bubbles.
“Though small, bubbles quietly dictate the performance limits of many advanced technologies,” says Varanasi. “Our results provide a way to eliminate that bottleneck and unlock entirely new levels of performance across industries. These membranes can be readily retrofitted into existing systems, and our framework allows them to be rapidly designed and optimized for specific applications. We’re excited to work with industry to translate these insights into impact.”
The work was supported, in part, by MIT Lincoln Laboratory and used MIT.nano facilities.
New method could increase LLM training efficiency
Reasoning large language models (LLMs) are designed to solve complex problems by breaking them down into a series of smaller steps. These powerful models are particularly good at challenging tasks like advanced programming and multistep planning.
But developing reasoning models demands an enormous amount of computation and energy due to inefficiencies in the training process. While a few high-power processors continuously work through complicated queries, the others in the group sit idle.
Researchers from MIT and elsewhere found a way to use this computational downtime to efficiently accelerate reasoning-model training.
Their new method automatically trains a smaller, faster model to predict the outputs of the larger reasoning LLM, which the larger model verifies. This reduces the amount of work the reasoning model must do, accelerating the training process.
The key to this system is its ability to train and deploy the smaller model adaptively, so it kicks in only when some processors are idle. By leveraging computational resources that would otherwise have been wasted, it accelerates training without incurring additional overhead.
When tested on multiple reasoning LLMs, the method doubled the training speed while preserving accuracy. This could reduce the cost and increase the energy efficiency of developing advanced LLMs for applications such as forecasting financial trends or detecting risks in power grids.
“People want models that can handle more complex tasks. But if that is the goal of model development, then we need to prioritize efficiency. We found a lossless solution to this problem and then developed a full-stack system that can deliver quite dramatic speedups in practice,” says Qinghao Hu, an MIT postdoc and co-lead author of a paper on this technique.
He is joined on the paper by co-lead author Shang Yang, an electrical engineering and computer science (EECS) graduate student; Junxian Guo, an EECS graduate student; senior author Song Han, an associate professor in EECS, member of the Research Laboratory of Electronics and a distinguished scientist of NVIDIA; as well as others at NVIDIA, ETH Zurich, the MIT-IBM Watson AI Lab, and the University of Massachusetts at Amherst. The research will be presented at the ACM International Conference on Architectural Support for Programming Languages and Operating Systems.
Training bottleneck
Developers want reasoning LLMs to identify and correct mistakes in their critical thinking process. This capability allows them to ace complicated queries that would trip up a standard LLM.
To teach them this skill, developers train reasoning LLMs using a technique called reinforcement learning (RL). The model generates multiple potential answers to a query, receives a reward for the best candidate, and is updated based on the top answer. These steps repeat thousands of times as the model learns.
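The generate, reward, and update cycle described above can be sketched in a few lines of Python. The scalar `skill` and the reward rule below are toy stand-ins invented for illustration, not the actual RL objective or model, but the three steps mirror the loop:

```python
import random

random.seed(0)  # deterministic toy run

def train_step(skill, n=4):
    """One toy RL step: rollout, reward the best answer, update the model.

    `skill` is a scalar stand-in for model quality; a real step would
    update billions of network weights instead.
    """
    # 1. Rollout: sample several candidate answers; better models tend
    #    to draw higher-reward candidates.
    rewards = [min(1.0, random.random() * (0.5 + skill)) for _ in range(n)]
    # 2. Reward: keep only the best-scoring candidate.
    best = max(rewards)
    # 3. Update: nudge the model toward the rewarded behavior.
    return skill + 0.1 * (best - skill)

skill = 0.0
for _ in range(200):
    skill = train_step(skill)
print(f"model quality after 200 toy RL steps: {skill:.2f}")
```

Repeating this loop thousands of times, with the rollout step dominating the cost, is what makes RL training so expensive.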
But the researchers found that the process of generating multiple answers, called rollout, can consume as much as 85 percent of the execution time needed for RL training.
“Updating the model — which is the actual ‘training’ part — consumes very little time by comparison,” Hu says.
This bottleneck occurs in standard RL algorithms because all processors in the training group must finish their responses before they can move on to the next step. Because some processors might be working on very long responses, others that generated shorter responses wait for them to finish.
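The cost of that synchronization is easy to quantify. In this sketch, the per-processor rollout times are hypothetical, chosen to mimic one long response holding up the rest of the group:

```python
# Hypothetical per-processor rollout times (seconds) for one RL step:
# one processor draws a very long response while the rest finish early.
rollout_times = [12, 15, 14, 90, 13, 16, 14, 15]

step_time = max(rollout_times)          # everyone waits for the slowest
busy = sum(rollout_times)               # processor-seconds actually computing
total = step_time * len(rollout_times)  # processor-seconds reserved for the step
idle_fraction = 1 - busy / total

print(f"{idle_fraction:.0%} of processor time idle this step")
```

Even one straggler can leave most of the group's compute unused, which is exactly the downtime TLT reclaims.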
“Our goal was to turn this idle time into speedup without any wasted costs,” Hu adds.
They sought to use an existing technique, called speculative decoding, to speed things up. Speculative decoding involves training a smaller model called a drafter to rapidly guess the future outputs of the larger model.
The larger model verifies the drafter’s guesses, and the responses it accepts are used for training.
Because the larger model can verify all the drafter’s guesses at once, rather than generating each output sequentially, it accelerates the process.
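A minimal sketch of that propose-and-verify loop, using deterministic toy "models" in place of real LLMs (the next-token rules here are invented purely for illustration):

```python
def target_next_token(prefix):
    """Toy 'large model': a deterministic next-token rule standing in for an LLM."""
    return (sum(prefix) + 1) % 5

def drafter_next_token(prefix):
    """Toy drafter: a cheap approximation that is right most of the time."""
    guess = (sum(prefix) + 1) % 5
    return guess if len(prefix) % 4 else (guess + 1) % 5  # occasionally wrong

def speculative_step(prefix, k=4):
    # The drafter proposes k tokens sequentially (cheap to run).
    draft = []
    for _ in range(k):
        draft.append(drafter_next_token(prefix + draft))
    # The target verifies all k proposals at once: it accepts the longest
    # prefix matching what it would have generated itself.
    accepted = []
    for tok in draft:
        if tok == target_next_token(prefix + accepted):
            accepted.append(tok)
        else:
            break
    # On a mismatch (or an exhausted draft), the target emits one token
    # itself, so every step makes progress and the output is unchanged.
    accepted.append(target_next_token(prefix + accepted))
    return accepted

sequence = [0]
while len(sequence) < 12:
    sequence += speculative_step(sequence)
print(sequence[:12])
```

Because verification checks several tokens per call to the large model, the output is identical to what the large model would produce alone, only faster.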
An adaptive solution
But in speculative decoding, the drafter model is typically trained only once and remains static. This makes the technique infeasible for reinforcement learning, since the reasoning model is updated thousands of times during training.
A static drafter would quickly become stale and useless after a few steps.
To overcome this problem, the researchers created a flexible system known as “Taming the Long Tail,” or TLT.
The first part of TLT is an adaptive drafter trainer, which uses free time on idle processors to train the drafter model on the fly, keeping it well-aligned with the target model without using extra computational resources.
The second component, an adaptive rollout engine, manages speculative decoding to automatically select the optimal strategy for each new batch of inputs. This mechanism changes the speculative decoding configuration based on the training workload features, such as the number of inputs processed by the draft model and the number of inputs accepted by the target model during verification.
In addition, the researchers designed the draft model to be lightweight so it can be trained quickly. TLT reuses some components of the reasoning model training process to train the drafter, leading to extra gains in acceleration.
“As soon as some processors finish their short queries and become idle, we immediately switch them to do draft model training using the same data they are using for the rollout process. The key mechanism is our adaptive speculative decoding — these gains wouldn’t be possible without it,” Hu says.
They tested TLT across multiple reasoning LLMs that were trained using real-world datasets. The system accelerated training between 70 and 210 percent while preserving the accuracy of each model.
In addition, the small drafter model emerges from the process as a free byproduct that can readily be used for efficient deployment.
In the future, the researchers want to integrate TLT into more types of training and inference frameworks and find new reinforcement learning applications that could be accelerated using this approach.
“As reasoning continues to become the major workload driving the demand for inference, Qinghao’s TLT is great work to cope with the computation bottleneck of training these reasoning models. I think this method will be very helpful in the context of efficient AI computing,” Han says.
This work is funded by the MIT-IBM Watson AI Lab, the MIT AI Hardware Program, the MIT Amazon Science Hub, Hyundai Motor Company, and the National Science Foundation.
Mixing generative AI with physics to create personal items that work in the real world
Have you ever had an idea for something that looked cool, but wouldn’t work well in practice? When it comes to designing things like decor and personal accessories, generative artificial intelligence (genAI) models can relate. They can produce creative and elaborate 3D designs, but when you try to fabricate those blueprints into real-world objects, the results usually don’t survive everyday use.
The underlying problem is that genAI models often lack an understanding of physics. While a tool like Microsoft’s TRELLIS system can create a 3D model from a text prompt or image, its design for a chair, for example, may be unstable or have disconnected parts. The model doesn’t fully understand what your intended object is designed to do, so even if your seat can be 3D printed, it would likely fall apart under the force of someone sitting down.
In an attempt to make these designs work in the real world, researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) are giving generative AI models a reality check. Their “PhysiOpt” system augments these tools with physics simulations, making blueprints for personal items such as cups, keyholders, and bookends work as intended when they’re 3D printed. It rapidly tests if the structure of your 3D model is viable, gently modifying smaller shapes while ensuring the overall appearance and function of the design is preserved.
You can simply type what you want to create and what it’ll be used for into PhysiOpt, or upload an image to the system’s user interface, and in roughly half a minute, you’ll get a realistic 3D object to fabricate. For example, CSAIL researchers prompted it to generate a “flamingo-shaped glass for drinking,” which they 3D printed into a drinking glass with a handle and base resembling the tropical bird’s leg. As the design was generated, PhysiOpt made tiny refinements to ensure the design was structurally sound.
“PhysiOpt combines GenAI and physically-based shape optimization, helping virtually anyone generate the designs they want for unique accessories and decorations,” says MIT electrical engineering and computer science (EECS) PhD student and CSAIL researcher Xiao Sean Zhan SM ’25, who is a co-lead author on a paper presenting the work. “It’s an automatic system that allows you to make the shape physically manufacturable, given some constraints. PhysiOpt can iterate on its creations as often as you’d like, without any extra training.”
This approach enables you to create a “smart design,” where the AI generator crafts your item based on your specifications while considering functionality. You can plug in your favorite 3D generative AI model, and after typing out what you want to generate, you specify how much force or weight the object should handle. It’s a neat way to simulate real-world use, such as predicting whether a hook will be strong enough to hold up your coat. You also specify what materials you’ll fabricate the item with (such as plastics or wood), and how it’s supported — for instance, a cup stands on the ground, whereas a bookend leans against a collection of books.
Given the specifics, PhysiOpt begins to iteratively optimize the object. Under the hood, it runs a physics simulation called a “finite element analysis” to stress test the design. This comprehensive scan provides a heat map over your 3D model, which indicates where your blueprint isn’t well-supported. If you were generating, say, a birdhouse, you may find that the support beams under the house were colored bright red, meaning the house will crumble if it’s not reinforced.
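The flagging logic behind such a heat map can be illustrated with a much-simplified stress check. The load, cross-sectional areas, and material strength below are hypothetical numbers for illustration; PhysiOpt's actual finite element analysis is far more detailed:

```python
# Hypothetical column-like part: a stack of segments carrying a load from above.
# For each segment, stress = load / cross-sectional area; segments whose
# stress exceeds the material strength get flagged "red" for reinforcement.
load_n = 600.0       # applied force in newtons (e.g., a heavy object on top)
strength_mpa = 40.0  # assumed printable-plastic strength, N/mm^2 (= MPa)

segments = {         # segment name -> cross-sectional area in mm^2
    "top plate": 400.0,
    "upper wall": 25.0,
    "support beam": 9.0,   # thin part, like the birdhouse beams
    "base": 900.0,
}

heat_map = {}
for name, area in segments.items():
    stress = load_n / area
    heat_map[name] = "RED (reinforce)" if stress > strength_mpa else "ok"

for name, status in heat_map.items():
    print(f"{name:12s} {status}")
```

In a real finite element analysis the part is divided into thousands of tiny elements rather than four named segments, but the idea is the same: compute stress everywhere, then highlight where it exceeds what the material can bear.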
PhysiOpt can create even bolder pieces. Researchers saw this versatility firsthand when they fabricated a steampunk (a style that blends Victorian and futuristic aesthetics) keyholder featuring intricate, robotic-looking hooks, and a “giraffe table” with a flat back that you can place items on. But how did it know what “steampunk” is, or even how such a unique piece of furniture should look?
Remarkably, the answer isn’t extensive training — at least, not from the researchers. Instead, PhysiOpt uses a pre-trained model that’s already seen thousands of shapes and objects. “Existing systems often need lots of additional training to have a semantic understanding of what you want to see,” adds co-lead author Clément Jambon, who is also an MIT EECS PhD student and CSAIL researcher. “But we use a model with that feel for what you want to create already baked in, so PhysiOpt is training-free.”
By working with a pre-trained model, PhysiOpt can use “shape priors,” or knowledge of how shapes should look based on earlier training, to generate what users want to see. It’s sort of like an artist recreating the style of a famous painter. Their expertise is rooted in closely studying a variety of artistic approaches, so they’ll likely be able to mirror that particular aesthetic. Likewise, a pre-trained model’s familiarity with shapes helps it generate 3D models.
CSAIL researchers observed that PhysiOpt’s visual know-how helped it create 3D models more efficiently than “DiffIPC,” a comparable method that simulates and optimizes shapes. When both approaches were tasked with generating 3D designs for items like chairs, CSAIL’s system was nearly 10 times faster per iteration, while creating more realistic objects.
PhysiOpt presents a potential bridge between ideas and real-world personal items. What you may think is a great idea for a coffee mug, for instance, could soon make the jump from your computer screen to your desk. And while PhysiOpt already does the stress-testing for designers, it may soon be able to predict constraints such as loads and boundaries, instead of users needing to provide those details. This more autonomous, common-sense approach could be made possible by incorporating vision language models, which combine an understanding of human language with computer vision.
What’s more, Zhan and Jambon intend to remove the artifacts, or random fragments that occasionally appear in PhysiOpt’s 3D models, by making the system even more physics-aware. The MIT scientists are also considering how they can model more complex constraints for various fabrication techniques, such as minimizing overhanging components for 3D printing.
Zhan and Jambon wrote their paper with MIT-IBM Watson AI Lab Principal Research Scientist Kenney Ng ’89, SM ’90, PhD ’00 and two CSAIL colleagues: undergraduate researcher Evan Thompson and Assistant Professor Mina Konaković Luković, who is a principal investigator at the lab.
The researchers’ work was supported, in part, by the MIT-IBM Watson AI Laboratory and the Wistron Corp. They presented it in December at the Association for Computing Machinery’s SIGGRAPH Conference and Exhibition on Computer Graphics and Interactive Techniques in Asia.
AI to help researchers see the bigger picture in cell biology
Studying gene expression in a cancer patient’s cells can help clinical biologists understand the cancer’s origin and predict the success of different treatments. But cells are complex and contain many layers, so how the biologist conducts measurements affects which data they can obtain. For instance, measuring proteins in a cell could yield different information about the effects of cancer than measuring gene expression or cell morphology.
Where in the cell the information comes from matters. But to capture complete information about the state of the cell, scientists often must conduct many measurements using different techniques and analyze them one at a time. Machine-learning methods can speed up the process, but existing methods lump all the information from each measurement modality together, making it difficult to figure out which data came from which part of the cell.
To overcome this problem, researchers at the Broad Institute of MIT and Harvard and ETH Zurich/Paul Scherrer Institute (PSI) developed an artificial intelligence-driven framework that learns which information about a cell’s state is shared across different measurement modalities and which information is unique to a particular measurement type.
By pinpointing which information came from which cell parts, the approach provides a more holistic view of the cell’s state, making it easier for a biologist to see the complete picture of cellular interactions. This could help scientists understand disease mechanisms and track the progression of cancer, neurodegenerative disorders such as Alzheimer’s, and metabolic diseases like diabetes.
“When we study cells, one measurement is often not sufficient, so scientists develop new technologies to measure different aspects of cells. While we have many ways of looking at a cell, at the end of the day we only have one underlying cell state. By putting the information from all these measurement modalities together in a smarter way, we could have a fuller picture of the state of the cell,” says lead author Xinyi Zhang SM ’22, PhD ’25, a former graduate student in the MIT Department of Electrical Engineering and Computer Science (EECS) and an affiliate of the Eric and Wendy Schmidt Center at the Broad Institute of MIT and Harvard, who is now a group leader at AITHYRA in Vienna, Austria.
Zhang is joined on a paper about the work by G.V. Shivashankar, a professor in the Department of Health Sciences and Technology at ETH Zurich and head of the Laboratory of Multiscale Bioimaging at PSI; and senior author Caroline Uhler, a professor in EECS and the Institute for Data, Systems, and Society (IDSS) at MIT, member of MIT’s Laboratory for Information and Decision Systems (LIDS), and director of the Eric and Wendy Schmidt Center at the Broad Institute. The research appears today in Nature Computational Science.
Manipulating multiple measurements
There are many tools scientists can use to capture information about a cell’s state. For instance, they can measure RNA to see if the cell is growing, or they can measure chromatin morphology to see if the cell is dealing with external physical or chemical signals.
“When scientists perform multimodal analysis, they gather information using multiple measurement modalities and integrate it to better understand the underlying state of the cell. Some information is captured by one modality only, while other information is shared across modalities. To fully understand what is happening inside the cell, it is important to know where the information came from,” says Shivashankar.
Often, for scientists, the only way to sort this out is to conduct multiple individual experiments and compare the results. This slow and cumbersome process limits the amount of information they can gather.
In the new work, the researchers built a machine-learning framework that specifically understands which information overlaps between different modalities, and which information is unique to a particular modality but not captured by others.
“As a user, you can simply input your cell data and it automatically tells you which data are shared and which data are modality-specific,” Zhang says.
To build this framework, the researchers rethought the typical way machine-learning models are designed to capture and interpret multimodal cellular measurements.
Usually these methods, known as autoencoders, have one model for each measurement modality, and each model encodes a separate representation for the data captured by that modality. The representation is a compressed version of the input data that discards any irrelevant details.
The MIT method has a shared representation space where data that overlap between multiple modalities are encoded, as well as separate spaces where unique data from each modality are encoded.
In essence, one can think of it like a Venn diagram of cellular data.
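That Venn-diagram intuition can be made concrete with simple set operations. The feature names below are illustrative only; the real framework learns these splits from raw measurements rather than from labeled categories:

```python
# Hypothetical aspects of cell state readable from two measurement modalities.
transcriptomics = {"gene activity", "cell cycle stage", "stress response"}
chromatin_imaging = {"cell cycle stage", "stress response",
                     "nuclear shape", "chromatin compaction"}

# The framework's representation spaces, as a Venn diagram:
shared = transcriptomics & chromatin_imaging        # encoded once, jointly
rna_only = transcriptomics - chromatin_imaging      # modality-specific space
imaging_only = chromatin_imaging - transcriptomics  # modality-specific space

print("shared:       ", sorted(shared))
print("RNA-only:     ", sorted(rna_only))
print("imaging-only: ", sorted(imaging_only))
```

The model's shared representation space plays the role of the intersection, while each modality-specific space holds what only that measurement can see.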
The researchers also used a special, two-step training procedure that helps their model handle the complexity involved in deciding which data are shared across multiple data modalities. After training, the model can identify which data are shared and which are unique when fed cell data it has never seen before.
Distinguishing data
In tests on synthetic datasets, the framework correctly captured known shared and modality-specific information. When they applied their method to real-world single-cell datasets, it comprehensively and automatically identified gene activity captured jointly by two measurement modalities, such as transcriptomics and chromatin accessibility, while also correctly identifying which information came from only one of those modalities.
In addition, the researchers used their method to identify which measurement modality captured a certain protein marker that indicates DNA damage in cancer patients. Knowing where this information came from would help a clinical scientist determine which technique they should use to measure that marker.
“There are too many modalities in a cell and we can’t possibly measure them all, so we need a prediction tool. But then the question is: Which modalities should we measure and which modalities should we predict? Our method can answer that question,” Uhler says.
In the future, the researchers want to enable the model to provide more interpretable information about the state of the cell. They also want to conduct additional experiments to ensure it correctly disentangles cellular information and apply the model to a wider range of clinical questions.
“It is not sufficient to just integrate the information from all these modalities,” Uhler says. “We can learn a lot about the state of a cell if we carefully compare the different modalities to understand how different components of cells regulate each other.”
This research is funded, in part, by the Eric and Wendy Schmidt Center at the Broad Institute, the Swiss National Science Foundation, the U.S. National Institutes of Health, the U.S. Office of Naval Research, AstraZeneca, the MIT-IBM Watson AI Lab, the MIT J-Clinic for Machine Learning and Health, and a Simons Investigator Award.
MIT’s delta v accelerator receives $6M gift to supercharge startups being built by student founders
As artificial intelligence reshapes how companies operate, the way MIT students learn entrepreneurship and choose to create new ventures is changing rapidly as well. To address how these student startups are being built, the Martin Trust Center for MIT Entrepreneurship undertook a months-long series of discussions with key stakeholders to help shape a new direction for delta v, MIT’s capstone entrepreneurship accelerator for student founders.
Two of Boston’s most successful tech entrepreneurs have stepped forward to fund this growth of new MIT ventures through a combined $6 million gift that supports the delta v accelerator run out of the Trust Center. Ed Hallen MBA ’12 and Andrew Bialecki, co-founders of Boston-based customer relationship management firm Klaviyo, are providing the donation to support the next wave of innovation-driven entrepreneurship taking place at MIT.
“In the early days of Klaviyo, we learned almost everything by building, testing assumptions, making mistakes, and figuring things out as we went,” Hallen says. “MIT delta v creates that same learning-by-doing environment for students, while surrounding them with mentorship and resources that help founders build with clarity and momentum. We’ve seen the difference delta v can make for founders, and we’re excited to help the Trust Center extend that opportunity to the next generation of students.”
“We’ve always believed the world needs more entrepreneurs, and that Boston should be one of the places leading the way,” adds Bialecki. “Boston is a hub of innovation with ambitious students and a strong community of builders. MIT delta v plays a critical role in developing founders early, not just helping them start companies but helping them build companies that last. Supporting that mission is something Ed and I care deeply about.”
The Martin Trust Center plans to “accelerate the accelerator” with the funding. A major driver for these changes is the opportunity created as AI transforms how students can build companies, along with students’ growing interest in learning about entrepreneurship during their time on campus. One of the main impacts will be the ability of delta v participants to earn up to $75,000 in equity-free funding during the program, an increase from $20,000 in years past.
Also, delta v will be introducing a partner model composed of leading founders from companies such as HubSpot, Okta, and Kayak, C-suite operators, subject matter experts, and early-stage investors who will all be providing significant guidance and mentorship to the student ventures.
“Core to MIT’s mission is developing the innovative technologies and solutions that can help solve tough problems at global scale,” says MIT Provost Anantha Chandrakasan. “The AI revolution is creating exciting new opportunities for MIT students to build the next wave of impactful companies, and the delta v accelerator is a perfect vehicle to help them make that happen.”
In recent years, MIT-founded startups such as Cursor and Delve, which use AI as a core part of their business, have seen explosive growth in customers, revenue, and valuation. In addition, delta v alumni companies such as Klarity and Reducto are providing software-as-a-service (SaaS) platforms built on AI tools, while Vertical Semiconductor is growing by providing the energy solutions data centers need to power today’s computing demands. These are just some of the businesses MIT students are looking to as models they can follow to build and launch successfully, whether they are working on solutions in health care, climate, finance, the future of work, or another global challenge.
“MIT Sloan is the place for entrepreneurship education, part of a unique ecosystem of collaboration across MIT to solve problems," says Richard M. Locke, the John C Head III Dean at the MIT Sloan School of Management. “The delta v program is a great example of how MIT students dedicate their energy to starting a venture, connect with mentors, and incorporate proven frameworks for disciplined entrepreneurship. This gift from Ed Hallen and Andrew Bialecki will provide additional funding for this important program, and I’m so grateful for their support of entrepreneurship education at MIT.”
“I remember when Ed and Andrew were giving birth to Klaviyo at the Trust Center,” says Bill Aulet, the Ethernet Inventors Professor of the Practice and managing director of the Trust Center. “Through their ingenuity and drive, they have created an iconic tech company here in Boston with the support of our ecosystem. Through their willingness to give back, many more students will now be able to follow their path and become entrepreneurs who can create extraordinary positive impact in the world.”
Applications for the next delta v cohort will open on March 1 and close on April 1. Teams will be announced in May for the summer 2026 accelerator.
“MIT delta v is about creating belief in our most exceptional entrepreneurial talent — and turning that belief into consequential impact for the world. By supporting early-stage founders who take bold ideas from improbable to possible, we help them build companies that matter,” says Ana Bakshi, the Trust Center’s executive director. “Our students are the next generation of job creators, economic drivers, and thought leaders. To realize this potential, it is critical that we continue to invest in and scale startup programs and spaces so they can build at unprecedented levels. Ed and Andrew’s generosity gives us a powerful opportunity to change velocity—and make that future possible.”
Founded in 1991, the award-winning Martin Trust Center for MIT Entrepreneurship is today focused on teaching entrepreneurship as a craft. It combines evidence-based entrepreneurship frameworks, used in over a thousand other organizations, with experiential learning and community building inside and outside the classroom to create the next generation of innovation-driven entrepreneurs. Alumni who have gone through Trust Center programs have started companies including Cursor, Delve, Okta, HubSpot, PillPack, Honey, WHOOP, Reducto, Klarity, and Biobot Analytics, and thousands more in industries as diverse as biotech, climate and energy, AI, health care, fintech, business and consumer software, and more.
In the first 10 years of delta v, the program has helped create entrepreneurs who have gone on to extraordinary success. The five-year survival rate of their companies is 69 percent, and they have raised well over $3 billion in funding while addressing the world’s greatest challenges: 89 percent of the companies are directly aligned with the UN Sustainable Development Goals.
More trees where they matter, please
One of the best forms of heat relief is pretty simple: trees. In cities, as studies have documented, more tree cover lowers surface temperatures and heat-related health risks.
However, as a new study led by MIT researchers shows, the amount of tree cover varies widely within cities, and is generally connected to wealth levels. After examining a cross-section of cities on four continents at different latitudes, the research finds a consistent link between wealth and neighborhood tree abundance within a city, with better-off residents usually enjoying much more shade on nearby sidewalks.
“Shade is the easiest way to counter warm weather,” says Fabio Duarte, an MIT urban studies scholar and co-author of a new paper detailing the study’s results. “Strictly by looking at which areas are shaded, we can tell where rich people and poor people live.”
That disparity is evident within a range of cities, and is present whether a city contains a large amount of tree cover overall or just a little. Either way, there are more trees in wealthier spots.
“When we compare the most well-shaded city in our study, Stockholm, with the worst-shaded, Belem in northern Brazil, we still see marked inequality,” says Duarte, the associate director of MIT’s Senseable City Lab in the Department of Urban Studies and Planning (DUSP). “Even though the most-shaded parts of Belem are less shaded than the least-shaded parts of Stockholm, shade inequality in Stockholm is greater. Rich people in Stockholm have much better shade provision as pedestrians than we see in poor areas of Stockholm.”
The paper, “Global patterns of pedestrian shade inequality,” is published today in Nature Communications. The authors are Xinyue Gu of Hong Kong Polytechnic University; Lukas Beuster, a research fellow at the Amsterdam Institute for Advanced Metropolitan Solutions and MIT’s Senseable City Lab; Xintao Liu, an associate professor at Hong Kong Polytechnic University; Eveline van Leeuwen, scientific director at the Amsterdam Institute for Advanced Metropolitan Solutions; Titus Venverloo, who leads the MIT Senseable City Amsterdam lab; and Duarte, who is also a lecturer in DUSP.
From Stockholm to Sydney
To conduct the study, the researchers used satellite data from multiple sources, along with urban mapping programs and granular economic data about the cities they examined. There are nine cities in the study: Amsterdam, Barcelona, Belem, Boston, Hong Kong, Milan, Rio de Janeiro, Stockholm, and Sydney. Those places are intended to create a cross-section of cities with different characteristics, including latitude, wealth levels, urban form, and more.
The scholars looked at the amount of shade available on city sidewalks on the summer solstice, as well as the hottest recorded day each year from 1991 to 2020. They then created a scale, ranging from 0 to 1, to rate the amount of shade available on sidewalks, both citywide and within neighborhoods.
“We focused on sidewalks because they are a major conduit of urban activity, even on hot summer days,” Gu says. “Adding tree cover for sidewalks is one crucial way cities can pursue heat-reduction measures.”
Duarte adds: “When it comes to those who are not protected by air conditioning, they are also using the city, walking, taking buses, and anybody who takes a bus is walking or biking to or from bus stops. They are using sidewalks as the main infrastructure.”
The cities in the study offer very different levels of tree coverage. On the 0-to-1 scale the researchers developed, much of Stockholm falls in the 0.6-0.9 range, with some neighborhoods being over 0.9. By contrast, large swaths of Rio de Janeiro are under the 0.1 mark. Much of Boston ranges from 0.15 to 0.4, with a few neighborhoods reaching 0.45 on the scale.
The overall pattern of disparities, however, is very consistent, and includes the more affluent cities. The bottom 20 percent of neighborhoods in Stockholm, in terms of shade coverage, are rated at 0.58 on the scale, while the top 20 percent of Belem neighborhoods rate at 0.37; Stockholm has a greater disparity between most-covered and least-covered. To be sure, there is variety within many cities: Milan and Barcelona have some lower-income neighborhoods with abundant shade, for instance. But the aggregate trend is clear. Amsterdam, another well-off place on average, has a distinct pattern of less shade in lower-income areas.
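The kind of quintile comparison behind these figures can be sketched directly. The neighborhood shade scores below are made up for illustration, not the study's data, but they show how a city can be better shaded overall yet more unequal:

```python
# Hypothetical neighborhood shade scores on the paper's 0-to-1 scale.
stockholm = [0.45, 0.55, 0.60, 0.62, 0.68, 0.72, 0.78, 0.85, 0.90, 0.93]
belem = [0.05, 0.07, 0.08, 0.10, 0.12, 0.15, 0.18, 0.22, 0.30, 0.37]

def quintile_gap(scores):
    """Mean shade of the top 20% of neighborhoods minus the bottom 20%."""
    s = sorted(scores)
    k = max(1, len(s) // 5)
    return sum(s[-k:]) / k - sum(s[:k]) / k

print(f"Stockholm gap: {quintile_gap(stockholm):.2f}")
print(f"Belem gap:     {quintile_gap(belem):.2f}")
```

With these illustrative numbers, every Stockholm neighborhood is shadier than every Belem one, yet the gap between Stockholm's best- and worst-shaded quintiles is larger, the pattern the study reports.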
“In rich cities like Amsterdam, even though it’s relatively well-shaded, the disparity is still very high,” Beuster says. “For us the most surprising point was not that in poor cities and more unequal societies the disparity would be notable — that was expected. What was unexpected was how the disparity still happens and is sometimes more pronounced in rich countries.”
“Follow transit”
If the tree-shade disparity issue is quite persistent, then it raises the matter of what to do about it. The researchers have a basic answer: Add trees in areas with public transit, which generate a lot of pedestrian mileage.
“In each city, from Sydney to Rio to Amsterdam, there are people who, regardless of the weather, need to walk,” Duarte says. “And it’s those people who also take public transportation. Therefore, link a tree-planting scheme to a public transportation network. And secondly, they are also the medium- and low-income part of the population. So the action deriving from this result is quite clear: If you need to increase your tree coverage and don’t know where, follow transit. If you follow transit, you will have the right shading.”
Indeed, one takeaway from the study is to think of trees not just as a nice-to-have part of urban aesthetics, but in functional terms.
“Planners and city officials should think about tree placement at least partly in terms of the heat-mitigating effect they have,” Beuster says.
“It’s not just about planting trees,” Duarte observes. “It’s about providing shade by planting trees. If you remove a tree that’s providing shade in a pedestrian area and you plant two other trees in a park, you are still removing part of the public function of the tree.”
He adds: “With increasing temperatures, providing shade is an essential public amenity. Along with providing transportation, I think providing shade in pedestrian spaces should almost be a public right.”
The Amsterdam Institute for Advanced Metropolitan Solutions and all members of the MIT Senseable City Consortium (including FAE Technology, Dubai Foundation, Sondotécnica, Seoul AI Foundation, Arnold Ventures, Sidara, Toyota, Abu Dhabi’s Department of Municipal Transportation, A2A, UnipolTech, Consiglio per la Ricerca in Agricoltura e l’Analisi dell’Economia Agraria, Hospital Israelita Albert Einstein, KACST, KAIST, and the cities of Laval, Amsterdam, and Rio de Janeiro) supported the research.
Study reveals climatic fingerprints of wildfires and volcanic eruptions
Volcanoes and wildfires can inject millions of tons of gases and aerosol particles into the air, affecting temperatures on a global scale. But picking out the specific impact of individual events against a background of many contributing factors is like listening for one person’s voice from across a crowded concourse.
MIT scientists now have a way to quiet the noise and identify the specific signal of wildfires and volcanic eruptions, including their effects on Earth’s global atmospheric temperatures.
In a study appearing this week in the Proceedings of the National Academy of Sciences, the researchers report that they detected statistically significant changes in global atmospheric temperatures in response to three major natural events: the eruption of Mount Pinatubo in 1991, the Australian wildfires in 2019-2020, and the eruption of the underwater volcano Hunga Tonga in the South Pacific in 2022.
While the specifics of each event differed, all three events appeared to significantly affect temperatures in the stratosphere. The stratosphere lies above the troposphere, which is the lowest layer of the atmosphere, closest to the surface, where global warming has accelerated in recent years. In the new study, Pinatubo showed the classic pattern of stratospheric warming paired with tropospheric cooling. The Australian wildfires significantly warmed the stratosphere and the Hunga Tonga eruption significantly cooled it, but neither produced a robust, globally detectable tropospheric signal over the first two years following each event. This new understanding will help scientists further pin down the effect of human-related emissions on global temperature change.
“Understanding the climate responses to natural forcings is essential for us to interpret anthropogenic climate change,” says study author Yaowei Li, a former postdoc and currently a visiting scientist in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “Unlike the global tropospheric and surface cooling caused by Pinatubo, our results also indicate that the Australian wildfires and Hunga Tonga eruption may not have played a role in the acceleration of global surface warming in recent years. So, there must be some other factors.”
The study’s co-authors include Susan Solomon, the Lee and Geraldine Martin Professor of Environmental Studies and Chemistry at MIT, along with Benjamin Santer of the University of East Anglia, David Thompson of the University of East Anglia and Colorado State University, and Qiang Fu of the University of Washington.
Extraordinary events
The past several years have set back-to-back records for global average surface temperatures. The World Meteorological Organization recently confirmed that the years 2023 to 2025 were the three warmest years on record, while the past 11 years have been the 11 warmest years ever recorded. The world is warming, due mainly to human activities that have emitted huge amounts of greenhouse gases into the atmosphere over centuries.
In addition to greenhouse gases, the atmosphere has been on the receiving end of other large-scale emissions, including sulfur gases and water vapor from volcanic eruptions and smoke particles from wildfires. Li and his colleagues have wondered whether such natural events could have any global impact on temperatures, and whether such an effect would be detectable.
“These events are extraordinary and very unique in terms of the different materials they inject into different altitudes,” Li says. “So we asked the question: Do these events actually perturb the global temperature to a degree that could be identifiable from natural, meteorological noise, and could they contribute to some of the exceptional global surface warming we’ve seen in the last few years?”
In particular, the team looked for signals of global temperature change in response to three large-scale natural events. The Pinatubo eruption resulted in around 20 million tons of volcanic aerosols in the stratosphere, which was the largest volume ever recorded by modern satellite instruments. The Australian fires injected around 1 million tons of smoke particles into the upper troposphere and stratosphere. And the Hunga Tonga eruption produced the largest atmospheric explosion on satellite record, launching nearly 150 million tons of water vapor into the stratosphere.
If any natural event could measurably shift global temperatures, the team reasoned, it would be one of these three.
Natural signals
For their new study, the team took a signal-to-noise approach. They looked to minimize “noise” from other known influences on global temperatures in order to isolate the “signal,” such as a change in temperature associated specifically with one of the three natural events.
To do so, they looked first through satellite measurements taken by the Stratospheric Sounding Unit (SSU) and the Microwave and Advanced Microwave Sounding Units (MSU), which have been measuring global temperatures at different altitudes throughout the atmosphere since 1979. The team compiled SSU and MSU measurements from 1986 to the present day. From these measurements, the researchers could see long-term trends of steady tropospheric warming and stratospheric cooling. Those long-term trends are largely associated with anthropogenic greenhouse gases, which the team subtracted from the dataset.
What was left over was more of a level baseline, which still contained some confounding noise, in the form of natural variability. Global temperature changes can also be affected by phenomena such as El Niño and La Niña, which naturally warm and cool the Earth every few years. The sun also swings global temperatures on a roughly 11-year cycle. The team took this natural variability into account, and subtracted out the effects of these influences.
After minimizing such noise from their dataset, the team reasoned that whatever temperature changes remained could be more easily traced to the three large-scale natural events and quantified. And indeed, when they pinned the events to the temperature measurements, at the times that they occurred, they could plainly see how each event influenced temperatures around the world.
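Conceptually, this noise removal amounts to regressing known influences (the long-term trend, ENSO-like variability, and the solar cycle) out of the temperature series and examining what remains. A minimal sketch on synthetic data (the simple least-squares model and every number here are illustrative assumptions, not the study's actual statistical machinery):

```python
import numpy as np

rng = np.random.default_rng(0)
months = 480  # 40 years of monthly mean temperature anomalies

# Synthetic "observed" series: linear trend + ENSO-like and solar-like
# oscillations + weather noise + a brief volcanic cooling episode.
t = np.arange(months)
trend = 0.015 * t / 12                     # ~0.015 degC/yr warming
enso = 0.2 * np.sin(2 * np.pi * t / 42)    # ~3.5-yr pseudo-ENSO cycle
solar = 0.1 * np.sin(2 * np.pi * t / 132)  # ~11-yr solar cycle
noise = 0.05 * rng.standard_normal(months)
eruption = np.zeros(months)
eruption[200:230] = -0.5                   # 2.5-yr cooling "event"
observed = trend + enso + solar + noise + eruption

# Regress out the known influences and keep the residual,
# where any event signal should stand out against the noise.
X = np.column_stack([np.ones(months), t, enso, solar])
coef, *_ = np.linalg.lstsq(X, observed, rcond=None)
residual = observed - X @ coef

# Residual during the event window vs. background variability.
event_mean = residual[200:230].mean()
background_std = np.delete(residual, slice(200, 230)).std()
print(event_mean, background_std)
```

With the known cycles removed, the cooling episode sits many standard deviations below the residual background, which is the sense in which the detected signals are statistically significant.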
The team found that Pinatubo decreased global tropospheric temperatures by up to about 0.7 degree Celsius, for more than two years following the eruption. The volcanic sulfate aerosols essentially acted as many tiny reflectors, cooling the troposphere and surface by scattering sunlight back into space. At the same time, the aerosols, which remained in the stratosphere, also absorbed heat that was emitted from the surface, subsequently warming the stratosphere.
This finding agreed with many other studies of the event, which confirmed that the team’s approach is accurate. They applied the same method to the 2019-2020 Australian wildfires, and the 2022 underwater eruption — events where the influence on global temperatures is less clear.
For the Australian wildfires, they found that the smoke particles caused the global stratosphere to warm up, by up to about 0.77 degree Celsius, which persisted for about five months but did not produce a clear global tropospheric signal.
“In the end we found that the wildfire smoke caused a very strong warming in the stratosphere, because these materials are very different chemically from sulfate,” Li explains. “They are particles that are dark colored, meaning they are efficient at absorbing solar radiation. So, a relatively small amount of smoke particles can cause a dramatic warming.”
In the case of Hunga Tonga, the underwater eruption triggered a global cooling effect in the middle-to-upper stratosphere, of up to about half a degree Celsius, lasting for several years.
“The Australian fires and the Hunga Tonga really packed a punch at stratospheric altitudes, and this study shows for the first time how to quantify how strong that punch was,” says Solomon. “I find their impact up high quite remarkable, but the ongoing issue is why the last several years have been so warm lower down, in the troposphere — ruling out those natural events points even more strongly at human influences.”
Exploring materials at the atomic scale
MIT.nano has added a new X-ray diffraction (XRD) instrument to its characterization toolset, enhancing facility users’ ability to analyze materials at the nanoscale. While many XRD systems exist across MIT’s campus, this new instrument, the Bruker D8 Discover Plus, is unique in that it features a high-brilliance micro-focus copper X-ray source — ideal for measuring small areas of thin film samples using a large area detector.
The new system is positioned within Characterization.nano’s X-ray diffraction and imaging shared experimental facility (SEF), where advanced instrumentation allows researchers to “see inside” materials at very small scales. Here, scientists and engineers can examine surfaces, layers, and internal structures without damaging the material, and create detailed 3D images to map composition and organization. The information gathered is supporting materials research for applications ranging from electronics and energy storage to health care and nanotechnology.
“The Bruker instrument is an important addition to MIT.nano that will help researchers efficiently gain insights into their materials’ structure and properties,” says Charlie Settens, research specialist and operations manager in the Characterization.nano X-ray diffraction and imaging SEF. “It brings high-performance diffraction capabilities to our lab, supporting everything from routine phase identification to complex thin film microstructural analysis and high-temperature studies.”
What is X-ray diffraction?
When people think of X-rays, they often picture medical imaging, where dense structures like bones appear in contrast to soft tissue. X-ray diffraction takes that concept further, revealing the crystalline structure of materials by measuring the interference patterns that form when X-rays interact with atomic planes. These diffraction patterns provide detailed information about a material’s crystalline phase, grain size, grain orientation, defects, and other structural properties.
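The interference condition behind those patterns is Bragg's law, nλ = 2d sin θ: X-rays reflecting off adjacent atomic planes interfere constructively when their path difference equals a whole number of wavelengths. A quick sketch of the geometry for copper radiation (the silicon example is illustrative):

```python
import math

CU_KALPHA = 1.5406  # Cu K-alpha1 X-ray wavelength, in angstroms

def bragg_two_theta(d_spacing, wavelength=CU_KALPHA, order=1):
    """Return the diffraction angle 2-theta (degrees) from Bragg's law n*lambda = 2*d*sin(theta).

    Returns None when the reflection is geometrically impossible
    (sin(theta) would exceed 1, i.e. the spacing is too small for this order).
    """
    s = order * wavelength / (2 * d_spacing)
    if s > 1:
        return None
    return 2 * math.degrees(math.asin(s))

# Example: silicon's (111) planes have d ~ 3.1356 angstroms, giving the
# well-known 2-theta ~ 28.4 degree peak with copper radiation.
print(round(bragg_two_theta(3.1356), 1))
```

Measuring where such peaks fall, and how broad and intense they are, is what lets the diffractometer report phase, grain size, and orientation.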
XRD is essential across many fields. Civil engineers use it to analyze the components of concrete mixtures and monitor material changes over time. Materials scientists engineer new microstructures and track how atomic arrangements shift with different element combinations. Electrical engineers study crystalline thin film deposition on substrates — critical for semiconductor manufacturing. MIT.nano’s new X-ray diffractometer will support all of these applications, and more.
“The addition of another high-resolution XRD will make it a lot easier to get time on these very popular tools,” says Fred Tutt, PhD student in the MIT Department of Materials Science and Engineering. “The wide variety of options on the new Bruker will also make it easier for myself and my group members to take some of the more atypical measurements that aren't readily accessible with the current XRD tools.”
A closer, clearer look
Replacing two older systems, the Bruker D8 Discover Plus introduces the latest in X-ray diffraction technology to MIT.nano, along with several major upgrades for the Characterization.nano facility. One key feature is the high-brilliance microfocus copper X-ray source, capable of producing intense X-rays from a small spot size — ranging from 2 mm down to 200 microns.
“It’s invaluable to have the flexibility to measure distinct regions of a sample with high flux and fine spatial resolution,” says Jordan Cox, MIT.nano research specialist in the MIT.nano X-ray diffraction and imaging facility.
Another highlight is in-plane XRD, a technique that enables surface diffraction studies of thin films with non-uniform grain orientations.
“In-plane XRD pairs well with many thin film projects that start in the fab,” says Settens. After researchers deposit thin film coatings in MIT.nano’s cleanroom, they can selectively measure the top 100 nanometers of the surface, he explains.
But it’s not just about collecting diffraction patterns. The new system includes a powerful software suite for advanced data analysis. Cox and Settens are now training users how to operate the diffractometer, as well as how to analyze and interpret the valuable structural data it provides.
Visit Characterization.nano for more information about this and other tools.
3 Questions: Exploring the mechanisms underlying changes during infection
With respiratory illness season in full swing, a bad night’s sleep, sore throat, and desire to cancel dinner plans could all be considered hallmark symptoms of the flu, Covid-19 or other illnesses. Although everyone has, at some point, experienced illness and these stereotypical symptoms, the mechanisms that generate them are not well understood.
Zuri Sullivan, a new assistant professor in the MIT Department of Biology and core member of the Whitehead Institute for Biomedical Research, works at the interface of neuroscience, microbiology, physiology, and immunology to study the biological workings underlying illness. In this interview, she describes her work on immunity thus far as well as research avenues — and professional collaborations — she’s excited to explore at MIT.
Q: What is immunity, and why do we get sick in the first place?
A: We can think of immunity in two ways: the antimicrobial programs that defend against a pathogen directly, and sickness, the altered organismal state that happens when we get an infection.
Sickness itself arises from brain-immune system interaction. The immune system is talking to the brain, and then the brain has a system-wide impact on host defense via its ability to have top-down control of physiologic systems and behavior. People might assume that sickness is an unintended consequence of infection, that it happens because your immune system is active, but we hypothesize that it’s likely an adaptive process that contributes to host defense.
If we consider sickness as immunity at the organismal scale, I think of my work as bridging the dynamic immunological processes that occur at the cellular scale, the tissue scale, and the organismal scale. I’m interested in the molecular and cellular mechanisms by which the immune system communicates with the brain to generate changes in behavior and physiology, such as fever, loss of appetite, and changes in social interaction.
Q: What sickness behaviors fascinate you?
A: During my thesis work at Yale University, I studied how the gut processes different nutrients and the role of the immune system in regulating gut homeostasis in response to different kinds of food. I’m especially interested in the interaction between food, the immune system, and the brain. One of the things I’m most excited about is the reduction in appetite, or changes in food choice, because we have what I would consider pretty strong evidence that these may be adaptive.
Sleep is another area we’re interested in exploring. From their own subjective experience, everyone knows that sleep is often altered during infection.
I also don’t just want to examine snapshots in time. I want to characterize changes over the course of an infection. There’s probably going to be individual variability, which I think may be in part because pathogens are also changing over the course of an illness — we’re studying two different biological systems interacting with each other.
Q: What sorts of expertise are you hoping to recruit to your lab, and what collaborations are you excited about pursuing?
A: I really want to bring together different areas of biology to think about organism-wide questions. The thing that’s most important to me is people who are creative — I’d rather trainees come in with an interesting idea than a perfectly formed question within the bounds of what we already believe to be true. I’m also interested in people who would complement my expertise; I’m fascinated by microbiology, but I don’t have any formal training.
The Whitehead Institute is really invested in interdisciplinary work, and there’s a natural synergy between my work and the other labs in this small community at the Whitehead Institute.
I’ve been collaborating with Sebastian Lourido’s lab for a few years, looking at how Toxoplasma gondii influences social behavior, and I’m excited to invest more time in that project. I’m also interested in molecular neuroscience, which is a focus of Siniša Hrvatin’s lab. That lab is interested in the hypothalamus, and trying to understand the mechanisms that generate torpor. My work also focuses on the hypothalamus because it regulates homeostatic behaviors that change during sickness, such as appetite, sleep, social behavior, and body temperature.
By studying different sickness states generated by different kinds of pathogens — parasites, viruses, bacteria — we can ask really interesting questions about how and why we get sick.
Fragile X study uncovers brain wave biomarker bridging humans and mice
Numerous potential treatments for neurological conditions, including autism spectrum disorders, have worked well in mice but then disappointed in humans. What would help is a non-invasive, objective readout of treatment efficacy that is shared in both species.
In a new open-access study in Nature Communications, a team of MIT researchers, backed by collaborators across the United States and in the United Kingdom, identifies such a biomarker in fragile X syndrome, the most common inherited form of autism.
Led by postdoc Sara Kornfeld-Sylla and Picower Professor Mark Bear, the team measured the brain waves of human boys and men, with or without fragile X syndrome, and comparably aged male mice, with or without the genetic alteration that models the disorder. The novel approach Kornfeld-Sylla used for analysis enabled her to uncover specific and robust patterns of differences in low-frequency brain waves between typical and fragile X brains shared between species at each age range. In further experiments, the researchers related the brain waves to specific inhibitory neural activity in the mice and showed that the biomarker was able to indicate the effects of even single doses of a candidate treatment for fragile X called arbaclofen, which enhances inhibition in the brain.
Both Kornfeld-Sylla and Bear praised and thanked colleagues at Boston Children’s Hospital, the Phelan-McDermid Syndrome Foundation, Cincinnati Children’s Hospital, the University of Oklahoma, and King’s College London for gathering and sharing data for the study.
“This research weaves together these different datasets and finds the connection between the brain wave activity that’s happening in fragile X humans that is different from typically developed humans, and in the fragile X mouse model that is different than the ‘wild-type’ mice,” says Kornfeld-Sylla, who earned her PhD in Bear’s lab in 2024 and continued the research as a FRAXA postdoc. “The cross-species connection and the collaboration really makes this paper exciting.”
Bear, a faculty member in The Picower Institute for Learning and Memory and the Department of Brain and Cognitive Sciences at MIT, says having a way to directly compare brain waves can advance treatment studies.
“Because that is something we can measure in mice and humans minimally invasively, you can pose the question: If drug treatment X affects this signature in the mouse, at what dose does that same drug treatment change that same signature in a human?” Bear says. “Then you have a mapping of physiological effects onto measures of behavior. And the mapping can go both ways.”
Peaks and powers
In the study, the researchers measured EEG over the occipital lobe of humans and on the surface of the visual cortex of the mice. They measured power across the frequency spectrum, replicating previous reports of altered low-frequency brain waves in adult humans with fragile X and showing for the first time how these disruptions differ in children with fragile X.
To enable comparisons with mice, Kornfeld-Sylla subtracted out background activity to specifically isolate only “periodic” fluctuations in power (i.e., the brain waves) at each frequency. She also disregarded the typical way brain waves are grouped by frequency (into distinct bands with Greek letter designations delta, theta, alpha, beta, and gamma) so that she could simply juxtapose the periodic power spectra of the humans and mice without trying to match them band by band (e.g., trying to compare the mouse “alpha” band to the human one). This turned out to be crucial because the significant, similar patterns exhibited by the mice actually occurred in a different low-frequency band than in the humans (theta vs. alpha). Both species also had alterations in higher-frequency bands in fragile X, but Kornfeld-Sylla noted that the differences in the low-frequency brainwaves are easier to measure and more reliable in humans, making them a more promising biomarker.
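Isolating periodic power in this way resembles widely used spectral parameterization methods: fit the smooth aperiodic 1/f-like background, which is roughly linear in log-log coordinates, and subtract it so only the oscillatory peaks remain. A toy sketch on a synthetic spectrum (the simple log-log line fit is an assumption standing in for whatever model the study actually used):

```python
import numpy as np

# Synthetic power spectrum: a 1/f aperiodic background plus an
# oscillatory peak near 10 Hz (an "alpha"-like brain wave).
freqs = np.linspace(1, 50, 200)
background = 10.0 / freqs
peak = 2.0 * np.exp(-((freqs - 10.0) ** 2) / (2 * 1.5 ** 2))
power = background + peak

# Fit the aperiodic component as a straight line in log-log space,
# then subtract it to isolate the periodic (oscillatory) power.
logf, logp = np.log10(freqs), np.log10(power)
slope, intercept = np.polyfit(logf, logp, 1)
aperiodic = 10 ** (intercept + slope * logf)
periodic = power - aperiodic

# The residual spectrum peaks at the oscillation's frequency,
# which is the kind of quantity compared across species.
peak_freq = float(freqs[np.argmax(periodic)])
print(round(peak_freq, 1))
```

Comparing peak frequency and peak power in this background-free spectrum, rather than fixed frequency bands, is what allowed the mouse and human patterns to be juxtaposed directly.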
So what patterns constitute the biomarker? In adult men and mice alike, a peak in the power of low-frequency waves is shifted to a significantly slower frequency in fragile X cases compared to neurotypical cases. Meanwhile, in fragile X boys and juvenile mice, while the peak is somewhat shifted to a slower frequency, what is really significant is a reduced power in that same peak.
The researchers were also able to discern that the peak in question is actually made of two distinct subpeaks, and that the lower-frequency subpeak is the one that varies specifically with fragile X syndrome.
Curious about the neural activity underlying the measurements, the researchers engaged in experiments in which they turned off activity of two different kinds of inhibitory neurons that are known to help produce and shape brain wave patterns: somatostatin-expressing and parvalbumin-expressing interneurons. Manipulating the somatostatin neurons specifically affected the lower-frequency subpeak that contained the newly discovered biomarker in fragile X model mice.
Drug testing
Somatostatin interneurons exert their effects on the neurons they connect to via the neurotransmitter chemical GABA, and evidence from prior studies suggest that GABA receptivity is reduced in fragile X syndrome. A therapeutic approach pioneered by Bear and others has been to give the drug arbaclofen, which enhances GABA activity. In the new study, the researchers treated both control and fragile X model mice with arbaclofen to see how it affected the low-frequency biomarker.
Even the lowest administered single dose made a significant difference in the neurotypical mice, which is consistent with those mice having normal GABA responsiveness. Fragile X mice needed a higher dose, but after one was administered, there was a notable increase in the power of the key subpeak, reducing the deficit exhibited by juvenile mice.
The arbaclofen experiments therefore demonstrated that the biomarker provides a significant readout of an underlying pathophysiology of fragile X: the reduced GABA responsiveness. Bear also noted that it helped to identify a dose at which arbaclofen exerted a corrective effect, even though the drug was only administered acutely, rather than chronically. An arbaclofen therapy would, of course, be given over a long time frame, not just once.
“This is a proof of concept that a drug treatment could move this phenotype acutely in a direction that makes it closer to wild-type,” Bear says. “This effort reveals that we have readouts that can be sensitive to drug treatments.”
Meanwhile, Kornfeld-Sylla notes, there is a broad spectrum of brain disorders in which human patients exhibit significant differences in low-frequency (alpha) brain waves compared to neurotypical peers.
“Disruptions akin to the biomarker we found in this fragile X study might prove to be evident in mouse models of those other disorders, too,” she says. “Identifying this biomarker could broadly impact future translational neuroscience research.”
The paper’s other authors are Cigdem Gelegen, Jordan Norris, Francesca Chaloner, Maia Lee, Michael Khela, Maxwell Heinrich, Peter Finnie, Lauren Ethridge, Craig Erickson, Lauren Schmitt, Sam Cooke, and Carol Wilkinson.
The National Institutes of Health, the National Science Foundation, the FRAXA Foundation, the Pierce Family Fragile X Foundation, the Autism Science Foundation, the Thrasher Research Fund, Harvard University, the Simons Foundation, Wellcome, the Biotechnology and Biological Sciences Research Council, and the Freedom Together Foundation provided support for the research.
