MIT Latest News

MIT engineers develop a magnetic transistor for more energy-efficient electronics
Transistors, the building blocks of modern electronics, are typically made of silicon. Because it’s a semiconductor, this material can control the flow of electricity in a circuit. But silicon has fundamental physical limits that restrict how compact and energy-efficient a transistor can be.
MIT researchers have now replaced silicon with a magnetic semiconductor, creating a magnetic transistor that could enable smaller, faster, and more energy-efficient circuits. The material’s magnetism strongly influences its electronic behavior, leading to more efficient control of the flow of electricity.
The team used a novel magnetic material and an optimization process that reduces the material’s defects, which boosts the transistor’s performance.
The material’s unique magnetic properties also allow for transistors with built-in memory, which would simplify circuit design and unlock new applications for high-performance electronics.
“People have known about magnets for thousands of years, but there are very limited ways to incorporate magnetism into electronics. We have shown a new way to efficiently utilize magnetism that opens up a lot of possibilities for future applications and research,” says Chung-Tao Chou, an MIT graduate student in the departments of Electrical Engineering and Computer Science (EECS) and Physics, and co-lead author of a paper on this advance.
Chou is joined on the paper by co-lead author Eugene Park, a graduate student in the Department of Materials Science and Engineering (DMSE); Julian Klein, a DMSE research scientist; Josep Ingla-Aynes, a postdoc in the MIT Plasma Science and Fusion Center; Jagadeesh S. Moodera, a senior research scientist in the Department of Physics; and senior authors Frances Ross, TDK Professor in DMSE; and Luqiao Liu, an associate professor in EECS, and a member of the Research Laboratory of Electronics; as well as others at the University of Chemistry and Technology in Prague. The paper appears today in Physical Review Letters.
Overcoming the limits
In an electronic device, silicon semiconductor transistors act like tiny light switches that turn a circuit on and off, or amplify weak signals in a communication system. They do this using a small input voltage.
But a fundamental physical limit of silicon semiconductors prevents a transistor from operating below a certain voltage, which hinders its energy efficiency.
To make more efficient electronics, researchers have spent decades working toward magnetic transistors that utilize electron spin to control the flow of electricity. Electron spin is a fundamental property that enables electrons to behave like tiny magnets.
So far, scientists have mostly been limited to using certain magnetic materials. These lack the favorable electronic properties of semiconductors, constraining device performance.
“In this work, we combine magnetism and semiconductor physics to realize useful spintronic devices,” Liu says.
The researchers replace the silicon in the surface layer of a transistor with chromium sulfur bromide, a two-dimensional material that acts as a magnetic semiconductor.
Due to the material’s structure, researchers can switch between two magnetic states very cleanly. This makes it ideal for use in a transistor that smoothly switches between “on” and “off.”
“One of the biggest challenges we faced was finding the right material. We tried many other materials that didn’t work,” Chou says.
They discovered that changing these magnetic states modifies the material’s electronic properties, enabling low-energy operation. And unlike many other 2D materials, chromium sulfur bromide remains stable in air.
To make a transistor, the researchers pattern electrodes onto a silicon substrate, then carefully align and transfer the 2D material on top. They use tape to pick up a tiny piece of material, only a few tens of nanometers thick, and place it onto the substrate.
“A lot of researchers will use solvents or glue to do the transfer, but transistors require a very clean surface. We eliminate all those risks by simplifying this step,” Chou says.
Leveraging magnetism
This lack of contamination enables their device to outperform existing magnetic transistors. Most others can only create a weak magnetic effect, changing the flow of current by a few percent or less. Their new transistor can switch or amplify the electric current by a factor of 10.
They use an external magnetic field to change the magnetic state of the material, switching the transistor using significantly less energy than would usually be required.
The material also allows them to control the magnetic states with electric current. This is important because engineers cannot apply magnetic fields to individual transistors in an electronic device. They need to control each one electrically.
The material’s magnetic properties could also enable transistors with built-in memory, simplifying the design of logic or memory circuits.
A typical memory device has a magnetic cell to store information and a transistor to read it out. Their method can combine both into one magnetic transistor.
“Now, not only are transistors turning on and off, they are also remembering information. And because we can switch the transistor with greater magnitude, the signal is much stronger so we can read out the information faster, and in a much more reliable way,” Liu says.
Building on this demonstration, the researchers plan to further study the use of electrical current to control the device. They are also working to make their method scalable so they can fabricate arrays of transistors.
This research was supported, in part, by the Semiconductor Research Corporation, the U.S. Defense Advanced Research Projects Agency (DARPA), the U.S. National Science Foundation (NSF), the U.S. Department of Energy, the U.S. Army Research Office, and the Czech Ministry of Education, Youth, and Sports. The work was partially carried out at the MIT.nano facilities.
MIT releases financials and endowment figures for 2025
The Massachusetts Institute of Technology Investment Management Company (MITIMCo) announced today that MIT’s unitized pool of endowment and other MIT funds generated an investment return of 14.8 percent during the fiscal year ending June 30, 2025, as measured using valuations received within one month of fiscal year end. At the end of the fiscal year, MIT’s endowment funds totaled $27.4 billion, excluding pledges. Over the 10 years ending June 30, 2025, MIT generated an annualized return of 10.7 percent.
The endowment is the bedrock of MIT’s finances, made possible by gifts from alumni and friends for more than a century. The use of the endowment is governed by a state law that requires MIT to maintain each endowed gift as a permanent fund, preserve its purchasing power, and spend it as directed by its original donor. Most of the endowment’s funds are restricted and must be used for a specific purpose. MIT uses the bulk of the income these endowed gifts generate to support financial aid, research, and education.
The endowment supports 50 percent of undergraduate tuition, helping to enable the Institute’s need-blind undergraduate admissions policy, which ensures that an MIT education is accessible to all qualified candidates regardless of financial resources. MIT works closely with all families of undergraduates who qualify for financial aid to develop an individual affordability plan tailored to their financial circumstances. In 2024-25, the average need-based MIT undergraduate scholarship was $62,127. Fifty-seven percent of MIT undergraduates received need-based financial aid, and 39 percent of MIT undergraduate students received scholarship funding from MIT and other sources sufficient to cover the total cost of tuition.
Effective in fiscal 2026, MIT enhanced undergraduate financial aid, ensuring that all families with incomes below $200,000 and typical assets have tuition fully covered by scholarships, and that families with incomes below $100,000 and typical assets pay nothing at all for their students’ MIT education. Eighty-eight percent of seniors who graduated in academic year 2025 graduated with no debt.
MITIMCo is a unit of MIT, created to manage and oversee the investment of the Institute’s endowment, retirement, and operating funds.
MIT’s Report of the Treasurer for fiscal year 2025, which details the Institute’s annual financial performance, was made available publicly today.
Ray Kurzweil ’70 reinforces his optimism in tech progress
Innovator, futurist, and author Ray Kurzweil ’70 emphasized his optimism about artificial intelligence, and technological progress generally, in a lecture on Wednesday while accepting MIT’s Robert A. Muh Alumni Award from the School of Humanities, Arts, and Social Sciences (SHASS).
Kurzweil offered his signature high-profile forecasts about how AI and computing will entirely blend with human functionality, and proposed that AI will lead to monumental gains in longevity, medicine, and other realms of life.
“People do not appreciate that the rate of progress is accelerating,” Kurzweil said, forecasting “incredible breakthroughs” over the next two decades.
Kurzweil delivered his lecture, titled “Reinventing Intelligence,” in the Thomas Tull Concert Hall of the Edward and Joyce Linde Music Building, which opened earlier in 2025 on the MIT campus.
The Muh Award was founded and endowed by Robert A. Muh ’59 and his wife Berit, and is one of the leading alumni honors granted by SHASS and MIT. Muh, a life member emeritus of the MIT Corporation, established the award, which is granted every two years for “extraordinary contributions” by alumni in the humanities, arts, and social sciences.
Robert and Berit Muh were both present at the lecture, along with their daughter Carrie Muh ’96, ’97, SM ’97.
Agustín Rayo, dean of SHASS, offered introductory remarks, calling Kurzweil “one of the most prolific thinkers of our time.” Rayo added that Kurzweil “has built his life and career on the belief that ideas change the world, and change it for the better.”
Kurzweil has been an innovator in language recognition technologies, developing advances and founding companies that have served people who are blind or low-vision, and helped in music creation. He is also a best-selling author who has heralded advances in computing capabilities, and even the merging of humans and machines.
The initial segment of Kurzweil’s lecture was autobiographical in focus, reflecting on his family and early years. The families of both of Kurzweil’s parents fled the Nazis in Europe, seeking refuge in the U.S., with the belief that people could create a brighter future for themselves.
“My parents taught me the power of ideas can really change the world,” Kurzweil said.
Showing an early interest in how things worked, Kurzweil had decided to become an inventor by about the age of 7, he recalled. He also described his mother as being tremendously encouraging to him as a child. The two would take walks together, and the young Kurzweil would talk about all the things he imagined inventing.
“I would tell her my ideas and no matter how fantastical they were, she believed them,” he said. “Now other parents might have simply chuckled … but she actually believed my ideas, and that actually gave me my confidence, and I think confidence is important in succeeding.”
He became interested in computing by the early 1960s and majored in both computer science and literature as an MIT undergraduate.
Kurzweil has a long-running association with MIT extending far beyond his undergraduate studies. He served as a member of the MIT Corporation from 2005 to 2012 and was the 2001 recipient of the $500,000 Lemelson-MIT Prize, an award for innovation, for his development of reading technology.
“MIT has played a major role in my personal and professional life over the years,” Kurzweil said, calling himself “truly honored to receive this award.” Addressing Muh, he added: “Your longstanding commitment to our alma mater is inspiring.”
After graduating from MIT, Kurzweil launched a successful career developing innovative computing products, including one that recognized text across all fonts and could produce an audio reading. He also developed leading-edge music synthesizers, among many other advances.
In a corresponding part of his career, Kurzweil has become an energetic author, whose best-known books include “The Age of Intelligent Machines” (1990), “The Age of Spiritual Machines” (1999), “The Singularity Is Near” (2005), and “The Singularity Is Nearer” (2024), among many others.
Kurzweil was recently named chief AI officer of Beyond Imagination, a robotics firm he co-founded; he has also held a position at Google in recent years, working on natural language technologies.
In his remarks, Kurzweil underscored his view that, as exemplified and enabled by the growth of computing power over time, technological innovation moves at an exponential pace.
“People don’t really think about exponential growth; they think about linear growth,” Kurzweil said.
This concept, he said, makes him confident that a string of innovations will continue at remarkable speed.
“One of the bigger transformations we’re going to see from AI in the near term is health and medicine,” Kurzweil said, forecasting that human medical trials will be replaced by simulated “digital trials.”
Kurzweil also believes that advances in computing and AI can yield so many medical breakthroughs that human longevity will soon improve drastically.
“These incredible breakthroughs are going to lead to what we’ll call longevity escape velocity,” Kurzweil said. “By roughly 2032, when you live through a year, you’ll get back an entire year from scientific progress, and beyond that point you’ll get back more than a year for every year you live, so you’ll be going back into time as far as your health is concerned.” He did offer that these advances will “start” with people who are the most diligent about their health.
Kurzweil also outlined one of his best-known forecasts, that AI and people will be combined. “As we move forward, the lines between humans and technology will blur, until we are … one and the same,” Kurzweil said. “This is how we learn to merge with AI. In the 2030s, robots the size of molecules will go into our brains, noninvasively, through the capillaries, and will connect our brains directly to the cloud. Think of it like having a phone, but in your brain.”
“By 2045, once we have fully merged with AI, our intelligence will no longer be constrained … it will expand a millionfold,” he said. “This is what we call the singularity.”
To be sure, Kurzweil acknowledged, “Technology has always been a double-edged sword,” given that a drone can deliver either medical supplies or weaponry. “Threats of AI are real, must be taken seriously, [and] I think we are doing that,” he said. In any case, he added, we have “a moral imperative to realize the promise of new technologies while controlling the peril.” He concluded: “We are not doomed to fail to control any of these risks.”
Gene-Wei Li named associate head of the Department of Biology
Associate Professor Gene-Wei Li has accepted the position of associate head of the MIT Department of Biology, starting in the 2025-26 academic year.
Li, who has been a member of the department since 2015, brings a history of departmental leadership, service, and research and teaching excellence to his new role. He has received many awards, including a Sloan Research Fellowship (2016), an NSF CAREER Award (2019), Pew and Searle scholarships, and MIT’s Committed to Caring Award (2020). In 2024, he was appointed a Howard Hughes Medical Institute (HHMI) Investigator.
“I am grateful to Gene-Wei for joining the leadership team,” says department head Amy E. Keating, the Jay A. Stein (1968) Professor of Biology and professor of biological engineering. “Gene will be a key leader in our educational initiatives, both digital and residential, and will be a critical part of keeping our department strong and forward-looking.”
A great environment to do science
Li says he was inspired to take on the role in part because of the way MIT Biology facilitates career development during every stage — from undergraduate and graduate students to postdocs and junior faculty members, as he was when he started in the department as an assistant professor just 10 years ago.
“I think we all benefit a lot from our environment, and I think this is a great environment to do science and educate people, and to create a new generation of scientists,” he says. “I want us to keep doing well, and I’m glad to have the opportunity to contribute to this effort.”
As part of his portfolio as associate department head, Li will continue in the role of scientific director of the Koch Biology Building, Building 68. Over the past year, the previous scientific director, Stephen Bell, the Uncas and Helen Whitaker Professor of Biology and an HHMI Investigator, has provided support and ensured a steady ramp-up as Li transitions into his new duties. The building, which opened its doors in 1994, is in need of a slate of updates and repairs.
Although Li will be managing more administrative duties, he has provided a stable foundation for his lab to continue its interdisciplinary work on the quantitative biology of gene expression, parsing the mechanisms by which cells control the levels of their proteins and how this enables cells to perform their functions. His recent work includes developing a method that leverages the AI tool AlphaFold to predict whether protein fragments can recapitulate the native interactions of their full-length counterparts.
“I’m still very heavily involved, and we have a lab environment where everyone helps each other. It’s a team, and so that helps elevate everyone,” he says. “It’s the same with the whole building: nobody is working by themselves, so the science and administrative parts come together really nicely.”
Teaching for the future
Li is considering how the department can continue to be a global leader in biological sciences while navigating the uncertainty surrounding academia and funding, as well as the likelihood of reduced staff support and tightening budgets.
“The question is: How do you maintain excellence?” Li says. “That involves recruiting great people and giving them the resources that they need, and that’s going to be a priority within the limitations that we have to work with.”
Li will also be serving as faculty advisor for the MIT Biology Teaching and Learning Group, headed by Mary Ellen Wiltrout, and will serve on the Department of Biology Digital Learning Committee and the new Open Learning Biology Advisory Committee. Li will serve in the latter role in order to represent the department and work with new faculty member and HHMI Investigator Ron Vale on Institute-level online learning initiatives. Li will also chair the Biology Academic Planning Committee, which will help develop a longer-term outlook on faculty teaching assignments and course offerings.
Li is looking forward to hearing from faculty and students about the way the Institute teaches, and how it could be improved, both for the students on campus and for the online learners from across the world.
“There are a lot of things that are changing; what are the core fundamentals that the students need to know, what should we teach them, and how should we teach them?”
Although the commitment to teaching remains unchanged, there may be big transitions on the horizon. With two young children in school, Li is all too aware that the way that students learn today is very different from what he grew up with, and also very different from how students were learning just five or 10 years ago — writing essays on a computer, researching online, using AI tools, and absorbing information from media like short-form YouTube videos.
“There’s a lot of appeal to a shorter format, but it’s very different from the lecture-based teaching style that has worked for a long time,” Li says. “I think a challenge we should and will face is figuring out the best way to communicate the core fundamentals, and adapting our teaching styles to the next generation of students.”
Ultimately, Li is excited about balancing his research goals along with joining the department’s leadership team, and knows he can look to his fellow researchers in Building 68 and beyond for support.
“I’m privileged to be working with a great group of colleagues who are all invested in these efforts,” Li says. “Different people may have different ways of doing things, but we all share the same mission.”
Immune-informed brain aging research offers new treatment possibilities, speakers say
Understanding how interactions between the central nervous system and the immune system contribute to problems of aging, including Alzheimer’s disease, Parkinson’s disease, arthritis, and more, can generate new leads for therapeutic development, speakers said at MIT’s symposium “The Neuro-Immune Axis and the Aging Brain” on Sept. 18.
“The past decade has brought rapid progress in our understanding of how adaptive and innate immune systems impact the pathogenesis of neurodegenerative disorders,” said Picower Professor Li-Huei Tsai, director of The Picower Institute for Learning and Memory and MIT’s Aging Brain Initiative (ABI), in her introduction to the event, which more than 450 people registered to attend. “Together, today’s speakers will trace how the neuro-immune axis shapes brain health and disease … Their work converges on the promise of immunology-informed therapies to slow or prevent neurodegeneration and age-related cognitive decline.”
For instance, keynote speaker Michal Schwartz of the Weizmann Institute in Israel described her decades of pioneering work to understand the neuro-immune “ecosystem.” Immune cells, she said, help the brain heal, and support many of its functions, including its “plasticity,” the ability it has to adapt to and incorporate new information. But Schwartz’s lab also found that an immune signaling cascade can arise with aging that undermines cognitive function. She has leveraged that insight to investigate and develop corrective immunotherapies that improve the brain’s immune response to Alzheimer’s both by rejuvenating the brain’s microglia immune cells and bringing in the help of peripheral immune cells called macrophages. Schwartz has brought the potential therapy to market as the chief science officer of ImmunoBrain, a company testing it in a clinical trial.
In her presentation, Tsai noted recent work from her lab and that of computer science professor and fellow ABI member Manolis Kellis showing that many of the genes associated with Alzheimer’s disease are most strongly expressed in microglia, giving the disease an expression profile more similar to autoimmune disorders than to many psychiatric conditions (where expression of disease-associated genes is typically highest in neurons). The study showed that microglia become “exhausted” over the course of disease progression, losing their cellular identity and becoming harmfully inflammatory.
“Genetic risk, epigenomic instability, and microglia exhaustion really play a central role in Alzheimer’s disease,” Tsai said, adding that her lab is now also looking into how immune T cells, recruited by microglia, may also contribute to Alzheimer’s disease progression.
The body and the brain
The neuro-immune “axis” connects not only the nervous and immune systems, but also extends between the whole body and the brain, with numerous implications for aging. Several speakers focused on the key conduit: the vagus nerve, which runs from the brain to the body’s major organs.
For instance, Sara Prescott, an investigator in the Picower Institute and an MIT assistant professor of biology, presented evidence her lab is amassing that the brain’s communication via vagus nerve terminals in the body’s airways is crucial for managing the body’s defense of respiratory tissues. Given that we inhale about 20,000 times a day, our airways are exposed to many environmental challenges, Prescott noted, and her lab and others are finding that the nervous system interacts directly with immune pathways to mount physiological responses. But vagal reflexes decline in aging, she noted, increasing susceptibility to infection, and so her lab is now working in mouse models to study airway-to-brain neurons throughout the lifespan to better understand how they change with aging.
In his talk, Caltech Professor Sarkis Mazmanian focused on work in his lab linking the gut microbiome to Parkinson’s disease (PD), for instance by promoting alpha-synuclein protein pathology and motor problems in mouse models. His lab hypothesizes that the microbiome can nucleate alpha-synuclein in the gut via a bacterial amyloid protein that may subsequently promote pathology in the brain, potentially via the vagus nerve. Based on its studies, the lab has developed two interventions. One is giving alpha-synuclein overexpressing mice a high-fiber diet to increase short-chain fatty acids in their gut, which actually modulates the activity of microglia in the brain. The high-fiber diet helps relieve motor dysfunction, corrects microglia activity, and reduces protein pathology, he showed. Another is a drug to disrupt the bacterial amyloid in the gut. It prevents alpha synuclein formation in the mouse brain and ameliorates PD-like symptoms. These results are pending publication.
Meanwhile, Kevin Tracey, professor at Hofstra University and Northwell Health, took listeners on a journey up and down the vagus nerve to the spleen, describing how impulses in the nerve regulate the immune system’s release of signaling molecules, or “cytokines.” Too great a surge can become harmful, for instance causing the autoimmune disorder rheumatoid arthritis. Tracey described how a pill-sized neck implant that stimulates the vagus nerve, newly approved by the U.S. Food and Drug Administration, helps patients with severe forms of the disease without suppressing their immune system.
The brain’s border
Other speakers discussed opportunities for understanding neuro-immune interactions in aging and disease at the “borders” where the brain’s and body’s immune system meet. These areas include the meninges that surround the brain, the choroid plexus (proximate to the ventricles, or open spaces, within the brain), and the interface between brain cells and the circulatory system.
For instance, taking a cue from studies showing that circadian disruptions are a risk factor for Alzheimer’s disease, Harvard Medical School Professor Beth Stevens of Boston Children’s Hospital described new research in her lab that examined how brain immune cells may function differently around the day-night cycle. The project, led by newly minted PhD Helena Barr, found that “border-associated macrophages” — long-lived immune cells residing in the brain’s borders — exhibited circadian rhythms in gene expression and function. Stevens described how these cells are tuned by the circadian clock to “eat” more during the rest phase, a process that may help remove material draining from the brain, including Alzheimer’s disease-associated peptides such as amyloid-beta. So, Stevens hypothesizes, circadian disruptions, for example due to aging or night-shift work, may contribute to disease onset by disrupting the delicate balance in immune-mediated “clean-up” of the brain and its borders.
Following Stevens at the podium, Washington University Professor Marco Colonna traced how various kinds of macrophages, including border macrophages and microglia, develop from the embryonic stage. He described the different gene-expression programs that guide their differentiation into one type or another. One gene he highlighted, for instance, is necessary for border macrophages along the brain’s vasculature to help regulate the waste-clearing cerebrospinal fluid (CSF) flow that Stevens also discussed. Knocking out the gene also impairs blood flow. Importantly, his lab has found that versions of the gene may be somewhat protective against Alzheimer’s, and that regulating expression of the gene could be a therapeutic strategy.
Colonna’s WashU colleague Jonathan Kipnis (a former student of Schwartz) also discussed macrophages associated with the particular border between brain tissue and the plumbing alongside the vasculature that carries CSF. The macrophages, his lab showed in 2022, actively govern the flow of CSF. He showed that removing the macrophages let Alzheimer’s proteins accumulate in mice. His lab is continuing to investigate ways in which these specific border macrophages may play roles in disease. In separate studies, he is also looking at how the skull’s bone marrow contributes to the population of immune cells in the brain and may play a role in neurodegeneration.
For all the talk of distant organs and the brain’s borders, neurons themselves were never far from the discussion. Harvard Medical School Professor Isaac Chiu gave them their direct due in a talk focusing on how they participate in their own immune defense, for instance by directly sensing pathogens and giving off inflammation signals upon cell death. He discussed a key molecule in that latter process, which is expressed among neurons all over the brain.
Whether they were looking within the brain, at its border, or throughout the body, speakers showed that age-related nervous system diseases are not only better understood but also possibly better treated by accounting not only for the nerve cells, but their immune system partners.
MIT Schwarzman College of Computing and MBZUAI launch international collaboration to shape the future of AI
The MIT Schwarzman College of Computing and the Mohamed bin Zayed University of Artificial Intelligence (MBZUAI) recently celebrated the launch of the MIT–MBZUAI Collaborative Research Program, a new effort to strengthen the building blocks of artificial intelligence and accelerate its use in pressing scientific and societal challenges.
Under the five-year agreement, faculty, students, and research staff from both institutions will collaborate on fundamental research projects to advance the technological foundations of AI and its applications in three core areas: scientific discovery, human thriving, and the health of the planet.
“Artificial intelligence is transforming nearly every aspect of human endeavor. MIT’s leadership in AI is greatly enriched through collaborations with leading academic institutions in the U.S. and around the world,” says Dan Huttenlocher, dean of the MIT Schwarzman College of Computing and the Henry Ellis Warren Professor of Electrical Engineering and Computer Science. “Our collaboration with MBZUAI reflects a shared commitment to advancing AI in ways that are responsible, inclusive, and globally impactful. Together, we can explore new horizons in AI and bring broad benefits to society.”
“This agreement will unite the efforts of researchers at two world-class institutions to advance frontier AI research across scientific discovery, human thriving, and the health of the planet. By combining MBZUAI’s focus on foundational models and real-world deployment with MIT’s depth in computing and interdisciplinary innovation, we are creating a transcontinental bridge for discovery. Together, we will not only expand the boundaries of AI science, but also ensure that these breakthroughs are pursued responsibly and applied where they matter most — improving human health, enabling intelligent robotics, and driving sustainable AI at scale,” says Eric Xing, president and university professor at MBZUAI.
Each institution has appointed an academic director to oversee the program on its campus. At MIT, Philip Isola, the Class of 1948 Career Development Professor in the Department of Electrical Engineering and Computer Science, will serve as program lead. At MBZUAI, Le Song, professor of machine learning, will take on the role.
Supported by MBZUAI — the first university dedicated entirely to advancing science through AI, and based in Abu Dhabi, U.A.E. — the collaboration will fund a number of joint research projects per year. The findings will be openly publishable, and each project will be led by a principal investigator from MIT and one from MBZUAI, with project selections made by a steering committee composed of representatives from both institutions.
Riccardo Comin, two MIT alumni named 2025 Moore Experimental Physics Investigators
MIT associate professor of physics Riccardo Comin has been selected as a 2025 Experimental Physics Investigator by the Gordon and Betty Moore Foundation. Two MIT physics alumni — Gyu-Boong Jo PhD ’10 of Rice University and Ben Jones PhD ’15 of the University of Texas at Arlington — were also among this year’s cohort of 22 honorees.
The prestigious Experimental Physics Investigators (EPI) Initiative recognizes mid-career scientists advancing the frontiers of experimental physics. Each award provides $1.3 million over five years to accelerate breakthroughs and strengthen the experimental physics community.
At MIT, Comin investigates magnetoelectric multiferroics by engineering interfaces between two-dimensional materials and three-dimensional oxide thin films. His research aims to overcome long-standing limitations in spin-charge coupling by moving beyond epitaxial constraints, enabling new interfacial phases and coupling mechanisms. In these systems, Comin’s team explores the coexistence and proximity of magnetic and ferroelectric order, with a focus on achieving strong magnetoelectric coupling. This approach opens new pathways for designing tunable multiferroic systems unconstrained by traditional synthesis methods.
Comin’s research expands the frontier of multiferroics by demonstrating stacking-controlled magnetoelectric coupling at 2D–3D interfaces. This approach enables exploration of fundamental physics in a versatile materials platform and opens new possibilities for spintronics, sensing, and data storage. By removing constraints of epitaxial growth, Comin’s work lays the foundation for microelectronic and spintronic devices with novel functionalities driven by interfacial control of spin and polarization.
Comin’s project, Interfacial MAGnetoElectrics (I-MAGinE), aims to study a new class of artificial magnetoelectric multiferroics at the interfaces between ferroic materials from 2D van der Waals systems and 3D oxide thin films. The team aims to identify and understand novel magnetoelectric effects to demonstrate the viability of stacking-controlled interfacial magnetoelectric coupling. This research could lead to significant contributions in multiferroics, and could pave the way for innovative, energy-efficient storage devices.
“This research has the potential to make significant contributions to the field of multiferroics by demonstrating the viability of stacking-controlled interfacial magnetoelectric coupling,” according to Comin’s proposal. “The findings could pave the way for future applications in spintronics, data storage, and sensing. It offers a significant opportunity to explore fundamental physics questions in a novel materials platform, while laying the ground for future technological applications, including microelectronic and spintronic devices with new functionalities.”
Comin’s group has extensive experience in researching 2D and 3D ferroic materials and electronically ordered oxide thin films, as well as ultrathin van der Waals magnets, ferroelectrics, and multiferroics. Their lab is equipped with state-of-the-art tools for material synthesis, including bulk crystal growth of van der Waals materials and pulsed laser deposition targets, along with comprehensive fabrication and characterization capabilities. Their expertise in magneto-optical probes and advanced magnetic X-ray techniques promises to enable in-depth studies of electronic and magnetic structures, contributing significantly to the understanding of spin-charge coupling in magnetochiral materials.
The coexistence of ferroelectricity and ferromagnetism in a single material, known as multiferroicity, is rare, and strong spin-charge coupling is even rarer due to fundamental chemical and electronic structure incompatibilities.
The few known bulk multiferroics with strong magnetoelectric coupling generally rely on inversion symmetry-breaking spin arrangements, which only emerge at low temperatures, limiting practical applications. While interfacial magnetoelectric multiferroics offer an alternative, achieving efficient spin-charge coupling often requires stringent conditions like epitaxial growth and lattice matching, which limit material combinations. This research proposes to overcome these limitations by using non-epitaxial interfaces of 2D van der Waals materials and 3D oxide thin films.
Unique features of this approach include leveraging the versatility of 2D ferroics for seamless transfer onto any substrate, eliminating lattice matching requirements, and exploring new classes of interfacial magnetoelectric effects unconstrained by traditional thin-film synthesis limitations.
Launched in 2018, the Moore Foundation’s EPI Initiative cultivates collaborative research environments and provides research support to promote the discovery of new ideas and emphasize community building.
“We have seen numerous new connections form and new research directions pursued by both individuals and groups based on conversations at these gatherings,” says Catherine Mader, program officer for the initiative.
The Gordon and Betty Moore Foundation was established to create positive outcomes for future generations. In pursuit of that vision, it advances scientific discovery, environmental conservation, and the special character of the San Francisco Bay Area.
How to reduce greenhouse gas emissions from ammonia production
Ammonia is one of the most widely produced chemicals in the world, used mostly as fertilizer, but also for the production of some plastics, textiles, and other applications. Its production, through processes that require high heat and pressure, accounts for up to 20 percent of all the greenhouse gases from the entire chemical industry, so efforts have been underway worldwide to find ways to reduce those emissions.
Now, researchers at MIT have come up with a clever way of combining two different methods of producing the compound. The combined approach minimizes waste products and, together with some other simple upgrades, could reduce greenhouse emissions from production by as much as 63 percent compared to the leading “low-emissions” approach in use today.
The new approach is described in the journal Energy & Fuels, in a paper by MIT Energy Initiative (MITEI) Director William H. Green, graduate student Sayandeep Biswas, MITEI Director of Research Randall Field, and two others.
“Ammonia has the most carbon dioxide emissions of any kind of chemical,” says Green, who is the Hoyt C. Hottel Professor in Chemical Engineering. “It’s a very important chemical,” he says, because its use as a fertilizer is crucial to being able to feed the world’s population.
Until late in the 19th century, the most widely used source of nitrogen fertilizer was mined deposits of bat or bird guano, mostly from Chile, but that source was beginning to run out, and there were predictions that the world would soon be running short of food to sustain the population. But then a new chemical process, called the Haber-Bosch process after its inventors, made it possible to make ammonia out of nitrogen from the air and hydrogen, which was mostly derived from methane. But both the burning of fossil fuels to provide the needed heat and the use of methane to make the hydrogen led to massive climate-warming emissions from the process.
To address this, two newer variations of ammonia production have been developed: so-called “blue ammonia,” where the greenhouse gases are captured right at the factory and then sequestered deep underground, and “green ammonia,” produced by a different chemical pathway that uses electricity instead of fossil fuels to electrolyze water to make hydrogen.
Blue ammonia is already beginning to be used, with a few plants operating now in Louisiana, Green says, and the ammonia mostly being shipped to Japan, “so that’s already kind of commercial.” Other parts of the world are starting to use green ammonia, especially in places that have lots of hydropower, solar, or wind to provide inexpensive electricity, including a giant plant now under construction in Saudi Arabia.
But in most places, both blue and green ammonia are still more expensive than the traditional fossil-fuel-based version, so many teams around the world have been working on ways to cut these costs as much as possible so that the difference is small enough to be made up through tax subsidies or other incentives.
The problem is growing, because as the population grows, and as wealth increases, there will be ever-increasing demands for nitrogen fertilizer. At the same time, ammonia is a promising substitute fuel to power hard-to-decarbonize transportation such as cargo ships and heavy trucks, which could lead to even greater needs for the chemical.
“It definitely works” as a transportation fuel, by powering fuel cells that have been demonstrated for use by everything from drones to barges and tugboats and trucks, Green says. “People think that the most likely market of that type would be for shipping,” he says, “because the downside of ammonia is it’s toxic and it’s smelly, and that makes it slightly dangerous to handle and to ship around.” So its best uses may be where it’s used in high volume and in relatively remote locations, like the high seas. In fact, the International Maritime Organization will soon be voting on new rules that might give a strong boost to the ammonia alternative for shipping.
The key to the new proposed system is to combine the two existing approaches in one facility, with a blue ammonia factory next to a green ammonia factory. The process of generating hydrogen for the green ammonia plant leaves a lot of leftover oxygen that just gets vented to the air. Blue ammonia, on the other hand, uses a process called autothermal reforming that requires a source of pure oxygen, so if there’s a green ammonia plant next door, it can use that excess oxygen.
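To get a rough sense of the scale of that excess oxygen, here is a minimal back-of-the-envelope sketch in Python. The electrolysis and Haber-Bosch stoichiometry is textbook chemistry, but the autothermal reformer's oxygen demand (ATR_O2_PER_TONNE_BLUE_NH3) is a placeholder assumption for illustration, not a figure from the MIT study.

```python
# Back-of-the-envelope estimate of the oxygen synergy described above.
# Electrolysis and Haber-Bosch stoichiometry are standard chemistry;
# the ATR oxygen demand is an ILLUSTRATIVE ASSUMPTION, not from the paper.

M_NH3 = 17.031   # g/mol
M_O2 = 31.998    # g/mol

# Haber-Bosch: N2 + 3 H2 -> 2 NH3  => 1.5 mol H2 per mol NH3
H2_PER_NH3 = 1.5
# Water electrolysis: 2 H2O -> 2 H2 + O2  => 0.5 mol O2 per mol H2
O2_PER_H2 = 0.5

def o2_byproduct_per_tonne_green_nh3():
    """Tonnes of O2 produced (and normally vented) per tonne of green ammonia."""
    mol_nh3 = 1e6 / M_NH3                  # moles of NH3 in one tonne
    mol_o2 = mol_nh3 * H2_PER_NH3 * O2_PER_H2
    return mol_o2 * M_O2 / 1e6             # back to tonnes

# Assumed oxygen demand of the neighboring blue-ammonia plant's autothermal
# reformer (placeholder value, for illustration only).
ATR_O2_PER_TONNE_BLUE_NH3 = 0.6  # t O2 per t NH3, assumed

if __name__ == "__main__":
    o2 = o2_byproduct_per_tonne_green_nh3()
    print(f"Green plant O2 byproduct: {o2:.2f} t O2 per t NH3")
    print(f"Blue NH3 that this O2 could supply (assumed demand): "
          f"{o2 / ATR_O2_PER_TONNE_BLUE_NH3:.1f} t")
```

Under these illustrative assumptions, each tonne of green ammonia yields on the order of 1.4 tonnes of byproduct oxygen that a co-located blue plant could consume rather than letting it vent to the air.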
“Putting them next to each other turns out to have significant economic value,” Green says. This synergy could help hybrid “blue-green ammonia” facilities serve as an important bridge toward a future where eventually green ammonia, the cleanest version, could finally dominate. But that future is likely decades away, Green says, so having the combined plants could be an important step along the way.
“It might be a really long time before [green ammonia] is actually attractive” economically, he says. “Right now, it’s nowhere close, except in very special situations.” But the combined plants “could be a really appealing concept, and maybe a good way to start the industry,” because so far only small, standalone demonstration plants of the green process are being built.
“If green or blue ammonia is going to become the new way of making ammonia, you need to find ways to make it relatively affordable in a lot of countries, with whatever resources they’ve got,” he says. This new proposed combination, he says, “looks like a really good idea that can help push things along. Ultimately, there’s got to be a lot of green ammonia plants in a lot of places,” and starting out with the combined plants, which could be more affordable now, could help to make that happen. The team has filed for a patent on the process.
Although the team did a detailed study of both the technology and the economics that show the system has great promise, Green points out that “no one has ever built one. We did the analysis, it looks good, but surely when people build the first one, they’ll find funny little things that need some attention,” such as details of how to start up or shut down the process. “I would say there’s plenty of additional work to do to make it a real industry.” But the results of this study, which shows the costs to be much more affordable than existing blue or green plants in isolation, “definitely encourages the possibility of people making the big investments that would be needed to really make this industry feasible.”
This proposed integration of the two methods “improves efficiency, reduces greenhouse gas emissions, and lowers overall cost,” says Kevin van Geem, a professor in the Center for Sustainable Chemistry at Ghent University, who was not associated with this research. “The analysis is rigorous, with validated process models, transparent assumptions, and comparisons to literature benchmarks. By combining techno-economic analysis with emissions accounting, the work provides a credible and balanced view of the trade-offs.”
He adds that, “given the scale of global ammonia production, such a reduction could have a highly impactful effect on decarbonizing one of the most emissions-intensive chemical industries.”
The research team also included MIT postdoc Angiras Menon and MITEI research lead Guiyan Zang. The work was supported by IHI Japan through the MIT Energy Initiative and the Martin Family Society of Fellows for Sustainability.
Using generative AI to diversify virtual training grounds for robots
Chatbots like ChatGPT and Claude have experienced a meteoric rise in usage over the past three years because they can help you with a wide range of tasks. Whether you’re writing Shakespearean sonnets, debugging code, or hunting for an answer to an obscure trivia question, artificial intelligence systems seem to have you covered. The source of this versatility? Billions, or even trillions, of textual data points across the internet.
Those data aren’t enough to teach a robot to be a helpful household or factory assistant, though. To understand how to handle, stack, and place various arrangements of objects across diverse environments, robots need demonstrations. You can think of robot training data as a collection of how-to videos that walk the systems through each motion of a task. Collecting these demonstrations on real robots is time-consuming and not perfectly repeatable, so engineers have created training data by generating simulations with AI (which often don’t reflect real-world physics), or by tediously handcrafting each digital environment from scratch.
Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Toyota Research Institute may have found a way to create the diverse, realistic training grounds robots need. Their “steerable scene generation” approach creates digital scenes of things like kitchens, living rooms, and restaurants that engineers can use to simulate lots of real-world interactions and scenarios. Trained on over 44 million 3D rooms filled with models of objects such as tables and plates, the tool places existing assets in new scenes, then refines each one into a physically accurate, lifelike environment.
Steerable scene generation creates these 3D worlds by “steering” a diffusion model — an AI system that generates a visual from random noise — toward a scene you’d find in everyday life. The researchers used this generative system to “in-paint” an environment, filling in particular elements throughout the scene. You can imagine a blank canvas suddenly turning into a kitchen scattered with 3D objects, which are gradually rearranged into a scene that imitates real-world physics. For example, the system ensures that a fork doesn’t pass through a bowl on a table — a common glitch in 3D graphics known as “clipping,” where models overlap or intersect.
How exactly steerable scene generation guides its creation toward realism, however, depends on the strategy you choose. Its main strategy is “Monte Carlo tree search” (MCTS), where the model creates a series of alternative scenes, filling them out in different ways toward a particular objective (like making a scene more physically realistic, or including as many edible items as possible). It’s used by the AI program AlphaGo to beat human opponents in Go (a game similar to chess), as the system considers potential sequences of moves before choosing the most advantageous one.
“We are the first to apply MCTS to scene generation by framing the scene generation task as a sequential decision-making process,” says MIT Department of Electrical Engineering and Computer Science (EECS) PhD student Nicholas Pfaff, who is a CSAIL researcher and a lead author on a paper presenting the work. “We keep building on top of partial scenes to produce better or more desired scenes over time. As a result, MCTS creates scenes that are more complex than what the diffusion model was trained on.”
In one particularly telling experiment, MCTS added the maximum number of objects to a simple restaurant scene. It featured as many as 34 items on a table, including massive stacks of dim sum dishes, after training on scenes with only 17 objects on average.
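To make the sequential-decision framing concrete, here is a toy Monte Carlo tree search sketch in Python. It is not the team's implementation: where the real system asks a trained diffusion model to propose object placements in full 3D scenes, this toy proposes random disk positions on a flat table and rewards scenes that pack in more non-overlapping objects. The names, parameters, and reward function are all illustrative.

```python
# Toy MCTS over partial "scenes", in the spirit of the description above.
# A scene is a tuple of disk centers on a unit-square table; the objective
# steered toward is simply "more non-overlapping objects".
import math, random

TABLE = 1.0      # table is a 1 x 1 square
RADIUS = 0.08    # every object is a disk of this radius (assumption)
BRANCH = 4       # candidate placements proposed per expansion
ITERS = 300      # MCTS iterations

def propose(scene):
    """Propose one new placement (random here; a diffusion model in the real system)."""
    return (random.uniform(RADIUS, TABLE - RADIUS),
            random.uniform(RADIUS, TABLE - RADIUS))

def feasible(scene, obj):
    """Physical-plausibility check: no two objects may interpenetrate."""
    return all(math.dist(obj, o) >= 2 * RADIUS for o in scene)

def reward(scene):
    """Objective being steered toward: more objects is better."""
    return len(scene)

class Node:
    def __init__(self, scene, parent=None):
        self.scene, self.parent = scene, parent
        self.children, self.visits, self.value = [], 0, 0.0

    def ucb(self, c=1.4):
        if self.visits == 0:
            return float("inf")
        return (self.value / self.visits
                + c * math.sqrt(math.log(self.parent.visits) / self.visits))

def mcts(root_scene=()):
    root = Node(tuple(root_scene))
    best = root.scene
    for _ in range(ITERS):
        node = root
        # Selection: walk down by UCB score until reaching a leaf.
        while node.children:
            node = max(node.children, key=Node.ucb)
        # Expansion: try a few proposed placements; keep the feasible ones.
        for _ in range(BRANCH):
            obj = propose(node.scene)
            if feasible(node.scene, obj):
                node.children.append(Node(node.scene + (obj,), parent=node))
        leaf = random.choice(node.children) if node.children else node
        # Evaluation and backpropagation.
        r = reward(leaf.scene)
        if r > reward(best):
            best = leaf.scene
        while leaf:
            leaf.visits += 1
            leaf.value += r
            leaf = leaf.parent
    return best

if __name__ == "__main__":
    scene = mcts()
    print(f"Best scene found: {len(scene)} non-overlapping objects")
```

The structure mirrors the description above: partial scenes are built up step by step, and the search keeps expanding the branches whose partial scenes score best against the chosen objective.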
Steerable scene generation also allows you to generate diverse training scenarios via reinforcement learning — essentially, teaching a diffusion model to fulfill an objective by trial-and-error. After you train on the initial data, your system undergoes a second training stage, where you outline a reward (basically, a desired outcome with a score indicating how close you are to that goal). The model automatically learns to create scenes with higher scores, often producing scenarios that are quite different from those it was trained on.
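A minimal sketch of what such a reward-driven second stage can look like, again only illustrative: instead of fine-tuning a diffusion model, this stand-in uses a cross-entropy-method-style loop that samples scenes from a toy one-parameter generator, scores them with a reward, and nudges the generator toward the highest-scoring samples. All names and numbers here are assumptions, not details from the paper.

```python
# Toy sketch of a reward-driven second training stage. The "model" is just a
# Gaussian over how many objects a scene contains, updated toward its
# highest-reward samples (a cross-entropy-method stand-in for RL fine-tuning).
import random

def sample_scene(mean_objects):
    """Stand-in generator: a 'scene' is reduced to its object count."""
    return max(0, round(random.gauss(mean_objects, 3.0)))

def reward(num_objects, target=30):
    """The outlined desired outcome: scenes close to `target` objects score highest."""
    return -abs(num_objects - target)

def rl_finetune(mean_objects=10.0, rounds=50, batch=64, top_frac=0.25, lr=0.5):
    for _ in range(rounds):
        scenes = [sample_scene(mean_objects) for _ in range(batch)]
        elite = sorted(scenes, key=reward, reverse=True)[: int(batch * top_frac)]
        target_mean = sum(elite) / len(elite)
        mean_objects += lr * (target_mean - mean_objects)  # nudge model toward high-reward scenes
    return mean_objects

if __name__ == "__main__":
    print(f"Model mean object count after fine-tuning: {rl_finetune():.1f}")
```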
Users can also prompt the system directly by typing in specific visual descriptions (like “a kitchen with four apples and a bowl on the table”). Then, steerable scene generation can bring your requests to life with precision. For example, the tool accurately followed users’ prompts at rates of 98 percent when building scenes of pantry shelves, and 86 percent for messy breakfast tables. Both marks are at least a 10 percent improvement over comparable methods like “MiDiffusion” and “DiffuScene.”
The system can also complete specific scenes via prompting or light directions (like “come up with a different scene arrangement using the same objects”). You could ask it to place apples on several plates on a kitchen table, for instance, or put board games and books on a shelf. It’s essentially “filling in the blank” by slotting items in empty spaces, but preserving the rest of a scene.
According to the researchers, the strength of their project lies in its ability to create many scenes that roboticists can actually use. “A key insight from our findings is that it’s OK for the scenes we pre-trained on to not exactly resemble the scenes that we actually want,” says Pfaff. “Using our steering methods, we can move beyond that broad distribution and sample from a ‘better’ one. In other words, generating the diverse, realistic, and task-aligned scenes that we actually want to train our robots in.”
Such vast scenes became the testing grounds where they could record a virtual robot interacting with different items. The machine carefully placed forks and knives into a cutlery holder, for instance, and rearranged bread onto plates in various 3D settings. Each simulation appeared fluid and realistic, resembling the real-world, adaptable robots that steerable scene generation could one day help train.
While the system could be an encouraging path forward in generating lots of diverse training data for robots, the researchers say their work is more of a proof of concept. In the future, they’d like to use generative AI to create entirely new objects and scenes, instead of using a fixed library of assets. They also plan to incorporate articulated objects that the robot could open or twist (like cabinets or jars filled with food) to make the scenes even more interactive.
To make their virtual environments even more realistic, Pfaff and his colleagues may incorporate real-world objects by using a library of objects and scenes pulled from images on the internet and using their previous work on “Scalable Real2Sim.” By expanding how diverse and lifelike AI-constructed robot testing grounds can be, the team hopes to build a community of users that’ll create lots of data, which could then be used as a massive dataset to teach dexterous robots different skills.
“Today, creating realistic scenes for simulation can be quite a challenging endeavor; procedural generation can readily produce a large number of scenes, but they likely won’t be representative of the environments the robot would encounter in the real world. Manually creating bespoke scenes is both time-consuming and expensive,” says Jeremy Binagia, an applied scientist at Amazon Robotics who wasn’t involved in the paper. “Steerable scene generation offers a better approach: train a generative model on a large collection of pre-existing scenes and adapt it (using a strategy such as reinforcement learning) to specific downstream applications. Compared to previous works that leverage an off-the-shelf vision-language model or focus just on arranging objects in a 2D grid, this approach guarantees physical feasibility and considers full 3D translation and rotation, enabling the generation of much more interesting scenes.”
“Steerable scene generation with post training and inference-time search provides a novel and efficient framework for automating scene generation at scale,” says Toyota Research Institute roboticist Rick Cory SM ’08, PhD ’10, who also wasn’t involved in the paper. “Moreover, it can generate ‘never-before-seen’ scenes that are deemed important for downstream tasks. In the future, combining this framework with vast internet data could unlock an important milestone towards efficient training of robots for deployment in the real world.”
Pfaff wrote the paper with senior author Russ Tedrake, the Toyota Professor of Electrical Engineering and Computer Science, Aeronautics and Astronautics, and Mechanical Engineering at MIT; a senior vice president of large behavior models at the Toyota Research Institute; and CSAIL principal investigator. Other authors were Toyota Research Institute robotics researcher Hongkai Dai SM ’12, PhD ’16; team lead and Senior Research Scientist Sergey Zakharov; and Carnegie Mellon University PhD student Shun Iwase. Their work was supported, in part, by Amazon and the Toyota Research Institute. The researchers presented their work at the Conference on Robot Learning (CoRL) in September.
MIT physicists improve the precision of atomic clocks
Every time you check the time on your phone, make an online transaction, or use a navigation app, you are depending on the precision of atomic clocks.
An atomic clock keeps time by relying on the “ticks” of atoms as they naturally oscillate at rock-steady frequencies. Today’s atomic clocks operate by tracking cesium atoms, which tick nearly 10 billion times per second. Each of those ticks is precisely tracked using lasers that oscillate in sync, at microwave frequencies.
Scientists are developing next-generation atomic clocks that rely on even faster-ticking atoms such as ytterbium, which can be tracked with lasers at higher, optical frequencies. If they can be kept stable, optical atomic clocks could track even finer intervals of time, up to 100 trillion times per second.
Now, MIT physicists have found a way to improve the stability of optical atomic clocks, by reducing “quantum noise” — a fundamental measurement limitation due to the effects of quantum mechanics, which obscures the atoms’ pure oscillations. In addition, the team discovered that an effect of a clock’s laser on the atoms, previously considered irrelevant, can be used to further stabilize the laser.
The researchers developed a method to harness a laser-induced “global phase” in ytterbium atoms, and have boosted this effect with a quantum-amplification technique. The new approach doubles the precision of an optical atomic clock, enabling it to discern twice as many ticks per second compared to the same setup without the new method. What’s more, they anticipate that the precision of the method should increase steadily with the number of atoms in an atomic clock.
The researchers detail the method, which they call global phase spectroscopy, in a study appearing today in the journal Nature. They envision that the clock-stabilizing technique could one day enable portable optical atomic clocks that can be transported to various locations to measure all manner of phenomena.
“With these clocks, people are trying to detect dark matter and dark energy, and test whether there really are just four fundamental forces, and even to see if these clocks can predict earthquakes,” says study author Vladan Vuletić, the Lester Wolfe Professor of Physics at MIT. “We think our method can help make these clocks transportable and deployable to where they’re needed.”
The paper’s co-authors are Leon Zaporski, Qi Liu, Gustavo Velez, Matthew Radzihovsky, Zeyang Li, Simone Colombo, and Edwin Pedrozo-Peñafiel, who are members of the MIT-Harvard Center for Ultracold Atoms and the MIT Research Laboratory of Electronics.
Ticking time
In 2020, Vuletić and his colleagues demonstrated that an atomic clock could be made more precise by quantumly entangling the clock’s atoms. Quantum entanglement is a phenomenon by which particles can be made to behave in a collective, highly correlated manner. When atoms are quantumly entangled, they redistribute any noise, or uncertainty in measuring the atoms’ oscillations, in a way that reveals a clearer, more measurable “tick.”
In their previous work, the team induced quantum entanglement among several hundred ytterbium atoms that they first cooled and trapped in a cavity formed by two curved mirrors. They sent a laser into the cavity, which bounced thousands of times between the mirrors, interacting with the atoms and causing the ensemble to entangle. They were able to show that quantum entanglement could improve the precision of existing atomic clocks by essentially reducing the noise, or uncertainty between the laser’s and atoms’ tick rates.
At the time, however, they were limited by the ticking instability of the clock’s laser. In 2022, the same team derived a way to further amplify the difference in laser versus atom tick rates with “time reversal” — a trick that relies on entangling and de-entangling the atoms to boost the signal acquired in between.
However, in that work the team was still using traditional microwaves, which oscillate at much lower frequencies than the optical frequency standards ytterbium atoms can provide. It was as if they had painstakingly lifted a film of dust off a painting, only to then photograph it with a low-resolution camera.
“When you have atoms that tick 100 trillion times per second, that’s 10,000 times faster than the frequency of microwaves,” Vuletić says. “We didn’t know at the time how to apply these methods to higher-frequency optical clocks that are much harder to keep stable.”
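As a rough back-of-envelope check on the scales involved (using the round numbers quoted in this article, not exact atomic transition frequencies), the short Python sketch below reproduces the 10,000-fold frequency ratio and illustrates how quantum projection noise, the quantum noise that limits a clock built from uncorrelated atoms, shrinks only as the square root of the atom number. The atom count is an illustrative assumption; the factor-of-two noise reduction mirrors the doubled precision reported above.

```python
# Back-of-envelope sketch (not from the paper): compare microwave and optical
# "tick" rates, and show how quantum projection noise scales with atom number.
import math

microwave_tick_rate = 10e9    # cesium clock: roughly 10 billion ticks per second
optical_tick_rate = 100e12    # optical clock: up to ~100 trillion ticks per second
print(optical_tick_rate / microwave_tick_rate)   # 10,000x, matching the quote above

# Standard quantum limit: averaging N uncorrelated atoms leaves a relative
# measurement uncertainty that shrinks only as 1/sqrt(N).
def quantum_projection_noise(n_atoms: int) -> float:
    return 1.0 / math.sqrt(n_atoms)

n_atoms = 350                                # illustrative "several hundred" atoms (assumed)
noise_uncorrelated = quantum_projection_noise(n_atoms)
noise_with_new_method = noise_uncorrelated / 2.0   # 2x reduction = resolving half the offset
print(noise_uncorrelated, noise_with_new_method)
```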
About phase
In their new study, the team found a way to apply their previously developed time-reversal approach to optical atomic clocks. After entangling the atoms, they sent in a laser that oscillates near the optical frequency of the entangled atoms.
“The laser ultimately inherits the ticking of the atoms,” says first author Zaporski. “But in order for this inheritance to hold for a long time, the laser has to be quite stable.”
The researchers found they were able to improve the stability of an optical atomic clock by taking advantage of a phenomenon that scientists had assumed was inconsequential to the operation. They realized that when light is sent through entangled atoms, the interaction can cause the atoms to jump up in energy, then settle back down into their original energy state and still carry the memory about their round trip.
“One might think we’ve done nothing,” Vuletić says. “You get this global phase of the atoms, which is usually considered irrelevant. But this global phase contains information about the laser frequency.”
In other words, they realized that the laser was inducing a measurable change in the atoms, despite bringing them back to the original energy state, and that the magnitude of this change depends on the laser’s frequency.
“Ultimately, we are looking for the difference of laser frequency and the atomic transition frequency,” explains co-author Liu. “When that difference is small, it gets drowned by quantum noise. Our method amplifies this difference above this quantum noise.”
In their experiments, the team applied this new approach and found that through entanglement they were able to double the precision of their optical atomic clock.
“We saw that we can now resolve nearly twice as small a difference in the optical frequency, or the clock ticking frequency, without running into the quantum noise limit,” Zaporski says. “Although it’s a hard problem in general to run atomic clocks, the technical benefits of our method will make it easier, and we think this can enable stable, transportable atomic clocks.”
This research was supported, in part, by the U.S. Office of Naval Research, the National Science Foundation, the U.S. Defense Advanced Research Projects Agency, the U.S. Department of Energy, the U.S. Office of Science, the National Quantum Information Science Research Centers, and the Quantum Systems Accelerator.
Uncovering new physics in metals manufacturing
For decades, it’s been known that subtle chemical patterns exist in metal alloys, but researchers thought they were too minor to matter — or that they got erased during manufacturing. However, recent studies have shown that in laboratory settings, these patterns can change a metal’s properties, including its mechanical strength, durability, heat capacity, radiation tolerance, and more.
Now, researchers at MIT have found that these chemical patterns also exist in conventionally manufactured metals. The surprising finding revealed a new physical phenomenon that explains the persistent patterns.
In a paper published in Nature Communications today, the researchers describe how they tracked the patterns and discovered the physics that explains them. The authors also developed a simple model to predict chemical patterns in metals, and they show how engineers could use the model to tune the effect of such patterns on metallic properties, for use in aerospace, semiconductors, nuclear reactors, and more.
“The conclusion is: You can never completely randomize the atoms in a metal. It doesn’t matter how you process it,” says Rodrigo Freitas, the TDK Assistant Professor in the Department of Materials Science and Engineering. “This is the first paper showing these non-equilibrium states that are retained in the metal. Right now, this chemical order is not something we’re controlling for or paying attention to when we manufacture metals.”
For Freitas, an early-career researcher, the findings offer vindication for exploring a crowded field that he says few believed would lead to unique or broadly impactful results. He credits the U.S. Air Force Office of Scientific Research, which supported the work through their Young Investigator Program. He also credits the collaborative effort that enabled the paper, which features three MIT PhD students as co-first authors: Mahmudul Islam, Yifan Cao, and Killian Sheriff.
“There was the question of whether I should even be tackling this specific problem because people have been working on it for a long time,” Freitas says. “But the more I learned about it, the more I saw researchers were thinking about this in idealized laboratory scenarios. We wanted to perform simulations that were as realistic as possible to reproduce these manufacturing processes with high fidelity. My favorite part of this project is how non-intuitive the findings are. The fact that you cannot completely mix something together, people didn’t see that coming.”
From surprises to theories
Freitas’ research team began with a practical question: How fast do chemical elements mix during metal processing? Conventional wisdom held that there’s a point where the chemical composition of metals becomes completely uniform from mixing during manufacturing. By finding that point, the researchers thought they could develop a simple way to design alloys with different levels of atomic order, also known as short-range order.
The researchers used machine-learning techniques to track millions of atoms as they moved and rearranged themselves under conditions that mimicked metal processing.
“The first thing we did was to deform a piece of metal,” Freitas explains. “That’s a common step during manufacturing: You roll the metal and deform it and heat it up again and deform it a little more, so it develops the structure you want. We did that and we tracked chemical order. The thought was as you deform the material, its chemical bonds are broken and that randomizes the system. These violent manufacturing processes essentially shuffle the atoms.”
The researchers hit a snag during the mixing process: The alloys never reached a fully random state. That was a surprise, because no known physical mechanism could explain the result.
“It pointed to a new piece of physics in metals,” the researchers write in the paper. “It was one of those cases where applied research led to a fundamental discovery.”
To uncover the new physics, the researchers developed computational tools, including high-fidelity machine-learning models, to capture atomic interactions, along with new statistical methods that quantify how chemical order changes over time. They then applied these tools in large-scale molecular dynamics simulations to track how atoms rearrange during processing.
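Chemical short-range order of this kind is commonly summarized with Warren-Cowley parameters, which compare how often unlike atoms sit next to each other against what a perfectly random mixture would give. The snippet below is a minimal, self-contained sketch of that bookkeeping on a toy one-dimensional alloy; it is not the authors’ analysis code, and the lattice and compositions are invented for illustration.

```python
# A minimal sketch (not the authors' code) of quantifying chemical short-range
# order with a Warren-Cowley parameter: alpha_AB = 1 - P(B neighbor of A) / c_B.
# alpha = 0 for a random solid solution; alpha < 0 means A prefers B neighbors
# (ordering); alpha > 0 means like atoms cluster.
import numpy as np

rng = np.random.default_rng(0)

def warren_cowley_1d(types: np.ndarray, a: str = "A", b: str = "B") -> float:
    """Nearest-neighbor Warren-Cowley parameter on a 1D ring of atoms."""
    c_b = np.mean(types == b)                                       # overall concentration of B
    neighbors = np.stack([np.roll(types, 1), np.roll(types, -1)])   # left/right neighbors
    a_sites = types == a
    p_b_around_a = np.mean(neighbors[:, a_sites] == b)              # P(neighbor is B | site is A)
    return 1.0 - p_b_around_a / c_b

# Random 50/50 alloy: alpha should be close to zero.
random_chain = rng.choice(["A", "B"], size=10_000)
print(warren_cowley_1d(random_chain))

# Perfectly ordered ABAB... chain: every neighbor of A is B, so alpha = 1 - 1/0.5 = -1.
ordered_chain = np.array(["A", "B"] * 5_000)
print(warren_cowley_1d(ordered_chain))
```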
The researchers found some standard chemical arrangements in their processed metals, but at higher temperatures than would normally be expected. Even more surprisingly, they found completely new chemical patterns, observed here for the first time, that arise only during manufacturing. The researchers referred to the patterns as “far-from-equilibrium states.”
The researchers also built a simple model that reproduced key features of the simulations. The model explains how the chemical patterns arise from defects known as dislocations, which are like three-dimensional scribbles within a metal. As the metal is deformed, those scribbles warp, shuffling nearby atoms along the way. Previously, researchers believed that shuffling completely erased order in the metals, but they found that dislocations favor some atomic swaps over others, resulting not in randomness but in subtle patterns that explain their findings.
“These defects have chemical preferences that guide how they move,” Freitas says. “They look for low energy pathways, so given a choice between breaking chemical bonds, they tend to break the weakest bonds, and it’s not completely random. This is very exciting because it’s a non-equilibrium state: It’s not something you’d see naturally occurring in materials. It’s the same way our bodies live in non-equilibrium. The temperature outside is always hotter or colder than our bodies, and we’re maintaining that steady state equilibrium to stay alive. That’s why these states exist in metal: the balance between an internal push toward disorder plus this ordering tendency of breaking certain bonds that are always weaker than others.”
Applying a new theory
The researchers are now exploring how these chemical patterns develop across a wide range of manufacturing conditions. The result is a map that links various metal processing steps to different chemical patterns in metal.
To date, this chemical order and the properties it tunes have been largely considered an academic subject. With this map, the researchers hope engineers can begin thinking of these patterns as design levers that can be pulled during production to achieve new properties.
“Researchers have been looking at the ways these atomic arrangements change metallic properties — a big one is catalysis,” Freitas says of the process that drives chemical reactions. “Electrochemistry happens at the surface of the metal, and it’s very sensitive to local atomic arrangements. And there have been other properties that you wouldn't think would be influenced by these factors. Radiation damage is another big one. That affects these materials’ performance in nuclear reactors.”
Researchers have already told Freitas the paper could help explain other surprise findings about metallic properties, and he’s excited for the field to move from fundamental research into chemical order to more applied work.
“You can think of areas where you need very optimized alloys like aerospace,” Freitas says. “They care about very specific compositions. Advanced manufacturing now makes it possible to combine metals that normally wouldn’t mix through deformation. Understanding how atoms actually shuffle and mix in those processes is crucial, because it’s the key to gaining strength while still keeping the low density. So, this could be a huge deal for them.”
This work was supported, in part, by the U.S. Air Force Office of Scientific Research, MathWorks, and the MIT-Portugal Program.
Engineered “natural killer” cells could help fight cancer
One of the newest weapons that scientists have developed against cancer is a type of engineered immune cell known as CAR-NK (natural killer) cells. Similar to CAR-T cells, these cells can be programmed to attack cancer cells.
MIT and Harvard Medical School researchers have now come up with a new way to engineer CAR-NK cells that makes them much less likely to be rejected by the patient’s immune system, which is a common drawback of this type of treatment.
The new advance may also make it easier to develop “off-the-shelf” CAR-NK cells that could be given to patients as soon as they are diagnosed. Traditional approaches to engineering CAR-NK or CAR-T cells usually take several weeks.
“This enables us to do one-step engineering of CAR-NK cells that can avoid rejection by host T cells and other immune cells. And, they kill cancer cells better and they’re safer,” says Jianzhu Chen, an MIT professor of biology, a member of the Koch Institute for Integrative Cancer Research, and one of the senior authors of the study.
In a study of mice with humanized immune systems, the researchers showed that these CAR-NK cells could destroy most cancer cells while evading the host immune system.
Rizwan Romee, an associate professor of medicine at Harvard Medical School and Dana-Farber Cancer Institute, is also a senior author of the paper, which appears today in Nature Communications. The paper’s lead author is Fuguo Liu, a postdoc at the Koch Institute and a research fellow at Dana-Farber.
Evading the immune system
NK cells are a critical part of the body’s natural immune defenses, and their primary responsibility is to locate and kill cancer cells and virus-infected cells. One of their cell-killing strategies, also used by T cells, is a process called degranulation. Through this process, immune cells release a protein called perforin, which can poke holes in another cell to induce cell death.
To create CAR-NK cells to treat cancer patients, doctors first take a blood sample from the patient. NK cells are isolated from the sample and engineered to express a protein called a chimeric antigen receptor (CAR), which can be designed to target specific proteins found on cancer cells.
Then, the cells spend several weeks proliferating until there are enough to transfuse back into the patient. A similar approach is also used to create CAR-T cells. Several CAR-T cell therapies have been approved to treat blood cancers such as lymphoma and leukemia, but CAR-NK treatments are still in clinical trials.
Because it takes so long to grow a population of engineered cells that can be infused into the patient, and those cells may not be as viable as cells that came from a healthy person, researchers are exploring an alternative approach: using NK cells from a healthy donor.
Such cells could be grown in large quantities and would be ready whenever they were needed. However, the drawback to these cells is that the recipient’s immune system may see them as foreign and attack them before they can start killing cancer cells.
In the new study, the MIT team set out to find a way to help NK cells “hide” from a patient’s immune system. Through studies of immune cell interactions, they showed that NK cells could evade a host T-cell response if they did not carry surface proteins called HLA class 1 proteins. These proteins, usually expressed on NK cell surfaces, can trigger T cells to attack if the immune system doesn’t recognize them as “self.”
To take advantage of this, the researchers engineered the cells to express a sequence of siRNA (short interfering RNA) that interferes with the genes for HLA class 1. They also delivered the CAR gene, as well as the gene for either PD-L1 or single-chain HLA-E (SCE). PD-L1 and SCE are proteins that make NK cells more effective by turning up genes that are involved in killing cancer cells.
All of these genes can be carried on a single piece of DNA, known as a construct, making it simple to transform donor NK cells into immune-evasive CAR-NK cells. The researchers used this construct to create CAR-NK cells targeting a protein called CD-19, which is often found on cancerous B cells in lymphoma patients.
NK cells unleashed
The researchers tested these CAR-NK cells in mice with a human-like immune system. These mice were also injected with lymphoma cells.
Mice that received CAR-NK cells with the new construct maintained the NK cell population for at least three weeks, and the NK cells were able to nearly eliminate cancer in those mice. In mice that received either NK cells with no genetic modifications or NK cells with only the CAR gene, the host immune cells attacked the donor NK cells. In these mice, the NK cells died out within two weeks, and the cancer spread unchecked.
The researchers also found that these engineered CAR-NK cells were much less likely to induce cytokine release syndrome — a common side effect of immunotherapy treatments, which can cause life-threatening complications.
Because of CAR-NK cells’ potentially better safety profile, Chen anticipates that they could eventually be used in place of CAR-T cells. For any CAR-NK cells that are now in development to target lymphoma or other types of cancer, it should be possible to adapt them by adding the construct developed in this study, he says.
The researchers now hope to run a clinical trial of this approach, working with colleagues at Dana-Farber. They are also working with a local biotech company to test CAR-NK cells to treat lupus, an autoimmune disorder that causes the immune system to attack healthy tissues and organs.
The research was funded, in part, by Skyline Therapeutics, the Koch Institute Frontier Research Program through the Kathy and Curt Marble Cancer Research Fund and the Elisa Rah Memorial Fund, the Claudia Adams Barr Foundation, and the Koch Institute Support (core) Grant from the National Cancer Institute.
Laurent Demanet appointed co-director of MIT Center for Computational Science and Engineering
Laurent Demanet, MIT professor of applied mathematics, has been appointed co-director of the MIT Center for Computational Science and Engineering (CCSE), effective Sept. 1.
Demanet, who holds a joint appointment in the departments of Mathematics and Earth, Atmospheric and Planetary Sciences — where he previously served as director of the Earth Resources Laboratory — succeeds Youssef Marzouk, who is now serving as the associate dean of the MIT Schwarzman College of Computing.
Joining co-director Nicolas Hadjiconstantinou, the Quentin Berg (1937) Professor of Mechanical Engineering, Demanet will help lead CCSE, supporting students, faculty, and researchers while fostering a vibrant community of innovation and discovery in computational science and engineering (CSE).
“Laurent’s ability to translate concepts of computational science and engineering into understandable, real-world applications is an invaluable asset to CCSE. His interdisciplinary experience is a benefit to the visibility and impact of CSE research and education. I look forward to working with him,” says Dan Huttenlocher, dean of the MIT Schwarzman College of Computing and the Henry Ellis Warren Professor of Electrical Engineering and Computer Science.
“I’m pleased to welcome Laurent into his new role as co-director of CCSE. His work greatly supports the cross-cutting methodology at the heart of the computational science and engineering community. I’m excited for CCSE to have a co-director from the School of Science, and eager to see the center continue to broaden its connections across MIT,” says Asu Ozdaglar, deputy dean of the MIT Schwarzman College of Computing, department head of Electrical Engineering and Computer Science, and MathWorks Professor.
Established in 2008, CCSE was incorporated into the MIT Schwarzman College of Computing as one of its core academic units in January 2020. An interdisciplinary research and education center dedicated to pioneering applications of computation, CCSE houses faculty, researchers, and students from a range of MIT schools, such as the schools of Engineering, Science, Architecture and Planning, and the MIT Sloan School of Management, as well as other units of the college.
“I look forward to working with Nicolas and the college leadership on raising the profile of CCSE on campus and globally. We will be pursuing a set of initiatives that span from enhancing the visibility of our research and strengthening our CSE PhD program, to expanding professional education offerings and deepening engagement with our alumni and with industry,” says Demanet.
Demanet’s research draws on applied mathematics and scientific computing to visualize the structures beneath Earth’s surface. He also has strong interests in machine learning, inverse problems, and wave propagation. Through his position as principal investigator of the Imaging and Computing Group, Demanet and his students aim to answer fundamental questions in computational seismic imaging to increase the quality and accuracy of mapping and the projection of changes in Earth’s geological structures. His work has applications in environmental monitoring, water resources and geothermal energy, and the understanding of seismic hazards, among other areas.
He joined the MIT faculty in 2009. He received an Alfred P. Sloan Research Fellowship and the U.S. Air Force Young Investigator Award in 2011, and a CAREER award from the National Science Foundation in 2012. He also held the Class of 1954 Career Development Professorship from 2013 to 2016. Prior to coming to MIT, Demanet held the Szegö Assistant Professorship at Stanford University. He completed his undergraduate studies in mathematical engineering and theoretical physics at the Université de Louvain in Belgium, and earned a PhD in applied and computational mathematics at Caltech, where he was awarded the William P. Carey Prize for best dissertation in the mathematical sciences.
Fighting for the health of the planet with AI
For Priya Donti, childhood trips to India were more than an opportunity to visit extended family. The biennial journeys activated in her a motivation that continues to shape her research and her teaching.
In contrast to her family’s home in Massachusetts, Donti — now the Silverman Family Career Development Professor in the MIT Department of Electrical Engineering and Computer Science (EECS) and a principal investigator at the MIT Laboratory for Information and Decision Systems — was struck by the disparities in how people live.
“It was very clear to me the extent to which inequity is a rampant issue around the world,” Donti says. “From a young age, I knew that I definitely wanted to address that issue.”
That motivation was further stoked by a high school biology teacher, who focused his class on climate and sustainability.
“We learned that climate change, this huge, important issue, would exacerbate inequity,” Donti says. “That really stuck with me and put a fire in my belly.”
So, when Donti enrolled at Harvey Mudd College, she thought she would direct her energy toward the study of chemistry or materials science to create next-generation solar panels.
Those plans, however, were upended. Donti “fell in love” with computer science, and then discovered work by researchers in the United Kingdom who were arguing that artificial intelligence and machine learning would be essential to help integrate renewables into power grids.
“It was the first time I’d seen those two interests brought together,” she says. “I got hooked and have been working on that topic ever since.”
Pursuing a PhD at Carnegie Mellon University, Donti was able to design her degree to include computer science and public policy. In her research, she explored the need for fundamental algorithms and tools that could manage, at scale, power grids relying heavily on renewables.
“I wanted to have a hand in developing those algorithms and tool kits by creating new machine learning techniques grounded in computer science,” she says. “But I wanted to make sure that the way I was doing the work was grounded both in the actual energy systems domain and working with people in that domain” to provide what was actually needed.
While Donti was working on her PhD, she co-founded a nonprofit called Climate Change AI. Her objective, she says, was to help the community of people involved in climate and sustainability — “be they computer scientists, academics, practitioners, or policymakers” — to come together and access resources, connection, and education “to help them along that journey.”
“In the climate space,” she says, “you need experts in particular climate change-related sectors, experts in different technical and social science tool kits, problem owners, affected users, policymakers who know the regulations — all of those — to have on-the-ground scalable impact.”
When Donti came to MIT in September 2023, it was not surprising that she was drawn by its initiatives directing the application of computer science toward society’s biggest problems, especially the current threat to the health of the planet.
“We’re really thinking about where technology has a much longer-horizon impact and how technology, society, and policy all have to work together,” Donti says. “Technology is not just one-and-done and monetizable in the context of a year.”
Her work uses deep learning models to incorporate the physics and hard constraints of electric power systems that employ renewables for better forecasting, optimization, and control.
“Machine learning is already really widely used for things like solar power forecasting, which is a prerequisite to managing and balancing power grids,” she says. “My focus is, how do you improve the algorithms for actually balancing power grids in the face of a range of time-varying renewables?”
Among Donti’s breakthroughs is a promising solution for power grid operators to be able to optimize for cost, taking into account the actual physical realities of the grid, rather than relying on approximations. While the solution is not yet deployed, it appears to work 10 times faster, and far more cheaply, than previous technologies, and has attracted the attention of grid operators.
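For context, the classical problem that such learning-based tools aim to accelerate can be posed as a small constrained optimization. The sketch below solves a toy two-bus economic dispatch, a heavily simplified stand-in for DC optimal power flow, with an off-the-shelf linear-programming solver; the costs, capacities, and line limit are invented numbers, not anything from Donti’s systems.

```python
# A minimal sketch of the kind of grid optimization such learning methods aim to
# speed up: a toy economic dispatch (a simplified DC optimal power flow) solved
# exactly with a linear program. All numbers are illustrative.
from scipy.optimize import linprog

demand = 120.0                      # MW of load at bus 2
costs = [20.0, 50.0]                # $/MWh for the cheap (bus 1) and expensive (bus 2) generator
caps = [100.0, 100.0]               # MW capacity of each generator
line_limit = 80.0                   # MW limit on the single line from bus 1 to bus 2

# Decision variables: g1, g2 (MW output of each generator).
# Equality: g1 + g2 = demand.  Inequality: g1 <= line_limit (all of g1 flows to bus 2).
result = linprog(
    c=costs,
    A_ub=[[1.0, 0.0]], b_ub=[line_limit],
    A_eq=[[1.0, 1.0]], b_eq=[demand],
    bounds=[(0.0, caps[0]), (0.0, caps[1])],
)
g1, g2 = result.x
print(f"dispatch: g1={g1:.1f} MW, g2={g2:.1f} MW, cost=${result.fun:.0f}/h")
# Without the line limit the cheap generator would serve everything; the physical
# constraint forces 40 MW onto the expensive unit -- the kind of structure that
# rough approximations can get wrong and physics-aware methods aim to respect.
```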
Another technology she is developing works to provide data that can be used in training machine learning systems for power system optimization. In general, much data related to the systems is private, either because it is proprietary or because of security concerns. Donti and her research group are working to create synthetic data and benchmarks that, Donti says, “can help to expose some of the underlying problems” in making power systems more efficient.
“The question is,” Donti says, “can we bring our datasets to a point such that they are just hard enough to drive progress?”
For her efforts, Donti has been awarded the U.S. Department of Energy Computational Science Graduate Fellowship and the NSF Graduate Research Fellowship. She was recognized as part of MIT Technology Review’s 2021 list of “35 Innovators Under 35” and Vox’s 2023 “Future Perfect 50.”
Next spring, Donti will co-teach a class called AI for Climate Action with Sara Beery, EECS assistant professor, whose focus is AI for biodiversity and ecosystems, and Abigail Bodner, assistant professor in the departments of EECS and Earth, Atmospheric and Planetary Sciences, whose focus is AI for climate and Earth science.
“We’re all super-excited about it,” Donti says.
Coming to MIT, Donti says, “I knew that there would be an ecosystem of people who really cared, not just about success metrics like publications and citation counts, but about the impact of our work on society.”
New prediction model could improve the reliability of fusion power plants
Tokamaks are machines that are meant to hold and harness the power of the sun. These fusion machines use powerful magnets to contain a plasma hotter than the sun’s core and push the plasma’s atoms to fuse and release energy. If tokamaks can operate safely and efficiently, the machines could one day provide clean and limitless fusion energy.
Today, there are a number of experimental tokamaks in operation around the world, with more underway. Most are small-scale research machines built to investigate how the devices can spin up plasma and harness its energy. One of the challenges that tokamaks face is how to safely and reliably turn off a plasma current that is circulating at speeds of up to 100 kilometers per second, at temperatures of over 100 million degrees Celsius.
Such “rampdowns” are necessary when a plasma becomes unstable. To prevent the plasma from further disrupting and potentially damaging the device’s interior, operators ramp down the plasma current. But occasionally the rampdown itself can destabilize the plasma. In some machines, rampdowns have caused scrapes and scarring to the tokamak’s interior — minor damage that still requires considerable time and resources to repair.
Now, scientists at MIT have developed a method to predict how plasma in a tokamak will behave during a rampdown. The team combined machine-learning tools with a physics-based model of plasma dynamics to simulate a plasma’s behavior and any instabilities that may arise as the plasma is ramped down and turned off. The researchers trained and tested the new model on plasma data from an experimental tokamak in Switzerland. They found the method quickly learned how plasma would evolve as it was tuned down in different ways. What’s more, the method achieved a high level of accuracy using a relatively small amount of data. This training efficiency is promising, given that each experimental run of a tokamak is expensive and quality data is limited as a result.
The new model, which the team highlights this week in an open-access Nature Communications paper, could improve the safety and reliability of future fusion power plants.
“For fusion to be a useful energy source it’s going to have to be reliable,” says lead author Allen Wang, a graduate student in aeronautics and astronautics and a member of the Disruption Group at MIT’s Plasma Science and Fusion Center (PSFC). “To be reliable, we need to get good at managing our plasmas.”
The study’s MIT co-authors include PSFC Principal Research Scientist and Disruptions Group leader Cristina Rea, and members of the Laboratory for Information and Decision Systems (LIDS) Oswin So, Charles Dawson, and Professor Chuchu Fan, along with Mark (Dan) Boyer of Commonwealth Fusion Systems and collaborators from the Swiss Plasma Center in Switzerland.
“A delicate balance”
Tokamaks are experimental fusion devices that were first built in the Soviet Union in the 1950s. The device gets its name from a Russian acronym that translates to a “toroidal chamber with magnetic coils.” Just as its name describes, a tokamak is toroidal, or donut-shaped, and uses powerful magnets to contain and spin up a gas to temperatures and energies high enough that atoms in the resulting plasma can fuse and release energy.
Today, tokamak experiments are relatively low-energy in scale, with few approaching the size and output needed to generate safe, reliable, usable energy. Disruptions in experimental, low-energy tokamaks are generally not an issue. But as fusion machines scale up to grid-scale dimensions, controlling much higher-energy plasmas at all phases will be paramount to maintaining a machine’s safe and efficient operation.
“Uncontrolled plasma terminations, even during rampdown, can generate intense heat fluxes damaging the internal walls,” Wang notes. “Quite often, especially with the high-performance plasmas, rampdowns actually can push the plasma closer to some instability limits. So, it’s a delicate balance. And there’s a lot of focus now on how to manage instabilities so that we can routinely and reliably take these plasmas and safely power them down. And there are relatively few studies done on how to do that well.”
Bringing down the pulse
Wang and his colleagues developed a model to predict how a plasma will behave during tokamak rampdown. While they could have simply applied machine-learning tools such as a neural network to learn signs of instabilities in plasma data, “you would need an ungodly amount of data” for such tools to discern the very subtle and ephemeral changes in extremely high-temperature, high-energy plasmas, Wang says.
Instead, the researchers paired a neural network with an existing model that simulates plasma dynamics according to the fundamental rules of physics. With this combination of machine learning and a physics-based plasma simulation, the team found that only a couple hundred pulses at low performance, and a small handful of pulses at high performance, were sufficient to train and validate the new model.
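The general pattern, in which a physics-based simulator provides the backbone and a data-driven term is fitted to whatever the physics misses, can be illustrated with a toy example. In the sketch below, the “physics” is a crude exponential decay of plasma current during rampdown and the learned part is a one-parameter least-squares correction standing in for the neural network; every number is invented, and this is not the team’s plasma model.

```python
# A toy illustration (not the actual plasma model) of pairing a physics-based
# simulator with a learned correction fitted from a few example pulses.
import numpy as np

rng = np.random.default_rng(1)
dt, steps = 0.01, 200                      # 2-second rampdown, 10 ms steps
tau_physics = 0.8                          # assumed decay time constant (s)

def physics_step(current: float) -> float:
    """Crude physics prior: current relaxes exponentially toward zero."""
    return current - dt * current / tau_physics

# Synthetic "experimental" pulses whose true decay is slightly faster than the prior.
def true_pulse(i0: float) -> np.ndarray:
    t = np.arange(steps) * dt
    return i0 * np.exp(-t / 0.6) + rng.normal(0.0, 0.002, steps)

pulses = [true_pulse(i0) for i0 in (1.0, 0.8, 1.2)]   # a small handful of training pulses

# Fit a linear correction (w * current) to the one-step residuals of the physics model.
x = np.concatenate([p[:-1] for p in pulses])
residual = np.concatenate(
    [p[1:] - np.array([physics_step(c) for c in p[:-1]]) for p in pulses]
)
w = np.sum(x * residual) / np.sum(x * x)   # least-squares slope, stand-in for a neural net

def hybrid_step(current: float) -> float:
    return physics_step(current) + w * current

# Roll out a prediction for a new pulse from its initial condition.
pred = [1.1]
for _ in range(steps - 1):
    pred.append(hybrid_step(pred[-1]))
print(f"learned correction w={w:.4f}, predicted final current={pred[-1]:.3f}")
```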
The data they used for the new study came from the TCV, the Swiss “variable configuration tokamak” operated by the Swiss Plasma Center at EPFL (the Swiss Federal Institute of Technology Lausanne). The TCV is a small experimental fusion device that is used for research purposes, often as a test bed for next-generation device solutions. Wang used the data from several hundred TCV plasma pulses that included properties of the plasma such as its temperature and energies during each pulse’s ramp-up, run, and ramp-down. He trained the new model on this data, then tested it and found it was able to accurately predict the plasma’s evolution given the initial conditions of a particular tokamak run.
The researchers also developed an algorithm to translate the model’s predictions into practical “trajectories,” or plasma-managing instructions that a tokamak controller can automatically carry out to, for instance, adjust the magnets or temperature to maintain the plasma’s stability. They implemented the algorithm on several TCV runs and found that it produced trajectories that safely ramped down a plasma pulse, in some cases faster and without disruptions, compared to runs without the new method.
“At some point the plasma will always go away, but we call it a disruption when the plasma goes away at high energy. Here, we ramped the energy down to nothing,” Wang notes. “We did it a number of times. And we did things much better across the board. So, we had statistical confidence that we made things better.”
The work was supported in part by Commonwealth Fusion Systems (CFS), an MIT spinout that intends to build the world’s first compact, grid-scale fusion power plant. The company is developing a demo tokamak, SPARC, designed to produce net-energy plasma, meaning that it should generate more energy than it takes to heat up the plasma. Wang and his colleagues are working with CFS on ways that the new prediction model and tools like it can better predict plasma behavior and prevent costly disruptions to enable safe and reliable fusion power.
“We’re trying to tackle the science questions to make fusion routinely useful,” Wang says. “What we’ve done here is the start of what is still a long journey. But I think we’ve made some nice progress.”
Additional support for the research came from the framework of the EUROfusion Consortium, via the Euratom Research and Training Program and funded by the Swiss State Secretariat for Education, Research, and Innovation.
Printable aluminum alloy sets strength records, may enable lighter aircraft parts
MIT engineers have developed a printable aluminum alloy that can withstand high temperatures and is five times stronger than traditionally manufactured aluminum.
The new printable metal is made from a mix of aluminum and other elements that the team identified using a combination of simulations and machine learning, which significantly pruned the number of possible combinations of materials to search through. While traditional methods would require simulating over 1 million possible combinations of materials, the team’s new machine learning-based approach needed only to evaluate 40 possible compositions before identifying an ideal mix for a high-strength, printable aluminum alloy.
When they printed the alloy and tested the resulting material, the team confirmed that, as predicted, the aluminum alloy was as strong as the strongest aluminum alloys that are manufactured today using traditional casting methods.
The researchers envision that the new printable aluminum could be made into stronger, more lightweight and temperature-resistant products, such as fan blades in jet engines. Fan blades are traditionally cast from titanium — a material that is more than 50 percent heavier and up to 10 times costlier than aluminum — or made from advanced composites.
“If we can use lighter, high-strength material, this would save a considerable amount of energy for the transportation industry,” says Mohadeseh Taheri-Mousavi, who led the work as a postdoc at MIT and is now an assistant professor at Carnegie Mellon University.
“Because 3D printing can produce complex geometries, save material, and enable unique designs, we see this printable alloy as something that could also be used in advanced vacuum pumps, high-end automobiles, and cooling devices for data centers,” adds John Hart, the Class of 1922 Professor and head of the Department of Mechanical Engineering at MIT.
Hart and Taheri-Mousavi provide details on the new printable aluminum design in a paper published in the journal Advanced Materials. The paper’s MIT co-authors include Michael Xu, Clay Houser, Shaolou Wei, James LeBeau, and Greg Olson, along with Florian Hengsbach and Mirko Schaper of Paderborn University in Germany, and Zhaoxuan Ge and Benjamin Glaser of Carnegie Mellon University.
Micro-sizing
The new work grew out of an MIT class that Taheri-Mousavi took in 2020, which was taught by Greg Olson, professor of the practice in the Department of Materials Science and Engineering. As part of the class, students learned to use computational simulations to design high-performance alloys. Alloys are materials that are made from a mix of different elements, the combination of which imparts exceptional strength and other unique properties to the material as a whole.
Olson challenged the class to design an aluminum alloy that would be stronger than the strongest printable aluminum alloy designed to date. As with most materials, the strength of aluminum depends in large part on its microstructure: The smaller and more densely packed its microscopic constituents, or “precipitates,” the stronger the alloy would be.
With this in mind, the class used computer simulations to methodically combine aluminum with various types and concentrations of elements, to simulate and predict the resulting alloy’s strength. However, the exercise failed to produce a stronger result. At the end of the class, Taheri-Mousavi wondered: Could machine learning do better?
“At some point, there are a lot of things that contribute nonlinearly to a material’s properties, and you are lost,” Taheri-Mousavi says. “With machine-learning tools, they can point you to where you need to focus, and tell you for example, these two elements are controlling this feature. It lets you explore the design space more efficiently.”
Layer by layer
In the new study, Taheri-Mousavi continued where Olson’s class left off, this time looking to identify a stronger recipe for aluminum alloy. She turned to machine-learning techniques designed to efficiently comb through data such as the properties of elements, to identify key connections and correlations that should lead to a more desirable outcome or product.
She found that, using just 40 compositions mixing aluminum with different elements, their machine-learning approach quickly homed in on a recipe for an aluminum alloy with higher volume fraction of small precipitates, and therefore higher strength, than what the previous studies identified. The alloy’s strength was even higher than what they could identify after simulating over 1 million possibilities without using machine learning.
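The broad pattern behind this kind of search, fitting a surrogate model to a handful of evaluated compositions and letting it propose the next candidate to try, can be sketched in a few lines. The example below uses a Gaussian-process surrogate with an upper-confidence-bound rule on a made-up strength function; it is a generic illustration, not the study’s actual model, descriptors, or data.

```python
# A generic sketch of surrogate-guided composition search: find a promising alloy
# in tens of evaluations rather than millions. The "strength" function, element
# bounds, and kernel are stand-ins, not the study's model or data.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import Matern

rng = np.random.default_rng(0)

def expensive_strength_simulation(x: np.ndarray) -> float:
    """Stand-in for a slow precipitation/strength simulation.
    x holds the fractions of two alloying elements (wt%)."""
    return float(-((x[0] - 1.2) ** 2) - 2.0 * (x[1] - 0.6) ** 2 + rng.normal(0, 0.01))

bounds = np.array([[0.0, 3.0], [0.0, 2.0]])          # allowed ranges for the two additions
candidates = rng.uniform(bounds[:, 0], bounds[:, 1], size=(5000, 2))

# Start with a handful of random compositions, then let the surrogate pick the rest.
X = rng.uniform(bounds[:, 0], bounds[:, 1], size=(5, 2))
y = np.array([expensive_strength_simulation(x) for x in X])

gp = GaussianProcessRegressor(kernel=Matern(nu=2.5), normalize_y=True)
for _ in range(35):                                   # 40 evaluations total, as in the article
    gp.fit(X, y)
    mean, std = gp.predict(candidates, return_std=True)
    nxt = candidates[np.argmax(mean + 1.5 * std)]     # upper-confidence-bound acquisition
    X = np.vstack([X, nxt])
    y = np.append(y, expensive_strength_simulation(nxt))

best = X[np.argmax(y)]
print(f"best composition found: {best.round(2)} (true optimum near [1.2, 0.6])")
```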
To physically produce this new strong, small-precipitate alloy, the team realized 3D printing would be the way to go instead of traditional metal casting, in which molten liquid aluminum is poured into a mold and is left to cool and harden. The longer this cooling time is, the more likely individual precipitates are to grow.
The researchers showed that 3D printing, broadly also known as additive manufacturing, can be a faster way to cool and solidify the aluminum alloy. Specifically, they considered laser powder bed fusion (LPBF) — a technique by which a powder is deposited, layer by layer, on a surface in a desired pattern and then quickly melted by a laser that traces over the pattern. The melted pattern is thin enough that it solidifies quickly before another layer is deposited and similarly “printed.” The team found that LPBF’s inherently rapid cooling and solidification enabled the small-precipitate, high-strength aluminum alloy that their machine learning method predicted.
“Sometimes we have to think about how to get a material to be compatible with 3D printing,” says study co-author John Hart. “Here, 3D printing opens a new door because of the unique characteristics of the process — particularly, the fast cooling rate. Very rapid freezing of the alloy after it’s melted by the laser creates this special set of properties.”
Putting their idea into practice, the researchers ordered a formulation of printable powder, based on their new aluminum alloy recipe. They sent the powder — a mix of aluminum and five other elements — to collaborators in Germany, who printed small samples of the alloy using their in-house LPBF system. The samples were then sent to MIT where the team ran multiple tests to measure the alloy’s strength and image the samples’ microstructure.
Their results confirmed the predictions made by their initial machine learning search: The printed alloy was five times stronger than a cast counterpart and 50 percent stronger than alloys designed using conventional simulations without machine learning. The new alloy’s microstructure also consisted of a higher volume fraction of small precipitates, and was stable at temperatures of up to 400 degrees Celsius — a very high temperature for aluminum alloys.
The researchers are applying similar machine-learning techniques to further optimize other properties of the alloy.
“Our methodology opens new doors for anyone who wants to do 3D printing alloy design,” Taheri-Mousavi says. “My dream is that one day, passengers looking out their airplane window will see fan blades of engines made from our aluminum alloys.”
This work was carried out, in part, using MIT.nano’s characterization facilities.
Study sheds light on musicians’ enhanced attention
In a world full of competing sounds, we often have to filter out a lot of noise to hear what’s most important. This critical skill may come more easily for people with musical training, according to scientists at MIT’s McGovern Institute for Brain Research, who used brain imaging to follow what happens when people try to focus their attention on certain sounds.
When Cassia Low Manting, a recent MIT postdoc working in the labs of MIT Professor and McGovern Institute PI John Gabrieli and former McGovern Institute PI Dimitrios Pantazis, asked people to focus on a particular melody while another melody played at the same time, individuals with musical backgrounds were, unsurprisingly, better able to follow the target tune. An analysis of study participants’ brain activity suggests this advantage arises because musical training sharpens neural mechanisms that amplify the sounds they want to listen to while turning down distractions.
“People can hear, understand, and prioritize multiple sounds around them that flow on a moment-to-moment basis,” explains Gabrieli, who is the Grover Hermann Professor of Health Sciences and Technology at MIT. “This study reveals the specific brain mechanisms that successfully process simultaneous sounds on a moment-to-moment basis and promote attention to the most important sounds. It also shows how musical training alters that processing in the mind and brain, offering insight into how experience shapes the way we listen and pay attention.”
The research team, which also included senior author Daniel Lundqvist at the Karolinska Institute in Sweden, reported their open-access findings Sept. 17 in the journal Science Advances. Manting, who is now at the Karolinska Institute, notes that the research is part of an ongoing collaboration between the two institutions.
Overcoming challenges
Participants in the study had vastly different backgrounds when it came to music. Some were professional musicians with deep training and experience, while others struggled to differentiate between the two tunes they were played, despite each one’s distinct pitch. This disparity allowed the researchers to explore how the brain’s capacity for attention might change with experience. “Musicians are very fun to study because their brains have been morphed in ways based on their training,” Manting says. “It’s a nice model to study these training effects.”
Still, the researchers had significant challenges to overcome. It has been hard to study how the brain manages auditory attention, because when researchers use neuroimaging to monitor brain activity, they see the brain’s response to all sounds: those that the listener cares most about, as well as those the listener is trying to ignore. It is usually difficult to figure out which brain signals were triggered by which sounds.
Manting and her colleagues overcame this challenge with a method called frequency tagging. Rather than playing the melodies in their experiments at a constant volume, the researchers made the volume of each melody oscillate, rising and falling at a particular frequency. Each melody had its own modulation frequency, creating detectable patterns in the brain signals that responded to it. “When you play these two sounds simultaneously to the subject and you record the brain signal, you can say, this 39-Hertz activity corresponds to the lower-pitch sound and the 43-Hertz activity corresponds specifically to the higher-pitch sound,” Manting explains. “It is very clean and very clear.”
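The idea can be illustrated numerically. In the hedged sketch below, two stand-in melodies are amplitude-modulated at 39 Hz and 43 Hz, a noisy toy response follows their envelopes with the attended melody weighted more strongly, and the power at the two tag frequencies separates the responses; the weights and noise level are invented, and this is not the study’s analysis pipeline.

```python
# A simplified numerical sketch (not the study's pipeline) of frequency tagging:
# two "melodies" are amplitude-modulated at 39 Hz and 43 Hz, and power at each
# tag frequency in a noisy response separates the two.
import numpy as np

fs, duration = 1000, 10.0                      # 1 kHz sampling, 10 s trial
t = np.arange(0, duration, 1 / fs)
carrier_low = np.sin(2 * np.pi * 220 * t)      # lower-pitch melody (stand-in: pure tone)
carrier_high = np.sin(2 * np.pi * 330 * t)     # higher-pitch melody

env_low = 0.5 * (1 + np.sin(2 * np.pi * 39 * t))   # 39 Hz amplitude modulation
env_high = 0.5 * (1 + np.sin(2 * np.pi * 43 * t))  # 43 Hz amplitude modulation
stimulus = env_low * carrier_low + env_high * carrier_high   # what the listener hears

# Toy "brain" signal: attention weights the attended melody's envelope more strongly.
rng = np.random.default_rng(7)
attended, ignored = 1.0, 0.4
brain = attended * env_low + ignored * env_high + rng.normal(0, 0.5, t.size)

spectrum = np.abs(np.fft.rfft(brain)) ** 2
freqs = np.fft.rfftfreq(t.size, 1 / fs)
for f_tag in (39, 43):
    idx = np.argmin(np.abs(freqs - f_tag))
    print(f"power at {f_tag} Hz: {spectrum[idx]:.1f}")
# The 39 Hz tag comes out stronger, tracking the attended (lower-pitch) melody.
```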
When they paired frequency tagging with magnetoencephalography, a noninvasive method of monitoring brain activity, the team was able to track how their study participants’ brains responded to each of two melodies during their experiments. While the two tunes played, subjects were instructed to follow either the higher-pitched or the lower-pitched melody. When the music stopped, they were asked about the final notes of the target tune: did they rise or did they fall? The researchers could make this task harder by making the two tunes closer together in pitch, as well as by altering the timing of the notes.
Manting used a survey that asked about musical experience to score each participant’s musicality, and this measure had an obvious effect on task performance: The more musical a person was, the more successful they were at following the tune they had been asked to track.
To look for differences in brain activity that might explain this, the research team developed a new machine-learning approach to analyze their data. They used it to tease apart what was happening in the brain as participants focused on the target tune — even, in some cases, when the notes of the distracting tune played at the exact same time.
Top-down versus bottom-up attention
What they found was a clear separation of brain activity associated with two kinds of attention, known as top-down and bottom-up attention. Manting explains that top-down attention is goal-oriented, involving a conscious focus — the kind of attention listeners called on as they followed the target tune. Bottom-up attention, on the other hand, is triggered by the nature of the sound itself. A fire alarm would be expected to trigger this kind of attention, both with its volume and its suddenness. The distracting tune in the team’s experiments triggered activity associated with bottom-up attention — but more so in some people than in others.
“The more musical someone is, the better they are at focusing their top-down selective attention, and the less the effect of bottom-up attention is,” Manting explains.
Manting expects that musicians use their heightened capacity for top-down attention in other situations, as well. For example, they might be better than others at following a conversation in a room filled with background chatter. “I would put my bet on it that there is a high chance that they will be great at zooming into sounds,” she says.
She wonders, however, if one kind of distraction might actually be harder for a musician to filter out: the sound of their own instrument. Manting herself plays both the piano and the Chinese harp, and she says hearing those instruments is “like someone calling my name.” It’s one of many questions about how musical training affects cognition that she plans to explore in her future work.
Matthew Shoulders named head of the Department of Chemistry
Matthew D. Shoulders, the Class of 1942 Professor of Chemistry, a MacVicar Faculty Fellow, and an associate member of the Broad Institute of MIT and Harvard, has been named head of the MIT Department of Chemistry, effective Jan. 16, 2026.
“Matt has made pioneering contributions to the chemistry research community through his research on mechanisms of proteostasis and his development of next-generation techniques to address challenges in biomedicine and agriculture,” says Nergis Mavalvala, dean of the MIT School of Science and the Curtis and Kathleen Marble Professor of Astrophysics. “He is also a dedicated educator, beloved by undergraduates and graduates alike. I know the department will be in good hands as we double down on our commitment to world-leading research and education in the face of financial headwinds.”
Shoulders succeeds Troy Van Voorhis, the Robert T. Haslam and Bradley Dewey Professor of Chemistry, who has been at the helm since October 2019.
“I am tremendously grateful to Troy for his leadership the past six years, building a fantastic community here in our department. We face challenges, but also many exciting opportunities, as a department in the years to come,” says Shoulders. “One thing is certain: Chemistry innovations are critical to solving pressing global challenges. Through the research that we do and the scientists we train, our department has a huge role to play in shaping the future.”
Shoulders studies how cells fold proteins, and he develops and applies novel protein engineering techniques to challenges in biotechnology. His work across chemistry and biochemistry fields including proteostasis, extracellular matrix biology, virology, evolution, and synthetic biology is yielding not just important insights into topics like how cells build healthy tissues and how proteins evolve, but also influencing approaches to disease therapy and biotechnology development.
“Matt is an outstanding researcher whose work touches on fundamental questions about how the cell machinery directs the synthesis and folding of proteins. His discoveries about how that machinery breaks down as a result of mutations or in response to stress has a fundamental impact on how we think about and treat human diseases,” says Van Voorhis.
In one part of his current research program, Shoulders is studying how protein folding systems in cells — known as chaperones — shape the evolution of their clients. Among other discoveries, his lab has shown that viral pathogens hijack human chaperones to enable their rapid evolution and escape from host immunity. In related recent work, they have discovered that these same chaperones can promote access to malignancy-driving mutations in tumors. Beyond fundamental insights into evolutionary biology, these findings hold potential to open new therapeutic strategies to target cancer and viral infections.
“Matt’s ability to see both the details and the big picture makes him an outstanding researcher and a natural leader for the department,” says Timothy Swager, the John D. MacArthur Professor of Chemistry. “MIT Chemistry can only benefit from his dedication to understanding and addressing the parts and the whole.”
Shoulders also leads a food security project through the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS). Shoulders, along with MIT Research Scientist Robbie Wilson, assembled an interdisciplinary team based at MIT to enhance climate resilience in agriculture by improving one of the most inefficient aspects of photosynthesis, the carbon dioxide-fixing plant enzyme RuBisCO. J-WAFS funded this high-risk, high-reward MIT Grand Challenge project in 2023, and it has received further support from federal research agencies and the Grantham Foundation for the Protection of the Environment.
“Our collaborative team of biochemists and synthetic biologists, computational biologists, and chemists is deeply integrated with plant biologists, creating a robust feedback loop for enzyme engineering,” Shoulders says. “Together, this team is making a concerted effort using state-of-the-art techniques to engineer crop RuBisCO with an eye to helping make meaningful gains in securing a stable crop supply, hopefully with accompanying improvements in both food and water security.”
In addition to his research contributions, Shoulders has taught multiple classes for Course V, including 5.54 (Advances in Chemical Biology) and 5.111 (Principles of Chemical Science), along with a number of other key chemistry classes. His contributions to a 5.111 “bootcamp” through the MITx platform served to address gaps in the classroom curriculum by providing online tools to help undergraduate students better grasp the material in the chemistry General Institute Requirement (GIR). His development of Guided Learning Demonstrations to support first-year chemistry courses at MIT has helped bring the lab to the GIR, and also contributed to the popularity of 5.111 courses offered regularly via MITx.
“I have had the pleasure of teaching with Matt on several occasions, and he is a fantastic educator. He is an innovator both inside and outside the classroom and has an unwavering commitment to his students’ success,” says Van Voorhis of Shoulders, who was named a 2022 MacVicar Faculty Fellow, and who received a Committed to Caring award through the Office of Graduate Education.
Shoulders also founded the MIT Homeschool Internship Program for Science and Technology, which brings high school students to campus for paid summer research experiences in labs across the Institute.
He is a founding member of the Department of Chemistry’s Quality of Life Committee and has chaired it for the last six years, helping to improve all aspects of opportunity, professional development, and experience in the department: “countless changes that have helped make MIT a better place for all,” as Van Voorhis notes, including creating a peer mentoring program for graduate students and establishing universal graduate student exit interviews to collect data for department-wide assessment and improvement.
At the Institute level, Shoulders has served on the Committee on Graduate Programs, Committee on Sexual Misconduct Prevention and Response (in which he co-chaired the provost’s working group on the Faculty and Staff Sexual Misconduct Survey), and the Committee on Assessment of Biohazards and Embryonic Stem Cell Research Oversight, among other roles.
Shoulders graduated summa cum laude from Virginia Tech in 2004, earning a BS in chemistry with a minor in biochemistry. He earned a PhD in chemistry at the University of Wisconsin at Madison in 2009 under Professor Ronald Raines. Following an American Cancer Society Postdoctoral Fellowship at Scripps Research Institute, working with professors Jeffery Kelly and Luke Wiseman, Shoulders joined the MIT Department of Chemistry faculty as an assistant professor in 2012. Shoulders also serves as an associate member of the Broad Institute and an investigator at the Center for Musculoskeletal Research at Massachusetts General Hospital.
Among his many awards, Shoulders has received an NIH Director’s New Innovator Award under the NIH High-Risk, High-Reward Research Program; an NSF CAREER Award; an American Cancer Society Research Scholar Award; the Camille Dreyfus Teacher-Scholar Award; and most recently the Ono Pharma Foundation Breakthrough Science Award.
Report: Sustainability in supply chains is still a firm-level priority
Corporations are actively seeking sustainability advances in their supply chains — but many need to improve the business metrics they use in this area to realize more progress, according to a new report by MIT researchers.
During a time of shifting policies globally and continued economic uncertainty, the survey-based report finds 85 percent of companies say they are continuing supply chain sustainability practices at the same level as in recent years, or are increasing those efforts.
“What we found is strong evidence that sustainability still matters,” says Josué Velázquez Martínez, a research scientist and director of the MIT Sustainable Supply Chain Lab, which helped produce the report. “There are many things that remain to be done to accomplish those goals, but there’s a strong willingness from companies in all parts of the world to do something about sustainability.”
The new analysis, titled “Sustainability Still Matters,” was released today. It is the sixth annual report on the subject prepared by the MIT Sustainable Supply Chain Lab, which is part of MIT’s Center for Transportation and Logistics. The Council of Supply Chain Management Professionals collaborated on the project as well.
The report is based on a global survey, with responses from 1,203 professionals in 97 countries. This year, the report analyzes three issues in depth: regulations and the role they play in corporate approaches to supply chain management; the management and mitigation of what industry professionals call “Scope 3” emissions, which come not from a firm itself but from its supply chain; and the future of freight transportation, which by itself accounts for a substantial portion of supply chain emissions.
Broadly, the survey finds that for European-based firms, the principal driver of action in this area remains government mandates, such as the Corporate Sustainability Reporting Directive, which requires companies to publish regular reports on their environmental impact and the risks to society involved. In North America, firm leadership and investor priorities are more likely to be decisive factors in shaping a company’s efforts.
“In Europe the pressure primarily comes more from regulation, but in the U.S. it comes more from investors, or from competitors,” Velázquez Martínez says.
The survey responses on Scope 3 emissions reveal a number of opportunities for improvement. In business and sustainability terms, Scope 1 greenhouse gas emissions are those a firm produces directly. Scope 2 emissions are those associated with the energy it purchases. And Scope 3 emissions are those produced across a firm’s value chain, including the supply chain activities involved in producing, transporting, using, and disposing of its products.
The report reveals that about 40 percent of firms keep close track of Scope 1 and 2 emissions, but far fewer tabulate Scope 3 on equivalent terms. And yet Scope 3 may account for roughly 75 percent of total firm emissions, on aggregate. About 70 percent of firms in the survey say they do not have enough data from suppliers to accurately tabulate the total greenhouse gas and climate impact of their supply chains.
Certainly it can be hard to calculate the total emissions when a supply chain has many layers, including smaller suppliers lacking data capacity. But firms can upgrade their analytics in this area, too. For instance, 50 percent of North American firms are still using spreadsheets to tabulate emissions data, often making rough estimates that correlate emissions to simple economic activity. An alternative is life cycle assessment software that provides more sophisticated estimates of a product’s emissions, from the extraction of its materials to its post-use disposal. By contrast, only 32 percent of European firms are still using spreadsheets rather than life cycle assessment tools.
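To make that contrast concrete, here is a minimal sketch, not drawn from the report, of the two estimation styles it describes: a spend-based calculation of the kind a spreadsheet workflow typically produces, and an activity-based calculation of the kind life cycle assessment tools build up from supplier data. All function names, emission factors, and figures below are hypothetical placeholders.

```python
# Illustrative only: hypothetical emission factors and spend figures.

# Spend-based approach, common in spreadsheet workflows: emissions are assumed
# to be proportional to money spent in each purchasing category.
SPEND_FACTORS_KG_PER_USD = {"freight": 0.45, "packaging": 0.30}  # hypothetical

def spend_based_scope3(spend_by_category):
    """Rough estimate: kg CO2e = spend in USD x category emission factor."""
    return sum(usd * SPEND_FACTORS_KG_PER_USD[cat]
               for cat, usd in spend_by_category.items())

# Activity-based approach, closer to what life cycle assessment tools do:
# emissions are built up from physical quantities reported by suppliers.
def activity_based_scope3(supplier_records):
    """Sum kg CO2e over (quantity, kg CO2e per unit) pairs from suppliers."""
    return sum(qty * factor for qty, factor in supplier_records)

if __name__ == "__main__":
    spend = {"freight": 120_000, "packaging": 40_000}    # USD, hypothetical
    activities = [(85_000, 0.62), (30_000, 0.21)]        # (units, kg CO2e/unit), hypothetical
    print(f"Spend-based estimate:    {spend_based_scope3(spend):,.0f} kg CO2e")
    print(f"Activity-based estimate: {activity_based_scope3(activities):,.0f} kg CO2e")
```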
“You get what you measure,” Velázquez Martínez says. “If you measure poorly, you’re going to get poor decisions that most likely won’t drive the reductions you’re expecting. So we pay a lot of attention to that particular issue, which is decisive to defining an action plan. Firms pay a lot of attention to metrics in their financials, but in sustainability they’re often using simplistic measurements.”
When it comes to transportation, meanwhile, the report shows that firms are still grappling with the best ways to reduce emissions. Some see biofuels as the best short-term alternative to fossil fuels; others are investing in electric vehicles; some are waiting for hydrogen-powered vehicles to gain traction. Supply chains, after all, frequently involve long-haul trips. For firms, as for individual consumers, electric vehicles are more practical with a larger infrastructure of charging stations. There are advances on that front but more work to do as well.
That said, “Transportation has made a lot of progress in general,” Velázquez Martínez says, noting the increased acceptance of new modes of vehicle power.
Even as new technologies loom on the horizon, though, supply chain sustainability does not wholly depend on their introduction. One factor continuing to propel sustainability in supply chains is the incentive companies have to lower costs. In a competitive business environment, spending less on fossil fuels usually means savings. And firms can often find ways to alter their logistics to consume and spend less.
“Along with new technologies, there is another side of supply chain sustainability that is related to better use of the current infrastructure,” Velázquez Martínez observes. “There is always a need to revise traditional ways of operating to find opportunities for more efficiency.”
Chemists create red fluorescent dyes that may enable clearer biomedical imaging
MIT chemists have designed a new type of fluorescent molecule that they hope could be used for applications such as generating clearer images of tumors.
The new dye is based on a borenium ion — a positively charged form of boron that can emit light in the red to near-infrared range. Until recently, these ions have been too unstable to be used for imaging or other biomedical applications.
In a study appearing today in Nature Chemistry, the researchers showed that they could stabilize borenium ions by attaching them to a ligand. This approach allowed them to create borenium-containing films, powders, and crystals, all of which emit and absorb light in the red and near-infrared range.
That is important because near-IR light is easier to detect when imaging structures deep within tissue, which could allow for clearer images of tumors and other structures in the body.
“One of the reasons why we focus on red to near-IR is because those types of dyes penetrate the body and tissue much better than light in the UV and visible range. Stability and brightness of those red dyes are the challenges that we tried to overcome in this study,” says Robert Gilliard, the Novartis Professor of Chemistry at MIT and the senior author of the study.
MIT research scientist Chun-Lin Deng is the lead author of the paper. Other authors include Bi Youan (Eric) Tra PhD ’25, former visiting graduate student Xibao Zhang, and graduate student Chonghe Zhang.
Stabilized borenium
Most fluorescent imaging relies on dyes that emit blue or green light. Those imaging agents work well in cells, but they are not as useful in tissue because low levels of blue and green fluorescence produced by the body interfere with the signal. Blue and green light also scatters in tissue, limiting how deeply it can penetrate.
Imaging agents that emit red fluorescence can produce clearer images, but most red dyes are inherently unstable and don’t produce a bright signal because of their low quantum yields (the ratio of fluorescent photons emitted per photon of light absorbed). For many red dyes, the quantum yield is only about 1 percent.
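For reference, the quantum yield in that parenthetical is simply a ratio (a standard definition, not an equation from the new paper):

$$
\Phi \;=\; \frac{\text{photons emitted as fluorescence}}{\text{photons absorbed}},
\qquad
\Phi = 0.01 \;\Rightarrow\; \text{about 1 emitted photon per 100 absorbed}.
$$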
Among the molecules that can emit near-infrared light are borenium cations — positively charged ions containing an atom of boron attached to three other atoms.
When these molecules were first discovered in the mid-1980s, they were considered “laboratory curiosities,” Gilliard says. These molecules were so unstable that they had to be handled in a sealed container called a glovebox to protect them from exposure to air, which can lead them to break down.
Later, chemists realized they could make these ions more stable by attaching them to molecules called ligands. Working with these more stable ions, Gilliard’s lab discovered in 2019 that they had some unusual properties: namely, they could respond to changes in temperature by emitting different colors of light.
However, at that point, “there was a substantial problem in that they were still too reactive to be handled in open air,” Gilliard says.
His lab began working on new ways to further stabilize them using ligands known as carbodicarbenes (CDCs), which they reported in a 2022 study. Due to this stabilization, the compounds can now be studied and handled without using a glovebox. They are also resistant to being broken down by light, unlike many previous borenium-based compounds.
In the new study, Gilliard began experimenting with the anions (negatively charged ions) that are part of the CDC-borenium compounds. Interactions between these anions and the borenium cation generate a phenomenon known as exciton coupling, the researchers discovered. This coupling, they found, shifted the molecules’ emission and absorption properties toward the infrared end of the color spectrum. These molecules also generated a high quantum yield, allowing them to shine more brightly.
“Not only are we in the correct region, but the efficiency of the molecules is also very suitable,” Gilliard says. “We’re up to percentages in the thirties for the quantum yields in the red region, which is considered to be high for that region of the electromagnetic spectrum.”
Potential applications
The researchers also showed that they could convert their borenium-containing compounds into several different states, including solid crystals, films, powders, and colloidal suspensions.
For biomedical imaging, Gilliard envisions that these borenium-containing materials could be encapsulated in polymers, allowing them to be injected into the body to use as an imaging dye. As a first step, his lab plans to work with researchers in the chemistry department at MIT and at the Broad Institute of MIT and Harvard to explore the potential of imaging these materials within cells.
Because of their temperature responsiveness, these materials could also be deployed as temperature sensors, for example, to monitor whether drugs or vaccines have been exposed to temperatures that are too high or low during shipping.
“For any type of application where temperature tracking is important, these types of ‘molecular thermometers’ can be very useful,” Gilliard says.
If incorporated into thin films, these molecules could also be useful in organic light-emitting diodes (OLEDs), particularly in new types of materials such as flexible screens, Gilliard says.
“The very high quantum yields achieved in the near-IR, combined with the excellent environmental stability, make this class of compounds extremely interesting for biological applications,” says Frieder Jaekle, a professor of chemistry at Rutgers University, who was not involved in the study. “Besides the obvious utility in bioimaging, the strong and tunable near-IR emission also makes these new fluorophores very appealing as smart materials for anticounterfeiting, sensors, switches, and advanced optoelectronic devices.”
In addition to exploring possible applications for these dyes, the researchers are now working on extending their color emission further into the near-infrared region, which they hope to achieve by incorporating additional boron atoms. Those extra boron atoms could make the molecules less stable, so the researchers are also working on new types of carbodicarbenes to help stabilize them.
The research was funded by the Arnold and Mabel Beckman Foundation and the National Institutes of Health.