MIT Latest News
Blending neuroscience, AI, and music to create mental health innovations
Computational neuroscientist and singer/songwriter Kimaya (Kimy) Lecamwasam, who also plays electric bass and guitar, says music has been a core part of her life for as long as she can remember. She grew up in a musical family and played in bands all through high school.
“For most of my life, writing and playing music was the clearest way I had to express myself,” says Lecamwasam. “I was a really shy and anxious kid, and I struggled with speaking up for myself. Over time, composing and performing music became central to both how I communicated and to how I managed my own mental health.”
Along with equipping her with valuable skills and experiences, she credits her passion for music as the catalyst for her interest in neuroscience.
“I got to see firsthand not only the ways that audiences reacted to music, but also how much value music had for musicians,” she says. “That close connection between making music and feeling well is what first pushed me to ask why music has such a powerful hold on us, and eventually led me to study the science behind it.”
Lecamwasam earned a bachelor’s degree in 2021 from Wellesley College, where she studied neuroscience — specifically in the Systems and Computational Neuroscience track — and also music. During her first semester, she took a class in songwriting that she says made her more aware of the connections between music and emotions. While studying at Wellesley, she participated in the MIT Undergraduate Research Opportunities Program for three years. Working in the Department of Brain and Cognitive Sciences lab of Emery Brown, the Edward Hood Taplin Professor of Medical Engineering and Computational Neuroscience, she focused primarily on classifying consciousness in anesthetized patients and training brain-computer interface-enabled prosthetics using reinforcement learning.
“I still had a really deep love for music, which I was pursuing in parallel to all of my neuroscience work, but I really wanted to try to find a way to combine both of those things in grad school,” says Lecamwasam. Brown recommended that she look into the graduate programs at the MIT Media Lab within the Program in Media Arts and Sciences (MAS), which turned out to be an ideal fit.
“One thing I really love about where I am is that I get to be both an artist and a scientist,” says Lecamwasam. “That was something that was important to me when I was picking a graduate program. I wanted to make sure that I was going to be able to do work that was really rigorous, validated, and important, but also get to do cool, creative explorations and actually put the research that I was doing into practice in different ways.”
Exploring the physical, mental, and emotional impacts of music
Informed by her years of neuroscience research as an undergraduate and her passion for music, Lecamwasam focused her graduate research on harnessing the emotional potency of music into scalable, non-pharmacological mental health tools. Her master’s thesis focused on “pharmamusicology,” looking at how music might positively affect the physiology and psychology of those with anxiety.
The overarching theme of Lecamwasam’s research is exploring the various impacts of music and affective computing — physical, mental, and emotional. Now in the third year of her doctoral program in the Opera of the Future group, she is investigating the impact of large-scale live music and concert experiences on the mental health and well-being of both audience members and performers. She is also working to clinically validate music listening, composition, and performance as health interventions, in combination with psychotherapy and pharmaceutical interventions.
Her recent work, in collaboration with Professor Anna Huang’s Human-AI Resonance Lab, assesses the emotional resonance of AI-generated music compared to human-composed music; the aim is to identify more ethical applications of emotion-sensitive music generation and recommendation that preserve human creativity and agency, and can also be used as health interventions. She has co-led a wellness and music workshop at the Wellbeing Summit in Bilbao, Spain, and has presented her work at the 2023 ACM CHI Conference on Human Factors in Computing Systems in Hamburg, Germany, and the 2024 Audio Mostly conference in Milan, Italy.
Lecamwasam has collaborated with organizations near and far to implement real-world applications of her research. She worked with Carnegie Hall's Weill Music Institute on its Well-Being Concerts and is currently partnering on a study assessing the impact of lullaby writing on perinatal health with the North Shore Lullaby Project in Massachusetts, an offshoot of Carnegie Hall’s Lullaby Project. Her main international collaboration is with a company called Myndstream, working on projects comparing the emotional resonance of AI-generated music to human-composed music and thinking of clinical and real-world applications. She is also working on a project with the companies PixMob and Empatica (an MIT Media Lab spinoff), centered on assessing the impact of interactive lighting and large-scale live music experiences on emotional resonance in stadium and arena settings.
Building community
“Kimy combines a deep love for — and sophisticated knowledge of — music with scientific curiosity and rigor in ways that represent the Media Lab/MAS spirit at its best,” says Professor Tod Machover, Lecamwasam’s research advisor, Media Lab faculty director, and director of the Opera of the Future group. “She has long believed that music is one of the most powerful and effective ways to create personalized interventions to help stabilize emotional distress and promote empathy and connection. It is this same desire to establish sane, safe, and sustaining environments for work and play that has led Kimy to become one of the most effective and devoted community-builders at the lab.”
Lecamwasam has participated in the SOS (Students Offering Support) program in MAS for a few years, which assists students from a variety of life experiences and backgrounds during the process of applying to the Program in Media Arts and Sciences. She will soon be the first MAS peer mentor as part of a new initiative through which she will establish and coordinate programs including a “buddy system,” pairing incoming master’s students with PhD students as a way to help them transition into graduate student life at MIT. She is also part of the Media Lab’s Studcom, a student-run organization that promotes, facilitates, and creates experiences meant to bring the community together.
“I think everything that I have gotten to do has been so supported by the friends I’ve made in my lab and department, as well as across departments,” says Lecamwasam. “I think everyone is just really excited about the work that they do and so supportive of one another. It makes it so that even when things are challenging or difficult, I’m motivated to do this work and be a part of this community.”
Why some quantum materials stall while others scale
People tend to think of quantum materials — whose properties arise from quantum mechanical effects — as exotic curiosities. But some quantum materials have become a ubiquitous part of our computer hard drives, TV screens, and medical devices. Still, the vast majority of quantum materials never accomplish much outside of the lab.
What makes certain quantum materials commercial successes and others commercially irrelevant? If researchers knew, they could direct their efforts toward more promising materials — a big deal since they may spend years studying a single material.
Now, MIT researchers have developed a system for evaluating the scale-up potential of quantum materials. Their framework combines a material’s quantum behavior with its cost, supply chain resilience, environmental footprint, and other factors. The researchers used their framework to evaluate over 16,000 materials, finding that the materials with the strongest quantum fluctuations in their electrons also tend to be more expensive and environmentally damaging. The researchers also identified, for further study, a set of materials that achieve a balance between quantum functionality and sustainability.
The team hopes their approach will help guide the development of more commercially viable quantum materials that could be used for next-generation microelectronics, energy harvesting applications, medical diagnostics, and more.
“People studying quantum materials are very focused on their properties and quantum mechanics,” says Mingda Li, associate professor of nuclear science and engineering and the senior author of the work. “For some reason, they have a natural resistance during fundamental materials research to thinking about the costs and other factors. Some told me they think those factors are too ‘soft’ or not related to science. But I think within 10 years, people will routinely be thinking about cost and environmental impact at every stage of development.”
The paper appears in Materials Today. Joining Li on the paper are co-first authors and PhD students Artittaya Boonkird, Mouyang Cheng, and Abhijatmedhi Chotrattanapituk, along with PhD students Denisse Cordova Carrizales and Ryotaro Okabe; former graduate research assistants Thanh Nguyen and Nathan Drucker; postdoc Manasi Mandal; Instructor Ellan Spero of the Department of Materials Science and Engineering (DMSE); Professor Christine Ortiz, also of DMSE; Professor Liang Fu of the Department of Physics; Professor Tomas Palacios of the Department of Electrical Engineering and Computer Science (EECS); Associate Professor Farnaz Niroui of EECS; Assistant Professor Jingjie Yeo of Cornell University; and PhD student Vsevolod Belosevich and Assistant Professor Qiong Ma of Boston College.
Materials with impact
Cheng and Boonkird say that materials science researchers often gravitate toward quantum materials with the most exotic quantum properties rather than the ones most likely to be used in products that change the world.
“Researchers don’t always think about the costs or environmental impacts of the materials they study,” Cheng says. “But those factors can make them impossible to do anything with.”
Li and his collaborators wanted to help researchers focus on quantum materials with more potential to be adopted by industry. For this study, they developed methods for evaluating factors like the materials’ price and environmental impact using their elements and common practices for mining and processing those elements. At the same time, they quantified the materials’ level of “quantumness” using an AI model created by the same group last year, based on a concept proposed by MIT professor of physics Liang Fu, termed quantum weight.
“For a long time, it’s been unclear how to quantify the quantumness of a material,” Fu says. “Quantum weight is very useful for this purpose. Basically, the higher the quantum weight of a material, the more quantum it is.”
The researchers focused on a class of quantum materials with exotic electronic properties known as topological materials, eventually assigning over 16,000 materials scores on environmental impact, price, import resilience, and more.
For the first time, the researchers found a strong correlation between a material’s quantum weight and how expensive and environmentally damaging it is.
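The flavor of this kind of screening can be illustrated with a toy example. The sketch below is purely illustrative — the material names, scores, and thresholds are invented, not drawn from the study’s dataset — but it shows how per-material quantum-weight, cost, and environmental scores might be combined, how the reported correlation can be checked, and how a balanced shortlist falls out:

```python
# Toy screening sketch. All names, scores, and cutoffs are invented;
# "quantum weight" here is just a placeholder number per material.

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# (name, quantum_weight, cost_score, env_impact_score);
# lower cost/impact scores are better.
materials = [
    ("A", 9.1, 0.9, 0.8),
    ("B", 7.5, 0.7, 0.6),
    ("C", 4.2, 0.3, 0.4),
    ("D", 2.8, 0.2, 0.1),
]

qw = [m[1] for m in materials]
cost = [m[2] for m in materials]
print(f"quantum weight vs. cost correlation: {pearson(qw, cost):.2f}")

# Shortlist: high quantum weight AND a low average of cost/impact,
# mirroring the balance the framework searches for.
shortlist = [m[0] for m in materials if m[1] > 5 and (m[2] + m[3]) / 2 < 0.7]
print("balanced candidates:", shortlist)
```

With these invented numbers the correlation comes out strongly positive, echoing the trend the researchers report, and only one candidate survives both filters.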
“That’s useful information because the industry really wants something very low-cost,” Spero says. “We know what we should be looking for: high quantum weight, low-cost materials. Very few materials being developed meet those criteria, and that likely explains why they don’t scale to industry.”
The researchers identified 200 environmentally sustainable materials and further refined the list down to 31 material candidates that achieved an optimal balance of quantum functionality and high-potential impact.
The researchers also found that several widely studied materials exhibit high environmental impact scores, indicating they will be hard to scale sustainably. “Considering the scalability of manufacturing and environmental availability and impact is critical to ensuring practical adoption of these materials in emerging technologies,” says Niroui.
Guiding research
Many of the topological materials evaluated in the paper have never been synthesized, which limited the accuracy of the study’s environmental and cost predictions. But the authors are already working with companies to study some of the promising materials identified in the paper.
“We talked with people at semiconductor companies that said some of these materials were really interesting to them, and our chemist collaborators also identified some materials they find really interesting through this work,” Palacios says. “Now we want to experimentally study these cheaper topological materials to understand their performance better.”
“Solar cells have an efficiency limit of 34 percent, but many topological materials have a theoretical limit of 89 percent. Plus, you can harvest energy across all electromagnetic bands, including our body heat,” Fu says. “If we could reach those limits, you could easily charge your cell phone using body heat. These are performances that have been demonstrated in labs, but could never scale up. That’s the kind of thing we’re trying to push forward.”
This work was supported, in part, by the National Science Foundation and the U.S. Department of Energy.
Earthquake damage at deeper depths occurs long after initial activity
Earthquakes often bring to mind images of destruction, of the Earth breaking open and altering landscapes. But after an earthquake, the surrounding region undergoes a period of post-seismic deformation, in which areas that didn’t break experience new stress as a result of the sudden change in their surroundings. Once the crust has adjusted to this new stress, it reaches a state of recovery.
Geologists have often thought that this recovery period was a smooth, continuous process. But MIT research published recently in Science has found evidence that while healing occurs quickly at shallow depths — roughly above 10 km — deeper depths recover more slowly, if at all.
“If you were to look before and after in the shallow crust, you wouldn’t see any permanent change. But there’s this very permanent change that persists in the mid-crust,” says Jared Bryan, a graduate student in the MIT Department of Earth, Atmospheric and Planetary Sciences (EAPS) and lead author on the paper.
The paper’s other authors include EAPS Professor William Frank and Pascal Audet from the University of Ottawa.
Everything but the quakes
In order to assemble a full understanding of how the crust behaves before, during, and after an earthquake sequence, the researchers looked at seismic data from the 2019 Ridgecrest earthquakes in California. This immature fault zone experienced the largest earthquake in the state in 20 years, and tens of thousands of aftershocks over the following year. They then removed seismic data created by the sequence and only looked at waves generated by other seismic activity around the world to see how their paths through the Earth changed before and after the sequence.
“One person’s signal is another person’s noise,” says Bryan. They also used general ambient noise from sources like ocean waves and traffic that are also picked up by seismometers. Then, using a technique called a receiver function, they were able to see the speed of the waves as they traveled and how it changed due to conditions in the Earth such as rock density and porosity, much in the same way we use sonar to see how acoustic waves change when they interact with objects. With all this information, they were able to construct basic maps of the Earth around the Ridgecrest fault zone before and after the sequence.
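As a rough illustration of the underlying signal-processing idea — not the study’s actual receiver-function pipeline — the sketch below recovers a small delay between a reference waveform and a later recording by finding the cross-correlation lag that best aligns them. Such travel-time delays are a common proxy for a change in seismic wave speed along a path. The signals, shift, and sample rate here are all invented:

```python
# Illustrative cross-correlation delay measurement. The waveform is a
# made-up Gaussian-windowed sine; the 3-sample shift stands in for a
# wave arriving slightly later after the earthquake.

import math

def xcorr_at(ref, cur, lag):
    # Correlate ref[i] with cur[i + lag], skipping out-of-range samples.
    return sum(ref[i] * cur[i + lag]
               for i in range(len(ref))
               if 0 <= i + lag < len(cur))

def best_lag(ref, cur, max_lag):
    """Integer sample lag in [-max_lag, max_lag] maximizing correlation."""
    return max(range(-max_lag, max_lag + 1),
               key=lambda l: xcorr_at(ref, cur, l))

fs = 100   # sample rate in Hz (assumed)
n = 200
ref = [math.sin(2 * math.pi * 5 * t / fs) * math.exp(-((t - 100) / 30) ** 2)
       for t in range(n)]
shift = 3  # samples of delay to recover
cur = [ref[t - shift] if t >= shift else 0.0 for t in range(n)]

lag = best_lag(ref, cur, 10)
print(f"recovered delay: {lag / fs * 1000:.0f} ms")
```

In real studies the delay would be measured on repeating signals across many station pairs and mapped to a fractional velocity change; this toy version only shows the alignment step.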
What they found was that the shallow crust, extending about 10 km into the Earth, recovered over the course of a few months. In contrast, the deeper mid-crust didn’t experience immediate damage, but instead changed over the same timescale on which the shallow crust recovered.
“What was surprising is that the healing in the shallow crust was so quick, and then you have this complementary accumulation occurring, not at the time of the earthquake, but instead over the post-seismic phase,” says Bryan.
Balancing the energy budget
Understanding how recovery plays out at different depths is crucial for determining how energy is spent during different parts of the seismic process, including the release of energy as waves, the creation of new fractures, and the elastic storage of energy in the surrounding rock. Collectively, this is known as the energy budget, and it is a useful tool for understanding how damage accumulates and recovers over time.
What remains unclear is the timescale on which the deeper crust recovers, if at all. The paper presents two possible scenarios to explain why: one in which the deep crust recovers over a much longer timescale than the one observed, and one in which it never recovers at all.
“Either of those are not what we expected,” says Frank. “And both of them are interesting.”
Further research will require more observations to build a more detailed picture of the depth at which the change becomes more pronounced. In addition, Bryan wants to look at other areas, such as more mature faults that experience higher levels of seismic activity, to see whether the results change.
“We’ll let you know in 1,000 years whether it’s recovered,” says Bryan.
Darcy McRose and Mehtaab Sawhney ’20, PhD ’24 named 2025 Packard Fellows for Science and Engineering
The David and Lucile Packard Foundation has announced that two MIT affiliates have been named 2025 Packard Fellows for Science and Engineering. Darcy McRose, the Thomas D. and Virginia W. Cabot Career Development Assistant Professor in the MIT Department of Civil and Environmental Engineering, has been honored, along with Mehtaab Sawhney ’20, PhD ’24, a graduate of the Department of Mathematics who is now at Columbia University.
The honorees are among 20 junior faculty recognized as some of the nation’s most innovative early-career scientists and engineers. Each Packard Fellow receives an unrestricted research grant of $875,000 over five years to support their pursuit of pioneering research and bold new ideas.
“I’m incredibly grateful and honored to be awarded a Packard Fellowship,” says McRose. “It will allow us to continue our work exploring how small molecules control microbial communities in soils and on plant roots, with much-appreciated flexibility to follow our imagination wherever it leads us.”
McRose and her lab study secondary metabolites — small organic molecules that microbes and plants release into soils. Often known as antibiotics, these compounds do far more than fight infections; they can help unlock soil nutrients, shape microbial communities around plant roots, and influence soil fertility.
“Antibiotics made by soil microorganisms are widely used in medicine, but we know surprisingly little about what they do in nature,” explains McRose. “Just as healthy microbiomes support human health, plant microbiomes support plant health, and secondary metabolites can help to regulate the microbial community, suppressing pathogens and promoting beneficial microbes.”
Her lab integrates techniques from genetics, chemistry, and geosciences to investigate how these molecules shape interactions between microbes and plants in soil — one of Earth’s most complex and least-understood environments. By using secondary metabolites as experimental tools, McRose aims to uncover the molecular mechanisms that govern processes like soil fertility and nutrient cycling that are foundational to sustainable agriculture and ecosystem health.
Studying antibiotics in the environments where they evolved could also yield new strategies for combating soil-borne pathogens and improving crop resilience. “Soil is a true scientific frontier,” McRose says. “Studying these environments has the potential to reveal fascinating, fundamental insights into microbial life — many of which we can’t even imagine yet.”
A native of California, McRose earned her bachelor’s and master’s degrees from Stanford University, followed by a PhD in geosciences from Princeton University. Her graduate thesis focused on how bacteria acquire trace metals from the environment. Her postdoctoral research on secondary metabolites at Caltech was supported by multiple fellowships, including the Simons Foundation Marine Microbial Ecology Postdoctoral Fellowship, the L’Oréal USA For Women in Science Fellowship, and a Division Fellowship from Biology and Biological Engineering at Caltech.
McRose joined the MIT faculty in 2022. In 2025, she was named a Sloan Foundation Research Fellow in Earth System Science and awarded the Maseeh Excellence in Teaching Award.
Past Packard Fellows have gone on to earn the highest honors, including Nobel Prizes in chemistry and physics, the Fields Medal, Alan T. Waterman Awards, Breakthrough Prizes, Kavli Prizes, and elections to the National Academies of Sciences, Engineering, and Medicine. Each year, the foundation reviews 100 nominations from 50 invited institutions. The Packard Fellowships Advisory Panel, a group of 12 internationally recognized scientists and engineers, evaluates the nominations and recommends 20 fellows for approval by the Packard Foundation Board of Trustees.
Engineering next-generation fertilizers
Born in Palermo, Sicily, Giorgio Rizzo spent his childhood curious about the natural world. “I have always been fascinated by nature and how plants and animals can adapt and survive in extreme environments,” he says, citing “their highly tuned biochemistry, and their incredible ability to create some of the most complex and beautiful structures in chemistry that we still can’t even achieve in our laboratories.”
As an undergraduate student, he watched as a researcher mounted a towering chromatography column layered with colorful plant chemicals in a laboratory. When the researcher switched on a UV light, the colors turned into fluorescent shades of blue, green, red, and pink. “I realized in that exact moment that I wanted to be the same person, separating new unknown compounds from a rare plant with potential pharmaceutical properties,” he recalls.
These experiences set him on a path from a master’s degree in organic chemistry to his current work as a postdoc in the MIT Department of Civil and Environmental Engineering, where he focuses on developing sustainable fertilizers and studying how rare earth elements can boost plant resilience, with the aim of reducing agriculture’s environmental impact.
In the lab of MIT Professor Benedetto Marelli, Rizzo studies plant responses to environmental stressors, such as heat, drought, and prolonged UV irradiation. This includes developing new fertilizers that can be applied as seed coating to help plants grow stronger and enhance their resistance.
“We are working on new formulations of fertilizers that aim to reduce the huge environmental impact of classical practices in agriculture based on NPK inorganic fertilizers,” Rizzo explains. Although NPK (nitrogen, phosphorus, and potassium) fertilizers are fundamental to crop yields, their tendency to accumulate in soil is detrimental to soil health and the microbiome living in it. In addition, producing them is one of the most energy-consuming and polluting chemical processes in the world.
“It is mandatory to reshape our conception of fertilizers and try to rely, at least in part, on alternative products that are safer, cheaper, and more sustainable,” he says.
Recently, Rizzo was awarded a Kavanaugh Fellowship, a program that gives MIT graduate students and postdocs entrepreneurial training and resources to bring their research from the lab to the market. “This prestigious fellowship will help me build a concrete product for a company, adding more value to our research,” he says.
Rizzo hopes this work will help farmers increase their crop yields without compromising soil quality or plant health. A major barrier to adopting new fertilizers is cost, as many farmers rely heavily on each growing season’s output and cannot risk investing in products that may underperform compared to traditional NPK fertilizers. The fertilizers being developed in the Marelli Lab address this challenge by using chitin and chitosan, abundant natural materials that make them far less expensive to produce, which Rizzo hopes will encourage farmers to try them.
“Through the Kavanaugh Fellowship, I will spend this year trying to bring the technology outside the lab to impact the world and meet the need for farmers to support their prosperity,” he says.
Mentorship has been a defining part of his postdoc experience. Rizzo describes Professor Benedetto Marelli as “an incredible mentor” who values his research interests and supports him through every stage of his work. The lab spans a wide range of projects — from plant growth enhancement and precision chemical delivery to wastewater treatment, vaccine development for fish, and advanced biochemical processes. “My colleagues created a stimulating environment with different research topics,” he notes. He is also grateful for the work he does with international institutions, which has helped him build a network of researchers and academics around the world.
Rizzo enjoys the opportunity to mentor students in the lab and appreciates their curiosity and willingness to learn. “It is one of the greatest qualities you can have as a scientist because you must be driven by curiosity to discover the unexpected,” he says.
He describes MIT as a “dynamic and stimulating experience,” but also acknowledges how overwhelming it can be. “You will feel like a small fish in a big ocean,” he says. “But that is exactly what MIT is: an ocean full of opportunities and challenges that are waiting to be solved.”
Beyond his professional work, Rizzo enjoys nature and the arts. An avid reader, he balances his scientific work with literature and history. “I never read about science-related topics — I read about it a lot already for my job,” he says. “I like classic literature, novels, essays, history of nations, and biographies. Often you can find me wandering in museums’ art collections.” Classical art, the Renaissance, and the Pre-Raphaelites are his favorite artistic movements.
Looking ahead, Rizzo hopes to shift his professional pathway toward startups or companies focused on agrotechnical improvement. His immediate goal is to contribute to initiatives where research has a direct, tangible impact on everyday life.
“I want to pursue the option of being part of a spinout process that would enable my research to have a direct impact in everyday life and help solve agricultural issues,” he adds.
Optimizing food subsidies: Applying digital platforms to maximize nutrition
Oct. 16 is World Food Day, a global campaign that celebrates the founding of the Food and Agriculture Organization 80 years ago and works toward a healthy, sustainable, food-secure future. More than 670 million people in the world are facing hunger, while millions of others face rising obesity rates and struggle to get healthy food for proper nutrition.
World Food Day calls on not only world governments, but business, academia, the media, and even the youth to take action to promote resilient food systems and combat hunger. This year, the Abdul Latif Jameel Water and Food Systems Laboratory (J-WAFS) is spotlighting an MIT researcher who is working toward this goal by studying food and water systems in the Global South.
J-WAFS seed grants provide funding to early-stage research projects that are distinct from prior work. In the 11th round of seed grant funding in 2025, 10 MIT faculty members received support to carry out their cutting-edge water and food research. Ali Aouad PhD ’17, assistant professor of operations management at the MIT Sloan School of Management, was one of those grantees. “I had searched before joining MIT what kind of research centers and initiatives were available that tried to coalesce research on food systems,” Aouad says. “And so, I was very excited about J-WAFS.”
Aouad gathered more information about J-WAFS at the new faculty orientation session in August 2024, where he spoke to J-WAFS staff and learned about the program’s grant opportunities for water and food research. Later that fall semester, he attended a few J-WAFS seminars on agricultural economics and water resource management. That’s when Aouad knew that his project was perfectly aligned with the J-WAFS mission of securing humankind’s water and food.
Aouad’s seed project focuses on food subsidies. Aouad has a background in operations research and an interest in digital platforms, and much of his work has centered on aligning supply-side operations with heterogeneous customer preferences. Past projects have focused on retail and matching systems. “I started thinking that these types of demand-driven approaches may also be very relevant to important social challenges, particularly as they relate to food security,” Aouad says. Before starting his PhD at MIT, Aouad worked on projects that looked at subsidies for smallholder farmers in low- and middle-income countries. “I think in the back of my mind, I’ve always been fascinated by trying to solve these issues,” he notes.
His seed grant project, Optimal subsidy design: Application to food assistance programs, aims to leverage data on preferences and purchasing habits from local grocery stores in India to inform food assistance policy and optimize the design of subsidies. Typical data collection systems, like point-of-sale terminals, are not as readily available in India’s local groceries, making purchasing data for low-income individuals hard to come by. “Mom-and-pop stores are extremely important last-mile operators when it comes to nutrition,” he explains.
For this project, the research team gave local grocers point-of-sale scanners to track purchasing habits. “We aim to develop an algorithm that converts these transactions into some sort of ‘revelation’ of the individuals’ latent preferences,” says Aouad. “As such, we can model and optimize the food assistance programs — how much variety and flexibility is offered, taking into account the expected demand uptake.” He continues, “now, of course, our ability to answer detailed design questions [across various products and prices] depends on the quality of our inference from the data, and so this is where we need more sophisticated and robust algorithms.”
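One simple, hypothetical version of such an inference — not the team’s actual algorithm — treats observed purchase shares as revealing multinomial-logit-style utilities, which the model can then use to predict how uptake shifts when a subsidy makes an item more attractive. The product names, counts, and boost parameter below are all invented for illustration:

```python
# Toy "revealed preference" sketch: fit log-share utilities from
# transaction counts, then predict demand uptake under a subsidy.
# All data here is invented.

import math
from collections import Counter

purchases = ["rice", "rice", "lentils", "oil", "rice", "lentils",
             "rice", "oil", "rice", "lentils"]

counts = Counter(purchases)
total = sum(counts.values())
shares = {item: c / total for item, c in counts.items()}

# Log-share utilities, normalized so the most popular item sits at 0.
base = max(shares.values())
utility = {item: math.log(s / base) for item, s in shares.items()}

def predicted_share(item, boost=0.0):
    """Logit choice probability if a subsidy adds `boost` to item's utility."""
    weights = {i: math.exp(u + (boost if i == item else 0.0))
               for i, u in utility.items()}
    return weights[item] / sum(weights.values())

print(f"baseline lentils share: {predicted_share('lentils'):.2f}")
print(f"with subsidy boost:     {predicted_share('lentils', boost=0.5):.2f}")
```

With no subsidy the model reproduces the observed share exactly, which is the sanity check; the real research question is how robustly such utilities can be inferred from noisy, sparse transaction data.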
Following the data collection and model development, the ultimate goal of this research is to inform policy surrounding food assistance programs through an “optimization approach.” Aouad describes the complexities of using optimization to guide policy. “Policies are often informed by domain expertise, legacy systems, or political deliberation. A lot of researchers build rigorous evidence to inform food policy, but it’s fair to say that the kind of approach that I’m proposing in this research is not something that is commonly used. I see an opportunity for bringing a new approach and methodological tradition to a problem that has been central for policy for many decades.”
The overall health of consumers is the reason food assistance programs exist, yet measuring long-term nutritional impacts and shifts in purchase behavior is difficult. Aouad notes that in past research, the short-term effects of food assistance interventions can be significant; however, these effects are often short-lived. “This is a fascinating question that I don’t think we will be able to address within the space of interventions that we will be considering. However, I think it is something I would like to capture in the research, and maybe develop hypotheses for future work around how we can shift nutrition-related behaviors in the long run.”
While his project develops a new methodology to calibrate food assistance programs, large-scale applications are not promised. “A lot of what drives subsidy mechanisms and food assistance programs is also, quite frankly, how easy it is and how cost-effective it is to implement these policies in the first place,” comments Aouad. Cost and infrastructure barriers are unavoidable in this kind of policy research, as is the challenge of sustaining these programs over time. Aouad’s effort will provide insights into customer preferences and subsidy optimization in a pilot setup, but replicating this approach at real scale may be costly. Aouad hopes to gather proxy information from customers that would both feed into the model and point toward a more cost-effective way to collect data for large-scale implementation.
There is still much work to be done to ensure food security for all, whether it’s advances in agriculture, food-assistance programs, or ways to boost adequate nutrition. As the 2026 seed grant deadline approaches, J-WAFS will continue its mission of supporting MIT faculty as they pursue innovative projects that have practical and real impacts on water and food system challenges.
Checking the quality of materials just got easier with a new AI tool
Manufacturing better batteries, faster electronics, and more effective pharmaceuticals depends on the discovery of new materials and the verification of their quality. Artificial intelligence is helping with the former, with tools that comb through catalogs of materials to quickly tag promising candidates.
But once a material is made, verifying its quality still involves scanning it with specialized instruments to validate its performance — an expensive and time-consuming step that can hold up the development and distribution of new technologies.
Now, a new AI tool developed by MIT engineers could help clear the quality-control bottleneck, offering a faster and cheaper option for certain materials-driven industries.
In a study appearing today in the journal Matter, the researchers present “SpectroGen,” a generative AI tool that turbocharges scanning capabilities by serving as a virtual spectrometer. The tool takes in “spectra,” or measurements of a material in one scanning modality, such as infrared, and generates what that material’s spectra would look like if it were scanned in an entirely different modality, such as X-ray. The AI-generated spectral results match, with 99 percent accuracy, the results obtained from physically scanning the material with the new instrument.
Certain spectroscopic modalities reveal specific properties in a material: Infrared reveals a material’s molecular groups, X-ray diffraction visualizes its crystal structures, and Raman scattering illuminates its molecular vibrations. Each of these properties is essential in gauging a material’s quality, and measuring them typically requires tedious workflows on multiple expensive, distinct instruments.
With SpectroGen, the researchers envision that a diversity of measurements can be made using a single and cheaper physical scope. For instance, a manufacturing line could carry out quality control of materials by scanning them with a single infrared camera. Those infrared spectra could then be fed into SpectroGen to automatically generate the material’s X-ray spectra, without the factory having to house and operate a separate, often more expensive X-ray-scanning laboratory.
The new AI tool generates spectra in less than a minute, about a thousand times faster than traditional approaches, which can take hours to days to measure and validate.
“We think that you don’t have to do the physical measurements in all the modalities you need, but perhaps just in a single, simple, and cheap modality,” says study co-author Loza Tadesse, assistant professor of mechanical engineering at MIT. “Then you can use SpectroGen to generate the rest. And this could improve productivity, efficiency, and quality of manufacturing.”
The study’s lead author is former MIT postdoc Yanmin Zhu.
Beyond bonds
Tadesse’s interdisciplinary group at MIT pioneers technologies that advance human and planetary health, developing innovations for applications ranging from rapid disease diagnostics to sustainable agriculture.
“Diagnosing diseases, and material analysis in general, usually involves scanning samples and collecting spectra in different modalities, with different instruments that are bulky and expensive and that you might not all find in one lab,” Tadesse says. “So, we were brainstorming about how to miniaturize all this equipment and how to streamline the experimental pipeline.”
Zhu noted the increasing use of generative AI tools for discovering new materials and drug candidates, and wondered whether AI could also be harnessed to generate spectral data. In other words, could AI act as a virtual spectrometer?
A spectrometer probes a material’s properties by sending light of a certain wavelength into the material. That light causes molecular bonds in the material to vibrate in ways that scatter the light back out to the instrument, where it is recorded as a pattern of waves, or spectrum, that can then be read as a signature of the material’s structure.
For AI to generate spectral data, the conventional approach would involve training an algorithm to recognize connections between physical atoms and features in a material, and the spectra they produce. Given the complexity of molecular structures within just one material, Tadesse says such an approach can quickly become intractable.
“Doing this even for just one material is impossible,” she says. “So, we thought, is there another way to interpret spectra?”
The team found an answer in math. They realized that a spectral pattern, which is a sequence of waveforms, can be represented mathematically. For instance, a spectrum that contains a series of bell curves follows a “Gaussian” distribution, which is associated with one mathematical expression, while a series of narrower peaks follows a “Lorentzian” distribution, described by a separate, distinct expression. And as it turns out, for most materials, infrared spectra characteristically contain more Lorentzian waveforms, Raman spectra are more Gaussian, and X-ray spectra are a mix of the two.
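The difference between the two line shapes is easy to see numerically. Below is a minimal NumPy sketch of the two distributions (purely illustrative; it is not drawn from the paper’s code):

```python
import numpy as np

def gaussian(x, center, width, height=1.0):
    """Bell-shaped peak: intensity falls off like exp(-x^2)."""
    return height * np.exp(-((x - center) ** 2) / (2 * width ** 2))

def lorentzian(x, center, width, height=1.0):
    """Narrower core with much heavier tails: falls off like 1/x^2."""
    return height * width ** 2 / ((x - center) ** 2 + width ** 2)

x = np.linspace(-10, 10, 2001)
gau = gaussian(x, 0.0, 1.0)
lor = lorentzian(x, 0.0, 1.0)

# Both peaks share the same center and height, but far from the center
# the Lorentzian tail dwarfs the Gaussian's.
i = np.argmin(np.abs(x - 5.0))
print(f"at x=5: Gaussian {gau[i]:.2e}, Lorentzian {lor[i]:.2e}")
```

Five widths from the center, the Gaussian has decayed to roughly a millionth of its peak while the Lorentzian still retains a few percent, which is why the two families of curves are mathematically distinguishable even when peaks overlap.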
Tadesse and Zhu worked this mathematical interpretation of spectral data into an algorithm that they then incorporated into a generative AI model.
“It’s a physics-savvy generative AI that understands what spectra are,” Tadesse says. “And the key novelty is, we interpreted spectra not as how it comes about from chemicals and bonds, but that it is actually math — curves and graphs, which an AI tool can understand and interpret.”
Data co-pilot
The team demonstrated their SpectroGen AI tool on a large, publicly available dataset of over 6,000 mineral samples. Each sample includes information on the mineral’s properties, such as its elemental composition and crystal structure. Many samples in the dataset also include spectral data in different modalities, such as X-ray, Raman, and infrared. Of these samples, the team fed several hundred to SpectroGen, in a process that trained the AI tool, a neural network, to learn correlations between a mineral’s different spectral modalities. This training enabled SpectroGen to take in a material’s spectrum in one modality, such as infrared, and generate what its spectrum in a totally different modality, such as X-ray, should look like.
Once they trained the AI tool, the researchers fed SpectroGen a spectrum from a mineral in the dataset that was not included in the training process. They asked the tool to generate a spectrum in a different modality, based on this “new” spectrum. The AI-generated spectrum, they found, was a close match to the mineral’s real spectrum, which was originally recorded by a physical instrument. The researchers carried out similar tests with a number of other minerals and found that the AI tool quickly generated spectra with 99 percent correlation.
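As a rough illustration of the train-then-test workflow, the sketch below builds synthetic paired spectra, fits a plain linear map from one modality to the other, and scores held-out predictions by Pearson correlation. The data, response matrices, and linear model are all invented stand-ins; SpectroGen’s actual architecture, dataset, and physics are different:

```python
import numpy as np

rng = np.random.default_rng(0)
n_train, n_test, n_bins = 300, 20, 128

# Toy stand-ins: each "material" is a latent composition vector, and each
# modality renders that composition through its own fixed response matrix.
latent_dim = 8
comp = rng.random((n_train + n_test, latent_dim))
resp_ir = rng.random((latent_dim, n_bins))   # hypothetical IR response
resp_xrd = rng.random((latent_dim, n_bins))  # hypothetical X-ray response
ir = comp @ resp_ir + 0.01 * rng.standard_normal((n_train + n_test, n_bins))
xrd = comp @ resp_xrd

# Fit a direct IR -> XRD map by regularized least squares on training pairs.
A, B = ir[:n_train], xrd[:n_train]
W = np.linalg.solve(A.T @ A + 1e-3 * np.eye(n_bins), A.T @ B)

# Evaluate on held-out "materials" with per-spectrum Pearson correlation,
# mirroring how generated and measured spectra are compared.
pred = ir[n_train:] @ W
corrs = [np.corrcoef(p, t)[0, 1] for p, t in zip(pred, xrd[n_train:])]
print(f"mean held-out correlation: {np.mean(corrs):.3f}")
```

In this toy setting the map is recoverable almost exactly, so held-out correlations land near 1; the real problem is far harder, which is why a physics-informed generative model is needed rather than a linear fit.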
“We can feed spectral data into the network and can get another totally different kind of spectral data, with very high accuracy, in less than a minute,” Zhu says.
The team says that SpectroGen can generate spectra for any type of mineral. In a manufacturing setting, for instance, mineral-based materials that are used to make semiconductors and battery technologies could first be quickly scanned by an infrared laser. The spectra from this infrared scanning could be fed into SpectroGen, which would then generate an X-ray spectrum, which operators or a multiagent AI platform can check to assess the material’s quality.
“I think of it as having an agent or co-pilot, supporting researchers, technicians, pipelines and industry,” Tadesse says. “We plan to customize this for different industries’ needs.”
The team is exploring ways to adapt the AI tool for disease diagnostics, and for agricultural monitoring through an upcoming project funded by Google. Tadesse is also advancing the technology to the field through a new startup and envisions making SpectroGen available for a wide range of sectors, from pharmaceuticals to semiconductors to defense.
Helping scientists run complex data analyses without writing code
As costs for diagnostic and sequencing technologies have plummeted in recent years, researchers have collected an unprecedented amount of data around disease and biology. Unfortunately, scientists hoping to go from data to new cures often require help from someone with experience in software engineering.
Now, Watershed Bio is helping scientists and bioinformaticians run experiments and get insights with a platform that lets users analyze complex datasets regardless of their computational skills. The cloud-based platform provides workflow templates and a customizable interface to help users explore and share data of all types, including whole-genome sequencing, transcriptomics, proteomics, metabolomics, high-content imaging, protein folding, and more.
“Scientists want to learn about the software and data science parts of the field, but they don’t want to become software engineers writing code just to understand their data,” co-founder and CEO Jonathan Wang ’13, SM ’15, says. “With Watershed, they don’t have to.”
Watershed is being used by large and small research teams across industry and academia to drive discovery and decision-making. When new advanced analytic techniques are described in scientific journals, they can be added to Watershed’s platform immediately as templates, making cutting-edge tools more accessible and collaborative for researchers of all backgrounds.
“The data in biology is growing exponentially, and the sequencing technologies generating this data are only getting better and cheaper,” Wang says. “Coming from MIT, this issue was right in my wheelhouse: It’s a tough technical problem. It’s also a meaningful problem because these people are working to treat diseases. They know all this data has value, but they struggle to use it. We want to help them unlock more insights faster.”
No code discovery
Wang expected to major in biology at MIT, but he quickly got excited by the possibilities of building solutions that scaled to millions of people with computer science. He ended up earning both his bachelor’s and master’s degrees from the Department of Electrical Engineering and Computer Science (EECS). Wang also interned at a biology lab at MIT, where he was surprised how slow and labor-intensive experiments were.
“I saw the difference between biology and computer science, where you had these dynamic environments [in computer science] that let you get feedback immediately,” Wang says. “Even as a single person writing code, you have so much at your fingertips to play with.”
While working on machine learning and high-performance computing at MIT, Wang also co-founded a high frequency trading firm with some classmates. His team hired researchers with PhD backgrounds in areas like math and physics to develop new trading strategies, but they quickly saw a bottleneck in their process.
“Things were moving slowly because the researchers were used to building prototypes,” Wang says. “These were small approximations of models they could run locally on their machines. To put those approaches into production, they needed engineers to make them work in a high-throughput way on a computing cluster. But the engineers didn’t understand the nature of the research, so there was a lot of back and forth. It meant ideas you thought could have been implemented in a day took weeks.”
To solve the problem, Wang’s team developed a software layer that made building production-ready models as easy as building prototypes on a laptop. Then, a few years after graduating from MIT, Wang noticed technologies like DNA sequencing had become cheap and ubiquitous.
“The bottleneck wasn’t sequencing anymore, so people said, ‘Let’s sequence everything,’” Wang recalls. “The limiting factor became computation. People didn’t know what to do with all the data being generated. Biologists were waiting for data scientists and bioinformaticians to help them, but those people didn’t always understand the biology at a deep enough level.”
The situation looked familiar to Wang.
“It was exactly like what we saw in finance, where researchers were trying to work with engineers, but the engineers never fully understood, and you had all this inefficiency with people waiting on the engineers,” Wang says. “Meanwhile, I learned the biologists are hungry to run these experiments, but there is such a big gap they felt they had to become a software engineer or just focus on the science.”
Wang officially founded Watershed in 2019 with physician Mark Kalinich ’13, a former classmate at MIT who is no longer involved in day-to-day operations of the company.
Wang has since heard from biotech and pharmaceutical executives about the growing complexity of biology research. Unlocking new insights increasingly involves analyzing data from entire genomes, population studies, RNA sequencing, mass spectrometry, and more. Developing personalized treatments or selecting patient populations for a clinical study can also require huge datasets, and there are new ways to analyze data being published in scientific journals all the time.
Today, companies can run large-scale analyses on Watershed without having to set up their own servers or cloud computing accounts. Researchers can use ready-made templates that work with all the most common data types to accelerate their work. Popular AI-based tools like AlphaFold and Geneformer are also available, and Watershed’s platform makes sharing workflows and digging deeper into results easy.
“The platform hits a sweet spot of usability and customizability for people of all backgrounds,” Wang says. “No science is ever truly the same. I avoid the word product because that implies you deploy something and then you just run it at scale forever. Research isn’t like that. Research is about coming up with an idea, testing it, and using the outcome to come up with another idea. The faster you can design, implement, and execute experiments, the faster you can move on to the next one.”
Accelerating biology
Wang believes Watershed is helping biologists keep up with the latest advances in biology and accelerating scientific discovery in the process.
“If you can help scientists unlock insights not a little bit faster, but 10 or 20 times faster, it can really make a difference,” Wang says.
Watershed is being used by researchers in academia and in companies of all sizes. Executives at biotech and pharmaceutical companies also use Watershed to make decisions about new experiments and drug candidates.
“We’ve seen success in all those areas, and the common thread is people understanding research but not being an expert in computer science or software engineering,” Wang says. “It’s exciting to see this industry develop. For me, it’s great being from MIT and now to be back in Kendall Square where Watershed is based. This is where so much of the cutting-edge progress is happening. We’re trying to do our part to enable the future of biology.”
New MIT initiative seeks to transform rare brain disorders research
More than 300 million people worldwide are living with rare disorders — many of which have a genetic cause and affect the brain and nervous system — yet the vast majority of these conditions lack an approved therapy. Because each rare disorder affects fewer than 65 out of every 100,000 people, studying these disorders and creating new treatments for them is especially challenging.
Thanks to a generous philanthropic gift from Ana Méndez ’91 and Rajeev Jayavant ’86, EE ’88, SM ’88, MIT is now poised to fill gaps in this research landscape. By establishing the Rare Brain Disorders Nexus — or RareNet — at MIT's McGovern Institute for Brain Research, the alumni aim to convene leaders in neuroscience research, clinical medicine, patient advocacy, and industry to streamline the lab-to-clinic pipeline for rare brain disorder treatments.
“Ana and Rajeev’s commitment to MIT will form crucial partnerships to propel the translation of scientific discoveries into promising therapeutics and expand the Institute’s impact on the rare brain disorders community,” says MIT President Sally Kornbluth. “We are deeply grateful for their pivotal role in advancing such critical science and bringing attention to conditions that have long been overlooked.”
Building new coalitions
Several hurdles have slowed the lab-to-clinic pipeline for rare brain disorder research. It is difficult to secure a sufficient number of patients per study, and current research efforts are fragmented, since each study typically focuses on a single disorder (there are more than 7,000 known rare disorders, according to the World Health Organization). Pharmaceutical companies are often reluctant to invest in emerging treatments due to a limited market size and the high costs associated with preparing drugs for commercialization.
Méndez and Jayavant envision that RareNet will finally break down these barriers. “Our hope is that RareNet will allow leaders in the field to come together under a shared framework and ignite scientific breakthroughs across multiple conditions. A discovery for one rare brain disorder could unlock new insights that are relevant to another,” says Jayavant. “By congregating the best minds in the field, we are confident that MIT will create the right scientific climate to produce drug candidates that may benefit a spectrum of uncommon conditions.”
Guoping Feng, the James W. (1963) and Patricia T. Poitras Professor in Neuroscience and associate director of the McGovern Institute, will serve as RareNet’s inaugural faculty director. Feng holds a strong record of advancing studies on therapies for neurodevelopmental disorders, including autism spectrum disorders, Williams syndrome, and uncommon forms of epilepsy. His team’s gene therapy for Phelan-McDermid syndrome, a rare and profound autism spectrum disorder, has been licensed to Jaguar Gene Therapy and is currently undergoing clinical trials. “RareNet pioneers a unique model for biomedical research — one that is reimagining the role academia can play in developing therapeutics,” says Feng.
RareNet plans to deploy two major initiatives: a global consortium and a therapeutic pipeline accelerator. The consortium will form an international network of researchers, clinicians, and patient groups from the outset. It seeks to connect siloed research efforts, secure more patient samples, promote data sharing, and drive a strong sense of trust and goal alignment across the RareNet community. Partnerships within the consortium will support the aim of the therapeutic pipeline accelerator: to de-risk early lab discoveries and expedite their translation to clinic. By fostering more targeted collaborations — especially between academia and industry — the accelerator will prepare potential treatments for clinical use as efficiently as possible.
MIT labs are focusing on four uncommon conditions in the first wave of RareNet projects: Rett syndrome, prion disease, disorders linked to SYNGAP1 mutations, and Sturge-Weber syndrome. The teams are working to develop novel therapies that can slow, halt, or reverse dysfunctions in the brain and nervous system.
These efforts will build new bridges to connect key stakeholders across the rare brain disorders community and disrupt conventional research approaches. “Rajeev and I are motivated to seed powerful collaborations between MIT researchers, clinicians, patients, and industry,” says Méndez. “Guoping Feng clearly understands our goal to create an environment where foundational studies can thrive and seamlessly move toward clinical impact.”
“Patient and caregiver experiences, and our foreseeable impact on their lives, will guide us and remain at the forefront of our work,” Feng adds. “For far too long has the rare brain disorders community been deprived of life-changing treatments — and, importantly, hope. RareNet gives us the opportunity to transform how we study these conditions, and to do so at a moment when it’s needed more than ever.”
Geologists discover the first evidence of 4.5-billion-year-old “proto Earth”
Scientists at MIT and elsewhere have discovered extremely rare remnants of “proto Earth,” which formed about 4.5 billion years ago, before a colossal collision irreversibly altered the primitive planet’s composition and produced the Earth as we know today. Their findings, reported today in the journal Nature Geoscience, will help scientists piece together the primordial starting ingredients that forged the early Earth and the rest of the solar system.
Billions of years ago, the early solar system was a swirling disk of gas and dust that eventually clumped and accumulated to form the earliest meteorites, which in turn merged to form the proto Earth and its neighboring planets.
In this earliest phase, Earth was likely rocky and bubbling with lava. Then, less than 100 million years later, a Mars-sized body slammed into the infant planet in a singular “giant impact” event that completely scrambled and melted the planet’s interior, effectively resetting its chemistry. Whatever original material the proto Earth was made from was thought to have been altogether transformed.
But the MIT team’s findings suggest otherwise. The researchers have identified a chemical signature in ancient rocks that is distinct from most other materials found on Earth today. The signature is in the form of a subtle imbalance in potassium isotopes discovered in samples of very old and very deep rocks. The team determined that the potassium imbalance could not have been produced by any previous large impacts or by geological processes occurring in the Earth presently.
The most likely explanation for the samples’ chemical composition is that they must be leftover material from the proto Earth that somehow remained unchanged, even as most of the early planet was impacted and transformed.
“This is maybe the first direct evidence that we’ve preserved the proto Earth materials,” says Nicole Nie, the Paul M. Cook Career Development Assistant Professor of Earth and Planetary Sciences at MIT. “We see a piece of the very ancient Earth, even before the giant impact. This is amazing because we would expect this very early signature to be slowly erased through Earth’s evolution.”
The study’s other authors include Da Wang of Chengdu University of Technology in China, Steven Shirey and Richard Carlson of the Carnegie Institution for Science in Washington, Bradley Peters of ETH Zürich in Switzerland, and James Day of Scripps Institution of Oceanography in California.
A curious anomaly
In 2023, Nie and her colleagues analyzed many of the major meteorites that have been collected from sites around the world and carefully studied. Before impacting the Earth, these meteorites likely formed at various times and locations throughout the solar system, and therefore represent the solar system’s changing conditions over time. When the researchers compared the chemical compositions of these meteorite samples to Earth, they identified among them a “potassium isotopic anomaly.”
Isotopes are slightly different versions of an element that have the same number of protons but a different number of neutrons. Potassium exists in three naturally occurring isotopes, with mass numbers (protons plus neutrons) of 39, 40, and 41. Wherever potassium has been found on Earth, it exists in a characteristic combination of isotopes, with potassium-39 and potassium-41 being overwhelmingly dominant. Potassium-40 is present, but at a vanishingly small percentage in comparison.
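To make the scale concrete, the short sketch below computes the kind of per-mil ratio deviation geochemists work with. The abundance values are standard textbook figures, and the sample is hypothetical:

```python
# Approximate natural abundances of potassium isotopes (atom fractions);
# textbook values, quoted here only to make the scale concrete.
K39, K40, K41 = 0.932581, 0.000117, 0.067302

# Geochemists express tiny shifts as per-mil deviations of an isotope
# ratio in a sample relative to a reference standard.
def delta_permil(ratio_sample, ratio_standard):
    return (ratio_sample / ratio_standard - 1.0) * 1000.0

standard_ratio = K40 / K39
# A hypothetical sample whose potassium-40 content is lower by 5 parts
# in 10,000 relative to the standard:
sample_ratio = (K40 * 0.9995) / K39
print(f"{delta_permil(sample_ratio, standard_ratio):.2f} per mil")  # -0.50 per mil
```

Deviations of a fraction of a per mil on an isotope that is itself only about 0.01 percent of all potassium is what makes this kind of measurement so demanding.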
Nie and her colleagues discovered that the meteorites they studied showed balances of potassium isotopes that were different from most materials on Earth. This potassium anomaly suggested that any material that exhibits a similar anomaly likely predates Earth’s present composition. In other words, any potassium imbalance would be a strong sign of material from the proto Earth, before the giant impact reset the planet’s chemical composition.
“In that work, we found that different meteorites have different potassium isotopic signatures, and that means potassium can be used as a tracer of Earth’s building blocks,” Nie explains.
“Built different”
In the current study, the team looked for signs of potassium anomalies not in meteorites, but within the Earth. Their samples include rocks, in powder form, from Greenland and Canada, where some of the oldest preserved rocks are found. They also analyzed lava deposits collected from Hawaii, where volcanoes have brought up some of the Earth’s earliest, deepest materials from the mantle (the planet’s thickest layer of rock that separates the crust from the core).
“If this potassium signature is preserved, we would want to look for it in deep time and deep Earth,” Nie says.
The team first dissolved the various powder samples in acid, then carefully isolated any potassium from the rest of the sample and used a special mass spectrometer to measure the ratio of each of potassium’s three isotopes. Remarkably, they identified in the samples an isotopic signature that was different from what’s been found in most materials on Earth.
Specifically, they identified a deficit in the potassium-40 isotope. In most materials on Earth, this isotope is already an insignificant fraction compared to potassium’s other two isotopes. But the researchers were able to discern that their samples contained an even smaller percentage of potassium-40. Detecting this tiny deficit is like spotting a single grain of brown sand in a bucket full of yellow sand.
This deficit showed that the materials “were built different,” says Nie, compared to most of what we see on Earth today.
But could the samples be rare remnants of the proto Earth? To test this possibility, the researchers reasoned that if the proto Earth were originally made from such potassium-40-deficient materials, then most of that material would have undergone chemical changes — from the giant impact and subsequent, smaller meteorite impacts — that ultimately produced the materials, richer in potassium-40, that we see today.
The team used compositional data from every known meteorite and carried out simulations of how the samples’ potassium-40 deficit would change following impacts by these meteorites and by the giant impact. They also simulated geological processes that the Earth experienced over time, such as the heating and mixing of the mantle. In the end, their simulations produced a composition with a slightly higher fraction of potassium-40 compared to the samples from Canada, Greenland, and Hawaii. More importantly, the simulated compositions matched those of most modern-day materials.
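At its core, this kind of simulation rests on simple mass balance. The sketch below mixes a hypothetical proto-Earth reservoir with impactor-derived material and shows how a potassium-40 deficit gets diluted; every number here is invented for illustration, and the real models track many more processes:

```python
# Minimal mass-balance sketch: start from a hypothetical proto-Earth
# reservoir with a K-40 deficit (expressed in per mil relative to modern
# potassium) and mix in impactor material at the modern reference value.
def mix_k40_anomaly(anomaly_a, anomaly_b, mass_fraction_b):
    """Linear mixing of per-mil anomalies, assuming equal K concentrations."""
    return anomaly_a * (1 - mass_fraction_b) + anomaly_b * mass_fraction_b

proto = -0.5    # hypothetical proto-Earth anomaly, per mil
impactor = 0.0  # impactor material at the modern reference value

# Mixing in 90 percent impactor-derived material dilutes the deficit tenfold,
# pushing the blend toward the modern composition.
print(f"{mix_k40_anomaly(proto, impactor, 0.9):.3f}")  # -0.050
```

The same logic, run forward over many impacts and mantle-mixing events, is how a simulated composition ends up with slightly more potassium-40 than the pristine starting reservoir, just as the team observed.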
The work suggests that materials with a potassium-40 deficit are likely leftover original material from the proto Earth.
Curiously, the samples’ signature isn’t a precise match with any other meteorite in geologists’ collections. While the meteorites in the team’s previous work showed potassium anomalies, they aren’t exactly the deficit seen in the proto Earth samples. This means that whatever meteorites and materials originally formed the proto Earth have yet to be discovered.
“Scientists have been trying to understand Earth’s original chemical composition by combining the compositions of different groups of meteorites,” Nie says. “But our study shows that the current meteorite inventory is not complete, and there is much more to learn about where our planet came from.”
This work was supported, in part, by NASA and MIT.
A new system can dial expression of synthetic genes up or down
For decades, synthetic biologists have been developing gene circuits that can be transferred into cells for applications such as reprogramming a stem cell into a neuron or generating a protein that could help treat a disease such as fragile X syndrome.
These gene circuits are typically delivered into cells by carriers such as nonpathogenic viruses. However, it has been difficult to ensure that these cells end up producing the correct amount of the protein encoded by the synthetic gene.
To overcome that obstacle, MIT engineers have designed a new control mechanism that allows them to establish a desired protein level, or set point, for any gene circuit. This approach also allows them to edit the set point after the circuit is delivered.
“This is a really stable and multifunctional tool. The tool is very modular, so there are a lot of transgenes you could control with this system,” says Katie Galloway, an assistant professor of chemical engineering at MIT and the senior author of the new study.
Using this strategy, the researchers showed that they could induce cells to generate consistent levels of target proteins. In one application that they demonstrated, they converted mouse embryonic fibroblasts to motor neurons by delivering high levels of a gene that promotes that conversion.
MIT graduate student Sneha Kabaria is the lead author of the paper, which appears today in Nature Biotechnology. Other authors include Yunbeen Bae ’24; MIT graduate students Mary Ehmann, Brittany Lende-Dorn, Emma Peterman, and Kasey Love; Adam Beitz PhD ’25; and former MIT postdoc Deon Ploessl.
Dialing up gene expression
Synthetic gene circuits are engineered to include not only the gene of interest, but also a promoter region. At this site, transcription factors and other regulators can bind, turning on the expression of the synthetic gene.
However, it’s not always possible to get all of the cells in a population to express the desired gene at a uniform level. One reason for that is that some cells may take up just one copy of the circuit, while others receive many more. Additionally, cells have natural variation in how much protein they produce.
That has made reprogramming cells challenging because it’s difficult to ensure that every cell in a population of skin cells, for example, will produce enough of the necessary transcription factors to successfully transition into a new cell identity, such as a neuron or induced pluripotent stem cell.
In the new paper, the researchers devised a way to control gene expression levels by changing the distance between the synthetic gene and its promoter. They found that when there was a longer DNA “spacer” between the promoter region and the gene, the gene would be expressed at a lower level. That extra distance, they showed, makes it less likely that transcription factors bound to the promoter will effectively turn on gene transcription.
Then, to create set points that could be edited, the researchers incorporated sites within the spacer that can be excised by an enzyme called Cre recombinase. Cutting out parts of the spacer brings the transcription factors closer to the gene of interest, which turns up gene expression.
The researchers showed they could create spacers with multiple excision points, each targeted by a different recombinase. This allowed them to create a system, called DIAL, that they could use to establish “high,” “medium,” “low,” and “off” set points for gene expression.
After the DNA segment carrying the gene and its promoter is delivered into cells, recombinases can be added to the cells, allowing the set point to be edited at any time.
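The set-point logic described above can be sketched as a toy numerical model. This is purely illustrative: the segment names, lengths, and the inverse distance-to-expression relation below are invented for the sketch and do not come from the paper.

```python
# Toy model of the DIAL idea: a longer promoter-gene spacer means lower expression,
# and each recombinase excises part of the spacer, raising the set point.
# All names and numbers here are invented for illustration.

SPACER_SEGMENTS = {"Cre": 400, "Flp": 300}  # excisable segments, in base pairs
BASE_DISTANCE = 100                          # residual distance after full excision

def expression_level(applied_recombinases):
    """Return a relative expression level (arbitrary units)."""
    distance = BASE_DISTANCE + sum(
        length for enzyme, length in SPACER_SEGMENTS.items()
        if enzyme not in applied_recombinases
    )
    return 1000 / distance  # toy inverse relation between distance and expression

low = expression_level(set())             # full spacer intact -> "low"
med = expression_level({"Cre"})           # one segment excised -> "medium"
high = expression_level({"Cre", "Flp"})   # spacer fully excised -> "high"
print(low, med, high)  # 1.25 2.5 10.0
```

Because excision is irreversible, adding recombinases can only move the set point upward, which matches the article's description of editing expression after delivery.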
The researchers demonstrated their system in mouse and human cells by delivering genes for different fluorescent proteins, as well as functional genes, and showed that they could achieve uniform expression at the target level across a population of cells.
“We achieved uniform and stable control. This is very exciting for us because lack of uniform, stable control has been one of the things that’s been limiting our ability to build reliable systems in synthetic biology. When there are too many variables that affect your system, and then you add in normal biological variation, it’s very hard to build stable systems,” Galloway says.
Reprogramming cells
To demonstrate potential applications of the DIAL system, the researchers then used it to deliver different levels of the gene HRasG12V to mouse embryonic fibroblasts. This HRas variant has previously been shown to increase the rate of conversion of fibroblasts to neurons. The MIT team found that a larger percentage of the cells that received a higher dose of the gene successfully converted into neurons.
Using this system, the researchers now hope to perform more systematic studies of different transcription factors that can induce cells to transition to different cell types. Such studies could reveal how different levels of those factors affect the success rate, and whether changing transcription factor levels might alter the cell type that is generated.
In ongoing work, the researchers have shown that DIAL can be combined with a system they previously developed, known as ComMAND, that uses a feedforward loop to help prevent cells from overexpressing a therapeutic gene.
Using these systems together, it could be possible to tailor gene therapies to produce specific, consistent protein levels in the target cells of individual patients, the researchers say.
“This is something we’re excited about because both DIAL and ComMAND are highly modular, so you could not only have a well-controlled gene therapy that’s somewhat general for a population, but you could, in theory, tailor it for any given person or any given cell type,” Galloway says.
The research was funded, in part, by the National Institute of General Medical Sciences, the National Science Foundation, and the Institute for Collaborative Biotechnologies.
MIT releases financials and endowment figures for 2025
The Massachusetts Institute of Technology Investment Management Company (MITIMCo) announced today that MIT’s unitized pool of endowment and other MIT funds generated an investment return of 14.8 percent during the fiscal year ending June 30, 2025, as measured using valuations received within one month of fiscal year end. At the end of the fiscal year, MIT’s endowment funds totaled $27.4 billion, excluding pledges. Over the 10 years ending June 30, 2025, MIT generated an annualized return of 10.7 percent.
The endowment is the bedrock of MIT’s finances, made possible by gifts from alumni and friends for more than a century. The use of the endowment is governed by a state law that requires MIT to maintain each endowed gift as a permanent fund, preserve its purchasing power, and spend it as directed by its original donor. Most of the endowment’s funds are restricted and must be used for a specific purpose. MIT uses the bulk of the income these endowed gifts generate to support financial aid, research, and education.
The endowment supports 50 percent of undergraduate tuition, helping to enable the Institute’s need-blind undergraduate admissions policy, which ensures that an MIT education is accessible to all qualified candidates regardless of financial resources. MIT works closely with all families of undergraduates who qualify for financial aid to develop an individual affordability plan tailored to their financial circumstances. In 2024-25, the average need-based MIT undergraduate scholarship was $62,127. Fifty-seven percent of MIT undergraduates received need-based financial aid, and 39 percent of MIT undergraduate students received scholarship funding from MIT and other sources sufficient to cover the total cost of tuition.
Effective in fiscal 2026, MIT enhanced undergraduate financial aid, ensuring that all families with incomes below $200,000 and typical assets have tuition fully covered by scholarships, and that families with incomes below $100,000 and typical assets pay nothing at all for their students’ MIT education. Eighty-eight percent of seniors who graduated in academic year 2025 graduated with no debt.
MITIMCo is a unit of MIT, created to manage and oversee the investment of the Institute’s endowment, retirement, and operating funds.
MIT’s Report of the Treasurer for fiscal year 2025, which details the Institute’s annual financial performance, was made available publicly today.
Ray Kurzweil ’70 reinforces his optimism in tech progress
Innovator, futurist, and author Ray Kurzweil ’70 emphasized his optimism about artificial intelligence, and technological progress generally, in a lecture on Wednesday while accepting MIT’s Robert A. Muh Alumni Award from the School of Humanities, Arts, and Social Sciences (SHASS).
Kurzweil offered his signature high-profile forecasts about how AI and computing will entirely blend with human functionality, and proposed that AI will lead to monumental gains in longevity, medicine, and other realms of life.
“People do not appreciate that the rate of progress is accelerating,” Kurzweil said, forecasting “incredible breakthroughs” over the next two decades.
Kurzweil delivered his lecture, titled “Reinventing Intelligence,” in the Thomas Tull Concert Hall of the Edward and Joyce Linde Music Building, which opened earlier in 2025 on the MIT campus.
The Muh Award was founded and endowed by Robert A. Muh ’59 and his wife Berit, and is one of the leading alumni honors granted by SHASS and MIT. Muh, a life member emeritus of the MIT Corporation, established the award, which is granted every two years for “extraordinary contributions” by alumni in the humanities, arts, and social sciences.
Robert and Berit Muh were both present at the lecture, along with their daughter Carrie Muh ’96, ’97, SM ’97.
Agustín Rayo, dean of SHASS, offered introductory remarks, calling Kurzweil “one of the most prolific thinkers of our time.” Rayo added that Kurzweil “has built his life and career on the belief that ideas change the world, and change it for the better.”
Kurzweil has been an innovator in language recognition technologies, developing advances and founding companies whose products have served people who are blind or low-vision and have aided music creation. He is also a best-selling author who has heralded advances in computing capabilities, and even the merging of humans and machines.
The initial segment of Kurzweil’s lecture was autobiographical in focus, reflecting on his family and early years. The families of both of Kurzweil’s parents fled the Nazis in Europe, seeking refuge in the U.S., with the belief that people could create a brighter future for themselves.
“My parents taught me the power of ideas can really change the world,” Kurzweil said.
Showing an early interest in how things worked, Kurzweil had decided to become an inventor by about the age of 7, he recalled. He also described his mother as being tremendously encouraging to him as a child. The two would take walks together, and the young Kurzweil would talk about all the things he imagined inventing.
“I would tell her my ideas and no matter how fantastical they were, she believed them,” he said. “Now other parents might have simply chuckled … but she actually believed my ideas, and that actually gave me my confidence, and I think confidence is important in succeeding.”
He became interested in computing by the early 1960s and majored in both computer science and literature as an MIT undergraduate.
Kurzweil has a long-running association with MIT extending far beyond his undergraduate studies. He served as a member of the MIT Corporation from 2005 to 2012 and was the 2001 recipient of the $500,000 Lemelson-MIT Prize, an award for innovation, for his development of reading technology.
“MIT has played a major role in my personal and professional life over the years,” Kurzweil said, calling himself “truly honored to receive this award.” Addressing Muh, he added: “Your longstanding commitment to our alma mater is inspiring.”
After graduating from MIT, Kurzweil launched a successful career developing innovative computing products, including one that recognized text across all fonts and could produce an audio reading. He also developed leading-edge music synthesizers, among many other advances.
In a corresponding part of his career, Kurzweil has become an energetic author, whose best-known books include “The Age of Intelligent Machines” (1990), “The Age of Spiritual Machines” (1999), “The Singularity Is Near” (2005), and “The Singularity Is Nearer” (2024), among many others.
Kurzweil was recently named chief AI officer of Beyond Imagination, a robotics firm he co-founded; he has also held a position at Google in recent years, working on natural language technologies.
In his remarks, Kurzweil underscored his view that, as exemplified and enabled by the growth of computing power over time, technological innovation moves at an exponential pace.
“People don’t really think about exponential growth; they think about linear growth,” Kurzweil said.
This concept, he said, makes him confident that a string of innovations will continue at remarkable speed.
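The linear-versus-exponential contrast Kurzweil invokes is easy to see with a little arithmetic (the step count and growth rates here are arbitrary, chosen only to illustrate the gap):

```python
# Linear growth adds a fixed amount per step; exponential growth multiplies.
steps = 20
linear = 1 + steps        # +1 per step, starting from 1
exponential = 2 ** steps  # doubling each step, starting from 1
print(linear)       # 21
print(exponential)  # 1048576
```

After 20 steps, the doubling process exceeds the linear one by a factor of roughly 50,000, which is the intuition behind Kurzweil’s claim that people underestimate accelerating progress.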
“One of the bigger transformations we’re going to see from AI in the near term is health and medicine,” Kurzweil said, forecasting that human medical trials will be replaced by simulated “digital trials.”
Kurzweil also believes that advances in computing and AI will yield so many medical breakthroughs that human longevity will soon improve drastically.
“These incredible breakthroughs are going to lead to what we’ll call longevity escape velocity,” Kurzweil said. “By roughly 2032, when you live through a year, you’ll get back an entire year from scientific progress, and beyond that point you’ll get back more than a year for every year you live, so you’ll be going back in time as far as your health is concerned.” He did offer that these advances will “start” with people who are the most diligent about their health.
Kurzweil also outlined one of his best-known forecasts, that AI and people will be combined. “As we move forward, the lines between humans and technology will blur, until we are … one and the same,” Kurzweil said. “This is how we learn to merge with AI. In the 2030s, robots the size of molecules will go into our brains, noninvasively, through the capillaries, and will connect our brains directly to the cloud. Think of it like having a phone, but in your brain.”
“By 2045, once we have fully merged with AI, our intelligence will no longer be constrained … it will expand a millionfold,” he said. “This is what we call the singularity.”
To be sure, Kurzweil acknowledged, “Technology has always been a double-edged sword,” given that a drone can deliver either medical supplies or weaponry. “Threats of AI are real, must be taken seriously, [and] I think we are doing that,” he said. In any case, he added, we have “a moral imperative to realize the promise of new technologies while controlling the peril.” He concluded: “We are not doomed to fail to control any of these risks.”
Gene-Wei Li named associate head of the Department of Biology
Associate Professor Gene-Wei Li has accepted the position of associate head of the MIT Department of Biology, starting in the 2025-26 academic year.
Li, who has been a member of the department since 2015, brings a history of departmental leadership, service, and research and teaching excellence to his new role. He has received many awards, including a Sloan Research Fellowship (2016), an NSF CAREER Award (2019), Pew and Searle scholarships, and MIT’s Committed to Caring Award (2020). In 2024, he was appointed a Howard Hughes Medical Institute (HHMI) Investigator.
“I am grateful to Gene-Wei for joining the leadership team,” says department head Amy E. Keating, the Jay A. Stein (1968) Professor of Biology and professor of biological engineering. “Gene will be a key leader in our educational initiatives, both digital and residential, and will be a critical part of keeping our department strong and forward-looking.”
A great environment to do science
Li says he was inspired to take on the role in part because of the way MIT Biology facilitates career development during every stage — from undergraduate and graduate students to postdocs and junior faculty members, as he was when he started in the department as an assistant professor just 10 years ago.
“I think we all benefit a lot from our environment, and I think this is a great environment to do science and educate people, and to create a new generation of scientists,” he says. “I want us to keep doing well, and I’m glad to have the opportunity to contribute to this effort.”
As part of his portfolio as associate department head, Li will continue in the role of scientific director of the Koch Biology Building, Building 68. Over the last year, the previous scientific director, Stephen Bell, Uncas and Helen Whitaker Professor of Biology and HHMI Investigator, has provided support and ensured a steady ramp-up as Li transitioned into his new duties. The building, which opened its doors in 1994, is in need of a slate of updates and repairs.
Although Li will be managing more administrative duties, he has provided a stable foundation for his lab to continue its interdisciplinary work on the quantitative biology of gene expression, parsing the mechanisms by which cells control the levels of their proteins and how this enables cells to perform their functions. His recent work includes developing a method that leverages the AI tool AlphaFold to predict whether protein fragments can recapitulate the native interactions of their full-length counterparts.
“I’m still very heavily involved, and we have a lab environment where everyone helps each other. It’s a team, and so that helps elevate everyone,” he says. “It’s the same with the whole building: nobody is working by themselves, so the science and administrative parts come together really nicely.”
Teaching for the future
Li is considering how the department can continue to be a global leader in biological sciences while navigating the uncertainty surrounding academia and funding, as well as the likelihood of reduced staff support and tightening budgets.
“The question is: How do you maintain excellence?” Li says. “That involves recruiting great people and giving them the resources that they need, and that’s going to be a priority within the limitations that we have to work with.”
Li will also serve as faculty advisor for the MIT Biology Teaching and Learning Group, headed by Mary Ellen Wiltrout, and will sit on the Department of Biology Digital Learning Committee and the new Open Learning Biology Advisory Committee; in the latter role, he will represent the department and work with new faculty member and HHMI Investigator Ron Vale on Institute-level online learning initiatives. Li will also chair the Biology Academic Planning Committee, which will help develop a longer-term outlook on faculty teaching assignments and course offerings.
Li is looking forward to hearing from faculty and students about the way the Institute teaches, and how it could be improved, both for the students on campus and for the online learners from across the world.
“There are a lot of things that are changing; what are the core fundamentals that the students need to know, what should we teach them, and how should we teach them?”
Although the commitment to teaching remains unchanged, there may be big transitions on the horizon. With two young children in school, Li is all too aware that the way that students learn today is very different from what he grew up with, and also very different from how students were learning just five or 10 years ago — writing essays on a computer, researching online, using AI tools, and absorbing information from media like short-form YouTube videos.
“There’s a lot of appeal to a shorter format, but it’s very different from the lecture-based teaching style that has worked for a long time,” Li says. “I think a challenge we should and will face is figuring out the best way to communicate the core fundamentals, and adapting our teaching styles to the next generation of students.”
Ultimately, Li is excited about balancing his research goals along with joining the department’s leadership team, and knows he can look to his fellow researchers in Building 68 and beyond for support.
“I’m privileged to be working with a great group of colleagues who are all invested in these efforts,” Li says. “Different people may have different ways of doing things, but we all share the same mission.”
Immune-informed brain aging research offers new treatment possibilities, speakers say
Understanding how interactions between the central nervous system and the immune system contribute to problems of aging, including Alzheimer’s disease, Parkinson’s disease, arthritis, and more, can generate new leads for therapeutic development, speakers said at MIT’s symposium “The Neuro-Immune Axis and the Aging Brain” on Sept. 18.
“The past decade has brought rapid progress in our understanding of how adaptive and innate immune systems impact the pathogenesis of neurodegenerative disorders,” said Picower Professor Li-Huei Tsai, director of The Picower Institute for Learning and Memory and MIT’s Aging Brain Initiative (ABI), in her introduction to the event, which more than 450 people registered to attend. “Together, today’s speakers will trace how the neuro-immune axis shapes brain health and disease … Their work converges on the promise of immunology-informed therapies to slow or prevent neurodegeneration and age-related cognitive decline.”
For instance, keynote speaker Michal Schwartz of the Weizmann Institute in Israel described her decades of pioneering work to understand the neuro-immune “ecosystem.” Immune cells, she said, help the brain heal, and support many of its functions, including its “plasticity,” the ability it has to adapt to and incorporate new information. But Schwartz’s lab also found that an immune signaling cascade can arise with aging that undermines cognitive function. She has leveraged that insight to investigate and develop corrective immunotherapies that improve the brain’s immune response to Alzheimer’s both by rejuvenating the brain’s microglia immune cells and bringing in the help of peripheral immune cells called macrophages. Schwartz has brought the potential therapy to market as the chief science officer of ImmunoBrain, a company testing it in a clinical trial.
In her presentation, Tsai noted recent work from her lab and that of computer science professor and fellow ABI member Manolis Kellis showing that many of the genes associated with Alzheimer’s disease are most strongly expressed in microglia, giving the disease an expression profile more similar to autoimmune disorders than to many psychiatric ones (in which expression of disease-associated genes is typically highest in neurons). The study showed that microglia become “exhausted” over the course of disease progression, losing their cellular identity and becoming harmfully inflammatory.
“Genetic risk, epigenomic instability, and microglia exhaustion really play a central role in Alzheimer’s disease,” Tsai said, adding that her lab is now also looking into how immune T cells, recruited by microglia, may also contribute to Alzheimer’s disease progression.
The body and the brain
The neuro-immune “axis” connects not only the nervous and immune systems, but also extends between the whole body and the brain, with numerous implications for aging. Several speakers focused on the key conduit: the vagus nerve, which runs from the brain to the body’s major organs.
For instance, Sara Prescott, an investigator in the Picower Institute and an MIT assistant professor of biology, presented evidence her lab is amassing that the brain’s communication via vagus nerve terminals in the body’s airways is crucial for managing the body’s defense of respiratory tissues. Given that we inhale about 20,000 times a day, our airways are exposed to many environmental challenges, Prescott noted, and her lab and others are finding that the nervous system interacts directly with immune pathways to mount physiological responses. But vagal reflexes decline in aging, she noted, increasing susceptibility to infection, and so her lab is now working in mouse models to study airway-to-brain neurons throughout the lifespan to better understand how they change with aging.
In his talk, Caltech Professor Sarkis Mazmanian focused on work in his lab linking the gut microbiome to Parkinson’s disease (PD), for instance by promoting alpha-synuclein protein pathology and motor problems in mouse models. His lab hypothesizes that the microbiome can nucleate alpha-synuclein in the gut via a bacterial amyloid protein that may subsequently promote pathology in the brain, potentially via the vagus nerve. Based on its studies, the lab has developed two interventions. One is giving alpha-synuclein overexpressing mice a high-fiber diet to increase short-chain fatty acids in their gut, which actually modulates the activity of microglia in the brain. The high-fiber diet helps relieve motor dysfunction, corrects microglia activity, and reduces protein pathology, he showed. Another is a drug to disrupt the bacterial amyloid in the gut. It prevents alpha synuclein formation in the mouse brain and ameliorates PD-like symptoms. These results are pending publication.
Meanwhile, Kevin Tracey, professor at Hofstra University and Northwell Health, took listeners on a journey up and down the vagus nerve to the spleen, describing how impulses in the nerve regulate immune system emissions of signaling molecules, or “cytokines.” Too great a surge can become harmful, for instance causing the autoimmune disorder rheumatoid arthritis. Tracey described how a newly U.S. Food and Drug Administration-approved pill-sized neck implant to stimulate the vagus nerve helps patients with severe forms of the disease without suppressing their immune system.
The brain’s border
Other speakers discussed opportunities for understanding neuro-immune interactions in aging and disease at the “borders” where the brain’s and body’s immune system meet. These areas include the meninges that surround the brain, the choroid plexus (proximate to the ventricles, or open spaces, within the brain), and the interface between brain cells and the circulatory system.
For instance, taking a cue from studies showing that circadian disruptions are a risk factor for Alzheimer’s disease, Harvard Medical School Professor Beth Stevens of Boston Children’s Hospital described new research in her lab that examined how brain immune cells may function differently around the day-night cycle. The project, led by newly minted PhD Helena Barr, found that “border-associated macrophages” — long-lived immune cells residing in the brain’s borders — exhibited circadian rhythms in gene expression and function. Stevens described how these cells are tuned by the circadian clock to “eat” more during the rest phase, a process that may help remove material draining from the brain, including Alzheimer’s disease-associated peptides such as amyloid-beta. So, Stevens hypothesizes, circadian disruptions, for example due to aging or night-shift work, may contribute to disease onset by disrupting the delicate balance in immune-mediated “clean-up” of the brain and its borders.
Following Stevens at the podium, Washington University Professor Marco Colonna traced how various kinds of macrophages, including border macrophages and microglia, develop from the embryonic stage. He described the different gene-expression programs that guide their differentiation into one type or another. One gene he highlighted, for instance, is necessary for border macrophages along the brain’s vasculature to help regulate the waste-clearing cerebrospinal fluid (CSF) flow that Stevens also discussed. Knocking out the gene also impairs blood flow. Importantly, his lab has found that versions of the gene may be somewhat protective against Alzheimer’s, and that regulating expression of the gene could be a therapeutic strategy.
Colonna’s WashU colleague Jonathan Kipnis (a former student of Schwartz) also discussed macrophages associated with the particular border between brain tissue and the plumbing alongside the vasculature that carries CSF. These macrophages, his lab showed in 2022, actively govern the flow of CSF. He showed that removing the macrophages let Alzheimer’s proteins accumulate in mice. His lab is continuing to investigate ways in which these specific border macrophages may play roles in disease. He is also examining, in separate studies, how the skull’s bone marrow contributes to the population of immune cells in the brain and may play a role in neurodegeneration.
For all the talk of distant organs and the brain’s borders, neurons themselves were never far from the discussion. Harvard Medical School Professor Isaac Chiu gave them their direct due in a talk focusing on how they participate in their own immune defense, for instance by directly sensing pathogens and giving off inflammation signals upon cell death. He discussed a key molecule in that latter process, which is expressed among neurons all over the brain.
Whether they were looking within the brain, at its borders, or throughout the body, the speakers showed that age-related nervous system diseases can be better understood, and possibly better treated, by accounting not only for nerve cells but also for their immune system partners.
MIT Schwarzman College of Computing and MBZUAI launch international collaboration to shape the future of AI
The MIT Schwarzman College of Computing and the Mohamed bin Zayed University of Artificial Intelligence (MBZUAI) recently celebrated the launch of the MIT–MBZUAI Collaborative Research Program, a new effort to strengthen the building blocks of artificial intelligence and accelerate its use in pressing scientific and societal challenges.
Under the five-year agreement, faculty, students, and research staff from both institutions will collaborate on fundamental research projects to advance the technological foundations of AI and its applications in three core areas: scientific discovery, human thriving, and the health of the planet.
“Artificial intelligence is transforming nearly every aspect of human endeavor. MIT’s leadership in AI is greatly enriched through collaborations with leading academic institutions in the U.S. and around the world,” says Dan Huttenlocher, dean of the MIT Schwarzman College of Computing and the Henry Ellis Warren Professor of Electrical Engineering and Computer Science. “Our collaboration with MBZUAI reflects a shared commitment to advancing AI in ways that are responsible, inclusive, and globally impactful. Together, we can explore new horizons in AI and bring broad benefits to society.”
“This agreement will unite the efforts of researchers at two world-class institutions to advance frontier AI research across scientific discovery, human thriving, and the health of the planet. By combining MBZUAI’s focus on foundational models and real-world deployment with MIT’s depth in computing and interdisciplinary innovation, we are creating a transcontinental bridge for discovery. Together, we will not only expand the boundaries of AI science, but also ensure that these breakthroughs are pursued responsibly and applied where they matter most — improving human health, enabling intelligent robotics, and driving sustainable AI at scale,” says Eric Xing, president and university professor at MBZUAI.
Each institution has appointed an academic director to oversee the program on its campus. At MIT, Philip Isola, the Class of 1948 Career Development Professor in the Department of Electrical Engineering and Computer Science, will serve as program lead. At MBZUAI, Le Song, professor of machine learning, will take on the role.
Supported by MBZUAI — the first university dedicated entirely to advancing science through AI, and based in Abu Dhabi, U.A.E. — the collaboration will fund a number of joint research projects per year. The findings will be openly publishable, and each project will be led by a principal investigator from MIT and one from MBZUAI, with project selections made by a steering committee composed of representatives from both institutions.
Riccardo Comin, two MIT alumni named 2025 Moore Experimental Physics Investigators
MIT associate professor of physics Riccardo Comin has been selected as a 2025 Experimental Physics Investigator by the Gordon and Betty Moore Foundation. Two MIT physics alumni — Gyu-Boong Jo PhD ’10 of Rice University and Ben Jones PhD ’15 of the University of Texas at Arlington — were also among this year’s cohort of 22 honorees.
The prestigious Experimental Physics Investigators (EPI) Initiative recognizes mid-career scientists advancing the frontiers of experimental physics. Each award provides $1.3 million over five years to accelerate breakthroughs and strengthen the experimental physics community.
At MIT, Comin investigates magnetoelectric multiferroics by engineering interfaces between two-dimensional materials and three-dimensional oxide thin films. His research aims to overcome long-standing limitations in spin-charge coupling by moving beyond epitaxial constraints, enabling new interfacial phases and coupling mechanisms. In these systems, Comin’s team explores the coexistence and proximity of magnetic and ferroelectric order, with a focus on achieving strong magnetoelectric coupling. This approach opens new pathways for designing tunable multiferroic systems unconstrained by traditional synthesis methods.
Comin’s research expands the frontier of multiferroics by demonstrating stacking-controlled magnetoelectric coupling at 2D–3D interfaces. This approach enables exploration of fundamental physics in a versatile materials platform and opens new possibilities for spintronics, sensing, and data storage. By removing constraints of epitaxial growth, Comin’s work lays the foundation for microelectronic and spintronic devices with novel functionalities driven by interfacial control of spin and polarization.
Comin’s project, Interfacial MAGnetoElectrics (I-MAGinE), will study a new class of artificial magnetoelectric multiferroics at the interfaces between ferroic materials from 2D van der Waals systems and 3D oxide thin films. The team aims to identify and understand novel magnetoelectric effects and to demonstrate the viability of stacking-controlled interfacial magnetoelectric coupling. This research could make significant contributions to multiferroics and pave the way for innovative, energy-efficient storage devices.
“This research has the potential to make significant contributions to the field of multiferroics by demonstrating the viability of stacking-controlled interfacial magnetoelectric coupling,” according to Comin’s proposal. “The findings could pave the way for future applications in spintronics, data storage, and sensing. It offers a significant opportunity to explore fundamental physics questions in a novel materials platform, while laying the ground for future technological applications, including microelectronic and spintronic devices with new functionalities.”
Comin’s group has extensive experience in researching 2D and 3D ferroic materials and electronically ordered oxide thin films, as well as ultrathin van der Waals magnets, ferroelectrics, and multiferroics. Their lab is equipped with state-of-the-art tools for material synthesis, including bulk crystal growth of van der Waals materials and pulsed laser deposition, along with comprehensive fabrication and characterization capabilities. Their expertise in magneto-optical probes and advanced magnetic X-ray techniques promises to enable in-depth studies of electronic and magnetic structures, contributing significantly to the understanding of spin-charge coupling in magnetochiral materials.
The coexistence of ferroelectricity and ferromagnetism in a single material, known as multiferroicity, is rare, and strong spin-charge coupling is even rarer due to fundamental chemical and electronic structure incompatibilities.
The few known bulk multiferroics with strong magnetoelectric coupling generally rely on inversion symmetry-breaking spin arrangements, which only emerge at low temperatures, limiting practical applications. While interfacial magnetoelectric multiferroics offer an alternative, achieving efficient spin-charge coupling often requires stringent conditions like epitaxial growth and lattice matching, which limit material combinations. This research proposes to overcome these limitations by using non-epitaxial interfaces of 2D van der Waals materials and 3D oxide thin films.
Unique features of this approach include leveraging the versatility of 2D ferroics for seamless transfer onto any substrate, eliminating lattice matching requirements, and exploring new classes of interfacial magnetoelectric effects unconstrained by traditional thin-film synthesis limitations.
Launched in 2018, the Moore Foundation’s EPI Initiative cultivates collaborative research environments and provides research support to promote the discovery of new ideas and emphasize community building.
“We have seen numerous new connections form and new research directions pursued by both individuals and groups based on conversations at these gatherings,” says Catherine Mader, program officer for the initiative.
The Gordon and Betty Moore Foundation was established to create positive outcomes for future generations. In pursuit of that vision, it advances scientific discovery, environmental conservation, and the special character of the San Francisco Bay Area.
How to reduce greenhouse gas emissions from ammonia production
Ammonia is one of the most widely produced chemicals in the world, used mostly as fertilizer, but also for the production of some plastics, textiles, and other applications. Its production, through processes that require high heat and pressure, accounts for up to 20 percent of all the greenhouse gases from the entire chemical industry, so efforts have been underway worldwide to find ways to reduce those emissions.
Now, researchers at MIT have come up with a clever way of combining two different methods of producing the compound that minimizes waste products and that, when combined with some other simple upgrades, could reduce the greenhouse emissions from production by as much as 63 percent, compared to the leading “low-emissions” approach being used today.
The new approach is described in the journal Energy & Fuels, in a paper by MIT Energy Initiative (MITEI) Director William H. Green, graduate student Sayandeep Biswas, MITEI Director of Research Randall Field, and two others.
“Ammonia has the most carbon dioxide emissions of any kind of chemical,” says Green, who is the Hoyt C. Hottel Professor in Chemical Engineering. “It’s a very important chemical,” he says, because its use as a fertilizer is crucial to being able to feed the world’s population.
Until late in the 19th century, the most widely used source of nitrogen fertilizer was mined deposits of bat or bird guano, mostly from Chile, but that source was beginning to run out, and there were predictions that the world would soon be running short of food to sustain the population. But then a new chemical process, called the Haber-Bosch process after its inventors, made it possible to make ammonia out of nitrogen from the air and hydrogen, which was mostly derived from methane. But both the burning of fossil fuels to provide the needed heat and the use of methane to make the hydrogen led to massive climate-warming emissions from the process.
To address this, two newer variations of ammonia production have been developed: so-called “blue ammonia,” where the greenhouse gases are captured right at the factory and then sequestered deep underground, and “green ammonia,” produced by a different chemical pathway, using electricity instead of fossil fuels to electrolyze water to make hydrogen.
Blue ammonia is already beginning to be used, with a few plants operating now in Louisiana, Green says, and the ammonia mostly being shipped to Japan, “so that’s already kind of commercial.” Other parts of the world are starting to use green ammonia, especially in places that have lots of hydropower, solar, or wind to provide inexpensive electricity, including a giant plant now under construction in Saudi Arabia.
But in most places, both blue and green ammonia are still more expensive than the traditional fossil-fuel-based version, so many teams around the world have been working on ways to cut these costs as much as possible so that the difference is small enough to be made up through tax subsidies or other incentives.
The problem is growing: as the population expands and wealth increases, demand for nitrogen fertilizer will keep rising. At the same time, ammonia is a promising substitute fuel to power hard-to-decarbonize transportation such as cargo ships and heavy trucks, which could lead to even greater needs for the chemical.
“It definitely works” as a transportation fuel, by powering fuel cells that have been demonstrated for use by everything from drones to barges and tugboats and trucks, Green says. “People think that the most likely market of that type would be for shipping,” he says, “because the downside of ammonia is it’s toxic and it’s smelly, and that makes it slightly dangerous to handle and to ship around.” So its best uses may be where it’s used in high volume and in relatively remote locations, like the high seas. In fact, the International Maritime Organization will soon be voting on new rules that might give a strong boost to the ammonia alternative for shipping.
The key to the new proposed system is to combine the two existing approaches in one facility, with a blue ammonia factory next to a green ammonia factory. The process of generating hydrogen for the green ammonia plant leaves a lot of leftover oxygen that just gets vented to the air. Blue ammonia, on the other hand, uses a process called autothermal reforming that requires a source of pure oxygen, so if there’s a green ammonia plant next door, it can use that excess oxygen.
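The scale of that oxygen synergy can be checked with back-of-the-envelope stoichiometry (illustrative arithmetic, not figures from the paper): electrolysis yields half a mole of O2 per mole of H2, and Haber-Bosch consumes 1.5 moles of H2 per mole of NH3.

```python
# Back-of-the-envelope sketch (not from the paper) of why co-location helps:
# electrolysis for green ammonia yields oxygen as a byproduct, and
# autothermal reforming for blue ammonia consumes pure oxygen.
M_H2, M_O2, M_NH3 = 2.016, 31.998, 17.031  # molar masses, g/mol

# Electrolysis: 2 H2O -> 2 H2 + O2, i.e., 0.5 mol O2 per mol H2.
o2_per_kg_h2 = 0.5 * M_O2 / M_H2           # ~7.9 kg O2 per kg H2

# Haber-Bosch: N2 + 3 H2 -> 2 NH3, i.e., 1.5 mol H2 per mol NH3.
h2_per_kg_nh3 = 1.5 * M_H2 / M_NH3         # ~0.18 kg H2 per kg NH3

o2_per_tonne_nh3 = o2_per_kg_h2 * h2_per_kg_nh3 * 1000
print(f"{o2_per_tonne_nh3:.0f} kg of byproduct O2 per tonne of green NH3")
```

So every tonne of green ammonia frees up roughly 1.4 tonnes of pure oxygen that would otherwise be vented, which a neighboring autothermal reformer can consume instead of running its own air-separation unit.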
“Putting them next to each other turns out to have significant economic value,” Green says. This synergy could help hybrid “blue-green ammonia” facilities serve as an important bridge toward a future where eventually green ammonia, the cleanest version, could finally dominate. But that future is likely decades away, Green says, so having the combined plants could be an important step along the way.
“It might be a really long time before [green ammonia] is actually attractive” economically, he says. “Right now, it’s nowhere close, except in very special situations.” But the combined plants “could be a really appealing concept, and maybe a good way to start the industry,” because so far only small, standalone demonstration plants of the green process are being built.
“If green or blue ammonia is going to become the new way of making ammonia, you need to find ways to make it relatively affordable in a lot of countries, with whatever resources they’ve got,” he says. This new proposed combination, he says, “looks like a really good idea that can help push things along. Ultimately, there’s got to be a lot of green ammonia plants in a lot of places,” and starting out with the combined plants, which could be more affordable now, could help to make that happen. The team has filed for a patent on the process.
Although the team did a detailed study of both the technology and the economics that show the system has great promise, Green points out that “no one has ever built one. We did the analysis, it looks good, but surely when people build the first one, they’ll find funny little things that need some attention,” such as details of how to start up or shut down the process. “I would say there’s plenty of additional work to do to make it a real industry.” But the results of this study, which shows the costs to be much more affordable than existing blue or green plants in isolation, “definitely encourages the possibility of people making the big investments that would be needed to really make this industry feasible.”
This proposed integration of the two methods “improves efficiency, reduces greenhouse gas emissions, and lowers overall cost,” says Kevin van Geem, a professor in the Center for Sustainable Chemistry at Ghent University, who was not associated with this research. “The analysis is rigorous, with validated process models, transparent assumptions, and comparisons to literature benchmarks. By combining techno-economic analysis with emissions accounting, the work provides a credible and balanced view of the trade-offs.”
He adds that, “given the scale of global ammonia production, such a reduction could have a highly impactful effect on decarbonizing one of the most emissions-intensive chemical industries.”
The research team also included MIT postdoc Angiras Menon and MITEI research lead Guiyan Zang. The work was supported by IHI Japan through the MIT Energy Initiative and the Martin Family Society of Fellows for Sustainability.
Using generative AI to diversify virtual training grounds for robots
Chatbots like ChatGPT and Claude have experienced a meteoric rise in usage over the past three years because they can help you with a wide range of tasks. Whether you’re writing Shakespearean sonnets, debugging code, or need an answer to an obscure trivia question, artificial intelligence systems seem to have you covered. The source of this versatility? Billions, or even trillions, of textual data points across the internet.
Those data aren’t enough to teach a robot to be a helpful household or factory assistant, though. To understand how to handle, stack, and place various arrangements of objects across diverse environments, robots need demonstrations. You can think of robot training data as a collection of how-to videos that walk the systems through each motion of a task. Collecting these demonstrations on real robots is time-consuming and not perfectly repeatable, so engineers have created training data by generating simulations with AI (which often fail to reflect real-world physics), or tediously handcrafting each digital environment from scratch.
Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Toyota Research Institute may have found a way to create the diverse, realistic training grounds robots need. Their “steerable scene generation” approach creates digital scenes of things like kitchens, living rooms, and restaurants that engineers can use to simulate lots of real-world interactions and scenarios. Trained on over 44 million 3D rooms filled with models of objects such as tables and plates, the tool places existing assets in new scenes, then refines each one into a physically accurate, lifelike environment.
Steerable scene generation creates these 3D worlds by “steering” a diffusion model — an AI system that generates a visual from random noise — toward a scene you’d find in everyday life. The researchers used this generative system to “in-paint” an environment, filling in particular elements throughout the scene. You can imagine a blank canvas suddenly turning into a kitchen scattered with 3D objects, which are gradually rearranged into a scene that imitates real-world physics. For example, the system ensures that a fork doesn’t pass through a bowl on a table — a common glitch in 3D graphics known as “clipping,” where models overlap or intersect.
How exactly steerable scene generation guides its creation toward realism, however, depends on the strategy you choose. Its main strategy is “Monte Carlo tree search” (MCTS), where the model creates a series of alternative scenes, filling them out in different ways toward a particular objective (like making a scene more physically realistic, or including as many edible items as possible). It’s used by the AI program AlphaGo to beat human opponents in Go (a game similar to chess), as the system considers potential sequences of moves before choosing the most advantageous one.
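To make the framing concrete, here is a toy MCTS over sequential scene-building decisions. The object library, the feasibility cap, and the edible-items objective are all invented for this sketch, and the real system steers a diffusion model rather than placing named objects; only the search structure mirrors the idea described above.

```python
import math
import random

# Toy illustration of MCTS over scene generation as sequential decisions.
LIBRARY = ["plate", "fork", "dumpling", "bun", "teacup", "bowl"]
EDIBLE = {"dumpling", "bun"}
MAX_OBJECTS = 6  # stand-in for a physical-feasibility limit

def reward(scene):
    # Example objective from the article: as many edible items as possible.
    return sum(obj in EDIBLE for obj in scene) / MAX_OBJECTS

def rollout(partial):
    # Finish the partial scene with random placements, then score it.
    scene = list(partial)
    while len(scene) < MAX_OBJECTS:
        scene.append(random.choice(LIBRARY))
    return scene, reward(scene)

def mcts(iterations=2000, c=0.5):
    visits, value = {(): 0}, {(): 0.0}   # statistics per partial scene
    best_scene, best_score = [], -1.0
    for _ in range(iterations):
        node, path = (), [()]
        # Selection: descend by UCB until we can expand a new partial scene.
        while len(node) < MAX_OBJECTS:
            children = [node + (obj,) for obj in LIBRARY]
            unexplored = [ch for ch in children if ch not in visits]
            if unexplored:
                node = random.choice(unexplored)          # expansion
                visits[node], value[node] = 0, 0.0
                path.append(node)
                break
            node = max(children, key=lambda ch: value[ch] / visits[ch]
                       + c * math.sqrt(math.log(visits[path[-1]]) / visits[ch]))
            path.append(node)
        scene, score = rollout(node)                      # simulation
        if score > best_score:
            best_scene, best_score = scene, score
        for n in path:                                    # backpropagation
            visits[n] += 1
            value[n] += score
    return best_scene

random.seed(0)
best = mcts()
print(best, reward(best))
```

The search keeps extending promising partial scenes rather than sampling whole scenes independently, which is how, in the real system, MCTS can produce scenes more complex than anything in the training distribution.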
“We are the first to apply MCTS to scene generation by framing the scene generation task as a sequential decision-making process,” says MIT Department of Electrical Engineering and Computer Science (EECS) PhD student Nicholas Pfaff, who is a CSAIL researcher and a lead author on a paper presenting the work. “We keep building on top of partial scenes to produce better or more desired scenes over time. As a result, MCTS creates scenes that are more complex than what the diffusion model was trained on.”
In one particularly telling experiment, MCTS added the maximum number of objects to a simple restaurant scene. It featured as many as 34 items on a table, including massive stacks of dim sum dishes, after training on scenes with only 17 objects on average.
Steerable scene generation also allows you to generate diverse training scenarios via reinforcement learning — essentially, teaching a diffusion model to fulfill an objective by trial-and-error. After you train on the initial data, your system undergoes a second training stage, where you outline a reward (basically, a desired outcome with a score indicating how close you are to that goal). The model automatically learns to create scenes with higher scores, often producing scenarios that are quite different from those it was trained on.
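As a toy illustration of that second, reward-driven stage (invented for this sketch; the actual work fine-tunes a diffusion model, not a single probability), a REINFORCE-style update can nudge a trivially simple scene "generator" toward scenes that score higher rewards:

```python
import random

# Hypothetical sketch: the "generator" is just a probability p of placing an
# edible object in each of N_OBJECTS slots, tuned by trial-and-error.
N_OBJECTS = 8

def sample_scene(p):
    return [random.random() < p for _ in range(N_OBJECTS)]

def reward(scene):
    # Desired outcome: as many edible items as possible, scored 0..1.
    return sum(scene) / N_OBJECTS

random.seed(1)
p, lr = 0.2, 0.01            # the "pretrained" generator rarely places edibles
for _ in range(300):
    scene = sample_scene(p)
    r = reward(scene)
    # REINFORCE: gradient of log-probability for a Bernoulli(p) per slot,
    # with the expected reward (which equals p here) as the baseline.
    grad_logp = sum(float(x) - p for x in scene) / (p * (1 - p))
    p += lr * (r - p) * grad_logp
    p = min(max(p, 0.01), 0.99)

print(round(p, 2))  # p has climbed well above its initial 0.2
```

The same mechanism, scaled up, is what lets the trained model produce high-reward scenes quite different from those it was pretrained on.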
Users can also prompt the system directly by typing in specific visual descriptions (like “a kitchen with four apples and a bowl on the table”). Then, steerable scene generation can bring your requests to life with precision. For example, the tool accurately followed users’ prompts at rates of 98 percent when building scenes of pantry shelves, and 86 percent for messy breakfast tables. Both marks are at least a 10 percent improvement over comparable methods like “MiDiffusion” and “DiffuScene.”
The system can also complete specific scenes via prompting or light directions (like “come up with a different scene arrangement using the same objects”). You could ask it to place apples on several plates on a kitchen table, for instance, or put board games and books on a shelf. It’s essentially “filling in the blank” by slotting items in empty spaces, but preserving the rest of a scene.
According to the researchers, the strength of their project lies in its ability to create many scenes that roboticists can actually use. “A key insight from our findings is that it’s OK for the scenes we pre-trained on to not exactly resemble the scenes that we actually want,” says Pfaff. “Using our steering methods, we can move beyond that broad distribution and sample from a ‘better’ one. In other words, generating the diverse, realistic, and task-aligned scenes that we actually want to train our robots in.”
Such vast scenes became the testing grounds where they could record a virtual robot interacting with different items. The machine carefully placed forks and knives into a cutlery holder, for instance, and rearranged bread onto plates in various 3D settings. Each simulation appeared fluid and realistic, resembling the adaptable, real-world robots that steerable scene generation could one day help train.
While the system could be an encouraging path forward in generating lots of diverse training data for robots, the researchers say their work is more of a proof of concept. In the future, they’d like to use generative AI to create entirely new objects and scenes, instead of using a fixed library of assets. They also plan to incorporate articulated objects that the robot could open or twist (like cabinets or jars filled with food) to make the scenes even more interactive.
To make their virtual environments even more realistic, Pfaff and his colleagues may incorporate real-world objects by using a library of objects and scenes pulled from images on the internet and using their previous work on “Scalable Real2Sim.” By expanding how diverse and lifelike AI-constructed robot testing grounds can be, the team hopes to build a community of users that’ll create lots of data, which could then be used as a massive dataset to teach dexterous robots different skills.
“Today, creating realistic scenes for simulation can be quite a challenging endeavor; procedural generation can readily produce a large number of scenes, but they likely won’t be representative of the environments the robot would encounter in the real world. Manually creating bespoke scenes is both time-consuming and expensive,” says Jeremy Binagia, an applied scientist at Amazon Robotics who wasn’t involved in the paper. “Steerable scene generation offers a better approach: train a generative model on a large collection of pre-existing scenes and adapt it (using a strategy such as reinforcement learning) to specific downstream applications. Compared to previous works that leverage an off-the-shelf vision-language model or focus just on arranging objects in a 2D grid, this approach guarantees physical feasibility and considers full 3D translation and rotation, enabling the generation of much more interesting scenes.”
“Steerable scene generation with post training and inference-time search provides a novel and efficient framework for automating scene generation at scale,” says Toyota Research Institute roboticist Rick Cory SM ’08, PhD ’10, who also wasn’t involved in the paper. “Moreover, it can generate ‘never-before-seen’ scenes that are deemed important for downstream tasks. In the future, combining this framework with vast internet data could unlock an important milestone towards efficient training of robots for deployment in the real world.”
Pfaff wrote the paper with senior author Russ Tedrake, the Toyota Professor of Electrical Engineering and Computer Science, Aeronautics and Astronautics, and Mechanical Engineering at MIT, who is also a senior vice president of large behavior models at the Toyota Research Institute and a CSAIL principal investigator. Other authors were Toyota Research Institute robotics researcher Hongkai Dai SM ’12, PhD ’16; team lead and Senior Research Scientist Sergey Zakharov; and Carnegie Mellon University PhD student Shun Iwase. Their work was supported, in part, by Amazon and the Toyota Research Institute. The researchers presented their work at the Conference on Robot Learning (CoRL) in September.
MIT physicists improve the precision of atomic clocks
Every time you check the time on your phone, make an online transaction, or use a navigation app, you are depending on the precision of atomic clocks.
An atomic clock keeps time by relying on the “ticks” of atoms as they naturally oscillate at rock-steady frequencies. Today’s atomic clocks operate by tracking cesium atoms, which tick nearly 10 billion times per second. Each of those ticks is precisely tracked using lasers that oscillate in sync, at microwave frequencies.
Scientists are developing next-generation atomic clocks that rely on even faster-ticking atoms such as ytterbium, which can be tracked with lasers at higher, optical frequencies. If they can be kept stable, optical atomic clocks could track even finer intervals of time, up to 100 trillion times per second.
Now, MIT physicists have found a way to improve the stability of optical atomic clocks, by reducing “quantum noise” — a fundamental measurement limitation due to the effects of quantum mechanics, which obscures the atoms’ pure oscillations. In addition, the team discovered that an effect of a clock’s laser on the atoms, previously considered irrelevant, can be used to further stabilize the laser.
The researchers developed a method to harness a laser-induced “global phase” in ytterbium atoms, and have boosted this effect with a quantum-amplification technique. The new approach doubles the precision of an optical atomic clock, enabling it to discern twice as many ticks per second compared to the same setup without the new method. What’s more, they anticipate that the precision of the method should increase steadily with the number of atoms in an atomic clock.
The researchers detail the method, which they call global phase spectroscopy, in a study appearing today in the journal Nature. They envision that the clock-stabilizing technique could one day enable portable optical atomic clocks that can be transported to various locations to measure all manner of phenomena.
“With these clocks, people are trying to detect dark matter and dark energy, and test whether there really are just four fundamental forces, and even to see if these clocks can predict earthquakes,” says study author Vladan Vuletić, the Lester Wolfe Professor of Physics at MIT. “We think our method can help make these clocks transportable and deployable to where they’re needed.”
The paper’s co-authors are Leon Zaporski, Qi Liu, Gustavo Velez, Matthew Radzihovsky, Zeyang Li, Simone Colombo, and Edwin Pedrozo-Peñafiel, who are members of the MIT-Harvard Center for Ultracold Atoms and the MIT Research Laboratory of Electronics.
Ticking time
In 2020, Vuletić and his colleagues demonstrated that an atomic clock could be made more precise by quantumly entangling the clock’s atoms. Quantum entanglement is a phenomenon by which particles can be made to behave in a collective, highly correlated manner. When atoms are quantumly entangled, they redistribute any noise, or uncertainty in measuring the atoms’ oscillations, in a way that reveals a clearer, more measurable “tick.”
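The "quantum noise" being redistributed has a textbook scaling: for N uncorrelated atoms, the phase uncertainty of a single clock measurement goes as 1/√N, the standard quantum limit, and entanglement is what allows a clock to beat it. A minimal sketch of that scaling, using an illustrative atom number (the article says only "several hundred"):

```python
import math

# Standard quantum limit (textbook scaling, not the paper's analysis):
# phase uncertainty per measurement for N uncorrelated atoms.
def sql_phase_uncertainty(n_atoms):
    return 1 / math.sqrt(n_atoms)

n = 400  # illustrative: "several hundred" ytterbium atoms
print(sql_phase_uncertainty(n))        # 0.05 rad
# Doubling precision without entanglement would require four times the atoms:
print(sql_phase_uncertainty(4 * n))    # 0.025 rad
```

This is why the reported doubling of precision matters: entanglement delivers a gain that would otherwise demand a fourfold-larger atomic ensemble.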
In their previous work, the team induced quantum entanglement among several hundred ytterbium atoms that they first cooled and trapped in a cavity formed by two curved mirrors. They sent laser light into the cavity, where it bounced thousands of times between the mirrors, interacting with the atoms and causing the ensemble to entangle. They were able to show that quantum entanglement could improve the precision of existing atomic clocks by essentially reducing the noise, or uncertainty between the laser’s and atoms’ tick rates.
At the time, however, they were limited by the ticking instability of the clock’s laser. In 2022, the same team derived a way to further amplify the difference in laser versus atom tick rates with “time reversal” — a trick that relies on entangling and de-entangling the atoms to boost the signal acquired in between.
However, in that work the team was still using traditional microwaves, which oscillate at much lower frequencies than the optical frequency standards ytterbium atoms can provide. It was as if they had painstakingly lifted a film of dust off a painting, only to then photograph it with a low-resolution camera.
“When you have atoms that tick 100 trillion times per second, that’s 10,000 times faster than the frequency of microwaves,” Vuletić says. “We didn’t know at the time how to apply these methods to higher-frequency optical clocks that are much harder to keep stable.”
About phase
In their new study, the team found a way to apply their previously developed time-reversal approach to optical atomic clocks, sending in a laser that oscillates near the optical frequency of the entangled atoms.
“The laser ultimately inherits the ticking of the atoms,” says first author Zaporski. “But in order for this inheritance to hold for a long time, the laser has to be quite stable.”
The researchers found they were able to improve the stability of an optical atomic clock by taking advantage of a phenomenon that scientists had assumed was inconsequential to the operation. They realized that when light is sent through entangled atoms, the interaction can cause the atoms to jump up in energy, then settle back down into their original energy state and still carry the memory about their round trip.
“One might think we’ve done nothing,” Vuletić says. “You get this global phase of the atoms, which is usually considered irrelevant. But this global phase contains information about the laser frequency.”
In other words, they realized that the laser was inducing a measurable change in the atoms, despite bringing them back to the original energy state, and that the magnitude of this change depends on the laser’s frequency.
“Ultimately, we are looking for the difference of laser frequency and the atomic transition frequency,” explains co-author Liu. “When that difference is small, it gets drowned by quantum noise. Our method amplifies this difference above this quantum noise.”
In their experiments, the team applied this new approach and found that through entanglement they were able to double the precision of their optical atomic clock.
“We saw that we can now resolve nearly twice as small a difference in the optical frequency, or the clock ticking frequency, without running into the quantum noise limit,” Zaporski says. “Although it’s a hard problem in general to run atomic clocks, the technical benefits of our method will make it easier, and we think this can enable stable, transportable atomic clocks.”
This research was supported, in part, by the U.S. Office of Naval Research, the National Science Foundation, the U.S. Defense Advanced Research Projects Agency, the U.S. Department of Energy, the U.S. Office of Science, the National Quantum Information Science Research Centers, and the Quantum Systems Accelerator.
