MIT Latest News
New prediction model could improve the reliability of fusion power plants
Tokamaks are machines that are meant to hold and harness the power of the sun. These fusion machines use powerful magnets to contain a plasma hotter than the sun’s core and push the plasma’s atoms to fuse and release energy. If tokamaks can operate safely and efficiently, the machines could one day provide clean and limitless fusion energy.
Today, there are a number of experimental tokamaks in operation around the world, with more underway. Most are small-scale research machines built to investigate how the devices can spin up plasma and harness its energy. One of the challenges that tokamaks face is how to safely and reliably turn off a plasma current that is circulating at speeds of up to 100 kilometers per second, at temperatures of over 100 million degrees Celsius.
Such “rampdowns” are necessary when a plasma becomes unstable. To prevent the plasma from further disrupting and potentially damaging the device’s interior, operators ramp down the plasma current. But occasionally the rampdown itself can destabilize the plasma. In some machines, rampdowns have caused scrapes and scarring to the tokamak’s interior — minor damage that still requires considerable time and resources to repair.
Now, scientists at MIT have developed a method to predict how plasma in a tokamak will behave during a rampdown. The team combined machine-learning tools with a physics-based model of plasma dynamics to simulate a plasma’s behavior and any instabilities that may arise as the plasma is ramped down and turned off. The researchers trained and tested the new model on plasma data from an experimental tokamak in Switzerland. They found the method quickly learned how plasma would evolve as it was tuned down in different ways. What’s more, the method achieved a high level of accuracy using a relatively small amount of data. This training efficiency is promising, given that each experimental run of a tokamak is expensive and quality data is limited as a result.
The new model, which the team highlights this week in an open-access Nature Communications paper, could improve the safety and reliability of future fusion power plants.
“For fusion to be a useful energy source it’s going to have to be reliable,” says lead author Allen Wang, a graduate student in aeronautics and astronautics and a member of the Disruption Group at MIT’s Plasma Science and Fusion Center (PSFC). “To be reliable, we need to get good at managing our plasmas.”
The study’s MIT co-authors include PSFC Principal Research Scientist and Disruption Group leader Cristina Rea, and members of the Laboratory for Information and Decision Systems (LIDS) Oswin So, Charles Dawson, and Professor Chuchu Fan, along with Mark (Dan) Boyer of Commonwealth Fusion Systems and collaborators from the Swiss Plasma Center in Switzerland.
“A delicate balance”
Tokamaks are experimental fusion devices that were first built in the Soviet Union in the 1950s. The device gets its name from a Russian acronym that translates to a “toroidal chamber with magnetic coils.” Just as its name describes, a tokamak is toroidal, or donut-shaped, and uses powerful magnets to contain and spin up a gas to temperatures and energies high enough that atoms in the resulting plasma can fuse and release energy.
Today’s tokamak experiments are relatively small and low-energy, with few approaching the size and output needed to generate safe, reliable, usable energy. Disruptions in these experimental, low-energy tokamaks are generally not an issue. But as fusion machines scale up to grid-scale dimensions, controlling much higher-energy plasmas at all phases of operation will be paramount to maintaining a machine’s safe and efficient operation.
“Uncontrolled plasma terminations, even during rampdown, can generate intense heat fluxes damaging the internal walls,” Wang notes. “Quite often, especially with the high-performance plasmas, rampdowns actually can push the plasma closer to some instability limits. So, it’s a delicate balance. And there’s a lot of focus now on how to manage instabilities so that we can routinely and reliably take these plasmas and safely power them down. And there are relatively few studies done on how to do that well.”
Bringing down the pulse
Wang and his colleagues developed a model to predict how a plasma will behave during tokamak rampdown. While they could have simply applied machine-learning tools such as a neural network to learn signs of instabilities in plasma data, “you would need an ungodly amount of data” for such tools to discern the very subtle and ephemeral changes in extremely high-temperature, high-energy plasmas, Wang says.
Instead, the researchers paired a neural network with an existing model that simulates plasma dynamics according to the fundamental rules of physics. With this combination of machine learning and a physics-based plasma simulation, the team found that only a couple hundred pulses at low performance, and a small handful of pulses at high performance, were sufficient to train and validate the new model.
The data they used for the new study came from the TCV, the Swiss “variable configuration tokamak” operated by the Swiss Plasma Center at EPFL (the Swiss Federal Institute of Technology Lausanne). The TCV is a small experimental fusion device used for research purposes, often as a test bed for next-generation device solutions. Wang used data from several hundred TCV plasma pulses that included properties of the plasma, such as its temperature and energies, during each pulse’s ramp-up, run, and ramp-down. He trained the new model on this data, then tested it and found it was able to accurately predict the plasma’s evolution given the initial conditions of a particular tokamak run.
The researchers also developed an algorithm to translate the model’s predictions into practical “trajectories”: plasma-managing instructions that a tokamak controller can automatically carry out, for instance adjusting the magnets or the temperature, to maintain the plasma’s stability. They implemented the algorithm on several TCV runs and found that it produced trajectories that safely ramped down a plasma pulse, in some cases faster and without disruptions, compared with runs that did not use the new method.
“At some point the plasma will always go away, but we call it a disruption when the plasma goes away at high energy. Here, we ramped the energy down to nothing,” Wang notes. “We did it a number of times. And we did things much better across the board. So, we had statistical confidence that we made things better.”
The work was supported in part by Commonwealth Fusion Systems (CFS), an MIT spinout that intends to build the world’s first compact, grid-scale fusion power plant. The company is developing a demo tokamak, SPARC, designed to produce net-energy plasma, meaning that it should generate more energy than it takes to heat up the plasma. Wang and his colleagues are working with CFS on ways that the new prediction model and tools like it can better predict plasma behavior and prevent costly disruptions to enable safe and reliable fusion power.
“We’re trying to tackle the science questions to make fusion routinely useful,” Wang says. “What we’ve done here is the start of what is still a long journey. But I think we’ve made some nice progress.”
Additional support for the research came through the EUROfusion Consortium, via the Euratom Research and Training Program, and from the Swiss State Secretariat for Education, Research, and Innovation.
Printable aluminum alloy sets strength records, may enable lighter aircraft parts
MIT engineers have developed a printable aluminum alloy that can withstand high temperatures and is five times stronger than traditionally manufactured aluminum.
The new printable metal is made from a mix of aluminum and other elements that the team identified using a combination of simulations and machine learning, which significantly pruned the number of possible combinations of materials to search through. While traditional methods would require simulating over 1 million possible combinations of materials, the team’s new machine learning-based approach needed only to evaluate 40 possible compositions before identifying an ideal mix for a high-strength, printable aluminum alloy.
When they printed the alloy and tested the resulting material, the team confirmed that, as predicted, the aluminum alloy was as strong as the strongest aluminum alloys that are manufactured today using traditional casting methods.
The researchers envision that the new printable aluminum could be made into stronger, more lightweight and temperature-resistant products, such as fan blades in jet engines. Fan blades are traditionally cast from titanium — a material that is more than 50 percent heavier and up to 10 times costlier than aluminum — or made from advanced composites.
“If we can use lighter, high-strength material, this would save a considerable amount of energy for the transportation industry,” says Mohadeseh Taheri-Mousavi, who led the work as a postdoc at MIT and is now an assistant professor at Carnegie Mellon University.
“Because 3D printing can produce complex geometries, save material, and enable unique designs, we see this printable alloy as something that could also be used in advanced vacuum pumps, high-end automobiles, and cooling devices for data centers,” adds John Hart, the Class of 1922 Professor and head of the Department of Mechanical Engineering at MIT.
Hart and Taheri-Mousavi provide details on the new printable aluminum design in a paper published in the journal Advanced Materials. The paper’s MIT co-authors include Michael Xu, Clay Houser, Shaolou Wei, James LeBeau, and Greg Olson, along with Florian Hengsbach and Mirko Schaper of Paderborn University in Germany, and Zhaoxuan Ge and Benjamin Glaser of Carnegie Mellon University.
Micro-sizing
The new work grew out of an MIT class that Taheri-Mousavi took in 2020, which was taught by Greg Olson, professor of the practice in the Department of Materials Science and Engineering. As part of the class, students learned to use computational simulations to design high-performance alloys. Alloys are materials that are made from a mix of different elements, the combination of which imparts exceptional strength and other unique properties to the material as a whole.
Olson challenged the class to design an aluminum alloy that would be stronger than the strongest printable aluminum alloy designed to date. As with most materials, the strength of aluminum depends in large part on its microstructure: The smaller and more densely packed its microscopic constituents, or “precipitates,” the stronger the alloy.
With this in mind, the class used computer simulations to methodically combine aluminum with various types and concentrations of elements, to simulate and predict the resulting alloy’s strength. However, the exercise failed to produce a stronger result. At the end of the class, Taheri-Mousavi wondered: Could machine learning do better?
“At some point, there are a lot of things that contribute nonlinearly to a material’s properties, and you are lost,” Taheri-Mousavi says. “With machine-learning tools, they can point you to where you need to focus, and tell you for example, these two elements are controlling this feature. It lets you explore the design space more efficiently.”
Layer by layer
In the new study, Taheri-Mousavi continued where Olson’s class left off, looking to identify a stronger recipe for an aluminum alloy. This time, she used machine-learning techniques designed to efficiently comb through data such as the properties of elements, to identify key connections and correlations that should lead to a more desirable outcome or product.
She found that, using just 40 compositions mixing aluminum with different elements, the machine-learning approach quickly homed in on a recipe for an aluminum alloy with a higher volume fraction of small precipitates, and therefore higher strength, than what previous studies had identified. The alloy’s strength was even higher than what the team could identify after simulating over 1 million possibilities without using machine learning.
To physically produce this new strong, small-precipitate alloy, the team realized 3D printing would be the way to go, rather than traditional metal casting, in which molten aluminum is poured into a mold and left to cool and harden. The longer this cooling time, the more likely the individual precipitates are to grow.
The researchers showed that 3D printing, broadly also known as additive manufacturing, can be a faster way to cool and solidify the aluminum alloy. Specifically, they considered laser powder bed fusion (LPBF) — a technique in which a powder is deposited, layer by layer, on a surface in a desired pattern and then quickly melted by a laser that traces over the pattern. The melted pattern is thin enough that it solidifies quickly before another layer is deposited and similarly “printed.” The team found that LPBF’s inherently rapid cooling and solidification enabled the small-precipitate, high-strength aluminum alloy that their machine-learning method predicted.
“Sometimes we have to think about how to get a material to be compatible with 3D printing,” says study co-author John Hart. “Here, 3D printing opens a new door because of the unique characteristics of the process — particularly, the fast cooling rate. Very rapid freezing of the alloy after it’s melted by the laser creates this special set of properties.”
Putting their idea into practice, the researchers ordered a formulation of printable powder, based on their new aluminum alloy recipe. They sent the powder — a mix of aluminum and five other elements — to collaborators in Germany, who printed small samples of the alloy using their in-house LPBF system. The samples were then sent to MIT where the team ran multiple tests to measure the alloy’s strength and image the samples’ microstructure.
Their results confirmed the predictions made by their initial machine-learning search: The printed alloy was five times stronger than a cast counterpart and 50 percent stronger than alloys designed using conventional simulations without machine learning. The new alloy’s microstructure also consisted of a higher volume fraction of small precipitates, and was stable at temperatures of up to 400 degrees Celsius — a very high temperature for aluminum alloys.
The researchers are applying similar machine-learning techniques to further optimize other properties of the alloy.
“Our methodology opens new doors for anyone who wants to do 3D printing alloy design,” Taheri-Mousavi says. “My dream is that one day, passengers looking out their airplane window will see fan blades of engines made from our aluminum alloys.”
This work was carried out, in part, using MIT.nano’s characterization facilities.
Study sheds light on musicians’ enhanced attention
In a world full of competing sounds, we often have to filter out a lot of noise to hear what’s most important. This critical skill may come more easily for people with musical training, according to scientists at MIT’s McGovern Institute for Brain Research, who used brain imaging to follow what happens when people try to focus their attention on certain sounds.
When Cassia Low Manting, a recent MIT postdoc working in the labs of MIT Professor and McGovern Institute PI John Gabrieli and former McGovern Institute PI Dimitrios Pantazis, asked people to focus on a particular melody while another melody played at the same time, individuals with musical backgrounds were, unsurprisingly, better able to follow the target tune. An analysis of study participants’ brain activity suggests this advantage arises because musical training sharpens neural mechanisms that amplify the sounds they want to listen to while turning down distractions.
“People can hear, understand, and prioritize multiple sounds around them that flow on a moment-to-moment basis,” explains Gabrieli, who is the Grover Hermann Professor of Health Sciences and Technology at MIT. “This study reveals the specific brain mechanisms that successfully process simultaneous sounds on a moment-to-moment basis and promote attention to the most important sounds. It also shows how musical training alters that processing in the mind and brain, offering insight into how experience shapes the way we listen and pay attention.”
The research team, which also included senior author Daniel Lundqvist at the Karolinska Institute in Sweden, reported their open-access findings Sept. 17 in the journal Science Advances. Manting, who is now at the Karolinska Institute, notes that the research is part of an ongoing collaboration between the two institutions.
Overcoming challenges
Participants in the study had vastly different backgrounds when it came to music. Some were professional musicians with deep training and experience, while others struggled to differentiate between the two tunes they were played, despite each one’s distinct pitch. This disparity allowed the researchers to explore how the brain’s capacity for attention might change with experience. “Musicians are very fun to study because their brains have been morphed in ways based on their training,” Manting says. “It’s a nice model to study these training effects.”
Still, the researchers had significant challenges to overcome. It has been hard to study how the brain manages auditory attention, because when researchers use neuroimaging to monitor brain activity, they see the brain’s response to all sounds: those that the listener cares most about, as well as those the listener is trying to ignore. It is usually difficult to figure out which brain signals were triggered by which sounds.
Manting and her colleagues overcame this challenge with a method called frequency tagging. Rather than playing the melodies in their experiments at a constant volume, the volume of each melody oscillated, rising and falling with a particular frequency. Each melody had its own frequency, creating detectable patterns in the brain signals that responded to it. “When you play these two sounds simultaneously to the subject and you record the brain signal, you can say, this 39-Hertz activity corresponds to the lower-pitch sound and the 43-Hertz activity corresponds specifically to the higher-pitch sound,” Manting explains. “It is very clean and very clear.”
When they paired frequency tagging with magnetoencephalography, a noninvasive method of monitoring brain activity, the team was able to track how their study participants’ brains responded to each of two melodies during their experiments. While the two tunes played, subjects were instructed to follow either the higher-pitched or the lower-pitched melody. When the music stopped, they were asked about the final notes of the target tune: Did they rise or did they fall? The researchers could make this task harder by making the two tunes closer together in pitch, as well as by altering the timing of the notes.
Manting used a survey that asked about musical experience to score each participant’s musicality, and this measure had an obvious effect on task performance: The more musical a person was, the more successful they were at following the tune they had been asked to track.
To look for differences in brain activity that might explain this, the research team developed a new machine-learning approach to analyze their data. They used it to tease apart what was happening in the brain as participants focused on the target tune — even, in some cases, when the notes of the distracting tune played at the exact same time.
Top-down versus bottom-up attention
What they found was a clear separation of brain activity associated with two kinds of attention, known as top-down and bottom-up attention. Manting explains that top-down attention is goal-oriented, involving a conscious focus — the kind of attention listeners called on as they followed the target tune. Bottom-up attention, on the other hand, is triggered by the nature of the sound itself. A fire alarm would be expected to trigger this kind of attention, both with its volume and its suddenness. The distracting tune in the team’s experiments triggered activity associated with bottom-up attention — but more so in some people than in others.
“The more musical someone is, the better they are at focusing their top-down selective attention, and the less the effect of bottom-up attention is,” Manting explains.
Manting expects that musicians use their heightened capacity for top-down attention in other situations, as well. For example, they might be better than others at following a conversation in a room filled with background chatter. “I would put my bet on it that there is a high chance that they will be great at zooming into sounds,” she says.
She wonders, however, if one kind of distraction might actually be harder for a musician to filter out: the sound of their own instrument. Manting herself plays both the piano and the Chinese harp, and she says hearing those instruments is “like someone calling my name.” It’s one of many questions about how musical training affects cognition that she plans to explore in her future work.
Matthew Shoulders named head of the Department of Chemistry
Matthew D. Shoulders, the Class of 1942 Professor of Chemistry, a MacVicar Faculty Fellow, and an associate member of the Broad Institute of MIT and Harvard, has been named head of the MIT Department of Chemistry, effective Jan. 16, 2026.
“Matt has made pioneering contributions to the chemistry research community through his research on mechanisms of proteostasis and his development of next-generation techniques to address challenges in biomedicine and agriculture,” says Nergis Mavalvala, dean of the MIT School of Science and the Curtis and Kathleen Marble Professor of Astrophysics. “He is also a dedicated educator, beloved by undergraduates and graduates alike. I know the department will be in good hands as we double down on our commitment to world-leading research and education in the face of financial headwinds.”
Shoulders succeeds Troy Van Voorhis, the Robert T. Haslam and Bradley Dewey Professor of Chemistry, who has been at the helm since October 2019.
“I am tremendously grateful to Troy for his leadership the past six years, building a fantastic community here in our department. We face challenges, but also many exciting opportunities, as a department in the years to come,” says Shoulders. “One thing is certain: Chemistry innovations are critical to solving pressing global challenges. Through the research that we do and the scientists we train, our department has a huge role to play in shaping the future.”
Shoulders studies how cells fold proteins, and he develops and applies novel protein engineering techniques to challenges in biotechnology. His work across chemistry and biochemistry, in fields including proteostasis, extracellular matrix biology, virology, evolution, and synthetic biology, is not only yielding important insights into topics like how cells build healthy tissues and how proteins evolve, but also influencing approaches to disease therapy and biotechnology development.
“Matt is an outstanding researcher whose work touches on fundamental questions about how the cell machinery directs the synthesis and folding of proteins. His discoveries about how that machinery breaks down as a result of mutations or in response to stress have a fundamental impact on how we think about and treat human diseases,” says Van Voorhis.
In one part of his current research program, Shoulders is studying how protein folding systems in cells — known as chaperones — shape the evolution of their clients. Among other discoveries, his lab has shown that viral pathogens hijack human chaperones to enable their rapid evolution and escape from host immunity. In related recent work, the lab discovered that these same chaperones can promote access to malignancy-driving mutations in tumors. Beyond fundamental insights into evolutionary biology, these findings hold potential to open new therapeutic strategies targeting cancer and viral infections.
“Matt’s ability to see both the details and the big picture makes him an outstanding researcher and a natural leader for the department,” says Timothy Swager, the John D. MacArthur Professor of Chemistry. “MIT Chemistry can only benefit from his dedication to understanding and addressing the parts and the whole.”
Shoulders also leads a food security project through the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS). Shoulders, along with MIT Research Scientist Robbie Wilson, assembled an interdisciplinary team based at MIT to enhance climate resilience in agriculture by improving one of the most inefficient aspects of photosynthesis, the carbon dioxide-fixing plant enzyme RuBisCO. J-WAFS funded this high-risk, high-reward MIT Grand Challenge project in 2023, and it has received further support from federal research agencies and the Grantham Foundation for the Protection of the Environment.
“Our collaborative team of biochemists and synthetic biologists, computational biologists, and chemists is deeply integrated with plant biologists, creating a robust feedback loop for enzyme engineering,” Shoulders says. “Together, this team is making a concerted effort using state-of-the-art techniques to engineer crop RuBisCO with an eye to helping make meaningful gains in securing a stable crop supply, hopefully with accompanying improvements in both food and water security.”
In addition to his research contributions, Shoulders has taught multiple classes for Course V, including 5.54 (Advances in Chemical Biology) and 5.111 (Principles of Chemical Science), along with a number of other key chemistry classes. His contributions to a 5.111 “bootcamp” through the MITx platform served to address gaps in the classroom curriculum by providing online tools to help undergraduate students better grasp the material in the chemistry General Institute Requirement (GIR). His development of Guided Learning Demonstrations to support first-year chemistry courses at MIT has helped bring the lab to the GIR, and also contributed to the popularity of 5.111 courses offered regularly via MITx.
“I have had the pleasure of teaching with Matt on several occasions, and he is a fantastic educator. He is an innovator both inside and outside the classroom and has an unwavering commitment to his students’ success,” says Van Voorhis of Shoulders, who was named a 2022 MacVicar Faculty Fellow, and who received a Committed to Caring award through the Office of Graduate Education.
Shoulders also founded the MIT Homeschool Internship Program for Science and Technology, which brings high school students to campus for paid summer research experiences in labs across the Institute.
He is a founding member of the Department of Chemistry’s Quality of Life Committee and has served as its chair for the last six years, helping to improve all aspects of opportunity, professional development, and experience in the department: “countless changes that have helped make MIT a better place for all,” as Van Voorhis notes, including creating a peer mentoring program for graduate students and establishing universal graduate student exit interviews to collect data for department-wide assessment and improvement.
At the Institute level, Shoulders has served on the Committee on Graduate Programs, Committee on Sexual Misconduct Prevention and Response (in which he co-chaired the provost's working group on the Faculty and Staff Sexual Misconduct Survey), and the Committee on Assessment of Biohazards and Embryonic Stem Cell Research Oversight, among other roles.
Shoulders graduated summa cum laude from Virginia Tech in 2004, earning a BS in chemistry with a minor in biochemistry. He earned a PhD in chemistry at the University of Wisconsin at Madison in 2009 under Professor Ronald Raines. Following an American Cancer Society Postdoctoral Fellowship at Scripps Research Institute, working with professors Jeffery Kelly and Luke Wiseman, Shoulders joined the MIT Department of Chemistry faculty as an assistant professor in 2012. Shoulders also serves as an associate member of the Broad Institute and an investigator at the Center for Musculoskeletal Research at Massachusetts General Hospital.
Among his many awards, Shoulders has received an NIH Director’s New Innovator Award under the NIH High-Risk, High-Reward Research Program; an NSF CAREER Award; an American Cancer Society Research Scholar Award; the Camille Dreyfus Teacher-Scholar Award; and most recently the Ono Pharma Foundation Breakthrough Science Award.
Report: Sustainability in supply chains is still a firm-level priority
Corporations are actively seeking sustainability advances in their supply chains — but many need to improve the business metrics they use in this area to realize more progress, according to a new report by MIT researchers.
During a time of shifting policies globally and continued economic uncertainty, the survey-based report finds 85 percent of companies say they are continuing supply chain sustainability practices at the same level as in recent years, or are increasing those efforts.
“What we found is strong evidence that sustainability still matters,” says Josué Velázquez Martínez, a research scientist and director of the MIT Sustainable Supply Chain Lab, which helped produce the report. “There are many things that remain to be done to accomplish those goals, but there’s a strong willingness from companies in all parts of the world to do something about sustainability.”
The new analysis, titled “Sustainability Still Matters,” was released today. It is the sixth annual report on the subject prepared by the MIT Sustainable Supply Chain Lab, which is part of MIT’s Center for Transportation and Logistics. The Council of Supply Chain Management Professionals collaborated on the project as well.
The report is based on a global survey, with responses from 1,203 professionals in 97 countries. This year, the report analyzes three issues in depth. The first is regulations and the role they play in corporate approaches to supply chain management. A second core topic is the management and mitigation of what industry professionals call “Scope 3” emissions, which are those produced not by a firm itself, but by its supply chain. The third is the future of freight transportation, which by itself accounts for a substantial portion of supply chain emissions.
Broadly, the survey finds that for European-based firms, the principal driver of action in this area remains government mandates, such as the Corporate Sustainability Reporting Directive, which requires companies to publish regular reports on their environmental impact and the risks to society involved. In North America, firm leadership and investor priorities are more likely to be decisive factors in shaping a company’s efforts.
“In Europe the pressure primarily comes more from regulation, but in the U.S. it comes more from investors, or from competitors,” Velázquez Martínez says.
The survey responses on Scope 3 emissions reveal a number of opportunities for improvement. In business and sustainability terms, Scope 1 greenhouse gas emissions are those a firm produces directly. Scope 2 emissions come from the energy a firm purchases. And Scope 3 emissions are those produced across a firm’s value chain, including the supply chain activities involved in producing, transporting, using, and disposing of its products.
The report reveals that about 40 percent of firms keep close track of Scope 1 and 2 emissions, but far fewer tabulate Scope 3 on equivalent terms. And yet Scope 3 may account for roughly 75 percent of total firm emissions, on aggregate. About 70 percent of firms in the survey say they do not have enough data from suppliers to accurately tabulate the total greenhouse gas and climate impact of their supply chains.
Certainly it can be hard to calculate the total emissions when a supply chain has many layers, including smaller suppliers lacking data capacity. But firms can upgrade their analytics in this area, too. For instance, 50 percent of North American firms are still using spreadsheets to tabulate emissions data, often making rough estimates that correlate emissions to simple economic activity. An alternative is life cycle assessment software that provides more sophisticated estimates of a product’s emissions, from the extraction of its materials to its post-use disposal. By contrast, only 32 percent of European firms are still using spreadsheets rather than life cycle assessment tools.
“You get what you measure,” Velázquez Martínez says. “If you measure poorly, you’re going to get poor decisions that most likely won’t drive the reductions you’re expecting. So we pay a lot of attention to that particular issue, which is decisive to defining an action plan. Firms pay a lot of attention to metrics in their financials, but in sustainability they’re often using simplistic measurements.”
When it comes to transportation, meanwhile, the report shows that firms are still grappling with the best ways to reduce emissions. Some see biofuels as the best short-term alternative to fossil fuels; others are investing in electric vehicles; some are waiting for hydrogen-powered vehicles to gain traction. Supply chains, after all, frequently involve long-haul trips. For firms, as for individual consumers, electric vehicles are more practical with a larger infrastructure of charging stations. There are advances on that front but more work to do as well.
That said, “Transportation has made a lot of progress in general,” Velázquez Martínez says, noting the increased acceptance of new modes of vehicle power.
Even as new technologies loom on the horizon, though, supply chain sustainability does not wholly depend on their introduction. One factor continuing to propel sustainability in supply chains is the incentive companies have to lower costs. In a competitive business environment, spending less on fossil fuels usually means savings. And firms can often find ways to alter their logistics to consume and spend less.
“Along with new technologies, there is another side of supply chain sustainability that is related to better use of the current infrastructure,” Velázquez Martínez observes. “There is always a need to revise traditional ways of operating to find opportunities for more efficiency.”
Chemists create red fluorescent dyes that may enable clearer biomedical imaging
MIT chemists have designed a new type of fluorescent molecule that they hope could be used for applications such as generating clearer images of tumors.
The new dye is based on a borenium ion — a positively charged form of boron that can emit light in the red to near-infrared range. Until recently, these ions have been too unstable to be used for imaging or other biomedical applications.
In a study appearing today in Nature Chemistry, the researchers showed that they could stabilize borenium ions by attaching them to a ligand. This approach allowed them to create borenium-containing films, powders, and crystals, all of which emit and absorb light in the red and near-infrared range.
That is important because near-IR light is easier to see when imaging structures deep within tissues, which could allow for clearer images of tumors and other structures in the body.
“One of the reasons why we focus on red to near-IR is because those types of dyes penetrate the body and tissue much better than light in the UV and visible range. Stability and brightness of those red dyes are the challenges that we tried to overcome in this study,” says Robert Gilliard, the Novartis Professor of Chemistry at MIT and the senior author of the study.
MIT research scientist Chun-Lin Deng is the lead author of the paper. Other authors include Bi Youan (Eric) Tra PhD ’25, former visiting graduate student Xibao Zhang, and graduate student Chonghe Zhang.
Stabilized borenium
Most fluorescent imaging relies on dyes that emit blue or green light. Those imaging agents work well in cells, but they are not as useful in tissue because low levels of blue and green fluorescence produced by the body interfere with the signal. Blue and green light also scatters in tissue, limiting how deeply it can penetrate.
Imaging agents that emit red fluorescence can produce clearer images, but most red dyes are inherently unstable and don’t produce a bright signal, because of their low quantum yields (the ratio of fluorescent photons emitted to photons of light absorbed). For many red dyes, the quantum yield is only about 1 percent.
Among the molecules that can emit near-infrared light are borenium cations — positively charged ions containing an atom of boron attached to three other atoms.
When these molecules were first discovered in the mid-1980s, they were considered “laboratory curiosities,” Gilliard says. These molecules were so unstable that they had to be handled in a sealed container called a glovebox to protect them from exposure to air, which can lead them to break down.
Later, chemists realized they could make these ions more stable by attaching them to molecules called ligands. Working with these more stable ions, Gilliard’s lab discovered in 2019 that they had some unusual properties: Namely, they could respond to changes in temperature by emitting different colors of light.
However, at that point, “there was a substantial problem in that they were still too reactive to be handled in open air,” Gilliard says.
His lab began working on new ways to further stabilize them using ligands known as carbodicarbenes (CDCs), which they reported in a 2022 study. Due to this stabilization, the compounds can now be studied and handled without using a glovebox. They are also resistant to being broken down by light, unlike many previous borenium-based compounds.
In the new study, Gilliard began experimenting with the anions (negatively charged ions) that are a part of the CDC-borenium compounds. Interactions between these anions and the borenium cation generate a phenomenon known as exciton coupling, the researchers discovered. This coupling, they found, shifted the molecules’ emission and absorption properties toward the infrared end of the color spectrum. These molecules also generated a high quantum yield, allowing them to shine more brightly.
“Not only are we in the correct region, but the efficiency of the molecules is also very suitable,” Gilliard says. “We’re up to percentages in the thirties for the quantum yields in the red region, which is considered to be high for that region of the electromagnetic spectrum.”
Potential applications
The researchers also showed that they could convert their borenium-containing compounds into several different states, including solid crystals, films, powders, and colloidal suspensions.
For biomedical imaging, Gilliard envisions that these borenium-containing materials could be encapsulated in polymers, allowing them to be injected into the body to use as an imaging dye. As a first step, his lab plans to work with researchers in the chemistry department at MIT and at the Broad Institute of MIT and Harvard to explore the potential of imaging these materials within cells.
Because of their temperature responsiveness, these materials could also be deployed as temperature sensors, for example, to monitor whether drugs or vaccines have been exposed to temperatures that are too high or low during shipping.
“For any type of application where temperature tracking is important, these types of ‘molecular thermometers’ can be very useful,” Gilliard says.
If incorporated into thin films, these molecules could also be useful as organic light-emitting diodes (OLEDs), particularly in new types of materials such as flexible screens, Gilliard says.
“The very high quantum yields achieved in the near-IR, combined with the excellent environmental stability, make this class of compounds extremely interesting for biological applications,” says Frieder Jaekle, a professor of chemistry at Rutgers University, who was not involved in the study. “Besides the obvious utility in bioimaging, the strong and tunable near-IR emission also makes these new fluorophores very appealing as smart materials for anticounterfeiting, sensors, switches, and advanced optoelectronic devices.”
In addition to exploring possible applications for these dyes, the researchers are now working on extending their color emission further into the near-infrared region, which they hope to achieve by incorporating additional boron atoms. Those extra boron atoms could make the molecules less stable, so the researchers are also working on new types of carbodicarbenes to help stabilize them.
The research was funded by the Arnold and Mabel Beckman Foundation and the National Institutes of Health.
AI maps how a new antibiotic targets gut bacteria
For patients with inflammatory bowel disease, antibiotics can be a double-edged sword. The broad-spectrum drugs often prescribed for gut flare-ups can kill helpful microbes alongside harmful ones, sometimes worsening symptoms over time. When fighting gut inflammation, you don’t always want to bring a sledgehammer to a knife fight.
Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and McMaster University have identified a new compound that takes a more targeted approach. The molecule, called enterololin, suppresses a group of bacteria linked to Crohn’s disease flare-ups while leaving the rest of the microbiome largely intact. Using a generative AI model, the team mapped how the compound works, a process that usually takes years but was accelerated here to just months.
“This discovery speaks to a central challenge in antibiotic development,” says Jon Stokes, senior author of a new paper on the work, assistant professor of biochemistry and biomedical sciences at McMaster, and research affiliate at MIT’s Abdul Latif Jameel Clinic for Machine Learning in Health. “The problem isn’t finding molecules that kill bacteria in a dish — we’ve been able to do that for a long time. A major hurdle is figuring out what those molecules actually do inside bacteria. Without that detailed understanding, you can’t develop these early-stage antibiotics into safe and effective therapies for patients.”
Enterololin is a stride toward precision antibiotics: treatments designed to knock out only the bacteria causing trouble. In mouse models of Crohn’s-like inflammation, the drug zeroed in on Escherichia coli, a gut-dwelling bacterium that can worsen flares, while leaving most other microbial residents untouched. Mice given enterololin recovered faster and maintained a healthier microbiome than those treated with vancomycin, a common antibiotic.
Pinning down a drug’s mechanism of action, the molecular target it binds inside bacterial cells, normally requires years of painstaking experiments. Stokes’ lab discovered enterololin using a high-throughput screening approach, but determining its target would have been the bottleneck. Here, the team turned to DiffDock, a generative AI model developed at CSAIL by MIT PhD student Gabriele Corso and MIT Professor Regina Barzilay.
DiffDock was designed to predict how small molecules fit into the binding pockets of proteins, a notoriously difficult problem in structural biology. Traditional docking algorithms search through possible orientations using scoring rules, often producing noisy results. DiffDock instead frames docking as a probabilistic reasoning problem: a diffusion model iteratively refines guesses until it converges on the most likely binding mode.
“In just a couple of minutes, the model predicted that enterololin binds to a protein complex called LolCDE, which is essential for transporting lipoproteins in certain bacteria,” says Barzilay, who also co-leads the Jameel Clinic. “That was a very concrete lead — one that could guide experiments, rather than replace them.”
Stokes’ group then put that prediction to the test. Using DiffDock predictions as an experimental GPS, they first evolved enterololin-resistant mutants of E. coli in the lab, which revealed that changes in the mutants’ DNA mapped to lolCDE, precisely where DiffDock had predicted enterololin to bind. They also performed RNA sequencing to see which bacterial genes switched on or off when exposed to the drug, and used CRISPR to selectively knock down expression of the expected target. These laboratory experiments all revealed disruptions in pathways tied to lipoprotein transport, exactly what DiffDock had predicted.
“When you see the computational model and the wet-lab data pointing to the same mechanism, that’s when you start to believe you’ve figured something out,” says Stokes.
For Barzilay, the project highlights a shift in how AI is used in the life sciences. “A lot of AI use in drug discovery has been about searching chemical space, identifying new molecules that might be active,” she says. “What we’re showing here is that AI can also provide mechanistic explanations, which are critical for moving a molecule through the development pipeline.”
That distinction matters because mechanism-of-action studies are often a major rate-limiting step in drug development. Traditional approaches can take 18 months to two years, or more, and cost millions of dollars. In this case, the MIT–McMaster team cut the timeline to about six months, at a fraction of the cost.
Enterololin is still in the early stages of development, but translation is already underway. Stokes’ spinout company, Stoked Bio, has licensed the compound and is optimizing its properties for potential human use. Early work is also exploring derivatives of the molecule against other resistant pathogens, such as Klebsiella pneumoniae. If all goes well, clinical trials could begin within the next few years.
The researchers also see broader implications. Narrow-spectrum antibiotics have long been sought as a way to treat infections without collateral damage to the microbiome, but they have been difficult to discover and validate. AI tools like DiffDock could make that process more practical, rapidly enabling a new generation of targeted antimicrobials.
For patients with Crohn’s and other inflammatory bowel conditions, the prospect of a drug that reduces symptoms without destabilizing the microbiome could mean a meaningful improvement in quality of life. And in the bigger picture, precision antibiotics may help tackle the growing threat of antimicrobial resistance.
“What excites me is not just this compound, but the idea that we can start thinking about the mechanism of action elucidation as something we can do more quickly, with the right combination of AI, human intuition, and laboratory experiments,” says Stokes. “That has the potential to change how we approach drug discovery for many diseases, not just Crohn’s.”
“One of the greatest challenges to our health is the increase of antimicrobial-resistant bacteria that evade even our best antibiotics,” adds Yves Brun, professor at the University of Montreal and distinguished professor emeritus at Indiana University Bloomington, who wasn’t involved in the paper. “AI is becoming an important tool in our fight against these bacteria. This study uses a powerful and elegant combination of AI methods to determine the mechanism of action of a new antibiotic candidate, an important step in its potential development as a therapeutic.”
Corso, Barzilay, and Stokes wrote the paper with McMaster researchers Denise B. Catacutan, Vian Tran, Jeremie Alexander, Yeganeh Yousefi, Megan Tu, Stewart McLellan, and Dominique Tertigas, and professors Jakob Magolan, Michael Surette, Eric Brown, and Brian Coombes. Their research was supported, in part, by the Weston Family Foundation; the David Braley Centre for Antibiotic Discovery; the Canadian Institutes of Health Research; the Natural Sciences and Engineering Research Council of Canada; M. and M. Heersink; an Ontario Graduate Scholarship Award; the Jameel Clinic; and the U.S. Defense Threat Reduction Agency Discovery of Medical Countermeasures Against New and Emerging Threats program.
The researchers posted sequencing data in public repositories and released the DiffDock-L code openly on GitHub.
Secretary of Energy Chris Wright ’85 visits MIT
U.S. Secretary of Energy Chris Wright ’85 visited MIT on Monday, meeting Institute leaders, discussing energy innovation at a campus forum, viewing poster presentations from researchers supported through the MIT-GE Vernova Energy and Climate Alliance, and watching energy research demos in the lab where he used to work as a student.
“I’ve always been in energy because I think it’s just far and away the world’s most important industry,” Wright said at the forum, which included a panel discussion with business leaders and a fireside chat with MIT Professor Ernest Moniz, who was the U.S. secretary of energy from 2013 to 2017. Wright added: “Not only is it by far the world’s most important industry, because it enables all the others, but it’s also a booming time right now. … It is an awesomely exciting time to be in energy.”
Wright was greeted on campus by MIT President Sally Kornbluth, who also gave introductory remarks at the forum, held in MIT’s Samberg Center. While the Institute has added many research facilities and buildings since Wright was a student, Kornbluth observed, the core MIT ethos remains the same.
“MIT is still MIT,” Kornbluth said. “It’s a community that rewards merit, boldness, and scientific rigor. And it’s a magnet for people with a drive to solve hard problems that matter in the real world, an enthusiasm for working with industry, and an ethic of national service.”
When it comes to energy research, Kornbluth added, “MIT is developing transformational approaches to make American energy more secure, reliable, affordable, and clean — which in turn will strengthen both U.S. competitiveness and national security.”
At the event, Wright, the 17th U.S. secretary of energy, engaged in a fireside chat with Moniz, the 13th U.S. secretary of energy, the Cecil and Ida Green Professor of Physics and Engineering Systems Post-Tenure, a special advisor to the MIT president, and the founding director of the MIT Energy Initiative (MITEI). Wright began his remarks by reflecting on Kornbluth’s description of the Institute.
“Merit, boldness, and scientific rigor,” Wright said. “That is MIT … to me. That hit me hard when I got here, and frankly, it’s a good part of the reason my life has gone the way it’s gone.”
On energy topics, Wright emphasized the need for continued innovation in energy across a range of technologies, including fusion, geothermal, and more, while advocating for the benefits of vigorous market-based progress. Before becoming secretary of energy, Wright most recently served as founder and CEO of Liberty Energy. He also was the founder of Pinnacle Technologies, among other enterprises. Wright was confirmed as secretary by the U.S. Senate in February.
Asked to name promising areas of technological development, Wright focused on three particular areas of interest. Citing artificial intelligence, he noted that the interest in it was “overwhelming,” with many possible applications. Regarding fusion energy, Wright said, “We are going to see meaningful breakthroughs.” And quantum computing, he added, was going to be a “game-changer” as well.
Wright also emphasized the value of federal support for fundamental research, including projects in the national laboratories the Department of Energy oversees.
“The 17 national labs we have in this country are absolute jewels. They are gems of this country,” Wright said. He later noted, “There are things, like this foundational research, that are just an essential part of our country and an essential part of our future.”
Moniz asked Wright a range of questions in the fireside chat, while adding his own perspective at times about the many issues connected to energy abundance globally.
“Climate, energy, security, equity, affordability, have to be recognized as one conversation, and not separate conversations,” Moniz said. “That’s what’s at stake in my view.”
Wright’s appearance was part of the Energy Freedom Tour developed by the American Conservation Coalition (ACC), in coordination with the Hamm Institute for American Energy at Oklahoma State University. Later stops are planned for Stanford University and Texas A&M University.
Ann Bluntzer Pullin, executive director of the Hamm Institute, gave remarks at the forum as well, noting the importance of making students aware of the energy industry and helping to “get them excited about the impact this career can make.” She also praised MIT’s advances in the field, adding, “This is where so many ideas were born and executed that have allowed America to really thrive in this energy abundance in our country that we have [had] for so long.”
The forum also featured remarks from Roger Martella, chief corporate officer, chief sustainability officer, and head of government affairs at GE Vernova. In March, MIT and GE Vernova announced a new five-year joint program, the MIT-GE Vernova Energy and Climate Alliance, featuring research projects, education programs, and career opportunities for MIT students.
“That’s what we’re about, electrification as the lifeblood of prosperity,” Martella said, describing GE Vernova’s work. “When we’re here at MIT we feel like we’re living history every moment when we’re walking down the halls, because no institution has [contributed] to innovation and technology more, doing it every single day to advance prosperity for all people around the world.”
A panel discussion at the forum featured Wright speaking along with three MIT alumni who are active in the energy business: Carlos Araque ’01, SM ’02, CEO of Quaise Energy, a leading-edge firm in geothermal energy solutions; Bob Mumgaard SM ’15, PhD ’15, CEO of Commonwealth Fusion Systems, a leading fusion energy firm and an MIT spinout; and Milo Werner SM ’07, MBA ’07, a general partner at DCVC and expert in energy and climate investments. The panel was moderated by Chris Barnard, president of the ACC.
Mumgaard noted that Commonwealth Fusion Systems launched in 2018 with “an explicit mission, working with MIT still today, of putting fusion onto an industrial trajectory,” although there is “plenty left to do, still, at that intersection of science, technology, innovation, and business.”
Araque said he believes geothermal is “metric-by-metric” more powerful and profitable than many other forms of energy. “This is not a stop-gap,” he added. Quaise is currently developing its first power-plant-scale facility in the U.S.
Werner noted that the process of useful innovation only begins in the lab; making an advance commercially viable is the critical next step. The biggest impact “is not in the breakthrough,” she said. “It’s not in the discovery that you make in the lab. It’s actually once you’ve built a billion of them. That’s when you actually change the world.”
After the forum, Wright took a tour of multiple research centers on the MIT campus, including the MIT.nano facility, guided by Vladimir Bulović, faculty director of MIT.nano and the Fariborz Maseeh Chair in Emerging Technology.
At MIT.nano, Bulović showed Wright the Titan Krios G3i, a nearly room-size electron microscope that enables researchers to take a high-resolution look at the structure of tiny particles, with a variety of research applications. The tour also included one of MIT.nano’s cleanrooms, a shared fabrication facility used by both MIT researchers and outside users, including many in industry.
On a different note, in an MIT.nano hallway, Bulović showed Wright the One.MIT mosaics, which contain the names of all MIT students and employees past and present — well over 300,000 in all. Etched on a 6-inch wafer, the mosaics are a visual demonstration of the power of nanotechnology — and a searchable display: Bulović located Wright’s name, which is printed near the chin of one of the figures on the MIT seal.
The tour ended in the basement of Building 10, in what is now the refurbished Grainger Energy Machine Facility, where Wright used to conduct research. After earning his undergraduate degree in mechanical engineering, Wright began graduate studies at MIT before leaving, as he recounted at the forum, to pursue business opportunities.
At the lab, Wright met with David Perreault, the Ford Foundation Professor of Engineering; and Steven Leeb, the Emanuel Landsman Professor, a specialist in power systems. A half-dozen MIT graduate students gave Wright demos of their research projects, all involving energy-generation innovations. Wright readily engaged with all the graduate students about the technologies and the parameters of the devices, and asked the students about their own careers.
Wright was accompanied on the lab tour by MIT Provost Anantha Chandrakasan, himself an expert in developing energy-efficient systems. Chandrakasan delivered closing remarks at the forum in the Samberg Center, noting MIT’s “strong partnership with the Department of Energy” and its “long and proud history of engaging industry.”
As such, Chandrakasan said, MIT has a “role as a resource in service of the nation, so please don’t hesitate to call on us.”
MIT-affiliated physicists win McMillan Award for discovery of exotic electronic state
Last year, MIT physicists reported in the journal Nature that electrons can become fractions of themselves in graphene, an atomically thin form of carbon. This exotic electronic state, called the fractional quantum anomalous Hall effect (FQAHE), could enable more robust forms of quantum computing.
Now two young MIT-affiliated physicists involved in the discovery of FQAHE have been named the 2025 recipients of the McMillan Award from the University of Illinois for their work. Jiaqi Cai and Zhengguang Lu won the award “for the discovery of fractional quantum anomalous Hall physics in 2D moiré materials.”
Cai is currently a Pappalardo Fellow at MIT working with Pablo Jarillo-Herrero, the Cecil and Ida Green Professor of Physics, and collaborating with several other labs at MIT including Long Ju, the Lawrence and Sarah W. Biedenharn Career Development Associate Professor in the MIT Department of Physics. He discovered FQAHE while working in the laboratory of Professor Xiaodong Xu at the University of Washington.
Lu discovered FQAHE while working as a postdoc in Ju’s lab and has since become an assistant professor at Florida State University.
The two independent discoveries were made in the same year.
“The McMillan award is the highest honor that a young condensed matter physicist can receive,” says Ju. “My colleagues and I in the Condensed Matter Experiment and the Condensed Matter Theory Group are very proud of Zhengguang and Jiaqi.”
Ju and Jarillo-Herrero are both also affiliated with the Materials Research Laboratory.
Along with receiving a monetary prize and a plaque, Lu and Cai will give a colloquium on their work at the University of Illinois this fall.
Martin Trust Center for MIT Entrepreneurship welcomes Ana Bakshi as new executive director
The Martin Trust Center for MIT Entrepreneurship announced that Ana Bakshi has been named its new executive director. Bakshi stepped into the role at the start of the fall semester and will collaborate closely with the managing director, Ethernet Inventors Professor of the Practice Bill Aulet, to take the center to new heights.
“Ana is uniquely qualified for this role. She brings a deep and highly decorated background in entrepreneurship education at the highest levels, along with exceptional leadership and execution skills,” says Aulet. “Since I first met her 12 years ago, I have been extraordinarily impressed with her commitment to create the highest-quality centers and institutes for entrepreneurs, first at King’s College London and then at Oxford University. This ideal skill set is compounded by her experience in leading high-growth companies, most recently as the chief operating officer in an award-winning AI startup. I’m honored and thrilled to welcome her to MIT — her knowledge and energy will greatly elevate our community, and the field as a whole.”
A rapidly changing environment creates an imperative for raising the bar for entrepreneurship education
The need to raise the bar for innovation-driven entrepreneurship education is both timely and urgent. The pace of change is accelerating, especially with artificial intelligence, generating new problems to be solved and exacerbating existing ones in climate, health care, manufacturing, the future of work, education, and economic stratification, to name but a few. The world needs more entrepreneurs, and better ones.
Bakshi joins the Trust Center at an exciting time in its history. MIT is at the forefront of helping to develop people and systems that can turn challenges into opportunities using an entrepreneurial mindset, skill set, and way of operating. Bakshi’s deep experience and success will be key to unlocking this opportunity. “I am truly honored to join the Trust Center at such a pivotal moment,” Bakshi says. “In an era defined by both extraordinary challenges and extraordinary possibilities, the future will be built by those bold enough to try, and MIT will be at the forefront of this.”
Translating academic research into real-world impact
Bakshi has a decade of experience building two world-class entrepreneurship centers from the ground up, serving as founding director first at King’s College London and then at Oxford. In that role, she was responsible for all aspects of the centers, including fundraising.
While at Oxford, she developed a data-driven approach to measuring the efficacy of the centers’ programs, documented in a 61-page study, “Universities: Drivers of Prosperity and Economic Recovery.”
As the director of the Oxford Foundry (Oxford’s cross-university entrepreneurship center), Bakshi focused on investing in ambitious founders and talent. The center was backed by global entrepreneurial leaders, including the founders of LinkedIn and Twitter, with corporate partnerships including Santander and EY, and investment funds including Oxford Science Enterprises (OSE). As of 2021, the startups supported by the Foundry and King’s College had raised over $500 million and created nearly 3,000 jobs, spanning diverse industries including health tech, climate tech, cybersecurity, fintech, and deep tech spinouts built on world-class science.
In addition, she built the highly successful and economically sustainable Entrepreneurship School, Oxford’s first digital online learning platform.
Bakshi comes to MIT from the private sector, where she spent almost two years as chief operating officer (COO) of Quench.ai, a rapidly growing artificial intelligence startup with offices in London and New York City. She was the first C-suite employee at Quench.ai, serving as COO and now as senior advisor, helping companies unlock value from their knowledge through AI.
Right place, right time, right person moving at the speed of MIT AI
Entrepreneurship has been at the core of MIT’s identity and mission since the Institute’s founding; it was turbocharged in the 1940s with the creation and operation of the RadLab, and it continues to this day.
"MIT has been a leader in entrepreneurship for decades. It’s now the third leg of the school, alongside teaching and research,” says Mark Gorenberg ’76, chair of the MIT Corporation. “I’m excited to have such a transformative leader as Ana join the Trust Center team, and I look forward to the impact she will have on the students and the wider academic community at MIT as we enter an exciting new phase in company building, driven by the accelerated use of AI and emerging technologies."
“In a time where we are rethinking management education, entrepreneurship as an interdisciplinary field to create impact is even more important to our future. To have such an experienced and accomplished leader in academia and the startup world, especially in AI, reinforces our commitment to be a global leader in this field,” says Richard M. Locke, John C Head III Dean at the MIT Sloan School of Management.
“MIT is a unique hub of research, innovation, and entrepreneurship, and that special mix creates massive positive impact that ripples around the world,” says Frederic Kerrest, MIT Sloan MBA ’09, co-founder of Okta, and member of the MIT Corporation. “In a rapidly changing, AI-driven world, Ana has the skills and experience to further accelerate MIT’s global leadership in entrepreneurship education to ensure that our students launch and scale the next generation of groundbreaking, innovation-driven startups.”
Prior to her time at Oxford and King’s College, Bakshi served as an elected councilor representing 6,000-plus constituents, held roles in international nongovernmental organizations, and led product execution strategy at MAHI, an award-winning family-led craft sauce startup, available in thousands of major retailers across the U.K. Bakshi sits on the advisory council for conservation charity Save the Elephants, leveraging AI-driven and scientific approaches to reduce human-wildlife conflict and protect elephant populations. Her work and impact have been featured across FT, Forbes, BBC, The Times, and The Hill. Bakshi was twice honored as a Top 50 Woman in Tech (U.K.), most recently in 2025.
“As AI changes how we learn, how we build, and how we scale, my focus will be on helping MIT expand its support for phenomenal talent — students and faculty — with the skills, ecosystem, and backing to turn knowledge into impact,” Bakshi says.
35 years of impact to date
The Trust Center was founded in 1990 by the late Professor Edward Roberts and serves all MIT students across all schools and all disciplines. It supports 60-plus courses and extensive extracurricular programming, including the delta v academic accelerator. Much of the work of the center is generated through the Disciplined Entrepreneurship methodology, which offers a proven approach to create new ventures. Over a thousand schools and other organizations across the world use Disciplined Entrepreneurship books and resources to teach entrepreneurship.
Now, with AI-powered tools like Orbit and JetPack, the Trust Center is changing the way that entrepreneurship is taught and practiced. Its mission is to produce the next generation of innovation-driven entrepreneurs while advancing the field more broadly to make it both rigorous and practical. This approach of leveraging a proven, evidence-based methodology, emerging technology, and the ingenuity of MIT students, while responding to industry shifts, is similar to how MIT established the field of chemical engineering in the 1890s. The desired result in both cases was a comprehensive, integrated, scalable, rigorous, and practical curriculum that prepares a new workforce to address the nation’s and the world’s greatest challenges.
Lincoln Lab unveils the most powerful AI supercomputer at any US university
The new TX-Generative AI Next (TX-GAIN) computing system at the Lincoln Laboratory Supercomputing Center (LLSC) is the most powerful AI supercomputer at any U.S. university. With its recent ranking from TOP500, which publishes a twice-yearly list of the top supercomputers in various categories, TX-GAIN joins the ranks of other powerful systems at the LLSC, all supporting research and development at Lincoln Laboratory and across the MIT campus.
"TX-GAIN will enable our researchers to achieve scientific and engineering breakthroughs. The system will play a large role in supporting generative AI, physical simulation, and data analysis across all research areas," says Lincoln Laboratory Fellow Jeremy Kepner, who heads the LLSC.
The LLSC is a key resource for accelerating innovation at Lincoln Laboratory. Thousands of researchers tap into the LLSC to analyze data, train models, and run simulations for federally funded research projects. The supercomputers have been used, for example, to simulate billions of aircraft encounters to develop collision-avoidance systems for the Federal Aviation Administration, and to train models in the complex tasks of autonomous navigation for the Department of Defense. Over the years, LLSC capabilities have been essential to numerous award-winning technologies, including those that have improved airline safety, prevented the spread of new diseases, and aided in hurricane responses.
As its name suggests, TX-GAIN is especially equipped for developing and applying generative AI. Whereas traditional AI focuses on categorization tasks, like identifying whether a photo depicts a dog or cat, generative AI produces entirely new outputs. Kepner describes it as a mathematical combination of interpolation (filling in the gaps between known data points) and extrapolation (extending data beyond known points). Today, generative AI is widely known for its use of large language models to create human-like responses to user prompts.
At Lincoln Laboratory, teams are applying generative AI to various domains beyond large language models. They are using the technology, for instance, to evaluate radar signatures, supplement weather data where coverage is missing, root out anomalies in network traffic, and explore chemical interactions to design new medicines and materials.
To enable such intense computations, TX-GAIN is powered by more than 600 NVIDIA graphics processing unit accelerators specially designed for AI operations, in addition to traditional high-performance computing hardware. With a peak performance of two AI exaflops (two quintillion floating-point operations per second), TX-GAIN is the top AI system at a university, and in the Northeast. Since TX-GAIN came online this summer, researchers have taken notice.
"TX-GAIN is allowing us to model not only significantly more protein interactions than ever before, but also much larger proteins with more atoms. This new computational capability is a game-changer for protein characterization efforts in biological defense," says Rafael Jaimes, a researcher in Lincoln Laboratory's Counter–Weapons of Mass Destruction Systems Group.
The LLSC's focus on interactive supercomputing makes it especially useful to researchers. For years, the LLSC has pioneered software that lets users access its powerful systems without needing to be experts in configuring algorithms for parallel processing.
"The LLSC has always tried to make supercomputing feel like working on your laptop," Kepner says. "The amount of data and the sophistication of analysis methods needed to be competitive today are well beyond what can be done on a laptop. But with our user-friendly approach, people can run their model and get answers quickly from their workspace."
Beyond supporting Lincoln Laboratory’s own programs, TX-GAIN is enhancing research collaborations with MIT’s campus, including with the Haystack Observatory, the Center for Quantum Engineering, Beaver Works, and the Department of the Air Force–MIT AI Accelerator. The latter initiative is rapidly prototyping, scaling, and applying AI technologies for the U.S. Air Force and Space Force; one fielded example optimizes flight scheduling for global operations.
The LLSC systems are housed in an energy-efficient data center and facility in Holyoke, Massachusetts. Research staff in the LLSC are also tackling the immense energy needs of AI and leading research into various power-reduction methods. One software tool they developed can reduce the energy of training an AI model by as much as 80 percent.
"The LLSC provides the capabilities needed to do leading-edge research, while in a cost-effective and energy-efficient manner," Kepner says.
All of the supercomputers at the LLSC use the "TX" nomenclature in homage to Lincoln Laboratory's Transistorized Experimental Computer Zero (TX-0) of 1956. TX-0 was one of the world's first transistor-based machines, and its 1958 successor, TX-2, is storied for its role in pioneering human-computer interaction and AI. With TX-GAIN, the LLSC continues this legacy.
A simple formula could guide the design of faster-charging, longer-lasting batteries
At the heart of all lithium-ion batteries is a simple reaction: Lithium ions dissolved in an electrolyte solution “intercalate” or insert themselves into a solid electrode during battery discharge. When they de-intercalate and return to the electrolyte, the battery charges.
This process happens thousands of times throughout the life of a battery. The amount of power that the battery can generate, and how quickly it can charge, depend on how fast this reaction happens. However, little is known about the exact mechanism of this reaction, or the factors that control its rate.
In a new study, MIT researchers have measured lithium intercalation rates in a variety of different battery materials and used that data to develop a new model of how the reaction is controlled. Their model suggests that lithium intercalation is governed by a process known as coupled ion-electron transfer, in which an electron is transferred to the electrode along with a lithium ion.
Insights gleaned from this model could guide the design of more powerful and faster charging lithium-ion batteries, the researchers say.
“What we hope is enabled by this work is to get the reactions to be faster and more controlled, which can speed up charging and discharging,” says Martin Bazant, the Chevron Professor of Chemical Engineering and a professor of mathematics at MIT.
The new model may also help scientists understand why tweaking electrodes and electrolytes in certain ways leads to increased energy, power, and battery life — a process that has mainly been done by trial and error.
“This is one of these papers where now we began to unify the observations of reaction rates that we see with different materials and interfaces, in one theory of coupled electron and ion transfer for intercalation, building up previous work on reaction rates,” says Yang Shao-Horn, the J.R. East Professor of Engineering at MIT and a professor of mechanical engineering, materials science and engineering, and chemistry.
Shao-Horn and Bazant are the senior authors of the paper, which appears today in Science. The paper’s lead authors are Yirui Zhang PhD ’22, who is now an assistant professor at Rice University; Dimitrios Fraggedakis PhD ’21, who is now an assistant professor at Princeton University; Tao Gao, a former MIT postdoc who is now an assistant professor at the University of Utah; and MIT graduate student Shakul Pathak.
Modeling lithium flow
For many decades, scientists have hypothesized that the rate of lithium intercalation at a lithium-ion battery electrode is determined by how quickly lithium ions can diffuse from the electrolyte into the electrode. This reaction, they believed, was governed by a model known as the Butler-Volmer equation, originally developed almost a century ago to describe the rate of charge transfer during an electrochemical reaction.
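For reference, the Butler-Volmer equation in its standard textbook form (the notation below is the conventional one, not taken from the new paper) relates the net current density at an electrode to the applied overpotential:

```latex
j = j_{0}\left[\exp\!\left(\frac{\alpha_{a} F \eta}{R T}\right)
             - \exp\!\left(-\frac{\alpha_{c} F \eta}{R T}\right)\right]
```

where j₀ is the exchange current density, αₐ and α꜀ are the anodic and cathodic charge-transfer coefficients, F is Faraday’s constant, R is the gas constant, T is the temperature, and η is the overpotential driving the reaction.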
However, when researchers have tried to measure lithium intercalation rates, the measurements they obtained were not always consistent with the rates predicted by the Butler-Volmer equation. Furthermore, obtaining consistent measurements across labs has been difficult, with different research teams reporting measurements for the same reaction that varied by a factor of up to 1 billion.
In the new study, the MIT team measured lithium intercalation rates using an electrochemical technique that involves applying repeated, short bursts of voltage to an electrode. They generated these measurements for more than 50 combinations of electrolytes and electrodes, including lithium nickel manganese cobalt oxide, which is commonly used in electric vehicle batteries, and lithium cobalt oxide, which is found in the batteries that power most cell phones, laptops, and other portable electronics.
For these materials, the measured rates are much lower than previously reported, and they do not correspond to the predictions of the traditional Butler-Volmer model.
The researchers used the data to come up with an alternative theory of how lithium intercalation occurs at the surface of an electrode. This theory is based on the assumption that in order for a lithium ion to enter an electrode, an electron from the electrolyte solution must be transferred to the electrode at the same time.
“The electrochemical step is not lithium insertion, which you might think is the main thing, but it’s actually electron transfer to reduce the solid material that is hosting the lithium,” Bazant says. “Lithium is intercalated at the same time that the electron is transferred, and they facilitate one another.”
This coupled ion-electron transfer (CIET) lowers the energy barrier that must be overcome for the intercalation reaction to occur, making it more likely to happen. The mathematical framework of CIET allowed the researchers to make reaction rate predictions, which were validated by their experiments and substantially different from those made by the Butler-Volmer model.
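The CIET picture is closely related to the Marcus theory of electron transfer, in which the rate depends quadratically on the driving force. Schematically (a simplified textbook-style expression, not the paper’s full CIET model):

```latex
k \;\propto\; \exp\!\left[-\frac{(\lambda + \Delta G)^{2}}{4\,\lambda\, k_{B} T}\right]
```

where λ is the reorganization energy and ΔG is the thermodynamic driving force. In the article’s description, coupling the ion insertion to the electron transfer effectively lowers this barrier, which is why the predicted rates depart from the Butler-Volmer form.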
Faster charging
In this study, the researchers also showed that they could tune intercalation rates by changing the composition of the electrolyte. For example, swapping in different anions can lower the amount of energy needed to transfer the lithium and electron, making the process more efficient.
“Tuning the intercalation kinetics by changing electrolytes offers great opportunities to enhance the reaction rates, alter electrode designs, and therefore enhance the battery power and energy,” Shao-Horn says.
Shao-Horn’s lab and their collaborators have been using automated experiments to make and test thousands of different electrolytes, data that feed machine-learning models built to predict electrolytes with enhanced functions.
The findings could also help researchers design batteries that charge faster, by speeding up the lithium intercalation reaction. Another goal is reducing the side reactions that degrade batteries when electrons are stripped from the electrode and lost to the electrolyte.
“If you want to do that rationally, not just by trial and error, you need some kind of theoretical framework to know what are the important material parameters that you can play with,” Bazant says. “That’s what this paper tries to provide.”
The research was funded by Shell International Exploration and Production and the Toyota Research Institute through the D3BATT Center for Data-Driven Design of Rechargeable Batteries.
Accounting for uncertainty to help engineers design complex systems
Designing a complex electronic device like a delivery drone involves juggling many choices, such as selecting motors and batteries that minimize cost while maximizing the payload the drone can carry or the distance it can travel.
Unraveling that conundrum is no easy task, but what happens if the designers don’t know the exact specifications of each battery and motor? On top of that, the real-world performance of these components will likely be affected by unpredictable factors, like changing weather along the drone’s route.
MIT researchers developed a new framework that helps engineers design complex systems in a way that explicitly accounts for such uncertainty. The framework allows them to model the performance tradeoffs of a device with many interconnected parts, each of which could behave in unpredictable ways.
Their technique captures the likelihood of many outcomes and tradeoffs, giving designers more information than many existing approaches, which typically model only best-case and worst-case scenarios.
Ultimately, this framework could help engineers develop complex systems like autonomous vehicles, commercial aircraft, or even regional transportation networks that are more robust and reliable in the face of real-world unpredictability.
“In practice, the components in a device never behave exactly like you think they will. If someone has a sensor whose performance is uncertain, and an algorithm that is uncertain, and the design of a robot that is also uncertain, now they have a way to mix all these uncertainties together so they can come up with a better design,” says Gioele Zardini, the Rudge and Nancy Allen Assistant Professor of Civil and Environmental Engineering at MIT, a principal investigator in the Laboratory for Information and Decision Systems (LIDS), an affiliate faculty with the Institute for Data, Systems, and Society (IDSS), and senior author of a paper on this framework.
Zardini is joined on the paper by lead author Yujun Huang, an MIT graduate student; and Marius Furter, a graduate student at the University of Zurich. The research will be presented at the IEEE Conference on Decision and Control.
Considering uncertainty
The Zardini Group studies co-design, a method for designing systems made of many interconnected components, from robots to regional transportation networks.
The co-design language breaks a complex problem into a series of boxes, each representing one component, that can be combined in different ways to maximize outcomes or minimize costs. This allows engineers to solve complex problems in a feasible amount of time.
In prior work, the researchers modeled each co-design component without considering uncertainty. For instance, the performance of each sensor the designers could choose for a drone was fixed.
But engineers often don’t know the exact performance specifications of each sensor, and even if they do, it is unlikely the sensor will perfectly follow its spec sheet. At the same time, they don’t know how each sensor will behave once integrated into a complex device, or how performance will be affected by unpredictable factors like weather.
“With our method, even if you are unsure what the specifications of your sensor will be, you can still design the robot to maximize the outcome you care about,” says Furter.
To accomplish this, the researchers incorporated this notion of uncertainty into an existing framework based on category theory.
Using some mathematical tricks, they simplified the problem into a more general structure. This allows them to use the tools of category theory to solve co-design problems in a way that considers a range of uncertain outcomes.
By reformulating the problem, the researchers can capture how multiple design choices affect one another even when their individual performance is uncertain.
This approach is also simpler than many existing tools that typically require extensive domain expertise. With their plug-and-play system, one can rearrange the components in the system without violating any mathematical constraints.
And because no specific domain expertise is required, the framework could be used by a multidisciplinary team where each member designs one component of a larger system.
“Designing an entire UAV isn’t feasible for just one person, but designing a component of a UAV is. By providing the framework for how these components work together in a way that considers uncertainty, we’ve made it easier for people to evaluate the performance of the entire UAV system,” Huang says.
More detailed information
The researchers used this new approach to choose perception systems and batteries for a drone that would maximize its payload while minimizing its lifetime cost and weight.
While each perception system may offer a different detection accuracy under varying weather conditions, the designer doesn’t know exactly how its performance will fluctuate. This new system allows the designer to take these uncertainties into consideration when thinking about the drone’s overall performance.
And unlike other approaches, their framework reveals distinct advantages of each battery technology.
For instance, their results show that at lower payloads, nickel-metal hydride batteries provide the lowest expected lifetime cost. This insight would be impossible to fully capture without accounting for uncertainty, Zardini says.
While another method might only be able to show the best-case and worst-case performance scenarios of lithium polymer batteries, their framework gives the user more detailed information.
For example, it shows that if the drone’s payload is 1,750 grams, there is a 12.8 percent chance the battery design would be infeasible.
“Our system provides the tradeoffs, and then the user can reason about the design,” he adds.
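As a rough illustration of the kind of query such a framework answers, here is a toy Monte Carlo sketch. All of the distributions, the feasibility rule, and the numbers are invented for illustration, and the paper’s actual method is compositional and category-theoretic rather than sampling-based:

```python
import random

def sample_battery():
    # Hypothetical battery whose realized specs scatter around its data sheet.
    capacity_wh = random.gauss(480, 40)   # energy capacity (Wh)
    mass_g = random.gauss(900, 30)        # battery mass (g)
    return capacity_wh, mass_g

def sample_conditions():
    # Unpredictable weather raises the energy needed to fly a given mass.
    headwind = random.uniform(0.0, 1.0)
    return 1.0 + 0.5 * headwind           # energy multiplier

def infeasibility_probability(payload_g, trials=100_000):
    """Estimate the chance a design cannot carry the payload on its mission."""
    failures = 0
    for _ in range(trials):
        capacity_wh, mass_g = sample_battery()
        multiplier = sample_conditions()
        # Invented feasibility rule: usable energy per gram of total mass
        # must clear a mission threshold.
        if capacity_wh / ((mass_g + payload_g) * multiplier) < 0.15:
            failures += 1
    return failures / trials

print(f"P(infeasible at 1750 g payload) ~= {infeasibility_probability(1750):.3f}")
```

The flavor of the output, a feasibility probability per design point rather than a single best-case or worst-case number, matches what the researchers’ framework provides.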
In the future, the researchers want to improve the computational efficiency of their problem-solving algorithms. They also want to extend this approach to situations where a system is designed by multiple parties that are collaborative and competitive, like a transportation network in which rail companies operate using the same infrastructure.
“As the complexity of systems grows and involves more disparate components, we need a formal framework in which to design these systems. This paper presents a way to compose large systems from modular components, understand design trade-offs, and importantly do so with a notion of uncertainty. This creates an opportunity to formalize the design of large-scale systems with learning-enabled components,” says Aaron Ames, the Bren Professor of Mechanical and Civil Engineering, Control and Dynamical Systems, and Aerospace at Caltech, who was not involved with this research.
MIT OpenCourseWare is “a living testament to the nobility of open, unbounded learning”
Mostafa Fawzy became interested in physics in high school. It was the “elegance and paradox” of quantum theory that got his attention and led to his studies at the undergraduate and graduate level. But even with a solid foundation of coursework and supportive mentors, Fawzy wanted more. MIT Open Learning’s OpenCourseWare was just the thing he was looking for.
Now a doctoral candidate in atomic physics at Alexandria University and an assistant lecturer of physics at Alamein International University in Egypt, Fawzy reflects on how MIT OpenCourseWare bolstered his learning early in his graduate studies in 2019.
Part of MIT Open Learning, OpenCourseWare offers free, online, open educational resources from more than 2,500 courses that span the MIT undergraduate and graduate curriculum. Fawzy was looking for advanced resources to supplement his research in quantum mechanics and theoretical physics, and he was immediately struck by the quality, accessibility, and breadth of MIT’s resources.
“OpenCourseWare was transformative in deepening my understanding of advanced physics,” Fawzy says. “I found the structured lectures and assignments in quantum physics particularly valuable. They enhanced both my theoretical insight and practical problem-solving skills — skills I later applied in research on atomic systems influenced by magnetic fields and plasma environments.”
He completed educational resources including Quantum Physics I and Quantum Physics II, calling them “dense and mathematically sophisticated.” He met the challenge by engaging with the content in different ways: first, by simply listening to lectures, then by taking detailed notes, and finally by working through problem sets. Although initially he struggled to keep up, this methodical approach paid off, he says.
Fawzy is now in the final stages of his doctoral research on high-precision atomic calculations under extreme conditions. While in graduate school, he has published eight peer-reviewed international research papers, making him one of the most prolific physics doctoral researchers currently working in Egypt. He served as an ambassador for the United Nations International Youth Conference (IYC), and he was nominated for both the African Presidential Leadership Program and the Davisson–Germer Prize in Atomic or Surface Physics, a prestigious annual prize offered by the American Physical Society.
He is grateful to his undergraduate mentors, professors M. Sakr and T. Bahy of Alexandria University, as well as to MIT OpenCourseWare, calling it a “steadfast companion through countless solitary nights of study, a beacon in times when formal resources were scarce, and a living testament to the nobility of open, unbounded learning.”
Recognizing the power of mentorship and teaching, Fawzy serves as an academic mentor with the African Academy of Sciences, supporting early-career researchers across the continent in theoretical and atomic physics.
“Many of these mentees lack access to advanced academic resources,” he explains. “I regularly incorporate OpenCourseWare into our mentorship sessions, using it as a foundational teaching and reference tool. It’s an equalizer, providing the same high-caliber content to students regardless of geographical or institutional limitations.”
As he looks toward the future, Fawzy has big plans, influenced by MIT.
“I aspire to establish a regional center for excellence in atomic and plasma physics, blending cutting-edge research with open-access education in the Global South,” he says.
As he continues his research and teaching, he also hopes to influence science policy and contribute to international partnerships that shine the spotlight on research and science in emerging nations.
Along the way, he says, “OpenCourseWare remains a cornerstone resource that I will return to again and again.”
Fawzy says he’s also interested in MIT Open Learning resources in computational physics and energy and sustainability. He’s following MIT’s Energy Initiative, calling it increasingly relevant to his current work and future plans.
Fawzy is a proponent of open learning and a testament to its power.
“The intellectual seeds sown by Open Learning resources such as MIT OpenCourseWare have flourished within me, shaping my identity as a physicist and affirming my deep belief in the transformative power of knowledge shared freely, without barriers,” he says.
Concrete “battery” developed at MIT now packs 10 times the power
Concrete already builds our world, and now it’s one step closer to powering it, too. Made by combining cement, water, ultra-fine carbon black (with nanoscale particles), and electrolytes, electron-conducting carbon concrete (ec3, pronounced “e-c-cubed”) creates a conductive “nanonetwork” inside concrete that could enable everyday structures like walls, sidewalks, and bridges to store and release electrical energy. In other words, the concrete around us could one day double as giant “batteries.”
As MIT researchers report in a new PNAS paper, optimized electrolytes and manufacturing processes have increased the energy storage capacity of the latest ec3 supercapacitors by an order of magnitude. In 2023, storing enough energy to meet the daily needs of the average home would have required about 45 cubic meters of ec3, roughly the amount of concrete used in a typical basement. Now, with the improved electrolyte, that same task can be achieved with about 5 cubic meters, the volume of a typical basement wall.
“A key to the sustainability of concrete is the development of ‘multifunctional concrete,’ which integrates functionalities like this energy storage, self-healing, and carbon sequestration. Concrete is already the world’s most-used construction material, so why not take advantage of that scale to create other benefits?” asks Admir Masic, lead author of the new study, MIT Electron-Conducting Carbon-Cement-Based Materials Hub (EC³ Hub) co-director, and associate professor of civil and environmental engineering (CEE) at MIT.
The improved energy density was made possible by a deeper understanding of how the nanocarbon black network inside ec3 functions and interacts with electrolytes. Using focused ion beams for the sequential removal of thin layers of the ec3 material, followed by high-resolution imaging of each slice with a scanning electron microscope (a technique called FIB-SEM tomography), the team, which spans the EC³ Hub and the MIT Concrete Sustainability Hub, was able to reconstruct the conductive nanonetwork at the highest resolution yet. This approach revealed that the network is essentially a fractal-like “web” surrounding the pores in ec3, which allows the electrolyte to infiltrate the material and current to flow through the system.
“Understanding how these materials ‘assemble’ themselves at the nanoscale is key to achieving these new functionalities,” adds Masic.
Equipped with their new understanding of the nanonetwork, the team experimented with different electrolytes and their concentrations to see how they impacted energy storage density. As Damian Stefaniuk, first author and EC³ Hub research scientist, highlights, “we found that there is a wide range of electrolytes that could be viable candidates for ec3. This even includes seawater, which could make this a good material for use in coastal and marine applications, perhaps as support structures for offshore wind farms.”
At the same time, the team streamlined the way they added electrolytes to the mix. Rather than curing ec3 electrodes and then soaking them in electrolyte, they added the electrolyte directly into the mixing water. Since electrolyte penetration was no longer a limitation, the team could cast thicker electrodes that stored more energy.
The team achieved the greatest performance when they switched to organic electrolytes, especially those that combined quaternary ammonium salts — found in everyday products like disinfectants — with acetonitrile, a clear, conductive liquid often used in industry. A cubic meter of this version of ec3 — about the size of a refrigerator — can store over 2 kilowatt-hours of energy. That’s about enough to power an actual refrigerator for a day.
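Taking the article’s figures at face value, the scale of the improvement can be checked with simple arithmetic:

```latex
5\ \mathrm{m^3} \times 2\ \tfrac{\mathrm{kWh}}{\mathrm{m^3}} \approx 10\ \mathrm{kWh},
\qquad
\frac{45\ \mathrm{m^3}\ (2023)}{5\ \mathrm{m^3}\ (\text{now})} = 9 \approx \text{one order of magnitude}
```

In other words, the household scenario quoted above corresponds to roughly 10 kilowatt-hours of daily storage, and the ninefold reduction in required volume is the “order of magnitude” gain the researchers report.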
While batteries maintain a higher energy density, ec3 can in principle be incorporated directly into a wide range of architectural elements — from slabs and walls to domes and vaults — and last as long as the structure itself.
“The Ancient Romans made great advances in concrete construction. Massive structures like the Pantheon stand to this day without reinforcement. If we keep up their spirit of combining material science with architectural vision, we could be at the brink of a new architectural revolution with multifunctional concretes like ec3,” proposes Masic.
Taking inspiration from Roman architecture, the team built a miniature ec3 arch to show how structural form and energy storage can work together. Operating at 9 volts, the arch supported its own weight and additional load while powering an LED light.
However, something unique happened when the load on the arch increased: the light flickered. This is likely due to the way stress impacts electrical contacts or the distribution of charges. “There may be a kind of self-monitoring capacity here. If we think of an ec3 arch at architectural scale, its output may fluctuate when it’s impacted by a stressor like high winds. We may be able to use this as a signal of when and to what extent a structure is stressed, or monitor its overall health in real time,” envisions Masic.
The latest developments in ec3 technology bring it a step closer to real-world scalability. It’s already been used to heat sidewalk slabs in Sapporo, Japan, due to its thermally conductive properties, representing a potential alternative to salting. “With these higher energy densities and demonstrated value across a broader application space, we now have a powerful and flexible tool that can help us address a wide range of persistent energy challenges,” explains Stefaniuk. “One of our biggest motivations was to help enable the renewable energy transition. Solar power, for example, has come a long way in terms of efficiency. However, it can only generate power when there’s enough sunlight. So, the question becomes: How do you meet your energy needs at night, or on cloudy days?”
Franz-Josef Ulm, EC³ Hub co-director and CEE professor, continues the thread: “The answer is that you need a way to store and release energy. This has usually meant a battery, which often relies on scarce or harmful materials. We believe that ec3 is a viable substitute, letting our buildings and infrastructure meet our energy storage needs.” The team is working toward applications like parking spaces and roads that could charge electric vehicles, as well as homes that can operate fully off the grid.
“What excites us most is that we’ve taken a material as ancient as concrete and shown that it can do something entirely new,” says James Weaver, a co-author on the paper who is an associate professor of design technology and materials science and engineering at Cornell University, as well as a former EC³ Hub researcher. “By combining modern nanoscience with an ancient building block of civilization, we’re opening a door to infrastructure that doesn’t just support our lives, it powers them.”
Palladium filters could enable cheaper, more efficient generation of hydrogen fuel
Palladium is one of the keys to jump-starting a hydrogen-based energy economy. The silvery metal is a natural gatekeeper against every gas except hydrogen, which it readily lets through. For its exceptional selectivity, palladium is considered one of the most effective materials at filtering gas mixtures to produce pure hydrogen.
Today, palladium-based membranes are used at commercial scale to provide pure hydrogen for semiconductor manufacturing, food processing, and fertilizer production, among other applications in which the membranes operate at modest temperatures. If palladium membranes get much hotter than around 800 kelvins, they can break down.
Now, MIT engineers have developed a new palladium membrane that remains resilient at much higher temperatures. Rather than being made as a continuous film, as most membranes are, the new design is made from palladium that is deposited as “plugs” into the pores of an underlying supporting material. At high temperatures, the snug-fitting plugs remain stable and continue separating out hydrogen, rather than degrading as a surface film would.
The thermally stable design opens opportunities for membranes to be used in hydrogen-fuel-generating technologies such as compact steam methane reforming and ammonia cracking — technologies that are designed to operate at much higher temperatures to produce hydrogen for zero-carbon-emitting fuel and electricity.
“With further work on scaling and validating performance under realistic industrial feeds, the design could represent a promising route toward practical membranes for high-temperature hydrogen production,” says Lohyun Kim PhD ’24, a former graduate student in MIT’s Department of Mechanical Engineering.
Kim and his colleagues report details of the new membrane in a study appearing today in the journal Advanced Functional Materials. The study’s co-authors are Randall Field, director of research at the MIT Energy Initiative (MITEI); former MIT chemical engineering graduate student Chun Man Chow PhD ’23; Rohit Karnik, the Jameel Professor in the Department of Mechanical Engineering at MIT and the director of the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS); and Aaron Persad, a former MIT research scientist in mechanical engineering who is now an assistant professor at the University of Maryland Eastern Shore.
Compact future
The team’s new design came out of a MITEI project related to fusion energy. Future fusion power plants, such as the one MIT spinout Commonwealth Fusion Systems is designing, will involve circulating hydrogen isotopes of deuterium and tritium at extremely high temperatures to produce energy from the isotopes’ fusing. The reactions inevitably produce other gases that will have to be separated, and the hydrogen isotopes will be recirculated into the main reactor for further fusion.
Similar issues arise in a number of other processes for producing hydrogen, where gases must be separated and recirculated back into a reactor. Concepts for such recirculating systems would require first cooling down the gas before it can pass through hydrogen-separating membranes — an expensive and energy-intensive step that would involve additional machinery and hardware.
“One of the questions we were thinking about is: Can we develop membranes which could be as close to the reactor as possible, and operate at higher temperatures, so we don’t have to pull out the gas and cool it down first?” Karnik says. “It would enable more energy-efficient, and therefore cheaper and compact, fusion systems.”
The researchers looked for ways to improve the temperature resistance of palladium membranes. Palladium is the most effective metal used today to separate hydrogen from a variety of gas mixtures. It naturally attracts hydrogen molecules (H2) to its surface, where the metal’s electrons interact with and weaken the molecule’s bonds, causing H2 to temporarily break apart into its respective atoms. The individual atoms then diffuse through the metal and join back up on the other side as pure hydrogen.
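That atomistic picture of dissociation, diffusion, and recombination is why hydrogen flux through dense palladium films is classically described by Sieverts’ law, with its characteristic square-root pressure dependence (a textbook relation, not a result of the new study):

```latex
J = \frac{\Phi}{L}\left(\sqrt{p_{\mathrm{feed}}} - \sqrt{p_{\mathrm{perm}}}\right)
```

where J is the hydrogen flux, Φ the permeability, L the membrane thickness, and p_feed and p_perm the hydrogen partial pressures on the feed and permeate sides. The square root appears because each H2 molecule splits into two atoms inside the metal.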
Palladium is highly effective at letting hydrogen, and only hydrogen, pass through from streams of various gases. But conventional membranes can typically operate only at temperatures up to about 800 kelvins before the film starts to form holes or clump into droplets, allowing other gases to flow through.
Plugging in
Karnik, Kim, and their colleagues took a different design approach. They observed that at high temperatures, palladium films tend to ball up; in engineering terms, the material acts to reduce its surface energy. Like most other materials, and even water, palladium pulls apart and forms droplets, the configuration with the lowest surface energy. The lower the surface energy, the more stable the material is against further heating.
This gave the team an idea: If a supporting material’s pores could be “plugged” with deposits of palladium — essentially already forming a droplet with the lowest surface energy — the tight quarters might substantially increase palladium’s heat tolerance while preserving the membrane’s selectivity for hydrogen.
To test this idea, they fabricated small chip-sized samples of membrane using a porous silica supporting layer (each pore measuring about half a micron wide), onto which they deposited a very thin layer of palladium. They applied techniques to essentially grow the palladium into the pores, and polished down the surface to remove the palladium layer and leave palladium only inside the pores.
They then placed the samples in a custom-built apparatus through which they flowed hydrogen-containing gas mixtures of various compositions and temperatures to test the membranes’ separation performance. The membranes remained stable and continued to separate hydrogen from other gases even after experiencing temperatures of up to 1,000 kelvins for over 100 hours — a significant improvement over conventional film-based membranes.
“The use of palladium film membranes is generally limited to below around 800 kelvins, at which point they degrade,” Kim says. “Our plug design therefore extends palladium’s effective heat resilience by at least roughly 200 kelvins and maintains integrity far longer under extreme conditions.”
These conditions are within the range of hydrogen-generating technologies such as steam methane reforming and ammonia cracking.
Steam methane reforming is an established process that has required complex, energy-intensive systems to preprocess methane to a form where pure hydrogen can be extracted. Such preprocessing steps could be replaced with a compact “membrane reactor,” through which a methane gas would directly flow, and the membrane inside would filter out pure hydrogen. Such reactors would significantly cut down the size, complexity, and cost of producing hydrogen from steam methane reforming, and Kim estimates a membrane would have to work reliably in temperatures of up to nearly 1,000 kelvins. The team’s new membrane could work well within such conditions.
Ammonia cracking is another way to produce hydrogen, by “cracking” or breaking apart ammonia. As ammonia is very stable in liquid form, scientists envision that it could be used as a carrier for hydrogen and be safely transported to a hydrogen fuel station, where ammonia could be fed into a membrane reactor that again pulls out hydrogen and pumps it directly into a fuel cell vehicle. Ammonia cracking is still largely in pilot and demonstration stages, and Kim says any membrane in an ammonia cracking reactor would likely operate at temperatures of around 800 kelvins — within the range of the group’s new plug-based design.
Karnik emphasizes that their results are just a start. Adopting the membrane into working reactors will require further development and testing to ensure it remains reliable over much longer periods of time.
“We showed that instead of making a film, if you make discretized nanostructures you can get much more thermally stable membranes,” Karnik says. “It provides a pathway for designing membranes for extreme temperatures, with the added possibility of using smaller amounts of expensive palladium, toward making hydrogen production more efficient and affordable. There is potential there.”
This work was supported by Eni S.p.A. via the MIT Energy Initiative.
A cysteine-rich diet may promote regeneration of the intestinal lining, study suggests
A diet rich in the amino acid cysteine may have rejuvenating effects in the small intestine, according to a new study from MIT. This amino acid, the researchers discovered, can turn on an immune signaling pathway that helps stem cells to regrow new intestinal tissue.
This enhanced regeneration may help to heal injuries from radiation, which often occur in patients undergoing radiation therapy for cancer. The research was conducted in mice, but if future research shows similar results in humans, then delivering elevated quantities of cysteine, through diet or supplements, could offer a new strategy to help damaged tissue heal faster, the researchers say.
“The study suggests that if we give these patients a cysteine-rich diet or cysteine supplementation, perhaps we can dampen some of the chemotherapy or radiation-induced injury,” says Omer Yilmaz, director of the MIT Stem Cell Initiative, an associate professor of biology at MIT, and a member of MIT’s Koch Institute for Integrative Cancer Research. “The beauty here is we’re not using a synthetic molecule; we’re exploiting a natural dietary compound.”
While previous research has shown that certain types of diets, including low-calorie diets, can enhance intestinal stem cell activity, the new study is the first to identify a single nutrient that can help intestinal cells to regenerate.
Yilmaz is the senior author of the study, which appears today in Nature. Koch Institute postdoc Fangtao Chi is the paper’s lead author.
Boosting regeneration
It is well-established that diet can affect overall health: High-fat diets can lead to obesity, diabetes, and other health problems, while low-calorie diets have been shown to extend lifespans in many species. In recent years, Yilmaz’s lab has investigated how different types of diets influence stem cell regeneration, and found that high-fat diets, as well as short periods of fasting, can enhance stem cell activity in different ways.
“We know that macro diets such as high-sugar diets, high-fat diets, and low-calorie diets have a clear impact on health. But at the granular level, we know much less about how individual nutrients impact stem cell fate decisions, as well as tissue function and overall tissue health,” Yilmaz says.
In their new study, the researchers began by feeding mice a diet high in one of 20 different amino acids, the building blocks of proteins. For each group, they measured how the diet affected intestinal stem cell regeneration. Among these amino acids, cysteine had the most dramatic effects on stem cells and progenitor cells (immature cells that differentiate into adult intestinal cells).
Further studies revealed that cysteine initiates a chain of events leading to the activation of a population of immune cells called CD8 T cells. When cells in the lining of the intestine absorb cysteine from digested food, they convert it into coenzyme A (CoA), a cofactor that is released into the mucosal lining of the intestine. There, CD8 T cells absorb CoA, which stimulates them to begin proliferating and producing a cytokine called IL-22.
IL-22 is an important player in the regulation of intestinal stem cell regeneration, but until now, it wasn’t known that CD8 T cells can produce it to boost intestinal stem cells. Once activated, those IL-22-releasing T cells are primed to help combat any kind of injury that could occur within the intestinal lining.
“What’s really exciting here is that feeding mice a cysteine-rich diet leads to the expansion of an immune cell population that we typically don’t associate with IL-22 production and the regulation of intestinal stemness,” Yilmaz says. “What happens in a cysteine-rich diet is that the pool of cells that make IL-22 increases, particularly the CD8 T-cell fraction.”
These T cells tend to congregate within the lining of the intestine, so they are already in position when needed. The researchers found that the stimulation of CD8 T cells occurred primarily in the small intestine, not in any other part of the digestive tract, which they believe is because most of the protein that we consume is absorbed by the small intestine.
Healing the intestine
In this study, the researchers showed that regeneration stimulated by a cysteine-rich diet could help to repair radiation damage to the intestinal lining. Also, in work that has not been published yet, they showed that a high-cysteine diet had a regenerative effect following treatment with a chemotherapy drug called 5-fluorouracil. This drug, which is used to treat colon and pancreatic cancers, can also damage the intestinal lining.
Cysteine is found in many high-protein foods, including meat, dairy products, legumes, and nuts. The body can also synthesize its own cysteine, by converting the amino acid methionine to cysteine — a process that takes place in the liver. However, cysteine produced in the liver is distributed through the entire body and doesn’t lead to a buildup in the small intestine the way that consuming cysteine in the diet does.
“With our high-cysteine diet, the gut is the first place that sees a high amount of cysteine,” Chi says.
Cysteine has been previously shown to have antioxidant effects, which are also beneficial, but this study is the first to demonstrate its effect on intestinal stem cell regeneration. The researchers now hope to study whether it may also help other types of stem cells regenerate new tissues. In one ongoing study, they are investigating whether cysteine might stimulate hair follicle regeneration.
They also plan to further investigate some of the other amino acids that appear to influence stem cell regeneration.
“I think we’re going to uncover multiple new mechanisms for how these amino acids regulate cell fate decisions and gut health in the small intestine and colon,” Yilmaz says.
The research was funded, in part, by the National Institutes of Health, the V Foundation, the Koch Institute Frontier Research Program via the Kathy and Curt Marble Cancer Research Fund, the Bridge Project — a partnership between the Koch Institute for Integrative Cancer Research at MIT and the Dana-Farber/Harvard Cancer Center, the American Federation for Aging Research, the MIT Stem Cell Initiative, and the Koch Institute Support (core) Grant from the National Cancer Institute.
System lets people personalize online social spaces while staying connected with others
Say a local concert venue wants to engage its community by giving social media followers an easy way to share and comment on new music from emerging artists. Rather than working within the constraints of existing social platforms, the venue might want to create its own social app with the functionality that would be best for its community. But building a new social app from scratch involves many complicated programming steps, and even if the venue can create a customized app, the organization’s followers may be unwilling to join the new platform because it could mean leaving their connections and data behind.
Now, researchers from MIT have launched a framework called Graffiti that makes building personalized social applications easier, while allowing users to migrate between multiple applications without losing their friends or data.
“We want to empower people to have control over their own designs rather than having them dictated from the top down,” says electrical engineering and computer science graduate student Theia Henderson.
Henderson and her colleagues designed Graffiti with a flexible structure so individuals have the freedom to create a variety of customized applications, from messenger apps like WhatsApp to microblogging platforms like X to location-based social networking sites like Nextdoor, all using only front-end development tools like HTML.
The protocol ensures all applications can interoperate, so content posted on one application can appear on any other application, even those with disparate designs or functionality. Importantly, Graffiti users retain control of their data, which is stored on a decentralized infrastructure rather than being held by a specific application.
While the pros and cons of implementing Graffiti at scale remain to be fully explored, the researchers hope this new approach can someday lead to healthier online interactions.
“We’ve shown that you can have a rich social ecosystem where everyone owns their own data and can use whatever applications they want to interact with whoever they want in whatever way they want. And they can have their own experiences without losing connection with the people they want to stay connected with,” says David Karger, professor of EECS and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL).
Henderson, the lead author, and Karger are joined by MIT Research Scientist David D. Clark on a paper about Graffiti, which will be presented at the ACM Symposium on User Interface Software and Technology.
Personalized, integrated applications
With Graffiti, the researchers had two main goals: to lower the barrier to creating personalized social applications and to enable those personalized applications to interoperate without requiring permission from developers.
To make the design process easier, they built a collective back-end infrastructure that all applications access to store and share content. This means developers don’t need to write any complex server code. Instead, designing a Graffiti application is more like making a website using popular tools like Vue.
Developers can also easily introduce new features and new types of content, giving them more freedom and fostering creativity.
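To make that division of labor concrete, here is a minimal sketch in Python of the idea the researchers describe: a collective back end that stores generic objects, with each application acting as a thin front end over it. The names and schema below are invented for illustration and are not Graffiti's actual interface.

```python
# A hypothetical shared back end: applications publish and query generic
# objects instead of running their own servers. Invented API, for
# illustration only; not Graffiti's real protocol.
from dataclasses import dataclass, field
from typing import Any

@dataclass
class SharedStore:
    objects: list[dict[str, Any]] = field(default_factory=list)

    def put(self, obj: dict[str, Any]) -> None:
        """Any application can publish an object to the shared store."""
        self.objects.append(obj)

    def query(self, **match: Any) -> list[dict[str, Any]]:
        """Any application can read back objects matching the given fields."""
        return [o for o in self.objects
                if all(o.get(k) == v for k, v in match.items())]

store = SharedStore()
store.put({"type": "post", "actor": "venue", "content": "New single out!"})
# A completely different front end can retrieve and render the same object:
print(store.query(type="post"))
```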
“Graffiti is so straightforward that we used it as the infrastructure for the intro to web design class I teach, and students were able to write the front-end very easily to come up with all sorts of applications,” Karger says.
The open, interoperable nature of Graffiti means no one entity has the power to set a moderation policy for the entire platform. Instead, multiple competing and contradictory moderation services can operate, and people can choose the ones they like.
Graffiti is built on the idea of “total reification,” in which every action taken in the system, such as liking, sharing, or blocking a post, is represented and stored as its own piece of data. Users can configure their social applications to interpret or ignore those data according to their own rules.
For instance, if an application is designed so that a certain user acts as a moderator, posts blocked by that user won’t appear in the application. But in an application with different rules, where that person isn’t considered a moderator, users might instead see those posts with a warning label, or with no flag at all.
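A small sketch can illustrate how reified actions might work in practice; the schema here is a toy example of ours, not the published protocol. Likes and blocks are just stored objects, and each application chooses whose block actions to honor.

```python
# Toy illustration of "total reification": actions are first-class data,
# and each application applies its own interpretation rules to them.
posts = [
    {"id": 1, "actor": "alice", "content": "Show tonight at 8!"},
    {"id": 2, "actor": "spammer", "content": "Buy followers now"},
]
activities = [
    {"type": "like", "actor": "bob", "target": 1},
    {"type": "block", "actor": "carol", "target": 2},
]

def visible_posts(posts, activities, moderators):
    """One app's rule: hide any post blocked by a trusted moderator."""
    blocked = {a["target"] for a in activities
               if a["type"] == "block" and a["actor"] in moderators}
    return [p for p in posts if p["id"] not in blocked]

# An application that treats carol as a moderator hides the spam post...
print(visible_posts(posts, activities, moderators={"carol"}))
# ...while an application with no moderators shows the same data in full.
print(visible_posts(posts, activities, moderators=set()))
```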
“Theia’s system lets each person pick their own moderators, avoiding the one-size-fits-all approach to moderation taken by the major social platforms,” Karger says.
But at the same time, having no central moderator means there is no single authority who can remove offensive or illegal content from the platform.
“We need to do more research to understand if that is going to provide real, damaging consequences or if the kind of personal moderation we created can provide the protections people need,” he adds.
Empowering social media users
The researchers also had to overcome a problem known as context collapse, which conflicts with their goal of interoperation.
For instance, context collapse would occur if a person’s Tinder profile appeared on LinkedIn, or if a post intended for one group, like close friends, created conflict with another group, such as family members. Context collapse can cause anxiety and carry social repercussions for the user and their different communities.
“We realize that interoperability can sometimes be a bad thing. People have boundaries between different social contexts, and we didn’t want to violate those,” Henderson says.
To avoid context collapse, the researchers designed Graffiti so that all content is organized into distinct channels. Channels are flexible and can represent a variety of contexts, such as people, applications, or locations.
If a user’s post appears in an application channel but not their personal channel, others using that application will see the post, but those who only follow this user will not.
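As a rough sketch of that behavior (again with an invented schema, not the actual protocol), a post could simply list the channels its author placed it in, and each feed would show only the posts shared into the channel it reads:

```python
# Hypothetical channel-scoped visibility: a post is only discoverable
# through the channels its author put it in.
posts = [
    {"content": "Open mic Friday", "channels": {"venue-app"}},
    {"content": "Weekend plans?", "channels": {"venue-app", "alice-profile"}},
]

def feed(channel):
    """Return only the posts shared into this channel."""
    return [p["content"] for p in posts if channel in p["channels"]]

print(feed("venue-app"))      # users of the venue's app see both posts
print(feed("alice-profile"))  # followers of alice's profile see only one
```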
“Individuals should have the power to choose the audience for whatever they want to say,” Karger adds.
The researchers created multiple Graffiti applications to showcase personalization and interoperability, including a community-specific application for a local concert venue, a text-centric microblogging platform patterned after X, a Wikipedia-like application that enables collective editing, and a real-time messaging app with multiple moderation schemes patterned after WhatsApp and Slack.
“It also leaves room to create so many social applications people haven’t thought of yet. I’m really excited to see what people come up with when they are given full creative freedom,” Henderson says.
In the future, she and her colleagues want to explore additional social applications they could build with Graffiti. They also intend to incorporate tools like graphical editors to simplify the design process. In addition, they want to strengthen Graffiti’s security and privacy.
And while there is still a long way to go before Graffiti could be implemented at scale, the researchers are currently running a user study as they explore the potential positive and negative impacts the system could have on the social media landscape.
MIT cognitive scientists reveal why some sentences stand out from others
“You still had to prove yourself.”
“Every cloud has a blue lining!”
Which of those sentences are you most likely to remember a few minutes from now? If you guessed the second, you’re probably correct.
According to a new study from MIT cognitive scientists, the sentences that stick in your mind are those with distinctive meanings, which make them stand out from sentences you’ve previously seen. The researchers found that meaning, more than any other trait, determines how memorable a sentence is.
“One might have thought that when you remember sentences, maybe it’s all about the visual features of the sentence, but we found that that was not the case. A big contribution of this paper is pinning down that it is the meaning-related space that makes sentences memorable,” says Greta Tuckute PhD ’25, who is now a research fellow at Harvard University’s Kempner Institute.
The findings support the hypothesis that sentences with distinctive meanings — like “Does olive oil work for tanning?” — are stored in brain space that is not cluttered with sentences that mean almost the same thing. Sentences with similar meanings end up densely packed together and are therefore more difficult to recognize confidently later on, the researchers believe.
“When you encode sentences that have a similar meaning, there’s feature overlap in that space. Therefore, a particular sentence you’ve encoded is not linked to a unique set of features, but rather to a whole bunch of features that may overlap with other sentences,” says Evelina Fedorenko, an MIT associate professor of brain and cognitive sciences (BCS), a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.
Tuckute and Thomas Clark, an MIT graduate student, are the lead authors of the paper, which appears in the Journal of Memory and Language. MIT graduate student Bryan Medina is also an author.
Distinctive sentences
What makes certain things more memorable than others is a longstanding question in cognitive science and neuroscience. In a 2011 study, Aude Oliva, now a senior research scientist at MIT and MIT director of the MIT-IBM Watson AI Lab, showed that not all items are created equal: Some types of images are much easier to remember than others, and people are remarkably consistent in what images they remember best.
In that study, Oliva and her colleagues found that, in general, images with people in them are the most memorable, followed by images of human-scale space and close-ups of objects. Least memorable are natural landscapes.
As a follow-up to that study, Fedorenko and Oliva, along with Ted Gibson, another faculty member in BCS, teamed up to determine if words also vary in their memorability. In a study published earlier this year, co-led by Tuckute and Kyle Mahowald, a former PhD student in BCS, the researchers found that the most memorable words are those that have the most distinctive meanings.
Words are considered more distinctive if they have a single meaning and few or no synonyms; words like “pineapple” or “avalanche,” for example, were found to be very memorable. On the other hand, words that have multiple meanings, such as “light,” or many synonyms, like “happy,” were more difficult for people to recognize accurately.
In the new study, the researchers expanded their scope to analyze the memorability of sentences. Just like words, some sentences have very distinctive meanings, while others communicate similar information in slightly different ways.
To do the study, the researchers assembled a collection of 2,500 sentences drawn from publicly available databases that compile text from novels, news articles, movie dialogues, and other sources. Each sentence that they chose contained exactly six words.
The researchers then presented a random selection of about 1,000 of these sentences to each study participant, including repeats of some sentences. Each of the 500 participants in the study was asked to press a button when they saw a sentence that they remembered seeing earlier.
The most memorable sentences — the ones where participants accurately and quickly indicated that they had seen them before — included strings such as “Homer Simpson is hungry, very hungry,” and “These mosquitoes are — well, guinea pigs.”
Those memorable sentences overlapped significantly with the sentences judged to have the most distinctive meanings, as estimated in the high-dimensional vector space of a large language model (LLM) known as Sentence BERT. The model generates a representation of each sentence as a whole, which can be used for tasks like judging the similarity of meaning between sentences. From these representations, the researchers computed a distinctness score for each sentence, based on its semantic similarity to the other sentences.
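One plausible way to compute such a score, sketched below under our own assumptions (the paper's exact procedure may differ), is to embed every sentence with a sentence-level model and treat a sentence as distinctive when it is far from its nearest semantic neighbors. The model name and the neighborhood-based scoring rule here are illustrative choices, not the study's method.

```python
# Sketch of a semantic-distinctness score: embed sentences, then measure
# how similar each one is to its closest neighbors in meaning space.
import numpy as np
from sentence_transformers import SentenceTransformer

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed stand-in model
sentences = [
    "Homer Simpson is hungry, very hungry.",
    "You still had to prove yourself.",
    "Every cloud has a blue lining!",
    # ... in the study, 2,500 six-word sentences
]

# Embed and L2-normalize so dot products equal cosine similarities.
emb = model.encode(sentences, normalize_embeddings=True)
sim = emb @ emb.T
np.fill_diagonal(sim, -np.inf)  # ignore each sentence's self-similarity

k = min(10, len(sentences) - 1)         # hypothetical neighborhood size
nearest = np.sort(sim, axis=1)[:, -k:]  # the k most similar other sentences

# Higher score means farther from everything else, i.e., more distinctive.
distinctness = 1.0 - nearest.mean(axis=1)
```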
The researchers also evaluated the sentences using a model that predicts memorability based on the average memorability of the individual words in the sentence. This model performed fairly well at predicting overall sentence memorability, but not as well as Sentence BERT. This suggests that the meaning of a sentence as a whole — above and beyond the contributions from individual words — determines how memorable it will be, the researchers say.
Noisy memories
While cognitive scientists have long hypothesized that the brain’s memory banks have a limited capacity, the findings of the new study support an alternative hypothesis that would help to explain how the brain can continue forming new memories without losing old ones.
This alternative, known as the noisy representation hypothesis, says that when the brain encodes a new memory, be it an image, a word, or a sentence, it is represented in a noisy way — that is, this representation is not identical to the stimulus, and some information is lost. For example, for an image, you may not encode the exact viewing angle at which an object is shown, and for a sentence, you may not remember the exact construction used.
Under this theory, a new sentence would be encoded in a similar part of the memory space as sentences that carry similar meanings, whether those were encountered recently or sometime across a lifetime of language experience. This jumbling of similar meanings together increases the amount of noise and can make it much harder, later on, to remember the exact sentence you have seen before.
“The representation is gradually going to accumulate some noise. As a result, when you see an image or a sentence for a second time, your accuracy at judging whether you’ve seen it before will be affected, and it’ll be less than 100 percent in most cases,” Clark says.
However, if a sentence has a unique meaning that is encoded in a less densely crowded space, it will be easier to pick out later on.
“Your memory may still be noisy, but your ability to make judgments based on the representations is less affected by that noise because the representation is so distinctive to begin with,” Clark says.
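A toy simulation can make the intuition concrete. In the sketch below (our illustration, not the paper's model), each memory is a vector, and recognition is modeled by checking whether a noisy probe is still closer to its own stored trace than to any other stored item. A trace surrounded by many near-duplicates is recognized less reliably than an isolated, distinctive one.

```python
# Toy model of the noisy-representation hypothesis: crowded neighborhoods
# in memory space hurt recognition; distinctive items survive the noise.
import numpy as np

rng = np.random.default_rng(0)
dim, noise = 50, 0.6

distinctive = rng.normal(size=dim)  # an item far from everything else
crowded = rng.normal(size=dim)      # an item with many lookalikes
lookalikes = crowded + 0.3 * rng.normal(size=(200, dim))

def recognition_rate(target, others, trials=2000):
    """Fraction of trials where the noisy probe still matches its own trace."""
    hits = 0
    for _ in range(trials):
        probe = target + noise * rng.normal(size=dim)
        d_self = np.linalg.norm(probe - target)
        d_other = np.linalg.norm(others - probe, axis=1).min()
        hits += d_self < d_other
    return hits / trials

print("distinctive item:", recognition_rate(distinctive, lookalikes))
print("crowded item:    ", recognition_rate(crowded, lookalikes))
```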
The researchers now plan to study whether other features of sentences, such as more vivid and descriptive language, might also contribute to making them more memorable, and how the language system may interact with the hippocampal memory structures during the encoding and retrieval of memories.
The research was funded, in part, by the National Institutes of Health, the McGovern Institute, the Department of Brain and Cognitive Sciences, the Simons Center for the Social Brain, and the MIT Quest Initiative for Intelligence.
3 Questions: How a new mission to Uranus could be just around the corner
The successful test of SpaceX’s Starship launch vehicle, following a series of engineering challenges and failed launches, has reignited excitement over the possibilities this massive rocket may unlock for humanity’s greatest ambitions in space. The largest rocket ever built, Starship, with its 33-engine “Super Heavy” booster, completed a full launch into Earth orbit on Aug. 26, deployed eight test prototype satellites, and survived reentry for a simulated landing before coming down, mostly intact, in the Indian Ocean. The 400-foot rocket is designed to carry up to 150 tons of cargo to low Earth orbit, dramatically increasing potential payload capacity over rockets currently in operation. In addition to the planned Artemis III mission to the lunar surface and proposed missions to Mars in the near future, Starship also presents an opportunity for large-scale scientific missions throughout the solar system.
The National Academy of Sciences Planetary Science Decadal Survey published a recommendation in 2022 outlining exploration of Uranus as its highest-priority flagship mission. This proposed mission was envisioned for the 2030s, assuming use of a Falcon Heavy expendable rocket and anticipating arrival at the planet before 2050. Earlier this summer, a paper from researchers in MIT’s Engineering Systems Lab found that Starship may enable this flagship mission to Uranus in half the flight time.
In this 3Q, Chloe Gentgen, a PhD student in aeronautics and astronautics and co-author on the recent study, describes the significance of Uranus as a flagship mission and what the current trajectory of Starship means for scientific exploration.
Q: Why has Uranus been identified as the highest-priority flagship mission?
A: Uranus is one of the most intriguing and least-explored planets in our solar system. The planet is tilted on its side, is extremely cold, presents a highly dynamic atmosphere with fast winds, and has an unusual and complex magnetic field. A few of Uranus’ many moons could be ocean worlds, making them potential candidates in the search for life in the solar system. The ice giants Uranus and Neptune also represent the closest match to most of the exoplanets discovered. A mission to Uranus would therefore radically transform our understanding of ice giants, the solar system, and exoplanets.
What we know about Uranus largely dates back to Voyager 2’s brief flyby nearly 40 years ago. No spacecraft has visited Uranus or Neptune since, making them the only planets yet to have a dedicated orbital mission. One of the main obstacles has been the sheer distance. Uranus is 19 times farther from the sun than the Earth is, and nearly twice as far as Saturn. Reaching it requires a heavy-lift launch vehicle and trajectories involving gravity assists from other planets.
Today, such heavy-lift launch vehicles are available, and trajectories have been identified for launch windows throughout the 2030s, which led the 2022 decadal survey to select a Uranus mission as its highest-priority flagship. The proposed concept, called Uranus Orbiter and Probe (UOP), would release a probe into the planet’s atmosphere and then embark on a multiyear tour of the system to study the planet’s interior, atmosphere, magnetosphere, rings, and moons.
Q: How do you envision your work on the Starship launch vehicle being deployed for further development?
A: Our study assessed the feasibility and potential benefits of launching a mission to Uranus with a Starship refueled in Earth’s orbit, instead of a Falcon Heavy (another SpaceX launch vehicle, currently operational). The Uranus decadal study showed that launching on a Falcon Heavy Expendable results in a cruise time of at least 13 years. Long cruise times present challenges, such as loss of team expertise and a higher operational budget. With the mission not yet underway, we saw an opportunity to evaluate launch vehicles currently in development, particularly Starship.
When refueled in orbit, Starship could launch a spacecraft directly to Uranus, without detours to other planets for gravity-assist maneuvers. The proposed spacecraft could then arrive at Uranus in just over six years, less than half the time currently envisioned. These high-energy trajectories require significant deceleration at Uranus to capture into orbit. If the spacecraft slows down propulsively, the burn would require 5 km/s of delta-v (the change in velocity, and hence the propellant, needed for the maneuver), far more than spacecraft typically perform, which might result in a very complex design. A more conservative approach, assuming a maximum burn of 2 km/s at Uranus, would result in a cruise time of 8.5 years.
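The ideal rocket equation shows why a 5 km/s insertion burn is so punishing. The back-of-the-envelope sketch below is our own illustration, not a figure from the study, and assumes a storable bipropellant engine with an effective exhaust velocity of about 3.1 km/s (a specific impulse near 315 seconds).

```python
# Tsiolkovsky rocket equation: mass ratio grows exponentially with delta-v,
# so the 5 km/s burn consumes a far larger share of the arriving mass.
import math

v_e = 3.1  # km/s, assumed effective exhaust velocity

for dv in (2.0, 5.0):  # km/s, the two insertion burns discussed above
    mass_ratio = math.exp(dv / v_e)      # initial mass / final mass
    prop_frac = 1.0 - 1.0 / mass_ratio   # propellant share of arriving mass
    print(f"delta-v {dv} km/s -> ~{prop_frac:.0%} of arriving mass is propellant")
```

Under these assumptions, the 2 km/s burn consumes roughly half of the arriving mass as propellant, while the 5 km/s burn consumes about 80 percent, leaving far less mass for the orbiter and its instruments and helping explain why aerocapture is attractive.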
An alternative to propulsive orbit insertion at Uranus is aerocapture, where the spacecraft, enclosed in a thermally protective aeroshell, dips into the planet’s atmosphere and uses aerodynamic drag to decelerate. We examined whether Starship itself could perform aerocapture, rather than being separated from the spacecraft shortly after launch. Starship is already designed to withstand atmospheric entry at Earth and Mars, and thus already has a thermal protection system that could, potentially, be modified for aerocapture at Uranus. While bringing a Starship vehicle all the way to Uranus presents significant challenges, our analysis showed that aerocapture with Starship would produce deceleration and heating loads similar to those of other Uranus aerocapture concepts and would enable a cruise time of six years.
In addition to launching the proposed spacecraft on a faster trajectory that would reach Uranus sooner, Starship’s capabilities could also be leveraged to deploy larger masses to Uranus, enabling an enhanced mission with additional instruments or probes.
Q: What does the recent successful test of Starship tell us about the viability and timeline for a potential mission to the outer solar system?
A: The latest Starship launch marked an important milestone for the company after three failed launches in recent months, renewing optimism about the rocket’s future capabilities. Looking ahead, the program will need to demonstrate on-orbit refueling, a capability central to both SpaceX’s long-term vision of deep-space exploration and this proposed mission.
Launch vehicle selection for flagship missions typically occurs approximately two years after the official mission formulation process begins, which has not yet commenced for the Uranus mission. As such, Starship still has a few more years to demonstrate its on-orbit refueling architecture before a decision has to be made.
Overall, Starship is still under development, and significant uncertainty remains about its performance, timelines, and costs. Even so, our initial findings paint a promising picture of the benefits that could be realized by using Starship for a flagship mission to Uranus.
