MIT Latest News

MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.

MIT engineers develop a magnetic transistor for more energy-efficient electronics

Wed, 09/23/2026 - 10:32am

Transistors, the building blocks of modern electronics, are typically made of silicon. Because it’s a semiconductor, this material can control the flow of electricity in a circuit. But silicon has fundamental physical limits that restrict how compact and energy-efficient a transistor can be.

MIT researchers have now replaced silicon with a magnetic semiconductor, creating a magnetic transistor that could enable smaller, faster, and more energy-efficient circuits. The material’s magnetism strongly influences its electronic behavior, leading to more efficient control of the flow of electricity. 

The team used a novel magnetic material and an optimization process that reduces the material’s defects, which boosts the transistor’s performance.

The material’s unique magnetic properties also allow for transistors with built-in memory, which would simplify circuit design and unlock new applications for high-performance electronics.

“People have known about magnets for thousands of years, but there are very limited ways to incorporate magnetism into electronics. We have shown a new way to efficiently utilize magnetism that opens up a lot of possibilities for future applications and research,” says Chung-Tao Chou, an MIT graduate student in the departments of Electrical Engineering and Computer Science (EECS) and Physics, and co-lead author of a paper on this advance.

Chou is joined on the paper by co-lead author Eugene Park, a graduate student in the Department of Materials Science and Engineering (DMSE); Julian Klein, a DMSE research scientist; Josep Ingla-Aynes, a postdoc in the MIT Plasma Science and Fusion Center; Jagadeesh S. Moodera, a senior research scientist in the Department of Physics; senior authors Frances Ross, the TDK Professor in DMSE, and Luqiao Liu, an associate professor in EECS and a member of the Research Laboratory of Electronics; and others at the University of Chemistry and Technology in Prague. The paper appears today in Physical Review Letters.

Overcoming the limits

In an electronic device, silicon semiconductor transistors act like tiny light switches that turn a circuit on and off, or amplify weak signals in a communication system. They do this using a small input voltage.

But a fundamental physical limit of silicon semiconductors prevents a transistor from operating below a certain voltage, which hinders its energy efficiency.
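The article doesn't name the limit, but the voltage floor it refers to is the well-known Boltzmann limit on subthreshold swing, roughly 60 mV per decade of current change at room temperature. A quick sketch of where that number comes from (standard device physics, not from the paper itself):

```python
import math

# Boltzmann limit on subthreshold swing: SS = (kT/q) * ln(10).
# This is the minimum gate-voltage change needed to shift a silicon
# transistor's current by a factor of 10, and it sets a floor on
# how little voltage switching can require.
k = 1.380649e-23     # Boltzmann constant, J/K
q = 1.602176634e-19  # elementary charge, C
T = 300.0            # room temperature, K

ss_volts = (k * T / q) * math.log(10)
print(f"Minimum subthreshold swing at {T:.0f} K: {ss_volts * 1000:.1f} mV/decade")
```

This evaluates to about 59.5 mV per decade at 300 K, which is why conventional silicon transistors cannot switch sharply below roughly that voltage scale without a new physical mechanism.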

To make more efficient electronics, researchers have spent decades working toward magnetic transistors that utilize electron spin to control the flow of electricity. Electron spin is a fundamental property that enables electrons to behave like tiny magnets.

So far, scientists have mostly been limited to using certain magnetic materials. These lack the favorable electronic properties of semiconductors, constraining device performance.

“In this work, we combine magnetism and semiconductor physics to realize useful spintronic devices,” Liu says.

The researchers replace the silicon in the surface layer of a transistor with chromium sulfur bromide, a two-dimensional material that acts as a magnetic semiconductor.

Due to the material’s structure, researchers can switch between two magnetic states very cleanly. This makes it ideal for use in a transistor that smoothly switches between “on” and “off.”

“One of the biggest challenges we faced was finding the right material. We tried many other materials that didn’t work,” Chou says.

They discovered that changing these magnetic states modifies the material’s electronic properties, enabling low-energy operation. And unlike many other 2D materials, chromium sulfur bromide remains stable in air.

To make a transistor, the researchers pattern electrodes onto a silicon substrate, then carefully align and transfer the 2D material on top. They use tape to pick up a tiny piece of material, only a few tens of nanometers thick, and place it onto the substrate.

“A lot of researchers will use solvents or glue to do the transfer, but transistors require a very clean surface. We eliminate all those risks by simplifying this step,” Chou says.

Leveraging magnetism

This lack of contamination enables their device to outperform existing magnetic transistors. Most others can only create a weak magnetic effect, changing the flow of current by a few percent or less. Their new transistor can switch or amplify the electric current by a factor of 10.

They use an external magnetic field to change the magnetic state of the material, switching the transistor using significantly less energy than would usually be required.

The material also allows them to control the magnetic states with electric current. This is important because engineers cannot apply magnetic fields to individual transistors in an electronic device. They need to control each one electrically.

The material’s magnetic properties could also enable transistors with built-in memory, simplifying the design of logic or memory circuits.

A typical memory device has a magnetic cell to store information and a transistor to read it out. Their method can combine both into one magnetic transistor.

“Now, not only are transistors turning on and off, they are also remembering information. And because we can switch the transistor with greater magnitude, the signal is much stronger so we can read out the information faster, and in a much more reliable way,” Liu says.

Building on this demonstration, the researchers plan to further study the use of electrical current to control the device. They are also working to make their method scalable so they can fabricate arrays of transistors.

This research was supported, in part, by the Semiconductor Research Corporation, the U.S. Defense Advanced Research Projects Agency (DARPA), the U.S. National Science Foundation (NSF), the U.S. Department of Energy, the U.S. Army Research Office, and the Czech Ministry of Education, Youth, and Sports. The work was partially carried out at the MIT.nano facilities.

Electrons in moiré crystals explore higher-dimensional quantum worlds

Fri, 04/03/2026 - 5:30pm

The electrons that power our society flow left and right through the circuitry in our electronics, back and forth along the transmission lines that make up our power grid, and up and down to light up every floor of every building. But the electrons in newly discovered “moiré crystals” move in much stranger ways. They can move left and right, back and forth, or up and down in our three-dimensional world, but these electrons also act as if they can teleport in and out of a mysterious fourth dimension of space that is perpendicular to our perceivable reality. Physicists have found that this strange, newly discovered quantum behavior has nothing to do with the electrons themselves and everything to do with the strange material environment in which they live.

The electrons in moiré crystals leap into a fourth dimension through a process called “quantum tunneling.” While a soccer ball sitting at the bottom of a hill will stay put until someone retrieves it, a quantum particle in a valley can jump out all on its own. Quantum tunneling may seem magical to us, but it is quite commonplace in the microscopic quantum world, on the length scales of atoms. Quantum tunneling is also important on larger length scales, particularly in large superconducting circuits that underlie an emerging landscape of quantum technology, as recognized by the 2025 Nobel Prize in Physics. 

However, quantum tunneling in moiré crystals is different: physicists have now measured that once an electron tunnels, it acts as if it had tunneled into a completely different world and come back again, as if it had been transported through a fourth “synthetic” dimension.

In a paper published recently in the journal Nature, MIT researchers realize a long-anticipated scalable technique for producing high-quality moiré materials as moiré crystals, overcoming a materials bottleneck for next-generation electronic applications. In addition, the electrons in these crystals act as if they can teleport through a fourth dimension of space, unlocking a realistic materials approach for realizing numerous theoretical predictions of higher-dimensional superconductivity and higher-dimensional topological properties in the laboratory.

The study’s co-lead authors are Kevin Nuckolls, a Pappalardo postdoc in physics at MIT, and Nisarga Paul PhD ’25, and the study’s corresponding author is Joe Checkelsky, professor of physics at MIT. In addition, the study’s MIT co-authors include Alan Chen, Filippo Gaggioli, Joshua Wakefield, and Liang Fu, along with collaborators at Harvard University, Toho University, and the National High Magnetic Field Laboratory.

Crystal perfection

To make a moiré material, physicists first start with atomically thin two-dimensional (2D) materials, like the thinnest sheets of carbon known as graphene. Moiré materials can be created by combining individual sheets of the same 2D material and twisting them back and forth with respect to one another. Moiré materials can also be created by combining two different 2D materials that are very similar, but not quite the same, which ensures that they can never perfectly match one another even when carefully aligned. Both of these methods create intricate interference patterns where the individual layers of moiré materials are nearly aligned in some areas and visibly misaligned in others. Physicists call these patterns “moiré superlattices,” named after historical French fabrics that show similarly beautiful patterns generated by overlaying two different threading patterns.
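The geometry behind these interference patterns follows a standard small-angle formula (not given in the article): for two identical lattices twisted by angle θ, the moiré period is approximately a / (2 sin(θ/2)), where a is the lattice constant. A sketch using graphene's lattice constant:

```python
import math

# Approximate moiré superlattice period for two identical 2D lattices
# twisted by a small angle theta: lambda ≈ a / (2 * sin(theta / 2)).
# Small twists stretch the pattern to scales far larger than the
# underlying atomic spacing.
def moire_period(a_nm: float, theta_deg: float) -> float:
    theta = math.radians(theta_deg)
    return a_nm / (2 * math.sin(theta / 2))

a_graphene = 0.246  # graphene lattice constant, nm
# 1.1 degrees is the "magic angle" of twisted bilayer graphene
for theta in (5.0, 1.1):
    print(f"{theta:>4.1f} deg -> moire period ~ {moire_period(a_graphene, theta):.1f} nm")
```

At a 1.1-degree twist the moiré period is roughly 13 nm, about 50 times the atomic lattice spacing, which is why a tiny misalignment between sheets produces the large-scale patterns the article describes.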

For more than a decade, moiré materials have completely reshaped how physicists design and control quantum material properties, and the physics labs at MIT have been the hotbed of transformative discoveries in this ever-growing research field. Pablo Jarillo-Herrero, the Cecil and Ida Green Professor of Physics at MIT, and Raymond Ashoori, professor of physics at MIT, were early adopters of new techniques for fabricating moiré materials. Together in 2014, their labs discovered that electrons in moiré materials made from graphene and the 2D material boron nitride live in an intricate quantum fractal known as “Hofstadter’s butterfly.” In 2018, Jarillo-Herrero’s lab discovered that moiré materials made from twisting two sheets of graphene were fertile grounds for unconventional superconductivity that, by some metrics, is one of the strongest superconductors ever discovered. Long Ju, the Lawrence C. and Sarah W. Biedenharn Associate Professor of Physics, and his lab discovered in 2024 that moiré materials made from multilayer graphene and boron nitride cause electrons to split apart into fractional pieces, a quantum phenomenon previously thought to be exclusively confined to extremely high magnetic fields, but now realized without the need for a magnetic field.

Common across all of these experiments, and those performed around the world, were the tireless efforts of students and postdocs in carefully assembling moiré material devices by hand, one at a time. To make a moiré material device, 2D materials like graphene are peeled using Scotch tape from rock-like crystals, such as graphite. Then, sticky polymer films and microscopes enable researchers to pick up different 2D materials one by one with a precise sequence of twist angles. Finally, these stacks of 2D materials are etched into individual devices that allow researchers to investigate their properties in the lab.

In their new study, Joe Checkelsky and his lab have discovered a new technique for generating moiré materials that skips over all of these laborious steps. Their new method takes an entirely different approach, and it’s one that can assemble moiré materials by the tens of thousands. Instead of assembling samples one by one and layer by layer, Checkelsky and his lab have found new chemical synthesis routes that enlist Mother Nature’s help to grow “moiré crystals” with high-quality moiré superlattices built into each of their layers. By analogy, if one were to think of previous generations of moiré materials like two stacked sheets of paper with different line spacings, Checkelsky has figured out how to generate entire libraries of encyclopedias whose odd-numbered pages and even-numbered pages have two different line spacings.

“It feels incredible for our team to have made this materials discovery, particularly at MIT,” says Nuckolls, co-lead author on the work. “Moiré materials have become a central focus of quantum materials research today in large part because of the work of our colleagues just down the hallway.”

In the end, it turns out that nature is by far the best at assembling moiré materials when given the right tools. The MIT team discovered that naturally grown moiré materials are nearly perfect and highly reproducible. This offers a long-anticipated proof-of-concept demonstration of a potentially scalable route to using moiré materials in next-generation electronics. Although there are many more obstacles to be overcome to transform these fundamental science results into usable technology, the team has demonstrated a crucial first step in the right direction.

4D in 4K

After discovering how to grow and manipulate moiré superlattices in moiré crystals, the team began to investigate their properties. Initially, the team found that the metallic properties of these materials were surprisingly complicated, but they soon shifted their perspective to think from a higher-dimensional point of view, an idea inspired by theoretical proposals made roughly half a century ago. To peer into this prospective four-dimensional quantum world, the team performed detailed studies of the electronic and magnetic properties of moiré crystals at very large magnetic fields. The electrons in common metals move in tight circular orbits when placed in a magnetic field. However, something very special happens when they move in moiré crystals with two different interfering lattices. This interference generates a moiré superlattice that is mathematically equivalent to an emergent four-dimensional “superspace” lattice. Guided by this new 4D superspace lattice, the team discovered that these electrons could now move through this fourth dimension when their motion aligns to the direction where the two competing lattices interfere the most.

“Metaphorically, our measurements uncover ‘shadows’ of an emergent 4D landscape upon which the electrons live,” says Nuckolls. “By carefully analyzing these 3D silhouettes from different angles and perspectives, our measurements reconstruct the 4D landscape that guides electrons in moiré crystals.”

Although this extra synthetic dimension is fictitious and the electrons in moiré crystals are actually still stuck in our 3D reality, they simulate a four-dimensional quantum world so closely that the measured properties of moiré crystals appear as if the researchers had actually performed their experiments in 4D. It seems like moiré crystals aren’t particularly bothered by whether the fourth dimension is fictitious and synthetic or if it’s real. It’s all the same to them.

“Mathematically, the equations describing the electron dynamics in these crystals are four-dimensional,” says co-lead author Nisarga Paul. “The electrons propagate in the synthetic dimension just as they do in our world’s three physical dimensions. It’s hard to detect this motion, but one of the striking realizations was that a magnetic field can reveal fingerprints of this synthetic dimension in experimentally measurable electronic properties known as quantum oscillations.”
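The link Paul describes between a material's electrons and its quantum oscillations is conventionally captured by the Onsager relation, which ties the oscillation frequency in an applied magnetic field to the extremal cross-sectional area of the Fermi surface. A sketch of that standard relation, with a hypothetical Fermi pocket (the k_F value below is illustrative, not a figure from the study):

```python
import math

# Onsager relation: the quantum-oscillation frequency F (in tesla) is
# proportional to the extremal Fermi-surface cross-section A_F (in 1/m^2):
#   F = hbar * A_F / (2 * pi * e)
hbar = 1.054571817e-34  # reduced Planck constant, J*s
e = 1.602176634e-19     # elementary charge, C

def oscillation_frequency(a_fermi: float) -> float:
    """Quantum-oscillation frequency in tesla for extremal area a_fermi (1/m^2)."""
    return hbar * a_fermi / (2 * math.pi * e)

# Illustrative circular Fermi pocket with k_F = 1e9 1/m (hypothetical value)
k_f = 1.0e9
area = math.pi * k_f**2
print(f"Oscillation frequency: {oscillation_frequency(area):.0f} T")
```

Measuring these frequencies at many field angles is how experiments map out Fermi-surface geometry; in the moiré crystals, the pattern of such oscillations is what carries the fingerprints of the synthetic fourth dimension.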

Going forward, the team will explore how a wide variety of material properties might benefit from extra synthetic dimensions, which now could be within reach of realization.

“It’s fascinating to consider what may be possible next,” Checkelsky says. “There are long-standing theoretical predictions for higher-dimensional conductors and superconductors, for example — materials of this type may offer a new platform to examine these experimentally in the laboratory.”

This research was supported, in part, by the Gordon and Betty Moore Foundation, the U.S. Department of Energy Office of Science, the U.S. Office of Naval Research, the U.S. Army Research Office, U.S. Air Force Office of Scientific Research, MIT Pappalardo Fellowships in Physics, the Swiss National Science Foundation, and the U.S. National Science Foundation. 

Urban planning students engage with communities through the Freedom Summer Fellowship

Fri, 04/03/2026 - 5:15pm

For the past three summers, MIT master’s students and recently graduated planners have collaborated with cities and community organizations to advance climate, infrastructure, and economic development initiatives. They’re known as the Freedom Summer Fellows, participants in an impact-driven program launched in 2023 by the MIT Department of Urban Studies and Planning (DUSP), an expression of the department’s commitment to equal opportunity and experiential learning. 

Over the course of eight to 10 weeks, fellows are immersed in the real stakes and challenges of projects that involve navigating a network of interconnected causes, competing agendas, a range of stakeholders, and rapidly changing circumstances. Host organizations define discrete tasks and provide ongoing supervision, while fellows develop actionable tools and materials designed to empower organizations in the long term — from policy research and grant-application strategies for navigating funding, to analytical tools and implementation frameworks for informed, streamlined project management.

“You can’t teach planning today without grappling with how policy actually unfolds within communities: under pressure, with limited resources, and with multiple conflicting interests,” says Phillip Thompson, professor of urban planning at MIT and former New York City deputy mayor for strategic policy initiatives under Mayor Bill de Blasio. “The Freedom Summer Fellowship is about capacity building through cooperative learning — a knowledge exchange intended to have lasting positive results for communities, while equipping planners with critical experience as they embark on their careers.”

From classroom to communities

The fellowship emerged from Bills and Billions, a DUSP Independent Activities Period course taught by Thompson and Elisabeth Reynolds, professor of the practice at MIT and former special assistant to President Joe Biden for manufacturing and economic development. The course examines U.S. federal policy and its intersection with local economic development, labor markets, and the infrastructure of industry, energy, and the built environment more broadly.  

“We were at an inflection point,” says Reynolds, speaking of her return to MIT in fall 2022 after serving at the National Economic Council. “There was a real sense of urgency about the wave of new legislation and funding around clean energy, infrastructure, and reindustrialization, and much of the investment and work in these areas continues today. It’s a very dynamic time for cities and states, with significant experimentation and innovative strategies — a perfect environment for MIT graduate students and recent grads.”  

Securing federal funding is typically dependent on competitive grants requiring technical, financial, and community planning that many local governments and nonprofits are not equipped for. “While much funding to localities has since been cut, the momentum for change is still there,” says Thompson. “The incentives put forward by the Inflation Reduction Act encouraged localities and communities to initiate their own clean energy projects, and there’s a continued recognition that climate change is going to take a movement from the bottom up.”

At a time when the U.S. is experiencing a paradigm shift in policy — characterized by challenges to a free-market economy and global trade, renewed investment in industrial strategy, and the lifting of environmental and other regulations — the fellowship offers a way to support the planning and implementation of equitable development strategies and to redirect resources where they are needed most.

From placements to professional practice

Since 2023, 31 Freedom Summer Fellows have collaborated with 19 host organizations, and contributed to more than $100 million in state, federal, and philanthropic grant applications, including a successful $3 million EPA Climate Pollution Reduction grant for Hawaii. Fellows have helped convene more than 3,500 community members and have produced dozens of planning tools, including implementation maps, technical tools, and dashboards that support equitable project design and production. Collaborations have inspired the focus of graduate theses produced as client reports for hosts, and in several cases fellows have extended their positions to full-time roles. 

For Sara Jex MCP ’25, her 2024 Freedom Summer Fellowship became a direct pathway from graduate study to professional practice. She was placed with the Site Readiness Fund for Good Jobs in Cleveland, Ohio, an organization working to transform brownfields and disinvested industrial sites into engines of inclusive economic growth.

“Much of my work that summer involved developing an EPA Community Change Grant application for a proposed industrial district spanning over 350 acres — 200 of which we’re looking to reactivate,” says Jex. “So, it’s a transformative project that will bring in new jobs, but there are also major challenges that come with industrial place-making, especially given the proximity to residential neighborhoods. In Rust Belt cities, there’s a history of industrial disinvestment leading to job loss, population decline, and environmental injustices. We don’t want to repeat the harms of the past — we want to create something better.”

To support equitable development strategies for the industrial corridor, Jex helped to prepare technical tools mapping the effects of development on home values, seeking to identify a balance of growth, affordability, and resident benefit. She also evaluated wealth-building strategies such as land trusts and mixed-income neighborhood trusts, offering recommendations for community ownership of land holdings.

“Our vision for the project is not just about bringing in new businesses and creating new jobs,” says Jex, “it’s also about going beyond job creation to create lasting benefit for communities surrounding the sites.”

Jex continued working with Site Readiness Fund for Good Jobs during her second year at MIT and now holds a full-time role at the organization. “The Freedom Summer Fellowship gave me a platform to start building my planning career,” she reflects. “It was eye-opening to be in a cohort of other students doing similar work across the country. The insights from our weekly meetings have stayed with me since graduating — we were able to share perspectives on the challenges we were facing from multiple different contexts, and that brought a new dimension to the learning process.”

Redefining resilience

For Deena Darby, an MIT master’s student with a background in architecture and public art, her 2025 Freedom Summer Fellowship offered a way to bridge creative practice with structural change. Working with the LA84 Foundation and the Ubuntu Climate Initiative in Los Angeles, Darby focused on neighborhood-based resilience in the context of the 2025 wildfires and the upcoming 2028 Olympics.

“My decision to apply to do a master’s in city planning at MIT was informed by the projects I had been working on in Harlem, the Bronx, Brooklyn, and other cities, including Philadelphia and Detroit. Much of that work involved community engagement work when producing public art at an architectural scale, but I kept feeling that residents deserved more than an art piece at the end of a project.” 

During the fellowship, Darby contributed to asset mapping across six neighborhoods, developed case studies on resilience hubs, and helped shape strategies that tied climate adaptation to culture, play, and community ownership. Her immersion in the lived experience of those neighborhoods — visiting sites, meeting organizers, and participating in local coalitions — was crucial to her development of strategic recommendations for decentralized infrastructure, cultural arts cohorts, and neighborhood-based resilience festivals.

“Resilience is often narrowly framed around climate,” Darby reflects. “But what we were really redefining was economic resilience, social resilience, and the ability of communities to tell their own stories.” 

Darby’s fellowship experience has led to her thesis project, working with the residents of a historically Black neighborhood in her hometown of Savannah, Georgia, who are experiencing displacement. “Coming from an architecture and planning background, my instinct is to ask, How can we frame these issues in terms of cultural preservation and community-based policy development and implementation?” says Darby. “How can we manage change, with the goal of benefiting present residents as well as honoring those who have lived here in the past?”

For Darby, gaining practical understanding of the inseparability of planning and policy has been key to shaping her approach to navigating the educational opportunities at MIT. “In a higher-education context, you’ll often find policy housed separately from planning. But the moment you’re working in situ, it doesn’t make sense to separate the two. For me, the fellowship was a bridge between two often-siloed disciplines.”

Reassessing expertise

“Impact at MIT is typically associated with technological breakthroughs,” says Reynolds. “But much of MIT’s work can make a huge difference when applied in the near term, on the ground. At DUSP, we’re all about bringing theory and practice together, about the interrelation of communities, infrastructure, policy, and how that maps out in the built environment. We can bring expertise and knowledge into the field tomorrow, into places that can immediately benefit from the collaboration.” 

Initial funding for the fellowship at MIT was provided by the MIT Climate Project, in addition to national foundations. Faculty are exploring ways to expand and increase the number of student placements, further embedding relationships between MIT and cities across the United States. There are also discussions about sharing the model with other institutions, including historically Black colleges and international collaborators. 

“We’re just starting these conversations with other institutions, but it’s the model of engaged, experiential, cooperative learning that matters,” says Thompson. “It’s clear that the experts aren’t necessarily those who have read a lot of books about planning or design, but those who are embedded within communities, trying to figure out these challenges from the inside.”

The planner might not be the primary expert — but they are the ones who guide decisions that shape the futures of communities. The Freedom Summer Fellowship is about fostering a culture of urban planning in which those decisions are centered on the lived experience of stakeholders. As Jex put it, reflecting on her experience in Cleveland: “Planners are the people who make decisions about how cities shape access to opportunity.”

Applications for the 2026 Freedom Summer Fellowships are being accepted now through April 7. 

Why does wealth inequality matter?

Fri, 04/03/2026 - 5:00pm

The MIT James M. and Cathleen D. Stone Center on Inequality and Shaping the Future of Work recently hosted a half-day symposium at the Institute on “Why Wealth Inequality Matters.”

Three panel discussions convened experts from economics, philosophy, sociology, and political science to explore the origins, mechanisms, and political consequences of wealth inequality.

Richard Locke, John C Head III Dean of the MIT Sloan School of Management, welcomed attendees to the symposium, emphasizing how the event reflects MIT’s commitments to interdisciplinary collaboration and to addressing “society's most pressing issues.”

Here are three key takeaways from the afternoon’s panels.

When wealth buys political influence and legal immunity, democracy is threatened

Hélène Landemore of Yale University argued that wealth inequality isn’t inherently problematic, but becomes dangerous when wealth offers disproportionate influence in other spheres, including political power.

Wojciech Kopczuk of Columbia University echoed this, emphasizing that wealth is a complicated and often ambiguous measure of inequality. Wealth reflects institutional contexts — for example, weak safety nets drive precautionary saving. Still, he agreed that wealth is a relevant metric at the very top, where it correlates with political capture and corporate power.

Landemore explained that when the wealthy dominate policy discussions, “some groups are systematically disbelieved or ignored, and the result is policy failure.” For example, French carbon taxes disproportionately burdened working-class people who were more dependent on cars, which led to the yellow vests protests.

Elizabeth Anderson of the University of Michigan extended this point to corporate power, warning that extreme concentration gives powerful firms de facto immunity from the rule of law — the wealthiest companies can hire hundreds of lawyers to swamp the legal system.

To counteract these negative consequences of high inequality, Oren Cass of American Compass argued that strengthening worker power is key. Redistribution, he said, is a way to improve living standards, but “it is not a solution to the kinds of problems that actually plague democratic capitalism.”

The roots of the racial wealth gap are so deep that equal opportunity alone won’t close it

Ellora Derenoncourt of Princeton University explained that in the United States today, the wealth gap between Black and white Americans is 6:1. In other words, for every dollar of wealth held by an average white American, the average Black American holds about $0.17. She noted that this racial wealth gap has largely remained unchanged for the past 50 years.

“Even if we were to equalize differences in wealth accumulating opportunities — equal savings rates, equal capital gains rates going forward — we’re still hundreds of years away from convergence,” she explained, due to the magnitude of the original gap.
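Derenoncourt's point can be sketched with simple compound-growth arithmetic. The 5 percent growth rate and the half-point growth advantage below are illustrative assumptions, not figures from the panel:

```python
import math

# Start from the roughly 6:1 white-to-Black per-capita wealth ratio cited above.
ratio = 6.0

# Equal growth rates for both groups leave the ratio unchanged forever:
r = 0.05
w_white, w_black = ratio, 1.0
for _ in range(100):
    w_white *= 1 + r
    w_black *= 1 + r
print(f"Ratio after 100 years of equal growth: {w_white / w_black:.1f}")

# Even a persistent half-point growth advantage for the lagging group
# takes centuries to close the gap: solve ratio = g**years for years,
# where g is the ratio of the two groups' annual growth factors.
advantage = 0.005
g = (1 + r + advantage) / (1 + r)
years = math.log(ratio) / math.log(g)
print(f"Years to converge with a 0.5-point growth advantage: {years:.0f}")
```

Under these assumptions, equal rates preserve the 6:1 ratio indefinitely, and even a sustained half-point advantage takes well over 300 years to reach parity, which is the sense in which equalizing opportunities going forward leaves convergence "hundreds of years away."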

Alexandra Killewald of the University of Michigan added that the racial wealth gap is actively rebuilt each generation through unequal schools, unequal pay, and unequal access to homeownership.

“The past matters, but it’s not just about the past,” she explained. Even if a massive reparations plan were implemented, “if we just let things go on as they are, we will start to recreate inequality from Day 1.”

High inequality and authoritarianism reinforce each other

Daron Acemoglu of MIT described how increasing inequality goes hand-in-hand with the weakening of democracy: “Once inequality starts building up, it also naturally erodes democracies’ claim for legitimacy.”

High inequality, he argued, is both a cause and an effect of liberal democracy failing to deliver on its promise of shared prosperity. This failure, in turn, weakens public support for democracy.

Building on this argument, Sheri Berman of Barnard College examined why economically disadvantaged voters in the United States and Europe have increasingly voted for right-wing populist parties, despite holding economically progressive views.

She described how center-left parties have transformed since the late 20th century, converging with the right on economic policy (embracing free trade and market deregulation) while moving left on social and cultural issues. As a result, she argued, working-class and rural voters no longer saw center-left parties as champions of their economic interests, or as reflecting their social and cultural preferences.

David Yang of Harvard University explained that once authoritarianism takes hold, regimes continue to produce inequality. For example, non-democratic regimes are most responsive not to the average citizen, but to whoever poses the greatest threat to regime survival. In China, this tends to be the wealthier urban population capable of organizing large-scale collective action.

Working to advance the nuclear renaissance

Fri, 04/03/2026 - 4:55pm

Today, there are 94 nuclear reactors operating in the United States, more than in any other country in the world, and these units collectively provide nearly 20 percent of the nation’s electricity. That is a major accomplishment, according to Dean Price, but he believes that our country needs much more out of nuclear energy, especially at a moment when alternatives to fossil fuel-based power plants are desperately being sought. He became a nuclear engineer for this very reason — to make sure that nuclear technology is up to the task of delivering in this time of considerable need.

“Nuclear energy has been a tremendous part of our nation’s energy infrastructure for the past 60 years, and the number of people who maintain that infrastructure is incredibly small,” says Price, an MIT assistant professor in the Department of Nuclear Science and Engineering (NSE), as well as the Atlantic Richfield Career Development Professor in Energy Studies. “By becoming a nuclear engineer, you become one of a select number of people responsible for carbon-free energy generation in the United States.” 

That was a mission he was eager to take part in, and the goals he set for himself were far from modest: He wanted to help design and usher in a new class of nuclear reactors, building on the safety, economics, and reliability of the existing nuclear fleet.

Price has never wavered from this objective, and he’s only found encouragement along the way. The nuclear engineering community, he says, “is small, close-knit, and very welcoming. Once you get into it, most people are not inclined to do anything else.”

Illuminating the relationships between physical processes

In his first research project as an undergraduate at the University of Illinois Urbana-Champaign, Price studied the safety of the steel and concrete casks used to store spent reactor fuel rods after they’ve cooled off in tanks of water, typically for several years. His analysis indicated that this storage method was quite safe, although the question as to what should ultimately be done with these fuel casks, in terms of long-term disposal, remains open in this country.

After starting graduate studies at the University of Michigan in 2020, Price took up a different line of research that he’s still engaged in today. That area of study, called multiphysics modeling, involves looking at various physical processes going on in the core of a nuclear reactor to see how they interact — an alternative to studying these processes one at a time.

One key process, neutronics, concerns how neutrons buzz around in the reactor core causing nuclear fission, which is what generates the power. A second process, called thermal hydraulics, involves cooling the reactor to extract the heat generated by neutrons. A multiphysics simulation, analyzing how these two processes interact, could show how the heat carried away as the reactor produces power affects the behavior of neutrons, because the hotter the fuel is, the less likely neutrons are to cause fission.

“If you ever want to change your power level, or do anything with the reactor, the temperature of the fuel is a critical input that you need to know,” says Price. “Multiphysics modeling allows us to correlate the fission neutronics processes with a thermal property, temperature. That, in turn, can help us predict how the reactor will behave under different conditions.”
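The feedback loop Price describes can be illustrated with a toy fixed-point coupling, in which a thermal model maps power to fuel temperature and a neutronics model with Doppler-style feedback maps fuel temperature back to power. All coefficients below are invented for illustration; a real multiphysics code solves far richer equations for both physics.

```python
# Toy fixed-point coupling of two physics. All numbers are invented.

def fuel_temp(power_mw, coolant_k=560.0, k_thermal=0.45):
    # Thermal hydraulics stand-in: fuel temperature rises with power.
    return coolant_k + k_thermal * power_mw

def power_from_neutronics(fuel_k, p_nominal=1000.0, t_ref=1000.0, alpha=2e-4):
    # Neutronics stand-in: hotter fuel suppresses fission, lowering power.
    return p_nominal * (1.0 - alpha * (fuel_k - t_ref))

def couple(p0=1000.0, tol=1e-6, max_iter=100):
    # Alternate between the two models until the power stops changing.
    p = p0
    for _ in range(max_iter):
        p_new = power_from_neutronics(fuel_temp(p))
        if abs(p_new - p) < tol:
            break
        p = p_new
    return p_new, fuel_temp(p_new)

power, temp = couple()  # settles near 998 MW and 1009 K for these numbers
```

Because the feedback here is mild, the alternation converges in a handful of iterations; in production codes the same idea appears as Picard or Newton iteration over far larger coupled systems.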

Multiphysics modeling for light water reactors, which are the ones operating today with capacities on the order of 1,000 megawatts, is pretty well established, Price says. But methods for modeling advanced reactors — small modular reactors (SMRs, with capacities ranging from around 20 to 300 megawatts) and microreactors (rated at 1 to 20 megawatts) — are far less mature. Only a very small number of these reactors are operating today, but Price is focusing his efforts on them because of their potential to produce power more cheaply and more safely, along with their greater flexibility in power and size.

Although multiphysics simulations have supplied the nuclear community with a wealth of information, they can require supercomputers to solve, or find approximate solutions to, coupled and extremely difficult nonlinear equations. In the hopes of greatly reducing the computational burden, Price is actively exploring artificial intelligence approaches that could provide similar answers while bypassing those burdensome equations altogether. That has been a central theme of his research agenda since he joined the MIT faculty in September 2025.

A crucial role for artificial intelligence

Artificial intelligence, and machine-learning methods in particular, excels at finding patterns concealed within data, such as correlations between variables critical to the functioning of a nuclear plant. For example, Price says, “if you tell me the power level of your reactor, it [AI] could tell you what the fuel temperature is and even tell you the 3-dimensional temperature distribution in your core.” And if this can be done without solving any complicated differential equations, computational costs could be greatly reduced.
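The kind of shortcut Price describes is often called a surrogate model: fit a cheap model to data from the expensive simulation, then answer queries by evaluating the fit. A minimal sketch, using synthetic data from an invented linear power-to-temperature relationship rather than any real reactor physics:

```python
# Toy surrogate: learn the power -> fuel temperature map from data, then
# answer queries without solving any equations. The "training data" are
# synthetic points from an invented relationship, purely for illustration.
import numpy as np

rng = np.random.default_rng(0)
power = rng.uniform(200, 1200, size=200)            # MW, training inputs
temp = 560 + 0.45 * power + rng.normal(0, 5, 200)   # K, invented ground truth

surrogate = np.poly1d(np.polyfit(power, temp, deg=1))

# Querying the surrogate is a cheap polynomial evaluation, not a PDE solve.
t_pred = float(surrogate(1000.0))  # close to the underlying 1010 K
```

In practice the surrogate would be trained on outputs of a multiphysics code and would use a far more expressive model (a neural network, say), but the workflow of train once, then query cheaply is the same.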

Price is investigating several applications where AI may be especially useful, such as helping with the design of novel kinds of reactors. “We could then rely on the safety frameworks developed over the past 50 years to carry out a safety analysis of the proposed design,” he says. “In this way, AI will not be directly interfacing with anything that is safety-critical.” As he sees it, AI’s role would be to augment established procedures, rather than replacing them, helping to fill in existing gaps in knowledge.

When a machine-learning model is given a sufficient amount of data to learn from, it can help us better understand the relationship between key physical processes — again without having to solve nonlinear differential equations. 

“By really pinning down those relationships, we can make better design decisions in the early stages,” Price says. “And when that technology is developed and deployed, AI can help us make more intelligent control decisions that will enable us to operate our reactors in a safer and more economical way.”

Giving back to the community that nurtured him

Simply put, one of his chief goals is to bring the benefits of AI to the nuclear industry, and he views the possibilities as vast and largely untapped. Price also believes that he is well-positioned as a professor at MIT to bring us closer to the nuclear future that he envisions. As he sees it, he’s working not only to develop the next generation of reactors, but also to help prepare the next generation of leaders in the field.

Price became acquainted with some prospective members of that “next generation” in a design course he co-taught last fall with Curtis Smith, the KEPCO Professor of the Practice of Nuclear Science and Engineering. For Price, that introduction lasted just a few months, but it was long enough for him to discover that MIT students are exceptionally motivated, hard-working, and capable. Not surprisingly, those happen to be the same qualities he’s hoping to find in the students that join his research team.

Price vividly recalls the support he received when taking his first, tentative steps in this field. Now that he’s moved up the ranks from undergraduate to professor, and acquired a substantial body of knowledge along the way, he wants his students “to experience that same feeling that I had upon entering the field.” Beyond his specific goals for improving the design and operation of nuclear reactors, Price says, “I hope to perpetuate the same fun and healthy environment that made me love nuclear engineering in the first place.”

Toward cheaper, cleaner hydrogen production

Fri, 04/03/2026 - 12:00am

Hydrogen sits at the center of some of the world’s most important industrial processes, but its production still comes with a heavy environmental cost. Today, most hydrogen is produced through high-emissions processes like steam methane reforming and coal gasification.

But hydrogen can also be made by splitting water molecules using renewable electricity, eliminating fossil fuel emissions and other toxic byproducts. Such “green hydrogen” is made by running an electric current through water in an electrolyzer.

Green hydrogen won’t scale on its climate benefits alone. It also has to be cost-competitive with the traditional methods of production.

1s1 Energy thinks it has the technology to finally make green hydrogen go mainstream. The company says its boron-based membrane material unlocks previously unachievable performance and durability in electrolyzers.

In tests with partners, 1s1 says, electrolyzers with its membranes needed just 70 percent of the energy to produce each kilogram of hydrogen, compared to incumbent devices.

“Green hydrogen has been a hard industry to have success in so far,” acknowledges 1s1 co-founder Dan Sobek ’88, SM ’92, PhD ’97. “The difference with us is we’ve done very targeted customer discovery. We have a very strong value proposition that’s not just about decarbonization. We have a pipeline of potential customers that see around a 60 percent reduction in operating costs with our technology. That’s a nice point of entry.”

Although 1s1 is focused on hydrogen production now, its technology could also be used in fuel cells and solid-state batteries, and to extract critical metals from mining waste. The company is beginning trials in some of those applications, and it is working with a large materials company to scale up production of its membranes for hydrogen production.

“We’re at an inflection point for the company,” Sobek says. “The plan is, by 2030, to have a solid business in several segments: electrolyzers, mineral extraction, and in collaborations with several large companies. But right now, we have to be judicious and focused.”

Improving electrolyzers

Sobek was born and raised in Argentina, but he also grew up at MIT over the course of three degrees and more than a decade. He first studied aeronautics and astronautics at MIT, then jumped to mechanical engineering as a graduate student, then moved to the Department of Electrical Engineering and Computer Science, where he worked under PhD advisors and MIT professors Martha Gray and Stephen Senturia. His thesis focused on a technique for quickly measuring optical properties of large numbers of biological cells.

“A lot of my learnings around microfabrication and materials chemistry ended up being really relevant for 1s1,” Sobek says. “A class that was very important to me was taught by Professor Amar Bose. I was a teaching assistant for him for a couple of semesters, and that had an incredible influence on my thinking.”

Following graduation, Sobek worked in microelectronics and microfluidics before founding his own company, Zymera, in 2004. The company developed deep-tissue imaging technology for detecting cancer and other serious diseases.

Around 2013, Sobek started talking to his Zymera co-founder, Sukanta Bhattacharyya, about making electrolysis more efficient, focusing on “proton exchange membrane” electrolyzers. Such electrolyzers use a large amount of electricity to split water into hydrogen and oxygen. At their center is a membrane that can lose efficiency to electrical resistance.

On top of the efficiency challenge, electricity is more expensive than fossil fuels in many parts of the world. Traditional hydrogen production also has the benefit of existing infrastructure, making it that much more difficult for green hydrogen production to scale.

Sobek and Bhattacharyya knew the most important part of such electrolyzers is their proton-conducting membrane, which shuttles hydrogen ions from the anode to the cathode in the electrolyzer’s electrochemical cell.

“I asked Sukanta how we could improve the efficiency and durability of that element,” Sobek recalls. “He gave me a one-word answer: boron.”

Boron can be given a negative charge, which makes hydrogen ions, or protons, bond to it more quickly. The hydrogen ions can then be filtered through the membrane and released as they move through the cell. Boron-based materials are also more stable and resistant to corrosion, further improving the long-term performance of electrolyzers.

The company was officially founded in late 2019. After years of development, today 1s1 attaches a chemically tailored version of boron onto polymer materials to create its membranes for exchanging protons.

“These are first-of-a-kind membranes with stable and durable, super-acid proton exchange groups that do not poison catalysts,” Sobek says.

Tiny membranes with big impact

In 2021, the U.S. Department of Energy set a goal for proton exchange membrane electrolysis to achieve 77 percent electrical efficiency by 2031. Sobek says 1s1 is already reaching that milestone in tests.
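That efficiency target translates directly into energy per kilogram. A quick back-of-envelope check, assuming the DOE figure is defined against hydrogen’s higher heating value (about 39.4 kWh per kilogram) and taking 65 percent as an assumed efficiency for today’s typical systems:

```python
# Back-of-envelope: electrical energy needed per kilogram of hydrogen,
# assuming efficiency is defined on the higher heating value (HHV).
HHV_KWH_PER_KG = 39.4  # energy content of hydrogen, HHV basis

def kwh_per_kg(efficiency):
    return HHV_KWH_PER_KG / efficiency

doe_target = kwh_per_kg(0.77)     # ~51.2 kWh/kg at the 77 percent target
assumed_today = kwh_per_kg(0.65)  # ~60.6 kWh/kg at an assumed 65 percent
```

By this arithmetic, hitting the 77 percent target would trim roughly 9 kWh from every kilogram of hydrogen relative to a 65 percent system; the 65 percent figure is an assumption for comparison, not a number from the article.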

“It’s not just the technology, but the way we’re applying it,” Sobek says. “We’re making hydrogen viable for use in the production of different industrial chemicals.”

1s1 is currently conducting pilots with partners, including an electrical utility owned by a large steel company in Brazil. The company is also actively exploring other applications for its technology. Last year, 1s1 announced a project to produce green ammonia with the company Nitrofix through joint funding from the U.S. Department of Energy and the Israeli Ministry of Energy and Infrastructure. It’s also working with a large mine in Brazil to extract a material called niobium, which is useful for high-strength steel as well as fast-charging batteries. A similar process could even be used to extract gold.

“We can do that without using harsh chemicals, because the standard processes used to extract niobium and gold use extremely strong acids at high temperatures or extremely toxic chemicals,” Sobek says. “It’s gratifying for me because my home country of Argentina has had a lot of problems with the use of toxic chemicals to extract gold. We’re trying to enable low-cost, responsible mining.”

As 1s1 scales its membrane technology, Sobek says the goal is to deploy wherever the technology can improve processes.

“We have a large number of potential customers because this technology is really foundational,” Sobek says. “Creating high-impact technologies is always fun.”

Lincoln Laboratory laser communications terminal launches on historic Artemis II moon mission

Thu, 04/02/2026 - 9:00am

In 1969, Apollo 11 astronaut Neil Armstrong stepped onto the moon’s surface — a momentous engineering and science feat marked by his iconic words: "That’s one small step for man, one giant leap for mankind." Now, NASA is making history again.

With the successful launch of NASA’s Artemis II mission yesterday, four astronauts are set to become the first humans to travel to the moon in more than 50 years. In 2022, the uncrewed Artemis I mission demonstrated that NASA’s new Orion spacecraft could travel farther into space than ever before and return safely to Earth. Building on that success, the 10-day Artemis II mission will pave the way for future Artemis missions, which aim to land astronauts on the moon to prepare for a lasting lunar presence, and eventually human missions to Mars.

As it orbits the moon, the Orion spacecraft will carry an optical (laser) communications system developed at MIT Lincoln Laboratory in collaboration with NASA Goddard Space Flight Center. Called the Orion Artemis II Optical Communications System (O2O), the system is capable of higher-bandwidth data transmissions from space compared to traditional radio-frequency (RF) systems. During the Artemis II mission, O2O will use laser beams to send high-resolution video and images of the lunar surface down to Earth.

"Space-based communications has always been a big challenge," says lead systems engineer Farzana Khatri, a senior staff member in the laboratory’s Optical and Quantum Communications Group. "RF communications have served their purpose well. However, the RF spectrum is highly congested now, and RF does not scale well to longer distances across space. Laser communication [lasercom] is a solution that could solve this problem, and the laboratory is an expert in the field, which was really pioneered here."

Artemis II is historic not only for renewing human exploration beyond Earth, but also for being the first crewed lunar flight to demonstrate lasercom technologies, which are poised to revolutionize how spacecraft communicate. Lincoln Laboratory has been developing such technologies for more than two decades, and NASA has been infusing them into its missions to meet the growing demands of long-distance and data-intensive space exploration.

"The Orion spacecraft collects a huge amount of data during the first day of a mission, and typically these data sit on the spacecraft until it splashes down and can take months to be offloaded," Khatri says. "With an optical link running at the highest rate, we should be able to get all the data down to Earth within a few hours for immediate analysis. Furthermore, astronauts will be able to communicate in real-time over the optical link to stay in touch with Earth during their journey, inspiring the public and the next generation of deep-space explorers, much like the Apollo 11 astronauts who first landed on the moon 57 years ago."

At the heart of O2O is the laboratory-developed Modular, Agile, Scalable Optical Terminal (MAScOT). About the size of a house cat, MAScOT features a 4-inch telescope mounted on a two-axis pivoted support (gimbal) with fixed backend optics. The gimbal precisely points the telescope toward the desired data recipient or sender and tracks the laser beam through which communications signals are emitted and received. Underneath the gimbal, in a separate assembly, are the backend optics, which contain light-focusing lenses, tracking sensors, fast-steering mirrors, and other components to finely point the laser beam.

MAScOT made its debut in space as part of the laboratory’s Integrated Laser Communications Relay Demonstration (LCRD) LEO User Modem and Amplifier Terminal (ILLUMA-T), which launched to the International Space Station in November 2023. Over the following six months, the laboratory team performed experiments to test and characterize the system's basic functionality, performance, and utility for human crews and user applications. Initially, the team checked whether the ILLUMA-T-to-LCRD optical link was operating at the intended data rates in both directions: 622 Mbps down and 51 Mbps up. In fact, even higher data rates were achieved: 1.2 Gbps down and 155 Mbps up. MAScOT’s lasercom terminal architecture, which was recognized with a 2025 R&D 100 Award, is now being used for Artemis II and will support future space missions.
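To put those rates in perspective, here is a quick calculation of how long a hypothetical 1-terabyte data set would take to downlink at the planned versus achieved downlink speeds. The 1 TB figure is an assumption for illustration, not a mission specification.

```python
# Hours to transfer a data set of a given size at a given link rate.
def hours_to_downlink(size_bytes, rate_bps):
    return size_bytes * 8 / rate_bps / 3600

ONE_TB = 1e12  # hypothetical mission data volume, bytes
planned = hours_to_downlink(ONE_TB, 622e6)   # ~3.6 hours at 622 Mbps
achieved = hours_to_downlink(ONE_TB, 1.2e9)  # ~1.9 hours at 1.2 Gbps
```

Either way the result is consistent with Khatri’s point that a full day’s data could come down within a few hours rather than waiting for splashdown.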

"Our success with ILLUMA-T laid the foundation for streaming HD [high-definition] video to and from the moon," says co-principal investigator Jade Wang, an assistant leader of the Optical and Quantum Communications Group. "You can imagine the Artemis astronauts using videoconferencing to connect with physicians, coordinate mission activities, and livestream their lunar trips."

A dedicated operations team from Lincoln Laboratory is following the 10-day Artemis II mission from ground stations in Houston, Texas, and White Sands, New Mexico, and even as far as an experimental ground station in Australia, which allows for a better view of the spacecraft from the Southern Hemisphere. Leading up to the launch, the operations team had been making monthly trips to the Houston and White Sands ground stations to perform maintenance and simulations of various stages of the Artemis mission — from prelaunch to launch to the journey to the moon and back to the splashdown at the end of the mission. 

"Doing these monthly simulations is important so we all stay fresh and engaged, especially when there is a launch delay," says Khatri, who adds that team members have had the opportunity to meet and speak with the four astronauts several times during these trips.

Lessons learned throughout the Artemis II mission will pave the way for humans to return to the lunar surface and beyond, eventually to Mars. Through the Artemis program, NASA will travel farther into space and explore more of the moon while creating an enduring presence in deep space and a legacy for future generations.

O2O is funded by the Space Communication and Navigation (SCaN) program at NASA Headquarters in Washington. O2O was developed by a team of engineers from NASA’s Goddard Space Flight Center and Lincoln Laboratory. This partnership has led to multiple lasercom missions, such as the 2013 Lunar Laser Communication Demonstration (LLCD), the 2021 LCRD, the 2022 TeraByte Infrared Delivery (TBIRD), and the 2023 ILLUMA-T.

MIT researchers measure traffic emissions, to the block, in real-time

Thu, 04/02/2026 - 5:00am

In a study focused on New York City, MIT researchers have shown that existing sensors and mobile data can be used to generate a near real-time, high-resolution picture of auto emissions, which could be used to develop local transportation and decarbonization policies.

The new method produces much more detailed data than some other common approaches, which use intermittent samples of vehicle emissions. The researchers say it is also more practical and scales up better than some studies that have aimed for very granular emissions data from a small number of automobiles at once. The work helps bridge the gap between less-detailed citywide emissions inventories and highly detailed analyses based on individual vehicles.

“Our model, by combining real-time traffic cameras with multiple data sources, allows extrapolating very detailed emission maps, down to a single road and hour of the day,” says Paolo Santi, a principal research scientist in the MIT Senseable City Lab and co-author of a new paper detailing the project’s results. “Such detailed information can prove very helpful to support decision-making and understand effects of traffic and mobility interventions.”

Carlo Ratti, director of the MIT Senseable City Lab, notes that the research “is part of our lab’s ongoing quest into hyperlocal measurements of air quality and other environmental factors. By integrating multiple streams of data, we can reach a level of precision that was unthinkable just a few years ago — giving policymakers powerful new tools to understand and protect human health.”

The new method also protects privacy, since it uses computer vision techniques to recognize types of vehicles, but without compiling license plate numbers. The study leverages technologies, including those already installed at intersections, to yield richer data about vehicle movement and pollution.

“The very basic idea is just to estimate traffic emissions using existing data sources in a cost-effective way,” says Songhua Hu, a former postdoc in the Senseable City Lab, and now an assistant professor at City University of Hong Kong.

The paper, “Ubiquitous Data-driven Framework for Traffic Emission Estimation and Policy Evaluation,” is published in Nature Sustainability.

The authors are Hu; Santi; Tom Benson, a researcher in the Senseable City Lab; Xuesong Zhou, a professor of transportation engineering at Arizona State University; An Wang, an assistant professor at Hong Kong Polytechnic University; Ashutosh Kumar, a visiting doctoral student at the Senseable City Lab; and Ratti. The MIT Senseable City Lab is part of MIT’s Department of Urban Studies and Planning.

Manhattan measurements

To conduct the study, the researchers used images from 331 cameras already in use in Manhattan intersections, along with anonymized location records from over 1.75 million mobile phones. Applying vehicle-recognition programs and defining 12 broad categories of automobiles, the scholars found they could place 93 percent of vehicles in the correct category. The imaging also yielded important information about the specific ways traffic signals affect traffic flow. That matters because traffic signals are a major reason for stop-and-go driving patterns, which strongly affect urban emissions but are often omitted in conventional inventories.

The mobile phone data then provided rich information about the overall patterns of traffic and movement of individual vehicles throughout the city. The scholars combined the camera and phone data with known information about emissions rates to arrive at their own emissions estimates for New York City.

“We just need to input all emission-related information based on existing urban data sources, and we can estimate the traffic emissions,” Hu says.
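The kind of bottom-up estimate Hu describes can be sketched as a per-segment tally: observed vehicle counts by class, times segment length, times a per-kilometer emission factor for that class. The vehicle classes and emission factors below are illustrative placeholders, not the study’s values.

```python
# Per-segment tally: emissions = count x segment length x per-km factor.
EF_G_PER_KM = {"car": 170.0, "bus": 1200.0, "truck": 900.0}  # g CO2/km, assumed

def segment_emissions_kg(counts, length_km):
    """counts: vehicles observed per class on this segment in one hour."""
    grams = sum(n * EF_G_PER_KM[cls] * length_km for cls, n in counts.items())
    return grams / 1000.0

block_hourly = segment_emissions_kg({"car": 800, "bus": 30, "truck": 50}, 0.5)
# 108.5 kg of CO2 on this half-kilometer segment in one hour
```

Summing such tallies over every segment and hour, with factors that depend on speed and stop-and-go behavior, yields the kind of city-wide, hour-by-hour emission map the paper describes.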

Moreover, the researchers evaluated the changes in emissions that might occur in different scenarios when traffic patterns, or vehicle types, also change.

For one, they modeled what would happen to emissions if a certain percentage of travel demand shifted from private vehicles to buses. In another scenario, they looked at what would happen if morning and evening rush hour times were spread out a bit longer, leaving fewer vehicles on the road at once. They also modeled the effects of replacing fine-grained emissions inputs with citywide averages, finding that the rougher estimates could deviate from the fine-grained results by anywhere from −49 percent to 25 percent. That underscores how seemingly small simplifications can introduce large errors into emission estimates.

Major emissions drop

On one level, this work involved altering inputs into the model and seeing what emerged. But one scenario the researchers studied is based on a real-world change: In January 2025, New York City implemented congestion pricing south of 60th Street in Manhattan.

To study that, the researchers looked at what happened to vehicle traffic at intervals of two, four, six, and eight weeks after the program began. Overall, congestion pricing lowered traffic volume by about 10 percent, while emissions fell even further, by 16 to 22 percent.

This finding aligns with a previous study by researchers at Cornell University, which reported a 22 percent reduction in particulate matter (PM2.5) levels within the pricing zone. The MIT team also found that these reductions were not evenly distributed across the network, with larger declines on some major streets and more mixed effects outside the pricing zone.

“We see these kinds of huge changes after the congestion pricing began,” Hu says. “I think that’s a demonstration that our model can be very helpful if a government really wants to know whether a new policy translates into real-world impact.”

There are additional forms of data that could be fed into the researchers’ new method. For instance, in related work in Amsterdam, the team leveraged dashboard cams from vehicles to yield rich information about vehicle movement.

“With our model we can make any camera used in cities, from the hundreds of traffic cameras to the thousands of dash cams, a powerful device to estimate traffic emissions in real-time,” says Fábio Duarte, the associate director of research and design at the MIT Senseable City Lab, who has worked on multiple related studies.

The research was supported by the city of Amsterdam, the AMS Institute, and Abu Dhabi’s Department of Municipalities and Transport.

It was also supported by the MIT Senseable City Consortium, which consists of Atlas University, the city of Laval, the city of Rio de Janeiro, Consiglio per la Ricerca in Agricoltura e l’Analisi dell’Economia Agraria, the Dubai Future Foundation, FAE Technology, KAIST Center for Advanced Urban Systems, Sondotecnica, Toyota, and Volkswagen Group America.

Evaluating the ethics of autonomous systems

Thu, 04/02/2026 - 12:00am

Artificial intelligence is increasingly being used to help optimize decision-making in high-stakes settings. For instance, an autonomous system can identify a power distribution strategy that minimizes costs while keeping voltages stable.

But while these AI-driven outputs may be technically optimal, are they fair? What if a low-cost power distribution strategy leaves disadvantaged neighborhoods more vulnerable to outages than higher-income areas?

To help stakeholders quickly pinpoint potential ethical dilemmas before deployment, MIT researchers developed an automated evaluation method that balances the interplay between measurable outcomes, like cost or reliability, and qualitative or subjective values, such as fairness.   

The system separates objective evaluations from user-defined human values, using a large language model (LLM) as a proxy for humans to capture and incorporate stakeholder preferences. 

The adaptive framework selects the best scenarios for further evaluation, streamlining a process that typically requires costly and time-consuming manual effort. These test cases can show situations where autonomous systems align well with human values, as well as scenarios that unexpectedly fall short of ethical criteria.

“We can insert a lot of rules and guardrails into AI systems, but those safeguards can only prevent the things we can imagine happening. It is not enough to say, ‘Let’s just use AI because it has been trained on this information.’ We wanted to develop a more systematic way to discover the unknown unknowns and have a way to predict them before anything bad happens,” says senior author Chuchu Fan, an associate professor in the MIT Department of Aeronautics and Astronautics (AeroAstro) and a principal investigator in the MIT Laboratory for Information and Decision Systems (LIDS).

Fan is joined on the paper by lead author Anjali Parashar, a mechanical engineering graduate student; Yingke Li, an AeroAstro postdoc; and others at MIT and Saab. The research will be presented at the International Conference on Learning Representations.

Evaluating ethics

In a large system like a power grid, evaluating the ethical alignment of an AI model’s recommendations in a way that considers all objectives is especially difficult.

Most testing frameworks rely on pre-collected data, but labeled data on subjective ethical criteria are often hard to come by. In addition, because ethical values and AI systems are both constantly evolving, static evaluation methods based on written codes or regulatory documents require frequent updates.

Fan and her team approached this problem from a different perspective. Drawing on their prior work evaluating robotic systems, they developed an experimental design framework to identify the most informative scenarios, which human stakeholders would then evaluate more closely.

Their two-part system, called Scalable Experimental Design for System-level Ethical Testing (SEED-SET), incorporates quantitative metrics and ethical criteria. It can identify scenarios that effectively meet measurable requirements and align well with human values, and vice versa.   

“We don’t want to spend all our resources on random evaluations. So, it is very important to guide the framework toward the test cases we care the most about,” Li says.

Importantly, SEED-SET does not need pre-existing evaluation data, and it adapts to multiple objectives.

For instance, a power grid may have several user groups, including a large rural community and a data center. While both groups may want low-cost and reliable power, each group’s priority from an ethical perspective may vary widely.

These ethical criteria may not be well-specified, so they can’t be measured analytically.

The power grid operator wants to find the most cost-effective strategy that best meets the subjective ethical preferences of all stakeholders.

SEED-SET tackles this challenge by splitting the problem into two, following a hierarchical structure. An objective model considers how the system performs on tangible metrics like cost. Then a subjective model that considers stakeholder judgements, like perceived fairness, builds on the objective evaluation.

“The objective part of our approach is tied to the AI system, while the subjective part is tied to the users who are evaluating it. By decomposing the preferences in a hierarchical fashion, we can generate the desired scenarios with fewer evaluations,” Parashar says.

Encoding subjectivity

To perform the subjective assessment, the system uses an LLM as a proxy for human evaluators. The researchers encode the preferences of each user group into a natural language prompt for the model.

The LLM uses these instructions to compare two scenarios, selecting the preferred design based on the ethical criteria.

“After seeing hundreds or thousands of scenarios, a human evaluator can suffer from fatigue and become inconsistent in their evaluations, so we use an LLM-based strategy instead,” Parashar explains.

SEED-SET uses the selected scenario to simulate the overall system (in this case, a power distribution strategy). These simulation results guide its search for the next best candidate scenario to test.
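Conceptually, this guided search resembles an active-learning loop: rank candidates with the objective model, then refine the pick with subjective pairwise comparisons. A minimal Python sketch of that idea follows; all names are illustrative, and a simple keyword rule stands in for the LLM comparator described in the paper.

```python
def objective_score(scenario):
    # Stand-in for the objective model: a measurable metric such as cost
    # (here, distance of a hypothetical price cap from a target value).
    return -abs(scenario["price_cap"] - 0.5)

def llm_proxy_prefers(a, b, preference_prompt):
    # Stand-in for the LLM comparator. In the system described above, the
    # stakeholder preferences encoded in `preference_prompt` go to an LLM,
    # which picks the scenario that better matches the ethical criteria;
    # here a simple keyword rule fakes that judgment.
    group = "rural" if "rural" in preference_prompt else "datacenter"
    return a if a["coverage"][group] >= b["coverage"][group] else b

def select_scenario(candidates, preference_prompt, budget=10):
    """Two-level search sketch: rank by the objective model first, then
    refine the shortlist with pairwise subjective comparisons."""
    shortlist = sorted(candidates, key=objective_score, reverse=True)[:budget]
    best = shortlist[0]
    for challenger in shortlist[1:]:
        best = llm_proxy_prefers(best, challenger, preference_prompt)
    return best
```

Swapping the preference prompt changes which scenarios survive the subjective stage, mirroring the sensitivity to user preferences the researchers observed.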

In the end, SEED-SET intelligently selects the most representative scenarios, both those that satisfy the objective metrics and ethical criteria and those that violate them. In this way, users can analyze the performance of the AI system and adjust its strategy.

For instance, SEED-SET can pinpoint cases of power distribution that prioritize higher-income areas during periods of peak demand, leaving underprivileged neighborhoods more prone to outages.

To test SEED-SET, the researchers evaluated realistic autonomous systems, like an AI-driven power grid and an urban traffic routing system. They measured how well the generated scenarios aligned with ethical criteria.

The system generated more than twice as many optimal test cases as the baseline strategies in the same amount of time, while uncovering many scenarios other approaches overlooked.

“As we shifted the user preferences, the set of scenarios SEED-SET generated changed drastically. This tells us the evaluation strategy responds well to the preferences of the user,” Parashar says.

To measure how useful SEED-SET would be in practice, the researchers will need to conduct a user study to see if the scenarios it generates help with real decision-making.

In addition to running such a study, the researchers plan to explore the use of more efficient models that can scale up to larger problems with more criteria, such as evaluating LLM decision-making.

This research was funded, in part, by the U.S. Defense Advanced Research Projects Agency.

Preview tool helps makers visualize 3D-printed objects

Wed, 04/01/2026 - 12:00am

Designers, makers, and others often use 3D printing to rapidly prototype a range of functional objects, from movie props to medical devices. Accurate print previews are essential so users know a fabricated object will perform as expected.

But previews generated by most 3D-printing software focus on function rather than aesthetics. A printed object may end up with a different color, texture, or shading than the user expected, resulting in multiple reprints that waste time, effort, and material.

To help users envision how a fabricated object will look, researchers from MIT and elsewhere developed an easy-to-use preview tool that puts appearance first.

Users upload a screenshot of the object from their 3D-printing software, along with a single image of the print material. From these inputs, the system automatically generates a rendering of how the fabricated object is likely to look.

The artificial intelligence-powered system, called VisiPrint, is designed to work with a range of 3D-printing software and can handle any material example. It considers not only the color of the material, but also gloss, translucency, and how nuances of the fabrication process affect the object’s appearance.

Such aesthetics-focused previews could be especially useful in areas like dentistry, by helping clinicians ensure temporary crowns and bridges match the appearance of a patient’s teeth, or in architecture, to aid designers in assessing the visual impact of models.

“3D printing can be a very wasteful process. Some studies estimate that as much as a third of the material used goes straight to the landfill, often from prototypes the user ends up discarding. To make 3D printing more sustainable, we want to reduce the number of tries it takes to get the prototype you want. The user shouldn’t have to try out every printing material they have before they settle on a design,” says Maxine Perroni-Scharf, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on VisiPrint.

She is joined on the paper by Faraz Faruqi, a fellow EECS graduate student; Raul Hernandez, an MIT undergraduate; SooYeon Ahn, a graduate student at the Gwangju Institute of Science and Technology; Szymon Rusinkiewicz, a professor of computer science at Princeton University; William Freeman, the Thomas and Gerd Perkins Professor of EECS at MIT and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); and senior author Stefanie Mueller, an associate professor of EECS and Mechanical Engineering at MIT, and a member of CSAIL. The research will be presented at the ACM CHI Conference on Human Factors in Computing Systems.

Accurate aesthetics

The researchers focused on fused deposition modeling (FDM), the most common type of 3D printing. In FDM, print material filament is melted and then squirted through a nozzle to fabricate an object one layer at a time.

Generating accurate aesthetic previews is challenging because the melting and extrusion process can change the appearance of a material, as can the height of each deposited layer and the path the nozzle follows during fabrication.

VisiPrint uses two AI models that work together to overcome those challenges.

The VisiPrint preview is based on two inputs: a screenshot of the digital design from a user’s 3D-printing software (called “slicer” software), and an image of the print material, which can be taken from an online source or captured from a printed sample.

From these inputs, a computer vision model extracts features from the material sample that are important for the object’s appearance.

It feeds those features to a generative AI model that computes the geometry and structure of the object, while incorporating the so-called “slicing” pattern the nozzle will follow as it extrudes each layer.

The key to the researchers’ approach is a special conditioning method. This involves carefully adjusting the inner workings of the model to guide it, so it follows the slicing pattern and obeys the constraints of the 3D-printing process.

Their conditioning method utilizes a depth map that preserves the shape and shading of the object, along with a map of the edges that reflects the internal contours and structural boundaries.

“If you don’t have the right balance of these two things, you could end up with bad geometry or an incorrect slicing pattern. We had to be careful to combine them in the right way,” Perroni-Scharf says.
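The pairing of conditioning inputs can be pictured as a small multi-channel tensor fed to the generator. A rough NumPy sketch, purely illustrative (the paper's actual edge detector and model architecture are not detailed here):

```python
import numpy as np

def edge_map(gray):
    # Simple gradient-magnitude edges, a stand-in for whatever edge
    # detector the real pipeline uses to capture internal contours
    # and slicing-line boundaries.
    gy, gx = np.gradient(gray.astype(float))
    mag = np.hypot(gx, gy)
    return mag / mag.max() if mag.max() > 0 else mag

def conditioning_tensor(depth, gray):
    """Stack the depth map (shape and shading) with the edge map
    (internal contours) into the multi-channel conditioning input a
    diffusion-style generator could be guided by."""
    edges = edge_map(gray)
    return np.stack([depth, edges], axis=0)  # shape (2, H, W)
```

Weighting the two channels against each other is the balance the quote above refers to: too much edge influence distorts the slicing pattern, too little loses the geometry.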

A user-focused system

The team also produced an easy-to-use interface where one can upload the required images and evaluate the preview.

The VisiPrint interface enables more advanced makers to adjust multiple settings, such as the influence of certain colors on the final appearance.

In the end, the aesthetic preview is intended to complement the functional preview generated by slicer software, since VisiPrint does not estimate printability, mechanical feasibility, or likelihood of failure.

To evaluate VisiPrint, the researchers conducted a user study that asked participants to compare the system to other approaches. Nearly all participants said it provided better overall appearance as well as more textural similarity with printed objects.

In addition, the VisiPrint preview process took about a minute on average, which was more than twice as fast as any competing method.

“VisiPrint really shined when compared to other AI interfaces. If you give a more general AI model the same screenshots, it might randomly change the shape or use the wrong slicing pattern because it had no direct conditioning,” she says.

In the future, the researchers want to address artifacts that can occur when model previews have extremely fine details. They also want to add features that allow users to optimize parts of the printing process beyond color of the material.

“It is important to think about the way that we fabricate objects. We need to continue striving to develop methods that reduce waste. To that end, this marriage of AI with the physical making process is an exciting area of future work,” Perroni-Scharf says.

“‘What you see is what you get’ has been the main thing that made desktop publishing ‘happen’ in the 1980s, as it allowed users to get what they wanted at first try. It is time to get WYSIWYG for 3D printing as well. VisiPrint is a great step in this direction,” says Patrick Baudisch, a professor of computer science at the Hasso Plattner Institute, who was not involved with this work.

This research was funded, in part, by an MIT Morningside Academy for Design Fellowship and an MIT MathWorks Fellowship.

Two physicists and a curious host walk into a studio…

Tue, 03/31/2026 - 7:00pm

This March on The Curiosity Desk, GBH’s daily science show with host Edgar B. Herwick III, MIT scientists dropped by to address the questions: “How close are we to observing the dark universe?” (Thursday, March 12 episode) and “Is Earth prepared for asteroids?” (Thursday, March 26 episode).

Up first, Prof. Nergis Mavalvala, dean of the MIT School of Science, and Prof. Salvatore Vitale joined the host live in studio to talk about the science behind the Laser Interferometer Gravitational-wave Observatory (LIGO) and how LIGO has provided the ability to observe the universe in ways that have never been done before.

Beyond the joy of learning something new, Mavalvala explained, experimenting delivers an added piece of excitement: “pushing the technology, the precision of the instrument, requires you to be very inventive. There’s almost nothing in these experiments that you can go buy off a shelf. Everything you’re designing, everything is from scratch. You’re meeting very stringent requirements.”

Herwick likened how they might tweak or tinker with the experiment to souping up a car engine, and the LIGO scientists nodded – adding that in the most complex experiments, each bite-sized part on its own works well, and it’s the interfaces between them that scientists must get right.

While there, the two long-time colleagues also took a detour to explain how in physics experimentalists benefit from the work of theorists and vice versa. Mavalvala, whose work focuses on building the world’s most precise instruments to study physical phenomena, described the synergy between ideas that come from theory (work that Vitale does) and how you measure. (No, they assure Herwick, they don’t get into a lot of fights.)

In fact, it’s fantastic to have people from both worlds at MIT, said Vitale.  Mavalvala agreed. “One of the things that’s really important about theory in science is that ultimately, in physics especially, it’s a bunch of math. And the important thing that you have to ask is, ‘does nature really behave that way?’ And how do you answer that question? You have to go out and measure. You have to go observe nature,” said Mavalvala.

As scientists fine-tune the gravitational wave detectors, they will inform what data are collected, what astrophysical objects they might find or hope to find – and the search for certain fainter, farther away, or more exotic objects can inform what enhancements they prioritize.

But what if I’m not interested in any of that, Herwick asked. Why should I care?

“To me, it falls in the category of for the betterment of humankind. You never know what is going to be useful. A lot of fundamental research was very far at the beginning from what turned out to be fundamental applications,” said Vitale, adding, “What they do on the instrument side has already now very important applications.”

Mavalvala was unequivocal, underscoring how pursuing curiosity is put to good use:

“When you’re making instruments that achieve that kind of precision, you’re inventing new technologies. [With LIGO] We’ve invented vibration isolation technologies to keep our mirrors really still. We’ve invented lasers that are quieter than any that were ever made before. We’ve invented photonic techniques that are allowing us to make applications even to far off things like quantum computing. 

“So, this is one of the beauties of fundamental discovery science. A, you’ll discover something. But B you’ll be doing two things: you’ll be inventing the technologies of the future, and you’ll be training the generations of scientists who may go off to do completely different things, but this is what inspires them.”

Watch the full conversation below and on YouTube:


Planetary defense

Turning to objects beyond Earth – specifically, asteroids – Associate Professor Julien de Wit, along with research scientists Artem Burdanov and Saverio Cambioni, joined Herwick at the Curiosity Desk later in the month. They talked about their ongoing research to identify smaller asteroids (about the size of a school bus) using the James Webb Space Telescope and why planetary defense goes beyond thinking about the massive asteroids featured in movies like Armageddon. Notably, a lot of technology on Earth depends on satellites, and asteroids pose the biggest threat to them.

“Dinosaurs didn’t need to care about an asteroid hitting the moon. Humanity a century ago didn’t care. Now, if [an asteroid] hits the moon, a lot of debris will be expelled and all those particles – big and small – they will affect the fleet of satellites around Earth. That’s a big potential problem, so we need to take that into account in our future,” said Burdanov.

There’s also a potential upside to being better able to detect and potentially “capture” asteroids, explained de Wit, all of it benefitted by new instruments. “It’s really an asteroid revolution going on… Our situational awareness of what’s out there is really about to change dramatically.”

He explains that one dream is to mine asteroids themselves for material to build or power next generation technologies or stations in space. “The way to reliably move into space is to use resources from space. We can’t just move stuff to build a full city. We use stuff from space.”

Echoing the sentiments expressed earlier in the month by MIT’s dean of science, the trio of asteroid explorers also described how the pursuits of planetary scientists can lead to unexpected rewards along the way. “We are swimming in an era that is data rich, and so what we do in our group and at MIT is mine that data to reveal the universe like never before,” says de Wit. “Revealing new populations of asteroids, new populations of planets, and making sense of our universe like we have never done.”

Watch the full conversation below and on the GBH YouTube channel: 

Tune in to the Curiosity Desk some Thursdays to hear from MIT researchers as they visit Herwick and the production team. 

Building the blocks of life

Tue, 03/31/2026 - 4:50pm

Billions of years ago, simple organic molecules drifted across Earth's primordial landscape — nothing more than basic chemical compounds. But as natural forces shaped the planet over hundreds of millions of years, these molecules began to interact and bond in increasingly complex ways. Along the way, something spectacular emerged: life.

“Life is, to some degree, magical,” says computational biologist Sergei Kotelnikov. Simple organic compounds congregate into polymers, which assemble into living cells and ultimately organisms — the whole being greater than the sum of its parts.

“You can write formulas on how a molecule behaves,” he says, referring to the world of quantum mechanics. “But yet somehow, a few orders of magnitude above, on a bigger scale, it gives rise to such a mystery.”

Kotelnikov builds models to analyze and predict the structure of these biomolecules, particularly proteins, the fundamental building blocks of every organism. This year, he joined MIT as part of the School of Science Dean’s Postdoctoral Fellowship to work with the Keating Lab, where researchers focus on protein structure, function, and interaction. Using machine learning, his goal is to develop new methods in protein modeling with potential applications that span from medicine to agriculture.

A hunger for problems to solve

Kotelnikov grew up in Abakan, Russia, a small city sitting right in the center of Eurasia. As a child, one of his favorite pastimes was playing with Lego bricks.

“It encouraged me to build new things, rather than just following instructions,” he says. “You can do anything.”

Kotelnikov’s father, whose background lies in engineering and economics, would often challenge him with math problems.

“Your brain — you can feel some kind of expansion of understanding how things work, and that’s a very satisfactory feeling,” Kotelnikov says.

This itch to solve problems led him to join science Olympiad competitions, and later, a science-focused public boarding school located near the Russian Academy of Sciences, where he often encountered scientists.

“It was like a candy shop,” he recalls, describing the period as a life-changing experience.

In 2012, Kotelnikov began his bachelor of science in physics and applied mathematics at the Moscow Institute of Physics and Technology — considered one of the leading STEM universities in Russia, and globally — and continued there for his master’s degree. It was there that biology came into the picture.

During a course on statistical physics, Kotelnikov was first introduced to the idea of the “emergence of complexity.” He became fascinated by this “mysterious and attractive manifestation of biology … this evolution that sharpens the physical phenomenon” to create, drive, and shape life as we know it today. By the time he completed his master’s degree, he realized he had only scratched the surface of the field of computational biology.

In 2018, he began his PhD at Stony Brook University in New York and began working with Dima Kozakov, who is recognized as one of the world’s leaders in predicting protein interactions and complex structures.

Studying the architecture of life  

Proteins act like the bricks that construct an organism, underpinning almost every cellular process from tissue repair to hormone production. Like pieces of a Lego tower, their structures and interactions determine the functions that they carry out in a body.

However, diseases arise when they’re folded, curled, twisted, or connected in unusual ways. To develop medical interventions, scientists break down the tower and examine each individual piece to find the culprit and correct their shape and pairing. With limited experimental data on protein structures and interactions currently available, simulations developed by computational biologists like Kotelnikov provide crucial insight that inform fundamental understanding and applications like drug discovery.

With the guidance of Kozakov at Stony Brook’s Laufer Center for Physical and Quantitative Biology, Kotelnikov carried over his understanding of physics to create modeling methods that are more effective, efficient, reliable, and generalizable. Among them, he developed a new way of predicting the protein complex structures mediated by proteolysis-targeting chimeras, or PROTACs, a new class of molecules that can trigger the breakdown of specific proteins previously considered undruggable, such as those found in cancer.

PROTACs have been challenging to model, in part because they are composed of proteins that don’t naturally interact with each other, and because the linker that connects them is flexible. Imagine trying to guess the overall shape of a bendy Lego piece attached to two other pieces of different irregular, unmatched shapes. To efficiently find all possible configurations, Kotelnikov’s method conceptually cuts the linker into two halves and models each separately, then reformulates the problem and calculates it using a powerful algorithm called Fast Fourier Transform.

“It’s kind of like applied math judo that you sometimes need to do in order to make certain intractable computations tractable,” he says.
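The Fourier trick can be illustrated in miniature with rigid translational scoring on a grid. This toy sketch is purely illustrative (the actual method also handles the split linker halves and rotations), but it shows why the FFT helps: scoring every placement at once becomes an elementwise product in frequency space.

```python
import numpy as np

def placement_scores_fft(receptor, ligand):
    """Score every (cyclic) translational placement of `ligand` on
    `receptor` at once: the grid of overlap scores is a correlation,
    which is a cheap elementwise product in Fourier space."""
    F_r = np.fft.fftn(receptor)
    F_l = np.fft.fftn(ligand, s=receptor.shape)  # zero-pad to grid size
    # Correlation theorem: corr(r, l) = IFFT(FFT(r) * conj(FFT(l)))
    return np.real(np.fft.ifftn(F_r * np.conj(F_l)))

def placement_scores_direct(receptor, ligand):
    # Brute-force check: slide the ligand over every cyclic offset
    # of a square grid and sum the overlap at each position.
    n = receptor.shape[0]
    pad = np.zeros_like(receptor)
    pad[:ligand.shape[0], :ligand.shape[1]] = ligand
    scores = np.zeros_like(receptor, dtype=float)
    for dy in range(n):
        for dx in range(n):
            shifted = np.roll(np.roll(pad, dy, axis=0), dx, axis=1)
            scores[dy, dx] = np.sum(receptor * shifted)
    return scores
```

On an N-cell grid this replaces a quadratic slide-and-sum with an N log N transform, the kind of speedup that makes an otherwise intractable configuration search tractable.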

Kotelnikov’s state-of-the-art methods have been instrumental to his team’s top performance in numerous international challenges including the Critical Assessment of protein Structure Prediction (CASP) competition — the same contest in which the Nobel Prize-winning AlphaFold system for protein 3D structure prediction was presented.

Physics and machine learning

At MIT, Kotelnikov is working with Amy Keating, the Jay A. Stein (1968) Professor of Biology, biology department head, and professor of biological engineering, to study protein structure, function, and interactions.  

A recognized leader in the field, Keating employs both computational and experimental methods to study proteins and their interactions, as well as how these can impact disease. By infusing physics with machine learning, Kotelnikov aims to advance modeling methods that can inform applications such as cancer immunology and crop protection.

“Kotelnikov stands to gain a lot from working closely with wet lab researchers who are doing the experiments that will complement and test his predictions, and my lab will benefit from his experience developing and applying advanced computational analyses,” says Keating.

Kotelnikov is also planning to work with professors Tommi Jaakkola and Tess Smidt in MIT’s Department of Electrical Engineering and Computer Science to explore a field called geometric deep learning. In particular, he aims to integrate physical and geometric knowledge about biomolecules into neural network architectures and learning procedures. This approach can significantly reduce the amount of data needed for learning, and improve the generalizability of resulting models.

Beyond the two departments, Kotelnikov is also excited to see how the diversity and interdisciplinary mix of MIT’s community will help him come up with ideas.

“When you’re building a model, you’re entering this imaginary world of assumptions and simplifications and it might feel challenging because of this disconnect with reality,” Kotelnikov says. “Being able to efficiently communicate with experimentalists is of high value.”

Tomás Palacios named director of the Institute for Soldier Nanotechnologies

Tue, 03/31/2026 - 4:15pm

Tomás Palacios, the Clarence J. LeBel Professor of Electrical Engineering at MIT, has been appointed director of the MIT Institute for Soldier Nanotechnologies (ISN). Palacios assumed the role on Feb. 4, and will continue to serve as the director of the MIT Microsystems Technology Laboratories (MTL).

Founded in 2002, ISN is a U.S. Army-sponsored University Affiliated Research Center focused on advancing fundamental science and engineering to enable next-generation capabilities for protection, survivability, sensing, and system performance. ISN brings together researchers from across MIT to address challenges at the intersection of materials, devices, and systems. In collaboration with industry, MIT Lincoln Laboratory, the U.S. Army, and other U.S. military services, ISN works to transition promising technologies for both commercial and defense applications.

As director, Palacios will oversee ISN’s research portfolio, facilities, and strategic partnerships, working closely with the ISN leadership team, MIT administration, U.S. Army, and other research sponsors to guide the institute’s next phase of research and collaboration.

“Tomás Palacios brings exceptional energy, vision, and leadership to the Institute for Soldier Nanotechnologies,” says Ian A. Waitz, MIT’s vice president for research, who announced the appointment in a recent letter. “As director of Microsystems Technology Laboratories, he has demonstrated a rare ability to build strong research communities and partnerships across academia, industry, and government. I am confident he will guide ISN’s next phase with momentum, scientific excellence, and a deep sense of service to MIT and the nation.”

Palacios brings deep leadership experience within MIT and across national research collaborations. As director of MTL, he leads one of MIT’s flagship interdisciplinary research laboratories supporting work in micro- and nano-scale materials, devices, and systems. He is a member of the MIT.nano Leadership Council and, since 2023, has served as associate director of the multi-university SUPeRior Energy-efficient Materials and dEvices (SUPREME) Center, a Semiconductor Research Corp. JUMP 2.0 program focused on next-generation energy-efficient semiconductor technologies. Palacios is also the co-founder of several technology companies, including Vertical Semiconductor, Finwave Semiconductor, and CDimension, Inc.

“MIT’s motto, ‘mens et manus’ — ‘mind and hand’ — reminds us that fundamental research and real-world impact must go hand-in-hand,” says Palacios. “At ISN, our mission is to help protect and empower those who defend our nation. That responsibility demands urgency, creativity, and deep collaboration. I look forward to building on ISN’s strong partnership with the U.S. Army, industry, and colleagues across MIT to push the frontiers of nanotechnology and translate discovery into meaningful impact at the speed of relevance.”

Palacios is internationally recognized for his work on wide-bandgap semiconductors, nanoelectronics, and advanced electronic materials. An IEEE Fellow, his research spans fundamental device physics through system-level integration, with applications in high-power and high-frequency electronics, sensing, and energy systems. He is widely recognized for his research contributions, as well as for his leadership in education and mentoring.

Palacios succeeds John Joannopoulos, who served as ISN director from 2006 until his death in August 2025. During his nearly two decades of ISN leadership, Joannopoulos strengthened ISN’s interdisciplinary culture, devoting significant effort to fostering collaborations among ISN-funded principal investigators, building partnerships that extend across MIT and beyond to the Army research community. Joannopoulos, an extraordinary researcher and a generous mentor, was also a co-founder of companies such as WiTricity and OmniGuide, helping to translate many of ISN’s foundational scientific discoveries into commercial technologies. Raúl Radovitzky, ISN’s associate director, served as interim director during the search for a new director, providing continuity to ISN’s research programs, facilities, and partnerships.

“It is an honor to serve as director of the Institute for Soldier Nanotechnologies at such an important moment in time,” says Palacios. “ISN has built an extraordinary foundation of interdisciplinary excellence under Professor John Joannopoulos’ leadership and, more recently, Prof. Radovitzky’s. I look forward to working with the ISN community to advance breakthrough research at the intersection of materials, devices, and systems — research that not only strengthens national security, but also translates into technologies that benefit society more broadly.” 

Turning muscles into motors gives static organs new life

Tue, 03/31/2026 - 2:30pm

What if a technology could reanimate parts of the body that have lost their connection to the brain — like a bladder that can no longer empty due to a spinal cord injury, or intestines that can’t push food forward due to Crohn’s disease? What if this technology could also send sensations such as hunger or touch back to the brain?

New MIT research offers a glimpse into this future. In an open-access study published today in Nature Communications, the researchers introduce a novel myoneural actuator (MNA) that reprograms living muscles into fatigue-resistant, computer-controlled motors that can be implanted inside the body to restore movement in organs.

“We’ve built an interface that leverages natural pathways used by the nervous system so that we can seamlessly control organs in the body, while also enabling the transmission of sensory feedback to the brain,” says Hugh Herr, senior author of the study, a professor of media arts and sciences at the MIT Media Lab, co-director of the K. Lisa Yang Center for Bionics, and an associate member of the McGovern Institute for Brain Research at MIT. The study was co-led by Herr’s postdoc Guillermo Herrera-Arcos and former postdoc Hyungeun Song.

By repurposing existing muscle in the body, the researchers have developed the first “living” implant that uses rewired sensory nerves to revive paralyzed organs — which may present a new genre of medicine, where a person’s own tissue becomes the hardware.

Rewiring the brain-body interface

Many scientists have toiled to restore function in paralyzed organs, but it’s extremely challenging to design a technology that both communicates with the nervous system and doesn't fatigue over time. Some have tried to insert miniaturized actuators — small machines that can power bionic limbs — into the body. However, Herrera-Arcos says, “it’s hard to make actuators at the centimeter level, and they aren’t very efficient.” Others have focused on creating muscle tissue in the lab, but building muscles cell by cell is time-intensive and far from ready for human use.

Herr’s team tried something different.

“We engineered existing muscles to become an actuator, or motor, that reinstates motion in organs,” says Song.

To do this, the researchers had to navigate the delicate dynamics within the nervous system. The actuator would have to interface with the nervous system to work properly, but it must also somehow evade the brain’s control. “You don’t want the brain to consciously control the muscle actuator because you want the actuator to automatically control an organ, like the heart,” explains Herrera-Arcos. Establishing a computer-controlled muscle to move organs could ensure automatic function and also bypass damaged brain pathways.

Incorporating motor neurons into the actuator may help generate movement, but these neurons are directly controlled by the brain. “Sensory neurons, however, are wired to receive, not to command,” explains Song. “We thought we could leverage this dynamic and reroute motor signals through sensory fibers, making a computer — rather than the brain — the muscle’s new command center.”

To achieve this, sensory nerves would need to fuse fluidly with muscle, and scientists had not yet determined if this was possible. Remarkably, when the team replaced motor nerves in rodent muscle with sensory ones, “the sensory nerves re-innervated the muscles and formed functional synapses. It’s a tremendous discovery,” says Herrera-Arcos.

Sensory neurons not only enabled the use of a digital controller, but also helped curb muscle fatigue — increasing fatigue resistance in rodent muscle by 260 percent compared to native muscles. That’s because muscle fatigue depends largely on the diameter of the axons, or cable-like projections that innervate muscles. Motor neuron axons vary greatly in size, and when a motor nerve is electrically stimulated, the largest axons fire first — exhausting the muscle quickly. However, sensory axons are all nearly the same size, so the signal is broadcast more evenly across muscle fibers, avoiding fatigue, explains Herrera-Arcos.

Designing a biohybrid system

They combined all of these elements into a fatigue-resistant biohybrid motor called a myoneural actuator (MNA). By wrapping their actuator around a paralyzed intestine in a rodent, the researchers reinstated the organ’s squeezing motion. They also successfully controlled rodent calf muscles in an experiment designed to mimic residual muscle in human lower-limb amputations. Importantly, the MNA system transmitted sensory signals to the brain. “This suggests that our technology could seamlessly link organs to the brain. For example, we might be able to make a paralyzed stomach relay hunger,” explains Song.

Bringing their MNA to the clinic will require further testing in larger animal models, and eventually, humans. But if it passes the regulatory gauntlet, their system could pave a smoother and safer path toward reviving static organs. Implanting MNAs would require a surgery that is already commonplace in the clinic, the researchers say, and their system might be simpler and safer to implement than mechanical devices or organ transplants that introduce foreign material into the body.

The team is hopeful that their new technology could improve the lives of millions living with organ dysfunctions. “Today’s solutions are mostly synthetic: pacemakers and other mechanical assist devices. A living muscle actuator implanted alongside a weakened organ would be part of the body itself. That is a category of medicine different from anything seen in clinic,” explains Herrera-Arcos.

Song says that skin is of special interest. “Hypothetically, we could wrap MNAs around skin grafts to relay tactile feedback, such as strain or tension, which is currently missing for users of prostheses.” Their technology could augment virtual reality systems, too. “The idea is that, if we couple the MNA system to skin and muscles, a person could feel what their virtual avatar is touching even though their real body isn’t moving,” says Song.

“Our research is on the brink of giving new life to various parts and extensions of the body,” adds Herrera-Arcos. “It’s exciting to think that our system could enhance human potential in ways that once only belonged to the realm of science fiction.”

This research was funded, in part, by the Yang Tan Collective at MIT, K. Lisa Yang Center for Bionics at MIT, Nakos Family Bionics Research Fund at MIT, and the Carl and Ruth Shapiro Foundation.

Climate change may produce “fast-food” phytoplankton

Tue, 03/31/2026 - 5:00am

We are what we eat. And in the ocean, most life-forms source their food from phytoplankton. These microscopic, plant-like algae are the primary food source for krill, sea snails, some small fish, and jellyfish, which in turn feed larger marine animals that are prey for the ocean’s top predators, including humans.

Now MIT scientists are finding that phytoplankton's composition, and the basic diet of the ocean, will shift significantly with climate change.

In an open-access study appearing today in the journal Nature Climate Change, the team reports that as sea surface temperatures rise over the next century, phytoplankton in polar regions will adapt to be less rich in proteins, heavier in carbohydrates, and lower in nutrients overall.

The conclusions are based on results from the team’s new model, which simulates the composition of phytoplankton in response to changes in ocean temperature, circulation, and sea ice coverage. In a scenario in which humans continue to emit greenhouse gases through the year 2100, the team found that changing ocean conditions, particularly in the polar regions, will shift phytoplankton’s balance of proteins to carbohydrates and lipids by approximately 20 percent. The researchers also analyzed observations from the past several decades and have already found a signature of this change in the real world.

“We’re moving in the poles toward a sort of fast-food ocean,” says lead author and MIT postdoc Shlomit Sharoni. “Based on this prediction, the nutritional composition of the surface ocean will look very different by the end of the century.”

The study’s MIT co-authors are Mick Follows, Stephanie Dutkiewicz, and Oliver Jahn; along with Keisuke Inomura of the University of Rhode Island; Zoe Finkel, Andrew Irwin, and Mohammad Amirian of Dalhousie University in Halifax, Canada; and Erwan Monier of the University of California at Davis.

Nutritional information

Phytoplankton drift through the upper, sun-lit layers of the ocean. Like plants on land, the marine microalgae are photosynthetic. Their growth depends on light from the sun, carbon dioxide from the atmosphere, and nutrients such as nitrogen and iron that well up from the deep ocean.

When studying how phytoplankton will respond to climate change, scientists have primarily focused on how rising ocean temperatures will affect phytoplankton populations. Whether and how the plankton’s composition will change is less well-understood.

“There’s been an awareness that the nutritional value of phytoplankton can shift with climate change,” says Sharoni. “But there has been very little work in directly addressing that question.”

She and her colleagues set out to understand how ocean conditions influence phytoplankton macromolecular composition. Macromolecules are large molecules that are essential for life. The main types of macromolecules include proteins, lipids, carbohydrates, and nucleic acids (the building blocks of DNA and RNA). Every form of life, including phytoplankton, is composed of a balance of macromolecules that helps it to survive in its particular environment.

“Nearly all the material in a living organism is in these broad molecular forms, each having a particular physiological function, depending on the circumstances that the organism finds itself in,” says Follows, a professor in the Department of Earth, Atmospheric and Planetary Sciences.

An unbalanced diet

In their new study, the researchers first looked at how today’s ocean conditions influence phytoplankton’s macromolecular composition. The team used data from lab experiments carried out by their collaborators at Dalhousie. These experiments revealed ways in which phytoplankton’s balance of macromolecules, such as proteins to carbohydrates, shifted in response to changes in water temperature and the availability of light and nutrients.

With these lab-based data, the group developed a quantitative model that simulates how plankton in the lab would readjust its balance of proteins to carbohydrates under different light and nutrient conditions. Sharoni and Inomura then paired this new model with an established model of ocean circulation and dynamics developed previously at MIT. With this modeling combination, they simulated how phytoplankton composition shifts in response to ocean conditions in different parts of the world and under different climate scenarios.

The team first modeled today’s current climate conditions. Consistent with observations, their model predicts that a little more than half of the average phytoplankton cell today is composed of proteins. The rest is a mix of carbohydrates and lipids.

Interestingly, in polar regions, phytoplankton are slightly more protein-rich. At the poles, the cover of sea ice limits the amount of sunlight phytoplankton can absorb. The researchers surmise that phytoplankton may have adapted by making more light-harvesting proteins to help the organisms efficiently absorb the weak sunlight.

However, when they modeled a future climate change scenario, the team found a significant shift in phytoplankton composition. They simulated a scenario in which humans continue to emit greenhouse gases through the year 2100. In this scenario, sea surface temperatures will rise by 3 degrees Celsius, substantially reducing sea ice coverage. Warmer temperatures will also limit the ocean’s circulation, as well as the amount of nutrients that can circulate up from the deep ocean.

Under these conditions, the model predicts that phytoplankton growth in polar regions will increase significantly, consistent with earlier studies. Uniquely, this model predicts that phytoplankton in polar regions will shift from a protein-rich to a carb- and lipid-heavy composition. They found that plankton will not need as much light-harvesting protein, since less sea ice will make sunlight more easily available for the organisms to absorb. Total protein levels in these polar phytoplankton will decline by up to 30 percent, with a corresponding increase in the contribution of carbs and lipids.
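The qualitative logic above — dim light under sea ice favors extra light-harvesting protein, while warming brings more light but a weaker nutrient supply — can be sketched as a toy calculation. This is not the study’s actual model; the function and every coefficient below are invented purely for illustration.

```python
# Toy illustration (not the study's model): protein mass fraction of a
# phytoplankton cell as a function of relative light and nutrient availability.
# Lower light -> more light-harvesting protein; lower nutrients -> less protein.

def protein_fraction(light, nutrients, base=0.45,
                     light_boost=0.15, nutrient_penalty=0.2):
    """Illustrative protein fraction (0-1); inputs are availabilities in [0, 1]."""
    frac = base + light_boost * (1.0 - light) - nutrient_penalty * (1.0 - nutrients)
    return min(max(frac, 0.0), 1.0)

# Polar ocean today: dim light under sea ice, ample upwelled nutrients.
polar_today = protein_fraction(light=0.3, nutrients=0.9)

# Warmer polar ocean: retreating ice admits more light; nutrient supply weakens.
polar_future = protein_fraction(light=0.8, nutrients=0.6)

print(polar_today, polar_future)  # the protein fraction drops as the ice retreats
```

In this made-up parameterization the cell is just over half protein today and noticeably less so in the warming scenario — mirroring the direction of the study’s result, not its numbers.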

It’s unclear what impact a larger population of carb- and lipid-heavy phytoplankton may have on the rest of the marine food web. While some organisms may be stressed by a reduction in protein, others that make lipid stores to survive through the winter might thrive.

The team also simulated phytoplankton in subtropical, lower-latitude regions. In these ocean areas, phytoplankton populations are expected to decline by 50 percent. And the team’s modeling shows that their composition will also shift.

With warmer temperatures, the ocean’s circulation will slow down, limiting the amount of nutrients that can upwell from the deep ocean. In response, subtropical phytoplankton may have to find ways to live at greater depths, to strike a balance between getting enough sunlight and nutrients. Under these conditions, the organisms will likely shift to a slightly more protein-rich composition, making use of the same photosynthetic proteins that their polar counterparts will require less of.

On balance, given the projected changes in phytoplankton populations with climate change, their average composition around the world will become more carb-heavy and lower in nutrients.

The researchers went a step further and found that their modeling agrees with the small set of available phytoplankton field samples that other scientists previously collected from Arctic and Antarctic regions. These samples show that phytoplankton composition has become more carb- and lipid-heavy over the past few decades, as the team’s model predicts under climate warming.

“In these regions, you can already see climate change, because sea ice is already melting,” Sharoni explains. “And our model shows that proteins in polar plankton have been declining, while carbs and lipids are increasing.”

“It turns out that climate change is accelerated in the Arctic, and we have data showing that the composition of phytoplankton has already responded,” Follows adds. “The main message is: The caloric content at the base of the marine food web is already changing. And it’s not a clear story as to how this change will transmit through the food web.”

This work was supported, in part, by the Simons Foundation.

MIT researchers use AI to uncover atomic defects in materials

Mon, 03/30/2026 - 11:00am

In biology, defects are generally bad. But in materials science, defects can be intentionally tuned to give materials useful new properties. Today, atomic-scale defects are carefully introduced during the manufacturing process of products like steel, semiconductors, and solar cells to help improve strength, control electrical conductivity, optimize performance, and more.

But even as defects have become a powerful tool, accurately measuring different types of defects and their concentrations in finished products has been challenging, especially without cutting open or damaging the final material. Without knowing what defects are in their materials, engineers risk making products that perform poorly or have unintended properties.

Now, MIT researchers have built an AI model capable of classifying and quantifying certain defects using data from a noninvasive neutron-scattering technique. The model, which was trained on 2,000 different semiconductor materials, can detect up to six kinds of point defects in a material simultaneously, something that would be impossible using conventional techniques alone.

“Existing techniques can’t accurately characterize defects in a universal and quantitative way without destroying the material,” says lead author Mouyang Cheng, a PhD candidate in the Department of Materials Science and Engineering. “For conventional techniques without machine learning, detecting six different defects is unthinkable. It’s something you can’t do any other way.”

The researchers say the model is a step toward harnessing defects more precisely in products like semiconductors, microelectronics, solar cells, and battery materials.

“Right now, detecting defects is like the saying about seeing an elephant: Each technique can only see part of it,” says senior author and associate professor of nuclear science and engineering Mingda Li. “Some see the nose, others the trunk or ears. But it is extremely hard to see the full elephant. We need better ways of getting the full picture of defects, because we have to understand them to make materials more useful.”

Joining Cheng and Li on the paper are postdoc Chu-Liang Fu, undergraduate researcher Bowen Yu, master’s student Eunbi Rha, PhD student Abhijatmedhi Chotrattanapituk ’21, and Oak Ridge National Laboratory staff members Douglas L. Abernathy PhD ’93 and Yongqiang Cheng. The paper appears today in the journal Matter.

Detecting defects

Manufacturers have gotten good at tuning defects in their materials, but measuring precise quantities of defects in finished products is still largely a guessing game.

“Engineers have many ways to introduce defects, like through doping, but they still struggle with basic questions like what kind of defect they’ve created and in what concentration,” Fu says. “Sometimes they also have unwanted defects, like oxidation. They don’t always know if they introduced some unwanted defects or impurity during synthesis. It’s a longstanding challenge.”

The result is that there are often multiple defects in each material. Unfortunately, each method for understanding defects has its limits. Techniques like X-ray diffraction and positron annihilation characterize only some types of defects. Raman spectroscopy can discern the type of defect but can’t directly infer the concentration. Another technique, transmission electron microscopy, requires cutting thin slices from a sample for imaging.

In a few previous papers, Li and collaborators applied machine learning to experimental spectroscopy data to characterize crystalline materials. For the new paper, they wanted to apply that technique to defects.

For their experiment, the researchers built a computational database of 2,000 semiconductor materials. They made sample pairs of each material, with one doped for defects and one left without defects, then used a neutron-scattering technique that measures the different vibrational frequencies of atoms in solid materials. They trained a machine-learning model on the results.

“That built a foundational model that covers 56 elements in the periodic table,” Cheng says. “The model leverages the multihead attention mechanism, just like what ChatGPT is using. It similarly extracts the difference in the data between materials with and without defects and outputs a prediction of what dopants were used and in what concentrations.”
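As a rough sketch of the mechanism Cheng describes — multihead attention applied to the difference between spectra of doped and pristine materials — here is a minimal, untrained example. Nothing below reproduces the paper’s architecture; the shapes, random weights, and `multihead_attention` helper are all illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (not the paper's model): multihead self-attention over a
# vibrational spectrum, operating on the doped-minus-pristine difference.
rng = np.random.default_rng(0)

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def multihead_attention(x, n_heads=4):
    """x: (tokens, d_model) -- spectral segments embedded as tokens."""
    tokens, d_model = x.shape
    d_head = d_model // n_heads
    heads = []
    for _ in range(n_heads):
        # Random (untrained) projection matrices; a real model learns these.
        Wq, Wk, Wv = (rng.normal(0, d_model ** -0.5, (d_model, d_head))
                      for _ in range(3))
        q, k, v = x @ Wq, x @ Wk, x @ Wv
        attn = softmax(q @ k.T / np.sqrt(d_head))  # (tokens, tokens) weights
        heads.append(attn @ v)                     # (tokens, d_head)
    return np.concatenate(heads, axis=-1)          # (tokens, d_model)

# Fake spectra: 16 tokens of width 64 standing in for binned vibrational data.
pristine = rng.normal(size=(16, 64))
doped = pristine + 0.1 * rng.normal(size=(16, 64))  # defect-induced perturbation
features = multihead_attention(doped - pristine)    # what attention operates on
print(features.shape)
```

In the real pipeline these attended features would feed a trained head that outputs dopant identities and concentrations; here the output is just a shape-preserving feature map.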

The researchers fine-tuned their model, verified it on experimental data, and showed it could measure defect concentrations in an alloy commonly used in electronics and in a separate superconductor material.

The researchers also doped the materials multiple times to introduce multiple point defects and test the limits of the model, ultimately finding it can make predictions about up to six defects in materials simultaneously, with defect concentrations as low as 0.2 percent.

“We were really surprised it worked that well,” Cheng says. “It’s very challenging to decode the mixed signals from two different types of defects — let alone six.”

A model approach

Typically, manufacturers of things like semiconductors run invasive tests on a small percentage of products as they come off the manufacturing line, a slow process that limits their ability to detect every defect.

“Right now, people largely estimate the quantities of defects in their materials,” Yu says. “It is a painstaking experience to check the estimates by using each individual technique, which only offers local information in a single grain anyway. It creates misunderstandings about what defects people think they have in their material.”

The results were exciting for the researchers, but they note that their technique, which measures vibrational frequencies with neutrons, would be difficult for companies to quickly deploy in their own quality-control processes.

“This method is very powerful, but its availability is limited,” Rha says. “Vibrational spectra is a simple idea, but in certain setups it’s very complicated. There are some simpler experimental setups based on other approaches, like Raman spectroscopy, that could be more quickly adopted.”

Li says companies have already expressed interest in the approach and asked when it will work with Raman spectroscopy, a widely used technique that measures the scattering of light. Li says the researchers’ next step is training a similar model based on Raman spectroscopy data. They also plan to expand their approach to detect features that are larger than point defects, like grains and dislocations.

For now, though, the researchers believe their study demonstrates the inherent advantage of AI techniques for interpreting defect data.

“To the human eye, these defect signals would look essentially the same,” Li says. “But the pattern recognition of AI is good enough to discern different signals and get to the ground truth. Defects are this double-edged sword. There are many good defects, but if there are too many, performance can degrade. This opens up a new paradigm in defect science.”

The work was supported, in part, by the Department of Energy and the National Science Foundation.

Leading with rigor, kindness, and care

Fri, 03/27/2026 - 5:00pm

Professor Sara Prescott embodies the kind of mentorship every graduate student hopes to find: grounded in scientific rigor, guided by kindness, and defined by a deep commitment to well-being. Her approach reflects a simple but powerful belief that transformative mentorship is not only about advancing research, but about cultivating confidence, belonging, and resilience in the next generation of scholars.

A member of the 2025–27 Committed to Caring cohort, Prescott exemplifies the program’s spirit, which honors faculty who go above and beyond in nurturing both the intellectual and personal development of MIT’s graduate students.

Prescott is the Pfizer Inc. - Gerald D. Laubach Career Development Professor in the MIT departments of Biology and Brain and Cognitive Sciences, and an investigator at the Picower Institute for Learning and Memory. Her research addresses fundamental questions in body-brain communication, with a focus on lung biology, early-life adversity, women’s health, and the impacts of climate change on respiratory health.

A culture of compassion

Prescott’s mentoring philosophy begins with a focus on professional sustainability. “We cannot be effective scientists if we are unhappy or unhealthy outside of the lab,” she says. 

She pushes back against what she sees as an unhelpful narrative in academia. “There’s this idea that you must choose between a successful PhD or having a personal life. This is a false dichotomy, and a problematic attitude.” Instead, she reminds her mentees that “graduate school is a marathon, not a sprint,” encouraging them to place importance not only on their research, but also on their mental and physical well-being.

This set of values shines through within her lab climate as a whole. Students describe support for flexible scheduling and mental health leave, a willingness to reimburse meals during late-night lab sessions, and encouragement during stretches of experimental failure. In addition to these more technical supports, nominators also shared stories of Prescott engaging in the smaller details: prioritizing connection for her students, celebrating their milestones, organizing lab retreats, and fostering a culture where people feel valued beyond their productivity.

Students recognize Prescott as a safe haven within the often complex and challenging world of research. Joining Prescott’s lab was a turning point for one student who was recovering from a damaging prior mentorship experience. They arrived uncertain, struggling to trust faculty and questioning whether they belonged in science at all. Prescott met them with empathy and professionalism, offering patience and trust not just in their work, but in them as a person. They describe steady support that, over time, helped them “fall back in love with science” and envision a future they had nearly abandoned.

Prescott draws inspiration from the mentorship she received early in her career. As a trainee, she had mentors who helped her believe that she could succeed. Now in a mentoring role herself, she does her best to pass this sense of confidence on to her advisees.

She is intentional about creating space where students can grow without fear. From their very first meetings, one nominator wrote, Prescott emphasized that “graduate school is a place for learning and curiosity.” They never felt judged for not knowing something; instead, they were encouraged to ask questions, share ideas, and take intellectual risks. That environment, the student explained, allowed them to grow into their scientific identity with confidence.

Prescott reinforces this message often. Success, she tells students, grows from effort, learning, and persistence, rather than from fixed traits. When working with students, she does her best to reframe failure as part of the process, emphasizing its importance within the scientific journey. Through these avenues, she cultivates a lab culture where students are challenged to think boldly while feeling genuinely supported, and where they are seen not only as researchers, but as whole people.

Advocacy beyond the bench

Prescott’s commitment to caring extends well beyond day-to-day lab work. Her nominators relate that she actively supports her students’ professional development, encouraging them to pursue writing projects, certificates, internships, leadership roles, and community engagement.

Nominators also highlight Prescott’s focus on supporting underserved communities within the field as a whole. Students highlight her involvement with Graduate Women in Biology (GwiBio), where she volunteered as a speaker for the “Glass Shards” series. Her talk “Failure as the Path to Success,” in which she candidly shared pivots and setbacks in her own career, was described as one of the organization’s most impactful sessions. 

Her dedication to inclusion is equally evident in her mentorship of scholars whose role in her lab is more temporary. She welcomes international visiting scholars, temporary lab techs, and undergraduate interns in the MIT Summer Research Program. When one intern encountered barriers at their home institution, Prescott ensured they had a continued research home in her lab at MIT. This support allowed them to complete their undergraduate thesis and graduate on time from their university.

Prescott says that she views mentorship as an evolving practice, regularly soliciting feedback from her students. Effective leadership, in her view, grows from mutual trust and open communication.

For many nominators, Prescott’s impact extends beyond their careers. “She has taught me what positive and supportive mentoring relationships look like,” one student reflected. “When I think about the type of mentor I want to be, I hope I can emulate the ways in which she supports and guides her students to develop their scientific independence and confidence.”

In lifting up the people behind the science as thoughtfully as the science itself, Sara Prescott demonstrates that the most enduring legacy of a mentor is not only the discoveries from their lab, but the composure and courage their advisees carry forward.

MIT hackathon tackles real-world challenges in Ukraine

Fri, 03/27/2026 - 4:40pm

During this year’s Independent Activities Period (IAP), students, researchers, and collaborators across seven time zones came together to tackle urgent technical challenges facing Ukraine as the full-scale war enters its fourth year. 

A four-week hackathon, Build for Ukraine 2.0, brought MIT students and Ukrainian collaborators into a shared innovation environment where power outages, air-raid alerts, and subzero temperatures were part of the daily reality of teamwork.

The event was co-led by the MIT-Ukraine Program, MIT Edgerton Center, and MIT Lincoln Laboratory Beaver Works, with support from Mission Innovation X, MathWorks, and MIT.nano.

Designed and taught as an IAP subject EC.S01/EC.S11 (Build for Ukraine 2026), the hackathon paired technically diverse participants with Ukrainian organizations seeking near-term solutions to problems arising directly from wartime conditions.

“It’s not every working group that has to reschedule team meetings because some members are in Ukraine and just had a blackout,” says Hosea Siu ’14, SM ’15, PhD ’18, one of the lead organizers. “This class is unusual — in the most meaningful ways.”

A collaborative class built for real-world urgency

Build for Ukraine centered on co-design and rapid prototyping with in-country partners. Organizers spent the fall gathering challenge statements from stakeholders in Ukraine, Taiwan, the United Kingdom, Spain, and across the United States. The goal: identify problems where a small, interdisciplinary team could make measurable progress in one month.

The participant pool reflected MIT’s open IAP structure. First-year undergraduates worked alongside senior engineers, international researchers, and Ukrainian colleagues participating remotely despite frequent blackouts. Many joined meetings from darkened apartments in Kyiv, Kharkiv, and Cherkasy — often relying on unstable heaters and backup battery packs. One participant excused himself from a design review due to an air-raid alert.

“These groups developed what I call ‘quantum entanglement,’” says Svetlana Boriskina, a principal research scientist at MIT and director of the Multifunctional Metamaterials Laboratory in the Department of Mechanical Engineering. “They were sharing data in real time across continents, while experiencing the war’s impacts directly and indirectly.”

Setting the foundation: briefings and technical overviews

The first week introduced participants to the geopolitical, technical, and humanitarian landscape that would frame their work. Topics included:

  • War context and co-design practices. Boriskina and Elizabeth Wood, faculty director of the MIT-Ukraine Program and professor of history at MIT, outlined current conditions in Ukraine. Student mentor Natalie Dean ’26 (vice president of MIT’s Assistive Technology Club) led a session on co-design — emphasizing partnership with, not for, Ukrainian collaborators.
  • Extreme-environment engineering. Boriskina introduced two possible technical tracks proposed by her collaborators at Kharkiv Institute of Physics and Technology: radiation-hardened materials and self-powered sensors for extreme environments, and acoustic analysis for monitoring supercritical water cooling systems in nuclear reactors. One team, later known as HotPot, adopted the latter challenge.
  • AI, Open Source Intelligence, and disinformation. Phil Tinn ’16, a research scientist at SINTEF and an affiliate of the MIT-Ukraine Program, along with specialists from IN2, described how disinformation narratives travel across platforms, from Telegram to global social media. Cambridge University researcher Jon Roozenbeek discussed early threat-signal detection using pricing fluctuations in fake SMS verifications. Ukrainian partners presented on large language model bias propagation, bot detection, and media-anomaly analysis — groundwork for the eventual VibeTracking team.
  • Explosive ordnance disposal. Experts from MineSight and the U.S. Army National Guard detailed the scale of landmine contamination in Ukraine — by some estimates affecting a third of the country. These sessions inspired Clearview Interface, which worked on improving visual feedback for de-mining tools.
  • Drone detection. Engineers from Skyfall and MIT’s student community introduced acoustic, radiofrequency (RF), and fiber-optic-tether detection methods for drones — leading to two separate teams: Birdwatch (acoustic detection) and Hrobachki (RF detection).

Five teams, seven time zones, and one month of development

Nearly 90 people joined the project through Discord, and by the end of week one, five core teams had formed. Roles blurred: Undergraduates mentored professionals; Ukrainian engineers supplied real-time operational data; and faculty offered rapid problem-solving guidance. Each team completed a Preliminary Design Review, Critical Design Review, and final presentation to an audience of more than 80 people, online and in person.

Despite the compressed timeline, the teams delivered promising prototypes and analyses with potential real-world application.

Team highlights

Clearview Interface — Visualizing metal-detector data for safer de-mining

Two undergraduates from Olin College developed a method for converting complex metal-detector audio signals — often an overwhelming sequence of indistinguishable beeps — into intuitive visual information. Their approach could help de-miners identify object types more quickly and accurately, enhancing both safety and mapping. The team reverse-engineered commercial detector outputs and produced a preliminary interface they plan to refine this spring.

HotPot — Acoustic monitoring for nuclear-reactor cooling systems

This team of seven (five at MIT and two from the Kharkiv Institute of Physics and Technology) worked to detect transitions from water to supercritical states inside steam pipes — a critical safety parameter in nuclear facilities that have remained in operation during wartime. Combining physics simulations, hardware engineering, and acoustics, the group analyzed data from Ukrainian partners and proposed a model capable of identifying supercritical conditions via remote monitoring.

Birdwatch — Acoustic detection of fiber-optic-controlled drones

With drones frequently used along the front and often tethered to fiber-optic control lines that evade RF detection, the Birdwatch team built an audio-based detection system using a network of cameras and microphones. They trained their model on drone signatures recorded across MIT’s campus and integrated early detections into a decision-support tool to help operators interpret and act on the alerts.

Hrobachki — Radiofrequency localization for long-range drones

Two MIT students, along with collaborators at Kenyon College, Olin College, and a partner in Cherkasy, Ukraine, focused on RF detection for drones operating beyond front-line distances. They established nodes at MIT, Olin, and the town of Milton, Massachusetts, demonstrating the feasibility of distributed RF sensing for aerial threat identification.

VibeTracking — Following the movement of disinformation narratives

The smallest team — a master’s student in Lviv supported by several advisors — collaborated with IN2 to build a large-language-model pipeline that classifies and groups narratives across platforms such as Telegram and X. Their system demonstrated the likely propagation path of a specific narrative, illustrating how early-stage disinformation can be identified before it reaches mainstream channels.

Resilience, connection, and next steps

On the final day of presentations, specialists from Ukrainian universities, industry partners, and MIT-affiliated programs filled the room and populated the Zoom call. Their response was enthusiastic, not only because of what the teams produced in four weeks, but because of the collaborative networks formed under difficult conditions.

“The most important outcome is the community that emerged,” Boriskina says. “These teams built tools — but they also built relationships that will carry this work forward.”

Organizers expect several projects to continue this spring through research internships, Undergraduate Research Opportunity Program projects, and follow-on collaborations with Ukrainian institutions.

Students interested in joining ongoing Build for Ukraine projects can email the MIT-Ukraine Program. To support MIT-Ukraine initiatives, contact Svitlana Krasynska.

Seeing sounds

Thu, 03/26/2026 - 4:45pm

As one of the first students in MIT’s new Music Technology and Computation Graduate Program, Mariano Salcedo ’25 is researching the intersection between artificial intelligence and music visuals.

Specifically, his graduate research focuses on neural cellular automata (NCA), which merges classical cellular automata with machine learning techniques to grow images that can regenerate.

When paired with a stimulus like music, these images can “show” sounds in action.

“This approach enables anyone to create music-driven visuals while leveraging the expressive and sometimes unpredictable dynamics of self-organized systems,” Salcedo says. Through the web interface Salcedo has designed, users can adjust the relationship between the music’s energy and the NCA system to create unique visual performances using any music audio stream.
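The article doesn't describe Salcedo's implementation, so the following is only an illustrative sketch of the general neural-cellular-automata idea it mentions: every cell on a grid applies the same tiny update rule to its neighborhood, and a stochastic mask makes updates asynchronous so patterns self-organize. The per-frame "music energy" values and the fixed random weights here are hypothetical stand-ins, not anything from his system.

```python
import numpy as np

def perceive(grid):
    """Concatenate each cell's state with a simple neighbor average
    (a stand-in for the filter-based perception used in typical NCAs)."""
    # grid: (H, W, C)
    up    = np.roll(grid,  1, axis=0)
    down  = np.roll(grid, -1, axis=0)
    left  = np.roll(grid,  1, axis=1)
    right = np.roll(grid, -1, axis=1)
    neighbor_avg = (up + down + left + right) / 4.0
    return np.concatenate([grid, neighbor_avg], axis=-1)  # (H, W, 2C)

def nca_step(grid, weights, rng, update_rate=0.5):
    """One NCA update: every cell runs the same tiny 'network' (here a
    single linear layer + tanh) on its perceived neighborhood; a random
    mask decides which cells update this frame."""
    perception = perceive(grid)              # (H, W, 2C)
    delta = np.tanh(perception @ weights)    # (H, W, C)
    mask = rng.random(grid.shape[:2])[..., None] < update_rate
    return grid + delta * mask

rng = np.random.default_rng(0)
H, W, C = 32, 32, 8
grid = rng.random((H, W, C)) * 0.1
weights = rng.normal(0, 0.1, size=(2 * C, C))  # untrained, for illustration

# Hypothetical per-frame loudness values drive the update rate,
# so louder moments make the pattern evolve faster.
for energy in [0.2, 0.9, 0.5]:
    grid = nca_step(grid, weights, rng, update_rate=energy)

print(grid.shape)  # (32, 32, 8)
```

In a trained system the linear layer would be learned so that the grown patterns regenerate when damaged; here the weights are random, which is enough to show how an audio-derived scalar can steer the dynamics.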

“I want the visuals to complement and elevate the listening experience,” he says.

Last year Salcedo, the Alex Rigopulos (1992) Fellow in Music Technology and Computation, earned a BS in artificial intelligence and decision making from MIT, where he explored signal processing in machine learning and how a classical understanding of signals can inform how we understand AI. Now he’s one of five master’s students in the Music Technology and Computation Graduate Program’s inaugural cohort.

The program, directed by professor of the practice in music technology Eran Egozy ’93, MNG ’95, is a collaboration between MIT Music and Theater Arts in the School of Humanities, Arts, and Social Sciences, and the School of Engineering. It invites practitioners to study, discover, and develop new computational approaches to music. It also includes a speaker series that exposes students and the broader MIT community to music industry professionals, artists, technologists, and other researchers.

Rigopulos ’92, SM ’94, is a video game designer, musician, and former CEO of Harmonix Music Systems, a company he co-founded with Egozy in 1995. Harmonix is now a part of Epic Games, where Rigopulos is the director of game development for music.

“MIT is where I was first able to pursue my passion for music technology decades ago, and that experience was the springboard for a long and fulfilling career,” says Rigopulos. “So, when MIT launched an advanced degree program in music technology, I was thrilled to fund a fellowship to help propel this exciting new program.”

Egozy is enthusiastic about Salcedo’s work and his commitment to further exploring its possibilities. “He is a beautiful example of a multidisciplinary researcher who thinks deeply about how to best use technology to enhance and expand human creativity,” he says.

Salcedo has been selected to deliver the student address at the 2026 Advanced Degree Ceremony for the School of Humanities, Arts, and Social Sciences. “It’s an honor and it’s daunting,” he says. “It feels like a huge responsibility,” though one he’s eager to embrace. His selection also pleases Egozy. “I am super excited that Mariano was chosen to deliver this year’s keynote,” he enthuses.

Changing gears

Growing up in Mexico and Texas, Mariano Salcedo couldn’t readily indulge his passion for creating music. “There are no bands in Mexican public schools,” he says. While some families could pay for instruments and lessons, others like Salcedo’s were less fortunate.

“I’ve always loved music,” he continues. “I was a listener.”

Salcedo began his MIT journey as a mechanical engineering student, applying to MIT through the Questbridge program. “I heard if you like engineering and science that attending MIT would be a great choice,” he recalls. “Nerds are welcomed and embraced.” While he dutifully worked toward completing his MechE curriculum, music and technology came calling after a chance encounter with an LLM.

“I was introduced to an LLM chatbot and was blown away,” he recalls. “This was something that was speaking to me. I was both awed and frightened.” After his encounter with the chatbot, Salcedo switched his major from mechanical engineering to artificial intelligence and decision making.

“I basically started over after being two thirds of the way through the MechE curriculum,” he says. He learned about the possibilities available with AI but also confronted some of the challenges bedeviling researchers and developers, including the technology’s sheer power, the need to ensure its responsible use, human bias, limited access for people from underrepresented groups, and a lack of diversity among developers. He decided he might be able to change that picture.

“I thought one more person in the field could make a difference,” he says.

While Salcedo was completing his undergraduate studies, his love of music resurfaced. “I began DJ’ing at MIT and was hooked,” he says. While he hadn’t learned to play a traditional instrument, he discovered he could create engaging soundscapes with technology. “I bought a digital audio workstation to help me make music,” he continues.

Egozy and Salcedo met in 2024 while Salcedo completed an Undergraduate Research Opportunities Program rotation as a game developer in Egozy’s lab. “He was incredibly curious and has grown tremendously over a very short time period,” Egozy says. Egozy became an informal, though important, mentor to Salcedo. “He brings great energy and thoughtfulness to his work, and to supporting others in the [music technology and computation graduate] program,” Egozy notes.

Salcedo also took a class with Egozy, 21M.385/21M.585/6.4450 (Interactive Music Systems), which further fed his appetite for the creativity he craved while also allowing him to indulge his fascination with music’s possibilities. By taking advantage of courses in the HASS curriculum, he further developed his understanding of music theory and related technologies.

“I took a class with professor Leslie Tilley, 21M.240 (Critically Thinking in Music), which helped establish a valuable framework for understanding music making,” he says, “while a class like 6.3000 (Signal Processing) helped me connect intuition with science.”

Working across disciplines

While Salcedo is passionate about his music and his research, he’s also invested in building relationships with his fellow students. He’s a member of the fraternity Sigma Nu, where he says he “found a home and community.” He also took a MISTI trip to Chile in summer 2023, where he conducted music technology research. Salcedo praises the culture of camaraderie at MIT and is grateful for its influence on his work as a scholar. “MIT has taught me how to learn,” he says.

Professors encouraged him to present his research and findings. He presented his work — Artificial Dancing Intelligence: Neural Cellular Automata for Visual Performance of Music — at the Association for the Advancement of Artificial Intelligence conference in Singapore in January 2026.

Salcedo believes his research can potentially move beyond music visualization. “What if we could improve the ways we model self-organized systems?” he asks. “That is, systems like multicellular organisms, flocks of birds, or societies that interact locally but exhibit interesting behaviors.” Any system, Salcedo says, where the whole is more than the sum of its parts.

Developing the technology behind his application could also help answer important ethical questions about AI’s continued expansion and growth. The path he has chosen can be daunting and lonely, but those challenges feed his work ethic.

“It’s intimidating to pursue this path when the academy is currently focused on LLMs,” he says. “But it’s also important to explain and explore the base technology before digging into more nuanced work, which can help audiences understand it better.” Knowing that he has the support of his professors helps Salcedo maintain excitement for his ideas. “They only ask that we ground our interests in research,” he says.

His investigations are impacting his work as a musician. “My music has gotten more interesting because of the classes I’m taking,” he says. He’s also interested in understanding whose music the academy and the world hears, examining biases toward Western music in the canon and how to reduce biases about which kinds of music are valued.

“The work we do as technologists is far less subjective than we’re led to believe,” he believes.

Salcedo is especially grateful for the support he’s received during his time at MIT. “Program faculty encourage a variety of pursuits,” he says, “and ask us to advance our individual aims rather than focusing on theirs.” During his time in the graduate program, he notes with enthusiasm how often he’s been challenged to pursue his ideas.

Ultimately, Salcedo wants people to experience the joy he feels working at the intersection of the humanities and the sciences. Music and technology impact nearly everyone. Inviting audiences into his laboratory as participants in the creative and research processes offers the same kind of satisfaction he gets from crafting a great beat or solving a thorny technical challenge. Helping audiences understand his work’s value fuels his drive to succeed.

“I want users to feel movement and explore sounds and their impact more fully,” he says.

MIT engineers design proteins by their motion, not just their shape

Thu, 03/26/2026 - 4:20pm

Proteins are far more than nutrients we track on a food label. Present in every cell of our bodies, they work like nature’s molecular machines. They walk, stretch, bend, and flex to do their jobs: pumping blood, fighting disease, building tissue, and carrying out many other tasks too small for the eye to see. Their power doesn’t come from shape alone, but from how they move.

In recent years, artificial intelligence has allowed scientists to design entirely new protein structures not found in nature, tailored for specific functions such as binding to viruses or mimicking the mechanical properties of silk for sustainable materials. But designing for structure alone is like building a car body without any control over how the engine performs. The subtle vibrations, shifts, and mechanical dynamics of a protein are just as critical to its function as its form.

Now, MIT engineers have taken a major step toward closing the gap with the development of an AI model known as VibeGen. If vibe coding lets programmers describe what they want and then AI generates the software, VibeGen does the same for living molecules: specify the vibe — the pattern of motion you want — and the model writes the protein. 

The new model allows scientists to target how a protein flexes, vibrates, and shifts between shapes in response to its environment, opening a new frontier in the design of molecular mechanics. VibeGen builds on a series of advances from the Buehler lab in agentic AI for science — systems in which multiple AI models collaborate autonomously to solve problems too complex for any single model.

“The essence of life at fundamental molecular levels lies not just in structure, but in movement,” says Markus Buehler, the Jerry McAfee Professor of Engineering in the departments of Civil and Environmental Engineering and Mechanical Engineering. “Everything from protein folding to the deformation of materials under stress follows the fundamental laws of physics.”

Buehler and his former postdoc, Bo Ni, identified a critical need for what they call physics-aware AI: systems capable of reasoning about motion, not just snapshots of molecular structure. “AI must go beyond analyzing static forms to understanding how structure and motion are fundamentally intertwined,” Buehler adds.

The new approach, described in a paper published March 24 in the journal Matter, uses generative AI to create proteins with tailor-made dynamics.

Training AI to think about motion 

The revolution in AI-driven protein science has been, overwhelmingly, a revolution in structure. Tools like AlphaFold solved the decades-old problem of predicting a protein’s three-dimensional shape. Existing generative models learned to design new shapes from scratch. But in focusing on the folded snapshot — the protein frozen in place — the field largely set aside the property that makes proteins work: their motion. “Structure prediction was such a grand challenge that it absorbed the field’s attention,” Buehler says. “But a protein’s shape is just one frame of a much longer film, and the design space extends through space and time, where structure sits on a much broader manifold.” Scientists could design a protein with a particular architecture. They couldn’t yet specify how that protein would move, flex, or vibrate once it was built.

VibeGen does something no protein design tool has done before. It inverts the traditional problem. Rather than asking, “What shape will this sequence produce?” it asks, “What sequence will make a protein move in exactly this way?”

To build VibeGen, Buehler and Ni turned to a class of AI diffusion models, the same underlying technology that powers AI image generators capable of creating realistic pictures from pure noise. In VibeGen’s case, the model starts with a random sequence of amino acids and refines it, step by step, until it converges on a sequence predicted to vibrate and flex in a targeted way.

The system works through two cooperating agents that design and challenge each other. A “designer” proposes candidate sequences aimed at a target motion profile. A “predictor” evaluates those candidates, asking whether they’ll actually move the way the designer intended. The two models iterate back and forth like an internal dialogue, until the design stabilizes into something that meets the goal. By specifying this vibrational fingerprint as the design input, VibeGen inverts the usual logic: dynamics becomes the blueprint, and structure follows.

“It’s a collaborative system,” Ni says. “The designer proposes, the predictor critiques, and the design improves through that tension.”
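The paper's architecture isn't detailed in this article, so the sketch below is only a toy illustration of the designer/predictor pattern described above: a proposal step nudges a noisy "sequence" toward a target motion fingerprint while the injected noise anneals away, and a critic keeps a candidate only if the predictor scores it as closer to the goal. Every name here (`MOTION_MAP`, `predictor`, `designer_step`) and the linear "motion map" are invented stand-ins for what would be trained models in the real system.

```python
import numpy as np

rng = np.random.default_rng(42)

# Toy stand-ins: a "sequence" is just a vector, and its "motion
# profile" is a fixed random linear map of it (purely illustrative).
SEQ_LEN, MODES = 16, 4
MOTION_MAP = rng.normal(size=(SEQ_LEN, MODES))

def predictor(seq):
    """Score a candidate: predict its vibrational fingerprint."""
    return np.tanh(seq @ MOTION_MAP)

def designer_step(seq, target, noise_scale):
    """Propose a refinement: nudge the sequence toward lower error on
    the target motion profile, plus diffusion-style shrinking noise."""
    grad = MOTION_MAP @ (predictor(seq) - target)  # cheap error signal
    return seq - 0.1 * grad + rng.normal(scale=noise_scale, size=seq.shape)

target = np.array([0.5, -0.3, 0.2, 0.0])  # desired motion fingerprint
seq = rng.normal(size=SEQ_LEN)            # start from pure noise
initial_error = np.linalg.norm(predictor(seq) - target)

for step in range(200):
    noise = 0.5 * (1 - step / 200)        # noise anneals toward zero
    candidate = designer_step(seq, target, noise)
    # The "critique" half of the dialogue: accept only candidates the
    # predictor rates as no worse than the current design.
    if np.linalg.norm(predictor(candidate) - target) <= \
       np.linalg.norm(predictor(seq) - target):
        seq = candidate

error = np.linalg.norm(predictor(seq) - target)
print(error < initial_error)
```

The design choice worth noting is the acceptance test: because the predictor gates every proposal, the error is non-increasing, mirroring how the two models "iterate back and forth" until the design stabilizes.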

Most sequences VibeGen produces are entirely de novo, not borrowed from nature, not a variation on something evolution already made. To confirm the designs actually work, the team ran detailed physics-based molecular simulations, and the proteins behaved exactly as intended, flexing and vibrating in the patterns VibeGen had targeted.

One of the study’s most striking findings is that many different protein sequences and folds can satisfy the same vibrational target — a property the researchers call functional degeneracy. Where evolution converged on one solution, VibeGen reveals an entire family of alternatives: proteins with different structures and sequences that nonetheless move in the same way. “It suggests that nature explored only a fraction of what’s possible,” Buehler says. “For any given dynamic behavior, there may be a large, untapped space of viable designs.”

A new frontier in molecular engineering

Controlling protein dynamics could have wide-ranging applications. In medicine, proteins that can change shape on cue hold enormous potential. Many therapeutic proteins work by binding to a target molecule — a virus, a cancer cell, a misfiring receptor. How well they bind often depends not just on their shape, but on how flexibly they can adapt to their target. A protein engineered with motion in mind could grip more precisely, reduce unintended interactions, and ultimately become a safer, more effective drug.

In materials science, an area of Buehler’s research, mechanical properties at the molecular scale shape a material’s performance. Biological materials like silk and collagen get their strength and resilience from the coordinated motion of their molecular building blocks. Designing proteins that are stiffer, more flexible, or vibrate in a particular way could lead to new sustainable fibers, impact-resistant materials, or biodegradable alternatives to petroleum-based plastics.

Buehler envisions further possibilities: structural materials for buildings or vehicles incorporating protein-based components that heal themselves after mechanical stress, or that adjust in response to heavy load.

By enabling researchers to specify motion as a direct design parameter, VibeGen treats proteins less like static shapes and more like programmable mechanical devices. The advance bridges artificial intelligence, medicine, synthetic biology, and materials engineering — toward a future in which molecular machines can be designed with the same precision and intentionality as bridges, engines, or microchips.

“VibeGen can venture into uncharted territory, proposing protein designs beyond the repertoire of evolution, tailored purely to our specifications. It’s as if we’ve invented a new creative engine that designs molecular machines on demand,” Buehler adds.

The researchers plan to refine the model further and validate their designs in the lab. They also hope to integrate motion-aware design with other AI tools, building toward systems that can design proteins to be not just dynamic, but multifunctional: machines that sense their environment, respond to signals, and adapt in real time.

The word “vibe” comes from vibration, and Buehler sees the connection as more than wordplay. “We've turned 'vibe' into a metaphor, a feeling, something subjective,” he says. “But for a protein, the vibe is the physics. It is the actual pattern of motion that determines what the molecule can do, the very machinery of life.”

The research was supported by the U.S. Department of Agriculture, the MIT-IBM Watson AI Lab, and MIT’s Generative AI Initiative. 

Pages