MIT Latest News

Insights into political outsiders
As the old saw has it, 90 percent of politics is just showing up. Which is fine for people who are already engaged in the political system and expect to influence it. What about everyone else? The U.S. has millions and millions of people who typically do not vote or participate in politics. Is there a way into political life for those who are normally disconnected from it?
This is a topic MIT political scientist Ariel White has been studying closely over the last decade. White conducts careful empirical research on typically overlooked subjects, such as the relationship between incarceration and political participation; the way people interact with government administrators; and how a variety of factors, from media coverage to income inequality, influence engagement with politics.
While the media heavily cover the views of frequent voters in certain areas, very little attention is paid to citizens who do not vote regularly but could. To grasp U.S. politics, it would help to better understand such people.
“I think there is a much broader story to be told here,” says White, an associate professor in MIT’s Department of Political Science.
Study by study, her research has been telling that story. Even short, misdemeanor-linked jail terms, White has found, reduce the likelihood that people will vote — and lower the propensity of family members to vote as well. People convicted of felonies often lose their right to vote, but even when eligible, they vote at low rates. Other studies by White suggest that an 8 percent minimum wage increase raises turnout by about one-third of 1 percent, and that people receiving public benefits are far less likely to vote than those who do not receive them.
These issues are often viewed in partisan terms, although the reality, White thinks, is considerably more complex. When it comes to infrequent or disconnected voters, we simply do not know enough to make confident assumptions about how they would vote.
“Getting people with past criminal convictions registered and voting, when they are eligible, is not a surefire partisan advantage for anybody,” White says. “There’s a lot of heterogeneity in this group, which is not what people assume. Legislators tend to treat this as a partisan issue, but at the mass public level you see less polarization, and more people are willing to support a path for others back into daily life.”
Experiences matter
White grew up near Rochester, New York, and majored in economics and government at Cornell University. She says that initially she never considered entering academia, and tried her hand at a few jobs after graduation. One of them, working as an AmeriCorps-funded paralegal in a legal services office, had a lasting influence; she started thinking more about the nature of government-citizen interactions in these settings.
“It really stuck in my mind the way people’s experiences, one-on-one with a person who is representing government, when trying to get benefits, really shapes people’s views about how government is going to operate and see them, and what they can expect from the state,” White says. “People’s experiences with government matter for what they do politically.”
Before long, White was accepted into the doctoral program at Harvard University, where she earned an MA in 2012 and her PhD in 2016. White then joined the MIT faculty, also in 2016, and has remained at the Institute ever since.
White’s first published paper, in 2015, co-authored with Julie Faller and Noah Nathan, found that government officials tended to have different levels of responsiveness when providing voting information to people of apparently different ethnicities. It won an award from the American Political Science Association. (Nathan is now also a faculty member at MIT.)
Since then, White has published a string of papers examining how a variety of factors interact with voting propensities. In one study focused on Pennsylvania, she found that public benefits recipients made up 20 percent of eligible voters in 2020 but just 12 percent of those who voted. When examining the criminal justice system, White has found that even short-term jail time leads to a turnout drop of several percentage points among the incarcerated. Family members of those serving even short jail sentences are less likely to vote in the near term too, although their participation rebounds over time.
“People don’t often think of incarceration as a thing they connect with politics,” White says. “Descriptively, with many people who have had the experience of incarceration or criminal convictions, or who are living in families or neighborhoods with a lot of it, we don’t see a lot of political action, and we see low levels of voting. Given how widespread incarceration is in the U.S., it seems like one of the most common and impactful things the government can do. But for a long time it was left to sociology to study.”
How to reach people?
Having determined that citizens in many circumstances are less likely to vote, White is now turning toward a related question: What are the most viable ways of changing that? To be sure, nothing is likely to create a tsunami of new voters. Even where people convicted of felonies can vote from prison, she found in another study, they do so at single-digit rates. People who are used to not voting are not going to start voting at high rates in the aggregate.
Still, this fall, White led a new field experiment on getting unregistered voters to both register and vote. She and her colleagues designed the study to test whether friends of unregistered voters might be especially able to get people in their networks to join the voter rolls. The results are still under review. But for White, it is a new area where many kinds of experiments and studies seem possible.
“Political science in general and the world of actual practicing political campaigns knows an awful lot about how to get registered voters to turn out to vote,” White says. “There’s so much work on get-out-the-vote activities, mailers and calls and texts. We know way, way less about the 1-in-4 or so eligible voters who are simply not registered at all, and are in a very real sense invisible in the political landscape. Overwhelmingly, the people I’m curious about fall into that category.”
It is also a subject that she hopes will sustain the interest of her students. White’s classes tend to be filled with students from many different majors who share an abiding interest in civic life. White wants them to come away with a more informed sense of their civic landscape, as well as new tools for conducting clean empirical studies. And, who knows? Like White herself, some of her students may end up making a career out of political engagement, even if they don’t know it yet.
“I really like working with MIT students,” White says. “I do hope my students gain some key understandings about what we know about political life, and how we can know about it, which I think are likely to be helpful to them in a variety of realms. My hope is they take a fundamental understanding of social science research, and some big questions, and some big concepts, out into the world.”
Coffee fix: MIT students decode the science behind the perfect cup
Elaine Jutamulia ’24 took a sip of coffee with a few drops of anise extract. It was her second try.
“What do you think?” asked Omar Orozco, standing at a lab table in MIT’s Breakerspace, surrounded by filters, brewing pots, and other coffee paraphernalia.
“I think when I first tried it, it was still pretty bitter,” Jutamulia said thoughtfully. “But I think now that it’s steeped for a little bit — it took out some of the bitterness.”
Jutamulia and current MIT senior Orozco were part of class 3.000 (Coffee Matters: Using the Breakerspace to Make the Perfect Cup), a new MIT course that debuted in spring 2024. The class combines lectures on chemistry and the science of coffee with hands-on experimentation and group projects. Their project explored how additives such as anise, salt, and chili oil influence coffee extraction — the process of dissolving flavor compounds from ground coffee into water — to improve taste and correct common brewing errors.
Alongside tasting, they used an infrared spectrometer to identify the chemical compounds in their coffee samples that contribute to flavor. Does anise make bitter coffee smoother? Could chili oil balance the taste?
“Generally speaking, if we could make a recommendation, that’s what we’re trying to find,” Orozco said.
A three-unit “discovery class” designed to help first-year students explore majors, 3.000 was widely popular, enrolling more than 50 students. Its success was driven by the beverage at its core and by the class’s hands-on approach, which pushes students to ask and answer questions they might not otherwise have considered.
For aeronautics and astronautics majors Gabi McDonald and McKenzie Dinesen, coffee was the draw, but the class encouraged them to experiment and think in new ways. “It’s easy to drop people like us in, who love coffee, and, ‘Oh my gosh, there’s this class where we can go make coffee half the time and try all different kinds of things?’” McDonald says.
Percolating knowledge
The class pairs weekly lectures on topics such as coffee chemistry, the anatomy and composition of a coffee bean, the effects of roasting, and the brewing process with tasting sessions — students sample coffee brewed from different beans, roasts, and grinds. In the MIT Breakerspace, a new space on campus conceived and managed by the Department of Materials Science and Engineering (DMSE), students use equipment such as a digital optical microscope to examine ground coffee particles and a scanning electron microscope, which shoots beams of electrons at samples to reveal cross-sections of beans in stunning detail.
Once students learn to operate instruments for guided tasks, they form groups and design their own projects.
“The driver for those projects is some question they have about coffee raised by one of the lectures or the tasting sessions, or just something they’ve always wanted to know,” says DMSE Professor Jeffrey Grossman, who designed and teaches the class. “Then they’ll use one or more of these pieces of equipment to shed some light on it.”
Grossman traces the origins of the class to his initial vision for the Breakerspace, a laboratory for materials analysis and lounge for MIT undergraduates. Opened in November 2023, the space gives students hands-on experience with materials science and engineering, an interdisciplinary field combining chemistry, physics, and engineering to probe the composition and structure of materials.
“The world is made of stuff, and these are the tools to understand that stuff and bring it to life,” says Grossman. So he envisioned a class that would give students an “exploratory, inspiring nudge.”
“Then the question wasn’t the pedagogy, it was, ‘What’s the hook?’ In materials science, there are a lot of directions you could go, but if you have one that inspires people because they know it and maybe like it already, then that’s exciting.”
Cup of ambition
That hook, of course, was coffee, the second-most-consumed beverage after water. It captured students’ imagination and motivated them to push boundaries.
Orozco brought a fair amount of coffee knowledge to the class. In 2023, he taught in Mexico through the MISTI Global Teaching Labs program, where he toured several coffee farms and acquired a deeper knowledge of the beverage. He learned, for example, that black coffee, contrary to general American opinion, isn’t naturally bitter; bitterness arises from certain compounds that develop during the roasting process.
“If you properly brew it with the right beans, it actually tastes good,” says Orozco, a humanities and engineering major. A year later, in 3.000, he expanded his understanding of making a good brew, particularly through the group project with Jutamulia and other students to fix bad coffee.
The group prepared a control sample of “perfectly brewed” coffee — based on taste, coffee-to-water ratio, and other standards covered in class — alongside coffee that was under-extracted and over-extracted. Under-extracted coffee, made with water that isn’t hot enough or brewed for too short a time, tastes sharp or sour. Over-extracted coffee, brewed with too much coffee or for too long, tastes bitter.
The coffee samples were treated with additives and analyzed using Fourier transform infrared (FTIR) spectroscopy, which measures how the coffee absorbs infrared light to identify flavor-related compounds. Jutamulia examined FTIR readings taken from a sample with lime juice to see how the citric acid influenced its chemical profile.
“Can we find any correlation between what we saw and the existing known measurements of citric acid?” asks Jutamulia, who studied computation and cognition at MIT, graduating last May.
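For readers curious what such a check might look like, here is a minimal, hypothetical sketch: it correlates a sample’s FTIR absorbance spectrum against a citric-acid reference spectrum. The arrays are random stand-ins on a shared wavenumber grid, not data from the class.

```python
import numpy as np

# Hypothetical FTIR comparison: do citric-acid absorption bands show up
# in a coffee-plus-lime-juice sample? All values below are random
# stand-ins for illustration, not real measurements.
rng = np.random.default_rng(42)
wavenumbers = np.linspace(600, 4000, 1700)  # shared mid-IR grid, cm^-1
sample_absorbance = rng.random(1700)        # coffee sample with lime juice
citric_reference = rng.random(1700)         # published citric-acid spectrum

# Pearson correlation as a crude first check for shared absorption bands.
r = np.corrcoef(sample_absorbance, citric_reference)[0, 1]
print(f"correlation with citric-acid reference: r = {r:.2f}")
```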
Another group dove into coffee storage, questioning why conventional wisdom advises against freezing.
“We just wondered why that’s the case,” says electrical engineering and computer science major Noah Wiley, a coffee enthusiast with his own espresso machine.
The team compared storage methods, like freezing brewed coffee, freezing ground coffee, and freezing whole beans that were ground afterward, evaluating each method’s impact on flavor and chemical composition.
“Then we’re going to see which ones taste good,” says Wiley. The team used a class coffee review sheet to record attributes like acidity, bitterness, sweetness, and overall flavor, pairing the results with FTIR analysis to determine how storage affected taste.
Wiley acknowledged that “good” is subjective. “Sometimes there’s a group consensus. I think people like fuller coffee, not watery,” he says.
Other student projects compared caffeine levels in different coffee types, analyzed the effect of microwaving coffee on its chemical composition and flavor, and investigated the differences between authentic and counterfeit coffee beans.
“We gave the students some papers to look at in case they were interested,” says Justin Lavallee, Breakerspace manager and co-teacher of the class. “But mostly we told them to focus on something they wanted to learn more about.”
Drip, drip, drip
Beyond answering specific questions about coffee, both students and teachers gained deeper insights into the beverage.
“Coffee is a complicated material. There are thousands of molecules in the beans, which change as you roast and extract them,” says Grossman. “The number of ways you can engineer this collection of molecules — it’s profound, ranging from where and how the coffee’s grown to how the cherries are then treated to get the beans to how the beans are roasted and ground to the brewing method you use.”
Dinesen learned firsthand, discovering, for example, that darker roasts have less caffeine than lighter roasts, puncturing a common misconception. “You can vary coffee so much — just with the roast of the bean, the size of the ground,” she says. “It’s so easily manipulatable, if that’s a word.”
In addition to learning about the science and chemistry behind coffee, Dinesen and McDonald gained new brewing techniques, like using a pour-over cone. The pair even incorporated coffee making and testing into their study routine, brewing coffee while tackling problem sets for another class.
“I would put my pour-over cone in my backpack with a Ziploc bag full of grounds, and we would go to the Student Center and pull out the cone, a filter, and the coffee grounds,” McDonald says. “And then we would make pour-overs while doing a P-set. We tested different amounts of water, too. It was fun.”
Tony Chen, a materials science and engineering major, reflected on 3.000’s title — “Using the Breakerspace to Make the Perfect Cup” — and whether making a perfect cup is possible. “I don’t think there’s one perfect cup because each person has their own preferences. I don’t think I’ve gotten to mine yet,” he says.
Enthusiasm for coffee’s complexity and the discovery process was exactly what Grossman hoped to inspire in his students. “The best part for me was also just seeing them developing their own sense of curiosity,” he says.
He recalled a moment early in the class when students, after being given a demo of the optical microscope, saw the surface texture of a magnified coffee bean, the mottled shades of color, and the honeycomb-like pattern of tiny irregular cells.
“They’re like, ‘Wait a second. What if we add hot water to the grounds while it’s under the microscope? Would we see the extraction?’ So, they got hot water and some ground coffee beans, and lo and behold, it looked different. They could see the extraction right there,” Grossman says. “It’s like they have an idea that’s inspired by the learning, and they go and try it. I saw that happen many, many times throughout the semester.”
Personal interests can influence how children’s brains respond to language
A recent study from the McGovern Institute for Brain Research shows how interests can modulate language processing in children’s brains and paves the way for personalized brain research.
The paper, which appears in Imaging Neuroscience, was conducted in the lab of MIT professor and McGovern Institute investigator John Gabrieli, and led by senior author Anila D’Mello, a recent McGovern postdoc who is now an assistant professor at the University of Texas Southwestern Medical Center and the University of Texas at Dallas.
“Traditional studies give subjects identical stimuli to avoid confounding the results,” says Gabrieli, who is the Grover Hermann Professor of Health Sciences and Technology and a professor of brain and cognitive sciences at MIT. “However, our research tailored stimuli to each child’s interest, eliciting stronger — and more consistent — activity patterns in the brain’s language regions across individuals.”
This work unveils a new paradigm that challenges current methods and shows how personalization can be a powerful strategy in neuroscience. The paper’s co-first authors are Halie Olson, a postdoc at the McGovern Institute, and Kristina Johnson PhD ’21, an assistant professor at Northeastern University and former doctoral student at the MIT Media Lab. “Our research integrates participants’ lived experiences into the study design,” says Johnson. “This approach not only enhances the validity of our findings, but also captures the diversity of individual perspectives, often overlooked in traditional research.”
Taking interest into account
When it comes to language, our interests are like operators behind the switchboard. They guide what we talk about and who we talk to. Research suggests that interests are also potent motivators and can help improve language skills. For instance, children score higher on reading tests when the material covers topics that are interesting to them.
But neuroscience has shied away from using personal interests to study the brain, especially in the realm of language. This is mainly because interests, which vary between people, could throw a wrench into experimental control — a core principle that drives scientists to limit factors that can muddle the results.
Gabrieli, D’Mello, Olson, and Johnson ventured into this unexplored territory. The team wondered if tailoring language stimuli to children’s interests might lead to higher responses in language regions of the brain. “Our study is unique in its approach to control the kind of brain activity our experiments yield, rather than control the stimuli we give subjects,” says D’Mello. “This stands in stark contrast to most neuroimaging studies that control the stimuli but might introduce differences in each subject’s level of interest in the material.”
In their recent study, the authors recruited a cohort of 20 children to investigate how personal interests affected the way the brain processes language. Caregivers described their child’s interests to the researchers, spanning baseball, train lines, “Minecraft,” and musicals. During the study, children listened to audio stories tuned to their unique interests. They were also presented with audio stories about nature (this was not an interest among the children) for comparison. To capture brain activity patterns, the team used functional magnetic resonance imaging (fMRI), which measures changes in blood flow caused by underlying neural activity.
New insights into the brain
“We found that, when children listened to stories about topics they were really interested in, they showed stronger neural responses in language areas than when they listened to generic stories that weren’t tailored to their interests,” says Olson. “Not only does this tell us how interests affect the brain, but it also shows that personalizing our experimental stimuli can have a profound impact on neuroimaging results.”
The researchers noticed a particularly striking result. “Even though the children listened to completely different stories, their brain activation patterns were more overlapping with their peers when they listened to idiosyncratic stories compared to when they listened to the same generic stories about nature,” says D’Mello. This, she notes, points to how interests can boost both the magnitude and consistency of signals in language regions across subjects without changing how these areas communicate with each other.
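To illustrate what “more overlapping” can mean computationally, here is a minimal sketch of an inter-subject similarity check, assuming simple pairwise correlations of voxel patterns; it is not the paper’s actual analysis pipeline, and all numbers are simulated.

```python
import numpy as np
from itertools import combinations

# Simulated stand-ins: one activation pattern per child over
# language-region voxels, for each listening condition.
rng = np.random.default_rng(0)
n_children, n_voxels = 20, 500
personalized = rng.normal(size=(n_children, n_voxels))  # interest-tailored story
generic = rng.normal(size=(n_children, n_voxels))       # shared nature story

def mean_pairwise_r(patterns):
    """Average Pearson correlation across all pairs of children."""
    return np.mean([np.corrcoef(patterns[i], patterns[j])[0, 1]
                    for i, j in combinations(range(len(patterns)), 2)])

# The reported finding corresponds to the first value exceeding the second.
print("personalized stories:", mean_pairwise_r(personalized))
print("generic stories:     ", mean_pairwise_r(generic))
```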
Gabrieli noted another finding: “In addition to the stronger engagement of language regions for content of interest, there was also stronger activation in brain regions associated with reward and also with self-reflection.” Personal interests are individually relevant and can be rewarding, potentially driving higher activation in these regions during personalized stories.
These personalized paradigms might be particularly well-suited to studies of the brain in unique or neurodivergent populations. Indeed, the team is already applying these methods to study language in the brains of autistic children.
This study breaks new ground in neuroscience and serves as a prototype for future work that personalizes research to unearth further knowledge of the brain. In doing so, scientists can compile a more complete understanding of the type of information that is processed by specific brain circuits and more fully grasp complex functions such as language.
The role of modeling in the energy transition
Joseph F. DeCarolis, administrator for the U.S. Energy Information Administration (EIA), has one overarching piece of advice for anyone poring over long-term energy projections.
“Whatever you do, don’t start believing the numbers,” DeCarolis said at the MIT Energy Initiative (MITEI) Fall Colloquium. “There’s a tendency when you sit in front of the computer and you’re watching the model spit out numbers at you … that you’ll really start to believe those numbers with high precision. Don’t fall for it. Always remain skeptical.”
This event was part of MITEI’s new speaker series, MITEI Presents: Advancing the Energy Transition, which connects the MIT community with the energy experts and leaders who are working on scientific, technological, and policy solutions that are urgently needed to accelerate the energy transition.
The point of DeCarolis’s talk, titled “Stay humble and prepare for surprises: Lessons for the energy transition,” was not that energy models are unimportant. On the contrary, DeCarolis said, energy models give stakeholders a framework that allows them to consider present-day decisions in the context of potential future scenarios. However, he repeatedly stressed the importance of accounting for uncertainty, and not treating these projections as “crystal balls.”
“We can use models to help inform decision strategies,” DeCarolis said. “We know there’s a bunch of future uncertainty. We don’t know what’s going to happen, but we can incorporate that uncertainty into our model and help come up with a path forward.”
Dialogue, not forecasts
EIA is the statistical and analytic agency within the U.S. Department of Energy, with a mission to collect, analyze, and disseminate independent and impartial energy information to help stakeholders make better-informed decisions. Although EIA analyzes the impacts of energy policies, the agency does not make or advise on policy itself. DeCarolis, who was previously professor and University Faculty Scholar in the Department of Civil, Construction, and Environmental Engineering at North Carolina State University, noted that EIA does not need to seek approval from anyone else in the federal government before publishing its data and reports. “That independence is very important to us, because it means that we can focus on doing our work and providing the best information we possibly can,” he said.
Among the many reports produced by EIA is the agency’s Annual Energy Outlook (AEO), which projects U.S. energy production, consumption, and prices. Every other year, the agency also produces the AEO Retrospective, which shows the relationship between past projections and actual energy indicators.
“The first question you might ask is, ‘Should we use these models to produce a forecast?’” DeCarolis said. “The answer for me to that question is: No, we should not do that. When models are used to produce forecasts, the results are generally pretty dismal.”
DeCarolis pointed to wildly inaccurate past projections about the proliferation of nuclear energy in the United States as an example of the problems inherent in forecasting. However, he noted, there are “still lots of really valuable uses” for energy models. Rather than using them to predict future energy consumption and prices, DeCarolis said, stakeholders should use models to inform their own thinking.
“[Models] can simply be an aid in helping us think and hypothesize about the future of energy,” DeCarolis said. “They can help us create a dialogue among different stakeholders on complex issues. If we’re thinking about something like the energy transition, and we want to start a dialogue, there has to be some basis for that dialogue. If you have a systematic representation of the energy system that you can advance into the future, we can start to have a debate about the model and what it means. We can also identify key sources of uncertainty and knowledge gaps.”
Modeling uncertainty
The key to working with energy models is not to try to eliminate uncertainty, DeCarolis said, but rather to account for it. One way to better understand uncertainty, he noted, is to look at past projections, and consider how they ended up differing from real-world results. DeCarolis pointed to two “surprises” over the past several decades: the exponential growth of shale oil and natural gas production (which had the impact of limiting coal’s share of the energy market and therefore reducing carbon emissions), as well as the rapid rise in wind and solar energy. In both cases, market conditions changed far more quickly than energy modelers anticipated, leading to inaccurate projections.
“For all those reasons, we ended up with [projected] CO2 [carbon dioxide] emissions that were quite high compared to actual,” DeCarolis said. “We’re a statistical agency, so we’re really looking carefully at the data, but it can take some time to identify the signal through the noise.”
Although EIA does not produce forecasts in the AEO, people have sometimes interpreted the reference case in the agency’s reports as predictions. To illustrate the unpredictability of future outcomes, the agency added “cones of uncertainty” to its projection of energy-related carbon dioxide emissions in the 2023 edition of the AEO, with ranges of outcomes based on the difference between past projections and actual results. One cone captures 50 percent of historical projection errors, while another represents 95 percent of historical errors.
“They capture whatever bias there is in our projections,” DeCarolis said of the uncertainty cones. “It’s being captured because we’re comparing actual [emissions] to projections. The weakness of this, though, is: who’s to say that those historical projection errors apply to the future? We don’t know that, but I still think that there’s something useful to be learned from this exercise.”
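The mechanics behind such cones are simple to sketch. The snippet below builds percentile bands from a table of historical projection errors; EIA’s actual methodology and data are not reproduced here, and every number is invented for illustration.

```python
import numpy as np

# Invented history: each row is one past projection's errors
# (projected minus actual emissions, Mt CO2) at horizons of 1-10 years.
rng = np.random.default_rng(1)
horizons = np.arange(1, 11)
historical_errors = rng.normal(loc=50.0, scale=80.0 * np.sqrt(horizons),
                               size=(20, 10))  # errors widen with horizon

def uncertainty_cone(projection, errors, coverage=0.50):
    """Band around a new projection wide enough to contain the chosen
    fraction of historical projection errors at each horizon."""
    lo_q, hi_q = (1 - coverage) / 2, 1 - (1 - coverage) / 2
    lo_err = np.quantile(errors, lo_q, axis=0)
    hi_err = np.quantile(errors, hi_q, axis=0)
    # Since actual = projected - error, subtracting the error quantiles
    # from the projection also absorbs any systematic bias.
    return projection - hi_err, projection - lo_err

reference_case = np.linspace(4800, 4500, 10)  # made-up projection, Mt CO2
cone50 = uncertainty_cone(reference_case, historical_errors, coverage=0.50)
cone95 = uncertainty_cone(reference_case, historical_errors, coverage=0.95)
```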
The future of energy modeling
Looking ahead, DeCarolis said, there is a “laundry list of things that keep me up at night as a modeler.” These include the impacts of climate change; how those impacts will affect demand for renewable energy; how quickly industry and government will overcome obstacles to building out clean energy infrastructure and supply chains; technological innovation; and increased energy demand from data centers running compute-intensive workloads.
“What about enhanced geothermal? Fusion? Space-based solar power?” DeCarolis asked. “Should those be in the model? What sorts of technology breakthroughs are we missing? And then, of course, there are the unknown unknowns — the things that I can’t conceive of to put on this list, but are probably going to happen.”
In addition to capturing the fullest range of outcomes, DeCarolis said, EIA wants to be flexible, nimble, transparent, and accessible — creating reports that can easily incorporate new model features and produce timely analyses. To that end, the agency has undertaken two new initiatives. First, the 2025 AEO will use a revamped version of the National Energy Modeling System that includes modules for hydrogen production and pricing, carbon management, and hydrocarbon supply. Second, an effort called Project BlueSky is aiming to develop the agency’s next-generation energy system model, which DeCarolis said will be modular and open source.
DeCarolis noted that the energy system is both highly complex and rapidly evolving, and he warned that “mental shortcuts” and the fear of being wrong can lead modelers to ignore possible future developments. “We have to remain humble and intellectually honest about what we know,” DeCarolis said. “That way, we can provide decision-makers with an honest assessment of what we think could happen in the future.”
How hard is it to prevent recurring blackouts in Puerto Rico?
Researchers at MIT’s Laboratory for Information and Decision Systems (LIDS) have shown that using decision-making software and dynamic monitoring of weather and energy use can significantly improve resiliency in the face of weather-related outages, and can also help to efficiently integrate renewable energy sources into the grid.
The researchers point out that the system they suggest might have prevented, or at least lessened, the kind of widespread power outage that Puerto Rico experienced last week, by providing analysis to guide the rerouting of power through different lines and thus limit the spread of the outage.
The computer platform, called DyMonDS (for Dynamic Monitoring and Decision Systems), can be used to enhance the existing operating and planning practices used in the electric industry. The platform supports interactive information exchange and decision-making between the grid operators and grid-edge users — all the distributed power sources, storage systems, and software that contribute to the grid. It also supports optimization of available resources and controllable grid equipment as system conditions vary, and lends itself to cooperative decision-making by different utility- and non-utility-owned electric power grid users, including portfolios of mixed resources, users, and storage. Operating and planning the interactions of the end-to-end high-voltage transmission grid with local distribution grids and microgrids represents another major potential use of this platform.
This general approach was illustrated using a set of publicly available data on both meteorology and details of electricity production and distribution in Puerto Rico. Extended AC optimal power flow software developed by SmartGridz Inc. was used for system-level optimization of controllable equipment. This provides real-time guidance for deciding how much power, and through which transmission lines, should be channeled by adjusting plant dispatch and voltage-related set points, and, in extreme cases, where to reduce or cut power in order to maintain physically implementable service for as many customers as possible. The team found that the use of such a system can help ensure that the greatest number of critical services maintain power even during a hurricane, and at the same time can lead to a substantial decrease in the need for construction of new power plants, thanks to more efficient use of existing resources.
The findings are described in a paper in the journal Foundations and Trends in Electric Energy Systems, by MIT LIDS researchers Marija Ilic and Laurentiu Anton, along with recent alumna Ramapathi Jaddivada.
“Using this software,” Ilic says, they show that “even during bad weather, if you predict equipment failures, and by using that information exchange, you can localize the effect of equipment failures and still serve a lot of customers, 50 percent of customers, when otherwise things would black out.”
Anton says that “the way many grids today are operated is sub-optimal.” As a result, “we showed how much better they could do even under normal conditions, without any failures, by utilizing this software.” The savings from this optimization, under everyday conditions, could be in the tens of percent, they say.
The way utility systems plan currently, Ilic says, “usually the standard is that they have to build enough capacity and operate in real time so that if one large piece of equipment fails, like a large generator or transmission line, you still serve customers in an uninterrupted way. That’s what’s called N-minus-1.” Under this policy, if one major component of the system fails, they should be able to maintain service for at least 30 minutes. That system allows utilities to plan for how much reserve generating capacity they need to have on hand. That’s expensive, Ilic points out, because it means maintaining this reserve capacity all the time, even under normal operating conditions when it’s not needed.
In addition, “right now there are no criteria for what I call N-minus-K,” she says. If bad weather causes five pieces of equipment to fail at once, “there is no software to help utilities decide what to schedule” in terms of keeping the most customers, and the most important services such as hospitals and emergency services, provided with power. They showed that even with 50 percent of the infrastructure out of commission, it would still be possible to keep power flowing to a large proportion of customers.
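To make the N-minus-K idea concrete, here is a toy contingency-screening loop. It uses a plain network-flow approximation with invented capacities and ignores AC power-flow physics entirely; the DyMonDS platform itself relies on AC optimal power flow and is far more sophisticated.

```python
import itertools
import networkx as nx

# Toy grid: generators G1, G2 feed loads L1, L2 through buses B1, B2.
# Capacities in MW; topology and numbers are invented for illustration.
lines = [("G1", "B1", 300), ("G2", "B2", 250), ("B1", "B2", 150),
         ("B1", "L1", 200), ("B2", "L2", 200), ("B2", "L1", 100)]
supply = {"G1": 300, "G2": 250}
demand = {"L1": 250, "L2": 180}

def served_load(failed):
    """Max load servable after removing failed lines (flow approximation)."""
    g = nx.DiGraph()
    for u, v, cap in lines:
        if (u, v) not in failed:
            g.add_edge(u, v, capacity=cap)
    for gen, p in supply.items():
        g.add_edge("SRC", gen, capacity=p)   # super source feeds generators
    for load, p in demand.items():
        g.add_edge(load, "SNK", capacity=p)  # loads drain into super sink
    flow_value, _ = nx.maximum_flow(g, "SRC", "SNK")
    return flow_value

total = sum(demand.values())
for k in (1, 2):  # screen single (N-1) and double (N-2) line outages
    for combo in itertools.combinations([(u, v) for u, v, _ in lines], k):
        frac = served_load(set(combo)) / total
        if frac < 1.0:
            print(f"outage {combo}: only {frac:.0%} of load served")
```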
Their work on analyzing the power situation in Puerto Rico started after the island had been devastated by Hurricanes Irma and Maria. Most of the electric generation capacity is in the south, yet the largest loads are in San Juan, in the north, and Mayagüez, in the west. When transmission lines get knocked down, a lot of rerouting of power needs to happen quickly.
With the new system, “the software finds the optimal adjustments for set points,” Anton says. For example, adjusting voltage set points can allow power to be redirected through less-congested lines, or can reduce power losses.
The software also helps in the long-term planning for the grid. As many fossil-fuel power plants are scheduled to be decommissioned soon in Puerto Rico, as they are in many other places, planning for how to replace that power without having to resort to greenhouse gas-emitting sources is a key to achieving carbon-reduction goals. And by analyzing usage patterns, the software can guide the placement of new renewable power sources where they can most efficiently provide power where and when it’s needed.
As plants are retired or as components are affected by weather, “We wanted to ensure the dispatchability of power when the load changes,” Anton says, “but also when crucial components are lost, to ensure the robustness at each step of the retirement schedule.”
One thing they found was that “if you look at how much generating capacity exists, it’s more than the peak load, even after you retire a few fossil plants,” Ilic says. “But it’s hard to deliver.” Strategic planning of new distribution lines could make a big difference.
Jaddivada, director of innovation at SmartGridz, says that “we evaluated different possible architectures in Puerto Rico, and we showed the ability of this software to ensure uninterrupted electricity service. This is the most important challenge utilities have today. They have to go through a computationally tedious process to make sure the grid functions for any possible outage in the system. And that can be done in a much more efficient way through the software that the company developed.”
The project was a collaborative effort between the MIT LIDS researchers and others at MIT Lincoln Laboratory and the Pacific Northwest National Laboratory, with overall support from SmartGridz software.
New filter captures and recycles aluminum from manufacturing waste
Used in everything from soda cans and foil wrap to circuit boards and rocket boosters, aluminum is the second-most-produced metal in the world after steel. By the end of this decade, demand is projected to drive up aluminum production by 40 percent worldwide. This steep rise will magnify aluminum’s environmental impacts, including any pollutants that are released with its manufacturing waste.
MIT engineers have developed a new nanofiltration process to curb the hazardous waste generated from aluminum production. Nanofiltration could potentially be used to process the waste from an aluminum plant and retrieve any aluminum ions that would otherwise have escaped in the effluent stream. The captured aluminum could then be upcycled and added to the bulk of the produced aluminum, increasing yield while simultaneously reducing waste.
The researchers demonstrated the performance of a novel membrane in lab-scale experiments, filtering various solutions that were similar in content to the waste streams produced by aluminum plants. They found that the membrane selectively captured more than 99 percent of aluminum ions in these solutions.
If scaled up and implemented in existing production facilities, the membrane technology could reduce the amount of wasted aluminum and improve the environmental quality of the waste that plants generate.
“This membrane technology not only cuts down on hazardous waste but also enables a circular economy for aluminum by reducing the need for new mining,” says John Lienhard, the Abdul Latif Jameel Professor of Water in the Department of Mechanical Engineering, and director of the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) at MIT. “This offers a promising solution to address environmental concerns while meeting the growing demand for aluminum.”
Lienhard and his colleagues report their results in a study appearing today in the journal ACS Sustainable Chemistry and Engineering. The study’s co-authors include MIT mechanical engineering undergraduates Trent Lee and Vinn Nguyen, and Zi Hao Foo SM ’21, PhD ’24, who is a postdoc at the University of California at Berkeley.
A recycling niche
Lienhard’s group at MIT develops membrane and filtration technologies for desalinating seawater and remediating various sources of wastewater. In looking for new areas to apply their work, the team found an unexplored opportunity in aluminum and, in particular, the wastewater generated from the metal’s production.
As part of aluminum’s production, metal-rich ore, called bauxite, is first mined from open pits, then put through a series of chemical reactions to separate the aluminum from the rest of the mined rock. These reactions ultimately produce aluminum oxide, in a powdery form called alumina. Much of this alumina is then shipped to smelters, where the powder is poured into electrolysis vats containing a molten mineral called cryolite. When a strong electric current is applied, cryolite breaks alumina’s chemical bonds, separating aluminum and oxygen atoms. The pure aluminum then settles in liquid form to the bottom of the vat, where it can be collected and cast into various forms.
Cryolite electrolyte acts as a solvent, facilitating the separation of alumina during the molten salt electrolysis process. Over time, the cryolite accumulates impurities such as sodium, lithium, and potassium ions — gradually reducing its effectiveness in dissolving alumina. At a certain point, the concentration of these impurities reaches a critical level, at which the electrolyte must be replaced with fresh cryolite to maintain process efficiency. The spent cryolite, a viscous sludge containing residual aluminum ions and impurities, is then transported away for disposal.
“We learned that for a traditional aluminum plant, something like 2,800 tons of aluminum are wasted per year,” says lead author Trent Lee. “We were looking at ways that the industry can be more efficient, and we found cryolite waste hadn’t been well-researched in terms of recycling some of its waste products.”
A charged kick
In their new work, the researchers aimed to develop a membrane process to filter cryolite waste and recover aluminum ions that inevitably make it into the waste stream. Specifically, the team looked to capture aluminum while letting through all other ions, especially sodium, which builds up significantly in the cryolite over time.
The team reasoned that if they could selectively capture aluminum from cryolite waste, the aluminum could be poured back into the electrolysis vat without adding excessive sodium that would further slow the electrolysis process.
The researchers’ new design is an adaptation of membranes used in conventional water treatment plants. These membranes are typically made from a thin sheet of polymer material that is perforated by tiny, nanometer-scale pores, the size of which is tuned to let through specific ions and molecules.
The surface of conventional membranes carries a natural, negative charge. As a result, the membranes repel any ions that carry the same negative charge, while they attract positively charged ions to flow through.
In collaboration with the Japanese membrane company Nitto Denko, the MIT team sought to examine whether commercially available membranes could let most of the positively charged ions in cryolite wastewater pass through while repelling and capturing aluminum ions. The challenge is that aluminum ions also carry a positive charge: +3, versus the lesser +1 charge of sodium and the other cations.
Motivated by the group’s recent work investigating membranes for recovering lithium from salt lakes and spent batteries, the team tested a novel Nitto Denko membrane with a thin, positively charged coating covering the membrane. The coating’s charge is just positive enough to strongly repel and retain aluminum while allowing less positively charged ions to flow through.
“The aluminum is the most positively charged of the ions, so most of it is kicked away from the membrane,” Foo explains.
The team tested the membrane’s performance by passing through it solutions with various balances of ions, similar to what can be found in cryolite waste. They observed that the membrane consistently captured 99.5 percent of aluminum ions while allowing sodium and the other cations through. They also varied the pH of the solutions, and found that the membrane maintained its performance even after sitting in highly acidic solution for several weeks.
“A lot of this cryolite waste stream comes at different levels of acidity,” Foo says. “And we found the membrane works really well, even within the harsh conditions that we would expect.”
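For reference, observed rejection in such experiments is conventionally computed as R = 1 - C_permeate / C_feed. Here is a minimal sketch with invented concentrations; only the aluminum figure is chosen to echo the roughly 99.5 percent reported above.

```python
# Observed ion rejection: R = 1 - C_permeate / C_feed.
# Concentrations (mmol/L) are invented for illustration.
feed     = {"Al3+": 10.0, "Na+": 400.0, "K+": 15.0, "Li+": 5.0}
permeate = {"Al3+": 0.05, "Na+": 380.0, "K+": 14.0, "Li+": 4.8}

for ion in feed:
    rejection = 1 - permeate[ion] / feed[ion]
    print(f"{ion}: rejection = {rejection:.1%}")
```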
The new experimental membrane is about the size of a playing card. To treat cryolite waste in an industrial-scale aluminum production plant, the researchers envision a scaled-up version of the membrane, similar to what is used in many desalination plants, where a long membrane is rolled up in a spiral configuration, through which water flows.
“This paper shows the viability of membranes for innovations in circular economies,” Lee says. “This membrane provides the dual benefit of upcycling aluminum while reducing hazardous waste.”
Loren Graham, professor emeritus of the history of science, dies at 91
Loren R. Graham, professor emeritus of the history of science who served on the MIT faculty for nearly three decades, died on Dec. 15, 2024, at the age of 91.
Graham received a BS in chemical engineering from Purdue University in 1955, the same year his classmate, acquaintance, and future NASA astronaut and moon walker Neil Armstrong graduated with a BS in aeronautical engineering. Graham went on to earn a PhD in history in 1964 from Columbia University, where he taught from 1965 until 1978.
In 1978, Graham joined the MIT Program in Science, Technology, and Society (STS) as a professor of the history of science. His specialty during his tenure with the program was in the history of science in Russia and the Soviet Union in the 19th, 20th, and 21st centuries. His work focused on Soviet and Marxist philosophy of science and science politics.
Much of Graham’s career spanned the Cold War. He participated in one of the first academic exchange programs between the United States and the Soviet Union from 1960 to 1961 and marched in the Moscow May Day Parade just weeks after Yuri Gagarin became the first human in space. In 1965, he received a Fulbright Award to do research in the Soviet Union.
Graham wrote extensively on the influence of social context in science and the study of contemporary science and technology in Russia. He also experimented in writing a nonfiction mystery, “Death in the Lighthouse” (2013), and making documentary films. His publications include “Science, Philosophy and Human Behavior in the Soviet Union” (1987); “Science and the Soviet Social Order” (1990); “Science in Russia and the Soviet Union: A Short History” (1993); “The Ghost of the Executed Engineer” (1993); “A Face in the Rock” (1995); and “What Have We Learned About Science and Technology from the Russian Experience?” (1998).
His book “Science, Philosophy and Human Behavior in the Soviet Union” was nominated for the National Book Award in 1987. He received the George Sarton Medal from the History of Science Society in 1996 and the Follo Award of the Michigan Historical Society in 2000 for his contributions to Michigan history.
Many former colleagues recall the impact he had at MIT. In 1988, with fellow faculty member Merritt Roe Smith, professor emeritus of history, he played a leading role in establishing the graduate program in the history and social study of science and technology that is now known as HASTS. This interdisciplinary graduate Program in History, Anthropology, and Science, Technology, and Society has become one of the most selective graduate programs at MIT.
“Loren was an intellectual innovator and role model for teaching and advising,” says Sherry Turkle, MIT professor of sociology. “And he was a wonderful colleague. … He experimented. He had fun. He cared about writing and about finding joy in work.”
Graham served on the STS faculty until his retirement in 2006.
Throughout his life, Graham was a member of many foundations and honorary societies, including the U.S. Civilian Research and Development Foundation, the American Philosophical Society, the American Academy of Arts and Sciences, and the Russian Academy of Natural Science.
He also served on several boards of trustees, including that of George Soros’ International Science Foundation, which supported Russian scientists after the collapse of the Soviet Union. For many years he was a trustee of the European University at St. Petersburg, remaining an active member of its development board until 2024. After he donated thousands of books from his own library to the university, a special collection was established there in his name.
In 2012, Graham was awarded a medal by the Russian Academy of Sciences at a ceremony in Moscow for his contributions to the history of science. “His own life as a scholar covered a great deal of important history,” says David Mindell, MIT professor of aeronautics and astronautics and the Dibner Professor of the History of Engineering and Manufacturing.
Graham is survived by his wife, Patricia Graham, and daughter, Meg Peterson.
Richard Locke PhD ’89 named dean of the MIT Sloan School of Management
Richard Locke PhD ’89, a prominent scholar and academic administrator with a wide range of leadership experience, has been named the new dean of the MIT Sloan School of Management. The appointment is effective July 1.
In becoming the school’s 10th dean, Locke is rejoining the Institute, where he previously served in multiple roles from 1988 to 2013, as a faculty member, a department head, and a deputy dean of MIT Sloan. After leaving MIT, Locke was a senior leader at Brown University, including seven and a half years as Brown’s provost. Since early 2023, he has been dean of Apple University, an educational unit within Apple Inc. focused on educating the company’s employees on leadership, management, and the company’s culture and organization.
“I am thrilled to be returning to MIT Sloan,” says Locke, whose formal title will be the John C Head III Dean at MIT Sloan. “It is a special place, with its world-class faculty, innovative research and educational programs, and close-knit community, all within the MIT ecosystem.”
He adds: “All of these assets give MIT Sloan an opportunity to chart the future — to shape how new technologies will reconfigure industries and careers, how new enterprises will be created and run, how individuals will work and live, and how national economies will develop and adapt. It will be exciting and fun to work with great colleagues and to help lead the school to its next phase of global prominence and impact.”
As dean at MIT Sloan, Locke follows David C. Schmittlein, who stepped down in February 2024 after a nearly 17-year tenure. Georgia Perakis, the William F. Pounds Professor of Operations Research and Statistics and Operations Management at MIT Sloan, has been serving as the interim John C Head III Dean since then and will continue in the role until Locke begins.
Institute leaders welcomed Locke back, citing his desire to help MIT Sloan address significant global challenges, including climate change, the role of artificial intelligence in society, and new health care solutions, while refining best practices for businesses and workplaces.
“MIT Sloan has been very fortunate in its leaders. Both Dave Schmittlein and Georgia Perakis set a high bar, and we continue that tradition with the selection of Rick Locke,” says MIT President Sally A. Kornbluth. “Beyond his wide-ranging experience and accomplishments and superb academic credentials, I have come to know Rick as an outstanding leader, both from the years when we were both provosts and through his thoughtful service on the MIT Corporation. Rick has always impressed me with his intellectual breadth, personal grace, and fresh ideas. We’re delighted that he will be rejoining our campus community.”
In a letter to the MIT community, MIT Provost Cynthia Barnhart praised Locke’s “transformative career” and noted how she and the search committee agree “that Rick’s depth of experience makes him a once-in-a-generation leader who will ‘hit the ground sprinting’” as MIT Sloan’s next dean.
Barnhart added: “The committee and I were impressed by his vision for removing frictions that slow research efforts, his exceptional track record of raising substantial funds to support academic communities, and his strong grasp of and attentiveness to the interests and needs of MIT Sloan’s constituencies.”
A political scientist by training, Locke has conducted high-profile research on labor practices in global supply chains, among other topics. His career has also included efforts to bring together stakeholders, from multinational firms to supply-chain workers, in an effort to upgrade best practices in business.
Locke is widely known for a vigorous work ethic, a humane manner around co-workers, and a leadership outlook that blends idealism about civic engagement with realism about global challenges.
His wide-ranging work and interests make Locke well-suited to MIT Sloan. The school has about 115 tenure-track faculty and 1,600 students spread over eight degree programs, with numerous initiatives and academic groups connecting core management topics to more specialized areas relating to the innovation economy and entrepreneurship, the social impact of business and technology, policy development, and much more.
MIT conducted an extensive search process for the position, evaluating internal and external candidates over the last several months. The search committee’s co-chairs were Kate Kellogg, the David J. McGrath jr (1959) Professor of Management and Innovation at MIT Sloan; and Andrew W. Lo, the Charles E. and Susan T. Harris Professor at MIT Sloan.
The committee solicited and received extensive feedback about the position and the school from stakeholders including faculty, students, staff, and alumni, while engaging with MIT leadership about the role.
“MIT Sloan occupies a rare position in the world as a management school connected to one of the great engineering and scientific universities,” Kellogg says.
She adds: “Rick has a strong track record of bringing faculty from different domains together, and we think he is going to be great at connecting Sloan even further to the rest of MIT, around grand challenges such as climate, AI, and health care.”
Lo credits Schmittlein for “an incredible 17-year legacy of extraordinary leadership,” observing that Schmittlein helped MIT Sloan expand in size, consolidate its strengths, and build new programs. About Perakis, Kellogg notes, “Georgia’s outstanding work as dean has built on these strengths and sparked important new innovations and partnerships in areas like AI and entrepreneurship. She’s also expanded the school’s footprint in Southeast Asia and helped advance key Institute-wide priorities like the Climate Project at MIT and the Generative AI consortium.”
Kellogg and Lo expressed confidence that Locke would help MIT Sloan continue to adapt and grow.
“MIT and MIT Sloan are at inflection points in our ability to invent the future, given the role technology is playing in virtually every aspect of our lives,” Lo says. “Rick has the same vision and ambitions that we do, and the experience and skills to help us realize that vision. We couldn’t be more excited by this choice.”
Lo adds: “Rick is a first-rate scholar and first-rate educator who really gets our mission and core values and ethos. Dave was an extraordinary dean, and we expect the same from Rick. He sees the full potential of MIT Sloan and how to achieve it.”
Locke received his BA from Wesleyan University and an MA in education from the University of Chicago. He earned his doctorate in political science at MIT, writing a dissertation about local politics and industrial change in Italy, under the supervision of now-Institute Professor Suzanne Berger.
Locke joined the MIT faculty as an assistant professor of international management, was promoted to associate professor of management and political science in 1993, and earned tenure in 1996. In 2000, he was named the Alvin J. Siteman Professor of Entrepreneurship, and he became a full professor in 2001.
In 2010, Locke took on a new role at MIT, heading the Department of Political Science, a position he held through 2013; he was also given a new endowed professorship, the Class of 1922 Professor of Political Science and Management. During the same time frame, Locke also served as deputy dean at MIT Sloan, from 2009 through 2010, and then again from 2012 through 2013.
Locke moved to Brown in order to take the position of director of the Thomas J. Watson Institute for International and Public Affairs. In 2015, he was named Brown’s provost, the university’s chief academic officer and budget officer.
During his initial chapter at MIT Sloan, Locke co-founded MIT’s Global Entrepreneurship Lab (G-Lab) as well as other action learning programs, helped the effort to double the size of the Sloan Fellows Program, and worked to update MIT Sloan Executive Education programs, among other projects.
Locke has authored or co-authored five books and dozens of journal articles and book chapters, helping open up the study of global labor practices while also examining the political implications of industrial changes and labor relations. For his research on working conditions in global supply chains, Locke was given the Faculty Pioneer for Academic Leadership award by the Aspen Institute’s Business and Society Program, the Progress Medal from the Society for Progress, the Dorothy Day Award for Outstanding Labor Research from the American Political Science Association, and the Responsible Research in Management Award.
His books include “Remaking the Italian Economy” (1995); “Employment Relations in a Changing World Economy” (co-edited with Thomas Kochan and Michael Piore, 1995); “Working in America” (co-authored with Paul Osterman, Thomas Kochan, and Michael Piore, 2001); “The Promise and Limits of Private Power: Promoting Labor Standards in a Global Economy” (2013); and “Production in the Innovation Economy” (co-edited with Rachel Wellhausen, 2014).
A committed educator, Locke has won numerous teaching awards over his career, including the Graduate Management Society Teaching Award in 1990; the Excellence in Teaching Award from MIT Sloan in 2003; the Class of 1960 Innovation in Teaching Award from MIT in 2007; and the Jamieson Prize for Excellence in Teaching, also from MIT, in 2008.
Over the course of his career, Locke has been a visiting professor or scholar at several universities, including Bocconi University in Milan; the Harvard Kennedy School; the Saïd Business School of the University of Oxford; the Universidade Federal do Rio de Janeiro, Brazil; the Università Ca’ Foscari of Venice, Italy; the Università degli Studi di Milano, Italy; Georg-August-Universität in Göttingen, Germany; and the Università Federico II in Naples, Italy.
Locke has remained connected to MIT even over the most recent decade of his career, including his service as a member of the MIT Corporation.
“I loved my time at MIT Sloan because of its wonderful mix of ambition, energy, and drive for excellence, but also humility,” Locke says. “We knew that we didn’t always have all the answers, but were curious to learn more, and eager to do the work to find solutions to some of the world’s great challenges. Now as dean, I look forward to once again being part of this wonderful community.”
A new way to determine whether a species will successfully invade an ecosystem
When a new species is introduced into an ecosystem, it may succeed in establishing itself, or it may fail to gain a foothold and die out. Physicists at MIT have now devised a formula that can predict which of those outcomes is most likely.
The researchers created their formula based on analysis of hundreds of different scenarios that they modeled using populations of soil bacteria grown in their laboratory. They now plan to test their formula in larger-scale ecosystems, including forests. This approach could also be helpful in predicting whether probiotics or fecal microbiota transplants (FMT) would successfully combat infections of the human GI tract.
“People eat a lot of probiotics, but many of them can never invade our gut microbiome at all, because if you introduce it, it does not necessarily mean that it can grow and colonize and benefit your health,” says Jiliang Hu SM ’19, PhD ’24, the lead author of the study.
MIT professor of physics Jeff Gore is the senior author of the paper, which appears today in the journal Nature Ecology & Evolution. Matthieu Barbier, a researcher at the Plant Health Institute Montpellier, and Guy Bunin, a professor of physics at Technion, are also authors of the paper.
Population fluctuations
Gore’s lab specializes in using microbes to analyze interspecies interactions in a controlled way, in hopes of learning more about how natural ecosystems behave. In previous work, the team has used bacterial populations to demonstrate how changing the environment in which the microbes live affects the stability of the communities they form.
In this study, the researchers wanted to pin down what determines whether an invasion by a new species succeeds or fails. In natural communities, ecologists have hypothesized that the more diverse an ecosystem is, the more it will resist an invasion, because most of the ecological niches will already be occupied and few resources are left for an invader.
However, in both natural and experimental systems, scientists have observed that this is not consistently true: While some highly diverse populations are resistant to invasion, other highly diverse populations are more likely to be invaded.
To explore why both of those outcomes can occur, the researchers set up more than 400 communities of soil bacteria, which were all native to the soil around MIT. The researchers established communities of 12 to 20 species of bacteria, and six days later, they added one randomly chosen species as the invader. On the 12th day of the experiment, they sequenced the genomes of all the bacteria to determine if the invader had established itself in the ecosystem.
In each community, the researchers also varied the nutrient levels in the culture medium on which the bacteria were grown. When nutrient levels were high, the microbes displayed strong interactions, characterized by heightened competition for food and other resources, or mutual inhibition through mechanisms such as pH-mediated cross-toxin effects. Some of these populations formed stable states in which the fraction of each microbe did not vary much over time, while others formed communities in which most of the species fluctuated in number.
The researchers found that these fluctuations were the most important factor in the outcome of the invasion. Communities that had more fluctuations tended to be more diverse, but they were also more likely to be invaded successfully.
“The fluctuation is not driven by changes in the environment, but it is internal fluctuation driven by the species interaction. And what we found is that the fluctuating communities are more readily invaded and also more diverse than the stable ones,” Hu says.
In some of the populations where the invader established itself, the other species remained, but in smaller numbers. In other populations, some of the resident species were outcompeted and disappeared completely. This displacement tended to happen more often in ecosystems where there were stronger competitive interactions between species.
In ecosystems that had more stable, less diverse populations, with stronger interactions between species, invasions were more likely to fail.
Regardless of whether the community was stable or fluctuating, the researchers found that the fraction of the original species that survived in the community before the invasion predicted the probability of invasion success. This “survival fraction” could be estimated in natural communities by taking the ratio of the diversity within a local community (measured by the number of species in that area) to the regional diversity (the number of species found in the entire region).
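To make that estimate concrete, here is a minimal Python sketch of the ratio, using made-up species counts rather than data from the study:

```python
def survival_fraction(local_species: set, regional_species: set) -> float:
    """Estimate the survival fraction as the ratio of local diversity
    (species observed in one community) to regional diversity
    (species observed anywhere in the region)."""
    return len(local_species) / len(regional_species)

# Hypothetical counts: 12 of 20 regional species persist in one local patch.
regional = {f"sp{i}" for i in range(20)}
local = {f"sp{i}" for i in range(12)}
print(survival_fraction(local, regional))  # 0.6; per the study, higher
# survival fractions went with stabler communities that resisted invasion
```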
“It would be exciting to study whether the local and regional diversity could be used to predict susceptibility to invasion in natural communities,” Gore says.
Predicting success
The researchers also found that under certain circumstances, the order in which species arrived in the ecosystem played a role in whether an invasion succeeded. When the interactions between species were strong, the chances of a species becoming successfully incorporated went down if it was introduced after other species had already become established.
When the interactions are weak, this “priority effect” disappears and the same stable equilibrium is reached no matter what order the microbes arrived in.
“Under a strong interaction regime, we found the invader has some disadvantage because it arrived later. This is of interest in ecology because people have always found that in some cases the order in which species arrived matters a lot, while in the other cases it doesn't matter,” Hu says.
The researchers now plan to try to replicate their findings in ecosystems for which species diversity data is available, including the human gut microbiome. Their formula could allow them to predict the success of probiotic treatment, in which beneficial bacteria are consumed orally, or FMT, an experimental treatment for severe infections such as C. difficile, in which beneficial bacteria from a donor’s stool are transplanted into a patient’s colon.
“Invasions can be harmful or can be good depending on the context,” Hu says. “In some cases, like probiotics, or FMT to treat C. difficile infection, we want the healthy species to invade successfully. Also for soil protection, people introduce probiotics or beneficial species to the soil. In that case people also want the invaders to succeed.”
The research was funded by the Schmidt Polymath Award and the Sloan Foundation.
MIT affiliates awarded 2024 National Medals of Science, Technology
Four MIT faculty members are among 23 world-class researchers who have been awarded the nation’s highest honors for scientists and innovators, the White House announced today.
Angela Belcher and Emery Brown were each presented with the National Medal of Science at a White House ceremony this afternoon, and Paula Hammond ’84, PhD ’93, and Feng Zhang were awarded the National Medal of Technology and Innovation.
Belcher, the James Mason Crafts Professor of Biological Engineering and Materials Science and Engineering and a member of the Koch Institute for Integrative Cancer Research, was honored for her work designing novel materials for applications that include solar cells, batteries, and medical imaging.
Brown, the Edward Hood Taplin Professor of Medical Engineering and Computational Neuroscience, was recognized for work that has revealed how anesthesia affects the brain. Brown is also a member of MIT’s Picower Institute for Learning and Memory and Institute for Medical Engineering and Science (IMES).
Hammond, an MIT Institute Professor, vice provost for faculty, and member of the Koch Institute, was honored for developing methods for assembling thin films that can be used for drug delivery, wound healing, and many other applications.
Zhang, the James and Patricia Poitras Professor of Neuroscience at MIT and a professor of brain and cognitive sciences and biological engineering, was recognized for his work developing molecular tools, including the CRISPR genome-editing system, that have the potential to diagnose and treat disease. Zhang is also an investigator at the McGovern Institute for Brain Research and a core member of the Broad Institute of MIT and Harvard.
Two additional MIT alumni accepted awards: Richard Lawrence Edwards ’76, a professor at the University of Minnesota, received a National Medal of Science for his work in geochemistry. And Noubar Afeyan PhD ’87 accepted one of two National Medals of Technology and Innovation awarded to organizations. These awards went to the biotechnology companies Moderna, which Afeyan co-founded, and Pfizer, for their development of vaccines for Covid-19.
This year, the White House awarded the National Medal of Science to 14 recipients and named nine individual awardees of the National Medal of Technology and Innovation, along with two organizations. To date, nearly 100 MIT affiliates have won one of these two honors.
“Emery Brown is at the forefront of the Institute’s collaborations among neuroscience, medicine, and patient care. His research has shifted the paradigm for brain monitoring during general anesthesia for surgery. His pioneering approach based on neural oscillations, as opposed to solely monitoring vital signs, promises to revolutionize how anesthesia medications are delivered to patients,” says Nergis Mavalvala, dean of MIT’s School of Science. “Feng Zhang is one of the preeminent researchers in CRISPR technologies that have accelerated the pace of science and engineering, blending entrepreneurship and scientific discovery. These new molecular technologies can modify the cell’s genetic information, engineer vehicles to deliver these tools into the correct cells, and scale to restore organ function. Zhang will apply these life-altering innovations to diseases such as neurodegeneration, immune disorders, and aging.”
Hammond and Belcher are frequent collaborators, and each of them has had significant impact on the fields of nanotechnology and nanomedicine.
“Angela Belcher and Paula Hammond have made tremendous contributions to science and engineering, and I’m thrilled for each of them to receive this well-deserved recognition,” says Anantha Chandrakasan, dean of the School of Engineering and chief innovation and strategy officer at MIT. “By harnessing the processes of nature, Angela’s innovations have impacted fields from energy to the environment to medicine. Her non-invasive imaging system has improved outcomes for patients diagnosed with many types of cancer. Paula’s pioneering research in nanotechnology helped transform the ways in which we deliver and administer drugs within the body — through her technique, therapeutics can be customized and sent directly to specifically targeted cells, including cancer cells.”
Growing materials with viruses
Belcher, who joined the MIT faculty in 2002 and served as head of the Department of Biological Engineering from 2019 to 2023, initially heard in September that she was being considered for the National Medal of Science, and found out in mid-December that she had won.
“It was quite shocking and just a huge honor. It’s an honor to be considered, and then to get the email and the call that I actually was receiving it was humbling,” she says.
Belcher, who earned a bachelor’s degree in creative studies and a PhD in inorganic chemistry from the University of California at Santa Barbara, has focused much of her research on developing ways to use biological systems, such as viruses, to grow materials.
“Since graduate school, I’ve been fascinated with trying to understand how nature makes materials and then applying those processes, whether directly through biological molecules, or through evolving biological molecules or biological organisms, to make materials that are of technological importance,” she says.
Early in her career, she developed a technique for generating materials by engineering viruses to self-assemble into nanoscale scaffolds that can be coated with inorganic materials to form functional devices such as batteries, semiconductors, solar cells, and catalysts. This approach allows for exquisite control over the electronic, optical, and magnetic properties of the material.
In the late 2000s, then-MIT president Susan Hockfield asked Belcher to join the newly formed Koch Institute, whose mission is to bring together scientists and engineers to seek new ways to diagnose and treat cancer. Not knowing much about cancer biology, Belcher was hesitant at first, but she ended up moving her lab to the Koch Institute and applying her work to the new challenge.
One of her first projects, on which she collaborated with Hammond, was a method for using shortwave infrared light to image cancer cells. This technology, eventually commercialized by a company called Cision Vision, is now being used in hospitals to image lymph nodes during cancer surgery, helping surgeons determine whether a tumor has spread.
Belcher is now focused on finding technologies to detect other cancers, especially ovarian cancer, which is difficult to diagnose in early stages, as well as developing cancer vaccines.
Unlocking the mysteries of anesthesia
Brown, who has been on the MIT faculty since 2005, said he was “overjoyed” when he found out he would receive the National Medal of Science.
“I’m extremely excited and quite honored to receive such an award, because it is one of the pinnacles of recognition in the scientific field in the United States,” he says.
Much of Brown’s work has focused on achieving a better understanding of what happens in the human brain under anesthesia. Trained as an anesthesiologist, Brown earned his MD from Harvard Medical School and a PhD in statistics from Harvard University.
Since 1992, he has been a member of the Harvard Medical School faculty and a practicing anesthesiologist at Massachusetts General Hospital. Early in his research career, he worked on developing methods to characterize the properties of the human circadian clock. These included characterizing the clock’s phase response curve to light, accurately measuring its intrinsic period, and measuring the impact of physiologically designed schedules on shift worker performance. Later, he became interested in developing signal processing methods to characterize how neurons represent signals and stimuli in their ensemble activity.
In collaboration with Matt Wilson, an MIT professor of neuroscience, Brown devised algorithms to decode the position of an animal in its environment by reading the activity of a small group of place cell neurons in the animal’s brain. Other applications of these methods included characterizing learning, controlling brain-machine interfaces, and controlling brain states such as medically induced coma.
“I was practicing anesthesia at the time, and as I saw more and more of what the neuroscientists were doing, it occurred to me we could use their paradigms to study anesthesia, and we should, because we weren’t doing that,” he says. “Anesthesia was not being looked at as a neuroscience subdiscipline. It was looked at as a subdiscipline of pharmacology.”
Over the past two decades, Brown’s work has revealed how anesthesia drugs induce unconsciousness in the brain, along with other altered arousal states. Anesthesia drugs such as propofol dramatically alter the brain’s intrinsic oscillations, which can be seen with electroencephalography (EEG). During the awake state, these oscillations usually have high frequency and low amplitude; as anesthetic drugs are given, they generally shift to low frequency and high amplitude. Working with MIT professors Earl Miller and Ila Fiete, as well as collaborators at Massachusetts General Hospital and Boston University, Brown has shown that these changes disrupt normal communication between different brain regions, leading to loss of consciousness.
Brown has also shown that these EEG oscillations can be used to monitor whether a patient is too deeply unconscious, and he has developed a closed-loop anesthesia delivery system that can maintain a patient’s anesthesia state at precisely desired levels. Brown and colleagues have also developed methods to accelerate recovery from anesthesia. More precise control and accelerated recovery could help to prevent the cognitive impairments that often affect patients after they emerge from anesthesia. Accelerating recovery from anesthesia has also suggested ways to accelerate recovery from coma.
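As a purely illustrative aside, and not Brown's actual system, the closed-loop idea can be sketched as a simple feedback rule: compare an EEG-derived depth-of-anesthesia index against a target and nudge the drug infusion rate accordingly. All names, units, and constants below are hypothetical.

```python
def update_infusion_rate(rate: float, depth: float, target: float,
                         gain: float = 0.05, min_rate: float = 0.0,
                         max_rate: float = 10.0) -> float:
    """One step of a toy proportional controller: raise drug delivery when
    the measured depth index is below target, lower it when above.
    Names, units, and constants here are all hypothetical."""
    error = target - depth
    return max(min_rate, min(max_rate, rate + gain * error))

# Hypothetical loop: a depth index re-estimated from the EEG every few seconds.
rate = 2.0
for depth_reading in [40.0, 44.0, 47.0, 52.0]:  # made-up index values
    rate = update_infusion_rate(rate, depth_reading, target=50.0)
    print(f"depth={depth_reading:.0f} -> rate={rate:.2f}")
```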
Building multifunctional materials
Hammond, who earned both her bachelor’s degree and PhD in chemical engineering from MIT, has been a member of the faculty since 1995 and was named an Institute Professor in 2021. She was also the 2023-24 recipient of MIT’s Killian Award, the highest honor that the faculty bestows.
Early in her career, Hammond developed a novel technique for generating functional thin-film materials by stacking layers of charged polymeric materials. This approach can be used to build polymers with highly controlled architectures by alternately exposing a surface to positively and negatively charged particles.
She initially used this layer-by-layer assembly technique to build ultrathin batteries and fuel cell electrodes, before turning her attention to biomedical applications. To adapt the films for drug delivery, she came up with ways to incorporate drug molecules into the layers of the film. These molecules are then released when the particles reach their targets.
“We began to look at bioactive materials and how we could sandwich them into these layers and use that as a way to deliver the drug in a very controlled fashion, at the right time and in the right place,” she says. “We are using the layering as a way to modify the surface of a nanoparticle so that there is a very high and selective affinity for the cancer cells we’re targeting.”
Using this technique, she has created drug-delivery nanoparticles that are coated with molecules that specifically target cancer cells, with a particular focus on ovarian cancer. These particles can be tailored to carry chemotherapy drugs such as cisplatin, immunotherapy agents, or nucleic acids such as messenger RNA.
Working with colleagues around MIT, she has also developed materials that can be used to promote wound healing, blood clotting, and tissue regeneration.
“What we have found is that these layers are very versatile. They can coat a very broad range of substrates, and those substrates can be anything from a bone implant, which can be quite large, down to a nanoparticle, which is 100 nanometers,” she says.
Designing molecular tools
Zhang, who earned his undergraduate degree from Harvard University in 2004, has contributed to the development of multiple molecular tools to accelerate the understanding of human disease. While a graduate student at Stanford University, from which he received his PhD in 2009, Zhang worked in the lab of Professor Karl Deisseroth. There, he worked on a protein called channelrhodopsin, which he and Deisseroth believed held potential for engineering mammalian cells to respond to light.
The resulting technique, known as optogenetics, is now widely used in neuroscience and other fields. By engineering neurons to express light-sensitive proteins such as channelrhodopsin, researchers can either stimulate or silence the cells’ electrical impulses by shining different wavelengths of light on them. This has allowed for detailed study of the roles of specific populations of neurons in the brain, and the mapping of neural circuits that control a variety of behaviors.
In 2011, about a month after joining the MIT faculty, Zhang attended a talk by Harvard Medical School Professor Michael Gilmore, who studies the pathogenic bacterium Enterococcus. The scientist mentioned that these bacteria protect themselves from viruses with DNA-cutting enzymes known as nucleases, which are part of a defense system known as CRISPR.
“I had no idea what CRISPR was, but I was interested in nucleases,” Zhang told MIT News in 2016. “I went to look up CRISPR, and that’s when I realized you might be able to engineer it for use for genome editing.”
In January 2013, Zhang and members of his lab reported that they had successfully used CRISPR to edit genes in mammalian cells. The CRISPR system includes a nuclease called Cas9, which can be directed to cut a specific genetic target by RNA molecules known as guide strands.
Since then, scientists in fields from medicine to plant biology have used CRISPR to study gene function and investigate the possibility of correcting faulty genes that cause disease. More recently, Zhang’s lab has devised many enhancements to the original CRISPR system, such as making the targeting more precise and preventing unintended cuts in the wrong locations.
The National Medal of Science was established in 1959 and is administered for the White House by the National Science Foundation. The medal recognizes individuals who have made outstanding contributions to science and engineering.
The National Medal of Technology and Innovation was established in 1980 and is administered for the White House by the U.S. Department of Commerce’s Patent and Trademark Office. The award recognizes those who have made lasting contributions to America’s competitiveness and quality of life and helped strengthen the nation’s technological workforce.
An abundant phytoplankton feeds a global network of marine microbes
One of the hardest-working organisms in the ocean is the tiny, emerald-tinged Prochlorococcus marinus. These single-celled “picoplankton,” which are smaller than a human red blood cell, can be found in staggering numbers throughout the ocean’s surface waters, making Prochlorococcus the most abundant photosynthesizing organism on the planet. (Collectively, Prochlorococcus fix as much carbon as all the crops on land.) Scientists continue to find new ways that the little green microbe is involved in the ocean’s cycling and storage of carbon.
Now, MIT scientists have discovered a new ocean-regulating ability in the small but mighty microbes: cross-feeding of DNA building blocks. In a study appearing today in Science Advances, the team reports that Prochlorococcus shed these extra compounds into their surroundings, where they are then “cross-fed,” or taken up by other marine organisms that use them as nutrients, as sources of energy, or as regulators of metabolism. Prochlorococcus’ rejects, then, are other microbes’ resources.
What’s more, this cross-feeding occurs on a regular cycle: Prochlorococcus tend to shed their molecular baggage at night, when enterprising microbes quickly consume the cast-offs. For a microbe called SAR11, the most abundant bacterium in the ocean, the researchers found that the nighttime snack acts as a relaxant of sorts, prompting the bacteria to slow down their metabolism and effectively recharge for the next day.
Through this cross-feeding interaction, Prochlorococcus could be helping many microbial communities to grow sustainably, simply by giving away what they don’t need. And they’re doing so in a way that could set the daily rhythms of microbes around the world.
“The relationship between the two most abundant groups of microbes in ocean ecosystems has intrigued oceanographers for years,” says co-author and MIT Institute Professor Sallie “Penny” Chisholm, who played a role in the discovery of Prochlorococcus in 1986. “Now we have a glimpse of the finely tuned choreography that contributes to their growth and stability across vast regions of the oceans.”
Given that Prochlorococcus and SAR11 suffuse the surface oceans, the team suspects that the exchange of molecules from one to the other could amount to one of the major cross-feeding relationships in the ocean, making it an important regulator of the ocean carbon cycle.
“By looking at the details and diversity of cross-feeding processes, we can start to unearth important forces that are shaping the carbon cycle,” says the study’s lead author, Rogier Braakman, a research scientist in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS).
Other MIT co-authors include Brandon Satinsky, Tyler O’Keefe, Shane Hogle, Jamie Becker, Robert Li, Keven Dooley, and Aldo Arellano, along with Krista Longnecker, Melissa Soule, and Elizabeth Kujawinski of Woods Hole Oceanographic Institution (WHOI).
Spotting castaways
Cross-feeding occurs throughout the microbial world, though the process has mainly been studied in close-knit communities. In the human gut, for instance, microbes are in close proximity and can easily exchange and benefit from shared resources.
By comparison, Prochlorococcus are free-floating microbes that are regularly tossed and mixed through the ocean’s surface layers. While scientists assume that the plankton are involved in some amount of cross-feeding, exactly how this occurs, and who would benefit, have historically been challenging to probe; any stuff that Prochlorococcus cast away would have vanishingly low concentrations and would be exceedingly difficult to measure.
But in work published in 2023, Braakman teamed up with scientists at WHOI, who pioneered ways to measure small organic compounds in seawater. In the lab, they grew various strains of Prochlorococcus under different conditions and characterized what the microbes released. They found that among the major “exudates,” or released molecules, were purines and pyridines, which are molecular building blocks of DNA. The molecules also happen to be nitrogen-rich — a fact that puzzled the team. Prochlorococcus are mainly found in ocean regions that are low in nitrogen, so it was assumed they’d want to retain any and all nitrogen-containing compounds they could. Why, then, were they instead throwing such compounds away?
Global symphony
In their new study, the researchers took a deep dive into the details of Prochlorococcus’ cross-feeding and how it influences various types of ocean microbes.
They set out to study how Prochlorococcus use purines and pyridines in the first place, before expelling the compounds into their surroundings. They compared published genomes of the microbes, looking for genes that encode purine and pyridine metabolism. Tracing the genes forward through the genomes, the team found that once the compounds are produced, they are used to make DNA and replicate the microbes’ genome. Any leftover purines and pyridines are recycled and used again, though a fraction is ultimately released into the environment. Prochlorococcus appear to make the most of the compounds, then cast off what they cannot use.
The team also looked at gene expression data and found that genes involved in recycling purines and pyridines peak several hours after the recognized peak in genome replication that occurs at dusk. The question then was: What could be benefiting from this nightly shedding?
For this, the team looked at the genomes of more than 300 heterotrophic microbes — organisms that consume organic carbon rather than making it themselves through photosynthesis. They suspected that such carbon-feeders could be likely consumers of Prochlorococcus’ organic rejects. They found that most of the heterotrophs contained genes for taking up either purines or pyridines, or in some cases both, suggesting that microbes have evolved along different paths in terms of how they cross-feed.
The group zeroed in on one purine-preferring microbe, SAR11, as it is the most abundant heterotrophic microbe in the ocean. When they then compared the genes across different strains of SAR11, they found that various types use purines for different purposes, from simply taking them up and using them intact to breaking them down for their energy, carbon, or nitrogen. What could explain the diversity in how the microbes were using Prochlorococcus’ cast-offs?
It turns out the local environment plays a big role. Braakman and his collaborators performed a metagenome analysis, comparing the collectively sequenced genomes of all microbes in over 600 seawater samples from around the world and focusing on SAR11 bacteria. The metagenome sequences were collected alongside measurements of the environmental conditions and geographic locations of each sample. This analysis showed that the bacteria gobble up purines for their nitrogen when nitrogen in seawater is scarce, and for their carbon or energy when nitrogen is in surplus — revealing the selective pressures shaping these communities in different ocean regimes.
“The work here suggests that microbes in the ocean have developed relationships that advance their growth potential in ways we don’t expect,” says co-author Kujawinski.
Finally, the team carried out a simple experiment in the lab to see if they could directly observe a mechanism by which purines act on SAR11. They grew the bacteria in cultures, exposed them to various concentrations of purine, and unexpectedly found that it caused the cells to slow down their normal metabolic activities and even their growth. However, when the researchers put these same cells under environmentally stressful conditions, the cells kept growing strong and healthy, as if the metabolic pause induced by purines had primed them for growth and shielded them from the effects of the stress.
“When you think about the ocean, where you see this daily pulse of purines being released by Prochlorococcus, this provides a daily inhibition signal that could be causing a pause in SAR11 metabolism, so that the next day when the sun comes out, they are primed and ready,” Braakman says. “So we think Prochlorococcus is acting as a conductor in the daily symphony of ocean metabolism, and cross-feeding is creating a global synchronization among all these microbial cells.”
This work was supported, in part, by the Simons Foundation and the National Science Foundation.
At MIT, Clare Grey stresses battery development to electrify the planet
“How do we produce batteries at the cost that is suitable for mass adoption globally, and how do you do this to electrify the planet?” Clare Grey asked an audience of more than 450 in-person and virtual attendees at the sixth annual Dresselhaus Lecture, organized by MIT.nano on Nov. 18. “The biggest challenge is, how do you make batteries to allow more renewables on the grid.”
These questions emphasized one of Grey’s key messages in her presentation: The future of batteries aligns with global climate efforts. She addressed sustainability issues with lithium mining and stressed the importance of increasing the variety of minerals that can be used in batteries. But the talk primarily focused on advanced imaging techniques to produce insights into the behaviors of materials that will guide the development of new technology. “We need to come up with new chemistries and new materials that are both more sustainable and safer,” she said, as well as think about other issues like secondhand use, which requires batteries to be made to last longer.
Better understanding will produce better batteries
“Batteries have really transformed the way we live,” Grey said. “In order to improve batteries, we need to understand how they work, we need to understand how they operate, and we need to understand how they degrade.”
Grey, a Royal Society Research Professor and the Geoffrey Moorhouse Gibson Professor of Chemistry at Cambridge University, introduced new optical methods for studying batteries while they are operating, visualizing reactions down to the nanoscale. “It is much easier to study an operating device in-situ,” she said. “When you take batteries apart, sometimes there are processes that don’t survive disassembling.”
Grey presented work coming out of her research group that uses in-situ metrologies to better understand the dynamics and transformational phenomena of various materials. For example, in-situ nuclear magnetic resonance can identify issues with wrapping lithium with silicon (it does not form a passivating layer) and demonstrate why sodium cannot simply replace lithium in such anodes (the sodium ion is the wrong size). Grey discussed the value of being able to use in-situ metrology to look at higher-energy-density materials that are more sustainable, such as lithium-sulfur or lithium-air batteries.
The lecture connected local structure to mechanisms and to how materials intercalate. Grey spoke about using interferometric scattering (iSCAT) microscopy, typically used by biologists, to follow how ions are pulled in and out of materials. Sharing iSCAT images of graphite, she gave a shout-out to the late Institute Professor and lecture namesake Mildred Dresselhaus when discussing nucleation, the process by which atoms come together to form new structures, which is important when considering new, more sustainable materials for batteries.
“Millie, in her solid-state physics class for undergrads, nicely explained what’s going on here,” Grey explained. “There is a dramatic change in the conductivity as you go from diluted state to the dense state. The conductivity goes up. With this information, you can explore nucleation.”
Designing for the future
“How do we design for fast charging?” Grey asked, discussing gradient spectroscopy to visualize different materials. “We need to find a material that operates at a high enough voltage to avoid lithium plating and has high lithium mobility.”
“To return to the theme of graphite and Millie Dresselhaus,” said Grey, “I’ve been trying to really understand what is the nature of the passivating layer that grows on both graphite and lithium metal. Can we enhance this layer?” In the question-and-answer session that followed, Grey spoke about the pros and cons of incorporating nitrogen in the anode.
After the lecture, Grey was joined by Yet-Ming Chiang, the Kyocera Professor of Ceramics in the MIT Department of Materials Science and Engineering, for a fireside chat. The conversation touched on political and academic attitudes toward climate change in the United Kingdom, and audience members applauded Grey’s development of imaging methods that allow researchers to look at the temperature dependent response of battery materials.
This was the sixth Dresselhaus Lecture, named in honor of MIT Institute Professor Mildred Dresselhaus, known to many as the “Queen of Carbon Science.” “It’s truly wonderful to be here to celebrate the life and the science of Millie Dresselhaus,” said Grey. “She was a very strong advocate for women in science. I’m honored to be here to give a lecture in honor of her.”
High school teams compete at 2024 MIT Science Bowl Invitational
A quiet intensity held the room on edge as the clock ticked down in the final moments of the 2024 MIT Science Bowl Invitational. Montgomery Blair High School clung to a razor-thin lead over Mission San Jose High School — 70 to 60 — with just two minutes remaining.
Mission San Jose faced a pivotal bonus opportunity that could tie the score. The moderator’s steady voice filled the room as he read the question. Mission San Jose’s team of four huddled together, pencils moving quickly across their white scratch paper. Across the stage, Montgomery Blair’s players sat still, their eyes darting between the scoreboard and the opposing team attempting to close the gap.
Mission San Jose team captain Advaith Mopuri called out their final answer.
“Incorrect,” the moderator announced.
Montgomery Blair’s team collectively exhaled, the tension breaking as they sealed their championship victory, but the gravity of those final moments when everything was on the line lingered — a testament to just how close the competition had been. Their showdown in the final round was a fitting culmination of the event, showcasing the mental agility and teamwork honed through months of practice.
“That final round was so tense. It came down to the final question,” says Jonathan Huang, a senior undergraduate at MIT and the co-president of the MIT Science Bowl Club. “It’s rare for it to come down to the very last question, so that was really exciting.”
A tournament of science and strategy
Now in its sixth year at the high school level, the MIT Science Bowl Invitational welcomed 48 teams from across the country for a full day of competition. The buzzer-style tournament challenged students on topics spanning disciplines such as biology, chemistry, and physics. The rapid pace and diverse subject matter demanded a combination of deep knowledge, quick reflexes, and strategic teamwork.
Montgomery Blair’s hard-fought victory marked the culmination of months of preparation. “It was so exciting,” says Katherine Wang, Montgomery Blair senior and Science Bowl team member. “I can’t even describe it. You never think anything like that would happen to you.”
The volunteers who make it happen
Behind the scenes, the invitational is powered by a team of more than 120 dedicated volunteers, many of them current MIT students. From moderating matches to coordinating logistics, these volunteers form the backbone of the invitational.
Preparation for the competition starts months in advance. “By the time summer started, we already had to figure out who was going to be the head writers for each subject,” Huang says. “Every week over the summer, volunteers spent their own time to start writing up questions.”
“Every single question you hear today was written by a volunteer,” says Paolo Adajar, an MIT graduate student who served as a question judge this year and is a former president of the MIT Science Bowl Club. Adajar, who competed in the National Science Bowl as a high school student, has been involved in the MIT Invitational since it began in 2019. “There’s just something so fun about the games and just watching people be excited to get a question right.”
For many volunteers, the event is a chance to reconnect with a shared community. “It’s so nice to get together with the community every year,” says Emily Liu, a master’s student in computer science at MIT and a veteran volunteer. “And I’m always pleasantly surprised to see how much I remember.”
For competitors, the invitational offers more than just a chance to win. It’s an opportunity to connect with peers who share their passion for science, to experience the energy of MIT’s campus, and to sharpen skills they’ll carry into future endeavors.
As the crowd dispersed and the auditorium emptied, the spirit of the competition remained — a testament to the dedication, curiosity, and camaraderie that define the MIT Science Bowl Invitational.
A new computational model can predict antibody structures more accurately
By adapting artificial intelligence models known as large language models, researchers have made great progress in their ability to predict a protein’s structure from its sequence. However, this approach hasn’t been as successful for antibodies, in part because of the hypervariability seen in this type of protein.
To overcome that limitation, MIT researchers have developed a computational technique that allows large language models to predict antibody structures more accurately. Their work could enable researchers to sift through millions of possible antibodies to identify those that could be used to treat SARS-CoV-2 and other infectious diseases.
“Our method allows us to scale, whereas others do not, to the point where we can actually find a few needles in the haystack,” says Bonnie Berger, the Simons Professor of Mathematics, the head of the Computation and Biology group in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), and one of the senior authors of the new study. “If we could help to stop drug companies from going into clinical trials with the wrong thing, it would really save a lot of money.”
The technique, which focuses on modeling the hypervariable regions of antibodies, also holds potential for analyzing entire antibody repertoires from individual people. This could be useful for studying the immune response of people who are super responders to diseases such as HIV, to help figure out why their antibodies fend off the virus so effectively.
Bryan Bryson, an associate professor of biological engineering at MIT and a member of the Ragon Institute of MGH, MIT, and Harvard, is also a senior author of the paper, which appears this week in the Proceedings of the National Academy of Sciences. Rohit Singh, a former CSAIL research scientist who is now an assistant professor of biostatistics and bioinformatics and cell biology at Duke University, and Chiho Im ’22 are the lead authors of the paper. Researchers from Sanofi and ETH Zurich also contributed to the research.
Modeling hypervariability
Proteins consist of long chains of amino acids, which can fold into an enormous number of possible structures. In recent years, predicting these structures has become much easier, thanks to artificial intelligence programs such as AlphaFold. Many of these programs, such as ESMFold and OmegaFold, are based on large language models, which were originally developed to analyze vast amounts of text, allowing them to learn to predict the next word in a sequence. This same approach can work for protein sequences — by learning which protein structures are most likely to be formed from different patterns of amino acids.
However, this technique doesn’t always work on antibodies, especially on a segment of the antibody known as the hypervariable region. Antibodies usually have a Y-shaped structure, and these hypervariable regions are located in the tips of the Y, where they detect and bind to foreign proteins, also known as antigens. The bottom part of the Y provides structural support and helps antibodies to interact with immune cells.
Hypervariable regions vary in length but usually contain fewer than 40 amino acids. It has been estimated that the human immune system can produce up to 1 quintillion different antibodies by changing the sequence of these amino acids, helping to ensure that the body can respond to a huge variety of potential antigens. Those sequences aren’t evolutionarily constrained the same way that other protein sequences are, so it’s difficult for large language models to learn to predict their structures accurately.
“Part of the reason why language models can predict protein structure well is that evolution constrains these sequences in ways in which the model can decipher what those constraints would have meant,” Singh says. “It’s similar to learning the rules of grammar by looking at the context of words in a sentence, allowing you to figure out what it means.”
To model those hypervariable regions, the researchers created two modules that build on existing protein language models. One module was trained on hypervariable sequences from about 3,000 antibody structures found in the Protein Data Bank (PDB), allowing it to learn which sequences tend to generate similar structures. The other was trained on data correlating about 3,700 antibody sequences with how strongly they bind three different antigens.
The resulting computational model, known as AbMap, can predict antibody structures and binding strength based on their amino acid sequences. To demonstrate the usefulness of this model, the researchers used it to predict antibody structures that would strongly neutralize the spike protein of the SARS-CoV-2 virus.
The researchers started with a set of antibodies that had been predicted to bind to this target, then generated millions of variants by changing the hypervariable regions. Their model was able to identify antibody structures that would be the most successful, much more accurately than traditional protein-structure models based on large language models.
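As a rough illustration of that screening step (not the paper's actual pipeline), the sketch below enumerates single-point mutants of a made-up hypervariable segment and ranks them with a placeholder scoring function standing in for a trained model such as AbMap:

```python
AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def single_mutants(seq: str):
    """Yield every single-point mutant of a hypervariable segment."""
    for i, original in enumerate(seq):
        for aa in AMINO_ACIDS:
            if aa != original:
                yield seq[:i] + aa + seq[i + 1:]

def predicted_binding(seq: str) -> float:
    """Stand-in for a learned predictor (e.g., an AbMap-style model);
    this toy score exists only so the example runs end to end."""
    return (sum(ord(c) for c in seq) % 100) / 100.0

hv = "GFTFSSYAMS"  # hypothetical hypervariable segment
ranked = sorted(single_mutants(hv), key=predicted_binding, reverse=True)
print(ranked[:3])  # top-ranked variants to carry forward
```

Even this toy enumeration yields 19 variants per position, or 190 for a 10-residue segment; stacking substitutions across positions is what pushes the search into the millions of variants the researchers describe.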
Then, the researchers took the additional step of clustering the antibodies into groups that had similar structures. They chose antibodies from each of these clusters to test experimentally, working with researchers at Sanofi. Those experiments found that 82 percent of these antibodies had better binding strength than the original antibodies that went into the model.
Identifying a variety of good candidates early in the development process could help drug companies avoid spending a lot of money on testing candidates that end up failing later on, the researchers say.
“They don’t want to put all their eggs in one basket,” Singh says. “They don’t want to say, I’m going to take this one antibody and take it through preclinical trials, and then it turns out to be toxic. They would rather have a set of good possibilities and move all of them through, so that they have some choices if one goes wrong.”
Comparing antibodies
Using this technique, researchers could also try to answer some longstanding questions about why different people respond to infection differently. For example, why do some people develop much more severe forms of Covid, and why do some people who are exposed to HIV never become infected?
Scientists have been trying to answer those questions by performing single-cell RNA sequencing of immune cells from individuals and comparing them — a process known as antibody repertoire analysis. Previous work has shown that antibody repertoires from two different people may overlap as little as 10 percent. However, sequencing doesn’t offer as comprehensive a picture of antibody performance as structural information, because two antibodies that have different sequences may have similar structures and functions.
The new model can help to solve that problem by quickly generating structures for all of the antibodies found in an individual. In this study, the researchers showed that when structure is taken into account, there is much more overlap between individuals than the 10 percent seen in sequence comparisons. They now plan to further investigate how these structures may contribute to the body’s overall immune response against a particular pathogen.
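A toy comparison shows why accounting for structure raises the measured overlap. In the sketch below, the sequences and cluster labels are invented, with the labels standing in for predicted structures:

```python
def jaccard(a: set, b: set) -> float:
    """Jaccard overlap between two sets: |intersection| / |union|."""
    return len(a & b) / len(a | b)

# Toy repertoires mapping antibody sequences to hypothetical structural
# cluster labels (stand-ins for predicted structures).
person1 = {"QVQLVQSG": "clusterA", "EVQLVESG": "clusterB", "DVQLVQSG": "clusterC"}
person2 = {"QVQLVQSR": "clusterA", "EVQLVESG": "clusterB", "AVQLLESG": "clusterD"}

by_sequence = jaccard(set(person1), set(person2))                     # 0.2
by_structure = jaccard(set(person1.values()), set(person2.values()))  # 0.5
print(by_sequence, by_structure)  # structural overlap is far larger
```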
“This is where a language model fits in very beautifully because it has the scalability of sequence-based analysis, but it approaches the accuracy of structure-based analysis,” Singh says.
The research was funded by Sanofi and the Abdul Latif Jameel Clinic for Machine Learning in Health.
Remembering Mike Walter: “We loved him, and he loved us”
Michael “Mike” Walter, MIT Health applications support generalist, passed away on Nov. 2 at age 46 after a battle with cancer.
At home, Walter was a husband and devoted father to his two adolescent sons. But for 22 years, he was everyone’s friend and the smiling face at MIT Health who never failed to solve individual computer problems, no matter how large or small.
Walter came to MIT as an office assistant in MIT Health’s Medical Records department in 2002. He eventually transferred to MIT Health’s Technology Services team, where he worked from 2009 until his passing. Information Systems Manager David Forristall, who had previously worked in medical records, still remembers when “this young guy came to work for his first day.”
“When he first got to Medical Records, he thought it was only going to be a pit stop — that he was only going to be here for like two weeks,” says Walter’s colleague, Technical Support Specialist Michael Miller. “Then, 20 years later…”
“You don’t often, other than a family member, watch someone grow through their life,” says Forristall. “So for him to come to MIT as a young man at the start of his career, to a full-blown career with a wife and children. He basically came here as a boy, and we watched him turn into a man.”
Walter’s colleagues were always struck by how positive he was. “He never complained about help desk tickets. All of us looked to him for that,” remembers Medical Records Manager Tom Goodwin. “When I found myself getting a little annoyed, I would just look to Mike and think, he doesn’t do that.”
Without fail, Walter would drop everything to help his MIT Health colleagues. “He would go out on a call, and people would just keep stopping him,” remembers Senior Programmer Analyst Terry McNatt. “They would see him around the building, and they knew he would help them. He wouldn’t come back for two hours!”
The needs of MIT patients were just as important to Walter. At the annual flu clinics, Walter would volunteer for the full day. Often, people could find him serving as a gofer; he would deliver vaccines, Band-Aids, and whatever other supplies were needed to help the vaccinators be as efficient as possible.
According to his colleagues, Walter’s dedication to the MIT community is best explained by the day he learned of his cancer diagnosis. A major snowstorm was approaching, and Walter was diligently working to get laptop computers set up so employees could work remotely for multiple days if needed. All the while, he felt awful. Eventually he went to Urgent Care to be seen.
“Urgent Care was telling him, ‘You need to go to Mount Auburn Hospital right now,’” recalls Forristall. “But Mike didn’t want to go.” He refused to leave until all the laptops were properly set up so his colleagues could continue to care for patients despite the impending MIT snow closure, and he went only after grudgingly agreeing to let his peers cover for him.
Walter was also a Patriots superfan and a deep lover of sports. He had multiple footballs at his desk at all times, and for years he would gather his colleagues for “coffee-break” walks around campus, tossing a football back and forth as they went. Anyone who passed by was invited to Walter’s game of catch — students, construction workers, staff, and faculty alike were welcome.
“Mike was always happy and he shared that with everyone,” says Forristall. “He made you happy when you saw him. We loved him and he loved us.”
Mike Walter is survived by his wife Cindy (Cucinotta), his sons Ben and Leo, and many extended family members and friends.
Unlocking the hidden power of boiling — for energy, space, and beyond
Most people take boiling water for granted. For Associate Professor Matteo Bucci, uncovering the physics behind boiling has been a decade-long journey filled with unexpected challenges and new insights.
The seemingly simple phenomenon is extremely hard to study in complex systems like nuclear reactors, and yet it sits at the core of a wide range of important industrial processes. Unlocking its secrets could thus enable advances in efficient energy production, electronics cooling, water desalination, medical diagnostics, and more.
“Boiling is important for applications way beyond nuclear,” says Bucci, who earned tenure at MIT in July. “Boiling is used in 80 percent of the power plants that produce electricity. My research has implications for space propulsion, energy storage, electronics, and the increasingly important task of cooling computers.”
Bucci’s lab has developed new experimental techniques to shed light on a wide range of boiling and heat transfer phenomena that have limited energy projects for decades. Chief among those is a problem caused by bubbles forming so quickly they create a band of vapor across a surface that prevents further heat transfer. In 2023, Bucci and collaborators developed a unifying principle governing the problem, known as the boiling crisis, which could enable more efficient nuclear reactors and prevent catastrophic failures.
For Bucci, each bout of progress brings new possibilities — and new questions to answer.
“What’s the best paper?” Bucci asks. “The best paper is the next one. I think Alfred Hitchcock used to say it doesn’t matter how good your last movie was. If your next one is poor, people won’t remember it. I always tell my students that our next paper should always be better than the last. It’s a continuous journey of improvement.”
From engineering to bubbles
The Italian village where Bucci grew up had a population of about 1,000 during his childhood. He gained mechanical skills by working in his father’s machine shop and by taking apart and reassembling appliances like washing machines and air conditioners to see what was inside. He also gained a passion for cycling, competing in the sport until he attended the University of Pisa for undergraduate and graduate studies.
In college, Bucci was fascinated with matter and the origins of life, but he also liked building things, so when it came time to pick between physics and engineering, he decided nuclear engineering was a good middle ground.
“I have a passion for construction and for understanding how things are made,” Bucci says. “Nuclear engineering was a very unlikely but obvious choice. It was unlikely because in Italy, nuclear was already out of the energy landscape, so there were very few of us. At the same time, there were a combination of intellectual and practical challenges, which is what I like.”
For his PhD, Bucci went to France, where he met his wife, and went on to work at a French national lab. One day his department head asked him to work on a problem in nuclear reactor safety known as transient boiling. To solve it, he wanted to use a method for making measurements pioneered by MIT Professor Jacopo Buongiorno, so he received grant money to become a visiting scientist at MIT in 2013. He’s been studying boiling at MIT ever since.
Today Bucci’s lab is developing new diagnostic techniques to study boiling and heat transfer along with new materials and coatings that could make heat transfer more efficient. The work has given researchers an unprecedented view into the conditions inside a nuclear reactor.
“The diagnostics we’ve developed can collect the equivalent of 20 years of experimental work in a one-day experiment,” Bucci says.
That data, in turn, led Bucci to a remarkably simple model describing the boiling crisis.
“The effectiveness of the boiling process on the surface of nuclear reactor cladding determines the efficiency and the safety of the reactor,” Bucci explains. “It’s like a car that you want to accelerate, but there is an upper limit. For a nuclear reactor, that upper limit is dictated by boiling heat transfer, so we are interested in understanding what that upper limit is and how we can overcome it to enhance the reactor performance.”
Another particularly impactful area of research for Bucci is two-phase immersion cooling, a process in which hot server components bring a surrounding liquid to a boil; the resulting vapor then condenses on a heat exchanger above, creating a constant, passive cycle of cooling.
“It keeps chips cold with minimal waste of energy, significantly reducing the electricity consumption and carbon dioxide emissions of data centers,” Bucci explains. “Data centers emit as much CO2 as the entire aviation industry. By 2040, they will account for over 10 percent of emissions.”
Supporting students
Bucci says working with students is the most rewarding part of his job. “They have such great passion and competence. It’s motivating to work with people who have the same passion as you.”
“My students have no fear to explore new ideas,” Bucci adds. “They almost never stop in front of an obstacle — sometimes to the point where you have to slow them down and put them back on track.”
In running the Red Lab in the Department of Nuclear Science and Engineering, Bucci tries to give students independence as well as support.
“We’re not educating students, we’re educating future researchers,” Bucci says. “I think the most important part of our work is to not only provide the tools, but also to give the confidence and the self-starting attitude to fix problems. That can be business problems, problems with experiments, problems with your lab mates.”
Some of the more unusual experiments Bucci’s students run require them to gather measurements aboard an airplane in free fall, which briefly achieves zero gravity.
“Space research is the big fantasy of all the kids,” says Bucci, who joins students in the experiments about twice a year. “It’s very fun and inspiring research for students. Zero g gives you a new perspective on life.”
Applying AI
Bucci is also excited about incorporating artificial intelligence into his field. In 2023, he was a co-recipient of a Multidisciplinary University Research Initiative (MURI) project in thermal science dedicated solely to machine learning. In a nod to AI’s promise, Bucci also recently founded a journal, AI Thermal Fluids, to feature AI-driven research advances.
“Our community doesn’t have a home for people who want to develop machine-learning techniques,” Bucci says. “We wanted to create an avenue for people in computer science and thermal science to work together to make progress. I think we really need to bring computer scientists into our community to speed this process up.”
Bucci also believes AI can be used to process the huge volumes of data gathered with the new experimental techniques he’s developed, as well as to model phenomena researchers can’t yet study.
“It’s possible that AI will give us the opportunity to understand things that cannot be observed, or at least guide us in the dark as we try to find the root causes of many problems,” Bucci says.
MIT scientists pin down the origins of a fast radio burst
Fast radio bursts are brief and brilliant explosions of radio waves emitted by extremely compact objects such as neutron stars and possibly black holes. These fleeting fireworks last for just a thousandth of a second and can carry an enormous amount of energy — enough to briefly outshine entire galaxies.
Since the first fast radio burst (FRB) was discovered in 2007, astronomers have detected thousands of FRBs, at locations ranging from within our own galaxy to as far as 8 billion light-years away. Exactly how these cosmic radio flares are launched remains hotly contested.
Now, astronomers at MIT have pinned down the origins of at least one fast radio burst using a novel technique that could do the same for other FRBs. In their new study, appearing today in the journal Nature, the team focused on FRB 20221022A — a previously discovered fast radio burst that was detected from a galaxy about 200 million light-years away.
The team zeroed in on the precise location of the radio signal by analyzing its “scintillation,” an effect similar to the way stars twinkle in the night sky. The scientists studied changes in the FRB’s brightness and determined that the burst must have originated in the immediate vicinity of its source, rather than much farther out, as some models have predicted.
The team estimates that FRB 20221022A exploded from a region that is extremely close to a rotating neutron star, 10,000 kilometers away at most. That’s less than the distance between New York and Singapore. At such close range, the burst likely emerged from the neutron star’s magnetosphere — a highly magnetic region immediately surrounding the ultracompact star.
The findings provide the first conclusive evidence that a fast radio burst can originate from a compact object’s magnetosphere.
“In these environments of neutron stars, the magnetic fields are really at the limits of what the universe can produce,” says lead author Kenzie Nimmo, a postdoc in MIT’s Kavli Institute for Astrophysics and Space Research. “There’s been a lot of debate about whether this bright radio emission could even escape from that extreme plasma.”
“Around these highly magnetic neutron stars, also known as magnetars, atoms can’t exist — they would just get torn apart by the magnetic fields,” says Kiyoshi Masui, associate professor of physics at MIT. “The exciting thing here is, we find that the energy stored in those magnetic fields, close to the source, is twisting and reconfiguring such that it can be released as radio waves that we can see halfway across the universe.”
The study’s MIT co-authors include Adam Lanman, Shion Andrew, Daniele Michilli, and Kaitlyn Shin, along with collaborators from multiple institutions.
Burst size
Detections of fast radio bursts have ramped up in recent years, thanks largely to the Canadian Hydrogen Intensity Mapping Experiment (CHIME). The radio telescope array comprises four large, stationary receivers, each shaped like a half-pipe, that are tuned to a frequency range in which the telescope is highly sensitive to fast radio bursts.
Since 2020, CHIME has detected thousands of FRBs from all over the universe. While scientists generally agree that the bursts arise from extremely compact objects, the exact physics driving the FRBs is unclear. Some models predict that fast radio bursts should come from the turbulent magnetosphere immediately surrounding a compact object, while others predict that the bursts should originate much further out, as part of a shockwave that propagates away from the central object.
To distinguish between the two scenarios and determine where fast radio bursts arise, the team considered scintillation, the effect that occurs when light from a small, bright source, such as a star, filters through some medium, such as a galaxy’s gas. As the starlight passes through the gas, it bends in ways that make the star appear, to a distant observer, to twinkle. The smaller or farther away an object is, the more it twinkles. Light from larger or closer objects, such as planets in our own solar system, bends less and therefore does not appear to twinkle.
The team reasoned that if they could estimate the degree to which an FRB scintillates, they might determine the relative size of the region from where the FRB originated. The smaller the region, the closer in the burst would be to its source, and the more likely it is to have come from a magnetically turbulent environment. The larger the region, the farther the burst would be, giving support to the idea that FRBs stem from far-out shockwaves.
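The leverage in this test comes from simple geometry: a source’s apparent angular size is roughly its physical size divided by its distance, and only very small angular sizes scintillate strongly. Here is a minimal sketch of that scaling, using the article’s 200-million-light-year distance and 10,000-kilometer magnetosphere scale, with the shockwave size an assumed order of magnitude.

```python
# The angular-size scaling behind the scintillation test. The distance
# and the ~10,000 km magnetosphere scale come from the article; the
# shockwave size is an assumed order of magnitude for illustration.

LY_M = 9.461e15                # meters per light-year
distance = 200e6 * LY_M        # ~200 million light-years, in meters

scenarios = {
    "magnetosphere": 1e7,      # ~10,000 km emission region, m
    "shockwave": 5e10,         # ~tens of millions of km, m (assumed)
}

for name, size in scenarios.items():
    theta = size / distance    # small-angle approximation, radians
    print(f"{name:13s}: angular size ~ {theta:.1e} rad")

# magnetosphere: ~5e-18 rad; shockwave: ~3e-14 rad. That gap of about
# four orders of magnitude is what a scintillation measurement can
# detect: a compact magnetospheric source twinkles, an extended
# shockwave would not.
```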
Twinkle pattern
To test their idea, the researchers looked to FRB 20221022A, a fast radio burst detected by CHIME in 2022. The signal lasted about two milliseconds and was relatively run-of-the-mill in terms of brightness. However, the team’s collaborators at McGill University found that FRB 20221022A exhibited one standout property: the light from the burst was highly polarized, with the angle of polarization tracing a smooth S-shaped curve. This pattern is interpreted as evidence that the FRB’s emission site is rotating, a characteristic previously observed in pulsars, which are highly magnetized, rotating neutron stars.
Seeing similar polarization in a fast radio burst was a first, and it suggested that the signal may have arisen from the close-in vicinity of a neutron star. The McGill team’s results are reported in a companion paper today in Nature.
The MIT team realized that if FRB 20221022A did originate close to a neutron star, they should be able to confirm this using scintillation.
In their new study, Nimmo and her colleagues analyzed data from CHIME and observed steep variations in brightness that signaled scintillation: in other words, the FRB was twinkling. They confirmed that gas somewhere between the telescope and the FRB was bending and filtering the radio waves. The team then determined where this gas could be located, confirming that gas within the FRB’s host galaxy was responsible for some of the scintillation observed. This gas acted as a natural lens, allowing the researchers to zoom in on the FRB site and determine that the burst originated from an extremely small region, estimated to be about 10,000 kilometers wide.
“This means that the FRB is probably within hundreds of thousands of kilometers from the source,” Nimmo says. “That’s very close. For comparison, we would expect the signal would be more than tens of millions of kilometers away if it originated from a shockwave, and we would see no scintillation at all.”
“Zooming in to a 10,000-kilometer region, from a distance of 200 million light-years, is like being able to measure the width of a DNA helix, which is about 2 nanometers wide, on the surface of the moon,” Masui says. “There’s an amazing range of scales involved.”
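The comparison can be checked with small-angle arithmetic: using the study’s figures along with two standard values, the meters-per-light-year conversion and the Moon’s average distance of about 384,000 kilometers, the two angles come out nearly identical.

```python
# Checking Masui's analogy with small-angle arithmetic. The light-year
# conversion and the Moon's average distance are standard values used
# here for illustration.

LY_M = 9.461e15                 # meters per light-year

frb_region = 1e7                # 10,000 km, in meters
frb_distance = 200e6 * LY_M     # 200 million light-years, in meters

dna_width = 2e-9                # 2 nanometers, in meters
moon_distance = 3.84e8          # average Earth-Moon distance, in meters

print(f"FRB region:  {frb_region / frb_distance:.1e} rad")   # ~5.3e-18
print(f"DNA on moon: {dna_width / moon_distance:.1e} rad")   # ~5.2e-18
# The two angles agree to within a few percent, so the analogy holds.
```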
The team’s results, combined with the findings from the McGill team, rule out the possibility that FRB 20221022A emerged from the outskirts of a compact object. Instead, the studies prove for the first time that fast radio bursts can originate from very close to a neutron star, in highly chaotic magnetic environments.
“These bursts are always happening, and CHIME detects several a day,” Masui says. “There may be a lot of diversity in how and where they occur, and this scintillation technique will be really useful in helping to disentangle the various physics that drive these bursts.”
This research was supported, in part, by the Canada Foundation for Innovation, the Dunlap Institute for Astronomy and Astrophysics at the University of Toronto, the Canadian Institute for Advanced Research, the Trottier Space Institute at McGill University, and the University of British Columbia.