MIT Latest News
Responding to the climate impact of generative AI
In part 2 of our two-part series on generative artificial intelligence’s environmental impacts, MIT News explores some of the ways experts are working to reduce the technology’s carbon footprint.
The energy demands of generative AI are expected to continue increasing dramatically over the next decade.
For instance, an April 2025 report from the International Energy Agency predicts that the global electricity demand from data centers, which house the computing infrastructure to train and deploy AI models, will more than double by 2030, to around 945 terawatt-hours. While not all operations performed in a data center are AI-related, this total amount is slightly more than the energy consumption of Japan.
Moreover, an August 2025 analysis from Goldman Sachs Research forecasts that about 60 percent of the increasing electricity demands from data centers will be met by burning fossil fuels, increasing global carbon emissions by about 220 million tons. In comparison, driving a gas-powered car for 5,000 miles produces about 1 ton of carbon dioxide.
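For readers who want to sanity-check that comparison, here is the arithmetic using only the figures above (a back-of-the-envelope sketch, not part of the Goldman Sachs analysis):

```python
added_emissions_tons = 220_000_000   # projected extra CO2 from fossil-fueled data center power
miles_per_ton = 5_000                # roughly 5,000 miles of driving per ton of CO2

equivalent_miles = added_emissions_tons * miles_per_ton
print(f"Roughly {equivalent_miles:,} miles of driving")  # about 1.1 trillion miles
```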
These statistics are staggering, but at the same time, scientists and engineers at MIT and around the world are studying innovations and interventions to mitigate AI’s ballooning carbon footprint, from boosting the efficiency of algorithms to rethinking the design of data centers.
Considering carbon emissions
Talk of reducing generative AI’s carbon footprint is typically centered on “operational carbon” — the emissions produced by running the powerful processors, known as GPUs, inside a data center. It often ignores “embodied carbon,” the emissions created by building the data center in the first place, says Vijay Gadepally, senior scientist at MIT Lincoln Laboratory, who leads research projects in the Lincoln Laboratory Supercomputing Center.
Constructing and retrofitting a data center, built from tons of steel and concrete and filled with air conditioning units, computing hardware, and miles of cable, generates a huge amount of carbon emissions. In fact, the environmental impact of building data centers is one reason companies like Meta and Google are exploring more sustainable building materials. (Cost is another factor.)
Plus, data centers are enormous buildings — the world’s largest, the China Telecom-Inner Mongolia Information Park, covers roughly 10 million square feet — with about 10 to 50 times the energy density of a normal office building, Gadepally adds.
“The operational side is only part of the story. Some things we are working on to reduce operational emissions may lend themselves to reducing embodied carbon, too, but we need to do more on that front in the future,” he says.
Reducing operational carbon emissions
When it comes to reducing operational carbon emissions of AI data centers, there are many parallels with home energy-saving measures. For one, we can simply turn down the lights.
“Even if you have the worst lightbulbs in your house from an efficiency standpoint, turning them off or dimming them will always use less energy than leaving them running at full blast,” Gadepally says.
In the same fashion, research from the Supercomputing Center has shown that “turning down” the GPUs in a data center so they consume about three-tenths the energy has minimal impacts on the performance of AI models, while also making the hardware easier to cool.
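In practice, “turning down” a GPU usually means lowering its power cap through the vendor’s management tool. The sketch below is illustrative, not the Supercomputing Center’s actual setup: it wraps NVIDIA’s nvidia-smi utility in Python, the device index and 250-watt cap are placeholder values, and the command requires administrator privileges.

```python
import subprocess

def set_gpu_power_limit(gpu_index: int, watts: int) -> None:
    """Lower one GPU's power cap (in watts) via NVIDIA's management utility."""
    subprocess.run(
        ["nvidia-smi", "-i", str(gpu_index), "--power-limit", str(watts)],
        check=True,
    )

# Example: cap GPU 0 at 250 W instead of letting it run at its full default limit.
set_gpu_power_limit(gpu_index=0, watts=250)
```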
Another strategy is to use less energy-intensive computing hardware.
Demanding generative AI workloads, such as training new reasoning models like GPT-5, usually need many GPUs working simultaneously. The Goldman Sachs analysis estimates that a state-of-the-art system could soon have as many as 576 connected GPUs operating at once.
But engineers can sometimes achieve similar results by reducing the precision of computing hardware, perhaps by switching to less powerful processors that have been tuned to handle a specific AI workload.
There are also measures that boost the efficiency of training power-hungry deep-learning models before they are deployed.
Gadepally’s group found that about half the electricity used for training an AI model is spent to get the last 2 or 3 percentage points in accuracy. Stopping the training process early can save a lot of that energy.
“There might be cases where 70 percent accuracy is good enough for one particular application, like a recommender system for e-commerce,” he says.
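One simple way to capture those savings is an early-stopping rule that halts training once validation accuracy stops improving meaningfully. The sketch below is a generic illustration rather than the group’s code; the training and evaluation callables, the 0.1-point threshold, and the patience value are all assumptions.

```python
def train_with_early_stopping(train_one_epoch, evaluate, max_epochs=100,
                              min_gain=0.1, patience=3):
    """Stop training when validation accuracy improves by less than `min_gain`
    percentage points for `patience` epochs in a row."""
    best_acc = 0.0
    stale_epochs = 0
    for epoch in range(max_epochs):
        train_one_epoch()        # caller-supplied: one pass over the training data
        acc = evaluate()         # caller-supplied: validation accuracy in percent
        if acc - best_acc < min_gain:
            stale_epochs += 1
            if stale_epochs >= patience:
                print(f"Stopping at epoch {epoch}: accuracy plateaued near {best_acc:.1f}%")
                break
        else:
            best_acc = acc
            stale_epochs = 0
    return best_acc
```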
Researchers can also take advantage of efficiency-boosting measures.
For instance, a postdoc in the Supercomputing Center realized the group might run a thousand simulations during the training process to pick the two or three best AI models for their project.
By building a tool that allowed them to avoid about 80 percent of those wasted computing cycles, they dramatically reduced the energy demands of training with no reduction in model accuracy, Gadepally says.
Leveraging efficiency improvements
Constant innovation in computing hardware, such as denser arrays of transistors on semiconductor chips, is still enabling dramatic improvements in the energy efficiency of AI models.
Even though energy efficiency improvements have been slowing for most chips since about 2005, the amount of computation that GPUs can do per joule of energy has been improving by 50 to 60 percent each year, says Neil Thompson, director of the FutureTech Research Project at MIT’s Computer Science and Artificial Intelligence Laboratory and a principal investigator at MIT’s Initiative on the Digital Economy.
“The still-ongoing ‘Moore’s Law’ trend of getting more and more transistors on a chip still matters for a lot of these AI systems, since running operations in parallel is still very valuable for improving efficiency,” says Thompson.
Even more significant, his group’s research indicates that efficiency gains from new model architectures, which can solve complex problems faster while consuming less energy to achieve the same or better results, are doubling every eight or nine months.
Thompson coined the term “negaflop” to describe this effect. The same way a “negawatt” represents electricity saved due to energy-saving measures, a “negaflop” is a computing operation that doesn’t need to be performed due to algorithmic improvements.
These could be things like “pruning” away unnecessary components of a neural network or employing compression techniques that enable users to do more with less computation.
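Pruning is one concrete source of negaflops. As a minimal sketch (assuming PyTorch and its built-in pruning utilities, and a stand-in layer rather than a real model), the snippet below zeroes out the 30 percent of weights with the smallest magnitudes:

```python
import torch
import torch.nn as nn
import torch.nn.utils.prune as prune

layer = nn.Linear(1024, 1024)  # stand-in for one layer of a larger network
prune.l1_unstructured(layer, name="weight", amount=0.3)  # zero the smallest 30% of weights
prune.remove(layer, "weight")  # make the pruning permanent

sparsity = (layer.weight == 0).float().mean().item()
print(f"Fraction of weights pruned: {sparsity:.0%}")
```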
“If you need to use a really powerful model today to complete your task, in just a few years, you might be able to use a significantly smaller model to do the same thing, which would carry much less environmental burden. Making these models more efficient is the single-most important thing you can do to reduce the environmental costs of AI,” Thompson says.
Maximizing energy savings
While reducing the overall energy use of AI algorithms and computing hardware will cut greenhouse gas emissions, not all energy is the same, Gadepally adds.
“The amount of carbon emissions in 1 kilowatt hour varies quite significantly, even just during the day, as well as over the month and year,” he says.
Engineers can take advantage of these variations by leveraging the flexibility of AI workloads and data center operations to maximize emissions reductions. For instance, some generative AI workloads don’t need to be performed in their entirety at the same time.
Splitting computing operations so some are performed later, when more of the electricity fed into the grid is from renewable sources like solar and wind, can go a long way toward reducing a data center’s carbon footprint, says Deepjyoti Deka, a research scientist in the MIT Energy Initiative.
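The scheduling idea is simple to sketch: defer a flexible job to the hours when the grid’s forecast carbon intensity is lowest. The hourly values below are invented placeholders; a real scheduler would pull a forecast from a grid operator or a carbon-intensity service instead.

```python
def greenest_window(hourly_intensity, job_hours):
    """Return the start hour (and average intensity) of the lowest-carbon window
    long enough to fit a `job_hours`-long workload."""
    best_start, best_avg = 0, float("inf")
    for start in range(len(hourly_intensity) - job_hours + 1):
        avg = sum(hourly_intensity[start:start + job_hours]) / job_hours
        if avg < best_avg:
            best_start, best_avg = start, avg
    return best_start, best_avg

# Hypothetical 24-hour forecast (gCO2 per kWh): cleaner midday hours as solar output peaks.
forecast = [520, 510, 505, 500, 480, 450, 400, 340, 280, 230, 200, 190,
            185, 195, 220, 270, 330, 400, 460, 500, 515, 525, 530, 535]
start, avg = greenest_window(forecast, job_hours=4)
print(f"Run the deferrable 4-hour job starting at hour {start} (about {avg:.0f} gCO2/kWh)")
```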
Deka and his team are also studying “smarter” data centers where the AI workloads of multiple companies using the same computing equipment are flexibly adjusted to improve energy efficiency.
“By looking at the system as a whole, our hope is to minimize energy use as well as dependence on fossil fuels, while still maintaining reliability standards for AI companies and users,” Deka says.
He and others at MITEI are building a flexibility model of a data center that considers the differing energy demands of training a deep-learning model versus deploying that model. Their hope is to uncover the best strategies for scheduling and streamlining computing operations to improve energy efficiency.
The researchers are also exploring the use of long-duration energy storage units at data centers, which store excess energy for times when it is needed.
With these systems in place, a data center could use stored energy that was generated by renewable sources during a high-demand period, or avoid the use of diesel backup generators if there are fluctuations in the grid.
“Long-duration energy storage could be a game-changer here because we can design operations that really change the emission mix of the system to rely more on renewable energy,” Deka says.
In addition, researchers at MIT and Princeton University are developing a software tool for investment planning in the power sector, called GenX, which could be used to help companies determine the ideal place to locate a data center to minimize environmental impacts and costs.
Location can have a big impact on reducing a data center’s carbon footprint. For instance, Meta operates a data center in Lulea, a city on the coast of northern Sweden where cooler temperatures reduce the amount of electricity needed to cool computing hardware.
Thinking farther outside the box (way farther), some governments are even exploring the construction of data centers on the moon where they could potentially be operated with nearly all renewable energy.
AI-based solutions
Currently, the expansion of renewable energy generation here on Earth isn’t keeping pace with the rapid growth of AI, which is one major roadblock to reducing its carbon footprint, says Jennifer Turliuk MBA ’25, a short-term lecturer, former Sloan Fellow, and former practice leader of climate and energy AI at the Martin Trust Center for MIT Entrepreneurship.
The local, state, and federal review processes required for new renewable energy projects can take years.
Researchers at MIT and elsewhere are exploring the use of AI to speed up the process of connecting new renewable energy systems to the power grid.
For instance, a generative AI model could streamline interconnection studies that determine how a new project will impact the power grid, a step that often takes years to complete.
And when it comes to accelerating the development and implementation of clean energy technologies, AI could play a major role.
“Machine learning is great for tackling complex situations, and the electrical grid is said to be one of the largest and most complex machines in the world,” Turliuk adds.
For instance, AI could help optimize the prediction of solar and wind energy generation or identify ideal locations for new facilities.
It could also be used to perform predictive maintenance and fault detection for solar panels or other green energy infrastructure, or to monitor the capacity of transmission wires to maximize efficiency.
By helping researchers gather and analyze huge amounts of data, AI could also inform targeted policy interventions aimed at getting the biggest “bang for the buck” from areas such as renewable energy, Turliuk says.
To help policymakers, scientists, and enterprises consider the multifaceted costs and benefits of AI systems, she and her collaborators developed the Net Climate Impact Score.
The score is a framework that can be used to help determine the net climate impact of AI projects, considering emissions and other environmental costs along with potential environmental benefits in the future.
At the end of the day, the most effective solutions will likely result from collaborations among companies, regulators, and researchers, with academia leading the way, Turliuk adds.
“Every day counts. We are on a path where the effects of climate change won’t be fully known until it is too late to do anything about it. This is a once-in-a-lifetime opportunity to innovate and make AI systems less carbon-intense,” she says.
A beacon of light
Placing a lit candle in a window to welcome friends and strangers is an old Irish tradition that took on greater significance when Mary Robinson was elected president of Ireland in 1990. At the time, Robinson placed a lamp in Áras an Uachtaráin — the official residence of Ireland’s presidents — noting that the Irish diaspora and all others are always welcome in Ireland. Decades later, a lit lamp remains in a window in Áras an Uachtaráin.
The symbolism of Robinson’s lamp was shared by Hashim Sarkis, dean of the MIT School of Architecture and Planning (SA+P), at the school’s graduation ceremony in May, where Robinson addressed the class of 2025. To replicate the generous intentions of Robinson’s lamp and commemorate her visit to MIT, Sarkis commissioned a unique lantern as a gift for Robinson. He commissioned an identical one for his office, which is in the front portico of MIT at 77 Massachusetts Ave.
“The lamp will welcome all citizens of the world to MIT,” says Sarkis.
No ordinary lantern
The bespoke lantern was created by Marcelo Coelho SM ’08, PhD ’12, director of the Design Intelligence Lab and associate professor of the practice in the Department of Architecture.
One of several projects in the Geolectric research effort at the Design Intelligence Lab, the lantern showcases the use of geopolymers as a sustainable material alternative for embedded computers and consumer electronics.
“The materials that we use to make computers have a negative impact on climate, so we’re rethinking how we make products with embedded electronics — such as a lamp or lantern — from a climate perspective,” says Coelho.
Consumer electronics rely on materials that are high in carbon emissions and difficult to recycle. As the demand for embedded computing increases, so too does the need for alternative materials that have a reduced environmental impact while supporting electronic functionality.
The Geolectric lantern advances the formulation and application of geopolymers — a class of inorganic materials that form covalently bonded, non-crystalline networks. Unlike traditional ceramics, geopolymers do not require high-temperature firing, allowing electronic components to be embedded seamlessly during production.
Geopolymers are similar to ceramics, but have a lower carbon footprint and present a sustainable alternative for consumer electronics, product design, and architecture. The minerals Coelho uses to make the geopolymers — aluminum silicate and sodium silicate — are those regularly used to make ceramics.
“Geopolymers aren’t particularly new, but are becoming more popular,” says Coelho. “They have high strength in both tension and compression, superior durability, fire resistance, and thermal insulation. Compared to concrete, geopolymers don’t release carbon dioxide. Compared to ceramics, you don’t have to worry about firing them. What’s even more interesting is that they can be made from industrial byproducts and waste materials, contributing to a circular economy and reducing waste.”
The lantern is embedded with custom electronics that serve as a proximity and touch sensor. When a hand is placed over the top, light shines down the glass tubes.
The timeless design of the Geolectric lantern — minimalist, composed of natural materials — belies its future-forward function. Coelho’s academic background is in fine arts and computer science. Much of his work, he says, “bridges these two worlds.”
Working at the Design Intelligence Lab with Coelho on the lanterns are Jacob Payne, a graduate architecture student, and Jean-Baptiste Labrune, a research affiliate.
A light for MIT
A few weeks before commencement, Sarkis saw the Geolectric lantern at Palazzo Diedo Berggruen Arts and Culture in Venice, Italy. The exhibition, a collateral event of the Venice Biennale’s 19th International Architecture Exhibition, featured the work of 40 MIT architecture faculty.
The sustainability feature of Geolectric is the key reason Sarkis regarded the lantern as the perfect gift for Robinson. After her career in politics, Robinson founded the Mary Robinson Foundation — Climate Justice, an international center addressing the impacts of climate change on marginalized communities.
The third iteration of Geolectric for Sarkis’ office is currently underway. While the lantern was a technical prototype and an opportunity to showcase his lab’s research, Coelho — an immigrant from Brazil — was profoundly touched by how Sarkis created the perfect symbolism to both embody the welcoming spirit of the school and honor President Robinson.
“When the world feels most fragile, we need to urgently find sustainable and resilient solutions for our built environment. It’s in the darkest times when we need light the most,” says Coelho.
The first animals on Earth may have been sea sponges, study suggests
A team of MIT geochemists has unearthed new evidence in very old rocks suggesting that some of the first animals on Earth were likely ancestors of the modern sea sponge.
In a study appearing today in the Proceedings of the National Academy of Sciences, the researchers report that they have identified “chemical fossils” that may have been left by ancient sponges in rocks that are more than 541 million years old. A chemical fossil is a remnant of a biomolecule that originated from a living organism that has since been buried, transformed, and preserved in sediment, sometimes for hundreds of millions of years.
The newly identified chemical fossils are special types of steranes, which are the geologically stable form of sterols, such as cholesterol, that are found in the cell membranes of complex organisms. The researchers traced these special steranes to a class of sea sponges known as demosponges. Today, demosponges come in a huge variety of sizes and colors, and live throughout the oceans as soft and squishy filter feeders. Their ancient counterparts may have shared similar characteristics.
“We don’t know exactly what these organisms would have looked like back then, but they absolutely would have lived in the ocean, they would have been soft-bodied, and we presume they didn’t have a silica skeleton,” says Roger Summons, the Schlumberger Professor of Geobiology Emeritus in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS).
The group’s discovery of sponge-specific chemical fossils offers strong evidence that the ancestors of demosponges were among the first animals to evolve, and that they likely did so much earlier than the rest of Earth’s major animal groups.
The study’s authors, including Summons, are lead author and former MIT EAPS Crosby Postdoctoral Fellow Lubna Shawar, who is now a research scientist at Caltech, along with Gordon Love from the University of California at Riverside, Benjamin Uveges of Cornell University, Alex Zumberge of GeoMark Research in Houston, Paco Cárdenas of Uppsala University in Sweden, and José-Luis Giner of the State University of New York College of Environmental Science and Forestry.
Sponges on steroids
The new study builds on findings that the group first reported in 2009. In that study, the team identified the first chemical fossils that appeared to derive from ancient sponges. They analyzed rock samples from an outcrop in Oman and found a surprising abundance of steranes that they determined were the preserved remnants of 30-carbon (C30) sterols — a rare form of steroid that they showed was likely derived from ancient sea sponges.
The steranes were found in rocks that were very old and formed during the Ediacaran Period — which spans from about 635 million to roughly 541 million years ago. This period took place just before the Cambrian, when the Earth experienced a sudden and global explosion of complex multicellular life. The team’s discovery suggested that ancient sponges appeared much earlier than most multicellular life, and were possibly among Earth’s first animals.
However, soon after these findings were released, alternative hypotheses swirled to explain the C30 steranes’ origins, including that the chemicals could have been generated by other groups of organisms or by nonliving geological processes.
The team says the new study reinforces their earlier hypothesis that ancient sponges left behind this special chemical record, as they have identified a new chemical fossil in the same Precambrian rocks that is almost certainly biological in origin.
Building evidence
Just as in their previous work, the researchers looked for chemical fossils in rocks that date back to the Ediacaran Period. They acquired samples from drill cores and outcrops in Oman, western India, and Siberia, and analyzed the rocks for signatures of steranes, the geologically stable form of sterols found in all eukaryotes (plants, animals, and any organism with a nucleus and membrane-bound organelles).
“You’re not a eukaryote if you don’t have sterols or comparable membrane lipids,” Summons says.
A sterol’s core structure consists of four fused carbon rings. Additional carbon side chain and chemical add-ons can attach to and extend a sterol’s structure, depending on what an organism’s particular genes can produce. In humans, for instance, the sterol cholesterol contains 27 carbon atoms, while the sterols in plants generally have 29 carbon atoms.
“It’s very unusual to find a sterol with 30 carbons,” Shawar says.
The chemical fossil the researchers identified in 2009 was a 30-carbon sterol. What’s more, the team determined that the compound could have been synthesized by a distinctive enzyme that is encoded by a gene common to demosponges.
In their new study, the team focused on the chemistry of these compounds and realized the same sponge-derived gene could produce an even rarer sterol, with 31 carbon atoms (C31). When they analyzed their rock samples for C31 steranes, they found them in surprising abundance, along with the aforementioned C30 steranes.
“These special steranes were there all along,” Shawar says. “It took asking the right questions to seek them out and to really understand their meaning and from where they come.”
The researchers also obtained samples of modern-day demosponges and analyzed them for C31 sterols. They found that, indeed, the sterols — biological precursors of the C31 steranes found in rocks — are present in some species of contemporary demosponges. Going a step further, they chemically synthesized eight different C31 sterols in the lab as reference standards to verify their chemical structures. Then, they processed the molecules in ways that simulate how the sterols would change when deposited, buried, and pressurized over hundreds of millions of years. They found that the products of only two such sterols were an exact match with the form of C31 steranes that they found in ancient rock samples. The presence of those two and the absence of the other six demonstrate that these compounds were not produced by a random nonbiological process.
The findings, reinforced by multiple lines of inquiry, strongly support the idea that the steranes that were found in ancient rocks were indeed produced by living organisms, rather than through geological processes. What’s more, those organisms were likely the ancestors of demosponges, which to this day have retained the ability to produce the same series of compounds.
“It’s a combination of what’s in the rock, what’s in the sponge, and what you can make in a chemistry laboratory,” Summons says. “You’ve got three supportive, mutually agreeing lines of evidence, pointing to these sponges being among the earliest animals on Earth.”
“In this study we show how to authenticate a biomarker, verifying that a signal truly comes from life rather than contamination or non-biological chemistry,” Shawar adds.
Now that the team has shown C30 and C31 sterols are reliable signals of ancient sponges, they plan to look for the chemical fossils in ancient rocks from other regions of the world. They can only tell from the rocks they’ve sampled so far that the sediments, and the sponges, formed some time during the Ediacaran Period. With more samples, they will have a chance to narrow in on when some of the first animals took form.
This research was supported, in part, by the MIT Crosby Fund, the Distinguished Postdoctoral Fellowship program, the Simons Foundation Collaboration on the Origins of Life, and the NASA Exobiology Program.
How the brain splits up vision without you even noticing
The brain divides vision between its two hemispheres — what’s on your left is processed by your right hemisphere, and vice versa — but your experience with every bike or bird that you see zipping by is seamless. A new study by neuroscientists at The Picower Institute for Learning and Memory at MIT reveals how the brain handles the transition.
“It’s surprising to some people to hear that there’s some independence between the hemispheres, because that doesn’t really correspond to how we perceive reality,” says Earl K. Miller, Picower Professor in the Picower Institute and MIT’s Department of Brain and Cognitive Sciences. “In our consciousness, everything seems to be unified.”
There are advantages to separately processing vision on either side of the brain, including the ability to keep track of more things at once, Miller and other researchers have found, but neuroscientists have been eager to fully understand how perception ultimately appears so unified in the end.
Led by Picower Fellow Matthew Broschard and Research Scientist Jefferson Roy, the research team measured neural activity in the brains of animals as they tracked objects crossing their field of view. The results reveal that different frequencies of brain waves encoded and then transferred information from one hemisphere to the other in advance of the crossing, and then held on to the object representation in both hemispheres until after the crossing was complete. The process is analogous to how relay racers hand off a baton, how a child swings from one monkey bar to the next, and how cellphone towers hand off a call from one to the next as a train passenger travels through their area. In all cases, both hands or both towers actively hold what’s being transferred until the handoff is confirmed.
Witnessing the handoff
To conduct the study, published Sept. 19 in the Journal of Neuroscience, the researchers measured both the electrical spiking of individual neurons and the various frequencies of brain waves that emerge from the coordinated activity of many neurons. They studied the dorsal and ventrolateral prefrontal cortex in both hemispheres, brain areas associated with executive brain functions.
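Band-limited power of the kind analyzed in the study is commonly estimated from a recorded signal’s power spectrum. The sketch below is a generic illustration with a synthetic signal and conventional band boundaries, not the paper’s actual analysis pipeline.

```python
import numpy as np
from scipy.signal import welch

def band_power(signal, fs, low, high):
    """Average spectral power of `signal` (sampled at `fs` Hz) between `low` and `high` Hz."""
    freqs, psd = welch(signal, fs=fs, nperseg=fs)  # Welch PSD with 1-second segments
    mask = (freqs >= low) & (freqs <= high)
    return np.trapz(psd[mask], freqs[mask])

fs = 1000                                   # placeholder 1 kHz sampling rate
t = np.arange(0, 5, 1 / fs)
signal = np.sin(2 * np.pi * 10 * t) + 0.5 * np.random.randn(t.size)  # toy 10 Hz signal plus noise

bands = {"theta": (4, 8), "alpha": (8, 12), "beta": (15, 30), "gamma": (30, 100)}
for name, (lo, hi) in bands.items():
    print(f"{name}: {band_power(signal, fs, lo, hi):.4f}")
```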
The power fluctuations of the wave frequencies in each hemisphere told the researchers a clear story about how the subjects’ brains transferred information from the “sending” to the “receiving” hemisphere whenever a target object crossed the middle of their field of view. In the experiments, the target was accompanied by a distractor object on the opposite side of the screen to confirm that the subjects were consciously paying attention to the target object’s motion, and not just indiscriminately glancing at whatever happened to pop up onto the screen.
The highest-frequency “gamma” waves, which encode sensory information, peaked in both hemispheres when the subjects first looked at the screen and again when the two objects appeared. When a color change signaled which object was the target to track, the gamma increase was only evident in the “sending” hemisphere (on the opposite side from the target object), as expected. Meanwhile, the power of somewhat lower-frequency “beta” waves, which regulate when gamma waves are active, varied inversely with the gamma waves. These sensory encoding dynamics were stronger in the ventrolateral locations compared to the dorsolateral ones.
Two distinct bands of lower-frequency waves, meanwhile, showed greater power in the dorsolateral locations at key moments related to achieving the handoff. About a quarter of a second before a target object crossed the middle of the field of view, “alpha” waves ramped up in both hemispheres and then peaked just after the object crossed. “Theta” band waves, in turn, peaked after the crossing was complete, only in the “receiving” hemisphere (opposite from the target’s new position).
Accompanying the pattern of wave peaks, neuron spiking data showed how the brain’s representation of the target’s location traveled. Using decoder software, which interprets what information the spikes represent, the researchers could see the target representation emerge in the sending hemisphere’s ventrolateral location when it was first cued by the color change. Then they could see that as the target neared the middle of the field of view, the receiving hemisphere joined the sending hemisphere in representing the object, so that they both encoded the information during the transfer.
Doing the wave
Taken together, the results showed that after the sending hemisphere initially encoded the target with a ventrolateral interplay of beta and gamma waves, a dorsolateral ramp up of alpha waves caused the receiving hemisphere to anticipate the handoff by mirroring the sending hemisphere’s encoding of the target information. Alpha peaked just after the target crossed the middle of the field of view, and when the handoff was complete, theta peaked in the receiving hemisphere as if to say, “I got it.”
And in trials where the target never crossed the middle of the field of view, these handoff dynamics were not apparent in the measurements.
The study shows that the brain is not simply tracking objects in one hemisphere and then just picking them up anew when they enter the field of view of the other hemisphere.
“These results suggest there are active mechanisms that transfer information between cerebral hemispheres,” the authors wrote. “The brain seems to anticipate the transfer and acknowledge its completion.”
But they also note, based on other studies, that the system of interhemispheric coordination can sometimes appear to break down in certain neurological conditions including schizophrenia, autism, depression, dyslexia, and multiple sclerosis. The new study may lend insight into the specific dynamics needed for it to succeed.
In addition to Broschard, Roy, and Miller, the paper’s other authors are Scott Brincat and Meredith Mahnke.
Funding for the study came from the Office of Naval Research, the National Eye Institute of the National Institutes of Health, The Freedom Together Foundation, and The Picower Institute for Learning and Memory.
An adaptable evaluation of justice and interest groups
In 2024, an association of female senior citizens in Switzerland won a case at the European Court of Human Rights. Their country, the women contended, needed to do more to protect them from climate change, since heat waves can make the elderly particularly vulnerable. The court ruled in favor of the group, saying that states belonging to the Council of Europe have a “positive obligation” to protect citizens from “serious adverse effects of climate change on lives, health, well-being, and quality of life.”
The exact policy implications of such rulings can be hard to assess. But there are still subtle civic implications related to the ruling that bear consideration.
For one thing, although the case was brought by a particular special-interest association, its impact could benefit everyone in society. Yet the people in the group had not always belonged to it and are not wholly defined by being part of it. In a sense, while the senior-citizen association brought the case as a minority group of sorts, being a senior citizen is not the sole identity marker of the people in it.
These kinds of situations underline the complexity of interest-group dynamics as they engage with legal and political systems. Much public discourse on particularistic groups focuses on them as seemingly fixed entities with clear definitions, but being a member of a minority group is not a static thing.
“What I want to insist on is that it’s not like an absolute property. It’s a dynamic,” says MIT Professor Bruno Perreau. “It is both a complex situation and a mobile situation. You can be a member of a minority group vis-à-vis one category and not another.”
Now Perreau explores these dynamics in a book, “Spheres of Injustice,” published this year by the MIT Press. Perreau is the Cynthia L. Reed Professor of French Studies and Language in MIT’s Literature program. The French-language edition of the book was published in 2023.
Around the world, Perreau observes, much of the political contestation over interest-group politics and policies to protect minorities arrives at a similar tension point: Policies or legal rulings are sometimes crafted to redress problems, but when political conditions shift, those same policies can be discarded with claims that they themselves are unfair. In many places, this dynamic has become familiar through the contestation of policies regarding ethnic identity, gender, sexual orientation, and more.
But this is not the only paradigm of minority group politics. One aim of Perreau’s book is to add breadth to the subject, grounded in the empirical realities people experience.
After all, when it comes to being regarded as a member of a minority group, “in a given situation, some people will claim this label for themselves, whereas others will reject it,” Perreau writes. “Some consider this piece of their identity to be fundamental; others regard it as secondary. … The work of defining it is the very locus of its power.”
“Spheres of Injustice” both lays out that complexity and seeks to find ways to rethink group-oriented politics as part of an expansion of rights generally. The book arises partly out of previous work Perreau has published, often concerning France. It also developed partly in response to Perreau thinking about how rights might evolve in a time of climate change. But it arrived at its exact form as a rethinking of “Spheres of Justice,” a prominent 1980s text by political philosopher Michael Walzer.
Instead of there being a single mechanism through which justice could be applied throughout society, Walzer contended, there are many spheres of life, and the meaning of justice depends on where it is being applied.
“Because of the complexities of social relations, inequalities are impossible to fully erase,” Perreau says. “Even in the act of trying to resist an injustice, we may create other forms of injustice. Inequality is unavoidable, but his [Walzer’s] goal is to reduce injustice to the minimum, in the form of little inequalities that do not matter that much.”
Walzer’s work, however, never grapples with the kinds of political dynamics in which minority groups try to establish rights. To be clear, Perreau notes, in some cases the categorization as a minority is foisted upon people, and in other cases, it is developed by the group itself. In either case, he thinks we should consider how complex the formation and activities of the group may be.
As another example, consider that while disability rights are a contested issue in some countries and ignored in others, they also involve fluidity in terms of who advocates for and benefits from them. Imagine, Perreau says, you break a leg. Temporarily, he says, “you experience a little bit of what people with a permanent disability experience.” If you lobby for, say, better school building access or better transit access, you could be helping kids, the elderly, families with kids, and more — including people and groups not styling themselves as part of a disability-rights movement.
“One goal of the book is to enhance awareness about the virtuous circle that can emerge from this kind of minority politics,” Perreau says. “It’s often regarded by many privileged people as a protection that removes something from them. But that’s not the case.”
Indeed, the politics Perreau envisions in “Spheres of Injustice” have an alternate framework, in which developing rights for some better protects others, to the point where minority rights translate into universal rights. That is not, again, meant to minimize the experience of core members of a group that has been discriminated against, but to encourage thinking about how solidifying rights for a particular group overlaps with the greater expansion of rights generally.
“I’m walking a fine line between different perspectives on what it means to belong,” Perreau says. “But this is indispensable today.”
Indeed, due to the senior citizens in Switzerland, he notes, “There will be better rights in Europe. Politics is not just a matter of diplomacy and majority decision-making. Sharing a complex world means drawing on the minority parts of our lives because it is these parts that most fundamentally connect us to others, intentionally or unintentionally. Thinking in these terms today is an essential civic virtue.”
Teamwork in motion
Graduate school can feel like a race to the finish line, but it becomes much easier with a team to cheer you on — especially if that team is literally next to you, shouting encouragement from a decorated van.
From the morning of Sept. 12 into the early afternoon of Sept. 13, two teams made up of MIT Department of Aeronautics and Astronautics (AeroAstro) graduate students, alumni, and friends ran the 2025 Ragnar Road Reach the Beach relay as friendly yet competitive teams of 12, aptly named Team Aero and Team Astro. Ragnar races are long-distance, team-based relay events that run overnight through some of the country’s most scenic routes. The Reach the Beach course began in Lancaster, New Hampshire, and sent teams on a 204-mile trek through the White Mountains, finishing at Hampton Beach State Park.
“This all began on the Graduate Association of Aeronautics and Astronautics North End Pastry Tour in 2024. While discussing our mutual love for running, and stuffing our faces with cannoli, Maya Harris jokingly mentioned the concept of doing a Ragnar,” says Nathanael Jenkins, the eventual Team Aero captain. The idea took hold, inspiring enough interest to form a team for the first AeroAstro Ragnar relay in April 2025. From there enthusiasm continued to grow, resulting in the two current teams.
“I was surprised at the number of people, even people who don’t run very frequently, who wanted to do another race after finishing the first Ragnar,” says Patrick Riley, captain of Team Astro. “All of the new faces are awesome because they bring new energy and excitement to the team. I love the community, I love the sport, and I think the best way to get to know someone is to be crammed into a van with them for six hours at a time.”
Resource management and real-time support
The two teams organized four vans, adorned with words of encouragement and team magnets — a Ragnar tradition — to shepherd the teams through the race, serving as rolling rest stops for runners at each exchange point. Each runner completed three to four of the 36 total legs, covering between 1.7 and 11.6 miles at a time. Between legs, runners could swap out for a power nap or a protein bar. To keep morale high, teams played games and handed out awards of their own to teammates. “Noah (McAllister) got the prize for ‘Most bees removed from the car;’ Madison (Bronniman) won for ‘Eating the most tinned fish;’ I got the prize for ‘Most violent slamming of doors’ — which I hadn’t realized was in my skill set,” says Jenkins.
“This race is really unique because it bonds the team together in ways that many other races simply don’t,” says Riley, an avid runner prior to the event. “Marathons are strenuous on your body, but a Ragnar is about long-term resource management — eating, hydrating, sleep management, staying positive. Then communicating those logistics effectively and proceeding with the plan.”
Pulling off a logistics-heavy race across both teams required “magical spreadsheeting” that used distance, start time, elevation changes, and average pace to estimate finish time for each leg of the race. “Noah made it for the first race. Then a bunch of engineers saw a spreadsheet and zeroed in,” says Riley.
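The estimate behind that spreadsheet reduces to simple arithmetic: distance divided by pace, plus a small penalty for climbing. The sketch below uses made-up numbers and an invented elevation penalty, not the team’s actual model.

```python
from datetime import datetime, timedelta

def estimate_leg_finish(start, miles, pace_min_per_mile, elevation_gain_ft,
                        climb_penalty_min_per_100ft=0.25):
    """Estimate when a relay leg ends: base time from pace, plus a per-climb penalty."""
    minutes = miles * pace_min_per_mile
    minutes += (elevation_gain_ft / 100) * climb_penalty_min_per_100ft
    return start + timedelta(minutes=minutes)

leg_start = datetime(2025, 9, 12, 9, 30)   # hypothetical leg start time
handoff = estimate_leg_finish(leg_start, miles=7.2, pace_min_per_mile=8.5,
                              elevation_gain_ft=600)
print(handoff.strftime("Estimated handoff: %I:%M %p"))
```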
Engineering success
The careful planning paid off with a win for Team Astro, with a finishing time of 31:01:13. Team Aero was close behind, finishing at 31:19:43. Yet in the end, the competition mattered less than the camaraderie, when all runners celebrated together at the finish line.
“I think the big connection that we talk about is putting the teamwork skills we use in engineering into practice,” says Jenkins. “Engineers all like achieving. Runners like achieving. Many of our runners don’t run for enjoyment in the moment, but the feeling of crossing the finish line makes up for the, well, pain. In engineering, the feeling of finishing a difficult problem makes up for the pain of doing it.”
Call them gluttons for punishment or high achievers, the group is already making plans for the next race. “Everybody is immediately throwing links in the group chat for more Ragnars in the future,” says Riley. “MIT has so many people who want to explore and engage with the world around them, and they’re willing to take a chance and do crazy stuff. And we have the follow-through to make it happen.”
Runners
Team Aero: Claire Buffington, Alex Chipps, Nathanael Jenkins, Noah McAllister, Garrett Siemen, Nick Torres (Course 16, AeroAstro), Madison Bronniman, Ceci Perez Gago, Juju Wang (Course 16 alum), Katie Benoit, and Jason Wang.
Team Astro: Tim Cavesmith, Evrard Constant, Mary Foxen, Maya Harris, Jules Penot, Patrick Riley, Alex Rose, Samir Wadhwania (Course 16), Henry Price (Course 3, materials science and engineering), Katherine Hoekstra, and Ian Robertson (Woods Hole Oceanographic Institution).
Honorary teammates: Abigail Lee, Celvi Lissy, and Taylor Hampson.
How federal research support has helped create life-changing medicines
Gleevec, a cancer drug first approved for sale in 2001, has dramatically changed the lives of people with chronic myeloid leukemia. This form of cancer was once regarded as very difficult to combat, but survival rates of patients who respond to Gleevec now resemble those of the population at large.
Gleevec is also a medicine developed with the help of federally funded research. That support helped scientists better understand how to create drugs targeting the BCR-ABL oncoprotein, the cancer-causing protein behind chronic myeloid leukemia.
A new study co-authored by MIT researchers quantifies how many such examples of drug development exist. The current administration is proposing a nearly 40 percent budget reduction to the National Institutes of Health (NIH), which sponsors a significant portion of biomedical research. The study finds that over 50 percent of small-molecule drug patents this century cite at least one piece of NIH-backed research that would likely be vulnerable to that potential level of funding change.
“What we found was quite striking,” says MIT economist Danielle Li, co-author of a newly published paper outlining the study’s results. “More than half of the drugs approved by the FDA since 2000 are connected to NIH research that would likely have been cut under a 40 percent budget reduction.”
Or, as the researchers write in the paper: “We found extensive connections between medical advances and research that was funded by grants that would have been cut if the NIH budget was sharply reduced.”
The paper, “What if NIH funding had been 40% smaller?” is published today as a Policy Article in the journal Science. The authors are Pierre Azoulay, the China Program Professor of International Management at the MIT Sloan School of Management; Matthew Clancy, an economist with the group Open Philanthropy; Li, the David Sarnoff Professor of Management of Technology at MIT Sloan; and Bhaven N. Sampat, an economist at Johns Hopkins University. (Biomedical researchers at both MIT and Johns Hopkins could be affected by adjustments to NIH funding.)
To conduct the study, the researchers leveraged the fact that the NIH uses priority lists to determine which projects get funded. That makes it possible to discern which projects were in the lower 40 percent of NIH-backed projects, priority-wise, for a given time period. The researchers call these “at-risk” pieces of research. Applying these data from 1980 through 2007, the scholars examined the patents of the new molecular entities — drugs with a new active ingredient — approved by the U.S. Food and Drug Administration since 2000. There is typically a time interval between academic research and subsequent related drug development.
The study focuses on small-molecule drugs — compact organic compounds, often taken orally as medicine — whereas NIH funding supports a wider range of advancements in medicine generally. Based on how many of these FDA-approved small-molecule medicines were linked to at-risk research from the prior period, the researchers estimated what kinds of consequences a 40 percent cut in funding would have generated going forward.
The study distinguishes between two types of links new drugs have to NIH funding. Some drug patents have what the researchers call “direct” links to new NIH-backed projects that generated new findings relevant to development of those particular drugs. Other patents have “indirect” links to the NIH, when they cite prior NIH-funded studies that contributed to the overall body of knowledge used in drug development.
The analysis finds that 40 of the FDA-approved medications have direct links to new NIH-supported studies cited in the patents — or 7.1 percent. Of these, 14 patents cite at-risk pieces of NIH research.
When it comes to indirect links, of the 557 drugs approved by the FDA from 2000 to 2023, the study found that 59.4 percent have a patent citing at least one NIH-supported research publication. And, 51.4 percent cite at least one NIH-funded study from the at-risk category of projects.
“The indirect connection is where we see the real breadth of NIH's impact,” Li says. “What the NIH does is fund research that forms the scientific foundation upon which companies and other drug developers build.”
As the researchers emphasize in the paper, there are many nuances involved in the study. A single citation of an NIH-funded study could appear in a patent for a variety of reasons, and does not necessarily mean “that the drug in question could never have been developed in its absence,” as they write in the paper. To reckon with this, the study also analyzes how many patents had at least 25 percent of their citations fall in the category of at-risk NIH-backed research. By this metric, they found that 65 of the 557 FDA-approved drugs, or 11.7 percent, met the threshold.
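That threshold check is easy to express in code. The sketch below uses invented toy counts, not the study’s dataset, to show how the share of drugs meeting a 25 percent citation threshold would be computed.

```python
def share_meeting_threshold(citation_counts, threshold=0.25):
    """Fraction of drugs whose patents draw at least `threshold` of their citations
    from at-risk NIH-backed research. Values are (at_risk_citations, total_citations)."""
    qualifying = sum(
        1 for at_risk, total in citation_counts.values()
        if total > 0 and at_risk / total >= threshold
    )
    return qualifying / len(citation_counts)

# Hypothetical toy data for three drugs.
toy = {"drug_A": (3, 10), "drug_B": (0, 8), "drug_C": (5, 12)}
print(f"{share_meeting_threshold(toy):.1%} of drugs meet the 25% threshold")
```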
On the other hand, as the researchers state in the paper, it is possible the study “understates the extent to which medical advances are connected to NIH research.” For one thing, as the study’s endpoint for examining NIH data is 2007, there could have been more recent pieces of research informing medications that have already received FDA approval. The study does not quantify “second-order connections,” in which NIH-supported findings may have led to additional research that directly led to drug development. Again, NIH funding also supports a broad range of studies beyond the type examined in the current paper.
It is also likely, the scholars suggest, that NIH cuts would curtail the careers of many promising scientists, and in so doing slow down medical progress. For a variety of these reasons, in addition to the core data itself, the scholars say the study indicates how broadly NIH-backed research has helped advance medicine.
“The worry is that these kinds of deep cuts to the NIH risk that foundation and therefore endanger the development of medicines that might be used to treat us, or our kids and grandkids, 20 years from now,” Li says.
Azoulay and Sampat have received past NIH funding. They also serve on an NIH working group about the empirical analysis of the scientific enterprise.
AI system learns from many types of scientific information and runs experiments to discover new materials
Machine-learning models can speed up the discovery of new materials by making predictions and suggesting experiments. But most models today only consider a few specific types of data or variables. Compare that with human scientists, who work in a collaborative environment and consider experimental results, the broader scientific literature, imaging and structural analysis, personal experience or intuition, and input from colleagues and peer reviewers.
Now, MIT researchers have developed a method for optimizing materials recipes and planning experiments that incorporates information from diverse sources like insights from the literature, chemical compositions, microstructural images, and more. The approach is part of a new platform, named Copilot for Real-world Experimental Scientists (CRESt), that also uses robotic equipment for high-throughput materials testing, the results of which are fed back into large multimodal models to further optimize materials recipes.
Human researchers can converse with the system in natural language, with no coding required, and the system makes its own observations and hypotheses along the way. Cameras and visual language models also allow the system to monitor experiments, detect issues, and suggest corrections.
“In the field of AI for science, the key is designing new experiments,” says Ju Li, School of Engineering Carl Richard Soderberg Professor of Power Engineering. “We use multimodal feedback — for example information from previous literature on how palladium behaved in fuel cells at this temperature, and human feedback — to complement experimental data and design new experiments. We also use robots to synthesize and characterize the material’s structure and to test performance.”
The system is described in a paper published in Nature. The researchers used CRESt to explore more than 900 chemistries and conduct 3,500 electrochemical tests, leading to the discovery of a catalyst material that delivered record power density in a fuel cell that runs on formate salt to produce electricity.
Joining Li on the paper as first authors are PhD student Zhen Zhang, Zhichu Ren PhD ’24, PhD student Chia-Wei Hsu, and postdoc Weibin Chen. Their coauthors are MIT Assistant Professor Iwnetim Abate; Associate Professor Pulkit Agrawal; JR East Professor of Engineering Yang Shao-Horn; MIT.nano researcher Aubrey Penn; Zhang-Wei Hong PhD ’25; Hongbin Xu PhD ’25; Daniel Zheng PhD ’25; MIT graduate students Shuhan Miao and Hugh Smith; MIT postdocs Yimeng Huang, Weiyin Chen, Yungsheng Tian, Yifan Gao, and Yaoshen Niu; former MIT postdoc Sipei Li; and collaborators including Chi-Feng Lee, Yu-Cheng Shao, Hsiao-Tsu Wang, and Ying-Rui Lu.
A smarter system
Materials science experiments can be time-consuming and expensive. They require researchers to carefully design workflows, make new materials, and run a series of tests and analyses to understand what happened. Those results are then used to decide how to improve the material.
To improve the process, some researchers have turned to a machine-learning strategy known as active learning to make efficient use of previous experimental data points and explore or exploit those data. When paired with a statistical technique known as Bayesian optimization (BO), active learning has helped researchers identify new materials for things like batteries and advanced semiconductors.
“Bayesian optimization is like Netflix recommending the next movie to watch based on your viewing history, except instead it recommends the next experiment to do,” Li explains. “But basic Bayesian optimization is too simplistic. It uses a boxed-in design space, so if I say I’m going to use platinum, palladium, and iron, it only changes the ratio of those elements in this small space. But real materials have a lot more dependencies, and BO often gets lost.”
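For readers unfamiliar with the technique, here is what a basic, “boxed-in” Bayesian optimization loop of the kind Li describes looks like, using the scikit-optimize library rather than the CRESt codebase; the three-element recipe and the scoring function are invented stand-ins.

```python
from skopt import gp_minimize

def catalyst_score(ratios):
    """Toy objective standing in for a measured performance metric (lower is better).
    Not real chemistry: just a smooth function of normalized Pt/Pd/Fe fractions."""
    pt, pd, fe = (r / sum(ratios) for r in ratios)
    return -(0.6 * pt + 0.9 * pd + 0.3 * fe - 0.5 * pd * fe)

result = gp_minimize(
    catalyst_score,
    dimensions=[(0.05, 1.0)] * 3,   # allowed range for each element's relative amount
    n_calls=25,                     # number of "experiments" the optimizer may request
    random_state=0,
)
print("Suggested relative amounts:", [round(x, 2) for x in result.x])
```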
Most active learning approaches also rely on single data streams that don’t capture everything that goes on in an experiment. To equip computational systems with more human-like knowledge, while still taking advantage of the speed and control of automated systems, Li and his collaborators built CRESt.
CRESt’s robotic equipment includes a liquid-handling robot, a carbothermal shock system to rapidly synthesize materials, an automated electrochemical workstation for testing, characterization equipment including automated electron microscopy and optical microscopy, and auxiliary devices such as pumps and gas valves, which can also be remotely controlled. Many processing parameters can also be tuned.
With the user interface, researchers can chat with CRESt and tell it to use active learning to find promising materials recipes for different projects. CRESt can include up to 20 precursor molecules and substrates into its recipe. To guide material designs, CRESt’s models search through scientific papers for descriptions of elements or precursor molecules that might be useful. When human researchers tell CRESt to pursue new recipes, it kicks off a robotic symphony of sample preparation, characterization, and testing. The researcher can also ask CRESt to perform image analysis from scanning electron microscopy imaging, X-ray diffraction, and other sources.
Information from those processes is used to train the active learning models, which use both literature knowledge and current experimental results to suggest further experiments and accelerate materials discovery.
“For each recipe we use previous literature text or databases, and it creates these huge representations of every recipe based on the previous knowledge base before even doing the experiment,” says Li. “We perform principal component analysis in this knowledge embedding space to get a reduced search space that captures most of the performance variability. Then we use Bayesian optimization in this reduced space to design the new experiment. After the new experiment, we feed newly acquired multimodal experimental data and human feedback into a large language model to augment the knowledgebase and redefine the reduced search space, which gives us a big boost in active learning efficiency.”
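A minimal sketch of the dimensionality-reduction step Li describes, using scikit-learn and random vectors standing in for the literature-derived recipe embeddings (the sizes and the 90 percent variance target are assumptions):

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
recipe_embeddings = rng.normal(size=(200, 768))   # placeholder: 200 candidate recipes,
                                                  # each represented by a 768-dim embedding

pca = PCA(n_components=0.90)                      # keep enough components for 90% of variance
reduced = pca.fit_transform(recipe_embeddings)
print(f"Reduced search space: {reduced.shape[1]} dimensions "
      f"(down from {recipe_embeddings.shape[1]})")
# Bayesian optimization would then propose new recipes within this reduced space,
# and the embedding/PCA would be refreshed as new experimental data and feedback arrive.
```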
Materials science experiments can also face reproducibility challenges. To address the problem, CRESt monitors its experiments with cameras, looking for potential problems and suggesting solutions via text and voice to human researchers.
The researchers used CRESt to develop an electrode material for an advanced type of high-density fuel cell known as a direct formate fuel cell. After exploring more than 900 chemistries over three months, CRESt discovered a catalyst material made from eight elements that achieved a 9.3-fold improvement in power density per dollar over pure palladium, an expensive precious metal. In further tests, CRESt’s material was used to deliver a record power density to a working direct formate fuel cell even though the cell contained just one-fourth of the precious metals of previous devices.
The results show the potential for CRESt to find solutions to real-world energy problems that have plagued the materials science and engineering community for decades.
“A significant challenge for fuel-cell catalysts is the use of precious metal,” says Zhang. “For fuel cells, researchers have used various precious metals like palladium and platinum. We used a multielement catalyst that also incorporates many other cheap elements to create the optimal coordination environment for catalytic activity and resistance to poisoning species such as carbon monoxide and adsorbed hydrogen atoms. People have been searching for low-cost options for many years. This system greatly accelerated our search for these catalysts.”
A helpful assistant
Early on, poor reproducibility emerged as a major problem that limited the researchers’ ability to perform their new active learning technique on experimental datasets. Material properties can be influenced by the way the precursors are mixed and processed, and any number of problems can subtly alter experimental conditions, requiring careful inspection to correct.
To partially automate the process, the researchers coupled computer vision and vision language models with domain knowledge from the scientific literature, which allowed the system to hypothesize sources of irreproducibility and propose solutions. For example, the models can notice when there’s a millimeter-sized deviation in a sample’s shape or when a pipette moves something out of place. The researchers incorporated some of the model’s suggestions, leading to improved consistency, suggesting the models already make good experimental assistants.
The researchers noted that humans still performed most of the debugging in their experiments.
“CRESt is an assistant, not a replacement, for human researchers,” Li says. “Human researchers are still indispensable. In fact, we use natural language so the system can explain what it is doing and present observations and hypotheses. But this is a step toward more flexible, self-driving labs.”
