MIT Latest News

MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.

MIT cognitive scientists reveal why some sentences stand out from others

Wed, 10/01/2025 - 12:00am

“You still had to prove yourself.”

“Every cloud has a blue lining!”

Which of those sentences are you most likely to remember a few minutes from now? If you guessed the second, you’re probably correct.

According to a new study from MIT cognitive scientists, the sentences that stick in your mind are those whose distinctive meanings make them stand out from sentences you’ve previously seen. The researchers found that meaning, more than any other feature, determines how memorable a sentence is.

“One might have thought that when you remember sentences, maybe it’s all about the visual features of the sentence, but we found that that was not the case. A big contribution of this paper is pinning down that it is the meaning-related space that makes sentences memorable,” says Greta Tuckute PhD ’25, who is now a research fellow at Harvard University’s Kempner Institute.

The findings support the hypothesis that sentences with distinctive meanings — like “Does olive oil work for tanning?” — are stored in brain space that is not cluttered with sentences that mean almost the same thing. Sentences with similar meanings end up densely packed together and are therefore more difficult to recognize confidently later on, the researchers believe.

“When you encode sentences that have a similar meaning, there’s feature overlap in that space. Therefore, a particular sentence you’ve encoded is not linked to a unique set of features, but rather to a whole bunch of features that may overlap with other sentences,” says Evelina Fedorenko, an MIT associate professor of brain and cognitive sciences (BCS), a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study.

Tuckute and Thomas Clark, an MIT graduate student, are the lead authors of the paper, which appears in the Journal of Memory and Language. MIT graduate student Bryan Medina is also an author.

Distinctive sentences

What makes certain things more memorable than others is a longstanding question in cognitive science and neuroscience. In a 2011 study, Aude Oliva, now a senior research scientist at MIT and MIT director of the MIT-IBM Watson AI Lab, showed that not all items are created equal: Some types of images are much easier to remember than others, and people are remarkably consistent in what images they remember best.

In that study, Oliva and her colleagues found that, in general, images with people in them are the most memorable, followed by images of human-scale space and close-ups of objects. Least memorable are natural landscapes.

As a follow-up to that study, Fedorenko and Oliva, along with Ted Gibson, another faculty member in BCS, teamed up to determine if words also vary in their memorability. In a study published earlier this year, co-led by Tuckute and Kyle Mahowald, a former PhD student in BCS, the researchers found that the most memorable words are those that have the most distinctive meanings.

Words are categorized as more distinctive if they have a single meaning and few or no synonyms; words like “pineapple” or “avalanche,” for example, were found to be very memorable. On the other hand, words that can have multiple meanings, such as “light,” or that have many synonyms, like “happy,” were more difficult for people to recognize accurately.

In the new study, the researchers expanded their scope to analyze the memorability of sentences. Just like words, some sentences have very distinctive meanings, while others communicate similar information in slightly different ways.

To do the study, the researchers assembled a collection of 2,500 sentences drawn from publicly available databases that compile text from novels, news articles, movie dialogues, and other sources. Each sentence that they chose contained exactly six words.

The researchers then presented a random selection of about 1,000 of these sentences to each study participant, including repeats of some sentences. Each of the 500 participants in the study was asked to press a button when they saw a sentence that they remembered seeing earlier.

The most memorable sentences — the ones where participants accurately and quickly indicated that they had seen them before — included strings such as “Homer Simpson is hungry, very hungry,” and “These mosquitoes are — well, guinea pigs.”

Those memorable sentences overlapped significantly with the sentences judged to have the most distinctive meanings, as estimated in the high-dimensional vector space of a large language model (LLM) known as Sentence BERT. That model generates sentence-level representations that can be used for tasks like judging the similarity in meaning between sentences. It provided the researchers with a distinctness score for each sentence, based on its semantic similarity to the other sentences.
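
The article doesn’t spell out the exact scoring recipe, but a minimal sketch of the approach, using the open-source sentence-transformers library, might score distinctness as one minus a sentence’s highest cosine similarity to the rest of the pool. The model checkpoint and the scoring rule here are assumptions, not details from the paper.

```python
from sentence_transformers import SentenceTransformer
import numpy as np

# Hypothetical pool; the study used 2,500 six-word sentences.
sentences = [
    "You still had to prove yourself.",
    "Every cloud has a blue lining!",
    "Homer Simpson is hungry, very hungry.",
    "Does olive oil work for tanning?",
]

# Checkpoint is an assumption; the paper specifies only "Sentence BERT".
model = SentenceTransformer("all-MiniLM-L6-v2")
embeddings = model.encode(sentences, normalize_embeddings=True)

# Cosine similarity of every sentence to every other (dot products of unit vectors).
similarity = embeddings @ embeddings.T
np.fill_diagonal(similarity, -np.inf)  # ignore self-similarity

# Assumed scoring rule: distinctness = 1 - similarity to the nearest neighbor.
distinctness = 1.0 - similarity.max(axis=1)
for sentence, score in zip(sentences, distinctness):
    print(f"{score:.2f}  {sentence}")
```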

The researchers also evaluated the sentences using a model that predicts memorability based on the average memorability of the individual words in the sentence. This model performed fairly well at predicting overall sentence memorability, but not as well as Sentence BERT. This suggests that the meaning of a sentence as a whole — above and beyond the contributions from individual words — determines how memorable it will be, the researchers say.

Noisy memories

While cognitive scientists have long hypothesized that the brain’s memory banks have a limited capacity, the findings of the new study support an alternative hypothesis that would help to explain how the brain can continue forming new memories without losing old ones.

This alternative, known as the noisy representation hypothesis, says that when the brain encodes a new memory, be it an image, a word, or a sentence, it is represented in a noisy way — that is, this representation is not identical to the stimulus, and some information is lost. For example, for an image, you may not encode the exact viewing angle at which an object is shown, and for a sentence, you may not remember the exact construction used.

Under this theory, a new sentence would be encoded in a similar part of the memory space as sentences that carry similar meanings, whether those were encountered recently or at some point across a lifetime of language experience. This jumbling of similar meanings together increases the amount of noise and can make it much harder, later on, to remember the exact sentence you have seen before.

“The representation is gradually going to accumulate some noise. As a result, when you see an image or a sentence for a second time, your accuracy at judging whether you’ve seen it before will be affected, and it’ll be less than 100 percent in most cases,” Clark says.

However, if a sentence has a unique meaning that is encoded in a less densely crowded space, it will be easier to pick out later on.

“Your memory may still be noisy, but your ability to make judgments based on the representations is less affected by that noise because the representation is so distinctive to begin with,” Clark says.
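
A toy simulation, not taken from the paper, can illustrate the intuition: after encoding noise is added, items in a crowded region of meaning space sit close to their neighbors, so the margin for a confident recognition judgment shrinks, while a distinctive item keeps a wide margin.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "meaning space": 50 near-synonymous sentences packed together,
# plus one sentence with a distinctive meaning far from the cluster.
crowded = rng.normal(loc=0.0, scale=0.2, size=(50, 2))
distinctive = np.array([[3.0, 3.0]])
meanings = np.vstack([crowded, distinctive])

# Encoding is noisy: the stored trace is not identical to the stimulus.
traces = meanings + rng.normal(scale=0.3, size=meanings.shape)

def recognition_margin(i):
    """Distance from item i's trace to its nearest other trace.
    A larger margin makes a confident "I saw exactly this" judgment easier."""
    dists = np.linalg.norm(traces - traces[i], axis=1)
    dists[i] = np.inf
    return dists.min()

crowded_margin = np.mean([recognition_margin(i) for i in range(50)])
print(f"average margin, crowded items: {crowded_margin:.2f}")
print(f"margin, distinctive item:      {recognition_margin(50):.2f}")
```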

The researchers now plan to study whether other features of sentences, such as more vivid and descriptive language, might also contribute to making them more memorable, and how the language system may interact with the hippocampal memory structures during the encoding and retrieval of memories.

The research was funded, in part, by the National Institutes of Health, the McGovern Institute, the Department of Brain and Cognitive Sciences, the Simons Center for the Social Brain, and the MIT Quest Initiative for Intelligence.

3 Questions: How a new mission to Uranus could be just around the corner

Tue, 09/30/2025 - 8:00am

The successful test of SpaceX’s Starship launch vehicle, following a series of engineering challenges and failed launches, has reignited excitement over the possibilities this massive rocket may unlock for humanity’s greatest ambitions in space. The largest rocket ever built, Starship and its 33-engine Super Heavy booster completed a full launch into Earth orbit on Aug. 26, deployed eight test prototype satellites, and survived reentry for a simulated landing before coming down, mostly intact, in the Indian Ocean. The 400-foot rocket is designed to carry up to 150 tons of cargo to low Earth orbit, dramatically increasing potential payload volume over rockets currently in operation. In addition to the planned Artemis III mission to the lunar surface and proposed missions to Mars in the near future, Starship also presents an opportunity for large-scale scientific missions throughout the solar system.

The National Academy of Sciences Planetary Science Decadal Survey published a recommendation in 2022 outlining exploration of Uranus as its highest-priority flagship mission. This proposed mission was envisioned for the 2030s, assuming use of a Falcon Heavy expendable rocket and anticipating arrival at the planet before 2050. Earlier this summer, a paper from researchers in MIT’s Engineering Systems Lab found that Starship may enable this flagship mission to Uranus in half the flight time. 

In this 3Q, Chloe Gentgen, a PhD student in aeronautics and astronautics and co-author on the recent study, describes the significance of Uranus as a flagship mission and what the current trajectory of Starship means for scientific exploration.

Q: Why has Uranus been identified as the highest-priority flagship mission? 

A: Uranus is one of the most intriguing and least-explored planets in our solar system. The planet is tilted on its side, is extremely cold, presents a highly dynamic atmosphere with fast winds, and has an unusual and complex magnetic field. A few of Uranus’ many moons could be ocean worlds, making them potential candidates in the search for life in the solar system. The ice giants Uranus and Neptune also represent the closest match to most of the exoplanets discovered. A mission to Uranus would therefore radically transform our understanding of ice giants, the solar system, and exoplanets. 

What we know about Uranus largely dates back to Voyager 2’s brief flyby nearly 40 years ago. No spacecraft has visited Uranus or Neptune since, making them the only planets yet to have a dedicated orbital mission. One of the main obstacles has been the sheer distance. Uranus is 19 times farther from the sun than the Earth is, and nearly twice as far as Saturn. Reaching it requires a heavy-lift launch vehicle and trajectories involving gravity assists from other planets. 

Today, such heavy-lift launch vehicles are available, and trajectories have been identified for launch windows throughout the 2030s, which resulted in selecting a Uranus mission as the highest priority flagship in the 2022 decadal survey. The proposed concept, called Uranus Orbiter and Probe (UOP), would release a probe into the planet’s atmosphere and then embark on a multiyear tour of the system to study the planet’s interior, atmosphere, magnetosphere, rings, and moons. 

Q: How do you envision your work on the Starship launch vehicle being deployed for further development?

A: Our study assessed the feasibility and potential benefits of launching a mission to Uranus with a Starship refueled in Earth’s orbit, instead of a Falcon Heavy (another SpaceX launch vehicle, currently operational). The Uranus decadal study showed that launching on a Falcon Heavy Expendable results in a cruise time of at least 13 years. Long cruise times present challenges, such as loss of team expertise and a higher operational budget. With the mission not yet underway, we saw an opportunity to evaluate launch vehicles currently in development, particularly Starship. 

When refueled in orbit, Starship could launch a spacecraft directly to Uranus, without detours by other planets for gravity-assist maneuvers. The proposed spacecraft could then arrive at Uranus in just over six years, less than half the time currently envisioned. These high-energy trajectories require significant deceleration at Uranus to capture into orbit. If the spacecraft slows down propulsively, the burn would require 5 km/s of delta-v (the change in velocity needed for the maneuver), far more than spacecraft typically perform, which might result in a very complex design. A more conservative approach, assuming a maximum burn of 2 km/s at Uranus, would result in a cruise time of 8.5 years.
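
For a rough sense of why a 5 km/s burn is so demanding, the Tsiolkovsky rocket equation relates delta-v to the fraction of a vehicle’s mass that must be propellant. The specific impulse below is an assumed value for a generic chemical stage, not a figure from the study.

```python
import math

G0 = 9.80665   # standard gravity, m/s^2
ISP = 380.0    # assumed specific impulse of a generic chemical stage, seconds

def propellant_fraction(delta_v: float) -> float:
    """Tsiolkovsky rocket equation, rearranged: the share of initial mass
    that must be propellant to achieve a given delta-v (in m/s)."""
    return 1.0 - math.exp(-delta_v / (G0 * ISP))

for dv in (2_000.0, 5_000.0):
    print(f"{dv/1000:.0f} km/s burn -> {propellant_fraction(dv):.0%} of mass is propellant")
```

With these assumptions, a 2 km/s burn consumes roughly 42 percent of the vehicle’s mass as propellant, while 5 km/s consumes roughly 74 percent, which is why aerocapture becomes attractive.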

An alternative to propulsive orbit insertion at Uranus is aerocapture, where the spacecraft, enclosed in a thermally protective aeroshell, dips into the planet’s atmosphere and uses aerodynamic drag to decelerate. We examined whether Starship itself could perform aerocapture, rather than being separated from the spacecraft shortly after launch. Starship is already designed to withstand atmospheric entry at Earth and Mars, and thus already has a thermal protection system that could, potentially, be modified for aerocapture at Uranus. While bringing a Starship vehicle all the way to Uranus presents significant challenges, our analysis showed that aerocapture with Starship would produce deceleration and heating loads similar to those of other Uranus aerocapture concepts and would enable a cruise time of six years.

In addition to launching the proposed spacecraft on a faster trajectory that would reach Uranus sooner, Starship’s capabilities could also be leveraged to deploy larger masses to Uranus, enabling an enhanced mission with additional instruments or probes.

Q: What does the recent successful test of Starship tell us about the viability and timeline for a potential mission to the outer solar system?

A: The latest Starship launch marked an important milestone for the company after three failed launches in recent months, renewing optimism about the rocket’s future capabilities. Looking ahead, the program will need to demonstrate on-orbit refueling, a capability central to both SpaceX’s long-term vision of deep-space exploration and this proposed mission.

Launch vehicle selection for flagship missions typically occurs approximately two years after the official mission formulation process begins, which has not yet commenced for the Uranus mission. As such, Starship still has a few more years to demonstrate its on-orbit refueling architecture before a decision has to be made.

Overall, Starship is still under development, and significant uncertainty remains about its performance, timelines, and costs. Even so, our initial findings paint a promising picture of the benefits that could be realized by using Starship for a flagship mission to Uranus.

3 Questions: Addressing the world’s most pressing challenges

Tue, 09/30/2025 - 8:00am

The Center for International Studies (CIS) empowers students, faculty, and scholars to bring MIT’s interdisciplinary style of research and scholarship to address complex global challenges. 

In this Q&A, Mihaela Papa, the center's director of research and a principal research scientist at MIT, describes her role as well as research within the BRICS Lab at MIT — a reference to the BRICS intergovernmental organization, which comprises the nations of Brazil, Russia, India, China, South Africa, Egypt, Ethiopia, Indonesia, Iran, and the United Arab Emirates. She also discusses the ongoing mission of CIS to tackle the world's most complex challenges in new and creative ways.

Q: What is your role at CIS, and some of your key accomplishments since joining the center just over a year ago?

A: I serve as director of research and principal research scientist at CIS, a role that bridges management and scholarship. I oversee grant and fellowship programs, spearhead new research initiatives, build research communities across our center's area programs and MIT schools, and mentor the next generation of scholars. My academic expertise is in international relations, and I publish on global governance and sustainable development, particularly through my new BRICS Lab. 

This past year, I focused on building collaborative platforms that highlight CIS’ role as an interdisciplinary hub and expand its research reach. With Evan Lieberman, the director of CIS, I launched the CIS Global Research and Policy Seminar series to address current challenges in global development and governance, foster cross-disciplinary dialogue, and connect theoretical insights to policy solutions. We also convened a Climate Adaptation Workshop, which examined promising strategies for financing adaptation and advancing policy innovation. We documented the outcomes in a workshop report that outlines a broader research agenda contributing to MIT’s larger climate mission.

In parallel, I have been reviewing CIS’ grant-making programs to improve how we serve our community, while also supporting regional initiatives such as research planning related to Ukraine. Together with the center's MIT-Brazil faculty director Brad Olsen, I secured a MITHIC [MIT Human Insight Collaboration] Connectivity grant to build an MIT Amazonia research community that connects MIT scholars with regional partners and strengthens collaboration across the Amazon. Finally, I launched the BRICS Lab to analyze transformations in global governance and have ongoing research on BRICS and food security and data centers in BRICS. 

Q: Tell us more about the BRICS Lab.

A: The BRICS countries comprise the majority of the world’s population and an expanding share of the global economy. [Originally comprising Brazil, Russia, India, and China, BRICS currently includes 11 nations.] As a group, they carry the collective weight to shape international rules, influence global markets, and redefine norms — yet the question remains: Will they use this power effectively? The BRICS Lab explores the implications of the bloc’s rise for international cooperation and its role in reshaping global politics. Our work focuses on three areas: the design and strategic use of informal groups like BRICS in world affairs; the coalition’s potential to address major challenges such as food security, climate change, and artificial intelligence; and the implications of U.S. policy toward BRICS for the future of multilateralism.

Q: What are the center’s biggest research priorities right now?

A: Our center was founded in response to rising geopolitical tensions and the urgent need for policy rooted in rigorous, evidence-based research. Since then, we have grown into a hub that combines interdisciplinary scholarship and actively engages with policymakers and the public. Today, as in our early years, the center brings together exceptional researchers with the ambition to address the world’s most pressing challenges in new and creative ways.

Our core focus spans security, development, and human dignity. Security studies have been a priority for the center, and our new nuclear security programming advances this work while training the next generation of scholars in this critical field. On the development front, our work has explored how societies manage diverse populations, navigate international migration, and engage with human rights and changing patterns of regime dynamics.

We are pursuing new research in three areas. First, on climate change, we seek to understand how societies confront environmental risks and harms, from insurance to water and food security in the international context. Second, we examine shifting patterns of global governance as rising powers set new agendas and take on greater responsibilities in the international system. Finally, we are initiating research on the impact of AI: how it reshapes governance across international relations, what role AI corporations play, and how AI-related risks can be managed.

As we approach our 75th anniversary in 2026, we are excited to bring researchers together to spark bold ideas that open new possibilities for the future.

Saab 340 becomes permanent flight-test asset at Lincoln Laboratory

Tue, 09/30/2025 - 8:00am

A Saab 340 aircraft recently became a permanent fixture of the fleet at the MIT Lincoln Laboratory Flight Test Facility, which supports R&D programs across the lab. 

Over the past five years, the facility leased and operated the twin-engine turboprop, once commercially used for the regional transport of passengers and cargo. During this time, staff modified the aircraft with a suite of radar, sensing, and communications capabilities. Transitioning the aircraft from a leased to a government-owned asset retains the aircraft's capabilities for present and future R&D in support of national security and reduces costs for Lincoln Laboratory sponsors. 

With the acquisition of the Saab, the Flight Test Facility currently maintains five government-owned aircraft — including three Gulfstream IVs and a Cessna 206 — as well as a leased Twin Otter, all housed on Hanscom Air Force Base, just over a mile from the laboratory's main campus.

"Of all our aircraft, the Saab is the most multi-mission-capable," says David Culbertson, manager of the Flight Test Facility. "It's highly versatile and adaptable, like a Swiss Army knife. Researchers from across the laboratory have conducted flight tests on the Saab to develop all kinds of technologies for national security."

For example, the Saab was modified to host the Airborne Radar Testbed (ARTB), a high-performance radar system based on a computer-controlled array of antennas that can be electronically steered (instead of physically moved) in different directions. With the ARTB, researchers have matured innovative radio-frequency technology; prototyped advanced system concepts; and demonstrated concepts of operation for intelligence, surveillance, and reconnaissance (ISR) missions. With its open-architecture design and compliance with open standards, the ARTB can easily be reconfigured to suit specific R&D needs.

"The Saab has enabled us to rapidly prototype and mature the complex system-of-systems solutions needed to realize critical warfighter capabilities," says Ramu Bhagavatula, an assistant leader of the laboratory's Embedded and Open Systems Group. "Recently, the Saab participated in a major national exercise as a surrogate multi-INT [intelligence] ISR platform. We demonstrated machine-to-machine cueing of our multi-INT payload to automatically recognize targets designated by an operational U.S. Air Force platform. The Saab's flexibility was key to integrating diverse technologies to develop this important capability."

In anticipation of the expiration of the Saab's lease, the Flight Test Facility and Financial Services Department conducted an extensive analysis of alternatives. Comparing the operational effectiveness, suitability, and life-cycle cost of various options, this analysis determined that the optimal solution for the laboratory and the government was to purchase the aircraft.

"Having the Saab in our permanent inventory allows research groups from across the laboratory to continuously leverage each other's test beds and expertise," says Linda McCabe, a project manager in the laboratory's Communication Networks and Analysis Group. "In addition, we can invest in long-term infrastructure updates that will benefit a wide range of users. For instance, my group helped obtain authorizations from various agencies to equip the Saab with Link 16, a secure communications network used by NATO and its allies to share tactical information."

The Saab acquisition is part of a larger recapitalization effort at the Flight Test Facility to support emerging technology development for years to come. This 10-year effort, slated for completion in 2026, is retiring aging, obsolete aircraft and replacing them with newer platforms that will be more cost-effective to maintain, easier to integrate rapidly prototyped systems into, and able to operate under expanded flight envelopes (the performance limits within which an aircraft can safely fly, defined by parameters such as speed, altitude, and maneuverability).

MIT joins in constructing the Giant Magellan Telescope

Tue, 09/30/2025 - 6:00am

The following article is adapted from a joint press release issued today by MIT and the Giant Magellan Telescope.

MIT is lending its support to the Giant Magellan Telescope, joining the international consortium to advance the $2.6 billion observatory in Chile. The Institute’s participation, enabled by a transformational gift from philanthropists Phillip (Terry) Ragon ’72 and Susan Ragon, adds to the momentum to construct the Giant Magellan Telescope, whose 25.4-meter aperture will have five times the light-collecting area and up to 200 times the power of existing observatories.

“As philanthropists, Terry and Susan have an unerring instinct for finding the big levers: those interventions that truly transform the scientific landscape,” says MIT President Sally Kornbluth. “We saw this with their founding of the Ragon Institute, which pursues daring approaches to harnessing the immune system to prevent and cure human diseases. With today’s landmark gift, the Ragons enable an equally lofty mission to better understand the universe — and we could not be more grateful for their visionary support."

MIT will be the 16th member of the international consortium advancing the Giant Magellan Telescope and the 10th participant based in the United States. Together, the consortium has invested $1 billion in the observatory — the largest-ever private investment in ground-based astronomy. Construction of the Giant Magellan Telescope is already 40 percent complete, with major components being designed and manufactured across 36 U.S. states.

“MIT is honored to join the consortium and participate in this exceptional scientific endeavor,” says Ian A. Waitz, MIT’s vice president for research. “The Giant Magellan Telescope will bring tremendous new capabilities to MIT astronomy and to U.S. leadership in fundamental science. The construction of this uniquely powerful telescope represents a vital private and public investment in scientific excellence for decades to come.”

MIT brings to the consortium powerful scientific capabilities and a legacy of astronomical excellence. MIT’s departments of Physics and of Earth, Atmospheric and Planetary Sciences, and the MIT Kavli Institute for Astrophysics and Space Research, are internationally recognized for research in exoplanets, cosmology, and environments of extreme gravity, such as black holes and compact binary stars. MIT’s involvement will strengthen the Giant Magellan Telescope’s unique capabilities in high-resolution spectroscopy, adaptive optics, and the search for life beyond Earth. It also deepens a long-standing scientific relationship: MIT is already a partner in the existing twin Magellan Telescopes at Las Campanas Observatory in Chile — one of the most scientifically valuable observing sites on Earth, and the same site where the Giant Magellan Telescope is now under construction.

“Since Galileo’s first spyglass, the world’s largest telescope has doubled in aperture every 40 to 50 years,” says Robert A. Simcoe, director of the MIT Kavli Institute and the Francis L. Friedman Professor of Physics. “Each generation’s leading instruments have resolved important scientific questions of the day and then surprised their builders with new discoveries not yet even imagined, helping humans understand our place in the universe. Together with the Giant Magellan Telescope, MIT is helping to realize our generation’s contribution to this lineage, consistent with our mission to advance the frontier of fundamental science by undertaking the most audacious and advanced engineering challenges.”

Contributing to the national strategy

MIT’s support comes at a pivotal time for the observatory. In June 2025, the National Science Foundation (NSF) advanced the Giant Magellan Telescope into its Final Design Phase, one of the final steps before it becomes eligible for federal construction funding. To demonstrate readiness and a strong commitment to U.S. leadership, the consortium offered to privately fund this phase, which is traditionally supported by the NSF.

MIT’s investment is an integral part of the national strategy to secure U.S. access to the next generation of research facilities known as “extremely large telescopes.” The Giant Magellan Telescope is a core partner in the U.S. Extremely Large Telescope Program, the nation’s top priority in astronomy. The National Academies’ Astro2020 Decadal Survey called the program “absolutely essential if the United States is to maintain a position as a leader in ground-based astronomy.” This long-term strategy also includes the recently commissioned Vera C. Rubin Observatory in Chile. Rubin is scanning the sky to detect rare, fast-changing cosmic events, while the Giant Magellan Telescope will provide the sensitivity, resolution, and spectroscopic instruments needed to study them in detail. Together, these Southern Hemisphere observatories will give U.S. scientists the tools they need to lead 21st-century astrophysics.

“Without direct access to the Giant Magellan Telescope, the U.S. risks falling behind in fundamental astronomy, as Rubin’s most transformational discoveries will be utilized by other nations with access to their own ‘extremely large telescopes’ under development,” says Walter Massey, board chair of the Giant Magellan Telescope.

MIT’s participation brings the United States a step closer to completing the promise of this powerful new observatory on a globally competitive timeline. With federal construction funding, it is expected that the observatory could reach 90 percent completion in less than two years and become operational by the 2030s.

“MIT brings critical expertise and momentum at a time when global leadership in astronomy hangs in the balance,” says Robert Shelton, president of the Giant Magellan Telescope. “With MIT, we are not just adding a partner; we are accelerating a shared vision for the future and reinforcing the United States’ position at the forefront of science.”

Other members of the Giant Magellan Telescope consortium include the University of Arizona, Carnegie Institution for Science, The University of Texas at Austin, Korea Astronomy and Space Science Institute, University of Chicago, São Paulo Research Foundation (FAPESP), Texas A&M University, Northwestern University, Harvard University, Astronomy Australia Ltd., Australian National University, Smithsonian Institution, Weizmann Institute of Science, Academia Sinica Institute of Astronomy and Astrophysics, and Arizona State University.

A boon for astrophysics research and education

Access to the world’s best optical telescopes is a critical resource for MIT researchers. More than 150 individual science programs at MIT have relied on major astronomical observatories in the past three years, engaging faculty, researchers, and students in investigations into the marvels of the universe. Recent research projects have included chemical studies of the universe’s oldest stars, led by Professor Anna Frebel; spectroscopy of stars shredded by dormant black holes, led by Professor Erin Kara; and measurements of a white dwarf teetering on the precipice of a black hole, led by Professor Kevin Burdge. 

“Over many decades, researchers at the MIT Kavli Institute have used unparalleled instruments to discover previously undetected cosmic phenomena from both ground-based observations and spaceflight missions,” says Nergis Mavalvala, dean of the MIT School of Science and the Curtis (1963) and Kathleen Marble Professor of Astrophysics. “I have no doubt our brilliant colleagues will carry on that tradition with the Giant Magellan Telescope, and I can’t wait to see what they will discover next.”

The Giant Magellan Telescope will also provide a platform for advanced R&D in remote sensing, creating opportunities to build custom infrared and optical spectrometers and high-speed imagers to further study our universe.

“One cannot have a leading physics program without a leading astrophysics program. Access to time on the Giant Magellan Telescope will ensure that future generations of MIT researchers will continue to work at the forefront of astrophysical discovery for decades to come,” says Deepto Chakrabarty, head of the MIT Department of Physics, the William A. M. Burden Professor in Astrophysics, and principal investigator at the MIT Kavli Institute. “Our institutional access will help attract and retain top researchers in astrophysics, planetary science, and advanced optics, and will give our PhD students and postdocs unrivaled educational opportunities.”

Responding to the climate impact of generative AI

Tue, 09/30/2025 - 12:00am

In part 2 of our two-part series on generative artificial intelligence’s environmental impacts, MIT News explores some of the ways experts are working to reduce the technology’s carbon footprint.

The energy demands of generative AI are expected to continue increasing dramatically over the next decade.

For instance, an April 2025 report from the International Energy Agency predicts that the global electricity demand from data centers, which house the computing infrastructure to train and deploy AI models, will more than double by 2030, to around 945 terawatt-hours. While not all operations performed in a data center are AI-related, this total amount is slightly more than the energy consumption of Japan.

Moreover, an August 2025 analysis from Goldman Sachs Research forecasts that about 60 percent of the increasing electricity demands from data centers will be met by burning fossil fuels, increasing global carbon emissions by about 220 million tons. In comparison, driving a gas-powered car for 5,000 miles produces about 1 ton of carbon dioxide.

These statistics are staggering, but at the same time, scientists and engineers at MIT and around the world are studying innovations and interventions to mitigate AI’s ballooning carbon footprint, from boosting the efficiency of algorithms to rethinking the design of data centers.

Considering carbon emissions

Talk of reducing generative AI’s carbon footprint is typically centered on “operational carbon” — the emissions produced by the powerful processors, known as GPUs, inside a data center. It often ignores “embodied carbon,” the emissions created by building the data center in the first place, says Vijay Gadepally, senior scientist at MIT Lincoln Laboratory, who leads research projects in the Lincoln Laboratory Supercomputing Center.

Constructing and retrofitting a data center, built from tons of steel and concrete and filled with air conditioning units, computing hardware, and miles of cable, consumes a huge amount of carbon. In fact, the environmental impact of building data centers is one reason companies like Meta and Google are exploring more sustainable building materials. (Cost is another factor.)

Plus, data centers are enormous buildings — the world’s largest, the China Telecom-Inner Mongolia Information Park, covers roughly 10 million square feet — with about 10 to 50 times the energy density of a normal office building, Gadepally adds.

“The operational side is only part of the story. Some things we are working on to reduce operational emissions may lend themselves to reducing embodied carbon, too, but we need to do more on that front in the future,” he says.

Reducing operational carbon emissions

When it comes to reducing operational carbon emissions of AI data centers, there are many parallels with home energy-saving measures. For one, we can simply turn down the lights.

“Even if you have the worst lightbulbs in your house from an efficiency standpoint, turning them off or dimming them will always use less energy than leaving them running at full blast,” Gadepally says.

In the same fashion, research from the Supercomputing Center has shown that “turning down” the GPUs in a data center so they consume about three-tenths the energy has minimal impacts on the performance of AI models, while also making the hardware easier to cool.
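
Capping a GPU’s power draw is one concrete way to “turn it down.” Here is a minimal sketch assuming NVIDIA hardware and the pynvml management bindings; the device index and the cap fraction are illustrative, and this is not the Supercomputing Center’s actual tooling.

```python
import pynvml

pynvml.nvmlInit()
handle = pynvml.nvmlDeviceGetHandleByIndex(0)  # first GPU; index chosen for illustration

# Supported power-limit range for this device, in milliwatts.
min_mw, max_mw = pynvml.nvmlDeviceGetPowerManagementLimitConstraints(handle)

# Cap the card well below its maximum (fraction is illustrative;
# changing the limit requires administrator privileges).
target_mw = max(min_mw, int(0.6 * max_mw))
pynvml.nvmlDeviceSetPowerManagementLimit(handle, target_mw)

print(f"power limit set to {target_mw / 1000:.0f} W")
pynvml.nvmlShutdown()
```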

Another strategy is to use less energy-intensive computing hardware.

Demanding generative AI workloads, such as training new reasoning models like GPT-5, usually need many GPUs working simultaneously. The Goldman Sachs analysis estimates that a state-of-the-art system could soon have as many as 576 connected GPUs operating at once.

But engineers can sometimes achieve similar results by reducing the precision of computing hardware, perhaps by switching to less powerful processors that have been tuned to handle a specific AI workload.

There are also measures that boost the efficiency of training power-hungry deep-learning models before they are deployed.

Gadepally’s group found that about half the electricity used for training an AI model is spent to get the last 2 or 3 percentage points in accuracy. Stopping the training process early can save a lot of that energy.

“There might be cases where 70 percent accuracy is good enough for one particular application, like a recommender system for e-commerce,” he says.
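
A minimal sketch of that idea in Python: train until accuracy is good enough for the application, or stop once per-epoch gains flatten. The helper functions and thresholds here are hypothetical stand-ins, not the laboratory’s tool.

```python
def train_with_energy_budget(model, train_epoch, evaluate,
                             target_acc=0.70, min_gain=0.005, patience=3):
    """Train until accuracy reaches a 'good enough' target, or stop early
    once per-epoch gains flatten, skipping the energy-hungry final few points."""
    best_acc, stalled = 0.0, 0
    for epoch in range(1_000):
        train_epoch(model)           # hypothetical: runs one epoch of training
        acc = evaluate(model)        # hypothetical: returns validation accuracy
        if acc >= target_acc:        # good enough for this application
            break
        if acc - best_acc < min_gain:  # diminishing returns
            stalled += 1
            if stalled >= patience:
                break
        else:
            best_acc, stalled = acc, 0
    return model
```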

Researchers can also take advantage of efficiency-boosting measures.

For instance, a postdoc in the Supercomputing Center realized the group might run a thousand simulations during the training process to pick the two or three best AI models for their project.

By building a tool that allowed them to avoid about 80 percent of those wasted computing cycles, they dramatically reduced the energy demands of training with no reduction in model accuracy, Gadepally says.

Leveraging efficiency improvements

Constant innovation in computing hardware, such as denser arrays of transistors on semiconductor chips, is still enabling dramatic improvements in the energy efficiency of AI models.

Even though energy efficiency improvements have been slowing for most chips since about 2005, the amount of computation that GPUs can do per joule of energy has been improving by 50 to 60 percent each year, says Neil Thompson, director of the FutureTech Research Project at MIT’s Computer Science and Artificial Intelligence Laboratory and a principal investigator at MIT’s Initiative on the Digital Economy.

“The still-ongoing ‘Moore’s Law’ trend of getting more and more transistors on chip still matters for a lot of these AI systems, since running operations in parallel is still very valuable for improving efficiency,” says Thompson.

Even more significant, his group’s research indicates that efficiency gains from new model architectures, which solve complex problems faster and consume less energy for the same or better results, are doubling every eight or nine months.

Thompson coined the term “negaflop” to describe this effect. The same way a “negawatt” represents electricity saved due to energy-saving measures, a “negaflop” is a computing operation that doesn’t need to be performed due to algorithmic improvements.

These could be things like “pruning” away unnecessary components of a neural network or employing compression techniques that enable users to do more with less computation.
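
Pruning, for example, can be sketched in a few lines of PyTorch. This generic magnitude-pruning example illustrates the technique; it is not a tool from Thompson’s group.

```python
import torch.nn as nn
import torch.nn.utils.prune as prune

# A small stand-in network; real savings come from pruning much larger models.
model = nn.Sequential(nn.Linear(784, 256), nn.ReLU(), nn.Linear(256, 10))

# Zero out the 50 percent of weights with the smallest magnitudes in each layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.5)
        prune.remove(module, "weight")  # bake the zeros into the weight tensor

zeros = sum(int((p == 0).sum()) for p in model.parameters() if p.dim() > 1)
total = sum(p.numel() for p in model.parameters() if p.dim() > 1)
print(f"weights zeroed: {zeros / total:.0%}")
```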

“If you need to use a really powerful model today to complete your task, in just a few years, you might be able to use a significantly smaller model to do the same thing, which would carry much less environmental burden. Making these models more efficient is the single-most important thing you can do to reduce the environmental costs of AI,” Thompson says.

Maximizing energy savings

While reducing the overall energy use of AI algorithms and computing hardware will cut greenhouse gas emissions, not all energy is the same, Gadepally adds.

“The amount of carbon emissions in 1 kilowatt hour varies quite significantly, even just during the day, as well as over the month and year,” he says.

Engineers can take advantage of these variations by leveraging the flexibility of AI workloads and data center operations to maximize emissions reductions. For instance, some generative AI workloads don’t need to be performed in their entirety at the same time.

Splitting computing operations so some are performed later, when more of the electricity fed into the grid is from renewable sources like solar and wind, can go a long way toward reducing a data center’s carbon footprint, says Deepjyoti Deka, a research scientist in the MIT Energy Initiative.
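
To make the idea concrete, a toy carbon-aware scheduler might place each deferrable job in its lowest-carbon feasible hour. The hourly intensity forecast and job names below are invented for illustration; the actual models are far richer.

```python
# Hypothetical day-ahead forecast of grid carbon intensity (gCO2 per kWh),
# one value per hour, dipping midday as solar output peaks.
intensity = [430, 410, 395, 380, 350, 300, 240, 180,
             140, 120, 110, 115, 130, 160, 210, 280,
             340, 390, 420, 450, 460, 455, 445, 435]

# Deferrable AI workloads: each just needs to finish by its deadline hour.
jobs = [
    {"name": "nightly-model-retrain", "deadline_hour": 23},
    {"name": "batch-embedding-refresh", "deadline_hour": 12},
]

for job in jobs:
    feasible_hours = range(job["deadline_hour"] + 1)
    best = min(feasible_hours, key=lambda h: intensity[h])
    print(f"{job['name']}: start at hour {best} ({intensity[best]} gCO2/kWh)")
```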

Deka and his team are also studying “smarter” data centers where the AI workloads of multiple companies using the same computing equipment are flexibly adjusted to improve energy efficiency.

“By looking at the system as a whole, our hope is to minimize energy use as well as dependence on fossil fuels, while still maintaining reliability standards for AI companies and users,” Deka says.

He and others at MITEI are building a flexibility model of a data center that considers the differing energy demands of training a deep-learning model versus deploying that model. Their hope is to uncover the best strategies for scheduling and streamlining computing operations to improve energy efficiency.

The researchers are also exploring the use of long-duration energy storage units at data centers, which store excess energy for times when it is needed.

With these systems in place, a data center could use stored energy that was generated by renewable sources during a high-demand period, or avoid the use of diesel backup generators if there are fluctuations in the grid.

“Long-duration energy storage could be a game-changer here because we can design operations that really change the emission mix of the system to rely more on renewable energy,” Deka says.

In addition, researchers at MIT and Princeton University are developing a software tool for investment planning in the power sector, called GenX, which could be used to help companies determine the ideal place to locate a data center to minimize environmental impacts and costs.

Location can have a big impact on reducing a data center’s carbon footprint. For instance, Meta operates a data center in Luleå, a city on the coast of northern Sweden where cooler temperatures reduce the amount of electricity needed to cool computing hardware.

Thinking farther outside the box (way farther), some governments are even exploring the construction of data centers on the moon where they could potentially be operated with nearly all renewable energy.

AI-based solutions

Currently, the expansion of renewable energy generation here on Earth isn’t keeping pace with the rapid growth of AI, which is one major roadblock to reducing its carbon footprint, says Jennifer Turliuk MBA ’25, a short-term lecturer, former Sloan Fellow, and former practice leader of climate and energy AI at the Martin Trust Center for MIT Entrepreneurship.

The local, state, and federal review processes required for new renewable energy projects can take years.

Researchers at MIT and elsewhere are exploring the use of AI to speed up the process of connecting new renewable energy systems to the power grid.

For instance, a generative AI model could streamline interconnection studies that determine how a new project will impact the power grid, a step that often takes years to complete.

And when it comes to accelerating the development and implementation of clean energy technologies, AI could play a major role.

“Machine learning is great for tackling complex situations, and the electrical grid is said to be one of the largest and most complex machines in the world,” Turliuk adds.

For instance, AI could help optimize the prediction of solar and wind energy generation or identify ideal locations for new facilities.

It could also be used to perform predictive maintenance and fault detection for solar panels or other green energy infrastructure, or to monitor the capacity of transmission wires to maximize efficiency.

By helping researchers gather and analyze huge amounts of data, AI could also inform targeted policy interventions aimed at getting the biggest “bang for the buck” from areas such as renewable energy, Turliuk says.

To help policymakers, scientists, and enterprises consider the multifaceted costs and benefits of AI systems, she and her collaborators developed the Net Climate Impact Score.

The score is a framework that can be used to help determine the net climate impact of AI projects, considering emissions and other environmental costs along with potential environmental benefits in the future.

At the end of the day, the most effective solutions will likely result from collaborations among companies, regulators, and researchers, with academia leading the way, Turliuk adds.

“Every day counts. We are on a path where the effects of climate change won’t be fully known until it is too late to do anything about it. This is a once-in-a-lifetime opportunity to innovate and make AI systems less carbon-intense,” she says.

A beacon of light

Mon, 09/29/2025 - 4:00pm

Placing a lit candle in a window to welcome friends and strangers is an old Irish tradition that took on greater significance when Mary Robinson was elected president of Ireland in 1990. At the time, Robinson placed a lamp in Áras an Uachtaráin — the official residence of Ireland’s presidents — noting that the Irish diaspora and all others are always welcome in Ireland. Decades later, a lit lamp remains in a window in Áras an Uachtaráin.

The symbolism of Robinson’s lamp was shared by Hashim Sarkis, dean of the MIT School of Architecture and Planning (SA+P), at the school’s graduation ceremony in May, where Robinson addressed the class of 2025. To replicate the generous intentions of Robinson’s lamp and commemorate her visit to MIT, Sarkis commissioned a unique lantern as a gift for Robinson. He commissioned an identical one for his office, which is in the front portico of MIT at 77 Massachusetts Ave.

“The lamp will welcome all citizens of the world to MIT,” says Sarkis.

No ordinary lantern

The bespoke lantern was created by Marcelo Coelho SM ’08, PhD ’12, director of the Design Intelligence Lab and associate professor of the practice in the Department of Architecture.

One of several projects in the Geolectric research effort at the Design Intelligence Lab, the lantern showcases the use of geopolymers as a sustainable material alternative for embedded computers and consumer electronics.

“The materials that we use to make computers have a negative impact on climate, so we’re rethinking how we make products with embedded electronics — such as a lamp or lantern — from a climate perspective,” says Coelho.

Consumer electronics rely on materials that are high in carbon emissions and difficult to recycle. As the demand for embedded computing increases, so too does the need for alternative materials that have a reduced environmental impact while supporting electronic functionality.

The Geolectric lantern advances the formulation and application of geopolymers — a class of inorganic materials that form covalently bonded, non-crystalline networks. Unlike traditional ceramics, geopolymers do not require high-temperature firing, allowing electronic components to be embedded seamlessly during production.

Geopolymers are similar to ceramics, but have a lower carbon footprint and present a sustainable alternative for consumer electronics, product design, and architecture. The minerals Coelho uses to make the geopolymers — aluminum silicate and sodium silicate — are those regularly used to make ceramics.

“Geopolymers aren’t particularly new, but are becoming more popular,” says Coelho. “They have high strength in both tension and compression, superior durability, fire resistance, and thermal insulation. Compared to concrete, geopolymers don’t release carbon dioxide. Compared to ceramics, you don’t have to worry about firing them. What’s even more interesting is that they can be made from industrial byproducts and waste materials, contributing to a circular economy and reducing waste.”

The lantern is embedded with custom electronics that serve as a proximity and touch sensor. When a hand is placed over the top, light shines down the glass tubes.

The timeless design of the Geolectric lantern — minimalist, composed of natural materials — belies its future-forward function. Coelho’s academic background is in fine arts and computer science. Much of his work, he says, “bridges these two worlds.”

Working at the Design Intelligence Lab with Coelho on the lanterns are Jacob Payne, a graduate architecture student, and Jean-Baptiste Labrune, a research affiliate.

A light for MIT

A few weeks before commencement, Sarkis saw the Geolectric lantern at Palazzo Diedo Berggruen Arts and Culture in Venice, Italy. The exhibition, a collateral event of the Venice Biennale’s 19th International Architecture Exhibition, featured the work of 40 MIT architecture faculty.

The sustainability feature of Geolectric is the key reason Sarkis regarded the lantern as the perfect gift for Robinson. After her career in politics, Robinson founded the Mary Robinson Foundation — Climate Justice, an international center addressing the impacts of climate change on marginalized communities.

The third iteration of Geolectric for Sarkis’ office is currently underway. While the lantern was a technical prototype and an opportunity to showcase his lab’s research, Coelho — an immigrant from Brazil — was profoundly touched by how Sarkis created the perfect symbolism to both embody the welcoming spirit of the school and honor President Robinson.

“When the world feels most fragile, we need to urgently find sustainable and resilient solutions for our built environment. It’s in the darkest times when we need light the most,” says Coelho. 

The first animals on Earth may have been sea sponges, study suggests

Mon, 09/29/2025 - 3:00pm

A team of MIT geochemists has unearthed new evidence in very old rocks suggesting that some of the first animals on Earth were likely ancestors of the modern sea sponge.

In a study appearing today in the Proceedings of the National Academy of Sciences, the researchers report that they have identified “chemical fossils” that may have been left by ancient sponges in rocks that are more than 541 million years old. A chemical fossil is a remnant of a biomolecule that originated from a living organism that has since been buried, transformed, and preserved in sediment, sometimes for hundreds of millions of years.

The newly identified chemical fossils are special types of steranes, which are the geologically stable form of sterols, such as cholesterol, that are found in the cell membranes of complex organisms. The researchers traced these special steranes to a class of sea sponges known as demosponges. Today, demosponges come in a huge variety of sizes and colors, and live throughout the oceans as soft and squishy filter feeders. Their ancient counterparts may have shared similar characteristics.

“We don’t know exactly what these organisms would have looked like back then, but they absolutely would have lived in the ocean, they would have been soft-bodied, and we presume they didn’t have a silica skeleton,” says Roger Summons, the Schlumberger Professor of Geobiology Emeritus in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS).

The group’s discovery of sponge-specific chemical fossils offers strong evidence that the ancestors of demosponges were among the first animals to evolve, and that they likely did so much earlier than the rest of Earth’s major animal groups.

The study’s authors, including Summons, are lead author and former MIT EAPS Crosby Postdoctoral Fellow Lubna Shawar, who is now a research scientist at Caltech, along with Gordon Love from the University of California at Riverside, Benjamin Uveges of Cornell University, Alex Zumberge of GeoMark Research in Houston, Paco Cárdenas of Uppsala University in Sweden, and José-Luis Giner of the State University of New York College of Environmental Science and Forestry.

Sponges on steroids

The new study builds on findings that the group first reported in 2009. In that study, the team identified the first chemical fossils that appeared to derive from ancient sponges. They analyzed rock samples from an outcrop in Oman and found a surprising abundance of steranes that they determined were the preserved remnants of 30-carbon (C30) sterols — a rare form of steroid that they showed was likely derived from ancient sea sponges.

The steranes were found in rocks that were very old and formed during the Ediacaran Period, which spans from roughly 635 million to about 541 million years ago. This period took place just before the Cambrian, when the Earth experienced a sudden and global explosion of complex multicellular life. The team’s discovery suggested that ancient sponges appeared much earlier than most multicellular life, and were possibly one of Earth’s first animals.

However, soon after these findings were released, alternative hypotheses swirled to explain the C30 steranes’ origins, including that the chemicals could have been generated by other groups of organisms or by nonliving geological processes.

The team says the new study reinforces their earlier hypothesis that ancient sponges left behind this special chemical record, as they have identified a new chemical fossil in the same Precambrian rocks that is almost certainly biological in origin.

Building evidence

Just as in their previous work, the researchers looked for chemical fossils in rocks that date back to the Ediacaran Period. They acquired samples from drill cores and outcrops in Oman, western India, and Siberia, and analyzed the rocks for signatures of steranes, the geologically stable form of sterols found in all eukaryotes (plants, animals, and any organism with a nucleus and membrane-bound organelles).

“You’re not a eukaryote if you don’t have sterols or comparable membrane lipids,” Summons says.

A sterol’s core structure consists of four fused carbon rings. Additional carbon side chains and chemical add-ons can attach to and extend a sterol’s structure, depending on what an organism’s particular genes can produce. In humans, for instance, the sterol cholesterol contains 27 carbon atoms, while the sterols in plants generally have 29 carbon atoms.

“It’s very unusual to find a sterol with 30 carbons,” Shawar says.

The chemical fossil the researchers identified in 2009 was a 30-carbon sterol. What’s more, the team determined that the compound could be synthesized thanks to a distinctive enzyme encoded by a gene that is common to demosponges.

In their new study, the team focused on the chemistry of these compounds and realized the same sponge-derived gene could produce an even rarer sterol, with 31 carbon atoms (C31). When they analyzed their rock samples for C31 steranes, they found them in surprising abundance, along with the aforementioned C30 steranes.

“These special steranes were there all along,” Shawar says. “It took asking the right questions to seek them out and to really understand their meaning and from where they come.”

The researchers also obtained samples of modern-day demosponges and analyzed them for C31 sterols. They found that, indeed, the sterols — biological precursors of the C31 steranes found in rocks — are present in some species of contemporary demosponges. Going a step further, they chemically synthesized eight different C31 sterols in the lab as reference standards to verify their chemical structures. Then, they processed the molecules in ways that simulate how the sterols would change when deposited, buried, and pressurized over hundreds of millions of years. They found that the products of only two such sterols were an exact match with the C31 steranes found in ancient rock samples. The presence of those two and the absence of the other six demonstrate that these compounds were not produced by a random nonbiological process.

The findings, reinforced by multiple lines of inquiry, strongly support the idea that the steranes that were found in ancient rocks were indeed produced by living organisms, rather than through geological processes. What’s more, those organisms were likely the ancestors of demosponges, which to this day have retained the ability to produce the same series of compounds.

“It’s a combination of what’s in the rock, what’s in the sponge, and what you can make in a chemistry laboratory,” Summons says. “You’ve got three supportive, mutually agreeing lines of evidence, pointing to these sponges being among the earliest animals on Earth.”

“In this study we show how to authenticate a biomarker, verifying that a signal truly comes from life rather than contamination or non-biological chemistry,” Shawar adds.

Now that the team has shown C30 and C31 steranes are reliable signals of ancient sponges, they plan to look for these chemical fossils in ancient rocks from other regions of the world. From the rocks they’ve sampled so far, they can tell only that the sediments, and the sponges, formed sometime during the Ediacaran Period. With more samples, they will have a chance to home in on when some of the first animals took form.

This research was supported, in part, by the MIT Crosby Fund, the Distinguished Postdoctoral Fellowship program, the Simons Foundation Collaboration on the Origins of Life, and the NASA Exobiology Program. 

How the brain splits up vision without you even noticing

Fri, 09/26/2025 - 3:50pm

The brain divides vision between its two hemispheres — what’s on your left is processed by your right hemisphere, and vice versa — but your experience with every bike or bird that you see zipping by is seamless. A new study by neuroscientists at The Picower Institute for Learning and Memory at MIT reveals how the brain handles the transition.

“It’s surprising to some people to hear that there’s some independence between the hemispheres, because that doesn’t really correspond to how we perceive reality,” says Earl K. Miller, Picower Professor in the Picower Institute and MIT’s Department of Brain and Cognitive Sciences. “In our consciousness, everything seems to be unified.”

There are advantages to separately processing vision on either side of the brain, including the ability to keep track of more things at once, Miller and other researchers have found, but neuroscientists have been eager to fully understand how perception ultimately appears so unified.

Led by Picower Fellow Matthew Broschard and Research Scientist Jefferson Roy, the research team measured neural activity in the brains of animals as they tracked objects crossing their field of view. The results reveal that different frequencies of brain waves encoded and then transferred information from one hemisphere to the other in advance of the crossing, and then held on to the object representation in both hemispheres until after the crossing was complete. The process is analogous to the way relay racers hand off a baton, a child swings from one monkey bar to the next, or cellphone towers hand off a call as a train passenger travels between their coverage areas. In each case, both hands or towers actively hold what’s being transferred until the handoff is confirmed.

Witnessing the handoff

To conduct the study, published Sept. 19 in the Journal of Neuroscience, the researchers measured both the electrical spiking of individual neurons and the various frequencies of brain waves that emerge from the coordinated activity of many neurons. They studied the dorsolateral and ventrolateral prefrontal cortex in both hemispheres, brain areas associated with executive brain functions.

The power fluctuations of the wave frequencies in each hemisphere told the researchers a clear story about how the subjects’ brains transferred information from the “sending” to the “receiving” hemisphere whenever a target object crossed the middle of their field of view. In the experiments, the target was accompanied by a distractor object on the opposite side of the screen to confirm that the subjects were consciously paying attention to the target object’s motion, and not just indiscriminately glancing at whatever happened to pop up onto the screen.

The highest-frequency “gamma” waves, which encode sensory information, peaked in both hemispheres when the subjects first looked at the screen and again when the two objects appeared. When a color change signaled which object was the target to track, the gamma increase was only evident in the “sending” hemisphere (on the opposite side from the target object), as expected. Meanwhile, the power of somewhat lower-frequency “beta” waves, which regulate when gamma waves are active, varied inversely with the gamma waves. These sensory encoding dynamics were stronger in the ventrolateral locations than in the dorsolateral ones.

Two distinct bands of lower-frequency waves, meanwhile, showed greater power in the dorsolateral locations at key moments related to achieving the handoff. About a quarter of a second before a target object crossed the middle of the field of view, “alpha” waves ramped up in both hemispheres and then peaked just after the object crossed. “Theta” band waves, in turn, peaked after the crossing was complete, only in the “receiving” hemisphere (opposite from the target’s new position).
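
For readers who want to see how such band-limited power traces are computed, here is a minimal sketch of the standard analysis: bandpass filtering followed by a Hilbert-transform envelope. The sampling rate, band edges, and use of SciPy are illustrative assumptions, not details taken from the study.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

FS = 1000  # sampling rate in Hz (assumed)

# Conventional frequency bands (Hz); exact edges vary by study.
BANDS = {"theta": (4, 8), "alpha": (8, 12), "beta": (15, 30), "gamma": (40, 100)}

def band_power(signal, band, fs=FS):
    """Bandpass-filter a recorded trace and return its instantaneous
    power envelope via the Hilbert transform."""
    low, high = BANDS[band]
    b, a = butter(4, [low / (fs / 2), high / (fs / 2)], btype="band")
    filtered = filtfilt(b, a, signal)
    return np.abs(hilbert(filtered)) ** 2

# Example: compare alpha power before vs. after a (simulated) midline crossing.
trace = np.random.randn(5 * FS)   # stand-in for a recorded neural signal
alpha = band_power(trace, "alpha")
crossing = 2 * FS                 # sample index of the crossing (assumed)
print(alpha[:crossing].mean(), alpha[crossing:].mean())
```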

Accompanying the pattern of wave peaks, neuron spiking data showed how the brain’s representation of the target’s location traveled. Using decoder software, which interprets what information the spikes represent, the researchers could see the target representation emerge in the sending hemisphere’s ventrolateral location when it was first cued by the color change. Then they could see that as the target neared the middle of the field of view, the receiving hemisphere joined the sending hemisphere in representing the object, so that they both encoded the information during the transfer.
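
The decoder software mentioned here is, in the typical case, a classifier trained on spike counts. Below is a hedged sketch of one common choice, a logistic-regression decoder run on synthetic trials; it illustrates the general technique, not the decoder the researchers actually used.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: a trials-by-neurons matrix of spike counts, with one label per
# trial saying which hemifield held the target (0 = left, 1 = right).
n_trials, n_neurons = 200, 50
X = rng.poisson(lam=5.0, size=(n_trials, n_neurons)).astype(float)
y = rng.integers(0, 2, size=n_trials)   # synthetic labels

decoder = LogisticRegression(max_iter=1000).fit(X[:150], y[:150])

# Decoding accuracy on held-out trials; applied in sliding time windows,
# this is how one watches a representation appear in each hemisphere.
print("held-out accuracy:", decoder.score(X[150:], y[150:]))
```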

Doing the wave

Taken together, the results showed that after the sending hemisphere initially encoded the target with a ventrolateral interplay of beta and gamma waves, a dorsolateral ramp-up of alpha waves caused the receiving hemisphere to anticipate the handoff by mirroring the sending hemisphere’s encoding of the target information. Alpha peaked just after the target crossed the middle of the field of view, and when the handoff was complete, theta peaked in the receiving hemisphere as if to say, “I got it.”

And in trials where the target never crossed the middle of the field of view, these handoff dynamics were not apparent in the measurements.

The study shows that the brain is not simply tracking objects in one hemisphere and then picking them up anew when they cross into the half of the visual field processed by the other hemisphere.

“These results suggest there are active mechanisms that transfer information between cerebral hemispheres,” the authors wrote. “The brain seems to anticipate the transfer and acknowledge its completion.”

But they also note, based on other studies, that the system of interhemispheric coordination can sometimes appear to break down in certain neurological conditions including schizophrenia, autism, depression, dyslexia, and multiple sclerosis. The new study may lend insight into the specific dynamics needed for it to succeed.

In addition to Broschard, Roy, and Miller, the paper’s other authors are Scott Brincat and Meredith Mahnke.

Funding for the study came from the Office of Naval Research, the National Eye Institute of the National Institutes of Health, The Freedom Together Foundation, and The Picower Institute for Learning and Memory.

An adaptable evaluation of justice and interest groups

Fri, 09/26/2025 - 12:00am

In 2024, an association of female senior citizens in Switzerland won a case at the European Court of Human Rights. Their country, the women contended, needed to do more to protect them from climate change, since heat waves can make the elderly particularly vulnerable. The court ruled in favor of the group, saying that states belonging to the Council of Europe have a “positive obligation” to protect citizens from “serious adverse effects of climate change on lives, health, well-being, and quality of life.”

The exact policy implications of such rulings can be hard to assess. But there are still subtle civic implications related to the ruling that bear consideration.

For one thing, although the case was brought by a particular special-interest association, its impact could benefit everyone in society. Yet the people in the group had not always belonged to it and are not wholly defined by being part of it. In a sense, while the senior-citizen association brought the case as a minority group of sorts, being a senior citizen is not the sole identity marker of the people in it.

These kinds of situations underline the complexity of interest-group dynamics as they engage with legal and political systems. Much public discourse on particularistic groups focuses on them as seemingly fixed entities with clear definitions, but being a member of a minority group is not a static thing.

“What I want to insist on is that it’s not like an absolute property. It’s a dynamic,” says MIT Professor Bruno Perreau. “It is both a complex situation and a mobile situation. You can be a member of a minority group vis-à-vis one category and not another.”

Now Perreau explores these dynamics in a book, “Spheres of Injustice,” published this year by the MIT Press. Perreau is the Cynthia L. Reed Professor of French Studies and Language in MIT’s Literature program. The French-language edition of the book was published in 2023.

Around the world, Perreau observes, much of the political contestation over interest-group politics and policies to protect minorities arrives at a similar tension point: Policies or legal rulings are sometimes crafted to redress problems, but when political conditions shift, those same policies can be discarded with claims that they themselves are unfair. In many places, this dynamic has become familiar through the contestation of policies regarding ethnic identity, gender, sexual orientation, and more.

But this is not the only paradigm of minority group politics. One aim of Perreau’s book is to add breadth to the subject, grounded in the empirical realities people experience.

After all, when it comes to being regarded as a member of a minority group, “in a given situation, some people will claim this label for themselves, whereas others will reject it,” Perreau writes. “Some consider this piece of their identity to be fundamental; others regard it as secondary. … The work of defining it is the very locus of its power.”

“Spheres of Injustice” both lays out that complexity and seeks to find ways to rethink group-oriented politics as part of an expansion of rights generally. The book arises partly out of previous work Perreau has published, often concerning France. It also developed partly in response to Perreau thinking about how rights might evolve in a time of climate change. But it arrived at its exact form as a rethinking of “Spheres of Justice,” a prominent 1980s text by political philosopher Michael Walzer.

Instead of there being a single mechanism through which justice could be applied throughout society, Walzer contended, there are many spheres of life, and the meaning of justice depends on where it is being applied.

“Because of the complexities of social relations, inequalities are impossible to fully erase,” Perreau says. “Even in the act of trying to resist an injustice, we may create other forms of injustice. Inequality is unavoidable, but his [Walzer’s] goal is to reduce injustice to the minimum, in the form of little inequalities that do not matter that much.”

Walzer’s work, however, never grapples with the kinds of political dynamics in which minority groups try to establish rights. To be clear, Perreau notes, in some cases the categorization as a minority is foisted upon people, and in other cases, it is developed by the group itself. In either case, he thinks we should consider how complex the formation and activities of the group may be.

As another example, consider that while disability rights are a contested issue in some countries and ignored in others, they also involve fluidity in terms of who advocates for and benefits from them. Imagine, Perreau says, you break a leg. Temporarily, he says, “you experience a little bit of what people with a permanent disability experience.” If you lobby for, say, better school building access or better transit access, you could be helping children, the elderly, families with kids, and more — including people and groups not styling themselves as part of a disability-rights movement.

“One goal of the book is to enhance awareness about the virtuous circle that can emerge from this kind of minority politics,” Perreau says. “It’s often regarded by many privileged people as a protection that removes something from them. But that’s not the case.”

Indeed, the politics Perreau envisions in “Spheres of Injustice” have an alternate framework, in which developing rights for some better protects others, to the point where minority rights translate into universal rights. That is not, again, meant to minimize the experience of core members of a group that has been discriminated against, but to encourage thinking about how solidifying rights for a particular group overlaps with the greater expansion of rights generally.

“I’m walking a fine line between different perspectives on what it means to belong,” Perreau says. “But this is indispensable today.”

Indeed, due to the senior citizens in Switzerland, he notes, “There will be better rights in Europe. Politics is not just a matter of diplomacy and majority decision-making. Sharing a complex world means drawing on the minority parts of our lives because it is these parts that most fundamentally connect us to others, intentionally or unintentionally. Thinking in these terms today is an essential civic virtue.”

Teamwork in motion

Thu, 09/25/2025 - 3:45pm

Graduate school can feel like a race to the finish line, but it becomes much easier with a team to cheer you on — especially if that team is literally next to you, shouting encouragement from a decorated van.

From the morning of Sept. 12 into the early afternoon on Sept. 13, two teams made up of MIT Department of Aeronautics and Astronautics (AeroAstro) graduate students, alumni, and friends ran the 2025 Ragnar Road Reach the Beach relay in two friendly yet competitive teams of 12, aptly named Team Aero and Team Astro. Ragnar races are long-distance, team-based relay events that run overnight through some of the country’s most scenic routes. The Reach the Beach course began in Lancaster, New Hampshire, and sent teams on a 204-mile trek through the White Mountains, finishing at Hampton Beach State Park.

“This all began on the Graduate Association of Aeronautics and Astronautics North End Pastry Tour in 2024. While discussing our mutual love for running, and stuffing our faces with cannoli, Maya Harris jokingly mentioned the concept of doing a Ragnar,” says Nathanael Jenkins, the eventual Team Aero captain. The idea took hold, inspiring enough interest to form a team for the first AeroAstro Ragnar relay in April 2025. From there enthusiasm continued to grow, resulting in the two current teams. 

“I was surprised at the number of people, even people who don’t run very frequently, who wanted to do another race after finishing the first Ragnar,” says Patrick Riley, captain of Team Astro. “All of the new faces are awesome because they bring new energy and excitement to the team. I love the community, I love the sport, and I think the best way to get to know someone is to be crammed into a van with them for six hours at a time.”

Resource management and real-time support

The two teams organized four vans, adorned with words of encouragement and team magnets — a Ragnar tradition — to shepherd the teams through the race, serving as rolling rest stops for runners at each exchange point. Each runner completed three to four of the 36 total legs, running between 1.7 and 11.6 miles at a time. Between legs, runners could swap out for a power nap or a protein bar. To keep morale high, teams played games and handed out awards of their own to teammates. “Noah (McAllister) got the prize for ‘Most bees removed from the car;’ Madison (Bronniman) won for ‘Eating the most tinned fish;’ I got the prize for ‘Most violent slamming of doors’ — which I hadn’t realized was in my skill set,” says Jenkins.

“This race is really unique because it bonds the team together in ways that many other races simply don’t,” says Riley, an avid runner prior to the event. “Marathons are strenuous on your body, but a Ragnar is about long-term resource management — eating, hydrating, sleep management, staying positive. Then communicating those logistics effectively and proceeding with the plan.”

Pulling off a logistics-heavy race across both teams required “magical spreadsheeting” that used distance, start time, elevation changes, and average pace to estimate finish time for each leg of the race. “Noah made it for the first race. Then a bunch of engineers saw a spreadsheet and zeroed in,” says Riley.

Engineering success

The careful planning paid off with a win for Team Astro, which finished in 31:01:13. Team Aero was close behind, finishing in 31:19:43. Yet in the end the competition mattered less than the camaraderie, as all runners celebrated together at the finish line.

“I think the big connection that we talk about is putting the teamwork skills we use in engineering into practice,” says Jenkins. “Engineers all like achieving. Runners like achieving. Many of our runners don’t run for enjoyment in the moment, but the feeling of crossing the finish line makes up for the, well, pain. In engineering, the feeling of finishing a difficult problem makes up for the pain of doing it.”

Call them gluttons for punishment or high achievers, the group is already making plans for the next race. “Everybody is immediately throwing links in the group chat for more Ragnars in the future,” says Riley. “MIT has so many people who want to explore and engage with the world around them, and they’re willing to take a chance and do crazy stuff. And we have the follow-through to make it happen.”

Runners

Team Aero: Claire Buffington, Alex Chipps, Nathanael Jenkins, Noah McAllister, Garrett Siemen, Nick Torres (Course 16, AeroAstro), Madison Bronniman, Ceci Perez Gago, Juju Wang (Course 16 alum), Katie Benoit, and Jason Wang.

Team Astro: Tim Cavesmith, Evrard Constant, Mary Foxen, Maya Harris, Jules Penot, Patrick Riley, Alex Rose, Samir Wadhwania (Course 16), Henry Price (Course 3, materials science and engineering), Katherine Hoekstra, and Ian Robertson (Woods Hole Oceanographic Institution).

Honorary teammates: Abigail Lee, Celvi Lissy, and Taylor Hampson.

How federal research support has helped create life-changing medicines

Thu, 09/25/2025 - 2:00pm

Gleevec, a cancer drug first approved for sale in 2001, has dramatically changed the lives of people with chronic myeloid leukemia. This form of cancer was once regarded as very difficult to combat, but survival rates of patients who respond to Gleevec now resemble those of the population at large.

Gleevec is also a medicine developed with the help of federally funded research. That support helped scientists better understand how to create drugs targeting the BCR-ABL oncoprotein, the cancer-causing protein behind chronic myeloid leukemia.

A new study co-authored by MIT researchers quantifies how many such examples of drug development exist. The current administration is proposing a nearly 40 percent budget reduction for the National Institutes of Health (NIH), which sponsors a significant portion of biomedical research. The study finds that over 50 percent of small-molecule drug patents this century cite at least one piece of NIH-backed research that would likely have been vulnerable to a cut of that size.

“What we found was quite striking,” says MIT economist Danielle Li, co-author of a newly published paper outlining the study’s results. “More than half of the drugs approved by the FDA since 2000 are connected to NIH research that would likely have been cut under a 40 percent budget reduction.”

Or, as the researchers write in the paper: “We found extensive connections between medical advances and research that was funded by grants that would have been cut if the NIH budget was sharply reduced.”

The paper, “What if NIH funding had been 40% smaller?” is published today as a Policy Article in the journal Science. The authors are Pierre Azoulay, the China Program Professor of International Management at the MIT Sloan School of Management; Matthew Clancy, an economist with the group Open Philanthropy; Li, the David Sarnoff Professor of Management of Technology at MIT Sloan; and Bhaven N. Sampat, an economist at Johns Hopkins University. (Biomedical researchers at both MIT and Johns Hopkins could be affected by adjustments to NIH funding.)

To conduct the study, the researchers leveraged the fact that the NIH uses priority lists to determine which projects get funded. That makes it possible to discern which projects were in the lower 40 percent of NIH-backed projects, priority-wise, for a given time period. The researchers call these “at-risk” pieces of research. Using these data, which cover 1980 through 2007, the scholars examined the patents of new molecular entities — drugs with a new active ingredient — approved by the U.S. Food and Drug Administration since 2000. There is typically a lag between academic research and subsequent related drug development.

The study focuses on small-molecule drugs — compact organic compounds, often taken orally as medicine — whereas NIH funding supports a wider range of advancements in medicine generally. Based on how many of these FDA-approved small-molecule medicines were linked to at-risk research from the prior period, the researchers estimated what kinds of consequences a 40 percent cut in funding would have generated going forward.

The study distinguishes between two types of links new drugs have to NIH funding. Some drug patents have what the researchers call “direct” links to new NIH-backed projects that generated new findings relevant to the development of those particular drugs. Other patents have “indirect” links to the NIH, when they cite prior NIH-funded studies that contributed to the overall body of knowledge used in drug development.

The analysis finds that 40 of the FDA-approved medications — or 7.1 percent — have direct links to new NIH-supported studies cited in their patents. Of these, 14 patents cite at-risk pieces of NIH research.

When it comes to indirect links, of the 557 drugs approved by the FDA from 2000 to 2023, the study found that 59.4 percent have a patent citing at least one NIH-supported research publication. And, 51.4 percent cite at least one NIH-funded study from the at-risk category of projects. 

“The indirect connection is where we see the real breadth of NIH's impact,” Li says. “What the NIH does is fund research that forms the scientific foundation upon which companies and other drug developers build.”

As the researchers emphasize in the paper, there are many nuances involved in the study. A single citation of an NIH-funded study could appear in a patent for a variety of reasons, and does not necessarily mean “that the drug in question could never have been developed in its absence,” as they write in the paper. To reckon with this, the study also analyzes how many patents had at least 25 percent of their citations fall in the category of at-risk NIH-backed research. By this metric, they found that 65 of the 557 FDA-approved drugs, or 11.7 percent, met the threshold.
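
The headline shares follow directly from the counts quoted above; the arithmetic is easy to check (the first figure rounds to 7.2 percent from these rounded counts, so the paper’s 7.1 percent presumably reflects its exact underlying data).

```python
# Shares implied by the counts reported in the study.
total_drugs = 557        # new molecular entities approved by the FDA, 2000-2023

direct_links = 40        # patents directly citing new NIH-backed studies
print(f"direct links: {direct_links / total_drugs:.1%}")       # ~7.2%

quarter_at_risk = 65     # patents with >=25% of citations to at-risk research
print(f">=25% at-risk: {quarter_at_risk / total_drugs:.1%}")   # 11.7%
```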

On the other hand, as the researchers state in the paper, it is possible the study “understates the extent to which medical advances are connected to NIH research.” For one thing, as the study’s endpoint for examining NIH data is 2007, there could have been more recent pieces of research informing medications that have already received FDA approval. The study does not quantify “second-order connections,” in which NIH-supported findings may have led to additional research that directly led to drug development. Again, NIH funding also supports a broad range of studies beyond the type examined in the current paper.

It is also likely, the scholars suggest, that NIH cuts would curtail the careers of many promising scientists, and in so doing slow down medical progress. For all of these reasons, in addition to the core data itself, the scholars say the study indicates how broadly NIH-backed research has helped advance medicine.

“The worry is that these kinds of deep cuts to the NIH risk that foundation and therefore endanger the development of medicines that might be used to treat us, or our kids and grandkids, 20 years from now,” Li says.

Azoulay and Sampat have received past NIH funding. They also serve on an NIH working group about the empirical analysis of the scientific enterprise.

AI system learns from many types of scientific information and runs experiments to discover new materials

Thu, 09/25/2025 - 11:00am

Machine-learning models can speed up the discovery of new materials by making predictions and suggesting experiments. But most models today only consider a few specific types of data or variables. Compare that with human scientists, who work in a collaborative environment and consider experimental results, the broader scientific literature, imaging and structural analysis, personal experience or intuition, and input from colleagues and peer reviewers.

Now, MIT researchers have developed a method for optimizing materials recipes and planning experiments that incorporates information from diverse sources like insights from the literature, chemical compositions, microstructural images, and more. The approach is part of a new platform, named Copilot for Real-world Experimental Scientists (CRESt), that also uses robotic equipment for high-throughput materials testing, the results of which are fed back into large multimodal models to further optimize materials recipes.

Human researchers can converse with the system in natural language, with no coding required, and the system makes its own observations and hypotheses along the way. Cameras and visual language models also allow the system to monitor experiments, detect issues, and suggest corrections.

“In the field of AI for science, the key is designing new experiments,” says Ju Li, School of Engineering Carl Richard Soderberg Professor of Power Engineering. “We use multimodal feedback — for example information from previous literature on how palladium behaved in fuel cells at this temperature, and human feedback — to complement experimental data and design new experiments. We also use robots to synthesize and characterize the material’s structure and to test performance.”

The system is described in a paper published in Nature. The researchers used CRESt to explore more than 900 chemistries and conduct 3,500 electrochemical tests, leading to the discovery of a catalyst material that delivered record power density in a fuel cell that runs on formate salt to produce electricity.

Joining Li on the paper as first authors are PhD student Zhen Zhang, Zhichu Ren PhD ’24, PhD student Chia-Wei Hsu, and postdoc Weibin Chen. Their coauthors are MIT Assistant Professor Iwnetim Abate; Associate Professor Pulkit Agrawal; JR East Professor of Engineering Yang Shao-Horn; MIT.nano researcher Aubrey Penn; Zhang-Wei Hong PhD ’25; Hongbin Xu PhD ’25; Daniel Zheng PhD ’25; MIT graduate students Shuhan Miao and Hugh Smith; MIT postdocs Yimeng Huang, Weiyin Chen, Yungsheng Tian, Yifan Gao, and Yaoshen Niu; former MIT postdoc Sipei Li; and collaborators including Chi-Feng Lee, Yu-Cheng Shao, Hsiao-Tsu Wang, and Ying-Rui Lu.

A smarter system

Materials science experiments can be time-consuming and expensive. They require researchers to carefully design workflows, make new materials, and run a series of tests and analyses to understand what happened. Those results are then used to decide how to improve the material.

To improve the process, some researchers have turned to a machine-learning strategy known as active learning to make efficient use of previous experimental data points and explore or exploit those data. When paired with a statistical technique known as Bayesian optimization (BO), active learning has helped researchers identify new materials for things like batteries and advanced semiconductors.

“Bayesian optimization is like Netflix recommending the next movie to watch based on your viewing history, except instead it recommends the next experiment to do,” Li explains. “But basic Bayesian optimization is too simplistic. It uses a boxed-in design space, so if I say I’m going to use platinum, palladium, and iron, it only changes the ratio of those elements in this small space. But real materials have a lot more dependencies, and BO often gets lost.”
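
For the curious, the recommendation loop Li describes can be written in a few lines with an off-the-shelf Gaussian-process model. The sketch below shows the basic “boxed-in” version he critiques, optimizing only the ratios of three elements; scikit-learn and an upper-confidence-bound rule are stand-ins for whatever CRESt uses internally.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor

rng = np.random.default_rng(1)

def random_compositions(n):
    """Random fractions of three elements (e.g., Pt/Pd/Fe) summing to 1."""
    x = rng.random((n, 3))
    return x / x.sum(axis=1, keepdims=True)

X_tried = random_compositions(5)   # recipes already tested
y_tried = rng.random(5)            # measured performance (stand-in values)

gp = GaussianProcessRegressor().fit(X_tried, y_tried)

# Upper-confidence-bound acquisition: favor candidates predicted to do well
# (high mean) or poorly understood (high std) -- exploit vs. explore.
candidates = random_compositions(1000)
mean, std = gp.predict(candidates, return_std=True)
next_recipe = candidates[np.argmax(mean + 2.0 * std)]
print("next experiment:", next_recipe.round(3))
```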

Most active learning approaches also rely on single data streams that don’t capture everything that goes on in an experiment. To equip computational systems with more human-like knowledge, while still taking advantage of the speed and control of automated systems, Li and his collaborators built CRESt.

CRESt’s robotic equipment includes a liquid-handling robot, a carbothermal shock system to rapidly synthesize materials, an automated electrochemical workstation for testing, characterization equipment including automated electron microscopy and optical microscopy, and auxiliary devices such as pumps and gas valves, which can be remotely controlled. Many processing parameters can also be tuned.

With the user interface, researchers can chat with CRESt and tell it to use active learning to find promising materials recipes for different projects. CRESt can incorporate up to 20 precursor molecules and substrates into a recipe. To guide material designs, CRESt’s models search through scientific papers for descriptions of elements or precursor molecules that might be useful. When human researchers tell CRESt to pursue new recipes, it kicks off a robotic symphony of sample preparation, characterization, and testing. The researcher can also ask CRESt to perform image analysis from scanning electron microscopy imaging, X-ray diffraction, and other sources.

Information from those processes is used to train the active learning models, which use both literature knowledge and current experimental results to suggest further experiments and accelerate materials discovery.

“For each recipe we use previous literature text or databases, and it creates these huge representations of every recipe based on the previous knowledge base before even doing the experiment,” says Li. “We perform principal component analysis in this knowledge embedding space to get a reduced search space that captures most of the performance variability. Then we use Bayesian optimization in this reduced space to design the new experiment. After the new experiment, we feed newly acquired multimodal experimental data and human feedback into a large language model to augment the knowledgebase and redefine the reduced search space, which gives us a big boost in active learning efficiency.”
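
Schematically, the workflow Li outlines is: embed each candidate recipe using prior literature, compress the embeddings with principal component analysis, and hand the reduced space to the Bayesian optimizer sketched above. The embedding dimension and recipe count below are invented for illustration; CRESt’s actual models and sizes are not specified here.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)

# Stand-in for literature-derived knowledge embeddings: one high-dimensional
# vector per candidate recipe, produced in practice by a language model.
n_recipes, embed_dim = 500, 768
knowledge_embeddings = rng.standard_normal((n_recipes, embed_dim))

# Project onto a few principal components. With real, structured embeddings
# (unlike this random stand-in), a handful of components can capture most of
# the performance-relevant variability.
pca = PCA(n_components=10)
reduced = pca.fit_transform(knowledge_embeddings)
print("variance captured:", pca.explained_variance_ratio_.sum().round(3))

# Bayesian optimization then proposes new experiments as points in `reduced`;
# after each round, fresh data and feedback re-embed and re-fit this space.
```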

Materials science experiments can also face reproducibility challenges. To address the problem, CRESt monitors its experiments with cameras, looking for potential problems and suggesting solutions via text and voice to human researchers.

The researchers used CRESt to develop an electrode material for an advanced type of high-density fuel cell known as a direct formate fuel cell. After exploring more than 900 chemistries over three months, CRESt discovered a catalyst material made from eight elements that achieved a 9.3-fold improvement in power density per dollar over pure palladium, an expensive precious metal. In further tests, CRESt’s material delivered a record power density in a working direct formate fuel cell, even though the cell contained just one-fourth of the precious metals of previous devices.

The results show the potential for CRESt to find solutions to real-world energy problems that have plagued the materials science and engineering community for decades.

“A significant challenge for fuel-cell catalysts is the use of precious metal,” says Zhang. “For fuel cells, researchers have used various precious metals like palladium and platinum. We used a multielement catalyst that also incorporates many other cheap elements to create the optimal coordination environment for catalytic activity and resistance to poisoning species such as carbon monoxide and adsorbed hydrogen atoms. People have been searching for low-cost options for many years. This system greatly accelerated our search for these catalysts.”

A helpful assistant

Early on, poor reproducibility emerged as a major problem that limited the researchers’ ability to perform their new active learning technique on experimental datasets. Material properties can be influenced by the way the precursors are mixed and processed, and any number of problems can subtly alter experimental conditions, requiring careful inspection to correct.

To partially automate the process, the researchers coupled computer vision and vision language models with domain knowledge from the scientific literature, which allowed the system to hypothesize sources of irreproducibility and propose solutions. For example, the models can notice when there’s a millimeter-sized deviation in a sample’s shape or when a pipette moves something out of place. The researchers incorporated some of the model’s suggestions, leading to improved consistency, suggesting the models already make good experimental assistants.

The researchers noted that humans still performed most of the debugging in their experiments.

“CRESt is an assistant, not a replacement, for human researchers,” Li says. “Human researchers are still indispensable. In fact, we use natural language so the system can explain what it is doing and present observations and hypotheses. But this is a step toward more flexible, self-driving labs.”

Study shows mucus contains molecules that block Salmonella infection

Thu, 09/25/2025 - 12:00am

Mucus is more than just a sticky substance: It contains a wealth of powerful molecules called mucins that help to tame microbes and prevent infection. In a new study, MIT researchers have identified mucins that defend against Salmonella and other bacteria that cause diarrhea.

The researchers now hope to mimic this defense system to create synthetic mucins that could help prevent or treat illness in soldiers or other people at risk of exposure to Salmonella. Such mucins could also help prevent “traveler’s diarrhea,” a gastrointestinal infection caused by consuming contaminated food or water.

Mucins are bottlebrush-shaped polymers made of complex sugar molecules known as glycans, which are tethered to a peptide backbone. In this study, the researchers discovered that a mucin called MUC2 turns off genes that Salmonella uses to enter and infect host cells.

“By using and reformatting this motif from the natural innate immune system, we hope to develop strategies to prevent diarrhea before it even starts. This approach could provide a low-cost solution to a major global health challenge that costs billions annually in lost productivity, health care expenses, and human suffering,” says Katharina Ribbeck, the Andrew and Erna Viterbi Professor of Biological Engineering at MIT and the senior author of the study.

MIT Research Scientist Kelsey Wheeler PhD ’21 and Michaela Gold PhD ’22 are the lead authors of the paper, which appeared Tuesday in the journal Cell Reports.

Blocking infection

Mucus lines much of the body, providing a physical barrier to infection, but that’s not all it does. Over the past decade, Ribbeck has identified mucins that can help to disarm Vibrio cholerae, as well as Pseudomonas aeruginosa, which can infect the lungs and other organs, and the yeast Candida albicans.

In the new study, the researchers wanted to explore how mucins from the digestive tract might interact with Salmonella enterica, a foodborne pathogen that can cause illness when people consume raw or undercooked food or contaminated water.

To infect host cells, Salmonella must produce proteins that are part of the type 3 secretion system (T3SS), which helps bacteria form needle-like complexes that transfer bacterial proteins directly into host cells. These proteins are all encoded on a segment of DNA called Salmonella pathogenicity island 1 (SPI-1).

The researchers found that when they exposed Salmonella to a mucin called MUC2, which is found in the intestines, the bacteria stopped producing the proteins encoded by SPI-1, and they were no longer able to infect cells.

Further studies revealed that MUC2 achieves this by turning off a regulatory bacterial protein known as HilD. When this protein is blocked by mucins, it can no longer activate the T3SS genes.

Using computational simulations, the researchers showed that certain monosaccharides found in glycans, including GlcNAc and GalNAc, can attach to a specific binding site of the HilD protein. However, their studies showed that these monosaccharides can’t turn off HilD on their own — the shutoff only occurs when the glycans are tethered to the peptide backbone of the mucin.

The researchers also discovered that a similar mucin called MUC5AC, which is found in the stomach, can block HilD. And, both MUC2 and MUC5AC can turn off virulence genes in other foodborne pathogens that also use HilD as a gene regulator.

Mucins as medicine

Ribbeck and her students now plan to explore ways to use synthetic versions of these mucins to help boost the body’s natural defenses and protect the GI tract from Salmonella and other infections.

Studies from other labs have shown that in mice, Salmonella tends to infect portions of the GI tract that have a thin mucus barrier, or no barrier at all.

“Part of Salmonella’s evasion strategy for this host defense is to find locations where mucus is absent and then infect there. So, one could imagine a strategy where we try to bolster mucus barriers to protect those areas with limited mucin,” Wheeler says.

One way to deploy synthetic mucins could be to add them to oral rehydration salts — mixtures of electrolytes that are dissolved in water and used to treat dehydration caused by diarrhea and other gastrointestinal illnesses.

Another potential application for synthetic mucins would be to incorporate them into a chewable tablet that could be consumed before traveling to areas where Salmonella and other diarrheal illnesses are common. This kind of “pre-exposure prophylaxis” could help prevent a great deal of suffering and lost productivity due to illness, the researchers say.

“Mucin mimics would particularly shine as preventatives, because that’s how the body evolved mucus — as part of this innate immune system to prevent infection,” Wheeler says.

The research was funded by the U.S. Army Research Office, the U.S. Army Institute for Collaborative Biotechnologies, the U.S. National Science Foundation, the U.S. National Institute of Environmental Health Sciences, the U.S. National Institutes of Health, and the German Research Foundation.

New AI system could accelerate clinical research

Thu, 09/25/2025 - 12:00am

Annotating regions of interest in medical images, a process known as segmentation, is often one of the first steps clinical researchers take when running a new study involving biomedical images.

For instance, to determine how the size of the brain’s hippocampus changes as patients age, the scientist first outlines each hippocampus in a series of brain scans. For many structures and image types, this is often a manual process that can be extremely time-consuming, especially if the regions being studied are challenging to delineate.

To streamline the process, MIT researchers developed an artificial intelligence-based system that enables a researcher to rapidly segment new biomedical imaging datasets by clicking, scribbling, and drawing boxes on the images. This new AI model uses these interactions to predict the segmentation.

As the user marks additional images, the number of interactions they need to perform decreases, eventually dropping to zero. The model can then segment each new image accurately without user input.

It can do this because the model’s architecture has been specially designed to use information from images it has already segmented to make new predictions.

Unlike other medical image segmentation models, this system allows the user to segment an entire dataset without repeating their work for each image.

In addition, the interactive tool does not require a presegmented image dataset for training, so users don’t need machine-learning expertise or extensive computational resources. They can use the system for a new segmentation task without retraining the model.

In the long run, this tool could accelerate studies of new treatment methods and reduce the cost of clinical trials and medical research. It could also be used by physicians to improve the efficiency of clinical applications, such as radiation treatment planning.

“Many scientists might only have time to segment a few images per day for their research because manual image segmentation is so time-consuming. Our hope is that this system will enable new science by allowing clinical researchers to conduct studies they were prohibited from doing before because of the lack of an efficient tool,” says Hallee Wong, an electrical engineering and computer science graduate student and lead author of a paper on this new tool.

She is joined on the paper by Jose Javier Gonzalez Ortiz PhD ’24; John Guttag, the Dugald C. Jackson Professor of Computer Science and Electrical Engineering; and senior author Adrian Dalca, an assistant professor at Harvard Medical School and Massachusetts General Hospital, and a research scientist in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). The research will be presented at the International Conference on Computer Vision.

Streamlining segmentation

There are primarily two methods researchers use to segment new sets of medical images. With interactive segmentation, they input an image into an AI system and use an interface to mark areas of interest. The model predicts the segmentation based on those interactions.

A tool previously developed by the MIT researchers, ScribblePrompt, allows users to do this, but they must repeat the process for each new image.

Another approach is to develop a task-specific AI model to automatically segment the images. This approach requires the user to manually segment hundreds of images to create a dataset, and then train a machine-learning model. That model predicts the segmentation for a new image. But the user must start the complex, machine-learning-based process from scratch for each new task, and there is no way to correct the model if it makes a mistake.

This new system, MultiverSeg, combines the best of each approach. It predicts a segmentation for a new image based on user interactions, like scribbles, but also keeps each segmented image in a context set that it refers to later.

When the user uploads a new image and marks areas of interest, the model draws on the examples in its context set to make a more accurate prediction, with less user input.

The researchers designed the model’s architecture to use a context set of any size, so the user doesn’t need to have a certain number of images. This gives MultiverSeg the flexibility to be used in a range of applications.
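
The mechanism described above amounts to a simple control loop: predict from the user’s interactions plus a growing context set, then bank each finished segmentation for later images. The sketch below is schematic; `multiverseg_predict` and the other callables are hypothetical stand-ins, not the tool’s published API.

```python
def segment_dataset(images, multiverseg_predict, get_user_edits, is_acceptable):
    """Schematic MultiverSeg-style loop: each finished (image, mask) pair
    joins the context set, so later images need fewer interactions."""
    context = []                      # grows without a fixed size limit
    results = []
    for image in images:
        interactions = []             # clicks, scribbles, and boxes
        mask = multiverseg_predict(image, interactions, context)
        while not is_acceptable(mask):
            interactions += get_user_edits(image, mask)  # user corrects
            mask = multiverseg_predict(image, interactions, context)
        context.append((image, mask)) # bank the example for future images
        results.append(mask)
    return results
```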

“At some point, for many tasks, you shouldn’t need to provide any interactions. If you have enough examples in the context set, the model can accurately predict the segmentation on its own,” Wong says.

The researchers carefully engineered and trained the model on a diverse collection of biomedical imaging data to ensure it had the ability to incrementally improve its predictions based on user input.

The user doesn’t need to retrain or customize the model for their data. To use MultiverSeg for a new task, one can upload a new medical image and start marking it.

When the researchers compared MultiverSeg to state-of-the-art tools for in-context and interactive image segmentation, it outperformed each baseline.

Fewer clicks, better results

Unlike these other tools, MultiverSeg requires less user input with each successive image. By the ninth new image, it needed only two clicks from the user to generate a segmentation more accurate than one produced by a model trained specifically for the task.

For some image types, like X-rays, the user might only need to segment one or two images manually before the model becomes accurate enough to make predictions on its own.

The tool’s interactivity also enables the user to correct the model’s prediction, iterating until it reaches the desired level of accuracy. Compared to the researchers’ previous system, MultiverSeg reached 90 percent accuracy with roughly two-thirds the number of scribbles and three-quarters the number of clicks.

“With MultiverSeg, users can always provide more interactions to refine the AI predictions. This still dramatically accelerates the process because it is usually faster to correct something that exists than to start from scratch,” Wong says.

Moving forward, the researchers want to test this tool in real-world situations with clinical collaborators and improve it based on user feedback. They also want to enable MultiverSeg to segment 3D biomedical images.

This work is supported, in part, by Quanta Computer, Inc. and the National Institutes of Health, with hardware support from the Massachusetts Life Sciences Center.

Technique makes complex 3D printed parts more reliable

Thu, 09/25/2025 - 12:00am

People are increasingly turning to software to design complex material structures like airplane wings and medical implants. But as design models have grown more capable, fabrication techniques haven’t kept up. Even 3D printers struggle to reliably produce the precise designs created by algorithms. The problem has led to a disconnect between how a material is expected to perform and how it actually does.

Now, MIT researchers have created a way for models to account for 3D printing’s limitations during the design process. In experiments, they showed their approach could be used to make materials that perform much more closely to the way they’re intended to.

“If you don’t account for these limitations, printers can either over- or under-deposit material by quite a lot, so your part becomes heavier or lighter than intended. It can also over- or underestimate the material performance significantly,” says Gilbert W. Winslow Associate Professor of Civil and Environmental Engineering Josephine Carstensen. “With our technique, you know what you’re getting in terms of performance because the numerical model and experimental results align very well.”

The approach is described in the journal Materials and Design, in an open-access paper co-authored by Carstensen and PhD student Hajin Kim-Tackowiak.

Matching theory with reality

Over the last decade, new design and fabrication technologies have transformed the way things are made, especially in industries like aerospace, automotive, and biomedical engineering, where materials must reach precise weight-to-strength ratios and other performance thresholds. In particular, 3D printing allows materials to be made with more complex internal structures.

“3D printing processes generally give us more flexibility because we don’t have to come up with forms or molds for things that would be made through more traditional means like injection molding,” Kim-Tackowiak explains.

As 3D printing has made production more precise, so have methods for designing complex material structures. One of the most advanced computational design techniques, known as topology optimization, has been used to generate new and often surprising material structures that can outperform conventional designs, in some cases approaching theoretical performance limits. It is currently being used to design materials with optimized stiffness and strength, maximized energy absorption, fluid permeability, and more.

But topology optimization often creates designs at extremely fine scales that 3D printers have struggled to reliably reproduce. The problem is the size of the print head that extrudes the material. If the design specifies a layer to be 0.5 millimeters thick, for instance, and the print head is only capable of extruding 1-millimeter-thick layers, the final design will be warped and imprecise.
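
The mismatch is easy to state concretely: any feature thinner than one bead width cannot be printed as designed. A toy pre-check under that assumption, using the numbers from the example above:

```python
NOZZLE_WIDTH_MM = 1.0   # thinnest bead this print head can extrude (assumed)

def printable(feature_widths_mm, nozzle=NOZZLE_WIDTH_MM):
    """Flag design features thinner than one extruded bead; such features
    will be over- or under-deposited rather than printed as drawn."""
    return [(w, w >= nozzle) for w in feature_widths_mm]

# The 0.5 mm layer from the example fails; a 2 mm wall is fine.
print(printable([0.5, 2.0]))   # [(0.5, False), (2.0, True)]
```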

Another problem has to do with the way 3D printers create parts, with a print head extruding a thin bead of material as it glides across the printing area, gradually building parts layer by layer. That can cause weak bonding between layers, making the part more prone to separation or failure.

The researchers sought to address the disconnect between expected and actual properties of materials that arise from those limitations.

“We thought, ‘We know these limitations in the beginning, and the field has gotten better at quantifying these limitations, so we might as well design from the get-go with that in mind,’” Kim-Tackowiak says.

In previous work, Carstensen developed an algorithm that embedded information about the print nozzle size into design algorithms for beam structures. For this paper, the researchers built off that approach to incorporate the direction of the print head and the corresponding impact of weak bonding between layers. They also made it work with more complex, porous structures that can have extremely elastic properties.

The approach allows users to add variables to the design algorithms that account for the center of the bead being extruded from a print head and the exact location of the weaker bonding region between layers. The approach also automatically dictates the path the print head should take during production.
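
As a crude illustration of the bookkeeping involved, the sketch below marks where deposited beads sit along a vertical slice and where the weaker layer-to-layer interfaces fall, given bead centerlines and a bead width. It is a toy model of the idea, not the authors’ formulation.

```python
import numpy as np

BEAD_WIDTH_MM = 1.0   # matches the nozzle size above (assumed)

def deposit_map(centerlines_y, width=BEAD_WIDTH_MM, height_mm=10.0, res=0.1):
    """Mark material and inter-bead bond interfaces along a vertical slice,
    given the y-coordinates of horizontal bead centerlines (all in mm)."""
    y = np.arange(0.0, height_mm, res)
    material = np.zeros_like(y, dtype=bool)
    interfaces = []
    for c in sorted(centerlines_y):
        material |= np.abs(y - c) <= width / 2
        interfaces.append(c + width / 2)   # weak bond at each bead's top
    return y, material, interfaces[:-1]     # last bead's top is a free surface

y, mat, bond_lines = deposit_map([0.5, 1.5, 2.5])
print("weak bond interfaces at y =", bond_lines)   # [1.0, 2.0]
```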

The researchers used their technique to create a series of repeating 2D designs with hollow pores of varying sizes — that is, varying densities. They compared those creations to materials made using traditional topology optimization designs of the same densities.

In tests, the traditionally designed materials deviated from their intended mechanical performance more than materials designed using the researchers’ new technique at material densities under 70 percent. The researchers also found that conventional designs consistently over-deposited material during fabrication. Overall, the researchers’ approach led to parts with more reliable performance at most densities.

“One of the challenges of topology optimization has been that you need a lot of expertise to get good results, so that once you take the designs off the computer, the materials behave the way you thought they would,” Carstensen says. “We’re trying to make it easy to get these high-fidelity products.”

Scaling a new design approach

The researchers believe this is the first time a design technique has accounted for both the print head size and weak bonding between layers.

“When you design something, you should use as much context as possible,” Kim-Tackowiak says. “It was rewarding to see that putting more context into the design process makes your final materials more accurate. It means there are fewer surprises. Especially when we’re putting so much more computational resources into these designs, it’s nice to see we can correlate what comes out of the computer with what comes out of the production process.”

In future work, the researchers hope to improve their method for higher material densities and for different kinds of materials like cement and ceramics. Still, they said their approach offered an improvement over existing techniques, which often require experienced 3D printing specialists to help account for the limitations of the machines and materials.

“It was cool to see that just by putting in the size of your deposition and the bonding property values, you get designs that would have required the consultation of somebody who’s worked in the space for years,” Kim-Tackowiak says.

The researchers say the work paves the way to design with more materials.

“We’d like to see this enable the use of materials that people have disregarded because printing with them has led to issues,” Kim-Tackowiak says. “Now we can leverage those properties or work with those quirks as opposed to just not using all the material options we have at our disposal.”

Signposts on the way to new territory

Wed, 09/24/2025 - 4:10pm

MIT professors Zachary Hartwig and Wanda Orlikowski exemplify a rare but powerful kind of mentorship — one grounded not just in intellectual excellence, but in deep personal care. They remind us that transformative academic leadership starts with humanity. 

Whether it's Hartwig’s ability to bring engineering brilliance to life through genuine personal connection, or Orlikowski’s unwavering support for those who share in her mission to create meaningful impact, both foster environments where people, not just ideas, can thrive. 

Their students and colleagues describe feeling seen, supported, and encouraged not only to grow as scholars, but as people. It’s this ethic of care, of valuing the human behind the research, that defines their mentorship and elevates those around them.

Hartwig and Orlikowski are two members of the 2023-25 Committed to Caring cohort, recognized for fostering transformative research through growth, independence, and support. The Committed to Caring program honors mentors who go above and beyond for MIT graduate students.

Zachary Hartwig: Signposts on the way to new territory

Zachary (Zach) Seth Hartwig is an associate professor in the Department of Nuclear Science and Engineering (NSE) with a co-appointment at the MIT Plasma Science and Fusion Center (PSFC). He has worked in the areas of large-scale applied superconductivity, magnet fusion device design, radiation detector development, and accelerator science and engineering. His active research focuses on the development of high-field superconducting magnet technologies for fusion energy and accelerated irradiation methods for fusion materials using ion beams.

One nominator wrote, “although he didn’t formally become my advisor until after I submitted my thesis prospectus, I always felt like Zach had my back.” This feeling of support was echoed in numerous examples from Hartwig’s advisees.

When the pandemic started, Hartwig made sure that the student had ongoing support and a safe place to simply exist as an international visiting student during a tumultuous time. This care often surfaced in small ways: when the mentee needed to debug their cryogenic system, Hartwig showed up at the lab every day to help plan the next test; when the same student struggled to write the introduction of their first paper, Hartwig kept providing support; and when the student wanted to practice for their qualifying exam, Hartwig insisted on helping until the last day. When the advisee’s funding was nearing its end, Hartwig also secured transition support to bridge the gap.

The nominator reflected on Hartwig’s cheerful and positive mentorship style, noting that “through it all, he … always valued my ideas, he was never judgmental, he never raised his voice, he never dismissed me.” 

Hartwig characterizes himself as “highly supportive, but from the backseat.” He is active with and available to his students; however, it is essential to him that they are the ones driving the research. “Graduate students need to experience increasing amounts of autonomy, but within a supportive framework that fades as they need to rely on it less and less as they become independent researchers,” he notes.

Hartwig shapes the intellectual maturation of his students. He believes that graduate school is not solely about results or publications, but about whom students become in the process. 

“The most important output of a PhD program is not your results, your papers, or your thesis; it’s YOU,” he emphasizes. His mentorship is built around this philosophy, creating an environment where students steadily evolve into independent researchers.

Importantly, Hartwig cultivates a culture where daring, unconventional ideas are not just allowed — they’re encouraged. He models this approach through his own career, which has taken bold leaps across disciplines and technologies. 

“MIT should do things only MIT can do,” he tells his students. His message is clear: Graduate students should not be afraid to go against the grain.

This philosophy has inspired many of his students to explore nontraditional research paths, armed with the confidence that failure is not a setback, but a sign that they are asking ambitious questions. Hartwig regularly reinforces this, reminding students that null results and dead ends often teach us the most. 

“They’re the signposts you have to pass on the way to new territory,” he says.

Ultimately, one of the most fulfilling parts of Hartwig’s work is witnessing the moment when it all “clicks” for a student — when they begin to lead boldly, push back thoughtfully, and take true ownership of their research. “It’s a beautiful thing when it happens,” he reflects. 

For Hartwig, mentorship is about fostering not only the skills of a scientist, but the identity of one. His students don’t just grow in knowledge, they grow in courage, conviction, and clarity.

Wanda Orlikowski: Shaping research by supporting the people who make it happen

Wanda Orlikowski is the Alfred P. Sloan Professor of Information Technology and Organization Studies at MIT’s Sloan School of Management. Her research examines technologies in the workplace, with a particular focus on how digital reconfigurations generate significant shifts in organizing, coordination, and accountability. She is currently exploring the digital transformation of work.

Through times of uncertainty, students always find support in Orlikowski. One of her nominators shared that they encountered many moments of doubt while developing their dissertation research. “I [have had] concerns … that I'm not making progress. I do all this work, and it’s not going anywhere, I keep returning back to where I started,” the mentee reflected.

Orlikowski has walked this advisee through those moments patiently and with great empathy, connecting her own experiences with those of her students. She often talks about the research process not being a straight line of progress, but rather a spiral. 

“This metaphor … suggests that coming back to ideas again and again is in fact progress,” rather than failure, the nominator wrote. “Every time I come back to it, I’m at a higher plane, and I’m refining the same idea further and further.”

Students say that Orlikowski makes an effort to support them through moments of doubt, turning these moments into opportunities for growth. “It has … been such a benefit for me to have her near-constant availability,” the student said. “She listens to my thoughts and lets me just talk and spitball ideas, without her interrupting.” 

Orlikowski pushes and prods her students to elaborate, clarify, and expand their thoughts. She does this proactively, spending many hours every week talking with her students, reading their writing, and making meticulous comments on their work.

Orlikowski has also been remarkably perceptive about when her students need support. One nominator struggled during their first holiday season in the PhD program, unable to visit their family. Orlikowski noticed the student’s isolation and reached out, inviting the student to her family’s Christmas dinner, a gesture that turned into a heartwarming tradition.

“I gave her an orchid that first year, and to this day, it continues to bloom each year. Wanda regularly sends me pictures of it, and the joy she expresses in keeping it alive means so much to me. I feel that in her care, both the orchid and our connection have flourished,” the mentee remarks.

“One of the things I’ve appreciated most about Wanda is that she has never tried to change who I am,” the nominator adds. They go on to describe themselves as not a very strategic or extroverted person by nature, and for a long time, they struggled with the idea that these qualities might hinder their success in academia. “Wanda has helped me embrace my true self.”

“It’s not about fitting into a mold,” Orlikowski reminded the student. “It’s about being true to who you are, and doing great work.” Her support has made the student comfortable with their approach to both research and life.

The academic world often feels like it rewards self-promotion and strategic maneuvering, but Orlikowski has alleviated much of her students’ anxiety about whether they can be competitive without it. “You don’t have to pretend to be something you’re not,” she assures them. “The work will speak for itself.” 

Orlikowski’s support for her students extends beyond encouragement; she advocates for their work, helping them gain visibility and traction in the broader academic community. “It’s not just words — she has actively supported me, promoting my work through her network of students and peers,” the nominator articulated. 

Her belief in her mentees, and her willingness to support their work, has had a profound impact on their academic journey.

By attracting the world’s sharpest talent, MIT helps keep the US a step ahead

Wed, 09/24/2025 - 11:55am

Just as the United States has prospered through its ability to draw talent from every corner of the globe, so too has MIT thrived as a magnet for the world’s keenest and most curious minds — many of whom remain here to invent solutions, create companies, and teach future leaders, contributing to America’s success.

President Ronald Reagan remarked in 1989 that the United States leads the world “because, unique among nations, we draw our people — our strength — from every country and every corner of the world. And by doing so we continuously renew and enrich our nation.” Those words still ring true 36 years later — and the sentiment resonates especially at MIT.

"To find people with the drive, skill, and daring to see, discover, and invent things no one else can, we open ourselves to talent from every corner of the United States and from around the globe,” says MIT President Sally Kornbluth. “MIT is an American university, proudly so — but we would be gravely diminished without the students and scholars who join us from other nations."

MIT’s steadfast commitment to attracting the best and brightest talent from around the world has contributed not just to its own success, but also to that of the nation as a whole. MIT’s stature as an international hub of education and innovation adds value to the U.S. economy and competitiveness in myriad ways — from foreign-born faculty delivering breakthroughs here and founding American companies that create American jobs to international students contributing over $264 million to the U.S. economy in the 2023-24 school year alone.

Highlighting the extent and value of its global character, the Office of the Vice Provost for International Activities recently expanded its video series, “The World at MIT.” In it, 20 faculty members born outside the United States tell how they dreamed of coming to MIT while growing up abroad and eventually joined the MIT faculty, where they’ve helped establish and maintain global leadership in science while teaching the next generation of innovators. A common thread running through their stories is the campus’s distinct nature as a community that is both profoundly American and deeply connected to the people, institutions, and concerns of regions and nations around the globe.

Joining the MIT faculty in 1980, MIT President Emeritus L. Rafael Reif knew almost instantly that he would stay.

“I was impressed by the richness of the variety of groups of people and cultures here,” says Reif, who moved to the United States from Venezuela and eventually served as MIT’s president from 2012 to 2022. “There is no richer place than MIT, because every point of view is here. That is what makes the place so special.”

The benefits of welcoming international students and researchers to campus extend well beyond MIT. More than 17,000 MIT alumni born elsewhere now call the United States home, for example, and many have founded U.S.-based companies that have generated billions of dollars in economic activity.

Contributing to America’s prestige internationally, one-third of MIT’s 104 Nobel laureates — including seven of the eight Nobel winners over the last decade — were born abroad. Drawn to MIT, they went on to make their breakthroughs in the United States. Among them is Lester Wolfe Professor of Chemistry Moungi Bawendi, who won the Nobel Prize in Chemistry in 2023 for his work in the chemical production of high-quality quantum dots.   

“MIT is a great environment. It’s very collegial, very collaborative. As a result, we also have amazing students,” says Bawendi, who lived in France and Tunisia as a child before moving to the U.S. “I couldn’t have done my first three years here, which eventually got me a Nobel Prize, without having really bold, smart, adventurous graduate students.”

The give-and-take among MIT faculty and students also inspires electrical engineering and computer science professor Akintunde Ibitayo (Tayo) Akinwande, who grew up in Nigeria.

“Anytime I teach a class, I always learn something from my students’ probing questions,” Akinwande says. “It gives me new insights sometimes, and that’s always the kind of environment I like — where I’m learning something new all the time.”

MIT’s global vibe inspires its students not only to explore worlds of ideas in campus labs and classrooms, but also to journey across the world itself. Forty-three percent of undergraduates pursued international experiences during the last academic year — taking courses at foreign universities, conducting research, or interning at multinational companies. MIT students and faculty alike are regularly engaged in research outside the United States, addressing some of the world’s toughest challenges and devising solutions that can be deployed back home as well as abroad. In so doing, they embody MIT’s motto of “mens et manus” (“mind and hand”), reflecting the educational ideals of MIT’s founders, who promoted education for practical application.

As someone who loves exploring “lofty questions” along with the practical design of things, Nergis Mavalvala found a perfect fit at MIT and calls her position as the Marble Professor of Astrophysics and dean of the School of Science “the best job in the world.”

“Everybody here wants to make the world a better place and are using their intellectual gifts and their education to do so,” says Mavalvala, who emigrated from Pakistan. “And I think that’s an amazing community to be part of.”

Daniela Rus agrees. Now the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science and director of MIT’s Computer Science and Artificial Intelligence Laboratory, Rus was drawn to the practical application of mathematics while still a student in her native Romania.   

“And so, now here I am at MIT, essentially bringing together the world of science and math with the world of making things,” Rus says. “I’ve been here for two decades, and it’s been an extraordinary journey.”

The daughter of an Albert Einstein aficionado, Yukiko Yamashita grew up in Japan thinking of science not as a job, but as a calling. MIT, where she is a professor of biology, is a place where people “are really open to unconventional ideas” and where “intellectual freedom” thrives.

“There is something sacred about doing science. That’s how I grew up,” Yamashita says. “There are some distinct MIT characteristics. In a good way, people can’t let go. Every day, I am creating more mystery than I answer.”

For more about the paths that brought Yamashita and others to MIT and stories of how their disparate personal histories enrich the campus and wider community, visit the “World at MIT” videos website.

“Our global community’s multiplicity of ideas, experiences, and perspectives contributes enormously to MIT’s innovative and entrepreneurial spirit and, by extension, to the innovation and competitiveness of the U.S.,” says Vice Provost for International Activities Duane Boning, whose department developed the video series. “The bottom line is that both MIT and the U.S. grow stronger when we harness the talents of the world’s best and brightest.”

Improving the workplace of the future

Wed, 09/24/2025 - 12:00am

Whitney Zhang ’21 believes in the importance of valuing workers regardless of where they fit into an organizational chart.

Zhang is a PhD student in MIT’s Department of Economics studying labor economics. She explores how the technological and managerial decisions companies make affect workers across the pay spectrum. 

“I’ve been interested in economics, economic impacts, and related social issues for a long time,” says Zhang, who majored in mathematical economics as an undergraduate. “I wanted to apply my math skills to see how we could improve policies and their effects.”

Zhang is interested in how to improve conditions for workers. She believes it’s important to build relationships with policymakers, focusing on an evidence-driven approach to policy while always remembering to center the people those policies may affect. “We have to remember the people whose lives are impacted by business operations and legislation,” she says.

She’s also aware of the complex mix of politics, social status, and financial obligations that organizations and their employees have to navigate.

“Though I’m studying workers, it’s important to consider the entire complex ecosystem when solving for these kinds of challenges, including firm incentives and global economic conditions,” she says.

The intersection of tech and labor policy

Zhang began investigating employee productivity, artificial intelligence, and related economic and labor market phenomena early in her time as a doctoral student, collaborating frequently with fellow PhD students in the department.

A collaboration with economics doctoral student Shakked Noy yielded a 2023 study investigating ChatGPT as a tool to improve productivity. Their research found that it substantially increased workers’ productivity on writing tasks, especially for workers who initially performed worst on those tasks.

“This was one of the earliest pieces of evidence on the productivity effects of generative AI, and contributed to providing concrete data on how impactful these types of tools might be in the workplace and on the labor market,” Zhang says.

In other ongoing research — “Determinants of Irregular Worker Schedules” — Zhang is using data from a payroll provider to examine scheduling unpredictability, investigating why companies employ unpredictable schedules and how these schedules affect low-wage employees’ quality of life.

The scheduling project, conducted with MIT economics PhD student Nathan Lazarus, is motivated, in part, by existing sociological evidence that low-wage workers’ unpredictable schedules are associated with worse sleep and well-being. “We’ve seen a relationship between higher turnover and inconsistent, inadequate schedules, which suggests workers dis-prefer these kinds of schedules,” Zhang says.

At an academic roundtable, Zhang presented her results to Starbucks employees involved in scheduling and staffing. The attendees wanted to learn more about how different scheduling practices impacted workers and their productivity. “These are the kinds of questions that could reveal useful information for small businesses, large corporations, and others,” she says.

By conducting this research, Zhang hopes to better understand whether scheduling regulations can improve affected employees’ quality of life, while also weighing potential unintended consequences. “Why are these schedules set the way they’re set?” she asks. “Do businesses with these kinds of schedules require increased regulation?”

Another project, conducted with MIT economics doctoral student Arjun Ramani, examines the linkages between offshoring, remote work, and related outcomes. “Do the technological and managerial practices that have made remote work possible further facilitate offshoring?” she asks. “Do organizations see significant gains in efficiency? What are the impacts on U.S. and offshore workers?”

Her work is being funded through the National Science Foundation Graduate Research Fellowship Program and the Washington Center for Equitable Growth.

Putting people at the center

Zhang has seen firsthand the range of people economics and higher education can bring together. She followed a dual enrollment track in high school, completing college-level courses alongside students from a wide variety of backgrounds. “I enjoyed centering people in my work,” she says. “Taking classes with a diverse group of students, including veterans and mothers returning to school to complete their studies, made me more curious about socioeconomic issues and the policies relevant to them.”

She later enrolled at MIT, where she participated in the Undergraduate Research Opportunities Program (UROP). She also completed an internship at the World Bank, worked as a summer analyst at the Federal Reserve Bank of New York, and assisted a faculty cohort that included MIT economists David Autor, Jon Gruber, and Nina Roussille. Autor is the primary advisor on her doctoral research and a mentor she cites as a significant influence.

“[Autor’s] course, 14.03 (Microeconomics and Public Policy), cemented connections between theory and practice,” she says. “I thought the class was revelatory in showing the kinds of questions economics can answer.”

Doctoral study has revealed interesting avenues of investigation for Zhang, as have her relationships with her student peers and other faculty. She has, for example, leveraged faculty connections to gain access to hourly wage data in support of her work on scheduling and its effects on employees. “Generally, economists have had administrative data on earnings, but not on hours,” she notes.

Zhang’s focus on improving others’ lives extends to her work outside the classroom. She’s a mentor for the Boston Chinatown Neighborhood Center College Access Program and a member of MIT’s Graduate Christian Fellowship group. When she’s not enjoying spicy soups or paddling on the Charles, she takes advantage of opportunities to decompress with her art at W20 Arts Studios.

“I wanted to create time for myself outside of research and the classroom,” she says.

Zhang cites the benefits of MIT’s focus on cross-disciplinary collaboration and on encouraging students to explore other fields. As an undergraduate, Zhang minored in computer science, which taught her coding skills critical to her data work. Exposure to engineering also made her more interested in questions about how technology and workers interact.

Working with other scholars in the department has improved how Zhang conducts inquiries. “I’ve become the kind of well-rounded student and professional who can identify and quantify impacts, which is invaluable for future projects,” she says. Exposure to different academic and research areas, Zhang argues, helps increase access to ideas and information.

NASA selects Adam Fuhrmann ’11 for astronaut training

Tue, 09/23/2025 - 12:15pm

U.S. Air Force Maj. Adam Fuhrmann ’11 was one of 10 individuals chosen from a field of 8,000 applicants for the 2025 U.S. astronaut candidate class, NASA announced in a live ceremony on Sept. 22. 

This is NASA’s 24th class of astronaut candidates since the first Mercury 7 astronauts were chosen in 1959. Upon completion of his training, Fuhrmann will be the 45th MIT graduate to become a flight-eligible astronaut.

“As test pilots we don't do anything on our own, we work with amazing teams of engineers and maintenance professionals to plan, simulate, and execute complex and sometimes risky missions in aircraft to collect data and accomplish a mission, all while assessing risk and making smart calls as a team to do that as safely as possible,” Fuhrmann said at NASA’s announcement ceremony in Houston, Texas. “I'm happy to try to bring some of that experience to do the same thing with the NASA team and learn from everyone at Johnson Space Center how to apply those lessons to human spaceflight.”

His class now begins two years of training at the Johnson Space Center in Houston that includes instruction and skills development for complex operations aboard the International Space Station, Artemis missions to the moon, and beyond. Training covers robotics, land and water survival, geology, foreign language, space medicine and physiology, and more, and the candidates will also conduct simulated spacewalks and fly high-performance jets.

From MIT to astronaut training

Fuhrmann, 35, is from Leesburg, Virginia, and has accumulated more than 2,100 flight hours in 27 aircraft, including the F-16 and F-35. He has served as a U.S. Air Force fighter pilot and experimental test pilot for nearly 14 years and deployed in support of operations Freedom’s Sentinel and Resolute Support, logging 400 combat hours.

Fuhrmann holds a bachelor’s degree in aeronautics and astronautics from MIT and master’s degrees in flight test engineering and systems engineering from the U.S. Air Force Test Pilot School and Purdue University, respectively. While at MIT, he was a member of Air Force ROTC Detachment 365 and was selected as the third-ever student leader of the Bernard M. Gordon-MIT Engineering Leadership Program (GEL) in spring 2011.

“We are tremendously proud of Adam for this notable accomplishment, and we look forward to following his journey through astronaut candidate school and beyond,” says Leo McGonagle, founding and executive director of GEL.

“It’s always a thrill to learn that one of our own has joined NASA's illustrious astronaut corps,” says Julie Shah, head of the MIT Department of Aeronautics and Astronautics and the H.N. Slater Professor in Aeronautics and Astronautics. “Adam is Course 16’s 19th astronaut alum. We take very seriously the responsibility to provide the very best aerospace engineering education, and it's so gratifying to see that those fundamentals continue to set individuals from our community on the path to becoming an astronaut.”

Learning to be a leader at MIT

McGonagle recalls that Fuhrmann was a very early participant in GEL from 2009 to 2011.

“The GEL Program was still in its infancy during this time and was in somewhat of a fragile state as we were seeking to grow and cement ourselves as a viable MIT program. As the fall 2010 semester was winding down, it was evident that the program needed an effective GEL2 student leader during the spring semester, who could lead by example and inspire fellow students and who was an example of what right looks like. I knew Adam was already an emerging leader as a senior cadet in MIT’s Air Force ROTC Detachment, so I tapped him for the role of spring student leader of GEL,” said McGonagle.

Fuhrmann initially sought to decline the role, citing his time as a leader in ROTC. But McGonagle, having led the Army ROTC Program prior to GEL, felt that the GEL student leader role would challenge and develop Fuhrmann in other ways: in GEL, he would be charged with leading and inspiring students from a broad range of backgrounds, focusing exclusively on leadership within engineering contexts while engaging with engineering industry organizations.

“GEL needed strong student leadership at this time, so Adam took on the role, and it ended up being a win-win for both him and the program. He later expressed to me that the experience challenged him in ways that he hadn’t anticipated and complemented his Air Force ROTC leadership development. He was grateful for the opportunity, and the program stabilized and grew under Adam’s leadership. He was the right student at the right time and place,” said McGonagle.

Fuhrmann has remained connected to the GEL program. He asked McGonagle to administer his oath of commissioning into the U.S. Air Force, with his family in attendance, at the historic Bunker Hill Monument in Boston. “One of my proudest GEL memories,” said McGonagle, who is a former U.S. Army Lt. Colonel.

Throughout his time in service, which has included overseas deployments, Fuhrmann has actively participated in Junior Engineering Leader’s Roundtable leadership labs (ELLs) with GEL students, and he has kept in touch with his GEL2 cohort.

“Adam’s GEL2 cohort meets informally once or twice a year, usually via Zoom, to share and discuss professional challenges, lessons learned, and life stories, and to keep in touch with each other. This small but excellent group of GEL alumni is committed to staying connected and supporting one another as part of the broader GEL community,” said McGonagle.
