MIT Latest News

MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.

Astronomers detect the brightest fast radio burst of all time

Thu, 08/21/2025 - 2:00pm

A fast radio burst is an immense flash of radio emission that lasts for just a few milliseconds, during which it can momentarily outshine every other radio source in its galaxy. These flares can be so bright that their light can be seen from halfway across the universe, several billion light years away.

The sources of these brief and dazzling signals are unknown. But scientists now have a chance to study a fast radio burst (FRB) in unprecedented detail. An international team of scientists, including physicists at MIT, has detected a nearby, ultrabright fast radio burst some 130 million light-years from Earth in the constellation Ursa Major. It is one of the closest FRBs detected to date. It is also the brightest — so bright that the signal has garnered the informal moniker RBFLOAT, for “radio brightest flash of all time.”

The burst’s brightness, paired with its proximity, is giving scientists the closest look yet at FRBs and the environments from which they emerge.

“Cosmically speaking, this fast radio burst is just in our neighborhood,” says Kiyoshi Masui, associate professor of physics and affiliate of MIT’s Kavli Institute for Astrophysics and Space Research. “This means we get this chance to study a pretty normal FRB in exquisite detail.”

Masui and his colleagues report their findings today in the Astrophysical Journal Letters.

Diverse bursts

The clarity of the new detection is thanks to a significant upgrade to the Canadian Hydrogen Intensity Mapping Experiment (CHIME), a large array of halfpipe-shaped antennae based in British Columbia. CHIME was originally designed to detect and map the distribution of hydrogen across the universe. The telescope is also sensitive to ultrafast and bright radio emissions. Since it started observations in 2018, CHIME has detected about 4,000 fast radio bursts from all parts of the sky. But the telescope had not been able to precisely pinpoint the location of each fast radio burst, until now.

CHIME recently got a significant boost in precision, in the form of CHIME Outriggers — three miniature versions of CHIME, each sited in a different part of North America. Together, the telescopes work as one continent-sized system that can home in on any bright flash that CHIME detects and pin down its location in the sky with extreme precision.

“Imagine we are in New York and there’s a firefly in Florida that is bright for a thousandth of a second, which is usually how quick FRBs are,” says MIT Kavli graduate student Shion Andrew. “Localizing an FRB to a specific part of its host galaxy is analogous to figuring out not just what tree the firefly came from, but which branch it’s sitting on.”

The new fast radio burst is the first detection made using the combination of CHIME and the completed CHIME Outriggers. Together, the telescope array identified the FRB and determined not only the specific galaxy, but also the region of the galaxy from where the burst originated. It appears that the burst arose from the edge of the galaxy, just outside of a star-forming region. The precise localization of the FRB is allowing scientists to study the environment around the signal for clues to what brews up such bursts.

“As we’re getting these much more precise looks at FRBs, we’re better able to see the diversity of environments they’re coming from,” says MIT physics postdoc Adam Lanman.

Lanman, Andrew, and Masui are members of the CHIME Collaboration — which includes scientists from multiple institutions around the world — and are authors of the new paper detailing the FRB’s discovery.

An older edge

Each of CHIME’s Outrigger stations continuously monitors the same swath of sky as the parent CHIME array. Both CHIME and the Outriggers “listen” for radio flashes at incredibly short, millisecond timescales. Even over several minutes, such precision monitoring produces a huge amount of data. If CHIME detects no FRB signal, the Outriggers automatically delete the last 40 seconds of data to make room for the next span of measurements.
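
The Outriggers’ triggered buffering can be pictured as a rolling window that is saved only when the parent array raises an alarm. Below is a minimal sketch of that idea in Python, assuming a 40-second retention window and a hypothetical 1-millisecond readout cadence; the names, data format, and trigger interface are illustrative, not CHIME’s actual pipeline.

    from collections import deque

    BUFFER_SECONDS = 40          # retention window described above
    SAMPLE_INTERVAL = 0.001      # hypothetical 1 ms readout cadence

    # Old samples fall off the left end automatically once the buffer is full.
    buffer = deque(maxlen=int(BUFFER_SECONDS / SAMPLE_INTERVAL))

    def on_new_sample(timestamp, voltages):
        """Append the latest readout; anything older than ~40 s is silently discarded."""
        buffer.append((timestamp, voltages))

    def on_chime_trigger(event_id):
        """When the parent array flags a burst, freeze and persist the current window."""
        snapshot = list(buffer)  # copy before the buffer keeps rolling
        with open(f"outrigger_dump_{event_id}.txt", "w") as f:
            for timestamp, voltages in snapshot:
                f.write(f"{timestamp}\t{voltages}\n")
        return snapshot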

On March 16, 2025, CHIME detected an ultrabright flash of radio emissions, which automatically triggered the CHIME Outriggers to record the data. Initially, the flash was so bright that astronomers were unsure whether it was an FRB or simply a terrestrial event caused, for instance, by a burst of cellular communications.

That notion was put to rest as the CHIME Outrigger telescopes homed in on the flash and pinned down its location to NGC 4141 — a spiral galaxy in the constellation Ursa Major about 130 million light-years away, which happens to be surprisingly close to our own Milky Way. The detection is one of the closest and brightest fast radio bursts on record.

Follow-up observations in the same region revealed that the burst came from the very edge of an active region of star formation. While it’s still a mystery as to what source could produce FRBs, scientists’ leading hypothesis points to magnetars — young neutron stars with extremely powerful magnetic fields that can spin out high-energy flares across the electromagnetic spectrum, including in the radio band. Physicists suspect that magnetars are found in the center of star-forming regions, where the youngest, most active stars are forged. The location of the new FRB, just outside a star-forming region in its galaxy, may suggest that the source of the burst is a slightly older magnetar.

“These are mostly hints,” Masui says. “But the precise localization of this burst is letting us dive into the details of how old an FRB source could be. If it were right in the middle, it would only be thousands of years old — very young for a star. This one, being on the edge, may have had a little more time to bake.”

No repeats

In addition to pinpointing where the new FRB was in the sky, the scientists also looked back through CHIME data to see whether any similar flares occurred in the same region in the past. Since the first FRB was discovered in 2007, astronomers have detected over 4,000 radio flares. Most of these bursts are one-offs. But a few percent have been observed to repeat, flashing every so often. And an even smaller fraction of these repeaters flash in a pattern, like a rhythmic heartbeat, before flaring out. A central question surrounding fast radio bursts is whether repeaters and nonrepeaters come from different origins.

The scientists looked through CHIME’s six years of data and came up empty: This new FRB appears to be a one-off, at least in the last six years. The findings are particularly exciting, given the burst’s proximity. Because it is so close and so bright, scientists can probe the environment in and around the burst for clues to what might produce a nonrepeating FRB.

“Right now we’re in the middle of this story of whether repeating and nonrepeating FRBs are different. These observations are putting together bits and pieces of the puzzle,” Masui says.

“There’s evidence to suggest that not all FRB progenitors are the same,” Andrew adds. “We’re on track to localize hundreds of FRBs every year. The hope is that a larger sample of FRBs localized to their host environments can help reveal the full diversity of these populations.”

The construction of the CHIME Outriggers was funded by the Gordon and Betty Moore Foundation and the U.S. National Science Foundation. The construction of CHIME was funded by the Canada Foundation for Innovation and the provinces of Quebec, Ontario, and British Columbia.

Study links rising temperatures and declining moods

Thu, 08/21/2025 - 11:00am

Rising global temperatures affect human activity in many ways. Now, a new study illuminates an important dimension of the problem: Very hot days are associated with more negative moods, as shown by a large-scale look at social media postings.

Overall, the study examines 1.2 billion social media posts from 157 countries over the span of a year. The research finds that when the temperature rises above 95 degrees Fahrenheit, or 35 degrees Celsius, expressed sentiments become about 25 percent more negative in lower-income countries and about 8 percent more negative in better-off countries. Extreme heat affects people emotionally, not just physically.

“Our study reveals that rising temperatures don’t just threaten physical health or economic productivity — they also affect how people feel, every day, all over the world,” says Siqi Zheng, a professor in MIT’s Department of Urban Studies and Planning (DUSP) and Center for Real Estate (CRE), and co-author of a new paper detailing the results. “This work opens up a new frontier in understanding how climate stress is shaping human well-being at a planetary scale.”

The paper, “Unequal Impacts of Rising Temperatures on Global Human Sentiment,” is published today in the journal One Earth. The authors are Jianghao Wang, of the Chinese Academy of Sciences; Nicolas Guetta-Jeanrenaud SM ’22, a graduate of MIT’s Technology and Policy Program (TPP) and Institute for Data, Systems, and Society; Juan Palacios, a visiting assistant professor at MIT’s Sustainable Urbanization Lab (SUL) and an assistant professor at Maastricht University; Yichun Fan, of SUL and Duke University; Devika Kakkar, of Harvard University; Nick Obradovich, of SUL and the Laureate Institute for Brain Research in Tulsa; and Zheng, who is the STL Champion Professor of Urban and Real Estate Sustainability at CRE and DUSP. Zheng is also the faculty director of CRE and founded the Sustainable Urbanization Lab in 2019.

Social media as a window

To conduct the study, the researchers evaluated 1.2 billion posts from the social media platforms Twitter and Weibo, all of which appeared in 2019. They used a natural language processing technique called Bidirectional Encoder Representations from Transformers (BERT) to analyze 65 languages across the 157 countries in the study.

Each social media post was given a sentiment rating from 0.0 (for very negative posts) to 1.0 (for very positive posts). The posts were then aggregated geographically to 2,988 locations and evaluated in correlation with area weather. From this method, the researchers could then deduce the connection between extreme temperatures and expressed sentiment.
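
As a rough illustration of that aggregation step (a sketch with made-up numbers, not the study’s data or code), per-post sentiment scores can be averaged by location and day, joined to daily maximum temperature, and compared across the 35-degrees-Celsius threshold described earlier:

    import pandas as pd

    # Hypothetical inputs: one row per scored post, and one row per location-day of weather.
    posts = pd.DataFrame({
        "location_id": [1, 1, 2, 2],
        "date": pd.to_datetime(["2019-07-01"] * 4),
        "sentiment": [0.62, 0.41, 0.55, 0.30],   # 0.0 = very negative, 1.0 = very positive
    })
    weather = pd.DataFrame({
        "location_id": [1, 2],
        "date": pd.to_datetime(["2019-07-01"] * 2),
        "tmax_c": [31.0, 38.5],
    })

    # Average sentiment per location-day, then attach that day's maximum temperature.
    daily = (posts.groupby(["location_id", "date"], as_index=False)["sentiment"].mean()
                  .merge(weather, on=["location_id", "date"]))
    daily["extreme_heat"] = daily["tmax_c"] > 35.0

    # Compare mean expressed sentiment on extreme-heat days versus all other days.
    print(daily.groupby("extreme_heat")["sentiment"].mean())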

“Social media data provides us with an unprecedented window into human emotions across cultures and continents,” Wang says. “This approach allows us to measure emotional impacts of climate change at a scale that traditional surveys simply cannot achieve, giving us real-time insights into how temperature affects human sentiment worldwide.”

To assess the effects of temperature on sentiment in higher-income and middle-to-lower-income settings, the scholars also used the World Bank cutoff of $13,845 in per-capita annual gross national income, finding that in places with incomes below that threshold, the effects of heat on mood were triple those found in economically more robust settings.

“Thanks to the global coverage of our data, we find that people in low- and middle-income countries experience sentiment declines from extreme heat that are three times greater than those in high-income countries,” Fan says. “This underscores the importance of incorporating adaptation into future climate impact projections.”

In the long run

Using long-term global climate models, and expecting some adaptation to heat, the researchers also produced a long-range estimate of the effects of extreme temperatures on sentiment by the year 2100. Extending the current findings to that time frame, they project a 2.3 percent worsening of people’s emotional well-being based on high temperatures alone by then — although that is a far-range projection.

“It's clear now, with our present study adding to findings from prior studies, that weather alters sentiment on a global scale,” Obradovich says. “And as weather and climates change, helping individuals become more resilient to shocks to their emotional states will be an important component of overall societal adaptation.”

The researchers note that there are many nuances to the subject, and room for continued research in this area. For one thing, social media users are not likely to be a perfectly representative portion of the population, with young children and the elderly almost certainly using social media less than other people. However, as the researchers observe in the paper, the very young and the elderly are probably particularly vulnerable to heat shocks, making the response to hot weather possibly even larger than their study can capture.

The research is part of the Global Sentiment project led by the MIT Sustainable Urbanization Lab, and the study’s dataset is publicly available. Zheng and other co-authors have previously investigated these dynamics using social media, although never before at this scale.

“We hope this resource helps researchers, policymakers, and communities better prepare for a warming world,” Zheng says.

The research was supported, in part, by Zheng’s chaired professorship research fund, and grants Wang received from the National Natural Science Foundation of China and the Chinese Academy of Sciences. 

The “Mississippi Bubble” and the complex history of Haiti

Thu, 08/21/2025 - 12:00am

Many things account for Haiti’s modern troubles. A good perspective on them comes from going back in time to 1715 or so — and grappling with a far-flung narrative involving the French monarchy, a financial speculator named John Law, and a stock-market crash called the “Mississippi Bubble.”

To condense: After the death of Louis XIV in 1715, France was mired in debt following decades of war. The country briefly turned over its economic policy to Law, a Scotsman who implemented a system in which, among other things, French debt was retired while private monopoly companies expanded overseas commerce.

This project did not go entirely as planned. Stock-market speculation created the “Mississippi Bubble” and crash of 1719-20. Amid the chaos, Law lost a short-lived fortune and left France.

Yet Law’s system had lasting effects. French expansionism helped spur Haiti’s “sugar revolution” of the early 1700s, in which the country’s economy first became oriented around labor-intensive sugar plantations. Using enslaved workers and deploying violence against political enemies, plantation owners helped define Haiti’s current-day geography and place within the global economy, creating an extractive system benefitting a select few.

While there has been extensive debate about how the Haitian Revolution of 1789-1804 (and the 1825 “indemnity” Haiti agreed to pay France) has influenced the country’s subsequent path, the events of the early 1700s help illuminate the whole picture.

“This is a moment of transformation for Haiti’s history that most people don’t know much about,” says MIT historian Malick Ghachem. “And it happened well before independence. It goes back to the 18th century when Haiti began to be enmeshed in the debtor-creditor relationships from which it has never really escaped. The 1720s was the period when those relationships crystallized.”

Ghachem examines the economic transformations and multi-sided power struggles of that time in a new book, “The Colony and the Company: Haiti after the Mississippi Bubble,” published this summer by Princeton University Press.

“How did Haiti come to be the way it is today? This is the question everybody asks about it,” says Ghachem. “This book is an intervention in that debate.”

Enmeshed in the crisis

Ghachem is both a professor and head of MIT’s program in history. A trained lawyer, he works on topics ranging across France’s global history and American legal history. His 2012 book “The Old Regime and the Haitian Revolution,” also situated in pre-revolutionary Haiti, examines the legal backdrop of the drive for emancipation.

“The Colony and the Company” draws on original archival research while arriving at two related conclusions: Haiti was a big part of the global bubble of the 1710s, and that bubble and its aftermath are a big part of Haiti’s history.

After all, until the late 1600s, Haiti, then known as Saint Domingue, was “a fragile, mostly ungoverned, and sparsely settled place of uncertain direction,” as Ghachem writes in the book. The establishment of Haiti’s economy is not just the background of later events, but a formative event on its own.

And while the “sugar revolution” may have reached Haiti sooner or later, it was amplified by France’s quest for new sources of revenue. Louis XIV’s military agenda had been a fiscal disaster for the French. Law — a convicted murderer, and evidently a persuasive salesman — proposed a restructuring scheme that concentrated revenue-raising and other fiscal powers in a monopoly overseas trading company and bank overseen by Law himself.

France’s quest for economic growth beyond its borders led the company to Haiti, to tap its agricultural potential. For that matter, as Ghachem details, multiple countries were expanding their overseas activities — and France, Britain, and Spain also increased slave-trading activities markedly. Within a few decades, Haiti was a center of global sugar production, based on slave labor.

“When the company is seen as the answer to France’s own woes, Haiti becomes enmeshed in the crisis,” Ghachem says. “The Mississippi Bubble of 1719-20 was really a global event. And one of the theaters where it played out most dramatically was Haiti.”

As it happens, in Haiti, the dynamics of this were complex. Local planters did not want to be answerable to Law’s company, and fended it off, but, as Ghachem writes, they “internalized and privatized the financial and economic logic of the System against which they had rebelled, making of it a script for the management of plantation society.”

That society was complex. One of the main elements of “The Colony and the Company” is the exploration of its nuances. Haiti was home to a variety of people, including Jesuit missionaries, European women who had been re-settled there, and maroons (freed or escaped slaves living apart from plantations), among others. Plantation life came with violence, civic instability, and a lack of economic alternatives.

“What’s called the ‘success’ of the colony as a French economic force is really inseparable from the conditions that make it hard for Haiti to survive as an independent nation after the revolution,” Ghachem observes.

Stories in a new light

In public discourse, questions about Haiti’s past are often considered highly relevant to its present, as a near-failed state whose capital city is now substantially controlled by gangs, with no end to violence in sight. Some people draw a through line between the present and Haiti’s revolutionary-era condition. But to Ghachem, the revolution changed some political dynamics, but not the underlying conditions of life in the country.

“One [view] is that it’s the Haitian Revolution that leads to Haiti’s immiseration and violence and political dysfunction and its economic underdevelopment,” Ghachem says. “I think that argument is wrong. It’s an older problem that goes back to Haiti’s relationship with France in the late 17th and early 18th centuries. The revolution compounds that problem, and does so significantly, because of how France responds. But the terms of Haiti’s subordination are already set.”

Other scholars have praised “The Colony and the Company.” Pernille Røge of the University of Pittsburgh has called it “a multilayered and deeply compelling history rooted in a careful analysis of both familiar and unfamiliar primary sources.”

For his part, Ghachem hopes to persuade anyone interested in Haiti’s past and present to look more expansively at the subject, and consider how the deep roots of Haiti’s economy have helped structure its society.

“I’m trying to keep up with the day job of a historian,” Ghachem says. “Which includes finding stories that aren’t well-known, or are well-known and have aspects that are underappreciated, and telling them in a new light.”

Lincoln Laboratory reports on airborne threat mitigation for the NYC subway

Thu, 08/21/2025 - 12:00am

A multiyear program at MIT Lincoln Laboratory to characterize how biological and chemical vapors and aerosols disperse through the New York City subway system is coming to a close. The program, part of the U.S. Department of Homeland Security (DHS) Science and Technology Directorate's Urban Area Security Initiative, builds on other efforts at Lincoln Laboratory to detect chemical and biological threats, validate air dispersion models, and improve emergency protocols in urban areas in case of an airborne attack. The results of this program will inform the New York Metropolitan Transportation Authority (MTA) on how best to install an efficient, cost-effective system for airborne threat detection and mitigation throughout the subway. On a broader scale, the study will help the national security community understand pragmatic chemical and biological defense options for mass transit, critical facilities, and special events.

Trina Vian from the laboratory's Counter–Weapons of Mass Destruction (WMD) Systems Group led this project, which she says had as much to do with air flow and sensors as it did with MTA protocols and NYC commuters. "There are real dangers associated with panic during an alarm. People can get hurt during mass evacuation, or lose trust in a system and the authorities that administer that system, if there are false alarms," she says. "A novel aspect of our project was to investigate effective low-regret response options, meaning those with little operational consequence to responding to a false alarm."

Currently, depending on the severity of the alarm, the MTA's response can include stopping service and evacuating passengers and employees.

A complex environment for testing

For the program, which started in 2019, Vian and her team collected data on how chemical and biological sensors performed in the subway, what factors affected sensor accuracy, and how different mitigation protocols fared in stopping an airborne threat from spreading and removing the threat from a contaminated location. For their tests, they released batches of a safe, custom-developed aerosol simulant within Grand Central Station that they could track with DNA barcodes. Each batch had a different barcode, which allowed the team to differentiate among them and quantitatively assess different combinations of mitigation strategies.
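
One way to picture the barcoded-simulant analysis (a schematic example only, not the laboratory’s actual workflow) is as a tally of recovered reads by barcode and sampling site, which separates the contribution of each release and lets different mitigation configurations be compared:

    import collections

    # Hypothetical (barcode, sampling_site) observations recovered from collected samples.
    reads = [
        ("BC01", "platform_A"), ("BC01", "platform_B"), ("BC02", "platform_A"),
        ("BC02", "platform_A"), ("BC01", "platform_A"),
    ]

    counts = collections.Counter(reads)
    for (barcode, site), n in sorted(counts.items()):
        print(f"release {barcode} detected {n} time(s) at {site}")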

To control and isolate air flow, the team tested static air curtains as well as air filtration systems. They also tested a spray knockdown system, developed by Sandia National Laboratories, designed to reduce and isolate particulate hazards in large-volume areas. The system sprays a fine water mist into the tunnels that attaches to threat particulates and uses gravity to rain out the threat material. The spray consists of droplets of a particular size and concentration, delivered with an applied electrostatic field. The original idea for the system was adapted from the coal mining industry, which used liquid sprayers to reduce the amount of inhalable soot.

The tests were done in a busy environment, and the team was required to complete trainings on MTA protocols such as track safety and how to interact with the public.

"We had long and sometimes very dirty days," says Jason Han of the Counter–WMD Systems Group, who collected measurements in the tunnels and analyzed the data. "We all wore bright orange contractor safety vests, which made people think we were official employees of the MTA. We would often get approached by people asking for directions!"

At times, issues such as power outages or database errors could disrupt data capture.

"We learned fairly early on that we had to capture daily data backups and keep a daily evolving master list of unique sensor identifiers and locations," says fellow team member Cassie Smith. "We developed workflows and wrote scripts to help automate the process, which ensured successful sensor data capture and attribution."

The team also worked closely with the MTA to make sure their tests and data capture ran smoothly. "The MTA was great at helping us maintain the test bed, doing as much as they could in our physical absence," Vian says.

Calling on industry

Another crucial aspect of the program was to connect with the greater chemical and biological industrial community to solicit their sensors for testing. These partnerships reduced the cost for DHS to bring new sensing technologies into the project, and, in return, participants gained a testing and data collection opportunity within the challenging NYC subway environment.

The team ultimately fielded 16 different sensors, each with varying degrees of maturity, that operated through a range of methods, such as ultraviolet laser–induced fluorescence, polymerase chain reaction, and long-wave infrared spectrometry.

"The partners appreciated the unique data they got and the opportunity to work with the MTA and experience an environment and customer base that they may not have anticipated before," Vian says.

The team finished testing in 2024 and has delivered the final report to the DHS. The MTA will use the report to help expand their PROTECT chemical detection system (originally developed by Argonne National Laboratory) from Grand Central Station into adjacent stations. They expect to complete this work in 2026.

"The value of this program cannot be overstated. This partnership with DHS and MIT Lincoln Laboratory has led to the identification of the best-suited systems for the MTA’s unique operating environment," says Michael Gemelli, director of chemical, biological, radiological, and nuclear/WMD detection and mitigation at the New York MTA.

"Other transit authorities can leverage these results to start building effective chemical and biological defense systems for their own specific spaces and threat priorities," adds Benjamin Ervin, leader of Lincoln Laboratory's Counter–WMD Systems Group. "Specific test and evaluation within the operational environment of interest, however, is always recommended to ensure defense system objectives are met."

Building these types of decision-making reports for airborne chemical and biological sensing has been a part of Lincoln Laboratory's mission since the mid-1990s. The laboratory also helped to define priorities in the field when DHS was forming in the early 2000s.

Beyond this study, Lincoln Laboratory is leading several other projects focused on forecasting the impact of novel chemical and biological threats within multiple domains — military, space, agriculture, health, etc. — and on prototyping rapid, autonomous, high-confidence biological identification capabilities for the homeland to provide actionable evidence of hazardous environments.

Learning from punishment

Wed, 08/20/2025 - 4:45pm

From toddlers’ timeouts to criminals’ prison sentences, punishment reinforces social norms, making it known that an offender has done something unacceptable. At least, that is usually the intent — but the strategy can backfire. When a punishment is perceived as too harsh, observers can be left with the impression that an authority figure is motivated by something other than justice.

It can be hard to predict what people will take away from a particular punishment, because everyone makes their own inferences not just about the acceptability of the act that led to the punishment, but also about the legitimacy of the authority who imposed it. A new computational model developed by scientists at MIT’s McGovern Institute for Brain Research makes sense of these complicated cognitive processes, recreating the ways people learn from punishment and revealing how their reasoning is shaped by their prior beliefs.

Their work, reported Aug. 4 in the journal PNAS, explains how a single punishment can send different messages to different people, and even strengthen the opposing viewpoints of groups who hold different opinions about authorities or social norms.

“The key intuition in this model is the fact that you have to be evaluating simultaneously both the norm to be learned and the authority who’s punishing,” says McGovern investigator and John W. Jarve Professor of Brain and Cognitive Sciences Rebecca Saxe, who led the research. “One really important consequence of that is even where nobody disagrees about the facts — everybody knows what action happened, who punished it, and what they did to punish it — different observers of the same situation could come to different conclusions.”

For example, she says, a child who is sent to timeout after biting a sibling might interpret the event differently than the parent. One might see the punishment as proportional and important, teaching the child not to bite. But if the biting, to the toddler, seemed a reasonable tactic in the midst of a squabble, the punishment might be seen as unfair, and the lesson will be lost.

People draw on their own knowledge and opinions when they evaluate these situations — but to study how the brain interprets punishment, Saxe and graduate student Setayesh Radkani wanted to take those personal ideas out of the equation. They needed a clear understanding of the beliefs that people held when they observed a punishment, so they could learn how different kinds of information altered their perceptions. So Radkani set up scenarios in imaginary villages where authorities punished individuals for actions that had no obvious analog in the real world.

Participants observed these scenarios in a series of experiments, with different information offered in each one. In some cases, for example, participants were told that the person being punished was either an ally or a competitor of the authority, whereas in other cases, the authority’s possible bias was left ambiguous.

“That gives us a really controlled setup to vary prior beliefs,” Radkani explains. “We could ask what people learn from observing punitive decisions with different severities, in response to acts that vary in their level of wrongness, by authorities that vary in their level of different motives.”

For each scenario, participants were asked to evaluate four factors: how much the authority figure cared about justice; the selfishness of the authority; the authority’s bias for or against the individual being punished; and the wrongness of the punished act. The research team asked these questions when participants were first introduced to the hypothetical society, then tracked how their responses changed after they observed the punishment. Across the scenarios, participants’ initial beliefs about the authority and the wrongness of the act shaped the extent to which those beliefs shifted after they observed the punishment.

Radkani was able to replicate these nuanced interpretations using a cognitive model framed around an idea that Saxe’s team has long used to think about how people interpret the actions of others. That is, to make inferences about others’ intentions and beliefs, we assume that people choose actions that they expect will help them achieve their goals.

To apply that concept to the punishment scenarios, Radkani developed a model that evaluates the meaning of a punishment (an action aimed at achieving a goal of the authority) by considering the harm associated with that punishment; its costs or benefits to the authority; and its proportionality to the violation. By assessing these factors, along with prior beliefs about the authority and the punished act, the model was able to predict people’s responses to the hypothetical punishment scenarios, supporting the idea that people use a similar mental model. “You need to have them consider those things, or you can’t make sense of how people understand punishment when they observe it,” Saxe says.
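
A toy version of that joint inference, with made-up numbers and a deliberately simplified likelihood rather than the model reported in the paper, looks like this: the observer holds a prior over how wrong the act is and how just the authority is, assumes punishment severity tracks the authority’s motives, and updates both beliefs from a single observed punishment.

    import itertools

    wrongness_levels = [0.1, 0.5, 0.9]     # how bad the punished act is
    justness_levels = [0.1, 0.5, 0.9]      # how much the authority cares about justice
    prior = {h: 1 / 9 for h in itertools.product(wrongness_levels, justness_levels)}

    def likelihood(severity, wrongness, justness):
        """A just authority punishes in proportion to wrongness; an unjust one punishes
        harshly regardless. Observed severity is scored against that expectation."""
        expected = justness * wrongness + (1 - justness) * 0.9
        return max(1e-6, 1 - abs(severity - expected))

    observed_severity = 0.9                 # the observer sees a very harsh punishment
    posterior = {h: prior[h] * likelihood(observed_severity, *h) for h in prior}
    total = sum(posterior.values())
    posterior = {h: p / total for h, p in posterior.items()}

    # Marginal belief that the act was very wrong (it starts at 1/3 under the prior).
    print(sum(p for (w, j), p in posterior.items() if w == 0.9))

In this toy setup, observing a harsh punishment shifts the observer’s belief about both the act and the authority at once, which is the coupling the model above is meant to capture.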

Even though the team designed their experiments to preclude preconceived ideas about the people and actions in their imaginary villages, not everyone drew the same conclusions from the punishments they observed. Saxe’s group found that participants’ general attitudes toward authority influenced their interpretation of events. Those with more authoritarian attitudes — assessed through a standard survey — tended to judge punished acts as more wrong and authorities as more motivated by justice than other observers.

“If we differ from other people, there’s a knee-jerk tendency to say, ‘either they have different evidence from us, or they’re crazy,’” Saxe says. Instead, she says, “It’s part of the way humans think about each other’s actions.”

“When a group of people who start out with different prior beliefs get shared evidence, they will not end up necessarily with shared beliefs. That’s true even if everybody is behaving rationally,” says Saxe.

This way of thinking also means that the same action can simultaneously strengthen opposing viewpoints. The Saxe lab’s modeling and experiments showed that when those viewpoints shape individuals’ interpretations of future punishments, the groups’ opinions will continue to diverge. For instance, a punishment that seems too harsh to a group who suspects an authority is biased can make that group even more skeptical of the authority’s future actions. Meanwhile, people who see the same punishment as fair and the authority as just will be more likely to conclude that the authority figure’s future actions are also just. 

“You will get a vicious cycle of polarization, staying and actually spreading to new things,” says Radkani.

The researchers say their findings point toward strategies for communicating social norms through punishment. “It is exactly sensible in our model to do everything you can to make your action look like it’s coming out of a place of care for the long-term outcome of this individual, and that it’s proportional to the norm violation they did,” Saxe says. “That is your best shot at getting a punishment interpreted pedagogically, rather than as evidence that you’re a bully.”

Nevertheless, she says that won’t always be enough. “If the beliefs are strong the other way, it’s very hard to punish and still sustain a belief that you were motivated by justice.”

Joining Saxe and Radkani on the paper is Joshua Tenenbaum, MIT professor of brain and cognitive sciences. The study was funded, in part, by the Patrick J. McGovern Foundation.

A boost for the precision of genome editing

Wed, 08/20/2025 - 4:30pm

The U.S. Food and Drug Administration’s recent approval of the first CRISPR-Cas9–based gene therapy has marked a major milestone in biomedicine, validating genome editing as a promising treatment strategy for disorders like sickle cell disease, muscular dystrophy, and certain cancers.

CRISPR-Cas9, often likened to “molecular scissors,” allows scientists to cut DNA at targeted sites to snip, repair, or replace genes. But despite its power, Cas9 poses a critical safety risk: The active enzyme can linger in cells and cause unintended DNA breaks — so-called off-target effects — which may trigger harmful mutations in healthy genes.

Now, researchers in the labs of Ronald T. Raines, MIT professor of chemistry, and Amit Choudhary, professor of medicine at Harvard Medical School, have engineered a precise way to turn Cas9 off after its job is done — significantly reducing off-target effects and improving the clinical safety of gene editing. Their findings are detailed in a new paper published in the Proceedings of the National Academy of Sciences (PNAS).

“To ‘turn off’ Cas9 after it achieves its intended genome-editing outcome, we developed the first cell-permeable anti-CRISPR protein system,” says Raines, the Roger and Georges Firmenich Professor of Natural Products Chemistry. “Our technology reduces the off-target activity of Cas9 and increases its genome-editing specificity and clinical utility.”

The new tool — called LFN-Acr/PA — uses a protein-based delivery system to ferry anti-CRISPR proteins into human cells rapidly and efficiently. While natural Type II anti-CRISPR proteins (Acrs) are known to inhibit Cas9, their use in therapy has been limited because they’re often too bulky or charged to enter cells, and conventional delivery methods are too slow or ineffective.

LFN-Acr/PA overcomes these hurdles using a component derived from anthrax toxin to introduce Acrs into cells within minutes. Even at picomolar concentrations, the system shuts down Cas9 activity with remarkable speed and precision — boosting genome-editing specificity up to 40 percent.

Bradley L. Pentelute, MIT professor of chemistry, is an expert on the anthrax delivery system, and is also an author of the paper.

The implications of this advance are wide-ranging. With patent applications filed, LFN-Acr/PA represents a faster, safer, and more controllable means of harnessing CRISPR-Cas9, opening the door to more-refined gene therapies with fewer unintended consequences.

The research was supported by the National Institutes of Health and a Gilliam Fellowship from the Howard Hughes Medical Institute awarded to lead author Axel O. Vera, a graduate student in the Department of Chemistry.

Materials Research Laboratory: Driving interdisciplinary materials research at MIT

Wed, 08/20/2025 - 4:15pm

Materials research thrives across MIT, spanning disciplines and departments. Recent breakthroughs include strategies for securing sustainable supplies of nickel — critical to clean-energy technologies (Department of Materials Science and Engineering); the discovery of unexpected magnetism in atomically thin quantum materials (Department of Physics); and the development of adhesive coatings that reduce scarring around medical implants (departments of Mechanical Engineering and Civil and Environmental Engineering).

Beyond individual projects, the MIT Materials Research Laboratory (MRL) fosters broad collaboration through strategic initiatives such as the Materials Systems Laboratory and SHINE (Sustainability and Health Initiative for Net Positive Enterprise). These efforts bring together academia, government, and industry to accelerate innovation in sustainability, energy use, and advanced materials.

MRL, a hub that connects and supports the Institute’s materials research community, is at the center of these efforts. “MRL serves as a home for the entire materials research community at MIT,” says C. Cem Tasan, the POSCO Associate Professor of Metallurgy in the Department of Materials Science and Engineering who became MRL director in April. “Our goal is to make it easier for our faculty to conduct their extraordinary research.”

A storied history

Established in 2017, the MRL brings together more than 30 researchers and builds on a 48-year legacy of innovation. It was formed through the merger of the MIT Materials Processing Center (MPC) and the Center for Materials Science and Engineering (CMSE), two institutions that helped lay the foundation for MIT’s global leadership in materials science.

Over the years, research supported by MPC and CMSE has led to transformative technologies and successful spinout companies. Notable examples include AMSC, based on advances in superconductivity; OmniGuide, which developed cutting-edge optical fiber technologies; and QD Vision, a pioneer in quantum dot technology acquired by Samsung in 2016. Another landmark achievement was the development of the first germanium laser to operate at room temperature — a breakthrough now used in optical communications.

Enabling research through partnership and support

MRL is launching targeted initiatives to connect MIT researchers with industry partners around specific technical challenges. Each initiative will be led by a junior faculty member working closely with MRL to identify a problem that aligns with their research expertise and is relevant to industry needs.

Through multi-year collaborations with participating companies, faculty can explore early-stage solutions in partnership with postdocs or graduate students. These initiatives are designed to be agile and interdisciplinary, with the potential to grow into major, long-term research programs.

Behind-the-scenes support, front-line impact

MRL provides critical infrastructure that enables faculty to focus on discovery, not logistics. “MRL works silently in the background, where every problem a principal investigator has related to the administration of materials research is solved with efficiency, good organization, and minimum effort,” says Tasan.

This quiet but powerful support spans multiple areas:

  • The finance team manages grants and helps secure new funding opportunities.
  • The human resources team supports the hiring of postdocs.
  • The communications team amplifies the lab’s impact through compelling stories shared with the public and funding agencies.
  • The events team plans and coordinates conferences, seminars, and symposia that foster collaboration within the MIT community and with external partners.

Together, these functions ensure that research at MRL runs smoothly and effectively — from initial idea to lasting innovation.

Leadership with a vision

Tasan, who also leads a research group focused on metallurgy, says he took on the directorship because “I thrive on new challenges.” He also saw the role as an opportunity to contribute more broadly to MIT. 

“I believe MRL can play an even greater role in advancing materials research across the Institute, and I’m excited to help make that happen,” he says.

Recent MRL initiatives

MRL has supported a wide range of research programs in partnership with major industry leaders, including Apple, Ford, Microsoft, Rio Tinto, IBM, Samsung, and Texas Instruments, as well as organizations such as Advanced Functional Fabrics of America, Allegheny Technologies, Ericsson, and the Semiconductor Research Corp.

MRL researchers are addressing critical global challenges in energy efficiency, environmental sustainability, and the development of next-generation material systems.

  • Professor Antoine Allanore is advancing a direct process for wire production from sulfide concentrates, offering a more efficient and sustainable alternative to traditional methods.
  • Professor Joe Checkelsky is leading pioneering research on scalable, high-temperature quantum materials, in the realm of quantum transport.
  • Professor Pablo Jarillo-Herrero is making significant progress with two-dimensional materials and their heterostructures.
  • Professor Nuh Gedik explores ultrafast electronic and structural dynamics and light-matter interactions.
  • Professor Gregory Rutledge spearheaded a National Institute of Standards and Technology Rapid Assistance for Coronavirus Economic Response (NIST RACER)-sponsored initiative to develop biodegradable nanofiber-based personal protective equipment, aimed at improving manufacturing automation, diversifying supply chains, and reducing environmental impact.
  • Professor Elsa Olivetti serves as the lead principal investigator at MIT for REMADE: the Institute for Reducing Embodied-energy and Decreasing Emissions. Her research on fiber recovery and post-consumer resin processing directly supports REMADE’s mission to enhance material circularity and reduce energy use by 50 percent by 2027.
  • Randy Kirchain is modeling metals markets under decarbonization, and developing greener construction materials.
  • Anu Agarwal is spearheading efforts to build a sustainable microchip manufacturing ecosystem. 

New laser “comb” can enable rapid identification of chemicals with extreme precision

Wed, 08/20/2025 - 10:00am

Optical frequency combs are specially designed lasers that act like rulers to accurately and rapidly measure specific frequencies of light. They can be used to detect and identify chemicals and pollutants with extremely high precision.

Frequency combs would be ideal for remote sensors or portable spectrometers because they can enable accurate, real-time monitoring of multiple chemicals without complex moving parts or external equipment.

But developing frequency combs with high enough bandwidth for these applications has been a challenge. Often, researchers must add bulky components that limit scalability and performance.

Now, a team of MIT researchers has demonstrated a compact, fully integrated device that uses a carefully crafted mirror to generate a stable frequency comb with very broad bandwidth. The mirror they developed, along with an on-chip measurement platform, offers the scalability and flexibility needed for mass-producible remote sensors and portable spectrometers. This development could enable more accurate environmental monitors that can identify multiple harmful chemicals from trace gases in the atmosphere.

“The broader the bandwidth a spectrometer has, the more powerful it is, but dispersion is in the way. Here we took the hardest problem that limits bandwidth and made it the centerpiece of our study, addressing every step to ensure robust frequency comb operation,” says Qing Hu, Distinguished Professor in Electrical Engineering and Computer Science at MIT, principal investigator in the Research Laboratory of Electronics, and senior author on an open-access paper describing the work.

He is joined on the paper by lead author Tianyi Zeng PhD ’23; as well as Yamac Dikmelik of General Dynamics Mission Systems; Feng Xie and Kevin Lascola of Thorlabs Quantum Electronics; and David Burghoff SM ’09, PhD ’14, an assistant professor at the University of Texas at Austin. The research appears today in Light: Science and Applications.

Broadband combs

An optical frequency comb produces a spectrum of equally spaced laser lines, which resemble the teeth of a comb.

Scientists can generate frequency combs using several types of lasers for different wavelengths. By using a laser that produces long wave infrared radiation, such as a quantum cascade laser, they can use frequency combs for high-resolution sensing and spectroscopy.

In dual-comb spectroscopy (DCS), the beam of one frequency comb travels straight through the system and strikes a detector at the other end. The beam of the second frequency comb passes through a chemical sample before striking the same detector. Using the results from both combs, scientists can faithfully replicate the chemical features of the sample at much lower frequencies, where signals can be easily analyzed.
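
The down-conversion at the heart of DCS can be illustrated with a small numerical example (the values below are hypothetical, not the parameters of the MIT device): two combs whose line spacings differ slightly produce radio-frequency beat notes, so optical features separated by gigahertz get mapped onto megahertz signals that ordinary electronics can digitize.

    # Two combs with line spacings that differ by 1 MHz (illustrative values only).
    f_rep_1 = 10.000e9                 # line spacing of comb 1, in Hz
    f_rep_2 = 10.001e9                 # line spacing of comb 2, in Hz
    delta_f_rep = f_rep_2 - f_rep_1    # spacing of the down-converted RF comb

    for n in range(1, 4):
        optical_offset = n * f_rep_1   # position of line n relative to the comb origin
        rf_beat = n * delta_f_rep      # where that line's beat note lands on the detector
        print(f"line {n}: optical offset {optical_offset / 1e9:.3f} GHz "
              f"-> RF beat {rf_beat / 1e6:.3f} MHz")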

The frequency combs must have high bandwidth, or they will only be able to detect a small frequency range of chemical compounds, which could lead to false alarms or inaccurate results.

Dispersion is the most important factor that limits a frequency comb’s bandwidth. If there is dispersion, the laser lines are not evenly spaced, which is incompatible with the formation of frequency combs.

“With long wave infrared radiation, the dispersion will be very high. There is no way to get around it, so we have to find a way to compensate for it or counteract it by engineering our system,” Hu says.

Many existing approaches aren’t flexible enough to be used in different scenarios or don’t enable high enough bandwidth.

Hu’s group previously solved this problem in a different type of frequency comb, one that used terahertz waves, by developing a double-chirped mirror (DCM).

A DCM is a special type of optical mirror that has multiple layers with thicknesses that change gradually from one end to the other. They found that this DCM, which has a corrugated structure, could effectively compensate for dispersion when used with a terahertz laser.

“We tried to borrow this trick and apply it to an infrared comb, but we ran into lots of challenges,” Hu says.

Because infrared waves are 10 times shorter than terahertz waves, fabricating the new mirror required an extreme level of precision. At the same time, they needed to coat the entire DCM in a thick layer of gold to remove the heat under laser operation. Plus, their dispersion measurement system, designed for terahertz waves, wouldn’t work with infrared waves, which have frequencies that are about 10 times higher than terahertz.

“After more than two years of trying to implement this scheme, we reached a dead end,” Hu says.

A new solution

Ready to throw in the towel, the team realized something they had missed. They had designed the mirror with corrugation to compensate for the lossy terahertz laser, but infrared radiation sources aren’t as lossy.

This meant they could use a standard DCM design to compensate for dispersion, which is compatible with infrared radiation. However, they still needed to create curved mirror layers to capture the beam of the laser, which made fabrication much more difficult than usual.

“The adjacent layers of mirror differ only by tens of nanometers. That level of precision precludes standard photolithography techniques. On top of that, we still had to etch very deeply into the notoriously stubborn material stacks. Achieving those critical dimensions and etch depths was key to unlocking broadband comb performance,” Zeng says.

In addition to precisely fabricating the DCM, they integrated the mirror directly onto the laser, making the device extremely compact. The team also developed a high-resolution, on-chip dispersion measurement platform that doesn’t require bulky external equipment.

“Our approach is flexible. As long as we can use our platform to measure the dispersion, we can design and fabricate a DCM that compensates for it,” Hu adds.

Taken together, the DCM and on-chip measurement platform enabled the team to generate stable infrared laser frequency combs that had far greater bandwidth than can usually be achieved without a DCM.

In the future, the researchers want to extend their approach to other laser platforms that could generate combs with even greater bandwidth and higher power for more demanding applications.

“These researchers developed an ingenious nanophotonic dispersion compensation scheme based on an integrated air–dielectric double-chirped mirror. This approach provides unprecedented control over dispersion, enabling broadband comb formation at room temperature in the long-wave infrared. Their work opens the door to practical, chip-scale frequency combs for applications ranging from chemical sensing to free-space communications,” says Jacob B. Khurgin, a professor at the Johns Hopkins University Whiting School of Engineering, who was not involved with this paper.

This work is funded, in part, by the U.S. Defense Advanced Research Projects Agency (DARPA) and the Gordon and Betty Moore Foundation.

Graduate work with an impact — in big cities and on campus

Wed, 08/20/2025 - 12:00am

While working to boost economic development in Detroit in the late 2010s, Nick Allen found he was running up against a problem.

The city was trying to spur more investment after long-term industrial flight to suburbs and other states. Relying more heavily on property taxes for revenue, the city was negotiating individualized tax deals with prospective businesses. That’s hardly a scenario unique to Detroit, but such deals involved lengthy approval processes that slowed investment decisions and made smaller projects seem unrealistic. 

Moreover, while creating small pockets of growth, these individualized tax abatements were not changing the city’s broader fiscal structure. They also favored those with leverage and resources to work the system for a break.

“The thing you really don’t want to do with taxes is have very particular, highly procedural ways of adjusting the burdens,” says Allen, now a doctoral student in MIT’s Department of Urban Studies and Planning (DUSP). “You want a simple process that fits people’s ideas about what fairness looks like.”

So, after starting his PhD program at MIT, Allen kept studying urban fiscal policy. Along with a group of other scholars, he has produced research papers making the case for a land-value tax — a common tax rate on land that, combined with reduced property taxes, could raise more local revenue by encouraging more city-wide investment, even while lowering tax burdens on residents and businesses. As a bonus, it could also reduce foreclosures.
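
To see why such a split matters, consider a simplified, purely illustrative comparison (the rates and values below are hypothetical, not figures from Detroit or from the papers): under a uniform property tax, building on a lot raises the owner’s bill in proportion to the new structure, while a split-rate system that taxes land more heavily and structures more lightly shifts the burden toward vacant land and penalizes new investment less.

    def property_tax(land, building, rate):
        # Uniform tax on the combined value of land and structures.
        return rate * (land + building)

    def split_rate_tax(land, building, land_rate, building_rate):
        # Higher rate on land value, lower rate on the value of structures.
        return land_rate * land + building_rate * building

    vacant_lot = dict(land=50_000, building=0)
    developed = dict(land=50_000, building=200_000)

    for name, parcel in [("vacant lot", vacant_lot), ("developed parcel", developed)]:
        uniform = property_tax(rate=0.03, **parcel)
        split = split_rate_tax(land_rate=0.08, building_rate=0.01, **parcel)
        print(f"{name}: uniform ${uniform:,.0f} vs split-rate ${split:,.0f}")

In this toy example, the split-rate schedule raises the bill on the idle lot and lowers it on the developed parcel, which is the investment incentive the research examines.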

In the last few years, this has become a larger topic in urban policy circles. The mayor of Detroit has endorsed the idea. The New York Times has written about the work of Allen and his colleagues. The land-value tax is now a serious policy option.

It is unusual for a graduate student to have their work become part of a prominent policy debate. But then, Allen is an unusual student. At MIT, he has not just conducted influential research in his field, but thrown himself into campus-based work with substantial impact as well. Allen has served on task forces assessing student stipend policy, expanding campus housing, and generating ideas for dining program reform.

For all these efforts, in May, Allen received the Karl Taylor Compton Prize, MIT’s highest student honor. At the ceremony, MIT Chancellor Melissa Nobles observed that Allen’s work helped Institute stakeholders “fully understand complex issues, ensuring his recommendations are not only well-informed but also practical and impactful.”

Looking to revive growth

Allen is a Minnesota native who received his BA from Yale University. In 2015, he enrolled in graduate school at MIT, receiving his master’s in city planning from DUSP in 2017. At the time, Allen worked on the Malaysia Sustainable Cities Project, headed by Professor Lawrence Susskind. At one point Allen spent a couple of months in a small Malaysian village studying the effects of coastal development on local fishing and farming.

Malaysia may be different than Michigan, but the issues that Allen encountered in Asia were similar to the ones he wanted to keep studying back in the U.S.: finding ways to finance growth.

“The core interests I have are around real estate, the physical environment, and these fiscal policy questions of how this all gets funded and what the responsibilities are of the state and private markets,” Allen says. “And that brought me to Detroit.”

Specifically, that landed him at the Detroit Economic Growth Corporation, a city-chartered development agency that works to facilitate new investment. There, Allen started grappling with the city’s revenue problems. Once heralded as the richest city in America, Detroit has seen a lot of property go vacant, and has hiked property taxes on existing structures to compensate for that. Those rates then discouraged further investment and building.

To be sure, the challenges Detroit has faced stem from far more than tax policy and relate to many macroscale socioeconomic factors, including suburban flight, the shift of manufacturing to states with nonunion employees, and much more. But changing tax policy can be one lever to pull in response.

“It’s difficult to figure out how to revive growth in a place that’s been cannibalized by its losses,” Allen says.

Tasked with underwriting real estate projects, Allen started cataloguing the problems arising from Detroit’s property tax reliance, and began looking at past economics work on optimal tax policy in search of alternatives.

“There’s a real nose-to-the-ground empiricism you start with, asking why we have a system nobody would choose,” Allen says. “There were two parts to that, for me. One was initially looking at the difficulty of making individual projects work, from affordable housing to big industrial plants, along with, secondly, this wave of tax foreclosures in the city.”

Engineering, but for policy

After two years in Detroit, Allen returned to MIT, this time as a doctoral student in DUSP and with a research program oriented around the issues he had worked on. In pursuing that, Allen has worked closely with John E. Anderson, an economist at the University of Nebraska at Lincoln. With a nationwide team of economists convened by the Lincoln Institute of Land Policy, they worked to address the city’s questions on property tax reform.

One paper used current data to show that a land-value tax should lower tax-connected foreclosures in the city. Two other papers study the use of the tax in certain parts of Pennsylvania, one of the few states where it has been deployed. There, the researchers concluded, the land-value tax both leads to greater business development and raises property values.

“What we found overall, looking at past tax reduction in Detroit and other cities, is that in reducing the rate at which people in deep tax distress go through foreclosure, it has a fairly large effect,” Allen says. “It has some effect on allowing business to reinvest in properties. We are seeing a lot more attraction of investment. And it’s got the virtue of being a rules-based system.”

Those empirical results, he notes, helped confirm the sense that a policy change could help growth in Detroit.

“That really validated the hunch we were following,” Allen says.

The widespread attention the policy proposal has garnered could not really have been predicted. The tax has not yet been implemented in Detroit, although it has been a prominent part of civic debates there. Allen has been asked to consult on tax policy by officials in numerous large cities, and is hopeful the concept will gain still more traction.

Meanwhile, at MIT, Allen has one more year to go in his doctoral program. On top of his academic research, he has been an active participant in Institute matters, helping reshape graduate-school policies on multiple fronts.

For instance, Allen was part of the Graduate Housing Working Group, whose efforts helped spur MIT to build Graduate Junction, a new housing complex for 675 graduate students on Vassar Street in Cambridge, Massachusetts. The name also refers to the Grand Junction rail line that runs nearby; the complex formally opened in 2024.

“Innovative places struggle to build housing fast enough,” Allen said at the time Graduate Junction opened, also noting that “new housing for students reduces price pressure on the rest of the Cambridge community.”

Commenting on it now, he adds, “Maybe to most people graduate housing policy doesn’t sound that fun, but to me these are very absorbing questions.”

And ultimately, Allen says, the intellectual problems in either domain can be similar, whether he is working on city policy issues or campus enhancements.

“The reason I think planning fits so well here at MIT is, a lot of what I do is like policy engineering,” Allen says. “It’s really important to understand system constraints, and think seriously about finding solutions that can be built to purpose. I think that’s why I’ve felt at home here at MIT, working on these outside public policy topics, and projects for the Institute. You need to take seriously what people say about the constraints in their lives.”

Professor John Joannopoulos, photonics pioneer and Institute for Soldier Nanotechnologies director, dies at 78

Tue, 08/19/2025 - 2:35pm

John “JJ” Joannopoulos, the Francis Wright Davis Professor of Physics at MIT and director of the MIT Institute for Soldier Nanotechnologies (ISN), passed away on Aug. 17. He was 78. 

Joannopoulos was a prolific researcher in the field of theoretical condensed-matter physics, and an early pioneer in the study and application of photonic crystals. Many of his discoveries in the ways materials can be made to manipulate light have led to transformative and life-saving technologies, from chip-based optical waveguides, to wireless energy transfer, to health-monitoring textiles, to precision light-based surgical tools.

His remarkable career of over 50 years was spent entirely at MIT, where he was known as much for his generous and unwavering mentorship as for his contributions to science. He made a special point to keep up rich and meaningful collaborations with many of his former students and postdocs, dozens of whom have gone on to faculty positions at major universities and to leadership roles in the public and private sectors. In his five decades at MIT, he made lasting connections across campus, both in service of science and in friendship.

“A scientific giant, inspiring leader, and a masterful communicator, John carried a generous and loving heart,” says Yoel Fink PhD ’00, an MIT professor of materials science and engineering who was Joannopoulos’ former student and a longtime collaborator. “He chose to see the good in people, keeping his mind and heart always open. Asking little for himself, he gave everything in care of others. John lived a life of deep impact and meaning — savoring the details of truth-seeking, achieving rare discoveries and mentoring generations of students to achieve excellence. With warmth, humor, and a never-ending optimism, JJ left an indelible impact on science and on all who had the privilege to know him. Above all, he was a loving husband, father, grandfather, friend, and mentor.”

“In the end, the most remarkable thing about him was his unmatched humanity, his ability to make you feel that you were the most important thing in the world that deserved his attention, no matter who you were,” says Raul Radovitzky, ISN associate director and the Jerome C. Hunsaker Professor in MIT’s Department of Aeronautics and Astronautics. “The legacy he leaves is not only in equations and innovations, but in the lives he touched, the minds he inspired, and the warmth he spread in every room he entered.”

“JJ was a very special colleague: a brilliant theorist who was also adept at identifying practical applications; a caring and inspiring mentor of younger scientists; a gifted teacher who knew every student in his class by name,” says Deepto Chakrabarty ’88, the William A. M. Burden Professor in Astrophysics and head of MIT’s Department of Physics. “He will be deeply missed.”

Layers of light

John Joannopoulos was born in 1947 in New York City to parents who had both emigrated from Greece. His father was a playwright, and his mother worked as a psychologist. From an early age, Joannopoulos knew he wanted to be a physicist — mainly because the subject was his most challenging in school. In a recent interview with MIT News, he enthusiastically shared: “You probably wouldn’t believe this, but it’s true: I wanted to be a physics professor since I was in high school! I loved the idea of being able to work with students, and being able to have ideas.”

He attended the University of California at Berkeley, where he received a bachelor’s degree in 1968, and a PhD in 1974, both in physics. That same year, he joined the faculty at MIT, where he would spend his 50-plus-year career — though at the time, the chances of gaining a long-term foothold at the Institute seemed slim, as Joannopoulos told MIT News.

“The chair of the physics department was the famous nuclear physicist, Herman Feshbach, who told me the probability that I would get tenure was something like 30 percent,” Joannopoulos recalled. “But when you’re young and just starting off, it was certainly better than zero, and I thought, that was fine — there was hope down the line.”

Starting out at MIT, Joannopoulos knew exactly what he wanted to do. He quickly set up a group to study theoretical condensed-matter physics, and specifically, ab initio physics, meaning physics “from first principles.” In this initial work, he sought to build theoretical models to predict the electronic behavior and structure of materials, based solely on the atomic numbers of the atoms in a material. Such foundational models could be applied to understand and design a huge range of materials and structures.

Then, in the early 1990s, Joannopoulos took a research turn, spurred by a paper by physicist Eli Yablonovitch at the University of California at Los Angeles, who did some preliminary work on materials that can affect the behavior of photons, or particles of light. Joannopoulos recognized a connection with his first-principles work with electrons. Along with his students, he applied that approach to predict the fundamental behavior of photons in different classes of materials. His group was one of the first to pioneer the field of photonic crystals, and the study of how materials can be manipulated at the nanoscale to control the behavior of light traveling through them. In 1995, Joannopoulos co-authored the first textbook on the subject.

And in 1998, he took on a more-than-century-old assumption about how light should reflect, and turned it on its head. That assumption held that light shining onto a structure made of multiple refractive layers could reflect back, but only for a limited range of angles. But in fact, Joannopoulos and his group showed that the opposite is true: If the structure’s layers followed particular design criteria, the structure as a whole could reflect light coming from any and all angles. This structure was called the “perfect mirror.”

That insight led to another: If the structure were rolled into a tube, the resulting hollow fiber could act as a perfect optical conduit. Any light traveling through the fiber would reflect and bounce around within the fiber, with none scattering away. Joannopoulos and his group applied this insight to develop the first precision “optical scalpel” — a fiber that can be safely handled, while delivering a highly focused laser, precise and powerful enough to perform delicate surgical procedures. Joannopoulos helped to commercialize the new tool with a startup, Omniguide, that has since provided the optical scalpel to assist in hundreds of thousands of medical procedures around the world.

Legendary mentor

In 2006, Joannopoulos took the helm as director of MIT’s Institute for Soldier Nanotechnologies — a post he steadfastly held for almost 20 years. During his dedicated tenure, he worked with ISN members across campus and in departments outside his own, getting to know and champion their work. He facilitated countless collaborations between MIT faculty, industry partners, and the U.S. Department of Defense. Among the many projects he raised support for were innovations in lightweight armor, hyperspectral imaging, energy-efficient batteries, and smart and responsive fabrics.

Joannopoulos helped to translate many basic science insights into practical applications. He was a cofounder of six spinoff companies based on his fundamental research, and helped to create dozens more companies, which have advanced technologies as wide-ranging as laser surgery tools, wireless electric power transmission, transparent display technologies, and optical computing. He was awarded 126 patents for his many discoveries and authored over 750 peer-reviewed papers.

In recognition of his wide impact and contributions, Joannopoulos was elected to the National Academy of Sciences and the American Academy of Arts and Sciences. He was also a fellow of both the American Physical Society and the American Association for the Advancement of Science. Over his 50-plus-year career, he was the recipient of many scientific awards and honors including the Max Born Award, and the Aneesur Rahman Prize in Computational Physics. Joannopoulos was also a gifted classroom teacher, and was recognized at MIT with the Buechner Teaching Prize in Physics and the Graduate Teaching Award in Science.

This year, Joannopoulos was the recipient of MIT’s Killian Achievement Award, which recognizes the extraordinary lifetime contributions of a member of the MIT faculty. In addition to his many accomplishments in science, the award citation emphasized his lasting impact on the generations of students he mentored:

“Professor Joannopoulos has served as a legendary mentor to generations of students, inspiring them to achieve excellence in science while at the same time facilitating the practical benefit to society through entrepreneurship,” the citation reads. “Through all of these individuals he has impacted — not to mention their academic descendants — Professor Joannopoulos has had a vast influence on the development of science in recent decades.”

“JJ was an amazing scientist: He published hundreds of papers that have been cited close to 200,000 times. He was also a serial entrepreneur: Companies he cofounded raised hundreds of millions of dollars and employed hundreds of people,” says MIT Professor Marin Soljacic ’96, a former postdoc under Joannopoulos who with him cofounded a startup, Witricity. “He was an amazing mentor, a close friend, and like a scientific father to me. He always had time for me, any time of the day, and as much as I needed.”

Indeed, Joannopoulos strived to meaningfully support his many students. In the classroom, he “was legendary,” says friend and colleague Patrick Lee ’66, PhD ’70, who recalls that Joannopoulos would make a point of memorizing the names and faces of more than 100 students on the first day of class, then calling each of them by their first name from the second day on, and for the rest of the term.

What’s more, Joannopoulos encouraged graduate students and postdocs to follow their ideas, even when they ran counter to his own.

“John did not produce clones,” says Lee, who is an MIT professor emeritus of physics. “He showed them the way to do science by example, by caring and by sharing his optimism. I have never seen someone so deeply loved by his students.”

Even students who stepped off the photonics path have kept in close contact with their mentor, as former student Josh Winn ’94, SM ’94, PhD ’01 has done.

“Even though our work together ended more than 25 years ago, and I now work in a different field, I still feel like part of the Joannopoulos academic family,” says Winn, who is now a professor of astrophysics at Princeton University. “It's a loyal group with branches all over the world. We even had our own series of conferences, organized by former students to celebrate John's 50th, 60th, and 70th birthdays. Most professors would consider themselves fortunate to have even one such ‘festschrift’ honoring their legacy.”

MIT professor of mathematics Steven Johnson ’95, PhD ’01, a former student and frequent collaborator, has experienced personally, and seen many times over, Joannopoulos’ generous and open-door mentorship.

“In every collaboration, I’ve unfailingly observed him to cast a wide net to value multiple voices, to ensure that everyone feels included and valued, and to encourage collaborations across groups and fields and institutions,” Johnson says. “Kind, generous, and brimming with infectious enthusiasm and positivity, he set an example so many of his lucky students have striven to follow.”

Joannopoulos started at MIT around the same time as Marc Kastner, who had a nearby office on the second floor of Building 13.

“I would often hear loud arguments punctuated by boisterous laughter, coming from John’s office, where he and his students were debating physics,” recalls Kastner, who is the Donner Professor of Physics Emeritus at MIT. “I am sure this style of interaction is what made him such a great mentor.”

“He exuded such enthusiasm for science and good will to others that he was just good fun to be around,” adds friend and colleague Erich Ippen, MIT professor emeritus of physics.

“John was indeed a great man — a very special one. Everyone who ever worked with him understands this,” says Stanford University physics professor Robert Laughlin PhD ’79, one of Joannopoulos’ first graduate students, who went on to win the 1998 Nobel Prize in Physics. “He sprinkled a kind of transformative magic dust on people that induced them to dedicate every waking moment to the task of making new and wonderful things. You can find traces of it in lots of places around the world that matter, all of them the better for it. There’s quite a pile of it in my office.”

Joannopoulos is survived by his wife, Kyri Dunussi-Joannopoulos; their three daughters, Maria, Lena, and Alkisti; and their families. Details for funeral and memorial services are forthcoming.

A new model predicts how molecules will dissolve in different solvents

Tue, 08/19/2025 - 5:00am

Using machine learning, MIT chemical engineers have created a computational model that can predict how well any given molecule will dissolve in an organic solvent — a key step in the synthesis of nearly any pharmaceutical. This type of prediction could make it much easier to develop new ways to produce drugs and other useful molecules.

The new model, which predicts how much of a solute will dissolve in a particular solvent, should help chemists to choose the right solvent for any given reaction in their synthesis, the researchers say. Common organic solvents include ethanol and acetone, and there are hundreds of others that can also be used in chemical reactions.

“Predicting solubility really is a rate-limiting step in synthetic planning and manufacturing of chemicals, especially drugs, so there’s been a longstanding interest in being able to make better predictions of solubility,” says Lucas Attia, an MIT graduate student and one of the lead authors of the new study.

The researchers have made their model freely available, and many companies and labs have already started using it. The model could be particularly useful for identifying solvents that are less hazardous than some of the most commonly used industrial solvents, the researchers say.

“There are some solvents which are known to dissolve most things. They’re really useful, but they’re damaging to the environment, and they’re damaging to people, so many companies require that you have to minimize the amount of those solvents that you use,” says Jackson Burns, an MIT graduate student who is also a lead author of the paper. “Our model is extremely useful in being able to identify the next-best solvent, which is hopefully much less damaging to the environment.”

William Green, the Hoyt Hottel Professor of Chemical Engineering and director of the MIT Energy Initiative, is the senior author of the study, which appears today in Nature Communications. Patrick Doyle, the Robert T. Haslam Professor of Chemical Engineering, is also an author of the paper.

Solving solubility

The new model grew out of a project that Attia and Burns worked on together in an MIT course on applying machine learning to chemical engineering problems. Traditionally, chemists have predicted solubility with a tool known as the Abraham Solvation Model, which can be used to estimate a molecule’s overall solubility by adding up the contributions of chemical structures within the molecule. While these predictions are useful, their accuracy is limited.
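For readers who want a concrete picture of that additive approach, here is a minimal Python sketch of an Abraham-style linear free-energy relationship. The general form of the equation is standard, but the descriptor and coefficient values below are placeholders for illustration, not numbers from the study.

```python
# Minimal sketch of an Abraham-style linear free-energy relationship:
# a solubility-related property is estimated as a linear combination of
# solute descriptors and solvent-specific coefficients.

def abraham_log_solubility(solute, solvent_coeffs):
    """Return log10 of a solubility-related property."""
    return (solvent_coeffs["c"]
            + solvent_coeffs["e"] * solute["E"]   # excess molar refraction
            + solvent_coeffs["s"] * solute["S"]   # dipolarity/polarizability
            + solvent_coeffs["a"] * solute["A"]   # hydrogen-bond acidity
            + solvent_coeffs["b"] * solute["B"]   # hydrogen-bond basicity
            + solvent_coeffs["v"] * solute["V"])  # McGowan volume

# Hypothetical numbers, purely for illustration:
solute = {"E": 0.80, "S": 1.00, "A": 0.30, "B": 0.45, "V": 1.20}
ethanol_like = {"c": 0.20, "e": 0.05, "s": -0.50, "a": 0.15, "b": -1.00, "v": 0.90}
print(abraham_log_solubility(solute, ethanol_like))
```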

In the past few years, researchers have begun using machine learning to try to make more accurate solubility predictions. Before Burns and Attia began working on their new model, the state-of-the-art model for predicting solubility was a model developed in Green’s lab in 2022.

That model, known as SolProp, works by predicting a set of related properties and combining them, using thermodynamics, to ultimately predict the solubility. However, the model has difficulty predicting solubility for solutes that it hasn’t seen before.

“For drug and chemical discovery pipelines where you’re developing a new molecule, you want to be able to predict ahead of time what its solubility looks like,” Attia says.

Part of the reason that existing solubility models haven’t worked well is that there wasn’t a comprehensive dataset to train them on. However, in 2023 a new dataset called BigSolDB was released, which compiled data from nearly 800 published papers, including solubility measurements for about 800 molecules dissolved in more than 100 organic solvents that are commonly used in synthetic chemistry.

Attia and Burns decided to try training two different types of models on this data. Both of these models represent the chemical structures of molecules using numerical representations known as embeddings, which incorporate information such as the number of atoms in a molecule and which atoms are bound to which other atoms. Models can then use these representations to predict a variety of chemical properties.

One of the models used in this study, known as FastProp and developed by Burns and others in Green’s lab, incorporates “static embeddings.” This means that the model already knows the embedding for each molecule before it starts doing any kind of analysis.
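As a rough illustration of the static-embedding idea (not the actual FastProp featurization, which uses its own descriptor set), one could compute a fixed descriptor vector per molecule with RDKit and feed it, along with temperature, to an off-the-shelf regressor. The data and target values here are made up.

```python
# Sketch of a static embedding: each molecule gets a fixed descriptor
# vector computed up front, and a regressor maps it to a property.
from rdkit import Chem
from rdkit.Chem import Descriptors
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def featurize(smiles):
    mol = Chem.MolFromSmiles(smiles)
    return np.array([
        Descriptors.MolWt(mol),         # molecular weight
        Descriptors.MolLogP(mol),       # octanol-water partition estimate
        Descriptors.TPSA(mol),          # topological polar surface area
        Descriptors.NumHDonors(mol),
        Descriptors.NumHAcceptors(mol),
    ])

# Toy data: (solute SMILES, temperature in K) -> log solubility (placeholder)
smiles = ["CCO", "c1ccccc1", "CC(=O)O", "CCN"]
temps = [298.0, 298.0, 310.0, 298.0]
log_solubility = [0.5, -1.8, 0.2, 0.4]

X = np.array([np.append(featurize(s), t) for s, t in zip(smiles, temps)])
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, log_solubility)
print(model.predict(X[:1]))
```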

The other model, ChemProp, learns an embedding for each molecule during the training, at the same time that it learns to associate the features of the embedding with a trait such as solubility. This model, developed across multiple MIT labs, has already been used for tasks such as antibiotic discovery, lipid nanoparticle design, and predicting chemical reaction rates.

The researchers trained both types of models on over 40,000 data points from BigSolDB, including information on the effects of temperature, which plays a significant role in solubility. Then, they tested the models on about 1,000 solutes that had been withheld from the training data. They found that the models’ predictions were two to three times more accurate than those of SolProp, the previous best model, and the new models were especially accurate at predicting variations in solubility due to temperature.
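One way to set up that kind of evaluation, sketched here with hypothetical arrays and scikit-learn’s GroupShuffleSplit rather than the authors’ actual pipeline, is to split at the level of solutes, so that every measurement of a held-out molecule stays out of training.

```python
# Solute-level split: every measurement of a given solute goes entirely
# to train or entirely to test, so the test set probes generalization
# to molecules the model has never seen.
import numpy as np
from sklearn.model_selection import GroupShuffleSplit

# Hypothetical arrays: X holds featurized (solute, solvent, T) rows,
# y the measured log solubilities, groups a solute ID per row.
X = np.random.rand(40000, 64)
y = np.random.rand(40000)
groups = np.random.randint(0, 5000, size=40000)   # ~5,000 distinct solutes

splitter = GroupShuffleSplit(n_splits=1, test_size=0.2, random_state=0)
train_idx, test_idx = next(splitter.split(X, y, groups=groups))

# No solute appears on both sides of the split.
assert set(groups[train_idx]).isdisjoint(groups[test_idx])
```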

“Being able to accurately reproduce those small variations in solubility due to temperature, even when the overarching experimental noise is very large, was a really positive sign that the network had correctly learned an underlying solubility prediction function,” Burns says.

Accurate predictions

The researchers had expected that the model based on ChemProp, which is able to learn new representations as it goes along, would be able to make more accurate predictions. However, to their surprise, they found that the two models performed essentially the same. That suggests that the main limitation on their performance is the quality of the data, and that the models are performing as well as theoretically possible based on the data that they’re using, the researchers say.

“ChemProp should always outperform any static embedding when you have sufficient data,” Burns says. “We were blown away to see that the static and learned embeddings were statistically indistinguishable in performance across all the different subsets, which indicates to us that the data limitations that are present in this space dominated the model performance.”

The models could become more accurate, the researchers say, if better training and testing data were available — ideally, data obtained by one person or a group of people all trained to perform the experiments the same way.

“One of the big limitations of using these kinds of compiled datasets is that different labs use different methods and experimental conditions when they perform solubility tests. That contributes to this variability between different datasets,” Attia says.

Because the model based on FastProp makes its predictions faster and has code that is easier for other users to adapt, the researchers decided to make that one, known as FastSolv, available to the public. Multiple pharmaceutical companies have already begun using it.

“There are applications throughout the drug discovery pipeline,” Burns says. “We’re also excited to see, outside of formulation and drug discovery, where people may use this model.”

The research was funded, in part, by the U.S. Department of Energy.

Researchers glimpse the inner workings of protein language models

Mon, 08/18/2025 - 3:00pm

Within the past few years, models that can predict the structure or function of proteins have been widely used for a variety of biological applications, such as identifying drug targets and designing new therapeutic antibodies.

These models, which are based on large language models (LLMs), can make very accurate predictions of a protein’s suitability for a given application. However, there’s no way to determine how these models make their predictions or which protein features play the most important role in those decisions.

In a new study, MIT researchers have used a novel technique to open up that “black box” and allow them to determine what features a protein language model takes into account when making predictions. Understanding what is happening inside that black box could help researchers to choose better models for a particular task, helping to streamline the process of identifying new drugs or vaccine targets.

“Our work has broad implications for enhanced explainability in downstream tasks that rely on these representations,” says Bonnie Berger, the Simons Professor of Mathematics, head of the Computation and Biology group in MIT’s Computer Science and Artificial Intelligence Laboratory, and the senior author of the study. “Additionally, identifying features that protein language models track has the potential to reveal novel biological insights from these representations.”

Onkar Gujral, an MIT graduate student, is the lead author of the study, which appears this week in the Proceedings of the National Academy of Sciences. Mihir Bafna, an MIT graduate student, and Eric Alm, an MIT professor of biological engineering, are also authors of the paper.

Opening the black box

In 2018, Berger and former MIT graduate student Tristan Bepler PhD ’20 introduced the first protein language model. Their model, like subsequent protein models that accelerated the development of AlphaFold, such as ESM2 and OmegaFold, was based on LLMs. These models, which include ChatGPT, can analyze huge amounts of text and figure out which words are most likely to appear together.

Protein language models use a similar approach, but instead of analyzing words, they analyze amino acid sequences. Researchers have used these models to predict the structure and function of proteins, and for applications such as identifying proteins that might bind to particular drugs.

In a 2021 study, Berger and colleagues used a protein language model to predict which sections of viral surface proteins are less likely to mutate in a way that enables viral escape. This allowed them to identify possible targets for vaccines against influenza, HIV, and SARS-CoV-2.

However, in all of these studies, it has been impossible to know how the models were making their predictions.

“We would get out some prediction at the end, but we had absolutely no idea what was happening in the individual components of this black box,” Berger says.

In the new study, the researchers wanted to dig into how protein language models make their predictions. Just like LLMs, protein language models encode information as representations that consist of a pattern of activation of different “nodes” within a neural network. These nodes are analogous to the networks of neurons that store memories and other information within the brain.

The inner workings of LLMs are not easy to interpret, but within the past couple of years, researchers have begun using a type of algorithm known as a sparse autoencoder to help shed some light on how those models make their predictions. The new study from Berger’s lab is the first to use this algorithm on protein language models.

Sparse autoencoders work by adjusting how a protein is represented within a neural network. Typically, a given protein will be represented by a pattern of activation of a constrained number of neurons, for example, 480. A sparse autoencoder will expand that representation into a much larger number of nodes, say 20,000.

When information about a protein is encoded by only 480 neurons, each node lights up for multiple features, making it very difficult to know what features each node is encoding. However, when the neural network is expanded to 20,000 nodes, this extra space along with a sparsity constraint gives the information room to “spread out.” Now, a feature of the protein that was previously encoded by multiple nodes can occupy a single node.
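A minimal sketch of that expand-and-sparsify step, written in PyTorch with illustrative sizes and loss weights rather than the paper’s exact architecture, might look like this:

```python
# Minimal sparse-autoencoder sketch: expand a 480-dimensional protein
# representation into 20,000 nodes, then reconstruct it, while an L1
# penalty pushes most of the expanded activations toward zero.
import torch
import torch.nn as nn

class SparseAutoencoder(nn.Module):
    def __init__(self, d_in=480, d_hidden=20000):
        super().__init__()
        self.encoder = nn.Linear(d_in, d_hidden)
        self.decoder = nn.Linear(d_hidden, d_in)

    def forward(self, x):
        z = torch.relu(self.encoder(x))   # sparse, expanded representation
        return self.decoder(z), z

model = SparseAutoencoder()
opt = torch.optim.Adam(model.parameters(), lr=1e-4)
l1_weight = 1e-3                          # illustrative sparsity strength

x = torch.randn(32, 480)                  # stand-in for protein embeddings
for step in range(100):
    recon, z = model(x)
    loss = ((recon - x) ** 2).mean() + l1_weight * z.abs().mean()
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The L1 term is what encourages most of the 20,000 activations to stay near zero, so that each protein feature tends to concentrate in a handful of nodes.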

“In a sparse representation, the neurons lighting up are doing so in a more meaningful manner,” Gujral says. “Before the sparse representations are created, the networks pack information so tightly together that it's hard to interpret the neurons.”

Interpretable models

Once the researchers obtained sparse representations of many proteins, they used an AI assistant called Claude (related to the popular Anthropic chatbot of the same name) to analyze the representations. In this case, they asked Claude to compare the sparse representations with the known features of each protein, such as molecular function, protein family, or location within a cell.

By analyzing thousands of representations, Claude can determine which nodes correspond to specific protein features, then describe them in plain English. For example, the algorithm might say, “This neuron appears to be detecting proteins involved in transmembrane transport of ions or amino acids, particularly those located in the plasma membrane.”

This process makes the nodes far more “interpretable,” meaning the researchers can tell what each node is encoding. They found that the features most likely to be encoded by these nodes were protein family and certain functions, including several different metabolic and biosynthetic processes.

“When you train a sparse autoencoder, you aren’t training it to be interpretable, but it turns out that by incentivizing the representation to be really sparse, that ends up resulting in interpretability,” Gujral says.

Understanding what features a particular protein model is encoding could help researchers choose the right model for a particular task, or tweak the type of input they give the model, to generate the best results. Additionally, analyzing the features that a model encodes could one day help biologists to learn more about the proteins that they are studying.

“At some point when the models get a lot more powerful, you could learn more biology than you already know, from opening up the models,” Gujral says.

The research was funded by the National Institutes of Health. 

A shape-changing antenna for more versatile sensing and communication

Mon, 08/18/2025 - 12:00am

MIT researchers have developed a reconfigurable antenna that dynamically adjusts its frequency range by changing its physical shape, making it more versatile for communications and sensing than static antennas.

A user can stretch, bend, or compress the antenna to make reversible changes to its radiation properties, enabling a device to operate in a wider frequency range without the need for complex, moving parts. With an adjustable frequency range, a reconfigurable antenna could adapt to changing environmental conditions and reduce the need for multiple antennas.

The word “antenna” may draw to mind metal rods like the “bunny ears” on top of old television sets, but the MIT team instead worked with metamaterials — engineered materials whose mechanical properties, such as stiffness and strength, depend on the geometric arrangement of the material’s components.

The result is a simplified design for a reconfigurable antenna that could be used for applications like energy transfer in wearable devices, motion tracking and sensing for augmented reality, or wireless communication across a wide range of network protocols.

In addition, the researchers developed an editing tool so users can generate customized metamaterial antennas, which can be fabricated using a laser cutter.

“Usually, when we think of antennas, we think of static antennas — they are fabricated to have specific properties and that is it. However, by using auxetic metamaterials, which can deform into three different geometric states, we can seamlessly change the properties of the antenna by changing its geometry, without fabricating a new structure. In addition, we can use changes in the antenna’s radio frequency properties, due to changes in the metamaterial geometry, as a new method of sensing for interaction design,” says lead author Marwa AlAlawi, a mechanical engineering graduate student at MIT.

Her co-authors include Regina Zheng and Katherine Yan, both MIT undergraduate students; Ticha Sethapakdi, an MIT graduate student in electrical engineering and computer science; Soo Yeon Ahn of the Gwangju Institute of Science and Technology in Korea; and co-senior authors Junyi Zhu, assistant professor at the University of Michigan; and Stefanie Mueller, the TIBCO Career Development Associate Professor in MIT’s departments of Electrical Engineering and Computer Science and Mechanical Engineering and leader of the Human-Computer Interaction Group at the Computer Science and Artificial Intelligence Lab. The research will be presented at the ACM Symposium on User Interface Software and Technology.

Making sense of antennas

While traditional antennas radiate and receive radio signals, in this work, the researchers looked at how the devices can act as sensors. The team’s goal was to develop a mechanical element that can also be used as an antenna for sensing.

To do this, they leveraged the antenna’s “resonance frequency,” which is the frequency at which the antenna is most efficient.

An antenna’s resonance frequency will shift due to changes in its shape. (Think about extending the left “bunny ear” to reduce TV static.) Researchers can capture these shifts for sensing. For instance, a reconfigurable antenna could be used in this way to detect the expansion of a person’s chest, to monitor their respiration.

To design a versatile reconfigurable antenna, the researchers used metamaterials. These engineered materials, which can be programmed to adopt different shapes, are composed of a periodic arrangement of unit cells that can be rotated, compressed, stretched, or bent.

By deforming the metamaterial structure, one can shift the antenna’s resonance frequency.

“In order to trigger changes in resonance frequency, we either need to change the antenna’s effective length or introduce slits and holes into it. Metamaterials allow us to get those different states from only one structure,” AlAlawi says.
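To see why effective length matters, consider the textbook approximation for a rectangular patch antenna, f_r ≈ c / (2 · L_eff · √ε_eff). The short sketch below uses that standard formula with made-up dimensions, not the meta-antenna’s actual design parameters.

```python
# Standard microstrip-patch approximation: resonance frequency falls as
# the effective length of the patch grows. Values are illustrative only.

C = 3.0e8   # speed of light, m/s

def resonance_frequency(effective_length_m, eps_eff):
    """f_r ~ c / (2 * L_eff * sqrt(eps_eff)) for a rectangular patch."""
    return C / (2.0 * effective_length_m * eps_eff ** 0.5)

f_rest      = resonance_frequency(0.030, 2.6)           # 30 mm patch at rest
f_stretched = resonance_frequency(0.030 * 1.05, 2.6)    # stretched by 5%

shift_percent = 100.0 * (f_rest - f_stretched) / f_rest
print(f"{f_rest/1e9:.2f} GHz -> {f_stretched/1e9:.2f} GHz "
      f"({shift_percent:.1f}% shift)")
```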

The device, dubbed the meta-antenna, is composed of a dielectric layer of material sandwiched between two conductive layers.

To fabricate a meta-antenna, the researchers cut the dielectric layer out of a rubber sheet with a laser cutter. Then they added a patch on top of the dielectric layer using conductive spray paint, creating a resonating “patch antenna.”

But they found that even the most flexible conductive material couldn’t withstand the amount of deformation the antenna would experience.

“We did a lot of trial and error to determine that, if we coat the structure with flexible acrylic paint, it protects the hinges so they don’t break prematurely,” AlAlawi explains.

A means for makers

With the fabrication problem solved, the researchers built a tool that enables users to design and produce metamaterial antennas for specific applications.

The user can define the size of the antenna patch, choose a thickness for the dielectric layer, and set the length-to-width ratio of the metamaterial unit cells. Then the system automatically simulates the antenna’s resonance frequency range.

“The beauty of metamaterials is that, because it is an interconnected system of linkages, the geometric structure allows us to reduce the complexity of a mechanical system,” AlAlawi says.

Using the design tool, the researchers incorporated meta-antennas into several smart devices, including a curtain that dynamically adjusts household lighting and headphones that seamlessly transition between noise-cancelling and transparent modes.

For the smart headphone, for instance, when the meta-antenna expands and bends, it shifts the resonance frequency by 2.6 percent, which switches the headphone mode. The team’s experiments also showed that meta-antenna structures are durable enough to withstand more than 10,000 compressions.
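A toy sketch of how such a shift could be turned into an interaction signal follows; the baseline frequency, threshold, and readings are hypothetical, not measurements from the study.

```python
# Toy sketch of using a resonance-frequency shift as an input signal:
# if the measured frequency deviates from the baseline by more than a
# threshold (here 2 percent, near the 2.6 percent shift reported for
# the headphone), switch modes. All numbers are made up.

BASELINE_HZ = 2.40e9        # resonance of the relaxed meta-antenna
THRESHOLD = 0.02            # fractional shift that triggers a switch

def headphone_mode(measured_hz, baseline_hz=BASELINE_HZ, threshold=THRESHOLD):
    shift = abs(measured_hz - baseline_hz) / baseline_hz
    return "transparent" if shift > threshold else "noise-cancelling"

for reading in [2.40e9, 2.39e9, 2.33e9]:   # simulated readings
    print(f"{reading/1e9:.2f} GHz -> {headphone_mode(reading)}")
```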

Because the antenna patch can be patterned onto any surface, it could be used with more complex structures. For instance, the antenna could be incorporated into smart textiles that perform noninvasive biomedical sensing or temperature monitoring.

In the future, the researchers want to design three-dimensional meta-antennas for a wider range of applications. They also want to add more functions to the design tool, improve the durability and flexibility of the metamaterial structure, experiment with different symmetric metamaterial patterns, and streamline some manual fabrication steps.

This research was funded, in part, by the Bahrain Crown Prince International Scholarship and the Gwangju Institute of Science and Technology.
