MIT Latest News

The “Mississippi Bubble” and the complex history of Haiti
Many things account for Haiti’s modern troubles. A good perspective on them comes from going back in time to 1715 or so — and grappling with a far-flung narrative involving the French monarchy, a financial speculator named John Law, and a stock-market crash called the “Mississippi Bubble.”
To condense: After the death of Louis XIV in 1715, France was mired in debt following decades of war. The country briefly turned over its economic policy to Law, a Scotsman who implemented a system in which, among other things, French debt was retired while private monopoly companies expanded overseas commerce.
This project did not go entirely as planned. Stock-market speculation created the “Mississippi Bubble” and crash of 1719-20. Amid the chaos, Law lost a short-lived fortune and left France.
Yet Law’s system had lasting effects. French expansionism helped spur Haiti’s “sugar revolution” of the early 1700s, in which the country’s economy first became oriented around labor-intensive sugar plantations. Using enslaved workers and deploying violence against political enemies, plantation owners helped define Haiti’s current-day geography and place within the global economy, creating an extractive system benefitting a select few.
While there has been extensive debate about how the Haitian Revolution of 1789-1804 (and the 1825 “indemnity” Haiti agreed to pay France) has influenced the country’s subsequent path, the events of the early 1700s help illuminate the whole picture.
“This is a moment of transformation for Haiti’s history that most people don’t know much about,” says MIT historian Malick Ghachem. “And it happened well before independence. It goes back to the 18th century when Haiti began to be enmeshed in the debtor-creditor relationships from which it has never really escaped. The 1720s was the period when those relationships crystallized.”
Ghachem examines the economic transformations and multi-sided power struggles of that time in a new book, “The Colony and the Company: Haiti after the Mississippi Bubble,” published this summer by Princeton University Press.
“How did Haiti come to be the way it is today? This is the question everybody asks about it,” says Ghachem. “This book is an intervention in that debate.”
Enmeshed in the crisis
Ghachem is both a professor and head of MIT’s program in history. Trained as a lawyer, he works across France’s global history and American legal history. His 2012 book “The Old Regime and the Haitian Revolution,” also situated in pre-revolutionary Haiti, examines the legal backdrop of the drive for emancipation.
“The Colony and the Company” draws on original archival research while arriving at two related conclusions: Haiti was a big part of the global bubble of the 1710s, and that bubble and its aftermath are a big part of Haiti’s history.
After all, until the late 1600s, Haiti, then known as Saint-Domingue, was “a fragile, mostly ungoverned, and sparsely settled place of uncertain direction,” as Ghachem writes in the book. The establishment of Haiti’s economy is not just the background of later events, but a formative event on its own.
And while the “sugar revolution” might have reached Haiti sooner or later anyway, it was amplified by France’s quest for new sources of revenue. Louis XIV’s military agenda had been a fiscal disaster for the French. Law — a convicted murderer, and evidently a persuasive salesman — proposed a restructuring scheme that concentrated revenue-raising and other fiscal powers in a monopoly overseas trading company and bank overseen by Law himself.
As France sought economic growth beyond its borders, that led the company to Haiti, to tap its agricultural potential. For that matter, as Ghachem details, multiple countries were expanding their overseas activities — and France, Britain, and Spain also increased slave-trading activities markedly. Within a few decades, Haiti was a center of global sugar production, based on slave labor.
“When the company is seen as the answer to France’s own woes, Haiti becomes enmeshed in the crisis,” Ghachem says. “The Mississippi Bubble of 1719-20 was really a global event. And one of the theaters where it played out most dramatically was Haiti.”
As it happens, in Haiti, the dynamics of this were complex. Local planters did not want to be answerable to Law’s company, and fended it off, but, as Ghachem writes, they “internalized and privatized the financial and economic logic of the System against which they had rebelled, making of it a script for the management of plantation society.”
That society was complex. One of the main elements of “The Colony and the Company” is the exploration of its nuances. Haiti was home to a variety of people, including Jesuit missionaries, European women who had been re-settled there, and maroons (freed or escaped slaves living apart from plantations), among others. Plantation life came with violence, civic instability, and a lack of economic alternatives.
“What’s called the ‘success’ of the colony as a French economic force is really inseparable from the conditions that make it hard for Haiti to survive as an independent nation after the revolution,” Ghachem observes.
Stories in a new light
In public discourse, questions about Haiti’s past are often considered highly relevant to its present, as a near-failed state whose capital city is now substantially controlled by gangs, with no end to violence in sight. Some people draw a through line between the present and Haiti’s revolutionary-era condition. To Ghachem, however, the revolution changed some political dynamics while leaving the underlying conditions of life in the country intact.
“One [view] is that it’s the Haitian Revolution that leads to Haiti’s immiseration and violence and political dysfunction and its economic underdevelopment,” Ghachem says. “I think that argument is wrong. It’s an older problem that goes back to Haiti’s relationship with France in the late 17th and early 18th centuries. The revolution compounds that problem, and does so significantly, because of how France responds. But the terms of Haiti’s subordination are already set.”
Other scholars have praised “The Colony and the Company.” Pernille Røge of the University of Pittsburgh has called it “a multilayered and deeply compelling history rooted in a careful analysis of both familiar and unfamiliar primary sources.”
For his part, Ghachem hopes to persuade anyone interested in Haiti’s past and present to look more expansively at the subject, and consider how the deep roots of Haiti’s economy have helped structure its society.
“I’m trying to keep up with the day job of a historian,” Ghachem says. “Which includes finding stories that aren’t well-known, or are well-known and have aspects that are underappreciated, and telling them in a new light.”
Lincoln Laboratory reports on airborne threat mitigation for the NYC subway
A multiyear program at MIT Lincoln Laboratory to characterize how biological and chemical vapors and aerosols disperse through the New York City subway system is coming to a close. The program, part of the U.S. Department of Homeland Security (DHS) Science and Technology Directorate's Urban Area Security Initiative, builds on other efforts at Lincoln Laboratory to detect chemical and biological threats, validate air dispersion models, and improve emergency protocols in urban areas in case of an airborne attack. The results of this program will inform the New York Metropolitan Transportation Authority (MTA) on how best to install an efficient, cost-effective system for airborne threat detection and mitigation throughout the subway. On a broader scale, the study will help the national security community understand pragmatic chemical and biological defense options for mass transit, critical facilities, and special events.
Trina Vian from the laboratory's Counter–Weapons of Mass Destruction (WMD) Systems Group led this project, which she says had as much to do with air flow and sensors as it did with MTA protocols and NYC commuters. "There are real dangers associated with panic during an alarm. People can get hurt during mass evacuation, or lose trust in a system and the authorities that administer that system, if there are false alarms," she says. "A novel aspect of our project was to investigate effective low-regret response options, meaning those with little operational consequence to responding to a false alarm."
Currently, depending on the severity of the alarm, the MTA's response can include stopping service and evacuating passengers and employees.
A complex environment for testing
For the program, which started in 2019, Vian and her team collected data on how chemical and biological sensors performed in the subway, what factors affected sensor accuracy, and how different mitigation protocols fared in stopping an airborne threat from spreading and removing the threat from a contaminated location. For their tests, they released batches of a safe, custom-developed aerosol simulant within Grand Central Station that they could track with DNA barcodes. Each batch had a different barcode, which allowed the team to differentiate among them and quantitatively assess different combinations of mitigation strategies.
To control and isolate air flow, the team tested static air curtains as well as air filtration systems. They also tested a spray knockdown system, developed by Sandia National Laboratories, designed to reduce and isolate particulate hazards in large-volume areas. The system sprays a fine water mist into the tunnels; the droplets, of a controlled size and concentration and carrying an applied electrostatic charge, attach to threat particulates so that gravity rains the threat material out of the air. The original idea for the system was adapted from the coal mining industry, which uses liquid sprayers to reduce the amount of inhalable soot.
The tests were done in a busy environment, and the team was required to complete trainings on MTA protocols such as track safety and how to interact with the public.
"We had long and sometimes very dirty days," says Jason Han of the Counter–WMD Systems Group, who collected measurements in the tunnels and analyzed the data. "We all wore bright orange contractor safety vests, which made people think we were official employees of the MTA. We would often get approached by people asking for directions!"
At times, issues such as power outages or database errors could disrupt data capture.
"We learned fairly early on that we had to capture daily data backups and keep a daily evolving master list of unique sensor identifiers and locations," says fellow team member Cassie Smith. "We developed workflows and wrote scripts to help automate the process, which ensured successful sensor data capture and attribution."
The team also worked closely with the MTA to make sure their tests and data capture ran smoothly. "The MTA was great at helping us maintain the test bed, doing as much as they could in our physical absence," Vian says.
Calling on industry
Another crucial aspect of the program was to connect with the greater chemical and biological industrial community to solicit their sensors for testing. These partnerships reduced the cost for DHS to bring new sensing technologies into the project, and, in return, participants gained a testing and data collection opportunity within the challenging NYC subway environment.
The team ultimately fielded 16 different sensors, each with varying degrees of maturity, that operated through a range of methods, such as ultraviolet laser–induced fluorescence, polymerase chain reaction, and long-wave infrared spectrometry.
"The partners appreciated the unique data they got and the opportunity to work with the MTA and experience an environment and customer base that they may not have anticipated before," Vian says.
The team finished testing in 2024 and has delivered the final report to DHS. The MTA will use the report to help expand its PROTECT chemical detection system (originally developed by Argonne National Laboratory) from Grand Central Station into adjacent stations, with completion expected in 2026.
"The value of this program cannot be overstated. This partnership with DHS and MIT Lincoln Laboratory has led to the identification of the best-suited systems for the MTA’s unique operating environment," says Michael Gemelli, director of chemical, biological, radiological, and nuclear/WMD detection and mitigation at the New York MTA.
"Other transit authorities can leverage these results to start building effective chemical and biological defense systems for their own specific spaces and threat priorities," adds Benjamin Ervin, leader of Lincoln Laboratory's Counter–WMD Systems Group. "Specific test and evaluation within the operational environment of interest, however, is always recommended to ensure defense system objectives are met."
Building these types of decision-making reports for airborne chemical and biological sensing has been a part of Lincoln Laboratory's mission since the mid-1990s. The laboratory also helped to define priorities in the field when DHS was forming in the early 2000s.
Beyond this study, Lincoln Laboratory is leading several other projects focused on forecasting the impact of novel chemical and biological threats in domains spanning the military, space, agriculture, and health, and on prototyping rapid, autonomous, high-confidence biological identification capabilities for the homeland to provide actionable evidence of hazardous environments.
Learning from punishment
From toddlers’ timeouts to criminals’ prison sentences, punishment reinforces social norms, making it known that an offender has done something unacceptable. At least, that is usually the intent — but the strategy can backfire. When a punishment is perceived as too harsh, observers can be left with the impression that an authority figure is motivated by something other than justice.
It can be hard to predict what people will take away from a particular punishment, because everyone makes their own inferences not just about the acceptability of the act that led to the punishment, but also the legitimacy of the authority who imposed it. A new computational model developed by scientists at MIT’s McGovern Institute for Brain Research makes sense of these complicated cognitive processes, recreating the ways people learn from punishment and revealing how their reasoning is shaped by their prior beliefs.
Their work, reported Aug. 4 in the journal PNAS, explains how a single punishment can send different messages to different people, and even strengthen the opposing viewpoints of groups who hold different opinions about authorities or social norms.
“The key intuition in this model is the fact that you have to be evaluating simultaneously both the norm to be learned and the authority who’s punishing,” says McGovern investigator and John W. Jarve Professor of Brain and Cognitive Sciences Rebecca Saxe, who led the research. “One really important consequence of that is even where nobody disagrees about the facts — everybody knows what action happened, who punished it, and what they did to punish it — different observers of the same situation could come to different conclusions.”
For example, she says, a child who is sent to timeout after biting a sibling might interpret the event differently than the parent. One might see the punishment as proportional and important, teaching the child not to bite. But if the biting, to the toddler, seemed a reasonable tactic in the midst of a squabble, the punishment might be seen as unfair, and the lesson will be lost.
People draw on their own knowledge and opinions when they evaluate these situations — but to study how the brain interprets punishment, Saxe and graduate student Setayesh Radkani wanted to take those personal ideas out of the equation. They needed a clear understanding of the beliefs that people held when they observed a punishment, so they could learn how different kinds of information altered their perceptions. So Radkani set up scenarios in imaginary villages where authorities punished individuals for actions that had no obvious analog in the real world.
Participants observed these scenarios in a series of experiments, with different information offered in each one. In some cases, for example, participants were told that the person being punished was either an ally or a competitor of the authority, whereas in other cases, the authority’s possible bias was left ambiguous.
“That gives us a really controlled setup to vary prior beliefs,” Radkani explains. “We could ask what people learn from observing punitive decisions with different severities, in response to acts that vary in their level of wrongness, by authorities that vary in their level of different motives.”
For each scenario, participants were asked to evaluate four factors: how much the authority figure cared about justice; the selfishness of the authority; the authority’s bias for or against the individual being punished; and the wrongness of the punished act. The research team asked these questions when participants were first introduced to the hypothetical society, then tracked how their responses changed after they observed the punishment. Across the scenarios, participants’ initial beliefs about the authority and the wrongness of the act shaped the extent to which those beliefs shifted after they observed the punishment.
Radkani was able to replicate these nuanced interpretations using a cognitive model framed around an idea that Saxe’s team has long used to think about how people interpret the actions of others. That is, to make inferences about others’ intentions and beliefs, we assume that people choose actions that they expect will help them achieve their goals.
To apply that concept to the punishment scenarios, Radkani developed a model that evaluates the meaning of a punishment (an action aimed at achieving a goal of the authority) by considering the harm associated with that punishment; its costs or benefits to the authority; and its proportionality to the violation. By assessing these factors, along with prior beliefs about the authority and the punished act, the model was able to predict people’s responses to the hypothetical punishment scenarios, supporting the idea that people use a similar mental model. “You need to have them consider those things, or you can’t make sense of how people understand punishment when they observe it,” Saxe says.
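The joint-inference idea above can be loosely illustrated in code. The sketch below is not the authors’ actual model: the hypothesis grids, the likelihood function, and every numeric value are invented for illustration. It shows only the qualitative point from the study, that two observers who see the identical harsh punishment but hold different priors about the authority’s motives end up with different beliefs about how wrong the act was.

```python
import itertools
import math

# Illustrative sketch only: an observer jointly infers (a) how wrong the
# punished act was and (b) how much the authority cares about justice,
# from the severity of an observed punishment. All grids and numbers
# below are assumptions, not the paper's model.

WRONGNESS = [0.0, 0.5, 1.0]  # hypotheses: how wrong the act was
JUSTICE = [0.0, 0.5, 1.0]    # hypotheses: authority's justice motive

def likelihood(severity, wrongness, justice):
    # Assumed form: a justice-motivated authority punishes in proportion
    # to wrongness; an unjust one punishes harshly regardless.
    expected = justice * wrongness + (1 - justice) * 0.9
    return math.exp(-((severity - expected) ** 2) / 0.1)

def update(prior, severity):
    # Standard Bayesian update over the joint hypothesis grid.
    post = {h: prior[h] * likelihood(severity, *h)
            for h in itertools.product(WRONGNESS, JUSTICE)}
    z = sum(post.values())
    return {h: p / z for h, p in post.items()}

hypotheses = list(itertools.product(WRONGNESS, JUSTICE))

# Two observers see the same harsh punishment (severity = 1.0) but start
# with different beliefs about the authority's motives.
trusting = {(w, j): (2 if j == 1.0 else 0.5) / 9 for (w, j) in hypotheses}
skeptical = {(w, j): (2 if j == 0.0 else 0.5) / 9 for (w, j) in hypotheses}

def p_act_wrong(belief):
    # Marginal probability that the act was maximally wrong.
    return sum(p for (w, j), p in belief.items() if w == 1.0)

after_trusting = p_act_wrong(update(trusting, 1.0))
after_skeptical = p_act_wrong(update(skeptical, 1.0))
# The trusting observer comes away more convinced the act was wrong than
# the skeptical one: same evidence, diverging conclusions.
```

Under these toy assumptions the trusting observer’s belief that the act was seriously wrong rises well above the skeptical observer’s, mirroring the divergence the model predicts.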
Even though the team designed their experiments to preclude preconceived ideas about the people and actions in their imaginary villages, not everyone drew the same conclusions from the punishments they observed. Saxe’s group found that participants’ general attitudes toward authority influenced their interpretation of events. Those with more authoritarian attitudes — assessed through a standard survey — tended to judge punished acts as more wrong and authorities as more motivated by justice than other observers.
“If we differ from other people, there’s a knee-jerk tendency to say, ‘either they have different evidence from us, or they’re crazy,’” Saxe says. Instead, she says, “It’s part of the way humans think about each other’s actions.”
“When a group of people who start out with different prior beliefs get shared evidence, they will not end up necessarily with shared beliefs. That’s true even if everybody is behaving rationally,” says Saxe.
This way of thinking also means that the same action can simultaneously strengthen opposing viewpoints. The Saxe lab’s modeling and experiments showed that when those viewpoints shape individuals’ interpretations of future punishments, the groups’ opinions will continue to diverge. For instance, a punishment that seems too harsh to a group who suspects an authority is biased can make that group even more skeptical of the authority’s future actions. Meanwhile, people who see the same punishment as fair and the authority as just will be more likely to conclude that the authority figure’s future actions are also just.
“You will get a vicious cycle of polarization, staying and actually spreading to new things,” says Radkani.
The researchers say their findings point toward strategies for communicating social norms through punishment. “It is exactly sensible in our model to do everything you can to make your action look like it’s coming out of a place of care for the long-term outcome of this individual, and that it’s proportional to the norm violation they did,” Saxe says. “That is your best shot at getting a punishment interpreted pedagogically, rather than as evidence that you’re a bully.”
Nevertheless, she says that won’t always be enough. “If the beliefs are strong the other way, it’s very hard to punish and still sustain a belief that you were motivated by justice.”
Joining Saxe and Radkani on the paper is Joshua Tenenbaum, MIT professor of brain and cognitive sciences. The study was funded, in part, by the Patrick J. McGovern Foundation.
A boost for the precision of genome editing
The U.S. Food and Drug Administration’s recent approval of the first CRISPR-Cas9–based gene therapy has marked a major milestone in biomedicine, validating genome editing as a promising treatment strategy for disorders like sickle cell disease, muscular dystrophy, and certain cancers.
CRISPR-Cas9, often likened to “molecular scissors,” allows scientists to cut DNA at targeted sites to snip, repair, or replace genes. But despite its power, Cas9 poses a critical safety risk: The active enzyme can linger in cells and cause unintended DNA breaks — so-called off-target effects — which may trigger harmful mutations in healthy genes.
Now, researchers in the labs of Ronald T. Raines, MIT professor of chemistry, and Amit Choudhary, professor of medicine at Harvard Medical School, have engineered a precise way to turn Cas9 off after its job is done — significantly reducing off-target effects and improving the clinical safety of gene editing. Their findings are detailed in a new paper published in the Proceedings of the National Academy of Sciences (PNAS).
“To ‘turn off’ Cas9 after it achieves its intended genome-editing outcome, we developed the first cell-permeable anti-CRISPR protein system,” says Raines, the Roger and Georges Firmenich Professor of Natural Products Chemistry. “Our technology reduces the off-target activity of Cas9 and increases its genome-editing specificity and clinical utility.”
The new tool — called LFN-Acr/PA — uses a protein-based delivery system to ferry anti-CRISPR proteins into human cells rapidly and efficiently. While natural Type II anti-CRISPR proteins (Acrs) are known to inhibit Cas9, their use in therapy has been limited because they’re often too bulky or charged to enter cells, and conventional delivery methods are too slow or ineffective.
LFN-Acr/PA overcomes these hurdles using a component derived from anthrax toxin to introduce Acrs into cells within minutes. Even at picomolar concentrations, the system shuts down Cas9 activity with remarkable speed and precision — boosting genome-editing specificity up to 40 percent.
Bradley L. Pentelute, MIT professor of chemistry, is an expert on the anthrax delivery system, and is also an author of the paper.
The implications of this advance are wide-ranging. With patent applications filed, LFN-Acr/PA represents a faster, safer, and more controllable means of harnessing CRISPR-Cas9, opening the door to more-refined gene therapies with fewer unintended consequences.
The research was supported by the National Institutes of Health and a Gilliam Fellowship from the Howard Hughes Medical Institute awarded to lead author Axel O. Vera, a graduate student in the Department of Chemistry.
Materials Research Laboratory: Driving interdisciplinary materials research at MIT
Materials research thrives across MIT, spanning disciplines and departments. Recent breakthroughs include strategies for securing sustainable supplies of nickel — critical to clean-energy technologies (Department of Materials Science and Engineering); the discovery of unexpected magnetism in atomically thin quantum materials (Department of Physics); and the development of adhesive coatings that reduce scarring around medical implants (departments of Mechanical Engineering and Civil and Environmental Engineering).
Beyond individual projects, the MIT Materials Research Laboratory (MRL) fosters broad collaboration through strategic initiatives such as the Materials Systems Laboratory and SHINE (Sustainability and Health Initiative for Net Positive Enterprise). These efforts bring together academia, government, and industry to accelerate innovation in sustainability, energy use, and advanced materials.
MRL, a hub that connects and supports the Institute’s materials research community, is at the center of these efforts. “MRL serves as a home for the entire materials research community at MIT,” says C. Cem Tasan, the POSCO Associate Professor of Metallurgy in the Department of Materials Science and Engineering who became MRL director in April. “Our goal is to make it easier for our faculty to conduct their extraordinary research.”
A storied history
Established in 2017, the MRL brings together more than 30 researchers and builds on a 48-year legacy of innovation. It was formed through the merger of the MIT Materials Processing Center (MPC) and the Center for Materials Science and Engineering (CMSE), two institutions that helped lay the foundation for MIT’s global leadership in materials science.
Over the years, research supported by MPC and CMSE has led to transformative technologies and successful spinout companies. Notable examples include AMSC, based on advances in superconductivity; OmniGuide, which developed cutting-edge optical fiber technologies; and QD Vision, a pioneer in quantum dot technology acquired by Samsung in 2016. Another landmark achievement was the development of the first germanium laser to operate at room temperature — a breakthrough now used in optical communications.
Enabling research through partnership and support
MRL is launching targeted initiatives to connect MIT researchers with industry partners around specific technical challenges. Each initiative will be led by a junior faculty member working closely with MRL to identify a problem that aligns with their research expertise and is relevant to industry needs.
Through multi-year collaborations with participating companies, faculty can explore early-stage solutions in partnership with postdocs or graduate students. These initiatives are designed to be agile and interdisciplinary, with the potential to grow into major, long-term research programs.
Behind-the-scenes support, front-line impact
MRL provides critical infrastructure that enables faculty to focus on discovery, not logistics. “MRL works silently in the background, where every problem a principal investigator has related to the administration of materials research is solved with efficiency, good organization, and minimum effort,” says Tasan.
This quiet but powerful support spans multiple areas:
- The finance team manages grants and helps secure new funding opportunities.
- The human resources team supports the hiring of postdocs.
- The communications team amplifies the lab’s impact through compelling stories shared with the public and funding agencies.
- The events team plans and coordinates conferences, seminars, and symposia that foster collaboration within the MIT community and with external partners.
Together, these functions ensure that research at MRL runs smoothly and effectively — from initial idea to lasting innovation.
Leadership with a vision
Tasan, who also leads a research group focused on metallurgy, says he took on the directorship because “I thrive on new challenges.” He also saw the role as an opportunity to contribute more broadly to MIT.
“I believe MRL can play an even greater role in advancing materials research across the Institute, and I’m excited to help make that happen,” he says.
Recent MRL initiatives
MRL has supported a wide range of research programs in partnership with major industry leaders, including Apple, Ford, Microsoft, Rio Tinto, IBM, Samsung, and Texas Instruments, as well as organizations such as Advanced Functional Fabrics of America, Allegheny Technologies, Ericsson, and the Semiconductor Research Corp.
MRL researchers are addressing critical global challenges in energy efficiency, environmental sustainability, and the development of next-generation material systems.
- Professor Antoine Allanore is advancing a direct process for wire production from sulfide concentrates, offering a more efficient and sustainable alternative to traditional methods.
- Professor Joe Checkelsky is leading pioneering research on scalable, high-temperature quantum materials in the realm of quantum transport.
- Professor Pablo Jarillo-Herrero is making significant progress with two-dimensional materials and their heterostructures.
- Professor Nuh Gedik explores ultrafast electronic and structural dynamics and light-matter interactions.
- Professor Gregory Rutledge spearheaded a National Institute of Standards and Technology Rapid Assistance for Coronavirus Economic Response (NIST RACER)-sponsored initiative to develop biodegradable nanofiber-based personal protective equipment, aimed at improving manufacturing automation, diversifying supply chains, and reducing environmental impact.
- Professor Elsa Olivetti serves as the lead principal investigator at MIT for REMADE: the Institute for Reducing Embodied-energy and Decreasing Emissions. Her research on fiber recovery and post-consumer resin processing directly supports REMADE’s mission to enhance material circularity and reduce energy use by 50 percent by 2027.
- Randy Kirchain is modeling metals markets under decarbonization, and developing greener construction materials.
- Anu Agarwal is spearheading efforts to build a sustainable microchip manufacturing ecosystem.
New laser “comb” can enable rapid identification of chemicals with extreme precision
Optical frequency combs are specially designed lasers that act like rulers to accurately and rapidly measure specific frequencies of light. They can be used to detect and identify chemicals and pollutants with extremely high precision.
Frequency combs would be ideal for remote sensors or portable spectrometers because they can enable accurate, real-time monitoring of multiple chemicals without complex moving parts or external equipment.
But developing frequency combs with high enough bandwidth for these applications has been a challenge. Often, researchers must add bulky components that limit scalability and performance.
Now, a team of MIT researchers has demonstrated a compact, fully integrated device that uses a carefully crafted mirror to generate a stable frequency comb with very broad bandwidth. The mirror they developed, along with an on-chip measurement platform, offers the scalability and flexibility needed for mass-producible remote sensors and portable spectrometers. This development could enable more accurate environmental monitors that can identify multiple harmful chemicals from trace gases in the atmosphere.
“The broader the bandwidth a spectrometer has, the more powerful it is, but dispersion is in the way. Here we took the hardest problem that limits bandwidth and made it the centerpiece of our study, addressing every step to ensure robust frequency comb operation,” says Qing Hu, Distinguished Professor in Electrical Engineering and Computer Science at MIT, principal investigator in the Research Laboratory of Electronics, and senior author on an open-access paper describing the work.
He is joined on the paper by lead author Tianyi Zeng PhD ’23; as well as Yamac Dikmelik of General Dynamics Mission Systems; Feng Xie and Kevin Lascola of Thorlabs Quantum Electronics; and David Burghoff SM ’09, PhD ’14, an assistant professor at the University of Texas at Austin. The research appears today in Light: Science & Applications.
Broadband combs
An optical frequency comb produces a spectrum of equally spaced laser lines, which resemble the teeth of a comb.
Scientists can generate frequency combs using several types of lasers for different wavelengths. By generating combs with a laser that produces long-wave infrared radiation, such as a quantum cascade laser, they can perform high-resolution sensing and spectroscopy.
In dual-comb spectroscopy (DCS), the beam of one frequency comb travels straight through the system and strikes a detector at the other end. The beam of the second frequency comb passes through a chemical sample before striking the same detector. Using the results from both combs, scientists can faithfully replicate the chemical features of the sample at much lower frequencies, where signals can be easily analyzed.
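The downconversion at the heart of DCS can be made concrete with a toy numerical sketch. All values below are hypothetical, chosen only to show the scaling, and are not taken from the paper: two combs with slightly different line spacings produce beat notes whose radio frequencies faithfully mirror the optical spectrum.

```python
# Toy sketch of dual-comb downconversion (all numbers hypothetical).
f_rep = 10e9   # line spacing of comb 1, in Hz (10 GHz)
delta = 1e5    # comb 2 is spaced slightly differently: f_rep + delta

def beat_frequency(n):
    """Tooth n of comb 1 beats with tooth n of comb 2 at n * delta."""
    return n * delta

# A chemical feature sampled by tooth n = 3000 sits 30 THz above the comb
# origin optically, but shows up in the detector signal at only 300 MHz.
n = 3000
optical_offset = n * f_rep        # 3.0e13 Hz = 30 THz
rf_offset = beat_frequency(n)     # 3.0e8 Hz = 300 MHz

# The whole optical spectrum is compressed by a factor f_rep / delta.
compression = f_rep / delta       # 100,000x in this toy example
print(optical_offset, rf_offset, compression)
```

Because every tooth is downshifted by the same factor, the RF signal at the detector is a scaled replica of the optical absorption spectrum, which is what lets ordinary electronics analyze it.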
The frequency combs must have high bandwidth; otherwise, they can only detect chemical signatures within a narrow frequency range, which could lead to false alarms or inaccurate results.
Dispersion is the most important factor that limits a frequency comb’s bandwidth. If there is dispersion, the laser lines are not evenly spaced, which is incompatible with the formation of frequency combs.
“With long wave infrared radiation, the dispersion will be very high. There is no way to get around it, so we have to find a way to compensate for it or counteract it by engineering our system,” Hu says.
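The problem Hu describes can be sketched with a toy model. The numbers here are invented for illustration: even a small quadratic dispersion term makes the spacing between adjacent comb lines drift across the spectrum, so the lines no longer form the evenly spaced "ruler" a comb requires.

```python
# Toy model of how dispersion spoils a comb's even spacing (numbers invented).
f0 = 0.0       # comb offset frequency, Hz
f_rep = 10e9   # nominal line spacing, Hz
beta = 50.0    # small quadratic dispersion term, Hz

def line(n):
    """With dispersion, line n shifts by beta * n**2 from its ideal position."""
    return f0 + n * f_rep + beta * n**2

# The spacing between adjacent lines, line(n+1) - line(n), equals
# f_rep + beta * (2n + 1), so it grows with n: the "ruler" is warped.
spacing_low = line(101) - line(100)     # f_rep + 201 * beta
spacing_high = line(1001) - line(1000)  # f_rep + 2001 * beta
print(spacing_low - f_rep, spacing_high - f_rep)
```

A dispersion-compensating element such as a DCM works by imposing an opposite frequency-dependent delay, cancelling the `beta`-like term so the spacing stays uniform across the whole band.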
Many existing approaches aren’t flexible enough to be used in different scenarios or don’t enable high enough bandwidth.
Hu’s group previously solved this problem in a different type of frequency comb, one that used terahertz waves, by developing a double-chirped mirror (DCM).
A DCM is a special type of optical mirror that has multiple layers with thicknesses that change gradually from one end to the other. They found that this DCM, which has a corrugated structure, could effectively compensate for dispersion when used with a terahertz laser.
“We tried to borrow this trick and apply it to an infrared comb, but we ran into lots of challenges,” Hu says.
Because infrared waves are 10 times shorter than terahertz waves, fabricating the new mirror required an extreme level of precision. At the same time, they needed to coat the entire DCM in a thick layer of gold to dissipate heat during laser operation. Plus, their dispersion measurement system, designed for terahertz waves, wouldn’t work with infrared waves, whose frequencies are about 10 times higher.
“After more than two years of trying to implement this scheme, we reached a dead end,” Hu says.
A new solution
Ready to throw in the towel, the team realized something they had missed. They had designed the mirror with corrugation to compensate for the lossy terahertz laser, but infrared radiation sources aren’t as lossy.
This meant they could use a standard DCM design to compensate for dispersion, which is compatible with infrared radiation. However, they still needed to create curved mirror layers to capture the beam of the laser, which made fabrication much more difficult than usual.
“The adjacent layers of mirror differ only by tens of nanometers. That level of precision precludes standard photolithography techniques. On top of that, we still had to etch very deeply into the notoriously stubborn material stacks. Achieving those critical dimensions and etch depths was key to unlocking broadband comb performance,” Zeng says. In addition to precisely fabricating the DCM, they integrated the mirror directly onto the laser, making the device extremely compact. The team also developed a high-resolution, on-chip dispersion measurement platform that doesn’t require bulky external equipment.
“Our approach is flexible. As long as we can use our platform to measure the dispersion, we can design and fabricate a DCM that compensates for it,” Hu adds.
Taken together, the DCM and on-chip measurement platform enabled the team to generate stable infrared laser frequency combs that had far greater bandwidth than can usually be achieved without a DCM.
In the future, the researchers want to extend their approach to other laser platforms that could generate combs with even greater bandwidth and higher power for more demanding applications.
“These researchers developed an ingenious nanophotonic dispersion compensation scheme based on an integrated air–dielectric double-chirped mirror. This approach provides unprecedented control over dispersion, enabling broadband comb formation at room temperature in the long-wave infrared. Their work opens the door to practical, chip-scale frequency combs for applications ranging from chemical sensing to free-space communications,” says Jacob B. Khurgin, a professor at the Johns Hopkins University Whiting School of Engineering, who was not involved with this paper.
This work is funded, in part, by the U.S. Defense Advanced Research Projects Agency (DARPA) and the Gordon and Betty Moore Foundation.
Graduate work with an impact — in big cities and on campus
While working to boost economic development in Detroit in the late 2010s, Nick Allen found he was running up against a problem.
The city was trying to spur more investment after long-term industrial flight to suburbs and other states. Relying more heavily on property taxes for revenue, the city was negotiating individualized tax deals with prospective businesses. That’s hardly a scenario unique to Detroit, but such deals involved lengthy approval processes that slowed investment decisions and made smaller projects seem unrealistic.
Moreover, while creating small pockets of growth, these individualized tax abatements were not changing the city’s broader fiscal structure. They also favored those with leverage and resources to work the system for a break.
“The thing you really don’t want to do with taxes is have very particular, highly procedural ways of adjusting the burdens,” says Allen, now a doctoral student in MIT’s Department of Urban Studies and Planning (DUSP). “You want a simple process that fits people’s ideas about what fairness looks like.”
So, after starting his PhD program at MIT, Allen kept studying urban fiscal policy. Along with a group of other scholars, he has produced research papers making the case for a land-value tax — a common tax rate on land that, combined with reduced property taxes, could raise more local revenue by encouraging more city-wide investment, even while lowering tax burdens on residents and businesses. As a bonus, it could also reduce foreclosures.
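The mechanics of the proposal can be illustrated with invented numbers: a land-value shift taxes land at a higher rate and structures at a lower one, so improving a building raises the owner's bill far less than under a flat property tax, while vacant land becomes more expensive to hold.

```python
# Invented numbers contrasting a flat property tax with a land-value shift.
land_value = 50_000
building_value = 150_000

# Flat property tax: one rate on land and structures together.
property_tax = 0.02 * (land_value + building_value)

# Land-value shift: a higher rate on land, a much lower one on structures,
# so adding or improving a building barely raises the bill.
split_tax = 0.05 * land_value + 0.005 * building_value

print(round(property_tax), round(split_tax))  # roughly 4000 vs 3250
```

Under the flat tax, every dollar of new construction is taxed at the full rate; under the split, the marginal rate on improvements is a quarter of that, which is the investment incentive the research describes.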
In the last few years, this has become a larger topic in urban policy circles. The mayor of Detroit has endorsed the idea. The New York Times has written about the work of Allen and his colleagues. The land-value tax is now a serious policy option.
It is unusual for a graduate student to have their work become part of a prominent policy debate. But then, Allen is an unusual student. At MIT, he has not just conducted influential research in his field, but thrown himself into campus-based work with substantial impact as well. Allen has served on task forces assessing student stipend policy, expanding campus housing, and generating ideas for dining program reform.
For all these efforts, in May, Allen received the Karl Taylor Compton Prize, MIT’s highest student honor. At the ceremony, MIT Chancellor Melissa Nobles observed that Allen’s work helped Institute stakeholders “fully understand complex issues, ensuring his recommendations are not only well-informed but also practical and impactful.”
Looking to revive growth
Allen is a Minnesota native who received his BA from Yale University. In 2015, he enrolled in graduate school at MIT, receiving his master’s in city planning from DUSP in 2017. At the time, Allen worked on the Malaysia Sustainable Cities Project, headed by Professor Lawrence Susskind. At one point Allen spent a couple of months in a small Malaysian village studying the effects of coastal development on local fishing and farming.
Malaysia may be different from Michigan, but the issues Allen encountered in Asia were similar to those he wanted to keep studying back in the U.S.: finding ways to finance growth.
“The core interests I have are around real estate, the physical environment, and these fiscal policy questions of how this all gets funded and what the responsibilities are of the state and private markets,” Allen says. “And that brought me to Detroit.”
Specifically, that landed him at the Detroit Economic Growth Corporation, a city-chartered development agency that works to facilitate new investment. There, Allen started grappling with the city’s revenue problems. Once heralded as the richest city in America, Detroit has seen a lot of property go vacant, and has hiked property taxes on existing structures to compensate for that. Those rates then discouraged further investment and building.
To be sure, the challenges Detroit has faced stem from far more than tax policy and relate to many macroscale socioeconomic factors, including suburban flight, the shift of manufacturing to states with nonunion employees, and much more. But changing tax policy can be one lever to pull in response.
“It’s difficult to figure out how to revive growth in a place that’s been cannibalized by its losses,” Allen says.
Tasked with underwriting real estate projects, Allen started cataloguing the problems arising from Detroit’s property tax reliance, and began looking at past economics work on optimal tax policy in search of alternatives.
“There’s a real nose-to-the-ground empiricism you start with, asking why we have a system nobody would choose,” Allen says. “There were two parts to that, for me. One was initially looking at the difficulty of making individual projects work, from affordable housing to big industrial plants, along with, secondly, this wave of tax foreclosures in the city.”
Engineering, but for policy
After two years in Detroit, Allen returned to MIT, this time as a doctoral student in DUSP and with a research program oriented around the issues he had worked on. In pursuing that, Allen has worked closely with John E. Anderson, an economist at the University of Nebraska at Lincoln. With a nationwide team of economists convened by the Lincoln Institute of Land Policy, they worked to address the city’s questions on property tax reform.
One paper used current data to show that a land-value tax should lower tax-connected foreclosures in the city. Two other papers study the use of the tax in certain parts of Pennsylvania, one of the few states where it has been deployed. There, the researchers concluded, the land-value tax both leads to greater business development and raises property values.
“What we found overall, looking at past tax reduction in Detroit and other cities, is that in reducing the rate at which people in deep tax distress go through foreclosure, it has a fairly large effect,” Allen says. “It has some effect on allowing business to reinvest in properties. We are seeing a lot more attraction of investment. And it’s got the virtue of being a rules-based system.”
Those empirical results, he notes, helped confirm the sense that a policy change could help growth in Detroit.
“That really validated the hunch we were following,” Allen says.
The widespread attention the policy proposal has garnered could not really have been predicted. The tax has not yet been implemented in Detroit, although it has been a prominent part of civic debates there. Allen has been asked to consult on tax policy by officials in numerous large cities, and is hopeful the concept will gain still more traction.
Meanwhile, at MIT, Allen has one more year to go in his doctoral program. On top of his academic research, he has been an active participant in Institute matters, helping reshape graduate-school policies on multiple fronts.
For instance, Allen was part of the Graduate Housing Working Group, whose efforts helped spur MIT to build Graduate Junction, a new housing complex for 675 graduate students on Vassar Street in Cambridge, Massachusetts. The name also refers to the Grand Junction rail line that runs nearby; the complex formally opened in 2024.
“Innovative places struggle to build housing fast enough,” Allen said at the time Graduate Junction opened, also noting that “new housing for students reduces price pressure on the rest of the Cambridge community.”
Commenting on it now, he adds, “Maybe to most people graduate housing policy doesn’t sound that fun, but to me these are very absorbing questions.”
And ultimately, Allen says, the intellectual problems in either domain can be similar, whether he is working on city policy issues or campus enhancements.
“The reason I think planning fits so well here at MIT is, a lot of what I do is like policy engineering,” Allen says. “It’s really important to understand system constraints, and think seriously about finding solutions that can be built to purpose. I think that’s why I’ve felt at home here at MIT, working on these outside public policy topics, and projects for the Institute. You need to take seriously what people say about the constraints in their lives.”
Professor John Joannopoulos, photonics pioneer and Institute for Soldier Nanotechnologies director, dies at 78
John “JJ” Joannopoulos, the Francis Wright Davis Professor of Physics at MIT and director of the MIT Institute for Soldier Nanotechnologies (ISN), passed away on Aug. 17. He was 78.
Joannopoulos was a prolific researcher in theoretical condensed-matter physics and an early pioneer in the study and application of photonic crystals. Many of his discoveries about how materials can be made to manipulate light have led to transformative and life-saving technologies, from chip-based optical waveguides, to wireless energy transfer, to health-monitoring textiles, to precision light-based surgical tools.
His remarkable career of over 50 years was spent entirely at MIT, where he was known as much for his generous and unwavering mentorship as for his contributions to science. He made a special point to keep up rich and meaningful collaborations with many of his former students and postdocs, dozens of whom have gone on to faculty positions at major universities, and to leadership roles in the public and private sectors. In his five decades at MIT, he made lasting connections across campus, both in service of science, and friendship.
“A scientific giant, inspiring leader, and a masterful communicator, John carried a generous and loving heart,” says Yoel Fink PhD ’00, an MIT professor of materials science and engineering who was Joannopoulos’ former student and a longtime collaborator. “He chose to see the good in people, keeping his mind and heart always open. Asking little for himself, he gave everything in care of others. John lived a life of deep impact and meaning — savoring the details of truth-seeking, achieving rare discoveries and mentoring generations of students to achieve excellence. With warmth, humor, and a never-ending optimism, JJ left an indelible impact on science and on all who had the privilege to know him. Above all, he was a loving husband, father, grandfather, friend, and mentor.”
“In the end, the most remarkable thing about him was his unmatched humanity, his ability to make you feel that you were the most important thing in the world that deserved his attention, no matter who you were,” says Raul Radovitzky, ISN associate director and the Jerome C. Hunsaker Professor in MIT’s Department of Aeronautics and Astronautics. “The legacy he leaves is not only in equations and innovations, but in the lives he touched, the minds he inspired, and the warmth he spread in every room he entered.”
“JJ was a very special colleague: a brilliant theorist who was also adept at identifying practical applications; a caring and inspiring mentor of younger scientists; a gifted teacher who knew every student in his class by name,” says Deepto Chakrabarty ’88, the William A. M. Burden Professor in Astrophysics and head of MIT’s Department of Physics. “He will be deeply missed.”
Layers of light
John Joannopoulos was born in 1947 in New York City to parents who had both emigrated from Greece. His father was a playwright, and his mother worked as a psychologist. From an early age, Joannopoulos knew he wanted to be a physicist — mainly because the subject was his most challenging in school. In a recent interview with MIT News, he enthusiastically shared: “You probably wouldn’t believe this, but it’s true: I wanted to be a physics professor since I was in high school! I loved the idea of being able to work with students, and being able to have ideas.”
He attended the University of California at Berkeley, where he received a bachelor’s degree in 1968, and a PhD in 1974, both in physics. That same year, he joined the faculty at MIT, where he would spend his 50-plus-year career — though at the time, the chances of gaining a long-term foothold at the Institute seemed slim, as Joannopoulos told MIT News.
“The chair of the physics department was the famous nuclear physicist, Herman Feshbach, who told me the probability that I would get tenure was something like 30 percent,” Joannopoulos recalled. “But when you’re young and just starting off, it was certainly better than zero, and I thought, that was fine — there was hope down the line.”
Starting out at MIT, Joannopoulos knew exactly what he wanted to do. He quickly set up a group to study theoretical condensed-matter physics, and specifically, ab initio physics, meaning physics “from first principles.” In this initial work, he sought to build theoretical models to predict the electronic behavior and structure of materials, based solely on the atomic numbers of the atoms in a material. Such foundational models could be applied to understand and design a huge range of materials and structures.
Then, in the early 1990s, Joannopoulos took a research turn, spurred by a paper by physicist Eli Yablonovitch at the University of California at Los Angeles, who did some preliminary work on materials that can affect the behavior of photons, or particles of light. Joannopoulos recognized a connection with his first-principles work on electrons. Along with his students, he applied that approach to predict the fundamental behavior of photons in different classes of materials. His group was one of the first to pioneer the field of photonic crystals and the study of how materials can be manipulated at the nanoscale to control the behavior of light traveling through them. In 1995, Joannopoulos co-authored the first textbook on the subject.
And in 1998, he took on a more-than-century-old assumption about how light should reflect, and turned it on its head. That assumption predicted that light, shining onto a structure made of multiple refractive layers, could reflect back, but only for a limited range of angles. But in fact, Joannopoulos and his group showed that the opposite is true: If the structure’s layers followed particular design criteria, the structure as a whole could reflect light coming from any and all angles. This structure was called the “perfect mirror.”
That insight led to another: If the structure were rolled into a tube, the resulting hollow fiber could act as a perfect optical conduit. Any light traveling through the fiber would reflect and bounce around within the fiber, with none scattering away. Joannopoulos and his group applied this insight to develop the first precision “optical scalpel” — a fiber that can be safely handled, while delivering a highly focused laser, precise and powerful enough to perform delicate surgical procedures. Joannopoulos helped to commercialize the new tool with a startup, Omniguide, that has since provided the optical scalpel to assist in hundreds of thousands of medical procedures around the world.
Legendary mentor
In 2006, Joannopoulos took the helm as director of MIT’s Institute for Soldier Nanotechnologies — a post he steadfastly held for almost 20 years. During his dedicated tenure, he worked with ISN members across campus and in departments outside his own, getting to know and champion their work. He facilitated countless collaborations between MIT faculty, industry partners, and the U.S. Department of Defense. Among the many projects he raised support for were innovations in lightweight armor, hyperspectral imaging, energy-efficient batteries, and smart and responsive fabrics.
Joannopoulos helped to translate many basic science insights into practical applications. He was a cofounder of six spinoff companies based on his fundamental research, and helped to create dozens more, which have advanced technologies ranging from laser surgery tools and wireless electric power transmission to transparent display technologies and optical computing. He was awarded 126 patents for his many discoveries and authored over 750 peer-reviewed papers.
In recognition of his wide impact and contributions, Joannopoulos was elected to the National Academy of Sciences and the American Academy of Arts and Sciences. He was also a fellow of both the American Physical Society and the American Association for the Advancement of Science. Over his 50-plus-year career, he was the recipient of many scientific awards and honors including the Max Born Award, and the Aneesur Rahman Prize in Computational Physics. Joannopoulos was also a gifted classroom teacher, and was recognized at MIT with the Buechner Teaching Prize in Physics and the Graduate Teaching Award in Science.
This year, Joannopoulos was the recipient of MIT’s Killian Achievement Award, which recognizes the extraordinary lifetime contributions of a member of the MIT faculty. In addition to the many accomplishments Joannopoulos has made in science, the award citation emphasized his lasting impact on the generations of students he has mentored:
“Professor Joannopoulos has served as a legendary mentor to generations of students, inspiring them to achieve excellence in science while at the same time facilitating the practical benefit to society through entrepreneurship,” the citation reads. “Through all of these individuals he has impacted — not to mention their academic descendants — Professor Joannopoulos has had a vast influence on the development of science in recent decades.”
“JJ was an amazing scientist: He published hundreds of papers that have been cited close to 200,000 times. He was also a serial entrepreneur: Companies he cofounded raised hundreds of millions of dollars and employed hundreds of people,” says MIT Professor Marin Soljacic ’96, a former postdoc under Joannopoulos who with him cofounded a startup, Witricity. “He was an amazing mentor, a close friend, and like a scientific father to me. He always had time for me, any time of the day, and as much as I needed.”
Indeed, Joannopoulos strived to meaningfully support his many students. In the classroom, he “was legendary,” says friend and colleague Patrick Lee ’66, PhD ’70, who recalls that Joannopoulos would make a point of memorizing the names and faces of more than 100 students on the first day of class, and calling them each by their first name, starting on the second day, and for the rest of the term.
What’s more, Joannopoulos encouraged graduate students and postdocs to follow their ideas, even when they ran counter to his own.
“John did not produce clones,” says Lee, who is an MIT professor emeritus of physics. “He showed them the way to do science by example, by caring and by sharing his optimism. I have never seen someone so deeply loved by his students.”
Even students who stepped off the photonics path have kept in close contact with their mentor, as former student and MIT professor Josh Winn ’94, SM ’94, PhD ’01 has done.
“Even though our work together ended more than 25 years ago, and I now work in a different field, I still feel like part of the Joannopoulos academic family,” says Winn, who is now a professor of astrophysics at Princeton University. “It’s a loyal group with branches all over the world. We even had our own series of conferences, organized by former students to celebrate John’s 50th, 60th, and 70th birthdays. Most professors would consider themselves fortunate to have even one such ‘festschrift’ honoring their legacy.”
MIT professor of mathematics Steven Johnson ’95, PhD ’01, a former student and frequent collaborator, has experienced personally, and seen many times over, Joannopoulos’ generous and open-door mentorship.
“In every collaboration, I’ve unfailingly observed him to cast a wide net to value multiple voices, to ensure that everyone feels included and valued, and to encourage collaborations across groups and fields and institutions,” Johnson says. “Kind, generous, and brimming with infectious enthusiasm and positivity, he set an example so many of his lucky students have striven to follow.”
Joannopoulos started at MIT around the same time as Marc Kastner, who had a nearby office on the second floor of Building 13.
“I would often hear loud arguments punctuated by boisterous laughter, coming from John’s office, where he and his students were debating physics,” recalls Kastner, who is the Donner Professor of Physics Emeritus at MIT. “I am sure this style of interaction is what made him such a great mentor.”
“He exuded such enthusiasm for science and good will to others that he was just good fun to be around,” adds friend and colleague Erich Ippen, MIT professor emeritus of physics.
“John was indeed a great man — a very special one. Everyone who ever worked with him understands this,” says Stanford University physics professor Robert Laughlin PhD ’79, one of Joannopoulos’ first graduate students, who went on to win the 1998 Nobel Prize in Physics. “He sprinkled a kind of transformative magic dust on people that induced them to dedicate every waking moment to the task of making new and wonderful things. You can find traces of it in lots of places around the world that matter, all of them the better for it. There’s quite a pile of it in my office.”
Joannopoulos is survived by his wife, Kyri Dunussi-Joannopoulos; their three daughters, Maria, Lena, and Alkisti; and their families. Details for funeral and memorial services are forthcoming.
A new model predicts how molecules will dissolve in different solvents
Using machine learning, MIT chemical engineers have created a computational model that can predict how well any given molecule will dissolve in an organic solvent — a key step in the synthesis of nearly any pharmaceutical. This type of prediction could make it much easier to develop new ways to produce drugs and other useful molecules.
The new model, which predicts how much of a solute will dissolve in a particular solvent, should help chemists to choose the right solvent for any given reaction in their synthesis, the researchers say. Common organic solvents include ethanol and acetone, and there are hundreds of others that can also be used in chemical reactions.
“Predicting solubility really is a rate-limiting step in synthetic planning and manufacturing of chemicals, especially drugs, so there’s been a longstanding interest in being able to make better predictions of solubility,” says Lucas Attia, an MIT graduate student and one of the lead authors of the new study.
The researchers have made their model freely available, and many companies and labs have already started using it. The model could be particularly useful for identifying solvents that are less hazardous than some of the most commonly used industrial solvents, the researchers say.
“There are some solvents which are known to dissolve most things. They’re really useful, but they’re damaging to the environment, and they’re damaging to people, so many companies require that you have to minimize the amount of those solvents that you use,” says Jackson Burns, an MIT graduate student who is also a lead author of the paper. “Our model is extremely useful in being able to identify the next-best solvent, which is hopefully much less damaging to the environment.”
William Green, the Hoyt Hottel Professor of Chemical Engineering and director of the MIT Energy Initiative, is the senior author of the study, which appears today in Nature Communications. Patrick Doyle, the Robert T. Haslam Professor of Chemical Engineering, is also an author of the paper.
Solving solubility
The new model grew out of a project that Attia and Burns worked on together in an MIT course on applying machine learning to chemical engineering problems. Traditionally, chemists have predicted solubility with a tool known as the Abraham Solvation Model, which can be used to estimate a molecule’s overall solubility by adding up the contributions of chemical structures within the molecule. While these predictions are useful, their accuracy is limited.
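The additive logic of an Abraham-style model can be sketched as follows. The functional form mirrors the idea of summing contributions, but every coefficient and descriptor value here is invented for illustration; real values are regressed from experimental solubility data.

```python
# Sketch of the additive logic behind Abraham-style solvation models.
# All coefficients and descriptors below are invented for illustration.

def abraham_log_solubility(coeffs, descriptors):
    """log10 solubility as a constant plus a sum of coefficient * descriptor terms."""
    return coeffs["c"] + sum(
        coeffs[k] * descriptors[k] for k in ("e", "s", "a", "b", "v")
    )

# Hypothetical solvent coefficients (fitted per solvent in the real model)...
solvent = {"c": 0.25, "e": 0.10, "s": -0.30, "a": 1.20, "b": -0.80, "v": 0.95}
# ...and hypothetical solute descriptors (properties of the molecule).
solute = {"e": 0.80, "s": 1.10, "a": 0.30, "b": 0.45, "v": 1.50}

print(round(abraham_log_solubility(solvent, solute), 3))
```

The appeal of this form is interpretability — each term attributes part of the solubility to one kind of interaction — but, as the article notes, its accuracy is limited compared to learned models.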
In the past few years, researchers have begun using machine learning to try to make more accurate solubility predictions. Before Burns and Attia began working on their new model, the state-of-the-art model for predicting solubility was a model developed in Green’s lab in 2022.
That model, known as SolProp, works by predicting a set of related properties and combining them, using thermodynamics, to ultimately predict the solubility. However, the model has difficulty predicting solubility for solutes that it hasn’t seen before.
“For drug and chemical discovery pipelines where you’re developing a new molecule, you want to be able to predict ahead of time what its solubility looks like,” Attia says.
Part of the reason that existing solubility models haven’t worked well is that there wasn’t a comprehensive dataset to train them on. However, in 2023 a new dataset called BigSolDB was released, which compiled data from nearly 800 published papers, including solubility information for about 800 molecules dissolved in more than 100 organic solvents that are commonly used in synthetic chemistry.
Attia and Burns decided to try training two different types of models on this data. Both of these models represent the chemical structures of molecules using numerical representations known as embeddings, which incorporate information such as the number of atoms in a molecule and which atoms are bound to which other atoms. Models can then use these representations to predict a variety of chemical properties.
One of the models used in this study, known as FastProp and developed by Burns and others in Green’s lab, incorporates “static embeddings.” This means that the model already knows the embedding for each molecule before it starts doing any kind of analysis.
The other model, ChemProp, learns an embedding for each molecule during the training, at the same time that it learns to associate the features of the embedding with a trait such as solubility. This model, developed across multiple MIT labs, has already been used for tasks such as antibiotic discovery, lipid nanoparticle design, and predicting chemical reaction rates.
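The difference between the two approaches can be sketched with synthetic data. This toy stands in for the real pipelines, which operate on molecular graphs with neural networks: with static embeddings the molecule features are fixed in advance and only the readout is fit, whereas a ChemProp-style model would also learn the features themselves during training.

```python
# Toy contrast of static vs. learned embeddings on synthetic data.
import numpy as np

rng = np.random.default_rng(0)

# Pretend these are precomputed descriptors for 200 molecules (8 features each).
X = rng.normal(size=(200, 8))
true_w = np.array([1.5, -2.0, 0.0, 0.7, 0.0, 0.3, -1.1, 0.0])
y = X @ true_w + 0.01 * rng.normal(size=200)  # stand-in solubility targets

# Static embeddings (FastProp-style): the features are fixed, so "training"
# reduces to fitting the readout that maps features to the property.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# A learned-embedding model (ChemProp-style) would instead also update the
# representation X itself during training, via a graph neural network.
print(np.allclose(w, true_w, atol=0.05))
```

When the fixed features already capture what matters for the property — as the study's results suggest they do for this dataset — the simpler static approach can match the learned one.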
The researchers trained both types of models on over 40,000 data points from BigSolDB, including information on the effects of temperature, which plays a significant role in solubility. Then, they tested the models on about 1,000 solutes that had been withheld from the training data. They found that the models’ predictions were two to three times more accurate than those of SolProp, the previous best model, and the new models were especially accurate at predicting variations in solubility due to temperature.
“Being able to accurately reproduce those small variations in solubility due to temperature, even when the overarching experimental noise is very large, was a really positive sign that the network had correctly learned an underlying solubility prediction function,” Burns says.
Accurate predictions
The researchers had expected that the model based on ChemProp, which is able to learn new representations as it goes along, would be able to make more accurate predictions. However, to their surprise, they found that the two models performed essentially the same. That suggests that the main limitation on their performance is the quality of the data, and that the models are performing as well as theoretically possible based on the data that they’re using, the researchers say.
“ChemProp should always outperform any static embedding when you have sufficient data,” Burns says. “We were blown away to see that the static and learned embeddings were statistically indistinguishable in performance across all the different subsets, which indicates to us that the data limitations that are present in this space dominated the model performance.”
The models could become more accurate, the researchers say, if better training and testing data were available — ideally, data obtained by one person or a group of people all trained to perform the experiments the same way.
“One of the big limitations of using these kinds of compiled datasets is that different labs use different methods and experimental conditions when they perform solubility tests. That contributes to this variability between different datasets,” Attia says.
Because the model based on FastProp makes its predictions faster and has code that is easier for other users to adapt, the researchers decided to make that one, known as FastSolv, available to the public. Multiple pharmaceutical companies have already begun using it.
“There are applications throughout the drug discovery pipeline,” Burns says. “We’re also excited to see, outside of formulation and drug discovery, where people may use this model.”
The research was funded, in part, by the U.S. Department of Energy.
Researchers glimpse the inner workings of protein language models
Within the past few years, models that can predict the structure or function of proteins have been widely used for a variety of biological applications, such as identifying drug targets and designing new therapeutic antibodies.
These models, which are based on large language models (LLMs), can make very accurate predictions of a protein’s suitability for a given application. However, there’s no way to determine how these models make their predictions or which protein features play the most important role in those decisions.
In a new study, MIT researchers have used a novel technique to open up that “black box” and allow them to determine what features a protein language model takes into account when making predictions. Understanding what is happening inside that black box could help researchers to choose better models for a particular task, helping to streamline the process of identifying new drugs or vaccine targets.
“Our work has broad implications for enhanced explainability in downstream tasks that rely on these representations,” says Bonnie Berger, the Simons Professor of Mathematics, head of the Computation and Biology group in MIT’s Computer Science and Artificial Intelligence Laboratory, and the senior author of the study. “Additionally, identifying features that protein language models track has the potential to reveal novel biological insights from these representations.”
Onkar Gujral, an MIT graduate student, is the lead author of the study, which appears this week in the Proceedings of the National Academy of Sciences. Mihir Bafna, an MIT graduate student, and Eric Alm, an MIT professor of biological engineering, are also authors of the paper.
Opening the black box
In 2018, Berger and former MIT graduate student Tristan Bepler PhD ’20 introduced the first protein language model. Their model, like subsequent protein models such as ESM2 and OmegaFold that accelerated the development of AlphaFold, was based on LLMs. These models, which include ChatGPT, can analyze huge amounts of text and figure out which words are most likely to appear together.
Protein language models use a similar approach, but instead of analyzing words, they analyze amino acid sequences. Researchers have used these models to predict the structure and function of proteins, and for applications such as identifying proteins that might bind to particular drugs.
In a 2021 study, Berger and colleagues used a protein language model to predict which sections of viral surface proteins are less likely to mutate in a way that enables viral escape. This allowed them to identify possible targets for vaccines against influenza, HIV, and SARS-CoV-2.
However, in all of these studies, it has been impossible to know how the models were making their predictions.
“We would get out some prediction at the end, but we had absolutely no idea what was happening in the individual components of this black box,” Berger says.
In the new study, the researchers wanted to dig into how protein language models make their predictions. Just like LLMs, protein language models encode information as representations that consist of a pattern of activation of different “nodes” within a neural network. These nodes are analogous to the networks of neurons that store memories and other information within the brain.
The inner workings of LLMs are not easy to interpret, but within the past couple of years, researchers have begun using a type of algorithm known as a sparse autoencoder to help shed some light on how those models make their predictions. The new study from Berger’s lab is the first to use this algorithm on protein language models.
Sparse autoencoders work by adjusting how a protein is represented within a neural network. Typically, a given protein will be represented by a pattern of activation of a constrained number of neurons, for example, 480. A sparse autoencoder will expand that representation into a much larger number of nodes, say 20,000.
When information about a protein is encoded by only 480 neurons, each node lights up for multiple features, making it very difficult to know what features each node is encoding. However, when the neural network is expanded to 20,000 nodes, this extra space along with a sparsity constraint gives the information room to “spread out.” Now, a feature of the protein that was previously encoded by multiple nodes can occupy a single node.
“In a sparse representation, the neurons lighting up are doing so in a more meaningful manner,” Gujral says. “Before the sparse representations are created, the networks pack information so tightly together that it's hard to interpret the neurons.”
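The expansion step described above can be sketched in a few lines. This is an illustrative toy, not the study's implementation: the dimensions are shrunk from the article's 480 and 20,000 so it runs quickly, the weights are random rather than trained, and sparsity is enforced here with a top-k cutoff (an L1 penalty during training is the other common approach):

```python
import numpy as np

rng = np.random.default_rng(0)

# Shrunk stand-ins for the article's 480-node dense and 20,000-node sparse widths.
d_dense, d_sparse, k = 48, 2000, 8

W_enc = rng.normal(0, 0.1, size=(d_dense, d_sparse))
W_dec = rng.normal(0, 0.1, size=(d_sparse, d_dense))

def encode(x, k=k):
    """Expand a dense representation into a much wider one, keeping only
    the k strongest activations so each feature has room to occupy its
    own node."""
    pre = x @ W_enc
    z = np.zeros_like(pre)
    top = np.argsort(pre)[-k:]          # indices of the k largest activations
    z[top] = np.maximum(pre[top], 0.0)  # ReLU on the survivors
    return z

x = rng.normal(size=d_dense)   # stand-in for one protein's representation
z = encode(x)
x_hat = z @ W_dec              # training would minimize ||x - x_hat||^2
```

In a trained autoencoder, the reconstruction loss forces the wide code `z` to preserve the information in `x`, while the sparsity constraint forces each active node to carry a distinct, interpretable feature.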
Interpretable models
Once the researchers obtained sparse representations of many proteins, they used an AI assistant called Claude (related to the popular Anthropic chatbot of the same name) to analyze the representations. In this case, they asked Claude to compare the sparse representations with the known features of each protein, such as molecular function, protein family, or location within a cell.
By analyzing thousands of representations, Claude can determine which nodes correspond to specific protein features, then describe them in plain English. For example, the algorithm might say, “This neuron appears to be detecting proteins involved in transmembrane transport of ions or amino acids, particularly those located in the plasma membrane.”
This process makes the nodes far more “interpretable,” meaning the researchers can tell what each node is encoding. They found that the features most likely to be encoded by these nodes were protein family and certain functions, including several different metabolic and biosynthetic processes.
“When you train a sparse autoencoder, you aren’t training it to be interpretable, but it turns out that by incentivizing the representation to be really sparse, that ends up resulting in interpretability,” Gujral says.
Understanding what features a particular protein model is encoding could help researchers choose the right model for a particular task, or tweak the type of input they give the model, to generate the best results. Additionally, analyzing the features that a model encodes could one day help biologists to learn more about the proteins that they are studying.
“At some point when the models get a lot more powerful, you could learn more biology than you already know, from opening up the models,” Gujral says.
The research was funded by the National Institutes of Health.
A shape-changing antenna for more versatile sensing and communication
MIT researchers have developed a reconfigurable antenna that dynamically adjusts its frequency range by changing its physical shape, making it more versatile for communications and sensing than static antennas.
A user can stretch, bend, or compress the antenna to make reversible changes to its radiation properties, enabling a device to operate in a wider frequency range without the need for complex, moving parts. With an adjustable frequency range, a reconfigurable antenna could adapt to changing environmental conditions and reduce the need for multiple antennas.
The word “antenna” may draw to mind metal rods like the “bunny ears” on top of old television sets, but the MIT team instead worked with metamaterials — engineered materials whose mechanical properties, such as stiffness and strength, depend on the geometric arrangement of the material’s components.
The result is a simplified design for a reconfigurable antenna that could be used for applications like energy transfer in wearable devices, motion tracking and sensing for augmented reality, or wireless communication across a wide range of network protocols.
In addition, the researchers developed an editing tool so users can generate customized metamaterial antennas, which can be fabricated using a laser cutter.
“Usually, when we think of antennas, we think of static antennas — they are fabricated to have specific properties and that is it. However, by using auxetic metamaterials, which can deform into three different geometric states, we can seamlessly change the properties of the antenna by changing its geometry, without fabricating a new structure. In addition, we can use changes in the antenna’s radio frequency properties, due to changes in the metamaterial geometry, as a new method of sensing for interaction design,” says lead author Marwa AlAlawi, a mechanical engineering graduate student at MIT.
Her co-authors include Regina Zheng and Katherine Yan, both MIT undergraduate students; Ticha Sethapakdi, an MIT graduate student in electrical engineering and computer science; Soo Yeon Ahn of the Gwangju Institute of Science and Technology in Korea; and co-senior authors Junyi Zhu, assistant professor at the University of Michigan; and Stefanie Mueller, the TIBCO Career Development Associate Professor in MIT’s departments of Electrical Engineering and Computer Science and Mechanical Engineering and leader of the Human-Computer Interaction Group at the Computer Science and Artificial Intelligence Lab. The research will be presented at the ACM Symposium on User Interface Software and Technology.
Making sense of antennas
While traditional antennas radiate and receive radio signals, in this work, the researchers looked at how the devices can act as sensors. The team’s goal was to develop a mechanical element that can also be used as an antenna for sensing.
To do this, they leveraged the antenna’s “resonance frequency,” which is the frequency at which the antenna is most efficient.
An antenna’s resonance frequency will shift due to changes in its shape. (Think about extending the left “bunny ear” to reduce TV static.) Researchers can capture these shifts for sensing. For instance, a reconfigurable antenna could be used in this way to detect the expansion of a person’s chest, to monitor their respiration.
To design a versatile reconfigurable antenna, the researchers used metamaterials. These engineered materials, which can be programmed to adopt different shapes, are composed of a periodic arrangement of unit cells that can be rotated, compressed, stretched, or bent.
By deforming the metamaterial structure, one can shift the antenna’s resonance frequency.
“In order to trigger changes in resonance frequency, we either need to change the antenna’s effective length or introduce slits and holes into it. Metamaterials allow us to get those different states from only one structure,” AlAlawi says.
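The article does not give the team's own antenna model, but the effect of changing effective length can be illustrated with the standard first-order approximation for a rectangular patch antenna, where resonance scales inversely with length; all numbers below are purely illustrative:

```python
C = 299_792_458.0   # speed of light, m/s

def patch_resonance_hz(length_m, eps_r):
    # Textbook first-order approximation: f ~ c / (2 * L * sqrt(eps_r)),
    # where L is the patch length and eps_r the substrate's relative
    # permittivity. Not the paper's model; illustrative only.
    return C / (2.0 * length_m * eps_r ** 0.5)

f_rest = patch_resonance_hz(0.030, 3.0)       # 30 mm patch on an eps_r = 3 substrate
f_stretched = patch_resonance_hz(0.031, 3.0)  # stretched about 3% longer
shift_pct = 100.0 * (f_rest - f_stretched) / f_rest
```

Because frequency varies inversely with length in this approximation, a few percent of stretch yields a comparable percentage shift in resonance, the same order as the 2.6 percent shift the team uses to switch modes in its smart headphone demo.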
The device, dubbed the meta-antenna, is composed of a dielectric layer of material sandwiched between two conductive layers.
To fabricate a meta-antenna, the researchers cut the dielectric layer out of a rubber sheet with a laser cutter. Then they added a patch on top of the dielectric layer using conductive spray paint, creating a resonating “patch antenna.”
But they found that even the most flexible conductive material couldn’t withstand the amount of deformation the antenna would experience.
“We did a lot of trial and error to determine that, if we coat the structure with flexible acrylic paint, it protects the hinges so they don’t break prematurely,” AlAlawi explains.
A means for makers
With the fabrication problem solved, the researchers built a tool that enables users to design and produce metamaterial antennas for specific applications.
The user can define the size of the antenna patch, choose a thickness for the dielectric layer, and set the length-to-width ratio of the metamaterial unit cells. Then the system automatically simulates the antenna’s resonance frequency range.
“The beauty of metamaterials is that, because it is an interconnected system of linkages, the geometric structure allows us to reduce the complexity of a mechanical system,” AlAlawi says.
Using the design tool, the researchers incorporated meta-antennas into several smart devices, including a curtain that dynamically adjusts household lighting and headphones that seamlessly transition between noise-cancelling and transparent modes.
For the smart headphone, for instance, when the meta-antenna expands and bends, it shifts the resonance frequency by 2.6 percent, which switches the headphone mode. The team’s experiments also showed that meta-antenna structures are durable enough to withstand more than 10,000 compressions.
Because the antenna patch can be patterned onto any surface, it could be used with more complex structures. For instance, the antenna could be incorporated into smart textiles that perform noninvasive biomedical sensing or temperature monitoring.
In the future, the researchers want to design three-dimensional meta-antennas for a wider range of applications. They also want to add more functions to the design tool, improve the durability and flexibility of the metamaterial structure, experiment with different symmetric metamaterial patterns, and streamline some manual fabrication steps.
This research was funded, in part, by the Bahrain Crown Prince International Scholarship and the Gwangju Institute of Science and Technology.
How AI could speed the development of RNA vaccines and other RNA therapies
Using artificial intelligence, MIT researchers have come up with a new way to design nanoparticles that can more efficiently deliver RNA vaccines and other types of RNA therapies.
After training a machine-learning model to analyze thousands of existing delivery particles, the researchers used it to predict new materials that would work even better. The model also enabled the researchers to identify particles that would work well in different types of cells, and to discover ways to incorporate new types of materials into the particles.
“What we did was apply machine-learning tools to help accelerate the identification of optimal ingredient mixtures in lipid nanoparticles to help target a different cell type or help incorporate different materials, much faster than previously was possible,” says Giovanni Traverso, an associate professor of mechanical engineering at MIT, a gastroenterologist at Brigham and Women’s Hospital, and the senior author of the study.
This approach could dramatically speed the process of developing new RNA vaccines, as well as therapies that could be used to treat obesity, diabetes, and other metabolic disorders, the researchers say.
Alvin Chan, a former MIT postdoc who is now an assistant professor at Nanyang Technological University, and Ameya Kirtane, a former MIT postdoc who is now an assistant professor at the University of Minnesota, are the lead authors of the new open-access study, which appears today in Nature Nanotechnology.
Particle predictions
RNA vaccines, such as the vaccines for SARS-CoV-2, are usually packaged in lipid nanoparticles (LNPs) for delivery. These particles protect mRNA from being broken down in the body and help it to enter cells once injected.
Creating particles that handle these jobs more efficiently could help researchers to develop even more effective vaccines. Better delivery vehicles could also make it easier to develop mRNA therapies that encode genes for proteins that could help to treat a variety of diseases.
In 2024, Traverso’s lab launched a multiyear research program, funded by the U.S. Advanced Research Projects Agency for Health (ARPA-H), to develop new ingestible devices that could achieve oral delivery of RNA treatments and vaccines.
“Part of what we’re trying to do is develop ways of producing more protein, for example, for therapeutic applications. Maximizing the efficiency is important to be able to boost how much we can have the cells produce,” Traverso says.
A typical LNP consists of four components — a cholesterol, a helper lipid, an ionizable lipid, and a lipid that is attached to polyethylene glycol (PEG). Different variants of each of these components can be swapped in to create a huge number of possible combinations. Changing up these formulations and testing each one individually is very time-consuming, so Traverso, Chan, and their colleagues decided to turn to artificial intelligence to help speed up the process.
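The scale of that combinatorial space is easy to see by enumerating it. The variant counts below are made up for illustration, not the study's actual library sizes:

```python
from itertools import product

# Hypothetical variant counts per LNP component (illustrative only).
ionizable_lipids = [f"IL{i}" for i in range(20)]
helper_lipids    = [f"HL{i}" for i in range(10)]
cholesterols     = [f"CH{i}" for i in range(5)]
peg_lipids       = [f"PEG{i}" for i in range(8)]

# Every four-way combination is a distinct candidate formulation,
# before even varying the molar ratios of the components.
formulations = list(product(ionizable_lipids, helper_lipids, cholesterols, peg_lipids))
```

Even these modest per-component counts yield 8,000 candidate formulations; realistic libraries, with ratio variation on top, quickly outgrow what a lab can test one by one.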
“Most AI models in drug discovery focus on optimizing a single compound at a time, but that approach doesn’t work for lipid nanoparticles, which are made of multiple interacting components,” Chan says. “To tackle this, we developed a new model called COMET, inspired by the same transformer architecture that powers large language models like ChatGPT. Just as those models understand how words combine to form meaning, COMET learns how different chemical components come together in a nanoparticle to influence its properties — like how well it can deliver RNA into cells.”
To generate training data for their machine-learning model, the researchers created a library of about 3,000 different LNP formulations. The team tested each of these 3,000 particles in the lab to see how efficiently they could deliver their payload to cells, then fed all of this data into a machine-learning model.
After the model was trained, the researchers asked it to predict new formulations that would work better than existing LNPs. They tested those predictions by using the new formulations to deliver mRNA encoding a fluorescent protein to mouse skin cells grown in a lab dish. They found that the LNPs predicted by the model did indeed work better than the particles in the training data, and in some cases better than LNP formulations that are used commercially.
Accelerated development
Once the researchers showed that the model could accurately predict particles that would efficiently deliver mRNA, they began asking additional questions. First, they wondered if they could train the model on nanoparticles that incorporate a fifth component: a type of polymer known as branched poly beta amino esters (PBAEs).
Research by Traverso and his colleagues has shown that these polymers can effectively deliver nucleic acids on their own, so they wanted to explore whether adding them to LNPs could improve LNP performance. The MIT team created a set of about 300 LNPs that also included these polymers, which they used to train the model. The resulting model could then predict additional formulations with PBAEs that would work better.
Next, the researchers set out to train the model to make predictions about LNPs that would work best in different types of cells, including a type of cell called Caco-2, which is derived from colorectal cancer cells. Again, the model was able to predict LNPs that would efficiently deliver mRNA to these cells.
Lastly, the researchers used the model to predict which LNPs could best withstand lyophilization — a freeze-drying process often used to extend the shelf-life of medicines.
“This is a tool that allows us to adapt it to a whole different set of questions and help accelerate development. We did a large training set that went into the model, but then you can do much more focused experiments and get outputs that are helpful on very different kinds of questions,” Traverso says.
He and his colleagues are now working on incorporating some of these particles into potential treatments for diabetes and obesity, which are two of the primary targets of the ARPA-H funded project. Therapeutics that could be delivered using this approach include GLP-1 mimics with similar effects to Ozempic.
This research was funded by the GO Nano Marble Center at the Koch Institute, the Karl van Tassel Career Development Professorship, the MIT Department of Mechanical Engineering, Brigham and Women’s Hospital, and ARPA-H.
Study sheds light on graphite’s lifespan in nuclear reactors
Graphite is a key structural component in some of the world’s oldest nuclear reactors and many of the next-generation designs being built today. But it also densifies and swells in response to radiation — and the mechanism behind those changes has proven difficult to study.
Now, MIT researchers and collaborators have uncovered a link between properties of graphite and how the material behaves in response to radiation. The findings could lead to more accurate, less destructive ways of predicting the lifespan of graphite materials used in reactors around the world.
“We did some basic science to understand what leads to swelling and, eventually, failure in graphite structures,” says MIT Research Scientist Boris Khaykovich, senior author of the new study. “More research will be needed to put this into practice, but the paper proposes an attractive idea for industry: that you might not need to break hundreds of irradiated samples to understand their failure point.”
Specifically, the study shows a connection between the size of the pores within graphite and the way the material swells and shrinks in volume, leading to degradation.
“The lifetime of nuclear graphite is limited by irradiation-induced swelling,” says co-author and MIT Research Scientist Lance Snead. “Porosity is a controlling factor in this swelling, and while graphite has been extensively studied for nuclear applications since the Manhattan Project, we still do not have a clear understanding of the role of porosity in both mechanical properties and swelling. This work addresses that.”
The open-access paper appears this week in Interdisciplinary Materials. It is co-authored by Khaykovich, Snead, MIT Research Scientist Sean Fayfar, former MIT research fellow Durgesh Rai, Stony Brook University Assistant Professor David Sprouster, Oak Ridge National Laboratory Staff Scientist Anne Campbell, and Argonne National Laboratory Physicist Jan Ilavsky.
A long-studied, complex material
Ever since 1942, when physicists and engineers built the world’s first nuclear reactor on a converted squash court at the University of Chicago, graphite has played a central role in the generation of nuclear energy. That first reactor, dubbed the Chicago Pile, was constructed from about 40,000 graphite blocks, many of which contained nuggets of uranium.
Today graphite is a vital component of many operating nuclear reactors and is expected to play a central role in next-generation reactor designs like molten-salt and high-temperature gas reactors. That’s because graphite is a good neutron moderator, slowing down the neutrons released by nuclear fission so they are more likely to create fissions themselves and sustain a chain reaction.
“The simplicity of graphite makes it valuable,” Khaykovich explains. “It’s made of carbon, and it’s relatively well-known how to make it cleanly. Graphite is a very mature technology. It’s simple, stable, and we know it works.”
But graphite also has its complexities.
“We call graphite a composite even though it’s made up of only carbon atoms,” Khaykovich says. “It includes ‘filler particles’ that are more crystalline, then there is a matrix called a ‘binder’ that is less crystalline, then there are pores that span in length from nanometers to many microns.”
Each graphite grade has its own composite structure, but they all contain fractals, or shapes that look the same at different scales.
Those complexities have made it hard to predict how graphite will respond to radiation in microscopic detail, although it’s been known for decades that when graphite is irradiated, it first densifies, reducing its volume by up to 10 percent, before swelling and cracking. The volume fluctuation is caused by changes to graphite’s porosity and lattice stress.
“Graphite deteriorates under radiation, as any material does,” Khaykovich says. “So, on the one hand we have a material that’s extremely well-known, and on the other hand, we have a material that is immensely complicated, with a behavior that’s impossible to predict through computer simulations.”
For the study, the researchers received irradiated graphite samples from Oak Ridge National Laboratory. Co-authors Campbell and Snead were involved in irradiating the samples some 20 years ago. The samples are a grade of graphite known as G347A.
The research team used an analysis technique known as X-ray scattering, which uses the scattered intensity of an X-ray beam to analyze the properties of a material. Specifically, they looked at the distribution of sizes and surface areas of the sample’s pores, or what are known as the material’s fractal dimensions.
“When you look at the scattering intensity, you see a large range of porosity,” Fayfar says. “Graphite has porosity over such large scales, and you have this fractal self-similarity: The pores in very small sizes look similar to pores spanning microns, so we used fractal models to relate different morphologies across length scales.”
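The fractal analysis rests on a standard small-angle scattering relation: over a mass fractal's scaling regime, intensity follows a power law in the scattering vector, I(q) ~ q^(-D), so the fractal dimension D is minus the slope on a log-log plot. A sketch with synthetic data (not the paper's measurements):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic scattering curve obeying I(q) ~ q^(-D) with mild noise.
D_true = 2.5
q = np.logspace(-2, 0, 50)                    # scattering vector magnitudes
I = q ** (-D_true) * np.exp(rng.normal(0, 0.02, size=q.size))

# Recover the fractal dimension from the log-log slope.
slope, _ = np.polyfit(np.log(q), np.log(I), 1)
D_est = -slope
```

Real graphite data is harder: different power-law regimes appear at different length scales, which is why the team fit fractal models across scales rather than a single slope.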
Fractal models had been used on graphite samples before, but not on irradiated samples to see how the material’s pore structures changed. The researchers found that when graphite is first exposed to radiation, its pores get filled as the material degrades.
“But what was quite surprising to us is the [size distribution of the pores] turned back around,” Fayfar says. “We had this recovery process that matched our overall volume plots, which was quite odd. It seems like after graphite is irradiated for so long, it starts recovering. It’s sort of an annealing process where you create some new pores, then the pores smooth out and get slightly bigger. That was a big surprise.”
The researchers found that the size distribution of the pores closely follows the volume change caused by radiation damage.
“Finding a strong correlation between the [size distribution of pores] and the graphite’s volume changes is a new finding, and it helps connect to the failure of the material under irradiation,” Khaykovich says. “It’s important for people to know how graphite parts will fail when they are under stress and how failure probability changes under irradiation.”
From research to reactors
The researchers plan to study other graphite grades and explore further how pore sizes in irradiated graphite correlate with the probability of failure. They speculate that a statistical technique known as the Weibull Distribution could be used to predict graphite’s time until failure. The Weibull Distribution is already used to describe the probability of failure in ceramics and other porous materials like metal alloys.
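The Weibull distribution itself is simple to state. The parameters below are illustrative; fitting them to irradiated-graphite data is exactly the future work the researchers describe:

```python
import math

def weibull_failure_prob(t, scale, shape):
    # Cumulative probability of failure by load or time t:
    # F(t) = 1 - exp(-(t / scale)^shape). The shape parameter controls
    # how sharply failures cluster; scale sets the characteristic load.
    return 1.0 - math.exp(-((t / scale) ** shape))

# At t equal to the scale parameter, F = 1 - 1/e (about 63.2%) for any shape.
p = weibull_failure_prob(1000.0, scale=1000.0, shape=2.0)
```

The appeal for porous materials is that the shape parameter summarizes the spread of flaw sizes, which is why a measured pore-size distribution could, in principle, anchor the fit.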
Khaykovich also speculated that the findings could contribute to our understanding of why materials densify and swell under irradiation.
“There’s no quantitative model of densification that takes into account what’s happening at these tiny scales in graphite,” Khaykovich says. “Graphite irradiation densification reminds me of sand or sugar, where when you crush big pieces into smaller grains, they densify. For nuclear graphite, the crushing force is the energy that neutrons bring in, causing large pores to get filled with smaller, crushed pieces. But more energy and agitation create still more pores, and so graphite swells again. It’s not a perfect analogy, but I believe analogies bring progress for understanding these materials.”
The researchers describe the paper as an important step toward informing graphite production and use in nuclear reactors of the future.
“Graphite has been studied for a very long time, and we’ve developed a lot of strong intuitions about how it will respond in different environments, but when you’re building a nuclear reactor, details matter,” Khaykovich says. “People want numbers. They need to know how much thermal conductivity will change, how much cracking and volume change will happen. If components are changing volume, at some point you need to take that into account.”
This work was supported, in part, by the U.S. Department of Energy.
Using generative AI, researchers design compounds that can kill drug-resistant bacteria
With help from artificial intelligence, MIT researchers have designed novel antibiotics that can combat two hard-to-treat infections: drug-resistant Neisseria gonorrhoeae and multi-drug-resistant Staphylococcus aureus (MRSA).
Using generative AI algorithms, the research team designed more than 36 million possible compounds and computationally screened them for antimicrobial properties. The top candidates they discovered are structurally distinct from any existing antibiotics, and they appear to work by novel mechanisms that disrupt bacterial cell membranes.
This approach allowed the researchers to generate and evaluate theoretical compounds that have never been seen before — a strategy that they now hope to apply to identify and design compounds with activity against other species of bacteria.
“We’re excited about the new possibilities that this project opens up for antibiotics development. Our work shows the power of AI from a drug design standpoint, and enables us to exploit much larger chemical spaces that were previously inaccessible,” says James Collins, the Termeer Professor of Medical Engineering and Science in MIT’s Institute for Medical Engineering and Science (IMES) and Department of Biological Engineering.
Collins is the senior author of the study, which appears today in Cell. The paper’s lead authors are MIT postdoc Aarti Krishnan, former postdoc Melis Anahtar ’08, and Jacqueline Valeri PhD ’23.
Exploring chemical space
Over the past 45 years, a few dozen new antibiotics have been approved by the FDA, but most of these are variants of existing antibiotics. At the same time, bacterial resistance to many of these drugs has been growing. Globally, it is estimated that drug-resistant bacterial infections cause nearly 5 million deaths per year.
In hopes of finding new antibiotics to fight this growing problem, Collins and others at MIT’s Antibiotics-AI Project have harnessed the power of AI to screen huge libraries of existing chemical compounds. This work has yielded several promising drug candidates, including halicin and abaucin.
To build on that progress, Collins and his colleagues decided to expand their search into molecules that can’t be found in any chemical libraries. By using AI to generate hypothetically possible molecules that don’t exist or haven’t been discovered, they realized that it should be possible to explore a much greater diversity of potential drug compounds.
In their new study, the researchers employed two different approaches: First, they directed generative AI algorithms to design molecules based on a specific chemical fragment that showed antimicrobial activity, and second, they let the algorithms freely generate molecules, without having to include a specific fragment.
For the fragment-based approach, the researchers sought to identify molecules that could kill N. gonorrhoeae, a Gram-negative bacterium that causes gonorrhea. They began by assembling a library of about 45 million known chemical fragments, consisting of all possible combinations of 11 atoms of carbon, nitrogen, oxygen, fluorine, chlorine, and sulfur, along with fragments from Enamine’s REadily AccessibLe (REAL) space.
Then, they screened the library using machine-learning models that Collins’ lab had previously trained to predict antibacterial activity against N. gonorrhoeae. This screen yielded nearly 4 million fragments. They narrowed down that pool by removing any fragments that were predicted to be cytotoxic to human cells, displayed chemical liabilities, or were similar to existing antibiotics. This left them with about 1 million candidates.
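The staged narrowing described above can be pictured as a simple filtering funnel. The sketch below is illustrative only: each predicate is a hypothetical stand-in for what is, in the actual study, a trained machine-learning model or cheminformatics filter, and the fragments are toy records rather than real chemical structures.

```python
# Illustrative sketch of a staged screening funnel. The predicates and
# fragment records are invented stand-ins, not the study's real models.

def screen_fragments(fragments, predicates):
    """Keep only fragments that pass every filter, stage by stage."""
    survivors = list(fragments)
    for name, keep in predicates:
        survivors = [f for f in survivors if keep(f)]
        print(f"after '{name}': {len(survivors)} fragments remain")
    return survivors

# Toy fragments: each is just a dict of precomputed boolean scores.
fragments = [
    {"id": i, "active": i % 2 == 0, "cytotoxic": i % 5 == 0,
     "liability": i % 3 == 0, "known_like": i % 7 == 0}
    for i in range(1000)
]

predicates = [
    ("predicted active",          lambda f: f["active"]),
    ("not cytotoxic",             lambda f: not f["cytotoxic"]),
    ("no chemical liabilities",   lambda f: not f["liability"]),
    ("unlike known antibiotics",  lambda f: not f["known_like"]),
]

candidates = screen_fragments(fragments, predicates)
```

The order of the stages matters mainly for efficiency: cheap filters first shrink the pool before more expensive checks run.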
“We wanted to get rid of anything that would look like an existing antibiotic, to help address the antimicrobial resistance crisis in a fundamentally different way. By venturing into underexplored areas of chemical space, our goal was to uncover novel mechanisms of action,” Krishnan says.
Through several rounds of additional experiments and computational analysis, the researchers identified a fragment they called F1 that appeared to have promising activity against N. gonorrhoeae. They used this fragment as the basis for generating additional compounds, using two different generative AI algorithms.
One of those algorithms, known as chemically reasonable mutations (CReM), works by starting with a particular molecule containing F1 and then generating new molecules by adding, replacing, or deleting atoms and chemical groups. The second algorithm, F-VAE (fragment-based variational autoencoder), takes a chemical fragment and builds it into a complete molecule. It does so by learning patterns of how fragments are commonly modified, based on its pretraining on more than 1 million molecules from the ChEMBL database.
Those two algorithms generated about 7 million candidates containing F1, which the researchers then computationally screened for activity against N. gonorrhoeae. This screen yielded about 1,000 compounds, and the researchers selected 80 of those to see if they could be produced by chemical synthesis vendors. Only two of these could be synthesized, and one of them, named NG1, was very effective at killing N. gonorrhoeae in a lab dish and in a mouse model of drug-resistant gonorrhea infection.
Additional experiments revealed that NG1 interacts with a protein called LptA, a novel drug target involved in the synthesis of the bacterial outer membrane. It appears that the drug works by interfering with membrane synthesis, which is fatal to cells.
Unconstrained design
In a second round of studies, the researchers explored the potential of using generative AI to freely design molecules, using the Gram-positive bacterium S. aureus as their target.
Again, the researchers used CReM and F-VAE to generate molecules, but this time with no constraints other than the general rules of how atoms can join to form chemically plausible molecules. Together, the models generated more than 29 million compounds. The researchers then applied the same filters that they had used for the N. gonorrhoeae candidates, but focused on S. aureus, eventually narrowing the pool down to about 90 compounds.
They were able to synthesize and test 22 of these molecules, and six of them showed strong antibacterial activity against multi-drug-resistant S. aureus grown in a lab dish. They also found that the top candidate, named DN1, was able to clear a methicillin-resistant S. aureus (MRSA) skin infection in a mouse model. These molecules also appear to interfere with bacterial cell membranes, but with broader effects not limited to interaction with one specific protein.
Phare Bio, a nonprofit that is also part of the Antibiotics-AI Project, is now working on further modifying NG1 and DN1 to make them suitable for additional testing.
“In a collaboration with Phare Bio, we are exploring analogs, as well as working on advancing the best candidates preclinically, through medicinal chemistry work,” Collins says. “We are also excited about applying the platforms that Aarti and the team have developed toward other bacterial pathogens of interest, notably Mycobacterium tuberculosis and Pseudomonas aeruginosa.”
The research was funded, in part, by the U.S. Defense Threat Reduction Agency, the National Institutes of Health, the Audacious Project, Flu Lab, the Sea Grape Foundation, Rosamund Zander and Hansjorg Wyss for the Wyss Foundation, and an anonymous donor.
A new way to test how well AI systems classify text
Is this movie review a rave or a pan? Is this news story about business or technology? Is this online chatbot conversation veering off into giving financial advice? Is this online medical information site giving out misinformation?
These kinds of automated conversations, whether they involve seeking a movie or restaurant review or getting information about your bank account or health records, are becoming increasingly prevalent. More than ever, such evaluations are being made by highly sophisticated algorithms, known as text classifiers, rather than by human beings. But how can we tell how accurate these classifications really are?
Now, a team at MIT’s Laboratory for Information and Decision Systems (LIDS) has come up with an innovative approach to not only measure how well these classifiers are doing their job, but then go one step further and show how to make them more accurate.
The new evaluation and remediation software was developed by Kalyan Veeramachaneni, a principal research scientist at LIDS, his students Lei Xu and Sarah Alnegheimish, and two others. The software package is being made freely available for download by anyone who wants to use it.
A standard method for testing these classification systems is to create what are known as synthetic examples — sentences that closely resemble ones that have already been classified. For example, researchers might take a sentence that has already been tagged by a classifier program as being a rave review, and see if changing a word or a few words while retaining the same meaning could fool the classifier into deeming it a pan. Or a sentence that was determined to be misinformation might get misclassified as accurate. Sentences that fool the classifiers in this way are known as adversarial examples.
People have tried various ways to find the vulnerabilities in these classifiers, Veeramachaneni says. But existing methods struggle to find these vulnerabilities and miss many examples that they should catch, he says.
Increasingly, companies are trying to use such evaluation tools in real time, monitoring the output of chatbots used for various purposes to try to make sure they are not putting out improper responses. For example, a bank might use a chatbot to respond to routine customer queries such as checking account balances or applying for a credit card, but it wants to ensure that its responses could never be interpreted as financial advice, which could expose the company to liability. “Before showing the chatbot’s response to the end user, they want to use the text classifier to detect whether it’s giving financial advice or not,” Veeramachaneni says. But then it’s important to test that classifier to see how reliable its evaluations are.
“These chatbots, or summarization engines or whatnot are being set up across the board,” he says, to deal with external customers and within an organization as well, for example providing information about HR issues. It’s important to put these text classifiers into the loop to detect things that they are not supposed to say, and filter those out before the output gets transmitted to the user.
That’s where the use of adversarial examples comes in — those sentences that have already been classified but then produce a different response when they are slightly modified while retaining the same meaning. How can people confirm that the meaning is the same? By using another large language model (LLM) that interprets and compares meanings. So, if the LLM says the two sentences mean the same thing, but the classifier labels them differently, “that is a sentence that is adversarial — it can fool the classifier,” Veeramachaneni says. And when the researchers examined these adversarial sentences, “we found that most of the time, this was just a one-word change,” although the people using LLMs to generate these alternate sentences often didn’t realize that.
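A one-word adversarial substitution can be shown with a toy example. In the sketch below, the keyword classifier and the synonym table are invented for illustration; the real system uses trained classifiers and a separate large language model to verify that the rewritten sentence keeps the same meaning.

```python
# Toy demonstration of a one-word adversarial substitution. The
# classifier and synonym table are invented stand-ins; the actual
# system uses trained models and an LLM as the meaning check.

POSITIVE = {"great", "wonderful", "superb"}

def classify(sentence):
    """Label a sentence 'rave' if it contains a known positive word."""
    return "rave" if any(w in POSITIVE for w in sentence.lower().split()) else "pan"

# A meaning-preserving swap the toy classifier has never seen:
# "terrific" means the same as "great", but is outside its keyword set.
SYNONYMS = {"great": "terrific"}

def one_word_attack(sentence):
    """Try single-word substitutions; return the first one that flips the label."""
    original = classify(sentence)
    words = sentence.split()
    for i, w in enumerate(words):
        sub = SYNONYMS.get(w.lower())
        if sub is None:
            continue
        candidate = " ".join(words[:i] + [sub] + words[i + 1:])
        if classify(candidate) != original:
            return candidate  # adversarial: same meaning, different label
    return None

adv = one_word_attack("a great film from start to finish")
```

Here the substituted sentence keeps its meaning for a human reader, yet the classifier's label flips from "rave" to "pan", which is exactly the failure mode an adversarial test is designed to expose.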
Further investigation, using LLMs to analyze many thousands of examples, showed that certain specific words had an outsized influence in changing the classifications, and therefore the testing of a classifier’s accuracy could focus on this small subset of words that seem to make the most difference. They found that one-tenth of 1 percent of all the 30,000 words in the system’s vocabulary could account for almost half of all these reversals of classification, in some specific applications.
Lei Xu PhD ’23, a recent graduate from LIDS who performed much of the analysis as part of his thesis work, “used a lot of interesting estimation techniques to figure out what are the most powerful words that can change the overall classification, that can fool the classifier,” Veeramachaneni says. The goal is to make it possible to do much more narrowly targeted searches, rather than combing through all possible word substitutions, thus making the computational task of generating adversarial examples much more manageable. “He’s using large language models, interestingly enough, as a way to understand the power of a single word.”
Then, also using LLMs, he searches for other words that are closely related to these powerful words, and so on, allowing for an overall ranking of words according to their influence on the outcomes. Once these adversarial sentences have been found, they can be used in turn to retrain the classifier to take them into account, increasing the robustness of the classifier against those mistakes.
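The ranking idea can be sketched with synthetic data: given a log of single-word substitutions and whether each one flipped the classifier, count flips per word and find how few top-ranked words cover half of all flips. The flip log below is invented for illustration; the study's real analysis used LLM-driven estimation over thousands of examples.

```python
# Sketch of ranking words by how often their substitution flips a
# classifier. The flip log is synthetic, invented for illustration.

from collections import Counter

# Each record: (word that was substituted, did the label flip?)
flip_log = (
    [("good", True)] * 40 + [("bad", True)] * 30 +
    [("fine", True)] * 5 + [("movie", False)] * 100 +
    [("plot", True)] * 2 + [("the", False)] * 200
)

flips = Counter(word for word, flipped in flip_log if flipped)
total_flips = sum(flips.values())

# Rank words by how many flips each one caused.
ranking = flips.most_common()

# How many top-ranked words account for at least half of all flips?
covered, top_words = 0, []
for word, count in ranking:
    top_words.append(word)
    covered += count
    if covered >= total_flips / 2:
        break
```

In this toy log a single word accounts for over half of the flips, mirroring the finding that a tiny fraction of the vocabulary drives most classification reversals, so targeted testing can concentrate on those words.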
Making classifiers more accurate may not sound like a big deal if it’s just a matter of classifying news articles into categories, or deciding whether reviews of anything from movies to restaurants are positive or negative. But increasingly, classifiers are being used in settings where the outcomes really do matter, whether preventing the inadvertent release of sensitive medical, financial, or security information, or helping to guide important research, such as into properties of chemical compounds or the folding of proteins for biomedical applications, or in identifying and blocking hate speech or known misinformation.
As a result of this research, the team introduced a new metric, which they call p, which provides a measure of how robust a given classifier is against single-word attacks. And because of the importance of such misclassifications, the research team has made its products available as open access for anyone to use. The package consists of two components: SP-Attack, which generates adversarial sentences to test classifiers in any particular application, and SP-Defense, which aims to improve the robustness of the classifier by generating and using adversarial sentences to retrain the model.
In some tests, where competing methods of testing classifier outputs allowed a 66 percent success rate by adversarial attacks, this team’s system cut that attack success rate almost in half, to 33.7 percent. In other applications, the improvement was as little as a 2 percent difference, but even that can be quite important, Veeramachaneni says, since these systems are being used for so many billions of interactions that even a small percentage can affect millions of transactions.
The team’s results were published on July 7 in the journal Expert Systems in a paper by Xu, Veeramachaneni, and Alnegheimish of LIDS, along with Laure Berti-Equille at IRD in Marseille, France, and Alfredo Cuesta-Infante at the Universidad Rey Juan Carlos, in Spain.
MIT gears up to transform manufacturing
“Manufacturing is the engine of society, and it is the backbone of robust, resilient economies,” says John Hart, head of MIT’s Department of Mechanical Engineering (MechE) and faculty co-director of the MIT Initiative for New Manufacturing (INM). “With manufacturing a lively topic in today’s news, there’s a renewed appreciation and understanding of the importance of manufacturing to innovation, to economic and national security, and to daily lives.”
Launched this May, INM will “help create a transformation of manufacturing through new technology, through development of talent, and through an understanding of how to scale manufacturing in a way that imparts higher productivity and resilience, drives adoption of new technologies, and creates good jobs,” Hart says.
INM is one of MIT’s strategic initiatives and builds on the successful three-year-old Manufacturing@MIT program. “It’s a recognition by MIT that manufacturing is an Institute-wide theme and an Institute-wide priority, and that manufacturing connects faculty and students across campus,” says Hart. Alongside Hart, INM’s faculty co-directors are Institute Professor Suzanne Berger and Chris Love, professor of chemical engineering.
The initiative is pursuing four main themes: reimagining manufacturing technologies and systems, elevating the productivity and human experience of manufacturing, scaling up new manufacturing, and transforming the manufacturing base.
Breaking manufacturing barriers for corporations
Amgen, Autodesk, Flex, GE Vernova, PTC, Sanofi, and Siemens are founding members of INM’s industry consortium. These industry partners will work closely with MIT faculty, researchers, and students across many aspects of manufacturing-related research, both in broad-scale initiatives and in particular areas of shared interests. Membership requires a minimum three-year commitment of $500,000 a year to manufacturing-related activities at MIT, including the INM membership fee of $275,000 per year, which supports several core activities that engage the industry members.
One major thrust for INM industry collaboration is the deployment and adoption of AI and automation in manufacturing. This effort will include seed research projects at MIT, collaborative case studies, and shared strategy development.
INM also offers companies participation in the MIT-wide New Manufacturing Research effort, which is studying the trajectories of specific manufacturing industries and examining cross-cutting themes such as technology and financing.
Additionally, INM will concentrate on education for all professions in manufacturing, with alliances bringing together corporations, community colleges, government agencies, and other partners. “We’ll scale our curriculum to broader audiences, from aspiring manufacturing workers and aspiring production line supervisors all the way up to engineers and executives,” says Hart.
In workforce training, INM will collaborate with companies broadly to help understand the challenges and frame its overall workforce agenda, and with individual firms on specific challenges, such as acquiring suitably prepared employees for a new factory.
Importantly, industry partners will also engage directly with students. Founding member Flex, for instance, hosted MIT researchers and students at the Flex Institute of Technology in Sorocaba, Brazil, developing new solutions for electronics manufacturing.
“History shows that you need to innovate in manufacturing alongside the innovation in products,” Hart comments. “At MIT, as more students take classes in manufacturing, they’ll think more about key manufacturing issues as they decide what research problems they want to solve, or what choices they make as they prototype their devices. The same is true for industry — companies that operate at the frontier of manufacturing, whether through internal capabilities or their supply chains, are positioned to be on the frontier of product innovation and overall growth.”
“We’ll have an opportunity to bring manufacturing upstream to the early stage of research, designing new processes and new devices with scalability in mind,” he says.
Additionally, MIT expects to open new manufacturing-related labs and to further broaden cooperation with industry at existing shared facilities, such as MIT.nano. Hart says that facilities will also invite tighter collaborations with corporations — not just providing advanced equipment, but working jointly on, say, new technologies for weaving textiles, or speeding up battery manufacturing.
Homing in on the United States
INM is a global project that brings a particular focus on the United States, which remains the world’s second-largest manufacturing economy, but has suffered a significant decline in manufacturing employment and innovation.
One key to reversing this trend and reinvigorating the U.S. manufacturing base is advocacy for manufacturing’s critical role in society and the career opportunities it offers.
“No one really disputes the importance of manufacturing,” Hart says. “But we need to elevate interest in manufacturing as a rewarding career, from the production workers to manufacturing engineers and leaders, through advocacy, education programs, and buy-in from industry, government, and academia.”
MIT is in a unique position to convene industry, academic, and government stakeholders in manufacturing to work together on this vital issue, he points out.
Moreover, in times of radical and rapid changes in manufacturing, “we need to focus on deploying new technologies into factories and supply chains,” Hart says. “Technology is not all of the solution, but for the U.S. to expand our manufacturing base, we need to do it with technology as a key enabler, embracing companies of all sizes, including small and medium enterprises.”
“As AI becomes more capable, and automation becomes more flexible and more available, these are key building blocks upon which you can address manufacturing challenges,” he says. “AI and automation offer new accelerated ways to develop, deploy, and monitor production processes, which present a huge opportunity and, in some cases, a necessity.”
“While manufacturing is always a combination of old technology, new technology, established practice, and new ways of thinking, digital technology gives manufacturers an opportunity to leapfrog competitors,” Hart says. “That’s very, very powerful for the U.S. and any company, or country, that aims to create differentiated capabilities.”
Fortunately, in recent years, investors have increasingly bought into new manufacturing in the United States. “They see the opportunity to re-industrialize, to build the factories and production systems of the future,” Hart says.
“That said, building new manufacturing is capital-intensive, and takes time,” he adds. “So that’s another area where it’s important to convene stakeholders and to think about how startups and growth-stage companies build their capital portfolios, how large industry can support an ecosystem of small businesses and young companies, and how to develop talent to support those growing companies.”
All these concerns and opportunities in the manufacturing ecosystem play to MIT’s strengths. “MIT’s DNA of cross-disciplinary collaboration and working with industry can let us create a lot of impact,” Hart emphasizes. “We can understand the practical challenges. We can also explore breakthrough ideas in research and cultivate successful outcomes, all the way to new companies and partnerships. Sometimes those are seen as disparate approaches, but we like to bring them together.”
The art and science of being an MIT teaching assistant
“It’s probably the hardest thing I’ve ever done at MIT,” says Haley Nakamura, a second-year MEng student in the MIT Department of Electrical Engineering and Computer Science (EECS). She’s not reflecting on a class, final exam, or research paper. Nakamura is talking about the experience of being a teaching assistant (TA). “It’s really an art form, in that there is no formula for being a good teacher. It’s a skill, and something you have to continuously work at and adapt to different people.”
Nakamura, like approximately 16 percent of her EECS MEng peers, balances her own coursework with teaching responsibilities. The TA role is complex, nuanced, and at MIT, can involve much more planning and logistics than you might imagine. Nakamura works on a central computer science (CS) course, 6.3900 (Introduction to Machine Learning), which registers around 400-500 students per semester. For that enrollment, the course requires eight instructors at the lecturer/professor level; 15 TAs, between the undergraduate and graduate level; and about 50 lab assistants (LAs). Students are split across eight sections corresponding to each senior instructor, with a group of TAs and LAs for each section of 60-70 students.
To keep everyone moving forward at the same pace, coordination and organization are key. “A lot of the reason I got my initial TA-ship was because I was pretty organized,” Nakamura explains. “Everyone here at MIT can be so busy that it can be difficult to be on top of things, and students will be the first to point out logistical confusion and inconsistencies. If they’re worried about some quirk on the website, or wondering how their grades are being calculated, those things can prevent them from focusing on content.”
Nakamura’s organizational skills made her a good candidate to spot and deal with potential wrinkles before they derailed a course section. “When I joined the course, we wanted someone on the TA side to be more specifically responsible for underlying administrative tasks, so I became the first head TA for the course. Since then, we’ve built that role up more and more. There is now a head TA, a head undergraduate TA, and section leads working on internal documentation such as instructions for how to improve content and how to manage office hours.” The result of this administrative work is consistency across sections and semesters.
The other side of a TA-ship is, of course, teaching. “I was eager to engage with students in a meaningful way,” says Soroush Araei, a sixth-year graduate student who had already fulfilled the teaching requirement for his degree in electrical engineering, but who jumped at the chance to teach alongside his PhD advisor. “I enjoy teaching, and have always found that explaining concepts to others deepens my own understanding.” He was recently awarded the MIT School of Engineering’s 2025 Graduate Student Teaching and Mentoring Award, which honors “a graduate student in the School of Engineering who has demonstrated extraordinary teaching and mentoring as a teaching or research assistant.” Araei’s dedication comes at the price of sleep. “Juggling my own research with my TA duties was no small feat. I often found myself in the lab for long hours, helping students troubleshoot their circuits. While their design simulations looked perfect, the circuits they implemented on protoboards didn’t always perform as expected. I had to dive deep into the issues alongside the students, which often required considerable time and effort.”
The rewards for Araei’s work are often intrinsic. “Teaching has shown me that there are always deeper layers to understanding. There are concepts I thought I had mastered, but I realized gaps in my own knowledge when trying to explain them,” he says. Another challenge: the variety of background knowledge between students in a single class. “Some had never encountered transistors, while others had tape-out experience. Designing problem sets and selecting questions for office hours required careful planning to keep all students engaged.” For Araei, some of the best moments have come during office hours. “Witnessing the ‘aha’ moment on a student’s face when a complex concept finally clicked was incredibly rewarding.”
The pursuit of the “aha” moment is a common thread between TAs. “I still struggle with the feeling that you’re responsible for someone’s understanding in a given topic, and, if you’re not doing a good job, that could affect that person for the rest of their life,” says Nakamura. “But the flip side of that moment of confusion is when someone has the ‘aha!’ moment as you’re talking to them, when you’re able to explain something that wasn’t conveyed in the other materials. It was your help that broke through and gave understanding. And that reward really overruns the fear of causing confusion.”
Hope Dargan ’21, MEng ’23, a second-year PhD student in EECS, uses her role as a graduate instructor to try to reach students who may not fit into the stereotype of the scientist. She started her career at MIT planning to major in CS and become a software engineer, but a missionary trip to Sweden in 2016-17 (when refugees from the Syrian civil war were resettling in the region) sparked a broader interest in both the Middle East and in how groups of people contextualized their own narratives. When Dargan returned to MIT, she took on a history degree, writing her thesis on the experiences of queer Mormon women. Additionally, she taught for MEET (the Middle East Entrepreneurs of Tomorrow), an educational initiative for Israeli and Palestinian high school students. “I realized I loved teaching, and this experience set me on a trajectory to teaching as a career.”
Dargan gained her teaching license as an undergrad through the MIT Scheller Teacher Education Program (STEP), then joined the MEng program, in which she designed an educational intervention for students who were struggling in class 6.101 (Fundamentals of Programming). The next step was a PhD. “Teaching is so context-dependent,” says Dargan, who was awarded the Goodwin Medal for her teaching efforts in 2023. “When I taught students for MEET, it was very different from when I was teaching eighth graders at Josiah Quincy Upper School for my teaching license, and very different now when I teach students in 6.101, versus when I teach the LGO [Leaders for Global Operations] students Python in the summers. Each student has their own unique perspective on what’s motivating them, how they learn, and what they connect to … So even if I’ve taught the material for five years (as I have for 6.101, because I was an LA, then a TA, and now an instructor), improving my teaching is always challenging. Getting better at adapting my teaching to the context of the students and their stories, which are ever-evolving, is always interesting.”
Although Dargan considers teaching one of her greatest passions, she is clear-eyed about the cost of the profession. “I think the things that we’re passionate about tell us a lot about ourselves, both our strengths and our weaknesses, and teaching has taught me a lot about my weaknesses,” she says. “Teaching is a tough career, because it tends to take people who care a lot and are perfectionists, and it can lead to a lot of burnout.”
Dargan’s students have also expressed enthusiasm and gratitude for her work. “Hope is objectively the most helpful instructor I’ve ever had,” said one anonymous reviewer. Another wrote, “I never felt judged when I asked her questions, and she was great at guiding me through problems by asking motivating questions … I truly felt like she cared about me as a student and person.” Dargan herself is modest about her role, saying, “For me, the trade-off between teaching and research is that teaching has an immediate day-to-day impact, while research has this unknown potential for long-term impact.”
With the responsibility to instruct an ever-growing percentage of the Institute’s students, the Department of Electrical Engineering and Computer Science relies heavily on dedicated and passionate students like Nakamura, Araei, and Dargan. As their caring and humane influence ripples outward through thousands of new electrical engineers and computer scientists, the day-to-day impact of their work is clear; but the long-term impact may be greater than any of them know.
Would you like that coffee with iron?
Around the world, about 2 billion people suffer from iron deficiency, which can lead to anemia, impaired brain development in children, and increased infant mortality.
To combat that problem, MIT researchers have come up with a new way to fortify foods and beverages with iron, using small crystalline particles. These particles, known as metal-organic frameworks, could be sprinkled on food, added to staple foods such as bread, or incorporated into drinks like coffee and tea.
“We’re creating a solution that can be seamlessly added to staple foods across different regions,” says Ana Jaklenec, a principal investigator at MIT’s Koch Institute for Integrative Cancer Research. “What’s considered a staple in Senegal isn’t the same as in India or the U.S., so our goal was to develop something that doesn’t react with the food itself. That way, we don’t have to reformulate for every context — it can be incorporated into a wide range of foods and beverages without compromise.”
The particles designed in this study can also carry iodine, another critical nutrient. The particles could also be adapted to carry important minerals such as zinc, calcium, or magnesium.
“We are very excited about this new approach and what we believe is a novel application of metal-organic frameworks to potentially advance nutrition, particularly in the developing world,” says Robert Langer, the David H. Koch Institute Professor at MIT and a member of the Koch Institute.
Jaklenec and Langer are the senior authors of the study, which appears today in the journal Matter. MIT postdoc Xin Yang and Linzixuan (Rhoda) Zhang PhD ’24 are the lead authors of the paper.
Iron stabilization
Food fortification can be a successful way to combat nutrient deficiencies, but this approach is often challenging because many nutrients are fragile and break down during storage or cooking. When iron is added to foods, it can react with other molecules in the food, giving the food a metallic taste.
In previous work, Jaklenec’s lab has shown that encapsulating nutrients in polymers can protect them from breaking down or reacting with other molecules. In a small clinical trial, the researchers found that women who ate bread fortified with encapsulated iron were able to absorb the iron from the food.
However, one drawback to this approach is that the polymer adds a lot of bulk to the material, limiting the amount of iron or other nutrients that end up in the food.
“Encapsulating iron in polymers significantly improves its stability and reactivity, making it easier to add to food,” Jaklenec says. “But to be effective, it requires a substantial amount of polymer. That limits how much iron you can deliver in a typical serving, making it difficult to meet daily nutritional targets through fortified foods alone.”
To overcome that challenge, Yang came up with a new idea: Instead of encapsulating iron in a polymer, they could use iron itself as a building block for a crystalline particle known as a metal-organic framework, or MOF (pronounced “moff”).
MOFs consist of metal atoms joined by organic molecules called ligands to create a rigid, cage-like structure. Depending on the combination of metals and ligands chosen, they can be used for a wide variety of applications.
“We thought maybe we could synthesize a metal-organic framework with food-grade ligands and food-grade micronutrients,” Yang says. “Metal-organic frameworks have very high porosity, so they can load a lot of cargo. That’s why we thought we could leverage this platform to make a new metal-organic framework that could be used in the food industry.”
In this case, the researchers designed a MOF consisting of iron bound to a ligand called fumaric acid, which is often used as a food additive to enhance flavor or help preserve food.
This structure prevents iron from reacting with polyphenols — compounds commonly found in foods such as whole grains and nuts, as well as coffee and tea. When iron does react with those compounds, it forms a metal polyphenol complex that cannot be absorbed by the body.
The MOFs’ structure also allows them to remain stable until they reach an acidic environment, such as the stomach, where they break down and release their iron payload.
Double-fortified salts
The researchers also decided to include iodine in their MOF particle, which they call NuMOF. Iodized salt has been very successful at preventing iodine deficiency, and many efforts are now underway to create “double-fortified salts” that would also contain iron.
Delivering these nutrients together has proven difficult because iron and iodine can react with each other, making each less likely to be absorbed by the body. In this study, the MIT team showed that once they had formed their iron-containing MOF particles, they could load them with iodine in a way that prevents the iron and iodine from reacting with each other.
In tests of the particles’ stability, the researchers found that the NuMOFs could withstand long-term storage, high heat and humidity, and boiling water.
Throughout these tests, the particles maintained their structure. When the researchers then fed the particles to mice, they found that both iron and iodine became available in the bloodstream within several hours of consuming the NuMOFs.
The researchers are now working on launching a company that is developing coffee and other beverages fortified with iron and iodine. They also hope to continue working toward a double-fortified salt that could be consumed on its own or incorporated into staple food products.
The research was partially supported by J-WAFS Fellowships for Water and Food Solutions.
Other authors of the paper include Fangzheng Chen, Wenhao Gao, Zhiling Zheng, Tian Wang, Erika Yan Wang, Behnaz Eshaghi, and Sydney MacDonald.
Jessika Trancik named director of the Sociotechnical Systems Research Center
Jessika Trancik, a professor in MIT’s Institute for Data, Systems, and Society, has been named the new director of the Sociotechnical Systems Research Center (SSRC), effective July 1. The SSRC convenes and supports researchers focused on problems and solutions at the intersection of technology and its societal impacts.
Trancik conducts research on technology innovation and energy systems. At the Trancik Lab, she and her team develop methods drawing on engineering knowledge, data science, and policy analysis. Their work examines the pace and drivers of technological change, helping identify where innovation is occurring most rapidly, how emerging technologies stack up against existing systems, and which performance thresholds matter most for real-world impact. Her models have been used to inform government innovation policy and have been applied across a wide range of industries.
“Professor Trancik’s deep expertise in the societal implications of technology, and her commitment to developing impactful solutions across industries, make her an excellent fit to lead SSRC,” says Maria C. Yang, interim dean of engineering and William E. Leonhard (1940) Professor of Mechanical Engineering.
Much of Trancik’s research focuses on energy systems, establishing methods for evaluating energy technologies, including their costs, performance, and environmental impacts. She covers a wide range of energy services — including electricity, transportation, heating, and industrial processes. Her research has applications in solar and wind energy, energy storage, low-carbon fuels, electric vehicles, and nuclear fission. Trancik is also known for her research on extreme events in renewable energy availability.
A prolific researcher, Trancik has helped measure progress and inform the development of solar photovoltaics, batteries, electric vehicle charging infrastructure, and other low-carbon technologies — and anticipate future trends. One of her widely cited contributions includes quantifying learning rates and identifying where targeted investments can most effectively accelerate innovation. These tools have been used by U.S. federal agencies, international organizations, and the private sector to shape energy R&D portfolios, climate policy, and infrastructure planning.
Trancik is committed to engaging and informing the public on energy consumption. She and her team developed the app carboncounter.com, which helps users choose cars with low costs and low environmental impacts.
As an educator, Trancik teaches courses for students across MIT’s five schools and the MIT Schwarzman College of Computing.
“The question guiding my teaching and research is how do we solve big societal challenges with technology, and how can we be more deliberate in developing and supporting technologies to get us there?” Trancik said in an article about course IDS.521/IDS.065 (Energy Systems for Climate Change Mitigation).
Trancik received her undergraduate degree in materials science and engineering from Cornell University. As a Rhodes Scholar, she completed her PhD in materials science at the University of Oxford. She subsequently worked for the United Nations in Geneva, Switzerland, and the Earth Institute at Columbia University. After serving as an Omidyar Research Fellow at the Santa Fe Institute, she joined MIT in 2010 as a faculty member.
Trancik succeeds Fotini Christia, the Ford International Professor of Social Sciences in the Department of Political Science and director of IDSS, who previously served as director of SSRC.
Harvey Kent Bowen, ceramics scholar and MIT Leaders for Global Operations co-founder, dies at 83
Harvey Kent Bowen PhD ’71, a longtime MIT professor celebrated for his pioneering work in manufacturing education, innovative ceramics research, and generous mentorship, died July 17 in Belmont, Massachusetts. He was 83.
At MIT, he was the founding engineering faculty leader of Leaders for Manufacturing (LFM) — now Leaders for Global Operations (LGO) — a program that continues to shape engineering and management education nearly four decades later.
Bowen spent 22 years on the MIT faculty, returning to his alma mater after earning both a master’s degree in materials science and a PhD in materials science and ceramics processing there. He held the Ford Professorship of Engineering, with appointments in the departments of Materials Science and Engineering (DMSE) and Electrical Engineering and Computer Science, before transitioning to Harvard Business School, where he bridged the worlds of engineering, manufacturing, and management.
Bowen’s prodigious research output spans 190 articles, 45 Harvard case studies, and two books. In addition to his scholarly contributions, those who knew him best say his visionary understanding of the connection between management and engineering, coupled with his intellect and warm leadership style, set him apart at a time of rapid growth at MIT.
A pioneering physical ceramics researcher
Bowen was born on Nov. 21, 1941, in Salt Lake City, Utah. As an MIT graduate student in the 1970s, he helped to redefine the study of ceramics — transforming it into the scientific field now known as physical ceramics, which focuses on the structure, properties, and behavior of ceramic materials.
“Prior to that, it was the art of ceramic composition,” says Michael Cima, the David H. Koch Professor of Engineering in DMSE. “What Kent and a small group of more-senior DMSE faculty were doing was trying to turn that art into science.”
Bowen advanced the field by applying scientific rigor to how ceramic materials were processed. He applied concepts from the developing field of colloid science — the study of particles evenly distributed in another material — to the manufacturing of ceramics, forever changing how such objects were made.
“That sparked a whole new generation of people taking a different look at how ceramic objects are manufactured,” Cima recalls. “It was an opportunity to make a big change. Despite the fact that physical ceramics — composition, crystal structure and so forth — had turned into a science, there still was this big gap: how do you make these things? Kent thought this was the opportunity for science to have an impact on the field of ceramics.”
One of his greatest scholarly accomplishments was “Introduction to Ceramics, 2nd edition,” with David Kingery and Donald Uhlmann, a foundational textbook he helped write early in his career. The book, published in 1976, helped maintain DMSE’s leading position in ceramics research and education.
“Every PhD student in ceramics studied that book, all 1,000 pages, from beginning to end, to prepare for the PhD qualifying exams,” says Yet-Ming Chiang, Kyocera Professor of Ceramics in DMSE. “It covered almost every aspect of the science and engineering of ceramics known at that time. That was why it was both an outstanding teaching text as well as a reference textbook for data.”
In ceramics processing, Bowen was also known for his control of particle size, shape, and size distribution, and how those factors influence sintering, the process of forming solid materials from powders.
Over time, Bowen’s interest in ceramics processing broadened into a larger focus on manufacturing. He was also deeply connected to industry and traveled frequently, especially to Japan, a leader in ceramics manufacturing.
“One time, he came back from Japan and told all of us graduate students that the students there worked so hard they were sleeping in the labs at night — as a way to prod us,” Chiang recalls.
While Bowen’s work in manufacturing began in ceramics, he also became a consultant to major companies, including automakers, and he worked with Lee Iacocca, the Ford executive behind the Mustang. Those experiences also helped spark LFM, which evolved into LGO. Bowen co-founded LFM with former MIT dean of engineering Tom Magnanti.
“I’m still in awe of Kent’s audacity and vision in starting the LFM program. The scale and scope of the program were, even for MIT standards, highly ambitious. Thirty-seven successful years later, we all owe a great sense of gratitude to Kent,” says LGO Executive Director Thomas Roemer, a senior lecturer at the MIT Sloan School of Management.
Bowen as mentor, teacher
Bowen’s scientific leadership was matched by his personal influence. Colleagues recall him as a patient, thoughtful mentor who valued creativity and experimentation.
“He had a lot of patience, and I think students benefited from that patience. He let them go in the directions they wanted to — and then helped them out of the hole when their experiments didn’t work. He was good at that,” Cima says.
His discipline was another hallmark of his character. Chiang was an undergraduate and graduate student when Bowen was a faculty member, and he fondly recalls Bowen’s tendency to get up early, a source of amusement for his 3.01 (Kinetics of Materials) class.
“One time, some students played a joke on him. They got to class before him, set up an electric griddle, and cooked breakfast in the classroom before he arrived,” says Chiang. “When we all arrived, it smelled like breakfast.”
Bowen took a personal interest in Chiang’s career trajectory, arranging for him to spend a summer in Bowen’s lab through the Undergraduate Research Opportunities Program. Funded by the Department of Energy, the project explored magnetohydrodynamics: shooting a high-temperature plasma made from coal fly ash into a magnetic field between ceramic electrodes to generate electricity.
“My job was just to sift the fly ash, but it opened my eyes to energy research,” Chiang recalls.
Later, when Chiang was an assistant professor at MIT, Bowen served on his career development committee. He was both encouraging and pragmatic.
“He pushed me to get things done — to submit and publish papers at a time when I really needed the push,” Chiang says. “After all the happy talk, he would say, ‘OK, by what date are you going to submit these papers?’ And that was what I needed.”
After leaving MIT, Bowen joined Harvard Business School (HBS), where he wrote numerous detailed case studies, including one on A123 Systems, a battery company Chiang co-founded in 2001.
“He was very supportive of our work to commercialize battery technology, and starting new companies in energy and materials,” Chiang says.
Bowen was also a devoted mentor for LFM/LGO students, even while at HBS. Greg Dibb MBA ’04, SM ’04 recalls that Bowen agreed to oversee his work on the management philosophy known as the Toyota Production System (TPS) — a manufacturing system developed by the Japanese automaker — responding kindly to the young student’s outreach and inspiring him with methodical, real-world advice.
“By some miracle, he agreed and made the time to guide me on my thesis work. In the process, he became a mentor and a lifelong friend,” Dibb says. “He inspired me in his way of working and collaborating. He was a master thinker and listener, and he taught me by example through his Socratic style, asking me simple but difficult questions that required rigor of thought.
“I remember he asked me about my plan to learn about manufacturing and TPS. I came to him enthusiastically with a list of books I planned to read. He responded, ‘Do you think a world expert would read those books?’”
In trying to answer that question, Dibb realized the best way to learn was to go to the factory floor.
“He had a passion for the continuous improvement of manufacturing and operations, and he taught me how to do it by being an observer and a listener just like him — all the time being inspired by his optimism, faith, and charity toward others.”
Faith was a cornerstone of Bowen’s life outside of academia. He served a mission for The Church of Jesus Christ of Latter-day Saints in the Central Germany Mission and held several leadership roles, including bishop of the Cambridge, Massachusetts Ward, stake president of the Cambridge Stake, mission president of the Tacoma, Washington Mission, and temple president of the Boston, Massachusetts Temple.
An enthusiastic role model who inspired excellence
During early-morning conversations, Cima learned about Bowen’s growing interest in manufacturing, which would spur what is now LGO. Bowen eventually became recognized as an expert in the Toyota Production System, the automaker’s operational culture and practice, which was a major influence on the LGO program’s curriculum design.
“I got to hear it from him — I was exposed to his early insights,” Cima says. “The fact that he would take the time every morning to talk to me — it was a huge influence.”
Bowen was a natural leader and set an example for others, Cima says.
“What is a leader? A leader is somebody who has the kind of infectious enthusiasm to convince others to work with them. Kent was really good at that,” Cima says. “What’s the way you learn leadership? Well, you’d look at how leaders behave. And really good leaders behave like Kent Bowen.”
MIT Sloan School of Management professor of the practice Zeynep Ton praises Bowen’s people skills and work ethic: “When you combine his belief in people with his ability to think big, something magical happens through the people Kent mentored. He always pushed us to do more,” Ton recalls. “Whenever I shared with Kent my research making an impact on a company, or my teaching making an impact on a student, his response was never just ‘good job.’ His next question was: ‘How can you make a bigger impact? Do you have the resources at MIT to do it? Who else can help you?’”
A legacy of encouragement and drive
With this drive to do more, Bowen embodied MIT’s ethos, colleagues say.
“Kent Bowen embodies the MIT 'mens et manus' ['mind and hand'] motto professionally and personally as an inveterate experimenter in the lab, in the classroom, as an advisor, and in larger society,” says MIT Sloan senior lecturer Steve Spear. “Kent’s consistency was in creating opportunities to help people become their fullest selves, not only finding expression for their humanity greater than they could have achieved on their own, but greater than they might have even imagined on their own. An extraordinary number of people are directly in his debt because of this personal ethos — and even more have benefited from the ripple effect.”
Gregory Dibb, now a leader in the autonomous vehicle industry, is just one of them.
“Upon hearing of his passing, I immediately felt that I now have even more responsibility to step up and try to fill his shoes in sacrificing and helping others as he did — even if that means helping an unprepared and overwhelmed LGO grad student like me,” Dibb says.
Bowen is survived by his wife, Kathy Jones; his children, Natalie, Jennifer Patraiko, Melissa, Kirsten, and Jonathan; his sister, Kathlene Bowen; and six grandchildren.
Jason Sparapani contributed to this article.