MIT Latest News

MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.

A boost for the precision of genome editing

Wed, 08/20/2025 - 4:30pm

The U.S. Food and Drug Administration’s recent approval of the first CRISPR-Cas9–based gene therapy has marked a major milestone in biomedicine, validating genome editing as a promising treatment strategy for disorders like sickle cell disease, muscular dystrophy, and certain cancers.

CRISPR-Cas9, often likened to “molecular scissors,” allows scientists to cut DNA at targeted sites to snip, repair, or replace genes. But despite its power, Cas9 poses a critical safety risk: The active enzyme can linger in cells and cause unintended DNA breaks — so-called off-target effects — which may trigger harmful mutations in healthy genes.

Now, researchers in the labs of Ronald T. Raines, MIT professor of chemistry, and Amit Choudhary, professor of medicine at Harvard Medical School, have engineered a precise way to turn Cas9 off after its job is done — significantly reducing off-target effects and improving the clinical safety of gene editing. Their findings are detailed in a new paper published in the Proceedings of the National Academy of Sciences (PNAS).

“To ‘turn off’ Cas9 after it achieves its intended genome-editing outcome, we developed the first cell-permeable anti-CRISPR protein system,” says Raines, the Roger and Georges Firmenich Professor of Natural Products Chemistry. “Our technology reduces the off-target activity of Cas9 and increases its genome-editing specificity and clinical utility.”

The new tool — called LFN-Acr/PA — uses a protein-based delivery system to ferry anti-CRISPR proteins into human cells rapidly and efficiently. While natural Type II anti-CRISPR proteins (Acrs) are known to inhibit Cas9, their use in therapy has been limited because they’re often too bulky or charged to enter cells, and conventional delivery methods are too slow or ineffective.

LFN-Acr/PA overcomes these hurdles using a component derived from anthrax toxin to introduce Acrs into cells within minutes. Even at picomolar concentrations, the system shuts down Cas9 activity with remarkable speed and precision — boosting genome-editing specificity up to 40 percent.

Bradley L. Pentelute, MIT professor of chemistry, is an expert on the anthrax delivery system, and is also an author of the paper.

The implications of this advance are wide-ranging. With patent applications filed, LFN-Acr/PA represents a faster, safer, and more controllable means of harnessing CRISPR-Cas9, opening the door to more-refined gene therapies with fewer unintended consequences.

The research was supported by the National Institutes of Health and a Gilliam Fellowship from the Howard Hughes Medical Institute awarded to lead author Axel O. Vera, a graduate student in the Department of Chemistry.

Materials Research Laboratory: Driving interdisciplinary materials research at MIT

Wed, 08/20/2025 - 4:15pm

Materials research thrives across MIT, spanning disciplines and departments. Recent breakthroughs include strategies for securing sustainable supplies of nickel — critical to clean-energy technologies (Department of Materials Science and Engineering); the discovery of unexpected magnetism in atomically thin quantum materials (Department of Physics); and the development of adhesive coatings that reduce scarring around medical implants (departments of Mechanical Engineering and Civil and Environmental Engineering).

Beyond individual projects, the MIT Materials Research Laboratory (MRL) fosters broad collaboration through strategic initiatives such as the Materials Systems Laboratory and SHINE (Sustainability and Health Initiative for Net Positive Enterprise). These efforts bring together academia, government, and industry to accelerate innovation in sustainability, energy use, and advanced materials.

MRL, a hub that connects and supports the Institute’s materials research community, is at the center of these efforts. “MRL serves as a home for the entire materials research community at MIT,” says C. Cem Tasan, the POSCO Associate Professor of Metallurgy in the Department of Materials Science and Engineering who became MRL director in April. “Our goal is to make it easier for our faculty to conduct their extraordinary research.”

A storied history

Established in 2017, the MRL brings together more than 30 researchers and builds on a 48-year legacy of innovation. It was formed through the merger of the MIT Materials Processing Center (MPC) and the Center for Materials Science and Engineering (CMSE), two institutions that helped lay the foundation for MIT’s global leadership in materials science.

Over the years, research supported by MPC and CMSE has led to transformative technologies and successful spinout companies. Notable examples include AMSC (American Superconductor), based on advances in superconductivity; OmniGuide, which developed cutting-edge optical fiber technologies; and QD Vision, a pioneer in quantum dot technology acquired by Samsung in 2016. Another landmark achievement was the development of the first germanium laser to operate at room temperature — a breakthrough now used in optical communications.

Enabling research through partnership and support

MRL is launching targeted initiatives to connect MIT researchers with industry partners around specific technical challenges. Each initiative will be led by a junior faculty member working closely with MRL to identify a problem that aligns with their research expertise and is relevant to industry needs.

Through multi-year collaborations with participating companies, faculty can explore early-stage solutions in partnership with postdocs or graduate students. These initiatives are designed to be agile and interdisciplinary, with the potential to grow into major, long-term research programs.

Behind-the-scenes support, front-line impact

MRL provides critical infrastructure that enables faculty to focus on discovery, not logistics. “MRL works silently in the background, where every problem a principal investigator has related to the administration of materials research is solved with efficiency, good organization, and minimum effort,” says Tasan.

This quiet but powerful support spans multiple areas:

  • The finance team manages grants and helps secure new funding opportunities.
  • The human resources team supports the hiring of postdocs.
  • The communications team amplifies the lab’s impact through compelling stories shared with the public and funding agencies.
  • The events team plans and coordinates conferences, seminars, and symposia that foster collaboration within the MIT community and with external partners.

Together, these functions ensure that research at MRL runs smoothly and effectively — from initial idea to lasting innovation.

Leadership with a vision

Tasan, who also leads a research group focused on metallurgy, says he took on the directorship because “I thrive on new challenges.” He also saw the role as an opportunity to contribute more broadly to MIT. 

“I believe MRL can play an even greater role in advancing materials research across the Institute, and I’m excited to help make that happen,” he says.

Recent MRL initiatives

MRL has supported a wide range of research programs in partnership with major industry leaders, including Apple, Ford, Microsoft, Rio Tinto, IBM, Samsung, and Texas Instruments, as well as organizations such as Advanced Functional Fabrics of America, Allegheny Technologies, Ericsson, and the Semiconductor Research Corp.

MRL researchers are addressing critical global challenges in energy efficiency, environmental sustainability, and the development of next-generation material systems.

  • Professor Antoine Allanore is advancing a direct process for wire production from sulfide concentrates, offering a more efficient and sustainable alternative to traditional methods.
  • Professor Joe Checkelsky is leading pioneering research on scalable, high-temperature quantum materials and their quantum transport properties.
  • Professor Pablo Jarillo-Herrero is making significant progress with two-dimensional materials and their heterostructures.
  • Professor Nuh Gedik explores ultrafast electronic and structural dynamics and light-matter interactions.
  • Professor Gregory Rutledge spearheaded a National Institute of Standards and Technology Rapid Assistance for Coronavirus Economic Response (NIST RACER)-sponsored initiative to develop biodegradable nanofiber-based personal protective equipment, aimed at improving manufacturing automation, diversifying supply chains, and reducing environmental impact.
  • Professor Elsa Olivetti serves as the lead principal investigator at MIT for REMADE: the Institute for Reducing Embodied-energy and Decreasing Emissions. Her research on fiber recovery and post-consumer resin processing directly supports REMADE’s mission to enhance material circularity and reduce energy use by 50 percent by 2027.
  • Randy Kirchain is modeling metals markets under decarbonization, and developing greener construction materials.
  • Anu Agarwal is spearheading efforts to build a sustainable microchip manufacturing ecosystem. 

New laser “comb” can enable rapid identification of chemicals with extreme precision

Wed, 08/20/2025 - 10:00am

Optical frequency combs are specially designed lasers that act like rulers to accurately and rapidly measure specific frequencies of light. They can be used to detect and identify chemicals and pollutants with extremely high precision.

Frequency combs would be ideal for remote sensors or portable spectrometers because they can enable accurate, real-time monitoring of multiple chemicals without complex moving parts or external equipment.

But developing frequency combs with high enough bandwidth for these applications has been a challenge. Often, researchers must add bulky components that limit scalability and performance.

Now, a team of MIT researchers has demonstrated a compact, fully integrated device that uses a carefully crafted mirror to generate a stable frequency comb with very broad bandwidth. The mirror they developed, along with an on-chip measurement platform, offers the scalability and flexibility needed for mass-producible remote sensors and portable spectrometers. This development could enable more accurate environmental monitors that can identify multiple harmful chemicals from trace gases in the atmosphere.

“The broader the bandwidth a spectrometer has, the more powerful it is, but dispersion is in the way. Here we took the hardest problem that limits bandwidth and made it the centerpiece of our study, addressing every step to ensure robust frequency comb operation,” says Qing Hu, Distinguished Professor in Electrical Engineering and Computer Science at MIT, principal investigator in the Research Laboratory of Electronics, and senior author on an open-access paper describing the work.

He is joined on the paper by lead author Tianyi Zeng PhD ’23; as well as Yamac Dikmelik of General Dynamics Mission Systems; Feng Xie and Kevin Lascola of Thorlabs Quantum Electronics; and David Burghoff SM ’09, PhD ’14, an assistant professor at the University of Texas at Austin. The research appears today in Light: Science and Applications.

Broadband combs

An optical frequency comb produces a spectrum of equally spaced laser lines, which resemble the teeth of a comb.

Scientists can generate frequency combs using several types of lasers for different wavelengths. By using a laser that produces long wave infrared radiation, such as a quantum cascade laser, they can use frequency combs for high-resolution sensing and spectroscopy.

In dual-comb spectroscopy (DCS), the beam of one frequency comb travels straight through the system and strikes a detector at the other end. The beam of the second frequency comb passes through a chemical sample before striking the same detector. Using the results from both combs, scientists can faithfully replicate the chemical features of the sample at much lower frequencies, where signals can be easily analyzed.
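To make this mapping concrete, here is a minimal numerical sketch in Python; the repetition rates and offset are illustrative values, not the parameters of the MIT device. Each comb line sits at f_n = f_offset + n × f_rep, and when two combs with slightly different repetition rates strike the same detector, tooth n produces a radio-frequency beat at n × Δf_rep:

```python
# Illustrative dual-comb "downconversion" (values assumed, not from the paper).
f_rep_1 = 10.000e9            # comb 1 repetition rate (Hz)
f_rep_2 = 10.001e9            # comb 2 repetition rate, offset by 1 MHz
f_offset = 30.0e12            # common optical offset frequency (Hz)
delta_f_rep = f_rep_2 - f_rep_1

for n in (1000, 1001, 1002):                  # a few neighboring comb teeth
    optical_1 = f_offset + n * f_rep_1        # optical frequency of tooth n, comb 1
    optical_2 = f_offset + n * f_rep_2        # same tooth index, comb 2
    rf_beat = optical_2 - optical_1           # heterodyne beat seen by the detector
    print(f"tooth {n}: ~{optical_1 / 1e12:.4f} THz -> beat {rf_beat / 1e6:.3f} MHz")

# Each beat lands at n * delta_f_rep, so absorption features imprinted on the
# optical teeth reappear, in order, at radio frequencies that are easy to digitize.
```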

The frequency combs must have high bandwidth, or they will only be able to detect a small frequency range of chemical compounds, which could lead to false alarms or inaccurate results.

Dispersion is the most important factor that limits a frequency comb’s bandwidth. If there is dispersion, the laser lines are not evenly spaced, which is incompatible with the formation of frequency combs.

“With long wave infrared radiation, the dispersion will be very high. There is no way to get around it, so we have to find a way to compensate for it or counteract it by engineering our system,” Hu says.
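One rough way to see the problem, sketched below with made-up numbers rather than the device's actual dispersion, is that dispersion adds a term that grows with mode index, so the spacing between adjacent comb lines drifts instead of staying constant; a compensating mirror must supply the opposite group-delay dispersion to restore even spacing.

```python
# Toy model of dispersion (numbers assumed): mode n sits near n * f_rep plus a
# quadratic correction, so adjacent-line spacings are no longer uniform.
f_rep = 10.0e9        # ideal line spacing (Hz)
disp = 40.0e3         # strength of the quadratic dispersion term (Hz), assumed

def mode_freq(n, compensated=False):
    d = 0.0 if compensated else disp
    return n * f_rep + 0.5 * d * n ** 2

for n in range(1, 6):
    spacing = mode_freq(n + 1) - mode_freq(n)
    print(f"spacing between lines {n} and {n + 1}: {spacing / 1e9:.6f} GHz")

# The spacings drift with n. A double-chirped mirror is designed to add the
# opposite dispersion — the compensated=True case — so the spacing is uniform again.
```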

Many existing approaches aren’t flexible enough to be used in different scenarios or don’t enable high enough bandwidth.

Hu’s group previously solved this problem in a different type of frequency comb, one that used terahertz waves, by developing a double-chirped mirror (DCM).

A DCM is a special type of optical mirror that has multiple layers with thicknesses that change gradually from one end to the other. They found that this DCM, which has a corrugated structure, could effectively compensate for dispersion when used with a terahertz laser.

“We tried to borrow this trick and apply it to an infrared comb, but we ran into lots of challenges,” Hu says.

Because infrared waves are 10 times shorter than terahertz waves, fabricating the new mirror required an extreme level of precision. At the same time, they needed to coat the entire DCM in a thick layer of gold to remove the heat under laser operation. Plus, their dispersion measurement system, designed for terahertz waves, wouldn’t work with infrared waves, which have frequencies that are about 10 times higher than terahertz.

“After more than two years of trying to implement this scheme, we reached a dead end,” Hu says.

A new solution

Ready to throw in the towel, the team realized something they had missed. They had designed the mirror with corrugation to compensate for the lossy terahertz laser, but infrared radiation sources aren’t as lossy.

This meant they could use a standard DCM design to compensate for dispersion, which is compatible with infrared radiation. However, they still needed to create curved mirror layers to capture the beam of the laser, which made fabrication much more difficult than usual.

“The adjacent layers of mirror differ only by tens of nanometers. That level of precision precludes standard photolithography techniques. On top of that, we still had to etch very deeply into the notoriously stubborn material stacks. Achieving those critical dimensions and etch depths was key to unlocking broadband comb performance,” Zeng says.

In addition to precisely fabricating the DCM, they integrated the mirror directly onto the laser, making the device extremely compact. The team also developed a high-resolution, on-chip dispersion measurement platform that doesn’t require bulky external equipment.

“Our approach is flexible. As long as we can use our platform to measure the dispersion, we can design and fabricate a DCM that compensates for it,” Hu adds.

Taken together, the DCM and on-chip measurement platform enabled the team to generate stable infrared laser frequency combs that had far greater bandwidth than can usually be achieved without a DCM.

In the future, the researchers want to extend their approach to other laser platforms that could generate combs with even greater bandwidth and higher power for more demanding applications.

“These researchers developed an ingenious nanophotonic dispersion compensation scheme based on an integrated air–dielectric double-chirped mirror. This approach provides unprecedented control over dispersion, enabling broadband comb formation at room temperature in the long-wave infrared. Their work opens the door to practical, chip-scale frequency combs for applications ranging from chemical sensing to free-space communications,” says Jacob B. Khurgin, a professor at the Johns Hopkins University Whiting School of Engineering, who was not involved with this paper.

This work is funded, in part, by the U.S. Defense Advanced Research Projects Agency (DARPA) and the Gordon and Betty Moore Foundation.

Graduate work with an impact — in big cities and on campus

Wed, 08/20/2025 - 12:00am

While working to boost economic development in Detroit in the late 2010s, Nick Allen found he was running up against a problem.

The city was trying to spur more investment after long-term industrial flight to suburbs and other states. Relying more heavily on property taxes for revenue, the city was negotiating individualized tax deals with prospective businesses. That’s hardly a scenario unique to Detroit, but such deals involved lengthy approval processes that slowed investment decisions and made smaller projects seem unrealistic. 

Moreover, while creating small pockets of growth, these individualized tax abatements were not changing the city’s broader fiscal structure. They also favored those with leverage and resources to work the system for a break.

“The thing you really don’t want to do with taxes is have very particular, highly procedural ways of adjusting the burdens,” says Allen, now a doctoral student in MIT’s Department of Urban Studies and Planning (DUSP). “You want a simple process that fits people’s ideas about what fairness looks like.”

So, after starting his PhD program at MIT, Allen kept studying urban fiscal policy. Along with a group of other scholars, he has produced research papers making the case for a land-value tax — a common tax rate on land that, combined with reduced property taxes, could raise more local revenue by encouraging more city-wide investment, even while lowering tax burdens on residents and businesses. As a bonus, it could also reduce foreclosures.
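The incentive argument can be seen with back-of-the-envelope arithmetic. The sketch below uses entirely hypothetical tax rates and parcel values, not figures from Allen's research, to show how a split-rate land-value tax shrinks the tax penalty for building while raising the cost of holding land vacant:

```python
# Hypothetical illustration of the incentive shift (all numbers made up).
# A traditional property tax falls on land plus structures, so adding a
# building raises the bill; a split-rate land-value tax falls mostly on land,
# so the bill barely changes when an owner builds.

def property_tax(land, building, rate=0.030):
    return rate * (land + building)

def split_rate_tax(land, building, land_rate=0.060, building_rate=0.005):
    return land_rate * land + building_rate * building

vacant   = dict(land=100_000, building=0)
improved = dict(land=100_000, building=400_000)

for name, parcel in [("vacant lot", vacant), ("improved parcel", improved)]:
    print(name,
          f"property tax: ${property_tax(**parcel):,.0f}",
          f"split-rate tax: ${split_rate_tax(**parcel):,.0f}")

# Under the property tax, building adds $12,000 to the bill; under the
# split-rate scheme the penalty for investing shrinks to $2,000, while vacant
# land carries a higher bill -- the incentive shift the research examines.
```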

In the last few years, this has become a larger topic in urban policy circles. The mayor of Detroit has endorsed the idea. The New York Times has written about the work of Allen and his colleagues. The land-value tax is now a serious policy option.

It is unusual for a graduate student to have their work become part of a prominent policy debate. But then, Allen is an unusual student. At MIT, he has not just conducted influential research in his field, but thrown himself into campus-based work with substantial impact as well. Allen has served on task forces assessing student stipend policy, expanding campus housing, and generating ideas for dining program reform.

For all these efforts, in May, Allen received the Karl Taylor Compton Prize, MIT’s highest student honor. At the ceremony, MIT Chancellor Melissa Nobles observed that Allen’s work helped Institute stakeholders “fully understand complex issues, ensuring his recommendations are not only well-informed but also practical and impactful.”

Looking to revive growth

Allen is a Minnesota native who received his BA from Yale University. In 2015, he enrolled in graduate school at MIT, receiving his master’s in city planning from DUSP in 2017. At the time, Allen worked on the Malaysia Sustainable Cities Project, headed by Professor Lawrence Susskind. At one point Allen spent a couple of months in a small Malaysian village studying the effects of coastal development on local fishing and farming.

Malaysia may be different from Michigan, but the issues that Allen encountered in Asia were similar to the ones he wanted to keep studying back in the U.S.: finding ways to finance growth.

“The core interests I have are around real estate, the physical environment, and these fiscal policy questions of how this all gets funded and what the responsibilities are of the state and private markets,” Allen says. “And that brought me to Detroit.”

Specifically, that landed him at the Detroit Economic Growth Corporation, a city-chartered development agency that works to facilitate new investment. There, Allen started grappling with the city’s revenue problems. Once heralded as the richest city in America, Detroit has seen a lot of property go vacant, and has hiked property taxes on existing structures to compensate for that. Those rates then discouraged further investment and building.

To be sure, the challenges Detroit has faced stem from far more than tax policy and relate to many macroscale socioeconomic factors, including suburban flight, the shift of manufacturing to states with nonunion employees, and much more. But changing tax policy can be one lever to pull in response.

“It’s difficult to figure out how to revive growth in a place that’s been cannibalized by its losses,” Allen says.

Tasked with underwriting real estate projects, Allen started cataloguing the problems arising from Detroit’s property tax reliance, and began looking at past economics work on optimal tax policy in search of alternatives.

“There’s a real nose-to-the-ground empiricism you start with, asking why we have a system nobody would choose,” Allen says. “There were two parts to that, for me. One was initially looking at the difficulty of making individual projects work, from affordable housing to big industrial plants, along with, secondly, this wave of tax foreclosures in the city.”

Engineering, but for policy

After two years in Detroit, Allen returned to MIT, this time as a doctoral student in DUSP and with a research program oriented around the issues he had worked on. In pursuing that, Allen has worked closely with John E. Anderson, an economist at the University of Nebraska at Lincoln. With a nationwide team of economists convened by the Lincoln Institute of Land Policy, they worked to address the city’s questions on property tax reform.

One paper used current data to show that a land-value tax should lower tax-connected foreclosures in the city. Two other papers study the use of the tax in certain parts of Pennsylvania, one of the few states where it has been deployed. There, the researchers concluded, the land-value tax both leads to greater business development and raises property values.

“What we found overall, looking at past tax reduction in Detroit and other cities, is that in reducing the rate at which people in deep tax distress go through foreclosure, it has a fairly large effect,” Allen says. “It has some effect on allowing business to reinvest in properties. We are seeing a lot more attraction of investment. And it’s got the virtue of being a rules-based system.”

Those empirical results, he notes, helped confirm the sense that a policy change could help growth in Detroit.

“That really validated the hunch we were following,” Allen says.

The widespread attention the policy proposal has garnered could not really have been predicted. The tax has not yet been implemented in Detroit, although it has been a prominent part of civic debates there. Allen has been asked to consult on tax policy by officials in numerous large cities, and is hopeful the concept will gain still more traction.

Meanwhile, at MIT, Allen has one more year to go in his doctoral program. On top of his academic research, he has been an active participant in Institute matters, helping reshape graduate-school policies on multiple fronts.

For instance, Allen was part of the Graduate Housing Working Group, whose efforts helped spur MIT to build Graduate Junction, a new housing complex for 675 graduate students on Vassar Street in Cambridge, Massachusetts. The name also refers to the Grand Junction rail line that runs nearby; the complex formally opened in 2024.

“Innovative places struggle to build housing fast enough,” Allen said at the time Graduate Junction opened, also noting that “new housing for students reduces price pressure on the rest of the Cambridge community.”

Commenting on it now, he adds, “Maybe to most people graduate housing policy doesn’t sound that fun, but to me these are very absorbing questions.”

And ultimately, Allen says, the intellectual problems in either domain can be similar, whether he is working on city policy issues or campus enhancements.

“The reason I think planning fits so well here at MIT is, a lot of what I do is like policy engineering,” Allen says. “It’s really important to understand system constraints, and think seriously about finding solutions that can be built to purpose. I think that’s why I’ve felt at home here at MIT, working on these outside public policy topics, and projects for the Institute. You need to take seriously what people say about the constraints in their lives.”

Professor John Joannopoulos, photonics pioneer and Institute for Soldier Nanotechnologies director, dies at 78

Tue, 08/19/2025 - 2:35pm

John “JJ” Joannopoulos, the Francis Wright Davis Professor of Physics at MIT and director of the MIT Institute for Soldier Nanotechnologies (ISN), passed away on Aug. 17. He was 78. 

Joannopoulos was a prolific researcher in the field of theoretical condensed-matter physics, and an early pioneer in the study and application of photonic crystals. Many of his discoveries in the ways materials can be made to manipulate light have led to transformative and life-saving technologies, from chip-based optical waveguides to wireless energy transfer, health-monitoring textiles, and precision light-based surgical tools.

His remarkable career of over 50 years was spent entirely at MIT, where he was known as much for his generous and unwavering mentorship as for his contributions to science. He made a special point to keep up rich and meaningful collaborations with many of his former students and postdocs, dozens of whom have gone on to faculty positions at major universities, and to leadership roles in the public and private sectors. In his five decades at MIT, he made lasting connections across campus, both in service of science, and friendship.

“A scientific giant, inspiring leader, and a masterful communicator, John carried a generous and loving heart,” says Yoel Fink PhD ’00, an MIT professor of materials science and engineering who was Joannopoulos’ former student and a longtime collaborator. “He chose to see the good in people, keeping his mind and heart always open. Asking little for himself, he gave everything in care of others. John lived a life of deep impact and meaning — savoring the details of truth-seeking, achieving rare discoveries and mentoring generations of students to achieve excellence. With warmth, humor, and a never-ending optimism, JJ left an indelible impact on science and on all who had the privilege to know him. Above all, he was a loving husband, father, grandfather, friend, and mentor.”

“In the end, the most remarkable thing about him was his unmatched humanity, his ability to make you feel that you were the most important thing in the world that deserved his attention, no matter who you were,” says Raul Radovitzky, ISN associate director and the Jerome C. Hunsaker Professor in MIT’s Department of Aeronautics and Astronautics. “The legacy he leaves is not only in equations and innovations, but in the lives he touched, the minds he inspired, and the warmth he spread in every room he entered.”

“JJ was a very special colleague: a brilliant theorist who was also adept at identifying practical applications; a caring and inspiring mentor of younger scientists; a gifted teacher who knew every student in his class by name,” says Deepto Chakrabarty ’88, the William A. M. Burden Professor in Astrophysics and head of MIT’s Department of Physics. “He will be deeply missed.”

Layers of light

John Joannopoulos was born in 1947 in New York City to parents who had both emigrated from Greece. His father was a playwright, and his mother worked as a psychologist. From an early age, Joannopoulos knew he wanted to be a physicist — mainly because the subject was his most challenging in school. In a recent interview with MIT News, he enthusiastically shared: “You probably wouldn’t believe this, but it’s true: I wanted to be a physics professor since I was in high school! I loved the idea of being able to work with students, and being able to have ideas.”

He attended the University of California at Berkeley, where he received a bachelor’s degree in 1968, and a PhD in 1974, both in physics. That same year, he joined the faculty at MIT, where he would spend his 50-plus-year career — though at the time, the chances of gaining a long-term foothold at the Institute seemed slim, as Joannopoulos told MIT News.

“The chair of the physics department was the famous nuclear physicist, Herman Feshbach, who told me the probability that I would get tenure was something like 30 percent,” Joannopoulos recalled. “But when you’re young and just starting off, it was certainly better than zero, and I thought, that was fine — there was hope down the line.”

Starting out at MIT, Joannopoulos knew exactly what he wanted to do. He quickly set up a group to study theoretical condensed-matter physics, and specifically, ab initio physics, meaning physics “from first principles.” In this initial work, he sought to build theoretical models to predict the electronic behavior and structure of materials, based solely on the atomic numbers of the atoms in a material. Such foundational models could be applied to understand and design a huge range of materials and structures.

Then, in the early 1990s, Joannopoulos took a research turn, spurred by a paper by physicist Eli Yablonovitch at the University of California at Los Angeles, who did some preliminary work on materials that can affect the behavior of photons, or particles of light. Joannopoulos recognized a connection with his first-principles work with electrons. Along with his students, he applied that approach to predict the fundamental behavior of photons in different classes of materials. His group was one of the first to pioneer the field of photonic crystals, and the study of how materials can be manipulated at the nanoscale to control the behavior of light traveling through them. In 1995, Joannopoulos co-authored the first textbook on the subject.

And in 1998, he took on a more-than-century-old assumption about how light should reflect, and turned it on its head. That assumption predicted that light, shining onto a structure made of multiple refractive layers, could reflect back, but only for a limited range of angles. But in fact, Joannopoulos and his group showed that the opposite is true: If the structure’s layers followed particular design criteria, the structure as a whole could reflect light coming from any and all angles. This structure was called the “perfect mirror.”

That insight led to another: If the structure were rolled into a tube, the resulting hollow fiber could act as a perfect optical conduit. Any light traveling through the fiber would reflect and bounce around within the fiber, with none scattering away. Joannopoulos and his group applied this insight to develop the first precision “optical scalpel” — a fiber that can be safely handled, while delivering a highly focused laser, precise and powerful enough to perform delicate surgical procedures. Joannopoulos helped to commercialize the new tool with a startup, OmniGuide, that has since provided the optical scalpel to assist in hundreds of thousands of medical procedures around the world.

Legendary mentor

In 2006, Joannopoulos took the helm as director of MIT’s Institute for Soldier Nanotechnologies — a post he steadfastly held for almost 20 years. During his dedicated tenure, he worked with ISN members across campus and in departments outside his own, getting to know and champion their work. He facilitated countless collaborations between MIT faculty, industry partners, and the U.S. Department of Defense. Among the many projects he raised support for were innovations in lightweight armor, hyperspectral imaging, energy-efficient batteries, and smart and responsive fabrics.

Joannopoulos helped to translate many basic science insights into practical applications. He was a cofounder of six spinoff companies based on his fundamental research, and helped to create dozens more companies, which have advanced technologies ranging from laser surgery tools to wireless electric power transmission, transparent display technologies, and optical computing. He was awarded 126 patents for his many discoveries, and authored over 750 peer-reviewed papers.

In recognition of his wide impact and contributions, Joannopoulos was elected to the National Academy of Sciences and the American Academy of Arts and Sciences. He was also a fellow of both the American Physical Society and the American Association for the Advancement of Science. Over his 50-plus-year career, he was the recipient of many scientific awards and honors including the Max Born Award, and the Aneesur Rahman Prize in Computational Physics. Joannopoulos was also a gifted classroom teacher, and was recognized at MIT with the Buechner Teaching Prize in Physics and the Graduate Teaching Award in Science.

This year, Joannopoulos was the recipient of MIT’s Killian Achievement Award, which recognizes the extraordinary lifetime contributions of a member of the MIT faculty. In addition to the many accomplishments Joannopoulos has made in science, the award citation emphasized his lasting impact on the generations of students he has mentored:

“Professor Joannopoulos has served as a legendary mentor to generations of students, inspiring them to achieve excellence in science while at the same time facilitating the practical benefit to society through entrepreneurship,” the citation reads. “Through all of these individuals he has impacted — not to mention their academic descendants — Professor Joannopoulos has had a vast influence on the development of science in recent decades.”

“JJ was an amazing scientist: He published hundreds of papers that have been cited close to 200,000 times. He was also a serial entrepreneur: Companies he cofounded raised hundreds of millions of dollars and employed hundreds of people,” says MIT Professor Marin Soljacic ’96, a former postdoc under Joannopoulos who with him cofounded a startup, WiTricity. “He was an amazing mentor, a close friend, and like a scientific father to me. He always had time for me, any time of the day, and as much as I needed.”

Indeed, Joannopoulos strived to meaningfully support his many students. In the classroom, he “was legendary,” says friend and colleague Patrick Lee ’66, PhD ’70, who recalls that Joannopoulos would make a point of memorizing the names and faces of more than 100 students on the first day of class, and calling them each by their first name, starting on the second day, and for the rest of the term.

What’s more, Joannopoulos encouraged graduate students and postdocs to follow their ideas, even when they ran counter to his own.

“John did not produce clones,” says Lee, who is an MIT professor emeritus of physics. “He showed them the way to do science by example, by caring and by sharing his optimism. I have never seen someone so deeply loved by his students.”

Even students who stepped off the photonics path have kept in close contact with their mentor, as former student and MIT professor Josh Winn ’94, SM ’94, PhD ’01 has done.

“Even though our work together ended more than 25 years ago, and I now work in a different field, I still feel like part of the Joannopoulos academic family,” says Winn, who is now a professor of astrophysics at Princeton University. “It's a loyal group with branches all over the world. We even had our own series of conferences, organized by former students to celebrate John's 50th, 60th, and 70th birthdays. Most professors would consider themselves fortunate to have even one such ‘festschrift’ honoring their legacy.”

MIT professor of mathematics Steven Johnson ’95, PhD ’01, a former student and frequent collaborator, has experienced personally, and seen many times over, Joannopoulos’ generous and open-door mentorship.

“In every collaboration, I’ve unfailingly observed him to cast a wide net to value multiple voices, to ensure that everyone feels included and valued, and to encourage collaborations across groups and fields and institutions,” Johnson says. “Kind, generous, and brimming with infectious enthusiasm and positivity, he set an example so many of his lucky students have striven to follow.”

Joannopoulos started at MIT around the same time as Marc Kastner, who had a nearby office on the second floor of Building 13.

“I would often hear loud arguments punctuated by boisterous laughter, coming from John’s office, where he and his students were debating physics,” recalls Kastner, who is the Donner Professor of Physics Emeritus at MIT. “I am sure this style of interaction is what made him such a great mentor.”

“He exuded such enthusiasm for science and good will to others that he was just good fun to be around,” adds friend and colleague Erich Ippen, MIT professor emeritus of physics.

“John was indeed a great man — a very special one. Everyone who ever worked with him understands this,” says Stanford University physics professor Robert Laughlin PhD ’79, one of Joannopoulos’ first graduate students, who went on to win the 1998 Nobel Prize in Physics. “He sprinkled a kind of transformative magic dust on people that induced them to dedicate every waking moment to the task of making new and wonderful things. You can find traces of it in lots of places around the world that matter, all of them the better for it. There’s quite a pile of it in my office.”

Joannopoulos is survived by his wife, Kyri Dunussi-Joannopoulos; their three daughters, Maria, Lena, and Alkisti; and their families. Details for funeral and memorial services are forthcoming.

A new model predicts how molecules will dissolve in different solvents

Tue, 08/19/2025 - 5:00am

Using machine learning, MIT chemical engineers have created a computational model that can predict how well any given molecule will dissolve in an organic solvent — a key step in the synthesis of nearly any pharmaceutical. This type of prediction could make it much easier to develop new ways to produce drugs and other useful molecules.

The new model, which predicts how much of a solute will dissolve in a particular solvent, should help chemists to choose the right solvent for any given reaction in their synthesis, the researchers say. Common organic solvents include ethanol and acetone, and there are hundreds of others that can also be used in chemical reactions.

“Predicting solubility really is a rate-limiting step in synthetic planning and manufacturing of chemicals, especially drugs, so there’s been a longstanding interest in being able to make better predictions of solubility,” says Lucas Attia, an MIT graduate student and one of the lead authors of the new study.

The researchers have made their model freely available, and many companies and labs have already started using it. The model could be particularly useful for identifying solvents that are less hazardous than some of the most commonly used industrial solvents, the researchers say.

“There are some solvents which are known to dissolve most things. They’re really useful, but they’re damaging to the environment, and they’re damaging to people, so many companies require that you have to minimize the amount of those solvents that you use,” says Jackson Burns, an MIT graduate student who is also a lead author of the paper. “Our model is extremely useful in being able to identify the next-best solvent, which is hopefully much less damaging to the environment.”

William Green, the Hoyt Hottel Professor of Chemical Engineering and director of the MIT Energy Initiative, is the senior author of the study, which appears today in Nature Communications. Patrick Doyle, the Robert T. Haslam Professor of Chemical Engineering, is also an author of the paper.

Solving solubility

The new model grew out of a project that Attia and Burns worked on together in an MIT course on applying machine learning to chemical engineering problems. Traditionally, chemists have predicted solubility with a tool known as the Abraham Solvation Model, which can be used to estimate a molecule’s overall solubility by adding up the contributions of chemical structures within the molecule. While these predictions are useful, their accuracy is limited.
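For readers unfamiliar with that approach, the sketch below shows the general form of such a linear, group-contribution estimate; the solvent coefficients and solute descriptors are placeholders for illustration, not fitted Abraham parameters:

```python
# Sketch of a linear free-energy estimate in the spirit of the Abraham model:
# a solubility-related property is a weighted sum of solute descriptors, with
# weights specific to each solvent.  All numbers below are illustrative only.

solvent_coeffs = {            # c, e, s, a, b, v  (hypothetical values)
    "ethanol": (0.21, 0.21, -0.98, 0.35, -3.27, 3.97),
    "acetone": (0.31, 0.17, -0.12, -0.61, -4.75, 3.94),
}

def abraham_estimate(solute_desc, solvent):
    """Linear combination  c + e*E + s*S + a*A + b*B + v*V."""
    c, e, s, a, b, v = solvent_coeffs[solvent]
    E, S, A, B, V = solute_desc
    return c + e * E + s * S + a * A + b * B + v * V

drug_like_solute = (0.78, 1.69, 0.71, 0.76, 1.29)   # E, S, A, B, V (illustrative)
for solvent in solvent_coeffs:
    print(solvent, round(abraham_estimate(drug_like_solute, solvent), 2))
```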

In the past few years, researchers have begun using machine learning to try to make more accurate solubility predictions. Before Burns and Attia began working on their new model, the state of the art for predicting solubility was a model developed in Green’s lab in 2022.

That model, known as SolProp, works by predicting a set of related properties and combining them, using thermodynamics, to ultimately predict the solubility. However, the model has difficulty predicting solubility for solutes that it hasn’t seen before.

“For drug and chemical discovery pipelines where you’re developing a new molecule, you want to be able to predict ahead of time what its solubility looks like,” Attia says.

Part of the reason that existing solubility models haven’t worked well is that there wasn’t a comprehensive dataset to train them on. However, in 2023 a new dataset called BigSolDB was released, which compiled data from nearly 800 published papers, including solubility information for about 800 molecules dissolved in more than 100 organic solvents that are commonly used in synthetic chemistry.

Attia and Burns decided to try training two different types of models on this data. Both of these models represent the chemical structures of molecules using numerical representations known as embeddings, which incorporate information such as the number of atoms in a molecule and which atoms are bound to which other atoms. Models can then use these representations to predict a variety of chemical properties.

One of the models used in this study, known as FastProp and developed by Burns and others in Green’s lab, incorporates “static embeddings.” This means that the model already knows the embedding for each molecule before it starts doing any kind of analysis.

The other model, ChemProp, learns an embedding for each molecule during the training, at the same time that it learns to associate the features of the embedding with a trait such as solubility. This model, developed across multiple MIT labs, has already been used for tasks such as antibiotic discovery, lipid nanoparticle design, and predicting chemical reaction rates.
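The distinction between the two styles can be sketched in a few lines of PyTorch. This is an illustration under assumed feature sizes, not the actual FastProp or ChemProp code:

```python
import torch
import torch.nn as nn

# Toy contrast between the two modeling styles (illustrative only).
# Temperature is appended as an input because it strongly affects solubility.

N_DESC = 64          # size of a precomputed descriptor vector (assumed)

# (1) "Static embedding": the molecular descriptors are fixed before training;
#     only the regressor that maps them to solubility is learned.
static_model = nn.Sequential(
    nn.Linear(N_DESC + 1, 128), nn.ReLU(),      # +1 input for temperature
    nn.Linear(128, 1),                          # predicted log-solubility
)

# (2) "Learned embedding": an encoder is trained jointly with the regressor,
#     so the representation itself adapts to the solubility task.
class LearnedEmbeddingModel(nn.Module):
    def __init__(self, n_raw=128, n_embed=64):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(n_raw, n_embed), nn.ReLU())
        self.head = nn.Linear(n_embed + 1, 1)

    def forward(self, raw_features, temperature):
        z = self.encoder(raw_features)                        # learned embedding
        return self.head(torch.cat([z, temperature], dim=-1))

# Both kinds of model are trained on (molecule, solvent, temperature, solubility)
# records; in the study, their accuracy turned out to be statistically indistinguishable.
```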

The researchers trained both types of models on over 40,000 data points from BigSolDB, including information on the effects of temperature, which plays a significant role in solubility. Then, they tested the models on about 1,000 solutes that had been withheld from the training data. They found that the models’ predictions were two to three times more accurate than those of SolProp, the previous best model, and the new models were especially accurate at predicting variations in solubility due to temperature.

“Being able to accurately reproduce those small variations in solubility due to temperature, even when the overarching experimental noise is very large, was a really positive sign that the network had correctly learned an underlying solubility prediction function,” Burns says.

Accurate predictions

The researchers had expected that the model based on ChemProp, which is able to learn new representations as it goes along, would be able to make more accurate predictions. However, to their surprise, they found that the two models performed essentially the same. That suggests that the main limitation on their performance is the quality of the data, and that the models are performing as well as theoretically possible based on the data that they’re using, the researchers say.

“ChemProp should always outperform any static embedding when you have sufficient data,” Burns says. “We were blown away to see that the static and learned embeddings were statistically indistinguishable in performance across all the different subsets, which indicates to us that the data limitations that are present in this space dominated the model performance.”

The models could become more accurate, the researchers say, if better training and testing data were available — ideally, data obtained by one person or a group of people all trained to perform the experiments the same way.

“One of the big limitations of using these kinds of compiled datasets is that different labs use different methods and experimental conditions when they perform solubility tests. That contributes to this variability between different datasets,” Attia says.

Because the model based on FastProp makes its predictions faster and has code that is easier for other users to adapt, the researchers decided to make that one, known as FastSolv, available to the public. Multiple pharmaceutical companies have already begun using it.

“There are applications throughout the drug discovery pipeline,” Burns says. “We’re also excited to see, outside of formulation and drug discovery, where people may use this model.”

The research was funded, in part, by the U.S. Department of Energy.

Researchers glimpse the inner workings of protein language models

Mon, 08/18/2025 - 3:00pm

Within the past few years, models that can predict the structure or function of proteins have been widely used for a variety of biological applications, such as identifying drug targets and designing new therapeutic antibodies.

These models, which are based on large language models (LLMs), can make very accurate predictions of a protein’s suitability for a given application. However, there’s no way to determine how these models make their predictions or which protein features play the most important role in those decisions.

In a new study, MIT researchers have used a novel technique to open up that “black box” and allow them to determine what features a protein language model takes into account when making predictions. Understanding what is happening inside that black box could help researchers to choose better models for a particular task, helping to streamline the process of identifying new drugs or vaccine targets.

“Our work has broad implications for enhanced explainability in downstream tasks that rely on these representations,” says Bonnie Berger, the Simons Professor of Mathematics, head of the Computation and Biology group in MIT’s Computer Science and Artificial Intelligence Laboratory, and the senior author of the study. “Additionally, identifying features that protein language models track has the potential to reveal novel biological insights from these representations.”

Onkar Gujral, an MIT graduate student, is the lead author of the study, which appears this week in the Proceedings of the National Academy of Sciences. Mihir Bafna, an MIT graduate student, and Eric Alm, an MIT professor of biological engineering, are also authors of the paper.

Opening the black box

In 2018, Berger and former MIT graduate student Tristan Bepler PhD ’20 introduced the first protein language model. Their model, like subsequent protein models that accelerated the development of AlphaFold, such as ESM2 and OmegaFold, was based on LLMs. These models, which include ChatGPT, can analyze huge amounts of text and figure out which words are most likely to appear together.

Protein language models use a similar approach, but instead of analyzing words, they analyze amino acid sequences. Researchers have used these models to predict the structure and function of proteins, and for applications such as identifying proteins that might bind to particular drugs.

In a 2021 study, Berger and colleagues used a protein language model to predict which sections of viral surface proteins are less likely to mutate in a way that enables viral escape. This allowed them to identify possible targets for vaccines against influenza, HIV, and SARS-CoV-2.

However, in all of these studies, it has been impossible to know how the models were making their predictions.

“We would get out some prediction at the end, but we had absolutely no idea what was happening in the individual components of this black box,” Berger says.

In the new study, the researchers wanted to dig into how protein language models make their predictions. Just like LLMs, protein language models encode information as representations that consist of a pattern of activation of different “nodes” within a neural network. These nodes are analogous to the networks of neurons that store memories and other information within the brain.

The inner workings of LLMs are not easy to interpret, but within the past couple of years, researchers have begun using a type of algorithm known as a sparse autoencoder to help shed some light on how those models make their predictions. The new study from Berger’s lab is the first to use this algorithm on protein language models.

Sparse autoencoders work by adjusting how a protein is represented within a neural network. Typically, a given protein will be represented by a pattern of activation of a constrained number of neurons, for example, 480. A sparse autoencoder will expand that representation into a much larger number of nodes, say 20,000.

When information about a protein is encoded by only 480 neurons, each node lights up for multiple features, making it very difficult to know what features each node is encoding. However, when the neural network is expanded to 20,000 nodes, this extra space along with a sparsity constraint gives the information room to “spread out.” Now, a feature of the protein that was previously encoded by multiple nodes can occupy a single node.

“In a sparse representation, the neurons lighting up are doing so in a more meaningful manner,” Gujral says. “Before the sparse representations are created, the networks pack information so tightly together that it's hard to interpret the neurons.”
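In code, the idea looks roughly like the minimal sketch below: an expansion layer, a reconstruction layer, and an L1 penalty that enforces sparsity. The dimensions match the example above, but the training details are assumptions, not the study's implementation:

```python
import torch
import torch.nn as nn

# Minimal sparse autoencoder (illustrative, not the study's code).  A
# 480-dimensional protein representation is expanded to 20,000 hidden units;
# an L1 penalty keeps only a few units active per protein, so individual units
# tend to align with single, interpretable features.

D_MODEL, D_HIDDEN, L1_WEIGHT = 480, 20_000, 1e-3

class SparseAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Linear(D_MODEL, D_HIDDEN)
        self.decoder = nn.Linear(D_HIDDEN, D_MODEL)

    def forward(self, x):
        hidden = torch.relu(self.encoder(x))      # sparse, expanded representation
        return self.decoder(hidden), hidden

model = SparseAutoencoder()
x = torch.randn(8, D_MODEL)                       # stand-in protein embeddings
recon, hidden = model(x)

# Training objective: reconstruct the original representation while keeping
# the expanded activations sparse.
loss = nn.functional.mse_loss(recon, x) + L1_WEIGHT * hidden.abs().mean()
loss.backward()
```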

Interpretable models

Once the researchers obtained sparse representations of many proteins, they used an AI assistant called Claude (related to the popular Anthropic chatbot of the same name) to analyze the representations. In this case, they asked Claude to compare the sparse representations with the known features of each protein, such as molecular function, protein family, or location within a cell.

By analyzing thousands of representations, Claude can determine which nodes correspond to specific protein features, then describe them in plain English. For example, the algorithm might say, “This neuron appears to be detecting proteins involved in transmembrane transport of ions or amino acids, particularly those located in the plasma membrane.”

This process makes the nodes far more “interpretable,” meaning the researchers can tell what each node is encoding. They found that the features most likely to be encoded by these nodes were protein family and certain functions, including several different metabolic and biosynthetic processes.

“When you train a sparse autoencoder, you aren’t training it to be interpretable, but it turns out that by incentivizing the representation to be really sparse, that ends up resulting in interpretability,” Gujral says.

Understanding what features a particular protein model is encoding could help researchers choose the right model for a particular task, or tweak the type of input they give the model, to generate the best results. Additionally, analyzing the features that a model encodes could one day help biologists to learn more about the proteins that they are studying.

“At some point when the models get a lot more powerful, you could learn more biology than you already know, from opening up the models,” Gujral says.

The research was funded by the National Institutes of Health. 

A shape-changing antenna for more versatile sensing and communication

Mon, 08/18/2025 - 12:00am

MIT researchers have developed a reconfigurable antenna that dynamically adjusts its frequency range by changing its physical shape, making it more versatile for communications and sensing than static antennas.

A user can stretch, bend, or compress the antenna to make reversible changes to its radiation properties, enabling a device to operate in a wider frequency range without the need for complex, moving parts. With an adjustable frequency range, a reconfigurable antenna could adapt to changing environmental conditions and reduce the need for multiple antennas.

The word “antenna” may bring to mind metal rods like the “bunny ears” on top of old television sets, but the MIT team instead worked with metamaterials — engineered materials whose mechanical properties, such as stiffness and strength, depend on the geometric arrangement of the material’s components.

The result is a simplified design for a reconfigurable antenna that could be used for applications like energy transfer in wearable devices, motion tracking and sensing for augmented reality, or wireless communication across a wide range of network protocols.

In addition, the researchers developed an editing tool so users can generate customized metamaterial antennas, which can be fabricated using a laser cutter.

“Usually, when we think of antennas, we think of static antennas — they are fabricated to have specific properties and that is it. However, by using auxetic metamaterials, which can deform into three different geometric states, we can seamlessly change the properties of the antenna by changing its geometry, without fabricating a new structure. In addition, we can use changes in the antenna’s radio frequency properties, due to changes in the metamaterial geometry, as a new method of sensing for interaction design,” says lead author Marwa AlAlawi, a mechanical engineering graduate student at MIT.

Her co-authors include Regina Zheng and Katherine Yan, both MIT undergraduate students; Ticha Sethapakdi, an MIT graduate student in electrical engineering and computer science; Soo Yeon Ahn of the Gwangju Institute of Science and Technology in Korea; and co-senior authors Junyi Zhu, assistant professor at the University of Michigan; and Stefanie Mueller, the TIBCO Career Development Associate Professor in MIT’s departments of Electrical Engineering and Computer Science and Mechanical Engineering and leader of the Human-Computer Interaction Group at the Computer Science and Artificial Intelligence Lab. The research will be presented at the ACM Symposium on User Interface Software and Technology.

Making sense of antennas

While traditional antennas radiate and receive radio signals, in this work, the researchers looked at how the devices can act as sensors. The team’s goal was to develop a mechanical element that can also be used as an antenna for sensing.

To do this, they leveraged the antenna’s “resonance frequency,” which is the frequency at which the antenna is most efficient.

An antenna’s resonance frequency will shift due to changes in its shape. (Think about extending the left “bunny ear” to reduce TV static.) Researchers can capture these shifts for sensing. For instance, a reconfigurable antenna could be used in this way to detect the expansion of a person’s chest, to monitor their respiration.

To design a versatile reconfigurable antenna, the researchers used metamaterials. These engineered materials, which can be programmed to adopt different shapes, are composed of a periodic arrangement of unit cells that can be rotated, compressed, stretched, or bent.

By deforming the metamaterial structure, one can shift the antenna’s resonance frequency.

“In order to trigger changes in resonance frequency, we either need to change the antenna’s effective length or introduce slits and holes into it. Metamaterials allow us to get those different states from only one structure,” AlAlawi says.
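As a rough illustration of why geometry changes shift the resonance, the sketch below uses the textbook half-wavelength approximation for a patch antenna, with made-up lengths and an assumed effective permittivity rather than the meta-antenna's measured values:

```python
import math

# Textbook approximation (not the team's simulator): a rectangular patch
# antenna resonates roughly where its effective length is half a wavelength
# in the dielectric, f ~ c / (2 * L_eff * sqrt(eps_eff)).  Stretching or
# compressing the structure changes L_eff, which shifts the resonance.

C = 3.0e8  # speed of light, m/s

def resonance_ghz(length_mm, eps_eff=2.8):
    """Approximate patch resonance for an effective length in millimeters."""
    length_m = length_mm / 1000
    return C / (2 * length_m * math.sqrt(eps_eff)) / 1e9

for L in (28.0, 29.0, 30.0):                 # hypothetical deformation states
    print(f"effective length {L} mm -> ~{resonance_ghz(L):.2f} GHz")

# A few percent change in effective length gives a few percent shift in
# resonance frequency -- the kind of shift used for sensing and mode switching.
```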

The device, dubbed the meta-antenna, is composed of a dielectric layer of material sandwiched between two conductive layers.

To fabricate a meta-antenna, the researchers cut the dielectric layer out of a rubber sheet with a laser cutter. Then they added a patch on top of the dielectric layer using conductive spray paint, creating a resonating “patch antenna.”

But they found that even the most flexible conductive material couldn’t withstand the amount of deformation the antenna would experience.

“We did a lot of trial and error to determine that, if we coat the structure with flexible acrylic paint, it protects the hinges so they don’t break prematurely,” AlAlawi explains.

A means for makers

With the fabrication problem solved, the researchers built a tool that enables users to design and produce metamaterial antennas for specific applications.

The user can define the size of the antenna patch, choose a thickness for the dielectric layer, and set the length-to-width ratio of the metamaterial unit cells. Then the system automatically simulates the antenna’s resonance frequency range.

“The beauty of metamaterials is that, because it is an interconnected system of linkages, the geometric structure allows us to reduce the complexity of a mechanical system,” AlAlawi says.

Using the design tool, the researchers incorporated meta-antennas into several smart devices, including a curtain that dynamically adjusts household lighting and headphones that seamlessly transition between noise-cancelling and transparent modes.

For the smart headphone, for instance, when the meta-antenna expands and bends, it shifts the resonance frequency by 2.6 percent, which switches the headphone mode. The team’s experiments also showed that meta-antenna structures are durable enough to withstand more than 10,000 compressions.
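
A minimal sketch of the sensing logic such a headphone could implement: compare the measured resonance frequency against its rest value and toggle modes once the relative shift crosses a threshold. The 2.6 percent figure is taken from the article; the rest frequency and function names are assumed.

```python
# Minimal sketch of threshold-based mode switching from a resonance-frequency
# shift. The 2.6 percent threshold comes from the article; the rest frequency
# and the mapping to modes are assumptions.

REST_FREQUENCY_HZ = 2.45e9   # hypothetical rest resonance of the meta-antenna
SHIFT_THRESHOLD = 0.026      # 2.6 percent shift reported for the headphone demo

def headphone_mode(measured_hz: float) -> str:
    """Map a measured resonance frequency to a headphone mode."""
    relative_shift = abs(measured_hz - REST_FREQUENCY_HZ) / REST_FREQUENCY_HZ
    return "transparent" if relative_shift >= SHIFT_THRESHOLD else "noise-cancelling"

print(headphone_mode(2.45e9))   # noise-cancelling (no deformation)
print(headphone_mode(2.38e9))   # transparent (~2.9 percent shift)
```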

Because the antenna patch can be patterned onto any surface, it could be used with more complex structures. For instance, the antenna could be incorporated into smart textiles that perform noninvasive biomedical sensing or temperature monitoring.

In the future, the researchers want to design three-dimensional meta-antennas for a wider range of applications. They also want to add more functions to the design tool, improve the durability and flexibility of the metamaterial structure, experiment with different symmetric metamaterial patterns, and streamline some manual fabrication steps.

This research was funded, in part, by the Bahrain Crown Prince International Scholarship and the Gwangju Institute of Science and Technology.

How AI could speed the development of RNA vaccines and other RNA therapies

Fri, 08/15/2025 - 5:00am

Using artificial intelligence, MIT researchers have come up with a new way to design nanoparticles that can more efficiently deliver RNA vaccines and other types of RNA therapies.

After training a machine-learning model to analyze thousands of existing delivery particles, the researchers used it to predict new materials that would work even better. The model also enabled the researchers to identify particles that would work well in different types of cells, and to discover ways to incorporate new types of materials into the particles.

“What we did was apply machine-learning tools to help accelerate the identification of optimal ingredient mixtures in lipid nanoparticles to help target a different cell type or help incorporate different materials, much faster than previously was possible,” says Giovanni Traverso, an associate professor of mechanical engineering at MIT, a gastroenterologist at Brigham and Women’s Hospital, and the senior author of the study.

This approach could dramatically speed the process of developing new RNA vaccines, as well as therapies that could be used to treat obesity, diabetes, and other metabolic disorders, the researchers say.

Alvin Chan, a former MIT postdoc who is now an assistant professor at Nanyang Technological University, and Ameya Kirtane, a former MIT postdoc who is now an assistant professor at the University of Minnesota, are the lead authors of the new open-access study, which appears today in Nature Nanotechnology.

Particle predictions

RNA vaccines, such as the vaccines for SARS-CoV-2, are usually packaged in lipid nanoparticles (LNPs) for delivery. These particles protect mRNA from being broken down in the body and help it to enter cells once injected.

Creating particles that handle these jobs more efficiently could help researchers to develop even more effective vaccines. Better delivery vehicles could also make it easier to develop mRNA therapies that encode genes for proteins that could help to treat a variety of diseases.

In 2024, Traverso’s lab launched a multiyear research program, funded by the U.S. Advanced Research Projects Agency for Health (ARPA-H), to develop new ingestible devices that could achieve oral delivery of RNA treatments and vaccines.

“Part of what we’re trying to do is develop ways of producing more protein, for example, for therapeutic applications. Maximizing the efficiency is important to be able to boost how much we can have the cells produce,” Traverso says.

A typical LNP consists of four components — cholesterol, a helper lipid, an ionizable lipid, and a lipid attached to polyethylene glycol (PEG). Different variants of each of these components can be swapped in to create a huge number of possible combinations. Changing up these formulations and testing each one individually is very time-consuming, so Traverso, Chan, and their colleagues decided to turn to artificial intelligence to help speed up the process.
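
Some back-of-the-envelope arithmetic shows why exhaustive testing is impractical: even a modest library of variants per component multiplies out to thousands of candidate formulations before molar ratios are varied. The counts in the sketch below are made up for illustration.

```python
# Illustrative arithmetic only: with a modest library of variants for each of
# the four LNP components, the number of possible formulations multiplies out
# quickly. The counts and ratio choices below are made up.

component_variants = {
    "ionizable_lipid": 30,
    "helper_lipid": 10,
    "cholesterol_variant": 5,
    "peg_lipid": 10,
}

n_formulations = 1
for count in component_variants.values():
    n_formulations *= count
print(f"distinct four-component formulations: {n_formulations:,}")

# Each component's molar ratio can also be varied, multiplying the space further.
ratios_per_component = 4
print(f"with {ratios_per_component} ratios per component: "
      f"{n_formulations * ratios_per_component ** len(component_variants):,}")
```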

“Most AI models in drug discovery focus on optimizing a single compound at a time, but that approach doesn’t work for lipid nanoparticles, which are made of multiple interacting components,” Chan says. “To tackle this, we developed a new model called COMET, inspired by the same transformer architecture that powers large language models like ChatGPT. Just as those models understand how words combine to form meaning, COMET learns how different chemical components come together in a nanoparticle to influence its properties — like how well it can deliver RNA into cells.”
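
The sketch below illustrates that general idea in PyTorch: represent each component of a formulation as a token, let a small transformer encoder model how the components interact, and regress a delivery-efficiency score. This is not the published COMET architecture; the dimensions, layer counts, and input encoding are all assumptions.

```python
# A minimal, hypothetical sketch of a transformer-style formulation model.
# NOT the published COMET architecture: every dimension, layer count, and
# feature choice below is an assumption for illustration.

import torch
import torch.nn as nn

class FormulationRegressor(nn.Module):
    def __init__(self, n_component_types=64, d_model=128, n_heads=4, n_layers=2):
        super().__init__()
        # One learned embedding per component identity (ionizable lipid X,
        # helper lipid Y, ...) plus a projection for its molar fraction.
        self.component_embed = nn.Embedding(n_component_types, d_model)
        self.fraction_proj = nn.Linear(1, d_model)
        layer = nn.TransformerEncoderLayer(d_model=d_model, nhead=n_heads,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=n_layers)
        self.head = nn.Linear(d_model, 1)  # predicted delivery efficiency

    def forward(self, component_ids, molar_fractions):
        # component_ids: (batch, n_components) integer identities
        # molar_fractions: (batch, n_components, 1) composition of the mix
        tokens = self.component_embed(component_ids) + self.fraction_proj(molar_fractions)
        encoded = self.encoder(tokens)   # components attend to one another
        pooled = encoded.mean(dim=1)     # summarize the whole formulation
        return self.head(pooled).squeeze(-1)

# Two hypothetical five-component formulations (IDs and fractions made up)
model = FormulationRegressor()
ids = torch.tensor([[3, 17, 42, 8, 55], [3, 17, 42, 9, 55]])
fracs = torch.tensor([[[0.35], [0.15], [0.40], [0.05], [0.05]],
                      [[0.35], [0.15], [0.38], [0.07], [0.05]]])
print(model(ids, fracs))  # two predicted scores (untrained, so arbitrary)
```

In practice, a model of this kind would be trained on a measured library like the one described below and then queried for candidate formulations with the highest predicted scores.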

To generate training data for their machine-learning model, the researchers created a library of about 3,000 different LNP formulations. The team tested each of these 3,000 particles in the lab to see how efficiently they could deliver their payload to cells, then fed all of this data into a machine-learning model.

After the model was trained, the researchers asked it to predict new formulations that would work better than existing LNPs. They tested those predictions by using the new formulations to deliver mRNA encoding a fluorescent protein to mouse skin cells grown in a lab dish. They found that the LNPs predicted by the model did indeed work better than the particles in the training data, and in some cases better than LNP formulations that are used commercially.

Accelerated development

Once the researchers showed that the model could accurately predict particles that would efficiently deliver mRNA, they began asking additional questions. First, they wondered if they could train the model on nanoparticles that incorporate a fifth component: a type of polymer known as branched poly beta amino esters (PBAEs).

Research by Traverso and his colleagues has shown that these polymers can effectively deliver nucleic acids on their own, so they wanted to explore whether adding them to LNPs could improve LNP performance. The MIT team created a set of about 300 LNPs that also include these polymers, which they used to train the model. The resulting model could then predict additional formulations with PBAEs that would work better.

Next, the researchers set out to train the model to make predictions about LNPs that would work best in different types of cells, including a type of cell called Caco-2, which is derived from colorectal cancer cells. Again, the model was able to predict LNPs that would efficiently deliver mRNA to these cells.

Lastly, the researchers used the model to predict which LNPs could best withstand lyophilization — a freeze-drying process often used to extend the shelf-life of medicines.

“This is a tool that allows us to adapt it to a whole different set of questions and help accelerate development. We did a large training set that went into the model, but then you can do much more focused experiments and get outputs that are helpful on very different kinds of questions,” Traverso says.

He and his colleagues are now working on incorporating some of these particles into potential treatments for diabetes and obesity, which are two of the primary targets of the ARPA-H funded project. Therapeutics that could be delivered using this approach include GLP-1 mimics with similar effects to Ozempic.

This research was funded by the GO Nano Marble Center at the Koch Institute, the Karl van Tassel Career Development Professorship, the MIT Department of Mechanical Engineering, Brigham and Women’s Hospital, and ARPA-H.

Study sheds light on graphite’s lifespan in nuclear reactors

Thu, 08/14/2025 - 5:30pm

Graphite is a key structural component in some of the world’s oldest nuclear reactors and many of the next-generation designs being built today. But it also densifies and swells in response to radiation — and the mechanism behind those changes has proven difficult to study.

Now, MIT researchers and collaborators have uncovered a link between properties of graphite and how the material behaves in response to radiation. The findings could lead to more accurate, less destructive ways of predicting the lifespan of graphite materials used in reactors around the world.

“We did some basic science to understand what leads to swelling and, eventually, failure in graphite structures,” says MIT Research Scientist Boris Khaykovich, senior author of the new study. “More research will be needed to put this into practice, but the paper proposes an attractive idea for industry: that you might not need to break hundreds of irradiated samples to understand their failure point.”

Specifically, the study shows a connection between the size of the pores within graphite and the way the material swells and shrinks in volume, leading to degradation.

“The lifetime of nuclear graphite is limited by irradiation-induced swelling,” says co-author and MIT Research Scientist Lance Snead. “Porosity is a controlling factor in this swelling, and while graphite has been extensively studied for nuclear applications since the Manhattan Project, we still do not have a clear understanding of the role of porosity in both mechanical properties and swelling. This work addresses that.”

The open-access paper appears this week in Interdisciplinary Materials. It is co-authored by Khaykovich, Snead, MIT Research Scientist Sean Fayfar, former MIT research fellow Durgesh Rai, Stony Brook University Assistant Professor David Sprouster, Oak Ridge National Laboratory Staff Scientist Anne Campbell, and Argonne National Laboratory Physicist Jan Ilavsky.

A long-studied, complex material

Ever since 1942, when physicists and engineers built the world’s first nuclear reactor on a converted squash court at the University of Chicago, graphite has played a central role in the generation of nuclear energy. That first reactor, dubbed the Chicago Pile, was constructed from about 40,000 graphite blocks, many of which contained nuggets of uranium.

Today graphite is a vital component of many operating nuclear reactors and is expected to play a central role in next-generation reactor designs like molten-salt and high-temperature gas reactors. That’s because graphite is a good neutron moderator, slowing down the neutrons released by nuclear fission so they are more likely to create fissions themselves and sustain a chain reaction.

“The simplicity of graphite makes it valuable,” Khaykovich explains. “It’s made of carbon, and it’s relatively well-known how to make it cleanly. Graphite is a very mature technology. It’s simple, stable, and we know it works.”

But graphite also has its complexities.

“We call graphite a composite even though it’s made up of only carbon atoms,” Khaykovich says. “It includes ‘filler particles’ that are more crystalline, then there is a matrix called a ‘binder’ that is less crystalline, then there are pores that span in length from nanometers to many microns.”

Each graphite grade has its own composite structure, but they all contain fractals, or shapes that look the same at different scales.

Those complexities have made it hard to predict how graphite will respond to radiation in microscopic detail, although it’s been known for decades that when graphite is irradiated, it first densifies, reducing its volume by up to 10 percent, before swelling and cracking. The volume fluctuation is caused by changes to graphite’s porosity and lattice stress.

“Graphite deteriorates under radiation, as any material does,” Khaykovich says. “So, on the one hand we have a material that’s extremely well-known, and on the other hand, we have a material that is immensely complicated, with a behavior that’s impossible to predict through computer simulations.”

For the study, the researchers received irradiated graphite samples from Oak Ridge National Laboratory. Co-authors Campbell and Snead were involved in irradiating the samples some 20 years ago. The samples are a grade of graphite known as G347A.

The research team used an analysis technique known as X-ray scattering, which uses the scattered intensity of an X-ray beam to analyze the properties of a material. Specifically, they looked at the distribution of sizes and surface areas of the sample’s pores, or what are known as the material’s fractal dimensions.

“When you look at the scattering intensity, you see a large range of porosity,” Fayfar says. “Graphite has porosity over such large scales, and you have this fractal self-similarity: The pores in very small sizes look similar to pores spanning microns, so we used fractal models to relate different morphologies across length scales.”
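
A minimal sketch of the fractal-model idea: in the scaling regime, small-angle scattering intensity follows a power law I(q) ~ q^(-p), and the fitted exponent is read as a fractal dimension (p for a mass fractal, 6 - p for a surface fractal). The data below are synthetic, generated only to demonstrate the fitting step; they are not the study’s measurements.

```python
# Minimal sketch of a power-law (fractal) fit to small-angle scattering data.
# The intensities below are synthetic, generated from an assumed exponent just
# to show the fitting procedure; they are not measurements from the study.

import numpy as np

rng = np.random.default_rng(0)
q = np.logspace(-3, -1, 200)   # scattering vector magnitude (assumed range)
true_exponent = 3.4
intensity = q ** (-true_exponent) * rng.lognormal(0.0, 0.05, q.size)  # noisy power law

# Fit the slope on a log-log scale to recover the power-law exponent
slope, _ = np.polyfit(np.log(q), np.log(intensity), 1)
p = -slope
print(f"fitted exponent p = {p:.2f}")
print(f"implied surface-fractal dimension (6 - p) = {6 - p:.2f}")
```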

Fractal models had been used on graphite samples before, but not on irradiated samples to see how the material’s pore structures changed. The researchers found that when graphite is first exposed to radiation, its pores get filled as the material degrades.

“But what was quite surprising to us is the [size distribution of the pores] turned back around,” Fayfar says. “We had this recovery process that matched our overall volume plots, which was quite odd. It seems like after graphite is irradiated for so long, it starts recovering. It’s sort of an annealing process where you create some new pores, then the pores smooth out and get slightly bigger. That was a big surprise.”

The researchers found that the size distribution of the pores closely follows the volume change caused by radiation damage.

“Finding a strong correlation between the [size distribution of pores] and the graphite’s volume changes is a new finding, and it helps connect to the failure of the material under irradiation,” Khaykovich says. “It’s important for people to know how graphite parts will fail when they are under stress and how failure probability changes under irradiation.”

From research to reactors

The researchers plan to study other graphite grades and explore further how pore sizes in irradiated graphite correlate with the probability of failure. They speculate that a statistical technique known as the Weibull distribution could be used to predict graphite’s time until failure. The Weibull distribution is already used to describe the probability of failure in ceramics and other porous materials like metal alloys.
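
As a sketch of how such a model would be applied, the two-parameter Weibull form gives a cumulative failure probability F(t) = 1 - exp(-(t/scale)^shape). The shape and scale values below are placeholders, not parameters fitted to irradiated graphite.

```python
# Minimal sketch of a two-parameter Weibull failure model,
# F(t) = 1 - exp(-(t / scale) ** shape). The parameter values below are
# placeholders, not values fitted to irradiated-graphite data.

from math import exp

def weibull_failure_probability(t: float, shape: float, scale: float) -> float:
    """Cumulative probability of failure by exposure t under a Weibull model."""
    return 1.0 - exp(-((t / scale) ** shape))

# Hypothetical parameters: shape > 1 means the failure rate grows with exposure
shape, scale = 2.5, 10.0   # scale in an assumed dose unit, e.g. displacements per atom
for dose in (2.0, 5.0, 10.0, 15.0):
    prob = weibull_failure_probability(dose, shape, scale)
    print(f"dose {dose:5.1f}: P(failure) = {prob:.3f}")
```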

Khaykovich speculates that the findings could also contribute to our understanding of why materials densify and swell under irradiation.

“There’s no quantitative model of densification that takes into account what’s happening at these tiny scales in graphite,” Khaykovich says. “Graphite irradiation densification reminds me of sand or sugar, where when you crush big pieces into smaller grains, they densify. For nuclear graphite, the crushing force is the energy that neutrons bring in, causing large pores to get filled with smaller, crushed pieces. But more energy and agitation create still more pores, and so graphite swells again. It’s not a perfect analogy, but I believe analogies bring progress for understanding these materials.”

The researchers describe the paper as an important step toward informing graphite production and use in nuclear reactors of the future.

“Graphite has been studied for a very long time, and we’ve developed a lot of strong intuitions about how it will respond in different environments, but when you’re building a nuclear reactor, details matter,” Khaykovich says. “People want numbers. They need to know how much thermal conductivity will change, how much cracking and volume change will happen. If components are changing volume, at some point you need to take that into account.”

This work was supported, in part, by the U.S. Department of Energy.
