MIT Latest News
Demo Day features hormone-tracking sensors, desalination systems, and other innovations
Kresge Auditorium came alive Friday as MIT entrepreneurs took center stage to share their progress in the delta v startup accelerator program.
Now in its 14th year, delta v Demo Day represents the culmination of a summer in which students work full-time on new ventures under the guidance of the Martin Trust Center for MIT Entrepreneurship.
It also doubles as a celebration, with Trust Center Managing Director (and consummate hype man) Bill Aulet setting the tone early with his patented high-five run through the audience and leap on stage for opening remarks.
“All these students have performed a miracle,” Aulet told the crowd. “One year ago, they were sitting in the audience like all of you. One year ago, they probably didn’t even have an idea or a technology. Maybe they did, but they didn’t have a team, a clear vision, customer models, or a clear path to impact. But today they’re going to blow your mind. They have products — real products — a founding team, a clear mission, customer commitments or letters of intent, legitimate business models, and a path to greatness and impact. In short, they will have achieved escape velocity.”
The two-hour event filled Kresge Auditorium, with a line out the door for good measure, and was followed by a party under a tent on the Kresge lawn. Each presentation began with a short video introducing the company before a student took the stage to expand on the problem they were solving and what their team has learned from talks with potential customers.
In total, 22 startups showcased their ventures and early business milestones in rapid-fire presentations.
Rick Locke, the new dean of the MIT Sloan School of Management, said events like Demo Day are why he came back to the Institute after serving in various roles between 1988 and 2013.
“What’s great about this event is how it crystallizes the spirit of MIT: smart people doing important work, doing it by rolling up their sleeves, doing it with a certain humility but also a vision, and really making a difference in the world,” Locke told the audience. “You can feel the positivity, the energy, and the buzz here tonight. That’s what the world needs more of.”
A program with a purpose
This year’s Demo Day featured 70 students from across MIT, with 16 startups working out of the Trust Center on campus and six working from New York City. Through the delta v program, the students were guided by mentors, received funding, and worked through an action-oriented curriculum full-time between June and September. Aulet also noted that the students presenting benefitted from entrepreneurial support resources from across the Institute.
The odds are in the startups’ favor: A 2022 study found that 69 percent of businesses from the program were still operating five years later. Alumni companies had raised roughly $1 billion in funding.
Demo Day marks the end of delta v and serves to inspire next year’s cohort of entrepreneurs.
“Turn on a screen or look anywhere around you, and you'll see issues with climate, sustainability, health care, the future of work, economic disparities, and more,” Aulet said. “It can all be overwhelming. These entrepreneurs bring light to dark times. Entrepreneurs don’t see problems. As the great Biggie Smalls from Brooklyn said, ‘Turn a negative into a positive.’ That’s what entrepreneurs do.”
Startups in action
Startups in this year’s cohort presented solutions in biotech and health care, sustainability, financial services, energy, and more.
One company, Gees, is helping women with hormonal conditions such as polycystic ovary syndrome (PCOS) through a saliva-based sensor that tracks key hormones, giving users personalized insights to help them manage their symptoms.
“Over 200 million women live with PCOS worldwide,” said MIT postdoc and co-founder Walaa Khushaim. “If it goes unmanaged, it can lead to even more serious diseases. The good news is that 80 percent of cases can be managed with lifestyle changes. The problem is women trying to change their lifestyle are left in the dark, unsure if what they are doing is truly helping.”
Gees’ sensor is noninvasive and easier to use than current sensors that track hormones. It provides feedback in minutes from the comfort of users’ homes. The sensor connects to an app that shows results and trends to help women stay on track. The company already has more than 500 sign-ups for its wait list.
Another company, Kira, has created an electrochemical system to increase the efficiency and accessibility of water desalination. The company aims to help businesses manage brine wastewater that is often dumped, pumped underground, or trucked off to be treated.
“At Kira, we’re working toward a system that produces zero liquid waste and only solid salts,” says PhD student Jonathan Bessette SM ’22.
Kira says its system increases the amount of clean water created by industrial processes, reduces the amount of brine wastewater, and optimizes the energy flows of factories. The company says next year it will deploy a system at the largest groundwater desalination plant in the U.S.
A variety of other startups presented at the event:
AutoAce builds AI agents for car dealerships, automating repetitive tasks with a 24/7 voice agent that answers inbound service calls and books appointments.
Carbion uses a thermochemical process to convert biomass into battery-grade graphite at half the temperature of traditional synthetic methods.
Clima Technologies has developed an AI building engineer that enables facilities managers to “talk” to their buildings in real-time, allowing teams to conduct 24/7 commissioning, act on fault diagnostics, minimize equipment downtime, and optimize controls.
Cognify uses AI to predict customer interactions with digital platforms, simulating customer behavior to deliver insights into which designs resonate with customers, where friction exists in user journeys, and how to build a user experience that converts.
Durability uses computer vision and AI to analyze movement, predict injury risks, and guide recovery for athletes.
EggPlan uses a simple blood test and proprietary model to assess eligibility for egg freezing with fertility clinics. If users do not have a baby, their fees are returned, making the process risk-free.
Forma Systems has developed optimization software that helps manufacturers make smarter, faster decisions about things like materials use while reducing their climate impact.
Ground3d is a social impact organization building a digital tool for crowdsourcing hyperlocal environmental data, beginning with street-level documentation of flooding events in New York City. The platform could help residents with climate resilience and advocacy.
GrowthFactor helps retailers scale their footprint with a fractional real estate analyst while using an AI-powered platform to maximize their chance of commercial success.
Kyma uses AI-powered patient engagement to integrate data from wearables, smart scales, sensors, and continuous glucose monitors to track behaviors and draft physician-approved, timely reminders.
LNK Energies is solving the heavy-duty transport industry’s emissions problem with liquid organic hydrogen carriers (LOHCs): safe, room-temperature liquids compatible with existing diesel infrastructure.
Mendhai Health offers a suite of digital tools to help women improve pelvic health and rehabilitate before and after childbirth.
Nami has developed an automatic, reusable drinkware cleaning station that delivers a hot, soapy, pressurized wash in under 30 seconds.
Pancho helps restaurants improve margins with an AI-powered food procurement platform that uses real-time price comparison, dispute tracking, and smart ordering.
Qadence offers older adults a co-pilot that assesses mobility and fall risk, then delivers tailored guidance to improve balance, track progress, and extend recovery beyond the clinic.
Sensopore offers an at-home diagnostic device to help families test for everyday illnesses at home, get connected with a telehealth doctor, and have prescriptions shipped to their door, reducing clinical visits.
Spheric Bio has developed a personalized occlusion device to improve a common surgical procedure used to prevent strokes.
Tapestry uses conversational AI to chat with attendees before events and connect them with the right people for more meaningful conversations.
Torque automates financial analysis across private equity portfolios to help investment professionals make better strategic decisions.
Trazo helps interior designers and architects collaborate and iterate on technical drawings and 3D designs for new construction or remodeling projects.
DOE selects MIT to establish a Center for the Exascale Simulation of Coupled High-Enthalpy Fluid–Solid Interactions
The U.S. Department of Energy’s National Nuclear Security Administration (DOE/NNSA) recently announced that it has selected MIT to establish a new research center dedicated to advancing the predictive simulation of extreme environments, such as those encountered in hypersonic flight and atmospheric re-entry. The center will be part of the fourth phase of NNSA's Predictive Science Academic Alliance Program (PSAAP-IV), which supports frontier research advancing the predictive capabilities of high-performance computing for open science and engineering applications relevant to national security mission spaces.
The Center for the Exascale Simulation of Coupled High-Enthalpy Fluid–Solid Interactions (CHEFSI) — a joint effort of the MIT Center for Computational Science and Engineering, the MIT Schwarzman College of Computing, and the MIT Institute for Soldier Nanotechnologies (ISN) — plans to harness cutting-edge exascale supercomputers and next-generation algorithms to simulate with unprecedented detail how extremely hot, fast-moving gaseous and solid materials interact. The understanding of these extreme environments — characterized by temperatures of more than 1,500 degrees Celsius and speeds as high as Mach 25 — and their effect on vehicles is central to national security, space exploration, and the development of advanced thermal protection systems.
“CHEFSI will capitalize on MIT’s deep strengths in predictive modeling, high-performance computing, and STEM education to help ensure the United States remains at the forefront of scientific and technological innovation,” says Ian A. Waitz, MIT’s vice president for research. “The center’s particular relevance to national security and advanced technologies exemplifies MIT’s commitment to advancing research with broad societal benefit.”
CHEFSI is one of five new Predictive Simulation Centers announced by the NNSA as part of a program expected to provide up to $17.5 million to each center over five years.
CHEFSI’s research aims to couple detailed simulations of high-enthalpy gas flows with models of the chemical, thermal, and mechanical behavior of solid materials, capturing phenomena such as oxidation, nitridation, ablation, and fracture. Advanced computational models — validated by carefully designed experiments — can address the limitations of flight testing by providing critical insights into material performance and failure.
“By integrating high-fidelity physics models with artificial intelligence-based surrogate models, experimental validation, and state-of-the-art exascale computational tools, CHEFSI will help us understand and predict how thermal protection systems perform under some of the harshest conditions encountered in engineering systems,” says Raúl Radovitzky, the Jerome C. Hunsaker Professor of Aeronautics and Astronautics, associate director of the ISN, and director of CHEFSI. “This knowledge will help in the design of resilient systems for applications ranging from reusable spacecraft to hypersonic vehicles.”
Radovitzky will be joined on the center’s leadership team by Youssef Marzouk, the Breene M. Kerr (1951) Professor of Aeronautics and Astronautics, co-director of the MIT Center for Computational Science and Engineering (CCSE), and recently named the associate dean of the MIT Schwarzman College of Computing; and Nicolas Hadjiconstantinou, the Quentin Berg (1937) Professor of Mechanical Engineering and co-director of CCSE, who will serve as associate directors. The center’s co-principal investigators include MIT faculty members across the departments of Aeronautics and Astronautics, Electrical Engineering and Computer Science, Materials Science and Engineering, Mathematics, and Mechanical Engineering. Franklin Hadley will lead center operations, with administration and finance under the purview of Joshua Freedman. Hadley and Freedman are both members of the ISN headquarters team.
CHEFSI expects to collaborate extensively with the DOE/NNSA national laboratories — Lawrence Livermore National Laboratory, Los Alamos National Laboratory, and Sandia National Laboratories — and, in doing so, offer graduate students and postdocs immersive research experiences and internships at these facilities.
Ten years later, LIGO is a black-hole hunting machine
The following article is adapted from a press release issued by the Laser Interferometer Gravitational-wave Observatory (LIGO) Laboratory. LIGO is funded by the National Science Foundation and operated by Caltech and MIT, which conceived and built the project.
On Sept. 14, 2015, a signal arrived on Earth, carrying information about a pair of remote black holes that had spiraled together and merged. The signal had traveled about 1.3 billion years to reach us at the speed of light — but it was not made of light. It was a different kind of signal: a quivering of space-time called gravitational waves, first predicted by Albert Einstein 100 years prior. On that day 10 years ago, the twin detectors of the U.S. National Science Foundation Laser Interferometer Gravitational-wave Observatory (NSF LIGO) made the first-ever direct detection of gravitational waves, whispers in the cosmos that had gone unheard until that moment.
The historic discovery meant that researchers could now sense the universe through three different means. Light waves, such as X-rays, optical, radio, and other wavelengths of light, as well as high-energy particles called cosmic rays and neutrinos, had been captured before, but this was the first time anyone had witnessed a cosmic event through the gravitational warping of space-time. For this achievement, first dreamed up more than 40 years prior, three of the team’s founders won the 2017 Nobel Prize in Physics: MIT’s Rainer Weiss, professor emeritus of physics (who recently passed away at age 92); Caltech’s Barry Barish, the Ronald and Maxine Linde Professor of Physics, Emeritus; and Caltech’s Kip Thorne, the Richard P. Feynman Professor of Theoretical Physics, Emeritus.
Today, LIGO, which consists of detectors in both Hanford, Washington, and Livingston, Louisiana, routinely observes roughly one black hole merger every three days. LIGO now operates in coordination with two international partners, the Virgo gravitational-wave detector in Italy and KAGRA in Japan. Together, the gravitational-wave-hunting network, known as the LVK (LIGO, Virgo, KAGRA), has captured a total of about 300 black hole mergers, some of which are confirmed while others await further analysis. During the network’s current science run, the fourth since the first run in 2015, the LVK has discovered more than 200 candidate black hole mergers, more than double the number caught in the first three runs.
The dramatic rise in the number of LVK discoveries over the past decade is due to several improvements to the detectors — some of which involve cutting-edge quantum precision engineering. The LVK detectors remain by far the most precise measuring instruments ever created by humans. The space-time distortions induced by gravitational waves are incredibly minuscule. For instance, LIGO detects changes in space-time smaller than 1/10,000 the width of a proton. That’s 1/700 trillionth the width of a human hair.
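For a rough sense of that scale, the short Python sketch below redoes the comparison. The proton and hair widths used here are assumed, textbook-style reference values rather than figures from the article, and the last line converts the quoted displacement into a dimensionless strain over LIGO’s 4-kilometer arms.

```python
# Rough scale check using assumed reference values (not measured LIGO quantities).
proton_width_m = 1.7e-15       # assumed typical proton diameter, meters
hair_width_m = 1.2e-4          # assumed typical human-hair width (~0.12 mm), meters
arm_length_m = 4.0e3           # LIGO arm length: 4 kilometers

displacement_m = proton_width_m / 1e4           # "1/10,000 the width of a proton"
hair_fraction = displacement_m / hair_width_m   # fraction of a hair's width
strain = displacement_m / arm_length_m          # dimensionless strain, h = dL / L

print(f"displacement ~ {displacement_m:.1e} m")           # ~1.7e-19 m
print(f"fraction of a hair ~ 1/{1 / hair_fraction:.0e}")  # ~1/7e+14, about 1/700 trillion
print(f"strain ~ {strain:.0e}")                           # ~4e-23
```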
“Rai Weiss proposed the concept of LIGO in 1972, and I thought, ‘This doesn’t have much chance at all of working,’” recalls Thorne, an expert on the theory of black holes. “It took me three years of thinking about it on and off and discussing ideas with Rai and Vladimir Braginsky [a Russian physicist], to be convinced this had a significant possibility of success. The technical difficulty of reducing the unwanted noise that interferes with the desired signal was enormous. We had to invent a whole new technology. NSF was just superb at shepherding this project through technical reviews and hurdles.”
Nergis Mavalvala, the Curtis and Kathleen Marble Professor of Astrophysics at MIT and dean of the MIT School of Science, says that the challenges the team overcame to make the first discovery are still very much at play. “From the exquisite precision of the LIGO detectors to the astrophysical theories of gravitational-wave sources, to the complex data analyses, all these hurdles had to be overcome, and we continue to improve in all of these areas,” Mavalvala says. “As the detectors get better, we hunger for farther, fainter sources. LIGO continues to be a technological marvel.”
The clearest signal yet
LIGO’s improved sensitivity is exemplified in a recent discovery of a black hole merger referred to as GW250114. (The numbers denote the date the gravitational-wave signal arrived at Earth: January 14, 2025.) The event was not that different from LIGO’s first-ever detection (called GW150914) — both involve colliding black holes about 1.3 billion light-years away with masses between 30 and 40 times that of our sun. But thanks to 10 years of technological advances reducing instrumental noise, the GW250114 signal is dramatically clearer.
“We can hear it loud and clear, and that lets us test the fundamental laws of physics,” says LIGO team member Katerina Chatziioannou, Caltech assistant professor of physics and William H. Hurt Scholar, and one of the authors of a new study on GW250114 published in Physical Review Letters.
By analyzing the frequencies of gravitational waves emitted by the merger, the LVK team provided the best observational evidence captured to date for what is known as the black hole area theorem, an idea put forth by Stephen Hawking in 1971 that says the total surface area of black holes cannot decrease. When black holes merge, their masses combine, increasing the surface area. But they also lose energy in the form of gravitational waves. Additionally, the merger can spin up the combined black hole, which reduces its area. The black hole area theorem states that despite these competing factors, the total surface area must grow.
Later, Hawking and physicist Jacob Bekenstein concluded that a black hole’s area is proportional to its entropy, or degree of disorder. The findings paved the way for later groundbreaking work in the field of quantum gravity, which attempts to unite two pillars of modern physics: general relativity and quantum physics.
In essence, the LIGO detection allowed the team to “hear” two black holes growing as they merged into one, verifying Hawking’s theorem. (Virgo and KAGRA were offline during this particular observation.) The initial black holes had a total surface area of 240,000 square kilometers (roughly the size of Oregon), while the final area was about 400,000 square kilometers (roughly the size of California) — a clear increase. This is the second test of the black hole area theorem; an initial test was performed in 2021 using data from the first GW150914 signal, but because that data were not as clean, the results had a confidence level of 95 percent compared to 99.999 percent for the new data.
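To see how such areas follow from a black hole’s mass and spin, here is a minimal sketch using the standard Kerr horizon-area formula, A = 8*pi*(G*M/c^2)^2 * (1 + sqrt(1 - chi^2)), where chi is the dimensionless spin. The masses and spin below are assumed, GW150914-like illustrative values, not the measured GW250114 parameters.

```python
import math

G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def kerr_horizon_area_km2(mass_solar, chi=0.0):
    """Horizon area of a Kerr black hole: A = 8*pi*(G*M/c^2)^2 * (1 + sqrt(1 - chi^2))."""
    m_geom = G * mass_solar * M_SUN / C**2   # gravitational radius GM/c^2, in meters
    area_m2 = 8 * math.pi * m_geom**2 * (1 + math.sqrt(1 - chi**2))
    return area_m2 / 1e6                     # convert m^2 to km^2

# Assumed, GW150914-like parameters (illustration only):
area_before = kerr_horizon_area_km2(36) + kerr_horizon_area_km2(29)  # two non-spinning holes
area_after = kerr_horizon_area_km2(62, chi=0.67)                     # one spinning remnant
print(f"before: {area_before:,.0f} km^2  after: {area_after:,.0f} km^2")
# The combined horizon area grows, as the area theorem requires.
```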
Thorne recalls Hawking phoning him to ask whether LIGO might be able to test his theorem immediately after he learned of the 2015 gravitational-wave detection. Hawking died in 2018 and sadly did not live to see his theory observationally verified. “If Hawking were alive, he would have reveled in seeing the area of the merged black holes increase,” Thorne says.
The trickiest part of this type of analysis had to do with determining the final surface area of the merged black hole. The surface areas of pre-merger black holes can be more readily gleaned as the pair spiral together, roiling space-time and producing gravitational waves. But after the black holes coalesce, the signal is not as clear-cut. During this so-called ringdown phase, the final black hole vibrates like a struck bell.
In the new study, the researchers precisely measured the details of the ringdown phase, which allowed them to calculate the mass and spin of the black hole and, subsequently, determine its surface area. More specifically, they were able, for the first time, to confidently pick out two distinct gravitational-wave modes in the ringdown phase. The modes are like characteristic sounds a bell would make when struck; they have somewhat similar frequencies but die out at different rates, which makes them hard to identify. The improved data for GW250114 meant that the team could extract the modes, demonstrating that the black hole’s ringdown occurred exactly as predicted by math models based on the Teukolsky formalism — devised in 1972 by Saul Teukolsky, now a professor at Caltech and Cornell University.
Another study from the LVK, submitted to Physical Review Letters today, places limits on a predicted third, higher-pitched tone in the GW250114 signal, and performs some of the most stringent tests yet of general relativity’s accuracy in describing merging black holes.
“A decade of improvements allowed us to make this exquisite measurement,” Chatziioannou says. “It took both of our detectors, in Washington and Louisiana, to do this. I don’t know what will happen in 10 more years, but in the first 10 years, we have made tremendous improvements to LIGO’s sensitivity. This not only means we are accelerating the rate at which we discover new black holes, but we are also capturing detailed data that expand the scope of what we know about the fundamental properties of black holes.”
Jenne Driggers, detection lead senior scientist at LIGO Hanford, adds, “It takes a global village to achieve our scientific goals. From our exquisite instruments, to calibrating the data very precisely, vetting and providing assurances about the fidelity of the data quality, searching the data for astrophysical signals, and packaging all that into something that telescopes can read and act upon quickly, there are a lot of specialized tasks that come together to make LIGO the great success that it is.”
Pushing the limits
LIGO and Virgo have also unveiled neutron stars over the past decade. Like black holes, neutron stars form from the explosive deaths of massive stars, but they weigh less and glow with light. Of note, in August 2017, LIGO and Virgo witnessed an epic collision between a pair of neutron stars that set off a kilonova, sending gold and other heavy elements flying into space and drawing the gaze of dozens of telescopes around the world, which captured light ranging from high-energy gamma rays to low-energy radio waves. The “multi-messenger” astronomy event marked the first time that both light and gravitational waves had been captured in a single cosmic event. Today, the LVK continues to alert the astronomical community to potential neutron star collisions, and astronomers then use telescopes to search the skies for signs of kilonovae.
“The LVK has made big strides in recent years to make sure we’re getting high-quality data and alerts out to the public in under a minute, so that astronomers can look for multi-messenger signatures from our gravitational-wave candidates,” Driggers says.
“The global LVK network is essential to gravitational-wave astronomy,” says Gianluca Gemme, Virgo spokesperson and director of research at the National Institute of Nuclear Physics in Italy. “With three or more detectors operating in unison, we can pinpoint cosmic events with greater accuracy, extract richer astrophysical information, and enable rapid alerts for multi-messenger follow-up. Virgo is proud to contribute to this worldwide scientific endeavor.”
Other LVK scientific discoveries include the first detection of collisions between one neutron star and one black hole; asymmetrical mergers, in which one black hole is significantly more massive than its partner black hole; the discovery of the lightest black holes known, challenging the idea that there is a “mass gap” between neutron stars and black holes; and the most massive black hole merger seen yet, with a merged mass of 225 solar masses. For reference, the previous record holder for the most massive merger had a combined mass of 140 solar masses.
Even in the decades before LIGO began taking data, scientists were building foundations that made the field of gravitational-wave science possible. Breakthroughs in computer simulations of black hole mergers, for example, allow the team to extract and analyze the feeble gravitational-wave signals generated across the universe.
LIGO’s technological achievements, beginning as far back as the 1980s, include several far-reaching innovations, such as a new way to stabilize lasers using the so-called Pound–Drever–Hall technique. Invented in 1983 and named for contributing physicists Robert Vivian Pound, the late Ronald Drever of Caltech (a founder of LIGO), and John Lewis Hall, this technique is widely used today in other fields, such as the development of atomic clocks and quantum computers. Other innovations include cutting-edge mirror coatings that almost perfectly reflect laser light; “quantum squeezing” tools that enable LIGO to surpass sensitivity limits imposed by quantum physics; and new artificial intelligence methods that could further hush certain types of unwanted noise.
“What we are ultimately doing inside LIGO is protecting quantum information and making sure it doesn’t get destroyed by external factors,” Mavalvala says. “The techniques we are developing are pillars of quantum engineering and have applications across a broad range of devices, such as quantum computers and quantum sensors.”
In the coming years, the scientists and engineers of LVK hope to further fine-tune their machines, expanding their reach deeper and deeper into space. They also plan to use the knowledge they have gained to build another gravitational-wave detector, LIGO India. Having a third LIGO observatory would greatly improve the precision with which the LVK network can localize gravitational-wave sources.
Looking farther into the future, the team is working on a concept for an even larger detector, called Cosmic Explorer, which would have arms 40 kilometers long. (The twin LIGO observatories have 4-kilometer arms.) A European project, called Einstein Telescope, also has plans to build one or two huge underground interferometers with arms more than 10 kilometers long. Observatories on this scale would allow scientists to hear the earliest black hole mergers in the universe.
“Just 10 short years ago, LIGO opened our eyes for the first time to gravitational waves and changed the way humanity sees the cosmos,” says Aamir Ali, a program director in the NSF Division of Physics, which has supported LIGO since its inception. “There’s a whole universe to explore through this completely new lens and these latest discoveries show LIGO is just getting started.”
The LIGO-Virgo-KAGRA Collaboration
LIGO is funded by the U.S. National Science Foundation and operated by Caltech and MIT, which together conceived and built the project. Financial support for the Advanced LIGO project was led by NSF with Germany (Max Planck Society), the United Kingdom (Science and Technology Facilities Council), and Australia (Australian Research Council) making significant commitments and contributions to the project. More than 1,600 scientists from around the world participate in the effort through the LIGO Scientific Collaboration, which includes the GEO Collaboration. Additional partners are listed at my.ligo.org/census.php.
The Virgo Collaboration is currently composed of approximately 1,000 members from 175 institutions in 20 different (mainly European) countries. The European Gravitational Observatory (EGO) hosts the Virgo detector near Pisa, Italy, and is funded by the French National Center for Scientific Research, the National Institute of Nuclear Physics in Italy, the National Institute of Subatomic Physics in the Netherlands, The Research Foundation – Flanders, and the Belgian Fund for Scientific Research. A list of the Virgo Collaboration groups can be found on the project website.
KAGRA is a laser interferometer with 3-kilometer-long arms located in Kamioka, Gifu, Japan. The host institute is the Institute for Cosmic Ray Research of the University of Tokyo, and the project is co-hosted by the National Astronomical Observatory of Japan and the High Energy Accelerator Research Organization. The KAGRA collaboration is composed of more than 400 members from 128 institutes in 17 countries/regions. KAGRA’s information for general audiences is at the website gwcenter.icrr.u-tokyo.ac.jp/en/. Resources for researchers are accessible at gwwiki.icrr.u-tokyo.ac.jp/JGWwiki/KAGRA.
Study explains how a rare gene variant contributes to Alzheimer’s disease
A new study from MIT neuroscientists reveals how rare variants of a gene called ABCA7 may contribute to the development of Alzheimer’s in some of the people who carry it.
Dysfunctional versions of the ABCA7 gene, which are found in a very small proportion of the population, contribute strongly to Alzheimer’s risk. In the new study, the researchers discovered that these mutations can disrupt the metabolism of lipids that play an important role in cell membranes.
This disruption makes neurons hyperexcitable and leads them into a stressed state that can damage DNA and other cellular components. These effects, the researchers found, could be reversed by treating neurons with choline, an important building block needed to make cell membranes.
“We found pretty strikingly that when we treated these cells with choline, a lot of the transcriptional defects were reversed. We also found that the hyperexcitability phenotype and elevated amyloid beta peptides that we observed in neurons that lost ABCA7 was reduced after treatment,” says Djuna von Maydell, an MIT graduate student and the lead author of the study.
Li-Huei Tsai, director of MIT’s Picower Institute for Learning and Memory and the Picower Professor in the MIT Department of Brain and Cognitive Sciences, is the senior author of the paper, which appears today in Nature.
Membrane dysfunction
Genomic studies of Alzheimer’s patients have found that people who carry variants of ABCA7 that generate reduced levels of functional ABCA7 protein have about double the odds of developing Alzheimer’s as people who don’t have those variants.
ABCA7 encodes a protein that transports lipids across cell membranes. Lipid metabolism is also the primary target of a more common Alzheimer’s risk factor known as APOE4. In previous work, Tsai’s lab has shown that APOE4, which is found in about half of all Alzheimer’s patients, disrupts brain cells’ ability to metabolize lipids and respond to stress.
To explore how ABCA7 variants might contribute to Alzheimer’s risk, the researchers obtained tissue samples from the Religious Orders Study/Memory and Aging Project (ROSMAP), a longitudinal study that has tracked memory, motor, and other age-related changes in older people since 1994. Of about 1,200 samples in the dataset that had genetic information available, the researchers obtained 12 from people who carried a rare variant of ABCA7.
The researchers performed single-cell RNA sequencing of neurons from these ABCA7 carriers, allowing them to determine which other genes are affected when ABCA7 is missing. They found that the most significantly affected genes fell into three clusters related to lipid metabolism, DNA damage, and oxidative phosphorylation (the metabolic process that cells use to capture energy as ATP).
To investigate how those alterations could affect neuron function, the researchers introduced ABCA7 variants into neurons derived from induced pluripotent stem cells.
These cells showed many of the same gene expression changes as the cells from the patient samples, especially among genes linked to oxidative phosphorylation. Further experiments showed that the “safety valve” that normally lets mitochondria limit excess build-up of electrical charge was less active. This can lead to oxidative stress, a state that occurs when too many cell-damaging free radicals build up in tissues.
Using these engineered cells, the researchers also analyzed the effects of ABCA7 variants on lipid metabolism. Cells with the variants showed altered metabolism of a molecule called phosphatidylcholine, which could lead to membrane stiffness and may explain why the mitochondrial membranes of the cells were unable to function normally.
A boost in choline
Those findings raised the possibility that intervening in phosphatidylcholine metabolism might reverse some of the cellular effects of ABCA7 loss. To test that idea, the researchers treated neurons carrying ABCA7 mutations with a molecule called CDP-choline, a precursor of phosphatidylcholine.
As these cells began producing new phosphatidylcholine (both saturated and unsaturated forms), their mitochondrial membrane potentials also returned to normal, and their oxidative stress levels went down.
The researchers then used induced pluripotent stem cells to generate 3D tissue organoids made of neurons with the ABCA7 variant. These organoids developed higher levels of amyloid beta proteins, which form the plaques seen in the brains of Alzheimer’s patients. However, those levels returned to normal when the organoids were treated with CDP-choline. The treatment also reduced neurons’ hyperexcitability.
In a 2021 paper, Tsai’s lab found that CDP-choline treatment could also reverse many of the effects of another Alzheimer’s-linked gene variant, APOE4, in mice. She is now working with researchers at the University of Texas and MD Anderson Cancer Center on a clinical trial exploring how choline supplements affect people who carry the APOE4 gene.
Choline is naturally found in foods such as eggs, meat, fish, and some beans and nuts. Boosting choline intake with supplements may offer a way for many people to reduce their risk of Alzheimer’s disease, Tsai says.
“From APOE4 to ABCA7 loss of function, my lab demonstrates that disruption of lipid homeostasis leads to the development of Alzheimer’s-related pathology, and that restoring lipid homeostasis, such as through choline supplementation, can ameliorate these pathological phenotypes,” she says.
In addition to the rare variants of ABCA7 that the researchers studied in this paper, there is also a more common variant that is found at a frequency of about 18 percent in the population. This variant was thought to be harmless, but the MIT team showed that cells with this variant exhibited many of the same gene alterations in lipid metabolism that they found in cells with the rare ABCA7 variants.
“There’s more work to be done in this direction, but this suggests that ABCA7 dysfunction might play an important role in a much larger part of the population than just people who carry the rare variants,” von Maydell says.
The research was funded, in part, by the Cure Alzheimer’s Fund, the Freedom Together Foundation, the Carol and Gene Ludwig Family Foundation, James D. Cook, and the National Institutes of Health.
Lincoln Laboratory technologies win seven R&D 100 Awards for 2025
Seven technologies developed at MIT Lincoln Laboratory, either wholly or with collaborators, have earned 2025 R&D 100 Awards. This annual awards competition recognizes the year's most significant new technologies, products, and materials available in the marketplace or transitioned to use. An independent panel of technology experts and industry professionals selects the winners.
"Winning an R&D 100 Award is a recognition of the exceptional creativity and effort of our scientists and engineers. The awarded technologies reflect Lincoln Laboratory's mission to transform innovative ideas into real-world solutions for U.S. national security, industry, and society," says Melissa Choi, director of Lincoln Laboratory.
Lincoln Laboratory's winning technologies enhance national security in a range of ways, from securing satellite communication links and identifying nearby emitting devices to providing a layer of defense for U.S. Army vehicles and protecting service members from chemical threats. Other technologies are pushing frontiers in computing, enabling the 3D integration of chips and the close inspection of superconducting electronics. Industry is also benefiting from these developments — for example, by adopting an architecture that streamlines the development of laser communications terminals.
The online publication R&D World manages the awards program. Recipients span Fortune 500 companies, federally funded research institutions, academic and government labs, and small companies. Since 2010, Lincoln Laboratory has received 108 R&D 100 Awards.
Protecting lives
Tactical Optical Spherical Sensor for Interrogating Threats (TOSSIT) is a throwable, baseball-sized sensor that remotely detects hazardous vapors and aerosols. It is designed to alert soldiers, first responders, and law enforcement to the presence of chemical threats, like nerve and blister agents, industrial chemical accidents, or fentanyl dust. Users can simply toss, drone-drop, or launch TOSSIT into an area of concern. To detect specific chemicals, the sensor samples the air with a built-in fan and uses an internal camera to observe color changes on a removable dye card. If chemicals are present, TOSSIT alerts users wirelessly on an app or via audible, light-up, or vibrational alarms in the sensor.
"TOSSIT fills an unmet need for a chemical-vapor point sensor, one that senses the immediate environment around it, that can be kinetically deployed ahead of service personnel. It provides a low-cost sensing option for vapors and solid aerosol threats — think toxic dust particles — that would otherwise not be detectable by small deployed sensor systems,” says principal investigator Richard Kingsborough. TOSSIT has been tested extensively in the field and is currently being transferred to the military.
Wideband Selective Propagation Radar (WiSPR) is an advanced radar and communications system developed to protect U.S. Army armored vehicles. The system's active electronically scanned antenna array extends signal range at millimeter-wave frequencies, steering thousands of beams per second to detect incoming kinetic threats while enabling covert communications between vehicles. WiSPR is engineered to have a low probability of detection, helping U.S. Army units evade adversaries seeking to detect radio-frequency (RF) energy emitting from radars. The system is currently in production.
"Current global conflicts are highlighting the susceptibility of armored vehicles to adversary anti-tank weapons. By combining custom technologies and commercial off-the-shelf hardware, the Lincoln Laboratory team produced a WiSPR prototype as quickly and efficiently as possible," says program manager Christopher Serino, who oversaw WiSPR development with principal investigator David Conway.
Advancing computing
Bumpless Integration of Chiplets to AI-Optimized Fabric is an approach that enables the fabrication of next-generation 2D, 2.5D, and 3D integrated circuits. As data-processing demands increase, designers are exploring 3D stacked assemblies of small specialized chips (chiplets) to pack more power into devices. Tiny bumps of conductive material are used to electrically connect these stacks, but these microbumps cannot accommodate the extremely dense, massively interconnected components needed for future microcomputers. To address this issue, Lincoln Laboratory developed a technique eliminating microbumps. Key to this technique is a lithographically produced fabric allowing electrical bonding of chiplet stack layers. Researchers used an AI-driven decision-tree approach to optimize the design of this fabric. This bumpless feature can integrate hundreds of chiplets that perform like a single chip, improving data-processing speed and power efficiency, especially for high-performance AI applications.
"Our novel, bumpless, heterogeneous chiplet integration is a transformative approach addressing two semiconductor industry challenges: expanding chip yield and reducing cost and time to develop systems," says principal investigator Rabindra Das.
Quantum Diamond Magnetic Cryomicroscope is a breakthrough in magnetic field imaging for characterizing superconducting electronics, a promising frontier in high-performance computing. Unlike traditional techniques, this system delivers fast, wide-field, high-resolution imaging at the cryogenic temperatures required for superconducting devices. The instrument combines an optical microscopy system with a cryogenic sensor head containing a diamond engineered with nitrogen-vacancy centers — atomic-scale defects highly sensitive to magnetic fields. The cryomicroscope enables researchers to directly visualize trapped magnetic vortices that interfere with critical circuit components, helping to overcome a major obstacle to scaling superconducting electronics.
“The cryomicroscope gives us an unprecedented window into magnetic behavior in superconducting devices, accelerating progress toward next-generation computing technologies,” says Pauli Kehayias, joint principal investigator with Jennifer Schloss. The instrument is currently advancing superconducting electronics development at Lincoln Laboratory and is poised to impact materials science and quantum technology more broadly.
Enhancing communications
Lincoln Laboratory Radio Frequency Situational Awareness Model (LL RF-SAM) utilizes advances in AI to enhance U.S. service members' vigilance over the electromagnetic spectrum. The modern spectrum can be described as a swamp of mixed signals originating from civilian, military, or enemy sources. In near-real time, LL RF-SAM inspects these signals to disentangle and identify nearby waveforms and their originating devices. For example, LL RF-SAM can help a user identify a particular packet of energy as a drone transmission protocol and then classify whether that drone is part of a corpus of friendly or enemy drones.
"This type of enhanced context helps military operators make data-driven decisions. The future adoption of this technology will have profound impact across communications, signals intelligence, spectrum management, and wireless infrastructure security," says principal investigator Joey Botero.
Modular, Agile, Scalable Optical Terminal (MAScOT) is a laser communications (lasercom) terminal architecture that facilitates mission-enabling lasercom solutions adaptable to various space platforms and operating environments. Lasercom is rapidly becoming the go-to technology for space-to-space links in low Earth orbit because of its ability to support significantly higher data rates compared to radio frequency terminals. However, it has yet to be used operationally or commercially for longer-range space-to-ground links, as such systems often require custom designs for specific missions. MAScOT's modular, agile, and scalable design streamlines the process for building lasercom terminals suitable for a range of missions, from near Earth to deep space. MAScOT made its debut on the International Space Station in 2023 to demonstrate NASA's first two-way lasercom relay system, and is now being prepared to serve in an operational capacity on Artemis II, NASA's moon flyby mission scheduled for 2026. Two industry-built terminals have adopted the MAScOT architecture, and technology transfer to additional industry partners is ongoing.
"MAScOT is the latest lasercom terminal designed by Lincoln Laboratory engineers following decades of pioneering lasercom work with NASA, and it is poised to support lasercom for decades to come," says Bryan Robinson, who co-led MAScOT development with Tina Shih.
Protected Anti-jam Tactical SATCOM (PATS) Key Management System (KMS) Prototype addresses the critical challenge of securely distributing cryptographic keys for military satellite communications (SATCOM) during terminal jamming, compromise, or disconnection. Realizing the U.S. Space Systems Command's vision for resilient, protected tactical SATCOM, the PATS KMS Prototype leverages innovative, bandwidth-efficient protocols and algorithms to enable real-time, scalable key distribution over wireless links, even under attack, so that warfighters can communicate securely in contested environments. PATS KMS is now being adopted as the core of the Department of Defense's next-generation SATCOM architecture.
"PATS KMS is not just a technology — it's a linchpin enabler of resilient, modern SATCOM, built for the realities of today's contested battlefield. We worked hand-in-hand with government stakeholders, operational users, and industry partners across a multiyear, multiphase journey to bring this capability to life," says Joseph Sobchuk, co-principal investigator with Nancy List. The R&D 100 Award is shared with the U.S. Space Force Space Systems Command, whose “visionary leadership has been instrumental in shaping the future of protected tactical SATCOM,” Sobchuk adds.
Study finds cell memory can be more like a dimmer dial than an on/off switch
When cells are healthy, we don’t expect them to suddenly change cell types. A skin cell on your hand won’t naturally morph into a brain cell, and vice versa. That’s thanks to epigenetic memory, which enables the expression of various genes to “lock in” throughout a cell’s lifetime. Failure of this memory can lead to diseases, such as cancer.
Traditionally, scientists have thought that epigenetic memory locks genes either “on” or “off” — either fully activated or fully repressed, like a permanent Lite-Brite pattern. But MIT engineers have found that the picture has many more shades.
In a new study appearing today in Cell Genomics, the team reports that a cell’s memory is set not by on/off switching but through a more graded, dimmer-like dial of gene expression.
The researchers carried out experiments in which they set the expression of a single gene at different levels in different cells. While conventional wisdom would assume the gene should eventually switch on or off, the researchers found that the gene’s original expression persisted: Cells whose gene expression was set along a spectrum between on and off remained in this in-between state.
The results suggest that epigenetic memory — the process by which cells retain gene expression and “remember” their identity — is not binary but instead analog, which allows for a spectrum of gene expression and associated cell identities.
“Our finding opens the possibility that cells commit to their final identity by locking genes at specific levels of gene expression instead of just on and off,” says study author Domitilla Del Vecchio, professor of mechanical and biological engineering at MIT. “The consequence is that there may be many more cell types in our body than we know and recognize today, that may have important functions and could underlie healthy or diseased states.”
The study’s MIT lead authors are Sebastian Palacios and Simone Bruno, with additional co-authors.
Beyond binary
Every cell shares the same genome, which can be thought of as the starting ingredient for life. As a cell takes shape, it differentiates into one type or another, through the expression of genes in its genome. Some genes are activated, while others are repressed. The combination steers a cell toward one identity versus another.
A process called DNA methylation, in which certain molecules attach to a gene’s DNA, helps lock its expression in place. DNA methylation helps a cell “remember” its unique pattern of gene expression, which ultimately establishes the cell’s identity.
Del Vecchio’s group at MIT applies mathematics and genetic engineering to understand cellular molecular processes and to engineer cells with new capabilities. In previous work, her group was experimenting with DNA methylation and ways to lock the expression of certain genes in ovarian cells.
“The textbook understanding was that DNA methylation had a role to lock genes in either an on or off state,” Del Vecchio says. “We thought this was the dogma. But then we started seeing results that were not consistent with that.”
While many of the cells in their experiment exhibited an all-or-nothing expression of genes, a significant number of cells appeared to freeze genes in an in-between state — neither entirely on nor off.
“We found there was a spectrum of cells that expressed any level between on and off,” Palacios says. “And we thought, how is this possible?”
Shades of blue
In their new study, the team aimed to see whether the in-between gene expression they observed was a fluke or a more established property of cells that until now has gone unnoticed.
“It could be that scientists disregarded cells that don’t have a clear commitment, because they assumed this was a transient state,” Del Vecchio says. “But actually these in-between cell types may be permanent states that could have important functions.”
To test their idea, the researchers ran experiments with hamster ovarian cells — a line of cells commonly used in the laboratory. In each cell, an engineered gene was initially set to a different level of expression. The gene was turned fully on in some cells, completely off in others, and set somewhere in between on and off for the remaining cells.
The team paired the engineered gene with a fluorescent marker that lights up with a brightness corresponding to the gene’s level of expression. The researchers introduced, for a short time, an enzyme that triggers the gene’s DNA methylation, a natural gene-locking mechanism. They then monitored the cells over five months to see whether the modification would lock the genes in place at their in-between expression levels, or whether the genes would migrate toward fully on or off states before locking in.
“Our fluorescent marker is blue, and we see cells glow across the entire spectrum, from really shiny blue, to dimmer and dimmer, to no blue at all,” Del Vecchio says. “Every intensity level is maintained over time, which means gene expression is graded, or analog, and not binary. We were very surprised, because we thought after such a long time, the gene would veer off, to be either fully on or off, but it did not.”
The findings open new avenues into engineering more complex artificial tissues and organs by tuning the expression of certain genes in a cell’s genome, like a dial on a radio, rather than a switch. The results also complicate the picture of how a cell’s epigenetic memory works to establish its identity, and they open up the possibility that cell modifications, such as those exhibited in therapy-resistant tumors, could be treated in a more precise fashion.
“Del Vecchio and colleagues have beautifully shown how analog memory arises through chemical modifications to the DNA itself,” says Michael Elowitz, professor of biology and biological engineering at the California Institute of Technology, who was not involved in the study. “As a result, we can now imagine repurposing this natural analog memory mechanism, invented by evolution, in the field of synthetic biology, where it could help allow us to program permanent and precise multicellular behaviors.”
“One of the things that enables the complexity in humans is epigenetic memory,” Palacios says. “And we find that it is not what we thought. For me, that’s actually mind-blowing. And I think we’re going to find that this analog memory is relevant for many different processes across biology.”
This research was supported, in part, by the National Science Foundation, MODULUS, and a Vannevar Bush Faculty Fellowship through the U.S. Office of Naval Research.
“Bottlebrush” particles deliver big chemotherapy payloads directly to cancer cells
Using tiny particles shaped like bottlebrushes, MIT chemists have found a way to deliver a large range of chemotherapy drugs directly to tumor cells.
To guide them to the right location, each particle contains an antibody that targets a specific tumor protein. This antibody is tethered to bottlebrush-shaped polymer chains carrying dozens or hundreds of drug molecules — a much larger payload than can be delivered by any existing antibody-drug conjugates.
In mouse models of breast and ovarian cancer, the researchers found that treatment with these conjugated particles could eliminate most tumors. In the future, the particles could be modified to target other types of cancer, by swapping in different antibodies.
“We are excited about the potential to open up a new landscape of payloads and payload combinations with this technology, that could ultimately provide more effective therapies for cancer patients,” says Jeremiah Johnson, the A. Thomas Geurtin Professor of Chemistry at MIT, a member of the Koch Institute for Integrative Cancer Research, and the senior author of the new study.
MIT postdoc Bin Liu is the lead author of the paper, which appears today in Nature Biotechnology.
A bigger drug payload
Antibody-drug conjugates (ADCs) are a promising type of cancer treatment consisting of a cancer-targeting antibody attached to a chemotherapy drug. At least 15 ADCs have been approved by the FDA to treat several different types of cancer.
This approach allows specific targeting of a cancer drug to a tumor, which helps to prevent some of the side effects that occur when chemotherapy drugs are given intravenously. However, one drawback to currently approved ADCs is that only a handful of drug molecules can be attached to each antibody. That means they can only be used with very potent drugs — usually DNA-damaging agents or drugs that interfere with cell division.
To try to use a broader range of drugs, which are often less potent, Johnson and his colleagues decided to adapt bottlebrush particles that they had previously invented. These particles consist of a polymer backbone attached to tens to hundreds of “prodrug” molecules — inactive drug molecules that are activated upon release within the body. This structure allows the particles to deliver a wide range of drug molecules, and the particles can be designed to carry multiple drugs in specific ratios.
Using a technique called click chemistry, the researchers showed that they could attach one, two, or three of their bottlebrush polymers to a single tumor-targeting antibody, creating an antibody-bottlebrush conjugate (ABC). This means that just one antibody can carry hundreds of prodrug molecules. The currently approved ADCs can carry a maximum of about eight drug molecules.
The huge drug payload of the ABC particles allows the researchers to incorporate less potent cancer drugs such as doxorubicin or paclitaxel, which enhances the customizability of the particles and the variety of drug combinations that can be used.
“We can use antibody-bottlebrush conjugates to increase the drug loading, and in that case, we can use less potent drugs,” Liu says. “In the future, we can very easily copolymerize with multiple drugs together to achieve combination therapy.”
The prodrug molecules are attached to the polymer backbone by cleavable linkers. After the particles reach a tumor site, some of these linkers are broken right away, allowing the drugs to kill nearby cancer cells even if they don’t express the target protein. Other particles are absorbed into cells displaying the target protein before releasing their toxic payload.
Effective treatment
For this study, the researchers created ABC particles carrying a few different types of drugs: microtubule inhibitors called MMAE and paclitaxel, and two DNA-damaging agents, doxorubicin and SN-38. They also designed ABC particles carrying an experimental type of drug known as PROTAC (proteolysis-targeting chimera), which can selectively degrade disease-causing proteins inside cells.
Each bottlebrush was tethered to an antibody targeting either HER2, a protein often overexpressed in breast cancer, or MUC1, which is commonly found in ovarian, lung, and other types of cancer.
The researchers tested each of the ABCs in mouse models of breast or ovarian cancer and found that in most cases, the ABC particles were able to eradicate the tumors. This treatment was significantly more effective than giving the same bottlebrush prodrugs by injection, without being conjugated to a targeting antibody.
“We used a very low dose, almost 100 times lower compared to the traditional small-molecule drug, and the ABC still can achieve much better efficacy compared to the small-molecule drug given on its own,” Liu says.
These ABCs also performed better than two FDA-approved ADCs, T-DXd and T-DM1, which both target HER2-expressing cells. T-DXd carries deruxtecan, which interferes with DNA replication, and T-DM1 carries emtansine, a microtubule inhibitor.
In future work, the MIT team plans to try delivering combinations of drugs that work by different mechanisms, which could enhance their overall effectiveness. Among these could be immunotherapy drugs such as STING activators.
The researchers are also working on swapping in different antibodies, such as antibodies targeting EGFR, which is widely expressed in many tumors. More than 100 antibodies have been approved to treat cancer and other diseases, and in theory any of those could be conjugated to cancer drugs to create a targeted therapy.
The research was funded in part by the National Institutes of Health, the Ludwig Center at MIT, and the Koch Institute Frontier Research Program.
Remembering David Baltimore, influential biologist and founding director of the Whitehead Institute
The Whitehead Institute for Biomedical Research fondly remembers its founding director, David Baltimore, a former MIT Institute Professor and Nobel laureate who died Sept. 6 at age 87.
With discovery after discovery, Baltimore brought to light key features of biology with direct implications for human health. His work at MIT earned him a share of the 1975 Nobel Prize in Physiology or Medicine (along with Howard Temin and Renato Dulbecco) for discovering reverse transcriptase and identifying retroviruses, which use RNA to synthesize viral DNA.
Following the award, Baltimore reoriented his laboratory’s focus to pursue a mix of immunology and virology. Among the lab’s most significant subsequent discoveries were the identification of a pair of proteins that play an essential role in enabling the immune system to create antibodies for so many different molecules, and investigations into how certain viruses can cause cell transformation and cancer. Work from Baltimore’s lab also helped lead to the development of the important cancer drug Gleevec — the first small molecule to target an oncoprotein inside of cells.
In 1982, Baltimore partnered with philanthropist Edwin C. “Jack” Whitehead to conceive and launch the Whitehead Institute and then served as its founding director until 1990. Within a decade of its founding, the Baltimore-led Whitehead Institute was named the world’s top research institution in molecular biology and genetics.
“More than 40 years later, Whitehead Institute is thriving, still guided by the strategic vision that David Baltimore and Jack Whitehead articulated,” says Phillip Sharp, MIT Institute Professor Emeritus, former Whitehead board member, and fellow Nobel laureate. “Of all David’s myriad and significant contributions to science, his role in building the first independent biomedical research institute associated with MIT and guiding it to extraordinary success may well prove to have had the broadest and longest-term impact.”
Ruth Lehmann, director and president of the Whitehead Institute, and professor of biology at MIT, says: “I, like many others, owe my career to David Baltimore. He recruited me to Whitehead Institute and MIT in 1988 as a faculty member, taking a risk on an unproven, freshly-minted PhD graduate from Germany. As director, David was incredibly skilled at bringing together talented scientists at different stages of their careers and facilitating their collaboration so that the whole would be greater than the sum of its parts. This approach remains a core strength of Whitehead Institute.”
As part of the Whitehead Institute’s mission to cultivate the next generation of scientific leaders, Baltimore founded the Whitehead Fellows program, which provides extraordinarily talented recent PhD and MD graduates with the opportunity to launch their own labs, rather than to go into traditional postdoctoral positions. The program has been a huge success, with former fellows going on to excel as leaders in research, education, and industry.
David Page, MIT professor of biology, Whitehead Institute member, and former director who was the Whitehead's first fellow, recalls, “David was both an amazing scientist and a peerless leader of aspiring scientists. The launching of the Whitehead Fellows program reflected his recipe for institutional success: gather up the resources to allow young scientists to realize their dreams, recruit with an eye toward potential for outsized impact, and quietly mentor and support without taking credit for others’ successes — all while treating junior colleagues as equals. It is a beautiful strategy that David designed and executed magnificently.”
Sally Kornbluth, president of MIT and a member of the Whitehead Institute Board of Directors, says that “David was a scientific hero for so many. He was one of those remarkable individuals who could make stellar scientific breakthroughs and lead major institutions with extreme thoughtfulness and grace. He will be missed by the whole scientific community.”
“David was a wise giant. He was brilliant. He was an extraordinarily effective, ethical leader and institution builder who influenced and inspired generations of scientists and premier institutions,” says Susan Whitehead, member of the board of directors and daughter of Jack Whitehead.
Gerald R. Fink, the Margaret and Herman Sokol Professor Emeritus at MIT who was recruited by Baltimore from Cornell University as one of four founding members of the Whitehead Institute, and who succeeded him as director in 1990, observes: “David became my hero and friend. He upheld the highest scientific ideals and instilled trust and admiration in all around him.”
Video: “David Baltimore - Infinite History” (MIT, 2010)
Baltimore was born in New York City in 1938. His scientific career began at Swarthmore College, where he earned a bachelor’s degree with high honors in chemistry in 1960. He then began doctoral studies in biophysics at MIT, but in 1961 shifted his focus to animal viruses and moved to what is now the Rockefeller University, where he did his thesis work in the lab of Richard Franklin.
After completing postdoctoral fellowships with James Darnell at MIT and Jerard Hurwitz at the Albert Einstein College of Medicine, Baltimore launched his own lab at the Salk Institute for Biological Studies from 1965 to 1968. Then, in 1968, he returned to MIT as a member of its biology faculty, where he remained until 1990. (Whitehead Institute’s members hold parallel appointments as faculty in the MIT Department of Biology.)
In 1990, Baltimore left the Whitehead Institute and MIT to become the president of Rockefeller University. He returned to MIT from 1994 to 1997, serving as an Institute Professor, after which he was named president of Caltech. Baltimore held that position until 2006, when he was elected to a three-year term as president of the American Association for the Advancement of Science.
For decades, Baltimore was viewed not just as a brilliant scientist and talented academic leader, but also as a wise counsel to the scientific community. For example, he helped organize the 1975 Asilomar Conference on Recombinant DNA, which created stringent safety guidelines for the study and use of recombinant DNA technology. He played a leadership role in the development of policies on AIDS research and treatment, and on genomic editing. Serving as an advisor to both organizations and individual scientists, he helped to shape the strategic direction of dozens of institutions and to advance the careers of generations of researchers. As founding member Robert Weinberg summarizes it, “He had no tolerance for nonsense and weak science.”
In 2023, the Whitehead Institute established the endowed David Baltimore Chair in Biomedical Research, honoring Baltimore’s six decades of scientific, academic, and policy leadership and his impact on advancing innovative basic biomedical research.
“David was a visionary leader in science and the institutions that sustain it. He devoted his career to advancing scientific knowledge and strengthening the communities that make discovery possible, and his leadership of Whitehead Institute exemplified this,” says Richard Young, MIT professor of biology and Whitehead Institute member. “David approached life with keen observation, boundless curiosity, and a gift for insight that made him both a brilliant scientist and a delightful companion. His commitment to mentoring and supporting young scientists left a lasting legacy, inspiring the next generation to pursue impactful contributions to biomedical research. Many of us found in him not only a mentor and role model, but also a steadfast friend whose presence enriched our lives and whose absence will be profoundly felt.”
Alzheimer’s erodes brain cells’ control of gene expression, undermining function, cognition
Most people recognize Alzheimer’s disease by its devastating symptoms, such as memory loss, while new drugs target hallmarks of its pathology, such as plaques of amyloid proteins. Now, a sweeping new open-access study in the Sept. 4 edition of Cell by MIT researchers shows the importance of understanding the disease as a battle over how well brain cells control the expression of their genes. The study paints a high-resolution picture of a struggle to maintain healthy gene expression and regulation, in which the consequences of failure or success are nothing less than the loss or preservation of cell function and cognition.
The study presents a first-of-its-kind, multimodal atlas of combined gene expression and gene regulation spanning 3.5 million cells from six brain regions, obtained by profiling 384 post-mortem brain samples from 111 donors. The researchers profiled both the “transcriptome,” showing which genes are expressed into RNA, and the “epigenome,” the set of chromosomal modifications that establishes which DNA regions are accessible, and thus used, in each cell type.
The resulting atlas reveals that the progression of Alzheimer’s is characterized by two major epigenomic trends. The first is that vulnerable cells in key brain regions suffer a breakdown of the rigorous nuclear “compartments” they normally maintain to ensure that some parts of the genome are open for expression while others remain locked away. The second is that susceptible cells experience a loss of “epigenomic information,” meaning they lose their grip on the unique pattern of gene regulation and expression that gives them their specific identity and enables their healthy function.
Accompanying the evidence of compromised compartmentalization and eroded epigenomic information are many specific findings pinpointing molecular circuitry that breaks down by cell type, brain region, and gene network. The researchers found, for instance, that deteriorating epigenomic control opens the door to expression of many disease-associated genes, whereas cells that keep their epigenomic house in order keep those genes in check. They also saw that where epigenomic breakdown occurred, people lost cognitive ability, but where epigenomic stability remained, so did cognition.
“To understand the circuitry, the logic responsible for gene expression changes in Alzheimer’s disease [AD], we needed to understand the regulation and upstream control of all the changes that are happening, and that’s where the epigenome comes in,” says senior author Manolis Kellis, a professor in the Computer Science and Artificial Intelligence Lab and head of MIT’s Computational Biology Group. “This is the first large-scale, single-cell, multi-region gene-regulatory atlas of AD, systematically dissecting the dynamics of epigenomic and transcriptomic programs across disease progression and resilience.”
By providing that detailed examination of the epigenomic mechanisms of Alzheimer’s progression, the study provides a blueprint for devising new Alzheimer’s treatments that can target factors underlying the broad erosion of epigenomic control or the specific manifestations that affect key cell types such as neurons and supporting glial cells.
“The key to developing new and more effective treatments for Alzheimer’s disease depends on deepening our understanding of the mechanisms that contribute to the breakdowns of cellular and network function in the brain,” says co-corresponding author Li-Huei Tsai, Picower Professor, director of The Picower Institute for Learning and Memory, and, along with Kellis, a founding member of MIT’s Aging Brain Initiative. “This new data advances our understanding of how epigenomic factors drive disease.”
Kellis Lab members Zunpeng Liu and Shanshan Zhang are the study’s co-lead authors.
Compromised compartments and eroded information
Among the post-mortem brain samples in the study, 57 came from donors to the Religious Orders Study or the Rush Memory and Aging Project (collectively known as “ROSMAP”) who did not have AD pathology or symptoms, while 33 came from donors with early-stage pathology and 21 came from donors at a late stage. The samples therefore provided rich information about the symptoms and pathology each donor was experiencing before death.
In the new study, Liu and Zhang combined analyses of single-cell RNA sequencing of the samples, which measures which genes are being expressed in each cell, and ATAC-seq, which measures whether chromosomal regions are accessible for gene expression. Considered together, these transcriptomic and epigenomic measures enabled the researchers to understand the molecular details of how gene expression is regulated across seven broad classes of brain cells (e.g., neurons and glial cell types) and 67 finer cell subtypes (e.g., 17 kinds of excitatory neurons or six kinds of inhibitory ones).
The researchers annotated more than 1 million gene-regulatory control regions that different cells employ, via epigenomic marking, to establish their specific identities and functions. Then, by comparing cells from Alzheimer’s brains to cells from unaffected brains, and accounting for stage of pathology and cognitive symptoms, they could draw rigorous associations between the erosion of these epigenomic markings and, ultimately, loss of function.
For instance, they saw that among people who advanced to late-stage AD, normally repressive compartments opened up for more expression, while compartments that are normally more open in health became more repressed. Worryingly, when brain cells’ normally repressive compartments opened up, those cells became more afflicted by disease.
“For Alzheimer’s patients, repressive compartments opened up, and gene expression levels increased, which was associated with decreased cognitive function,” explains Liu.
But when cells managed to keep their compartments in order such that they expressed the genes they were supposed to, people remained cognitively intact.
Meanwhile, based on the cells’ expression of their regulatory elements, the researchers created an epigenomic information score for each cell. Generally, information declined as pathology progressed, but that was particularly notable among cells in the two brain regions affected earliest in Alzheimer’s: the entorhinal cortex and the hippocampus. The analyses also highlighted specific cell types that were especially vulnerable, including microglia, which play immune and other roles; oligodendrocytes, which produce myelin insulation for neurons; and particular kinds of excitatory neurons.
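The article does not spell out how the per-cell score is computed, so the sketch below is only one plausible formulation, assuming a cell-by-peak chromatin-accessibility matrix (for example, binarized ATAC-seq counts) and a cell-type label for each cell: each cell is scored by how closely its accessibility profile matches the average profile of its annotated cell type, so cells drifting away from their type-specific program receive lower scores.

```python
import numpy as np

def epigenomic_information_scores(accessibility, cell_types):
    """Illustrative per-cell score: cosine similarity between a cell's
    chromatin-accessibility profile and the mean profile of its cell type.

    accessibility : (n_cells, n_peaks) array, e.g., binarized ATAC-seq counts
    cell_types    : length-n_cells array of cell-type labels
    """
    accessibility = np.asarray(accessibility, dtype=float)
    cell_types = np.asarray(cell_types)
    scores = np.zeros(len(cell_types))
    for ct in np.unique(cell_types):
        mask = cell_types == ct
        reference = accessibility[mask].mean(axis=0)           # cell-type reference profile
        norms = (np.linalg.norm(accessibility[mask], axis=1)
                 * np.linalg.norm(reference) + 1e-12)          # avoid division by zero
        scores[mask] = accessibility[mask] @ reference / norms
    return scores  # lower values suggest erosion of the cell-type-specific program
```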
Risk genes and “chromatin guardians”
Detailed analyses in the paper highlighted how epigenomic regulation tracked with disease-related problems, Liu notes. The e4 variant of the APOE gene, for instance, is widely understood to be the single biggest genetic risk factor for Alzheimer’s. In APOE4 brains, microglia initially responded to the emerging disease pathology with an increase in their epigenomic information, suggesting that they were stepping up to their unique responsibility to fight off disease. But as the disease progressed, the cells exhibited a sharp drop off in information, a sign of deterioration and degeneration. This turnabout was strongest in people who had two copies of APOE4, rather than just one. The findings, Kellis said, suggest that APOE4 might destabilize the genome of microglia, causing them to burn out.
Another example is the fate of neurons expressing the gene RELN and its protein Reelin. Prior studies, including by Kellis and Tsai, have shown that RELN-expressing neurons in the entorhinal cortex and hippocampus are especially vulnerable in Alzheimer’s, but promote resilience if they survive. The new study sheds light on their fate by demonstrating that they exhibit early and severe epigenomic information loss as disease advances, but that in people who remained cognitively resilient, these neurons maintained epigenomic information.
In yet another example, the researchers tracked what they colloquially call “chromatin guardians” because their expression sustains and regulates cells’ epigenomic programs. For instance, cells with greater epigenomic erosion and advanced AD progression displayed increased chromatin accessibility in areas that were supposed to be locked down by Polycomb repression genes or other gene expression silencers. While resilient cells expressed genes promoting neural connectivity, epigenomically eroded cells expressed genes linked to inflammation and oxidative stress.
“The message is clear: Alzheimer’s is not only about plaques and tangles, but about the erosion of nuclear order itself,” Kellis says. “Cognitive decline emerges when chromatin guardians lose ground to the forces of erosion, switching from resilience to vulnerability at the most fundamental level of genome regulation.
“And when our brain cells lose their epigenomic memory marks and epigenomic information at the lowest level, deep inside our neurons and microglia, it seems that Alzheimer’s patients also lose their memory and cognition at the highest level.”
Other authors of the paper are Benjamin T. James, Kyriaki Galani, Riley J. Mangan, Stuart Benjamin Fass, Chuqian Liang, Manoj M. Wagle, Carles A. Boix, Yosuke Tanigawa, Sukwon Yun, Yena Sung, Xushen Xiong, Na Sun, Lei Hou, Martin Wohlwend, Mufan Qiu, Xikun Han, Lei Xiong, Efthalia Preka, Lei Huang, William F. Li, Li-Lun Ho, Amy Grayson, Julio Mantero, Alexey Kozlenkov, Hansruedi Mathys, Tianlong Chen, Stella Dracheva, and David A. Bennett.
Funding for the research came from the National Institutes of Health, the National Science Foundation, the Cure Alzheimer’s Fund, the Freedom Together Foundation, the Robert A. and Renee E. Belfer Family Foundation, Eduardo Eurnekian, and Joseph P. DiSabato.
Physicists devise an idea for lasers that shoot beams of neutrinos
At any given moment, trillions of particles called neutrinos are streaming through our bodies and every material in our surroundings, without noticeable effect. Far lighter than even electrons, these ghostly entities are the most abundant particles with mass in the universe.
The exact mass of a neutrino is a big unknown. The particle is so small, and interacts so rarely with matter, that it is incredibly difficult to measure. Scientists attempt to do so by harnessing nuclear reactors and massive particle accelerators to generate unstable atoms, which then decay into various byproducts including neutrinos. In this way, physicists can manufacture beams of neutrinos that they can probe for properties including the particle’s mass.
Now MIT physicists propose a much more compact and efficient way to generate neutrinos that could be realized in a tabletop experiment.
In a paper appearing in Physical Review Letters, the physicists introduce the concept for a “neutrino laser” — a burst of neutrinos that could be produced by laser-cooling a gas of radioactive atoms down to temperatures colder than interstellar space. At such frigid temperatures, the team predicts, the atoms should behave as one quantum entity and radioactively decay in sync.
The decay of radioactive atoms naturally releases neutrinos, and the physicists say that in a coherent, quantum state this decay should accelerate, along with the production of neutrinos. This quantum effect should produce an amplified beam of neutrinos, broadly similar to how photons are amplified to produce conventional laser light.
“In our concept for a neutrino laser, the neutrinos would be emitted at a much faster rate than they normally would, sort of like a laser emits photons very fast,” says study co-author Ben Jones PhD ’15, an associate professor of physics at the University of Texas at Arlington.
As an example, the team calculated that such a neutrino laser could be realized by trapping 1 million atoms of rubidium-83. Normally, these radioactive atoms have a half-life of about 82 days, meaning that half of any remaining sample decays, with each decay shedding a neutrino, every 82 days. The physicists show that, by cooling rubidium-83 into a coherent quantum state, the atoms should undergo radioactive decay in mere minutes.
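For a sense of scale, standard exponential-decay arithmetic (not the paper’s superradiance calculation) gives the ordinary neutrino output of such a trap; the 10-minute burst used for comparison below is only an illustrative stand-in for the “mere minutes” the physicists predict:

```python
import math

N0 = 1_000_000                    # trapped rubidium-83 atoms
HALF_LIFE_MIN = 82 * 24 * 60      # 82-day half-life (figure quoted above), in minutes

decay_const = math.log(2) / HALF_LIFE_MIN    # per-minute decay constant
ordinary_rate = decay_const * N0             # initial decays (and neutrinos) per minute

print(f"Ordinary decay: ~{ordinary_rate:.1f} neutrinos per minute")

# Illustrative comparison: if the whole sample instead decayed within ~10 minutes,
# as a stand-in for the predicted coherent burst:
print(f"Coherent burst (assumed ~10 min): ~{N0 / 10:,.0f} neutrinos per minute")
```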
“This is a novel way to accelerate radioactive decay and the production of neutrinos, which to my knowledge, has never been done,” says co-author Joseph Formaggio, professor of physics at MIT.
The team hopes to build a small tabletop demonstration to test their idea. If it works, they envision a neutrino laser could be used as a new form of communication, by which the particles could be sent directly through the Earth to underground stations and habitats. The neutrino laser could also be an efficient source of radioisotopes, which, along with neutrinos, are byproducts of radioactive decay. Such radioisotopes could be used to enhance medical imaging and cancer diagnostics.
Coherent condensate
For every atom in the universe, there are about a billion neutrinos. A large fraction of these invisible particles may have formed in the first moments following the Big Bang, and they persist in what physicists call the “cosmic neutrino background.” Neutrinos are also produced whenever atomic nuclei fuse together or break apart, such as in the fusion reactions in the sun’s core, and in the normal decay of radioactive materials.
Several years ago, Formaggio and Jones separately considered a novel possibility: What if a natural process of neutrino production could be enhanced through quantum coherence? Initial explorations revealed fundamental roadblocks in realizing this. Years later, while discussing the properties of ultracold tritium (an unstable isotope of hydrogen that undergoes radioactive decay), they asked: Could the production of neutrinos be enhanced if radioactive atoms such as tritium could be made so cold that they could be brought into a quantum state known as a Bose-Einstein condensate?
A Bose-Einstein condensate, or BEC, is a state of matter that forms when a gas of certain particles is cooled down to near absolute zero. At this point, the particles are brought down to their lowest energy level and stop moving as individuals. In this deep freeze, the particles can start to “feel” each other’s quantum effects, and can act as one coherent entity — a unique phase that can result in exotic physics.
BECs have been realized in a number of atomic species. (One of the first instances was with sodium atoms, by MIT’s Wolfgang Ketterle, who shared the 2001 Nobel Prize in Physics for the result.) However, no one has made a BEC from radioactive atoms. To do so would be exceptionally challenging, as most radioisotopes have short half-lives and would decay entirely before they could be sufficiently cooled to form a BEC.
Nevertheless, Formaggio wondered, if radioactive atoms could be made into a BEC, would this enhance the production of neutrinos in some way? In trying to work out the quantum mechanical calculations, he found initially that no such effect was likely.
“It turned out to be a red herring — we can’t accelerate the process of radioactive decay, and neutrino production, just by making a Bose-Einstein condensate,” Formaggio says.
In sync with optics
Several years later, Jones revisited the idea, with an added ingredient: superradiance — a phenomenon of quantum optics that occurs when a collection of light-emitting atoms is stimulated to behave in sync. In this coherent phase, it’s predicted that the atoms should emit a burst of photons that is “superradiant,” or more radiant than when the atoms are normally out of sync.
Jones proposed to Formaggio that perhaps a similar superradiant effect is possible in a radioactive Bose-Einstein condensate, which could then result in a similar burst of neutrinos. The physicists went to the drawing board to work out the equations of quantum mechanics governing how light-emitting atoms morph from a coherent starting state into a superradiant state. They used the same equations to work out what radioactive atoms in a coherent BEC state would do.
“The outcome is: You get a lot more photons more quickly, and when you apply the same rules to something that gives you neutrinos, it will give you a whole bunch more neutrinos more quickly,” Formaggio explains. “That’s when the pieces clicked together, that superradiance in a radioactive condensate could enable this accelerated, laser-like neutrino emission.”
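For orientation, the textbook Dicke result for photon superradiance, which motivates this analogy (the paper’s detailed treatment of the neutrino case may differ), is that N coherently prepared emitters radiate at a peak rate growing as N squared, so the emission is compressed into a burst roughly N times shorter than independent decay:

```latex
I_{\text{peak}} \;\propto\; N^{2}\,\Gamma,
\qquad
\tau_{\text{burst}} \;\sim\; \frac{1}{N\,\Gamma},
```

where Γ is the decay rate of a single, isolated emitter.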
To test their concept in theory, the team calculated how neutrinos would be produced from a cloud of 1 million super-cooled rubidium-83 atoms. They found that, in the coherent BEC state, the atoms radioactively decayed at an accelerating rate, releasing a laser-like beam of neutrinos within minutes.
Now that the physicists have shown in theory that a neutrino laser is possible, they plan to test the idea with a small tabletop setup.
“It should be enough to take this radioactive material, vaporize it, trap it with lasers, cool it down, and then turn it into a Bose-Einstein condensate,” Jones says. “Then it should start doing this superradiance spontaneously.”
The pair acknowledge that such an experiment will require a number of precautions and careful manipulation.
“If it turns out that we can show it in the lab, then people can think about: Can we use this as a neutrino detector? Or a new form of communication?” Formaggio says. “That’s when the fun really starts.”
Study finds exoplanet TRAPPIST-1e is unlikely to have a Venus- or Mars-like atmosphere
In the search for habitable exoplanets, atmospheric conditions play a key role in determining whether a planet can sustain liquid water. Suitable candidates often sit in the “Goldilocks zone,” orbiting neither too close to nor too far from their host star for liquid water to persist. With the launch of the James Webb Space Telescope (JWST), astronomers are collecting improved observations of exoplanet atmospheres that will help determine which exoplanets are good candidates for further study.
In an open-access paper published today in The Astrophysical Journal Letters, astronomers used JWST to take a closer look at the atmosphere of the exoplanet TRAPPIST-1e, located in the TRAPPIST-1 system. While they haven’t found definitive proof of what it is made of — or if it even has an atmosphere — they were able to rule out several possibilities.
“The idea is: If we assume that the planet is not airless, can we constrain different atmospheric scenarios? Do those scenarios still allow for liquid water at the surface?” says Ana Glidden, a postdoc in the MIT Department of Earth, Atmospheric and Planetary Sciences (EAPS) and the MIT Kavli Institute for Astrophysics and Space Research, and the first author on the paper. The answer to both questions, they found, was yes.
The new data rule out a hydrogen-dominated atmosphere and place tighter constraints on secondary atmospheres, which are commonly created by processes such as volcanic eruptions and outgassing from the planet’s interior. The data still allow for the possibility of a surface ocean.
“TRAPPIST-1e remains one of our most compelling habitable-zone planets, and these new results take us a step closer to knowing what kind of world it is,” says Sara Seager, Class of 1941 Professor of Planetary Science at MIT and co-author on the study. “The evidence pointing away from Venus- and Mars-like atmospheres sharpens our focus on the scenarios still in play.”
The study’s co-authors also include collaborators from the University of Arizona, Johns Hopkins University, University of Michigan, the Space Telescope Science Institute, and members of the JWST-TST DREAMS Team.
Improved observations
Exoplanet atmospheres are studied using a technique called transmission spectroscopy. When a planet passes in front of its host star, the starlight is filtered through the planet’s atmosphere. Astronomers can determine which molecules are present in the atmosphere by seeing how the light changes at different wavelengths.
“Each molecule has a spectral fingerprint. You can compare your observations with those fingerprints to suss out which molecules may be present,” says Glidden.
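As a purely illustrative sketch of that fingerprint comparison (not the team’s actual retrieval pipeline), one could score candidate molecules by a simple chi-square fit of template spectra against the observed transit depths; the arrays and template names below are hypothetical placeholders:

```python
import numpy as np

def best_matching_template(observed_depth, depth_err, templates):
    """Score each candidate molecule's template spectrum against the observed
    transit depths with a chi-square statistic (illustration only)."""
    chi2 = {name: np.sum(((observed_depth - model) / depth_err) ** 2)
            for name, model in templates.items()}
    return min(chi2, key=chi2.get), chi2

# Hypothetical toy spectra; a real analysis fits full atmospheric models.
wavelengths = np.linspace(0.6, 5.0, 50)                    # microns
observed = 0.005 + 2e-5 * np.sin(3 * wavelengths)          # fake transit depths
errors = np.full_like(wavelengths, 3e-5)
templates = {"CO2": 0.005 + 2e-5 * np.sin(3 * wavelengths + 0.1),
             "CH4": 0.005 + 2e-5 * np.cos(2 * wavelengths)}
print(best_matching_template(observed, errors, templates)[0])
```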
JWST has broader wavelength coverage and higher spectral resolution than its predecessor, the Hubble Space Telescope, which makes it possible to observe molecules like carbon dioxide and methane that are more commonly found in our own solar system. However, the improved observations have also highlighted the problem of stellar contamination, where changes in the host star’s temperature due to features like starspots and flares make it difficult to interpret the data.
“Stellar activity strongly interferes with the planetary interpretation of the data because we can only observe a potential atmosphere through starlight,” says Glidden. “It is challenging to separate out which signals come from the star versus from the planet itself.”
Ruling out atmospheric conditions
The researchers used a novel approach to mitigate stellar activity by comparing observations from repeated visits. As a result, “any signal you can see varying visit-to-visit is most likely from the star, while anything that’s consistent between the visits is most likely the planet,” says Glidden.
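A minimal sketch of that logic (an illustration of the stated idea, not the team’s actual method), assuming transit spectra from several visits stacked into a single array:

```python
import numpy as np

def split_stellar_and_planetary(visit_spectra):
    """visit_spectra: (n_visits, n_wavelengths) array of measured transit depths.

    Returns the visit-averaged spectrum (the better estimate of any planetary
    signal) and the per-wavelength scatter across visits (features that vary
    visit-to-visit, which are more likely stellar in origin)."""
    visit_spectra = np.asarray(visit_spectra, dtype=float)
    planetary_estimate = visit_spectra.mean(axis=0)
    stellar_variability = visit_spectra.std(axis=0, ddof=1)
    return planetary_estimate, stellar_variability
```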
The researchers were then able to compare the results with several possible atmospheric scenarios. They found that carbon dioxide-rich atmospheres like those of Mars and Venus are unlikely, while a warm, nitrogen-rich atmosphere similar to that of Saturn’s moon Titan remains possible. The evidence is too weak, however, to determine whether any atmosphere is present at all, let alone to identify a specific gas. Additional observations already in the works will help narrow down the possibilities.
“With our initial observations, we have showcased the gains made with JWST. Our follow-up program will help us to further refine our understanding of one of our best habitable-zone planets,” says Glidden.
AI and machine learning for engineering design
Artificial intelligence optimization offers a host of benefits for mechanical engineers, including faster and more accurate designs and simulations, improved efficiency, reduced development costs through process automation, and enhanced predictive maintenance and quality control.
“When people think about mechanical engineering, they're thinking about basic mechanical tools like hammers and … hardware like cars, robots, cranes, but mechanical engineering is very broad,” says Faez Ahmed, the Doherty Chair in Ocean Utilization and associate professor of mechanical engineering at MIT. “Within mechanical engineering, machine learning, AI, and optimization are playing a big role.”
In Ahmed’s course, 2.155/156 (AI and Machine Learning for Engineering Design), students use tools and techniques from artificial intelligence and machine learning for mechanical engineering design, focusing on the creation of new products and addressing engineering design challenges.
“There’s a lot of reason for mechanical engineers to think about machine learning and AI to essentially expedite the design process,” says Lyle Regenwetter, a teaching assistant for the course and a PhD candidate in Ahmed’s Design Computation and Digital Engineering Lab (DeCoDE), where research focuses on developing new machine learning and optimization methods to study complex engineering design problems.
First offered in 2021, the class has quickly become one of the Department of Mechanical Engineering (MechE)’s most popular non-core offerings, attracting students from across the Institute, including mechanical engineering, civil and environmental engineering, aeronautics and astronautics, nuclear science and engineering, computer science, and the MIT Sloan School of Management, along with cross-registered students from Harvard University and other schools.
The course, which is open to both undergraduate and graduate students, focuses on implementing advanced machine learning and optimization strategies in the context of real-world mechanical design problems. From bike frames to city grids, students participate in contests related to AI for physical systems and tackle optimization challenges in a class environment fueled by friendly competition.
Students are given challenge problems and starter code that “gave a solution, but [not] the best solution …” explains Ilan Moyer, a graduate student in MechE. “Our task was to [determine], how can we do better?” Live leaderboards encourage students to continually refine their methods.
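For a flavor of what such a challenge loop can look like (an illustrative sketch, not the course’s actual starter code), the snippet below runs a simple random search over a toy “bike frame” design, with invented placeholder physics for the mass objective and the stress constraint:

```python
import random

def frame_mass(design):
    """Toy objective: heavier with larger tube diameters (placeholder physics)."""
    return sum(d ** 2 for d in design)

def stress_ok(design):
    """Toy constraint: very thin tubes are assumed to fail (placeholder physics)."""
    return min(design) >= 1.0

def random_search(n_iters=5000, n_tubes=4, seed=0):
    """Baseline optimizer of the kind a starter solution might use; students
    would try to beat it with smarter (e.g., learned or gradient-based) methods."""
    rng = random.Random(seed)
    best_design, best_mass = None, float("inf")
    for _ in range(n_iters):
        design = [rng.uniform(0.5, 3.0) for _ in range(n_tubes)]  # tube diameters, cm
        if stress_ok(design) and frame_mass(design) < best_mass:
            best_design, best_mass = design, frame_mass(design)   # new leaderboard entry
    return best_design, best_mass

print(random_search())
```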
Em Lauber, a system design and management graduate student, says the process gave space to explore the application of what students were learning and the practical skill of “literally how to code it.”
The curriculum incorporates discussions on research papers, and students also pursue hands-on exercises in machine learning tailored to specific engineering issues including robotics, aircraft, structures, and metamaterials. For their final project, students work together on a team project that employs AI techniques for design on a complex problem of their choice.
“It is wonderful to see the diverse breadth and high quality of class projects,” says Ahmed. “Student projects from this course often lead to research publications, and have even led to awards.” He cites the example of a recent paper, titled “GenCAD-Self-Repairing,” that went on to win the American Society of Mechanical Engineers Systems Engineering, Information and Knowledge Management 2025 Best Paper Award.
“The best part about the final project was that it gave every student the opportunity to apply what they’ve learned in the class to an area that interests them a lot,” says Malia Smith, a graduate student in MechE. Her project used marker-based motion-capture data to predict ground forces for runners, an effort she called “really gratifying” because it worked so much better than expected.
Lauber took the framework of a “cat tree” design with different modules of poles, platforms, and ramps to create customized solutions for individual cat households, while Moyer created software that designs a new type of 3D printer architecture.
“When you see machine learning in popular culture, it’s very abstracted, and you have the sense that there’s something very complicated going on,” says Moyer. “This class has opened the curtains.”
