MIT Latest News

MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.

MIT engineers develop a magnetic transistor for more energy-efficient electronics

Wed, 09/23/2025 - 10:32am

Transistors, the building blocks of modern electronics, are typically made of silicon. Because it’s a semiconductor, this material can control the flow of electricity in a circuit. But silicon has fundamental physical limits that restrict how compact and energy-efficient a transistor can be.

MIT researchers have now replaced silicon with a magnetic semiconductor, creating a magnetic transistor that could enable smaller, faster, and more energy-efficient circuits. The material’s magnetism strongly influences its electronic behavior, leading to more efficient control of the flow of electricity. 

The team used a novel magnetic material and an optimization process that reduces the material’s defects, which boosts the transistor’s performance.

The material’s unique magnetic properties also allow for transistors with built-in memory, which would simplify circuit design and unlock new applications for high-performance electronics.

“People have known about magnets for thousands of years, but there are very limited ways to incorporate magnetism into electronics. We have shown a new way to efficiently utilize magnetism that opens up a lot of possibilities for future applications and research,” says Chung-Tao Chou, an MIT graduate student in the departments of Electrical Engineering and Computer Science (EECS) and Physics, and co-lead author of a paper on this advance.

Chou is joined on the paper by co-lead author Eugene Park, a graduate student in the Department of Materials Science and Engineering (DMSE); Julian Klein, a DMSE research scientist; Josep Ingla-Aynes, a postdoc in the MIT Plasma Science and Fusion Center; Jagadeesh S. Moodera, a senior research scientist in the Department of Physics; and senior authors Frances Ross, TDK Professor in DMSE, and Luqiao Liu, an associate professor in EECS and a member of the Research Laboratory of Electronics; as well as others at the University of Chemistry and Technology in Prague. The paper appears today in Physical Review Letters.

Overcoming the limits

In an electronic device, silicon semiconductor transistors act like tiny light switches that turn a circuit on and off, or amplify weak signals in a communication system. They do this using a small input voltage.

But a fundamental physical limit of silicon semiconductors prevents a transistor from operating below a certain voltage, which hinders its energy efficiency.
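
The voltage floor in question is the well-known Boltzmann (thermionic) limit on subthreshold swing: in a conventional semiconductor transistor, changing the channel current by a factor of 10 requires at least (kT/q)·ln(10) of gate voltage. A quick back-of-the-envelope check of that figure (standard device physics, not a detail from the MIT paper):

```python
import math

# Boltzmann limit on subthreshold swing: the minimum gate voltage needed
# to change the channel current by one decade is (kT/q) * ln(10).
k = 1.380649e-23     # Boltzmann constant, J/K
q = 1.602176634e-19  # elementary charge, C
T = 300.0            # room temperature, K

swing_mV = (k * T / q) * math.log(10) * 1e3
print(f"{swing_mV:.1f} mV per decade")  # ≈ 59.5 mV/decade at 300 K
```

This roughly 60 mV/decade floor is what keeps conventional silicon transistors from operating at arbitrarily low voltages, and is the kind of limit a spin-based switching mechanism aims to sidestep.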

To make more efficient electronics, researchers have spent decades working toward magnetic transistors that utilize electron spin to control the flow of electricity. Electron spin is a fundamental property that enables electrons to behave like tiny magnets.

So far, scientists have mostly been limited to using certain magnetic materials. These lack the favorable electronic properties of semiconductors, constraining device performance.

“In this work, we combine magnetism and semiconductor physics to realize useful spintronic devices,” Liu says.

The researchers replace the silicon in the surface layer of a transistor with chromium sulfur bromide, a two-dimensional material that acts as a magnetic semiconductor.

Due to the material’s structure, researchers can switch between two magnetic states very cleanly. This makes it ideal for use in a transistor that smoothly switches between “on” and “off.”

“One of the biggest challenges we faced was finding the right material. We tried many other materials that didn’t work,” Chou says.

They discovered that changing these magnetic states modifies the material’s electronic properties, enabling low-energy operation. And unlike many other 2D materials, chromium sulfur bromide remains stable in air.

To make a transistor, the researchers pattern electrodes onto a silicon substrate, then carefully align and transfer the 2D material on top. They use tape to pick up a tiny piece of material, only a few tens of nanometers thick, and place it onto the substrate.

“A lot of researchers will use solvents or glue to do the transfer, but transistors require a very clean surface. We eliminate all those risks by simplifying this step,” Chou says.

Leveraging magnetism

This lack of contamination enables their device to outperform existing magnetic transistors. Most others can only create a weak magnetic effect, changing the flow of current by a few percent or less. Their new transistor can switch or amplify the electric current by a factor of 10.

They use an external magnetic field to change the magnetic state of the material, switching the transistor using significantly less energy than would usually be required.

The material also allows them to control the magnetic states with electric current. This is important because engineers cannot apply magnetic fields to individual transistors in an electronic device. They need to control each one electrically.

The material’s magnetic properties could also enable transistors with built-in memory, simplifying the design of logic or memory circuits.

A typical memory device has a magnetic cell to store information and a transistor to read it out. Their method can combine both into one magnetic transistor.

“Now, not only are transistors turning on and off, they are also remembering information. And because we can switch the transistor with greater magnitude, the signal is much stronger so we can read out the information faster, and in a much more reliable way,” Liu says.

Building on this demonstration, the researchers plan to further study the use of electrical current to control the device. They are also working to make their method scalable so they can fabricate arrays of transistors.

This research was supported, in part, by the Semiconductor Research Corporation, the U.S. Defense Advanced Research Projects Agency (DARPA), the U.S. National Science Foundation (NSF), the U.S. Department of Energy, the U.S. Army Research Office, and the Czech Ministry of Education, Youth, and Sports. The work was partially carried out at the MIT.nano facilities.

Generative AI improves a wireless vision system that sees through obstructions

MIT researchers have spent more than a decade studying techniques that enable robots to find and manipulate hidden objects by “seeing” through obstacles. Their methods utilize surface-penetrating wireless signals that reflect off concealed items.

Now, the researchers are leveraging generative artificial intelligence models to overcome a longstanding bottleneck that limited the precision of prior approaches. The result is a new method that produces more accurate shape reconstructions, which could improve a robot’s ability to reliably grasp and manipulate objects that are blocked from view.

This new technique builds a partial reconstruction of a hidden object from reflected wireless signals and fills in the missing parts of its shape using a specially trained generative AI model.

The researchers also introduced an expanded system that uses generative AI to accurately reconstruct an entire room, including all the furniture. The system utilizes wireless signals sent from one stationary radar, which reflect off humans moving in the space.  

This overcomes one key challenge of many existing methods, which require a wireless sensor to be mounted on a mobile robot to scan the environment. And unlike some popular camera-based techniques, their method preserves the privacy of people in the environment.

These innovations could enable warehouse robots to verify packed items before shipping, eliminating waste from product returns. They could also allow smart home robots to understand someone’s location in a room, improving the safety and efficiency of human-robot interaction.

“What we’ve done now is develop generative AI models that help us understand wireless reflections. This opens up a lot of interesting new applications, but technically it is also a qualitative leap in capabilities, from being able to fill in gaps we were not able to see before to being able to interpret reflections and reconstruct entire scenes,” says Fadel Adib, associate professor in the Department of Electrical Engineering and Computer Science, director of the Signal Kinetics group in the MIT Media Lab, and senior author of two papers on these techniques. “We are using AI to finally unlock wireless vision.”

Adib is joined on the first paper by lead author and research assistant Laura Dodds; as well as research assistants Maisy Lam, Waleed Akbar, and Yibo Cheng; and on the second paper by lead author and former postdoc Kaichen Zhou; Dodds; and research assistant Sayed Saad Afzal. Both papers will be presented at the IEEE Conference on Computer Vision and Pattern Recognition.

Surmounting specularity

The Adib Group previously demonstrated the use of millimeter wave (mmWave) signals to create accurate reconstructions of 3D objects that are hidden from view, like a lost wallet buried under a pile.

These waves, which are the same type of signals used in Wi-Fi, can pass through common obstructions like drywall, plastic, and cardboard, and reflect off hidden objects.

But mmWaves usually reflect in a specular manner, which means a wave reflects in a single direction after striking a surface. So large portions of the surface will reflect signals away from the mmWave sensor, making those areas effectively invisible.

“When we want to reconstruct an object, we are only able to see the top surface and we can’t see any of the bottom or sides,” Dodds explains.

The researchers previously used principles from physics to interpret reflected signals, but this limits the accuracy of the reconstructed 3D shape.

In the new papers, they overcame that limitation by using a generative AI model to fill in parts that are missing from a partial reconstruction.

“But the challenge then becomes: How do you train these models to fill in these gaps?” Adib says.

Usually, researchers use extremely large datasets to train a generative AI model, which is one reason models like Claude and Llama exhibit such impressive performance. But no mmWave datasets are large enough for training.

Instead, the researchers adapted the images in large computer vision datasets to mimic the properties in mmWave reflections.

“We were simulating the property of specularity and the noise we get from these reflections so we can apply existing datasets to our domain. It would have taken years for us to collect enough new data to do this,” Lam says.

The researchers embed the physics of mmWave reflections directly into these adapted data, creating a synthetic dataset they use to teach a generative AI model to perform plausible shape reconstructions.
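
As an illustration of what simulating specularity might look like, the sketch below keeps only surface points from a vision dataset whose normals face back toward the sensor, then adds range noise. The function name, angular tolerance, and noise model are hypothetical stand-ins, not the team's actual pipeline:

```python
import numpy as np

def simulate_specular_view(points, normals, sensor_dir, tol_deg=15.0,
                           noise_std=0.002, rng=None):
    """Mimic specular mmWave visibility on vision-dataset geometry.

    points:     (N, 3) surface points
    normals:    (N, 3) unit surface normals
    sensor_dir: (3,) unit vector from the surface toward the sensor
    """
    rng = np.random.default_rng(0) if rng is None else rng
    # Specularity: a facet only returns signal to the sensor if its
    # normal is (nearly) aligned with the direction back to the sensor.
    cos_tol = np.cos(np.deg2rad(tol_deg))
    visible = normals @ sensor_dir > cos_tol
    kept = points[visible]
    # Add measurement noise (meters) to the surviving points.
    return kept + rng.normal(scale=noise_std, size=kept.shape)
```

Run on a sphere with the sensor overhead, only the small cap facing the sensor survives, which is exactly the "top surface only" partial view described above.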

The complete system, called Wave-Former, proposes a set of potential object surfaces based on mmWave reflections, feeds them to the generative AI model to complete the shape, and then refines the surfaces until it achieves a full reconstruction.

Wave-Former was able to generate faithful reconstructions of about 70 everyday objects, such as cans, boxes, utensils, and fruit, boosting accuracy by nearly 20 percent over state-of-the-art baselines. The objects were hidden behind or under cardboard, wood, drywall, plastic, and fabric.

Seeing “ghosts”

The team used this same approach to build an expanded system that fully reconstructs entire indoor scenes by leveraging mmWave reflections off humans moving in a room.

Human motion generates multipath reflections. Some mmWaves reflect off the human, then reflect again off a wall or object, and then arrive back at the sensor, Dodds explains.

These secondary reflections create so-called “ghost signals,” which are reflected copies of the original signal that change location as a human moves. These ghost signals are usually discarded as noise, but they also hold information about the layout of the room.

“By analyzing how these reflections change over time, we can start to get a coarse understanding of the environment around us. But trying to directly interpret these signals is going to be limited in accuracy and resolution,” Dodds says.
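
A toy version of the geometry behind ghost signals: an ideal planar wall acts as a mirror, so a second-bounce reflection makes the person appear at their mirror image, and the wall is the perpendicular bisector of the segment joining person and ghost. This idealized helper (not the actual RISE method, which relies on a trained generative model) shows why ghost signals encode room layout:

```python
import numpy as np

def wall_from_ghost(person, ghost):
    """Infer a mirror-plane wall from a person's position and the
    apparent 'ghost' position created by a second-bounce reflection.

    Returns (point_on_wall, unit_normal): the wall passes through the
    midpoint of the person-ghost segment, perpendicular to it.
    """
    person = np.asarray(person, dtype=float)
    ghost = np.asarray(ghost, dtype=float)
    midpoint = (person + ghost) / 2.0
    normal = ghost - person
    return midpoint, normal / np.linalg.norm(normal)
```

For example, a person at (1, 0) and a ghost at (5, 0) imply a wall through (3, 0) with normal along the x-axis. As the person moves, each new ghost position constrains the wall further, which is how motion sweeps out the room's layout.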

They used a similar training method to teach a generative AI model to interpret those coarse scene reconstructions and understand the behavior of multipath mmWave reflections. This model fills in the gaps, refining the initial reconstruction until it completes the scene.

They tested their scene reconstruction system, called RISE, using more than 100 human trajectories captured by a single mmWave radar. On average, RISE generated reconstructions that were about twice as precise as those of existing techniques.

In the future, the researchers want to improve the granularity and detail in their reconstructions. They also want to build large foundation models for wireless signals, like the foundation models GPT, Claude, and Gemini for language and vision, which could open new applications.

This work is supported, in part, by the National Science Foundation (NSF), the MIT Media Lab, and Amazon.

A better method for identifying overconfident large language models

Large language models (LLMs) can generate credible but inaccurate responses, so researchers have developed uncertainty quantification methods to check the reliability of predictions. One popular method involves submitting the same prompt multiple times to see if the model generates the same answer.

But this method measures self-confidence, and even the most impressive LLM might be confidently wrong. Overconfidence can mislead users about the accuracy of a prediction, which might result in devastating consequences in high-stakes settings like health care or finance.   

To address this shortcoming, MIT researchers introduced a new method for measuring a different type of uncertainty that more reliably identifies confident but incorrect LLM responses.

Their method involves comparing a target model’s response to responses from a group of similar LLMs. They found that measuring cross-model disagreement more accurately captures this type of uncertainty than traditional approaches.

They combined their approach with a measure of LLM self-consistency to create a total uncertainty metric, and evaluated it on 10 realistic tasks, such as question-answering and math reasoning. This total uncertainty metric consistently outperformed other measures and was better at identifying unreliable predictions.

“Self-consistency is being used in a lot of different approaches for uncertainty quantification, but if your estimate of uncertainty only relies on a single model’s outcome, it is not necessarily trustable. We went back to the beginning to understand the limitations of current approaches and used those as a starting point to design a complementary method that can empirically improve the results,” says Kimia Hamidieh, an electrical engineering and computer science (EECS) graduate student at MIT and lead author of a paper on this technique.

She is joined on the paper by Veronika Thost, a research scientist at the MIT-IBM Watson AI Lab; Walter Gerych, a former MIT postdoc who is now an assistant professor at Worcester Polytechnic Institute; Mikhail Yurochkin, a staff research scientist at the MIT-IBM Watson AI Lab; and senior author Marzyeh Ghassemi, an associate professor in EECS and a member of the Institute of Medical Engineering Sciences and the Laboratory for Information and Decision Systems.

Understanding overconfidence

Many popular methods for uncertainty quantification involve asking a model for a confidence score or testing the consistency of its responses to the same prompt. These methods estimate aleatoric uncertainty, or how internally confident a model is in its own prediction.

However, LLMs can be confident when they are completely wrong. Research has shown that epistemic uncertainty, or uncertainty about whether one is using the right model, can be a better way to assess true uncertainty when a model is overconfident.

The MIT researchers estimate epistemic uncertainty by measuring disagreement across a similar group of LLMs.    

“If I ask ChatGPT the same question multiple times and it gives me the same answer over and over again, that doesn’t mean the answer is necessarily correct. If I switch to Claude or Gemini and ask them the same question, and I get a different answer, that is going to give me a sense of the epistemic uncertainty,” Hamidieh explains.

Epistemic uncertainty attempts to capture how far a target model diverges from the ideal model for that task. But since it is impossible to build an ideal model, researchers use surrogates or approximations that often rely on faulty assumptions.

To improve uncertainty quantification, the MIT researchers needed a more accurate way to estimate epistemic uncertainty.

An ensemble approach

The method they developed involves measuring the divergence between the target model and a small ensemble of models with similar size and architecture. They found that comparing semantic similarity, or how closely the meanings of the responses match, could provide a better estimate of epistemic uncertainty.

To achieve the most accurate estimate, the researchers needed a set of LLMs that covered diverse responses, weren’t too similar to the target model, and were weighted based on credibility.

“We found that the easiest way to satisfy all these properties is to take models that are trained by different companies. We tried many different approaches that were more complex, but this very simple approach ended up working best,” Hamidieh says.

Once they had developed this method for estimating epistemic uncertainty, they combined it with a standard approach that measures aleatoric uncertainty. This total uncertainty metric (TU) offered the most accurate reflection of whether a model’s confidence level is trustworthy.

“Uncertainty depends on the uncertainty of the given prompt as well as how close our model is to the optimal model. This is why summing up these two uncertainty metrics is going to give us the best estimate,” Hamidieh says.
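
The summing idea can be sketched in a few lines. The helpers below use exact string matching as a crude stand-in for the semantic-similarity comparison the paper describes; the function names and formulas are illustrative, not the authors' implementation:

```python
import math
from collections import Counter

def aleatoric_uncertainty(samples):
    """Self-consistency: entropy of one model's answers to repeated
    prompts (high = the model itself is inconsistent)."""
    counts = Counter(samples)
    n = len(samples)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

def epistemic_uncertainty(target_answer, ensemble_answers):
    """Cross-model disagreement: fraction of similar-scale models
    (ideally trained by different companies) that disagree with the
    target model's answer."""
    disagree = sum(a != target_answer for a in ensemble_answers)
    return disagree / len(ensemble_answers)

def total_uncertainty(samples, ensemble_answers):
    """TU = aleatoric + epistemic, per the summing intuition above."""
    target = Counter(samples).most_common(1)[0][0]
    return (aleatoric_uncertainty(samples)
            + epistemic_uncertainty(target, ensemble_answers))
```

A model that answers "Paris" five times in a row has zero aleatoric uncertainty, but if three independent models all answer "Lyon," the epistemic term flags the overconfidence, exactly the confidently-wrong case that self-consistency alone misses.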

TU could more effectively identify situations where an LLM is hallucinating, since epistemic uncertainty can flag confidently wrong outputs that aleatoric uncertainty might miss. It could also enable researchers to reinforce an LLM’s confidently correct answers during training, which may improve performance.

They tested TU using multiple LLMs on 10 common tasks, such as question-answering, summarization, translation, and math reasoning. Their method more effectively identified unreliable predictions than either measure on its own.

Measuring total uncertainty often required fewer queries than calculating aleatoric uncertainty, which could reduce computational costs and save energy.

Their experiments also revealed that epistemic uncertainty is most effective on tasks with a unique correct answer, like factual question-answering, but may underperform on more open-ended tasks.

In the future, the researchers could adapt their technique to improve its performance on open-ended queries. They may also build on this work by exploring other forms of aleatoric uncertainty.

This work is funded, in part, by the MIT-IBM Watson AI Lab.

New model predicts how mosquitoes will fly

Wed, 03/18/2026 - 2:00pm

A mosquito finds its target with the help of certain cues in its environment, such as a person’s silhouette and the carbon dioxide they exhale.

Now researchers at MIT and Georgia Tech have found that these visual and chemical cues help determine the insects’ flight paths. The team has developed the first three-dimensional model of mosquito flight, based on experiments with mosquitoes flying in the presence of different sensory cues.

Their model, reported today in the journal Science Advances, identifies three flight patterns that mosquitoes exhibit in response to sensory stimuli.

When they can only see a potential target, mosquitoes take a “fly-by” approach, quickly diving in toward the target, then flying back out if they do not detect any other host-confirming cues.

When they can’t see a target but can smell a chemical cue such as carbon dioxide, mosquitoes will do “double-takes,” slowing down and flitting back and forth to keep close to the source.

Interestingly, when mosquitoes receive both visual and chemical cues, such as seeing a silhouette and smelling carbon dioxide, they switch to an “orbiting” pattern, flying around a target at a steady speed as they prepare to land, much like a shark circling its prey.

The researchers say the new model can be used to predict how mosquitoes will fly in response to other cues, such as heat, humidity, and certain odors. Such predictions could help to design more effective traps and mosquito control strategies.

“Our work suggests that mosquito traps need specifically calibrated, multisensory lures to keep mosquitoes engaged long enough to be captured,” says study author Jörn Dunkel, MathWorks Professor of Mathematics at MIT. “We hope this establishes a new paradigm for studying pest behavior by using 3D tracking and data-driven modeling to decode their movement and solve major public health challenges.”

The study’s MIT co-authors are Chenyi Fei, a postdoc in MIT’s Department of Mathematics, and Alexander Cohen PhD ’26, a recent MIT chemical engineering PhD student advised by Dunkel and Professor Martin Bazant, along with Christopher Zuo, Soohwan Kim, and David L. Hu ’01, PhD ’06 of Georgia Tech, and Ring Cardé of the University of California at Riverside.

Flight by numbers

Mosquitoes are considered to be the most dangerous animals in the world, given their collective impact on human health. The blood-sucking insects transmit malaria, dengue fever, West Nile virus, and other deadly diseases that together cause over 770,000 deaths each year.

Of the 3,500 known species of mosquitoes, around 100 have evolved to specifically target humans, including Aedes aegypti, a species that uses a variety of cues to seek out human hosts. Scientists have studied how certain cues attract mosquitoes, mainly by setting up experiments in wind tunnels, where they can waft cues such as carbon dioxide and study how mosquitoes respond. Such experiments have mainly recorded data such as where and when the insects land. The researchers say no study has explored how mosquitoes fly as they hunt for a host.

“The big question was: How do mosquitoes find a human target?” says Fei. “There were previous experimental studies on what kind of cues might be important. But nothing has been especially quantitative.”

At MIT, Dunkel’s group develops mathematical models to describe and predict the behavior of complex living systems, such as how worms untangle, how starfish embryos develop and swim, and how microbes evolve their community structure over time.

Dunkel looked to apply similar quantitative techniques to predict flight patterns of mosquitoes after giving a talk at Georgia Tech. David Hu, a former MIT graduate student who is now a professor of mechanical engineering at Georgia Tech, proposed a collaboration; Hu’s lab was carrying out experiments with mosquitoes at a facility at the Centers for Disease Control and Prevention in Atlanta, where they were studying the insects’ behavior in response to sensory cues. Could Dunkel’s group use the collected data to identify significant flight behavior that could ultimately help scientists control mosquito populations?

“One of the original motivations was designing better traps for mosquitoes,” says Cohen. “Figuring out how they fly around a human gives insights on how we can avoid them.”

Taking cues

For their new study, Hu and his colleagues at Georgia Tech carried out experiments with 50 to 100 mosquitoes of the Aedes aegypti species. The insects flew inside a long, white, slightly angled rectangular room as cameras captured a detailed three-dimensional trajectory for each mosquito. In the center of the room, the researchers placed an object representing a particular visual or chemical cue.

In some trials, they placed a black Styrofoam sphere on a stand to represent a simple visual cue. (Mosquitoes would be able to see the black sphere against the room’s white background.) In other trials, they set up a white sphere with a tube running through it to pump out carbon dioxide at rates similar to what humans breathe out. These trials represented the presence of a chemical cue, but not a visual cue.

The researchers also studied the mosquitoes’ response to both visual and chemical cues, using a black sphere that emitted carbon dioxide. Finally, they observed how mosquitoes behaved around a human volunteer who wore protective clothing that was black on one side and white on the other.

Across 20 experiments, the team generated more than 53 million data points and 477,220 mosquito flight paths. Hu shared the data with Dunkel, whose group used the measurements to develop a model for mosquito flight behavior.

“We are proposing a very broad range of dynamical equations, and when you start out, the equation to predict a mosquito’s flight path is very complicated, with a lot of terms, including the relative importance of a visual versus a chemical cue,” Dunkel explains. “Then through iteration against data, we reduce the complexity of that equation until we get the simplest model that still agrees with the data.”

In the end, the group whittled down a simple model that accurately predicts how a mosquito will fly, given the presence of a visual cue, a chemical cue, or both. The flight paths in response to one or the other cue are markedly different. And interestingly, when both cues are present, the researchers noted that the resulting path is not “additive.” In other words, a mosquito does not simply combine the paths that it would separately take when it can both see and smell a target. Instead, the insects take a distinct path, circling, rather than diving or darting around their target.
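
One generic way to realize this "start broad, then simplify against data" loop is sequentially thresholded least squares, the workhorse of sparse-regression methods for dynamics such as SINDy. The sketch below is a stand-in for the general idea, not the paper's specific procedure:

```python
import numpy as np

def sparse_fit(library, dxdt, threshold=0.1, iterations=10):
    """Sequentially thresholded least squares: fit coefficients for a
    broad library of candidate terms, then repeatedly zero out terms
    with small coefficients and refit, paring a complicated dynamical
    model down to its simplest data-consistent form.

    library: (T, K) candidate terms evaluated along trajectories
    dxdt:    (T,)   measured time derivatives
    """
    coeffs = np.linalg.lstsq(library, dxdt, rcond=None)[0]
    for _ in range(iterations):
        small = np.abs(coeffs) < threshold
        coeffs[small] = 0.0
        active = ~small
        if active.any():
            # Refit using only the surviving candidate terms.
            coeffs[active] = np.linalg.lstsq(
                library[:, active], dxdt, rcond=None)[0]
    return coeffs
```

Given data from the toy dynamics dx/dt = -2x and a candidate library of x, x², and x³, the procedure recovers the single active term with coefficient -2 and discards the rest.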

“Obviously there are additional cues that humans emit, like odor, heat, and humidity,” Cohen notes. “For the species we study, visual and carbon dioxide cues are the most important. But we can apply this model to study different species and how they respond to other sensory cues.”

The researchers have developed an interactive app that incorporates the new mosquito flight model. Users can experiment with different objects and set parameters such as the number of mosquitoes around the object and the type of sensory cue that is present. The model then visualizes how the mosquitoes would fly in response.

“The original hope was to have a quantitative model that can simulate mosquito behavior around various trap designs,” Cohen says. “Now that we have a model, we can start to design more intelligent traps.”

This work was supported, in part, by the National Science Foundation, Schmidt Sciences, LLC, the NDSEG Fellowship Program, and the MIT MathWorks Professorship Fund. 

Pursuing a passion for public health

Wed, 03/18/2026 - 10:00am

MIT senior Srihitha Dasari never imagined she would be speaking in front of the United Nations about health care, technology, and the power of co-designing public health interventions in collaboration with impacted communities. 

But when she stepped up to the podium to speak about digital well-being and community-centered health care design, she carried with her more than research findings. She brought several years of experiential learning in public health environments, ranging from visiting exam rooms of New England’s largest safety net hospital to collaborating with nurses in rural Argentina and working on maternal health in India and Nepal. 

Dasari arrived at MIT intending to major in brain and cognitive sciences and follow a pre-med track. Like many aspiring physicians, she pictured her MIT years filled with lab work, shadowing doctors, and preparing for medical school. Instead, during her first Independent Activities Period (IAP), she enrolled in the PKG Center for Social Impact’s IAP Health Program and began to broaden her understanding of practicing medicine. 

“What was really incredible about IAP Health,” says Dasari, “is that I did it so early in not only my academic career, but just in the beginning of when I was actually formulating a lot of my career aspirations, [and] it really immersed me into what public health looks like.”

Through IAP Health, Dasari worked as an intern at the Boston Medical Center Autism Program. There, she provided in-clinic support to children with autism and their families, helping guide them through appointments and collaborating with physicians to adapt exam techniques to meet patients’ needs.

“When you think about how medicine is delivered, it can feel very systematic — like there are boxes you have to check,” she says. “But working in that clinic showed me … you can modify the experience to truly care for the whole person.”

The program exposed her not only to clinical care, but to the broader forces that shape health outcomes. “I didn’t envision myself doing public health when I entered college,” Dasari says. “But looking back, public health is the through line of everything I’ve done.”

She remained at Boston Medical Center as an intern for over a year with continued support and funding from the PKG Center’s Federal Work-Study and Social Impact Internship programs. The sustained engagement deepened her understanding of how health-care systems can either reinforce or reduce disparities — a systems-level perspective that carried into her global work.

During her second-year IAP, Dasari received a PKG Fellowship to develop an electronic health record system for a maternal ward in a rural hospital in Argentina. The project grew out of a relationship she developed through the student group MIT Global Health Alliance, which supports co-designing public health interventions with impacted communities.  

Dasari’s collaboration with the hospital evolved into a social enterprise that she co-founded: PuntoSalud, an AI-powered chatbot designed to bridge health information gaps in rural Argentina. Dasari and her co-founders received a $5,000 award and seed funding to prototype and develop PuntoSalud through the PKG IDEAS Social Innovation Incubator, MIT’s only entrepreneurship program focused solely on social impact. 

Speaking at the United Nations underscored a lesson she absorbed throughout her varied experience: Meaningful health innovation begins with relationships.

“I’ve been able to meet people from so many different facets of the health-care pipeline that I didn’t envision myself meeting,” Dasari says.

The mindset she developed through PKG programming has informed her experience beyond the center. Through MIT D-Lab, Dasari conducted maternal and neonatal health needs assessments in rural Nepal, interviewing community members to better understand gaps in care. The findings informed efforts to retrofit birthing centers with improved heating systems in cold climates. Later, supported by the MIT International Science and Technology Initiatives, she traveled to India to interview health-care providers about strategies to reduce non-medical cesarean section rates, with the goal of developing policy recommendations for other health systems.

“I came in thinking I would practice medicine one-on-one,” Dasari says. “Now I want to increase my impact in the health care field. I see that as clinical medicine intersected with public health, relieving health disparities for a wider population.”

As Dasari prepares to leave MIT for a year in clinical research, she does so with a systems lens on science and health care, and a commitment to social impact. 

“The path I’ve taken in health care as an undergrad student has given me both a sense of purpose and fulfillment as I prepare to leave MIT,” she says. “It’s shown me that meaningful impact can begin long before medical school, and that I want to carry forward the values these experiences instilled in me.”

For Dasari, experiential learning didn’t redirect her ambitions, but enhanced them. 

“I feel like the PKG Center … it’s not changing your goals,” she says. “It’s shaping them into their fullest potential.”

Brain circuit needed to incorporate new information may be linked to schizophrenia

Wed, 03/18/2026 - 6:00am

One of the symptoms of schizophrenia is difficulty incorporating new information about the world. This can lead people with schizophrenia to struggle with making decisions and, eventually, to lose touch with reality.

MIT neuroscientists have now identified a gene mutation that appears to give rise to this type of difficulty. In a study of mice, the researchers found that the mutated gene impairs the function of a brain circuit that is responsible for updating beliefs based on new input.

This mutation, in a gene called grin2a, was originally identified in a large-scale screen of patients with schizophrenia. The new study suggests that drugs targeting this brain circuit could help with some of the cognitive impairments seen in people with schizophrenia.

“If this circuit doesn’t work well, you cannot quickly integrate information,” says Guoping Feng, the James W. and Patricia T. Poitras Professor in Brain and Cognitive Sciences at MIT, a member of the Broad Institute of Harvard and MIT, and the associate director of the McGovern Institute for Brain Research at MIT. “We are quite confident this circuit is one of the mechanisms that contributes to the cognitive impairment that is a major part of the pathology of schizophrenia.”

Feng and Michael Halassa, a professor of psychiatry and neuroscience and director of translational research at Tufts University School of Medicine, are the senior authors of the new study, which appears today in Nature Neuroscience. Tingting Zhou, a research scientist at the McGovern Institute, and Yi-Yun Ho, a former MIT postdoc, are the lead authors of the paper.

Adapting to new information

Schizophrenia is known to have a strong genetic component. For the general population, the risk of developing the disease is about 1 percent, but that goes up to 10 percent for those who have a parent or sibling with the disease, and 50 percent for people who have an identical twin with the disease.

Researchers at the Stanley Center for Psychiatric Research at the Broad Institute have identified more than 100 gene variants linked to schizophrenia, using genome-wide association studies. However, many of those variants are located in non-coding regions of the genome, making it difficult to figure out how they might influence development of the disease.

More recently, researchers at the Stanley Center used a different strategy, known as whole-exome sequencing, to reveal gene mutations linked to schizophrenia. This technique sequences only the protein-coding regions of the genome, so it can reveal mutations that are located in known genes.

Using this approach on about 25,000 sequences from people with schizophrenia and 100,000 sequences from control subjects, the researchers identified 10 genes in which mutations significantly increase the risk of developing schizophrenia.

In the new Nature Neuroscience study, Feng and his students created a mouse model with a mutation in one of those genes, grin2a. This gene encodes a protein that forms part of the NMDA receptor — a receptor that is activated by the neurotransmitter glutamate and is often found on the surface of neurons.

Zhou then investigated whether these mice displayed any of the characteristic behaviors seen in people with schizophrenia. These individuals show many complex symptoms, including psychoses such as hallucinations and delusions (loss of contact with reality). Those are difficult to study in mice, but it is possible to study related symptoms such as difficulty in interpreting new sensory input.

Over the past two decades, schizophrenia researchers have hypothesized that psychosis may stem from an impaired ability to update beliefs based on new information.

“Our brain can form a prior belief of reality, and when sensory input comes into the brain, a neurotypical brain can use this new input to update the prior belief. This allows us to generate a new belief that’s close to what the reality is,” Zhou says. “What happens in schizophrenia patients is that they weigh the prior belief too heavily. They don’t use as much current input to update what they believed before, so the new belief is detached from reality.”

To study this, Zhou designed an experiment that required mice to choose between two levers to press to earn a food reward. One lever was low-reward — mice had to push it six times to get one drop of milk. A high-reward lever dispensed three drops per push.

At the beginning of the study, all of the mice learned to prefer the high-reward lever. However, as the experiment went on, the number of presses required to dispense the higher reward gradually went up, while there were no changes to the low-reward lever.

As the effort required went up, healthy mice started to switch back and forth between the two levers. Once they had to press the high-reward lever around 18 times for three drops of milk, making the effort per drop about the same for each lever, they eventually switched permanently to the low-reward lever. However, mice with a mutation in grin2a showed a different behavior pattern. They spent more time switching back and forth between the two levers, and they made the switch to the low-reward side much later.

“We find that neurotypical animals make adaptive decisions in this changing environment,” Zhou says. “They can switch from the high-reward side to the low-reward side around the equal value point, while for the animals with the mutation, the switch happens much later. Their adaptive decision-making is much slower compared to the wild-type animals.”
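The “equal value point” in the lever task can be checked with quick arithmetic. A minimal sketch (the press and drop counts are those described above; the intermediate ramp values are illustrative):

```python
def presses_per_drop(presses, drops):
    """Effort cost of a lever: presses required per drop of milk."""
    return presses / drops

# Low-reward lever: six presses for one drop, throughout the session.
low = presses_per_drop(6, 1)

# High-reward lever: three drops per push at first, with the required
# presses ramping up over the session (ramp values here are illustrative).
for presses in (1, 6, 12, 18):
    high = presses_per_drop(presses, 3)
    label = "high" if high < low else ("equal" if high == low else "low")
    print(f"{presses:2d} presses / 3 drops -> better lever: {label}")
```

At 18 presses for three drops, both levers cost six presses per drop, which is the point where healthy mice switched for good.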

An impaired circuit

Using functional ultrasound imaging and electrical recordings, the researchers found that the brain region affected most by the grin2a mutation was the mediodorsal thalamus. This part of the brain connects with the prefrontal cortex to form a thalamocortical circuit that is responsible for regulating cognitive functions such as executive control and decision-making.

The researchers found that neuronal activity in the mediodorsal thalamus appears to keep track of the changes in value of the two reward options. Additionally, the mice showed different patterns of neural activity depending on which state they were in: exploring both options or committed to one side.

The researchers also showed that they could use optogenetics to reverse the behavioral symptoms of the mice with mutated grin2a. They engineered the neurons of the mediodorsal thalamus so that they could be activated by light, and when these neurons were activated, the mice began behaving similarly to mice without the grin2a mutation.

While only a very small percentage of schizophrenia patients have mutations in the grin2a gene, it’s possible that this circuit dysfunction is a converging mechanism of cognitive impairment for a subset of schizophrenia patients with different causes.

Targeting this circuit could offer a way to overcome some of the cognitive impairments seen in people with schizophrenia, the researchers say. To do that, they are now working on identifying targets within the circuit that could be potentially druggable.

The research was funded by the National Institute of Mental Health, the Poitras Center for Psychiatric Disorders Research at MIT, the Yang Tan Collective at MIT, the K. Lisa Yang and Hock E. Tan Center for Molecular Therapeutics at MIT, the Stelling Family Research Fund at MIT, the Stanley Center for Psychiatric Research, and the Brain and Behavior Research Foundation.

Turning extreme heat into large-scale energy storage

Wed, 03/18/2026 - 12:00am

Thermal batteries can efficiently store energy as heat. But building them requires a carefully designed system with materials that can withstand cycles of extremely high temperatures, without succumbing to problems like corrosion, thermal expansion, and structural fatigue.

Many thermal battery systems move high-temperature gas or molten salt around through metal pipes. Fourth Power, founded by MIT Professor Asegun Henry, is turning these materials inside out, using molten metal to transport the heat, which is stored in carbon bricks.

“The idea was, instead of making the system from metal, let’s move liquid metals,” says Henry SM ’06, PhD ’09.

Henry’s approach earned him a Guinness World Record for the hottest liquid pump back in 2017 — important because when you double the absolute temperature of a material, to the point where it glows white-hot, the amount of light it emits doesn’t just double, it increases 16 times (or to the fourth power).
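The scaling Henry describes follows the Stefan-Boltzmann law: radiated power grows with the fourth power of absolute temperature. A quick sketch of the arithmetic (the temperatures are arbitrary illustrative values in kelvin):

```python
# Stefan-Boltzmann scaling: radiated power is proportional to T^4,
# hence the company name Fourth Power.
def emission_ratio(t_hot, t_cold):
    """Ratio of radiated power at two absolute temperatures (kelvin)."""
    return (t_hot / t_cold) ** 4

# Doubling the absolute temperature increases emission 16-fold.
print(emission_ratio(2000.0, 1000.0))  # 16.0
```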

The company is harvesting all that light with thermophotovoltaic cells, which work like solar cells to convert light into electricity. Henry and his collaborators broke another record when they demonstrated a thermophotovoltaic cell that could convert light to electricity with an efficiency above 40 percent.

Fourth Power is working to use those record-breaking innovations to provide energy for power grids, power producers, and technology companies building power-hungry infrastructure like data centers. Henry says the batteries can provide anywhere from 10 to over 100 hours of electricity at a storage cost that is significantly cheaper than lithium-ion batteries at grid scale. The company is currently cycling each section of its system through relevant operating temperatures — which are nearly half as hot as the sun — and plans to have a fully integrated demonstration unit operating later this year.

“Explaining why our system is such a huge improvement over everything else centers around power density,” explains Henry, who serves as Fourth Power’s chief technologist. “We realized if you push the temperature higher, you will transfer heat at a higher rate and shrink the system. Then everything gets cheaper. That’s why we pursue such high temperatures at Fourth Power. We operate our thermal battery between 1,900 and 2,400 degrees Celsius, which allows us to save a tremendous amount on the balance of system costs.”

A career in heat

Henry earned his master’s and PhD degrees from MIT before working in faculty positions at Georgia Tech and MIT. As a professor at both schools, his research has focused on thermal transport, storage, renewable energy, and other technologies that could lead to improvements in sustainability and decarbonization. Today, he is the George N. Hatsopoulos Professor in Thermodynamics in MIT’s Department of Mechanical Engineering.

Heat transfer systems are usually made out of metals like iron and nickel. Generally, the higher the temperature you want to reach, the more expensive the metal. Henry noticed ceramics can get much hotter than metals, but they’re not used nearly as often. He started asking why.

“The answer is often pretty straightforward: You can’t weld ceramics,” Henry says. “Ceramics aren’t ductile. They generally fail in a catastrophically brittle way, and that’s not how we like large systems to behave. But I couldn’t find many problems beyond that.”

After receiving funding from the Department of Energy and the MIT Energy Initiative, Henry spent years developing a pump made from ceramics and graphite (which is similar to a ceramic). In 2017, his pump set the record for the highest recorded operating temperature for a liquid pump, at 1,200 degrees Celsius. The pump moved white-hot liquid tin as its working fluid. He chose tin because it doesn’t react with carbon, eliminating corrosion. It also has a relatively low melting point and a high boiling point, which keeps it liquid across a wide temperature range.

The challenge then became designing the system.

“Typically, a mechanical engineer would come up with a design and say, ‘Give me the best materials to do this,’” Henry says. “We flipped the problem, so we were saying, ‘We know what materials will work, now we need to figure out how to make a system out of it.’”

In 2023, Henry met Arvin Ganesan, who had previously led global energy work at Apple. At first, Ganesan wasn’t interested in joining a startup — he had two young kids and wanted to prioritize his family — but he was intrigued by the potential of the technology. At their first meeting, the two connected over shared values and fatherhood, as Henry surprised Ganesan by bringing his own young children.

“I had a sense this technology had the promise to tackle the twin crises of affordability and climate change at the same time,” says Ganesan, who is now Fourth Power’s CEO. “As energy demand becomes more pronounced, we either need to deploy harder and deeper tech, which is also important, or improve existing tech. Fourth Power is trying to simplify the physics and thermodynamic principles to deliver an approach that has been very well-studied for a very long time.”

The system Fourth Power designed takes in excess electricity from sources like the grid and uses it to heat a series of 6-foot-long, 20-inch-thick graphite bricks until they reach about 2,400 degrees Celsius. At that point the system is considered fully charged.

When the customer wants the electricity back, the bricks are used to heat up liquid tin, which flows through a series of graphite pipes, pumps, and flow meters to thermophotovoltaic cells, which turn the light from the glowing hot infrastructure back into electricity.

“You can basically dip the cells into the light and get power, or you can pull them back out and shut it off,” Henry explains. “The liquid metal starts at 2,400 Celsius and then cools as it’s going through the system because it’s giving a bunch of its energy to the photovoltaic, and then it circulates back through the graphite blocks, which act as a furnace, to retrieve more heat.”

From concept to company

Later this year, Fourth Power plans to turn on a 1-megawatt-hour system in its new headquarters in Bedford, Massachusetts. A full-scale system would offer 25 megawatts of power and 250 megawatt hours of storage and take up about half a football field.

“Most technologies you’ll see in storage are around 10 megawatts an acre or less,” Henry explains. “Fourth Power is more like 100 megawatts per acre. It’s very power-dense.”

The power and storage units of Fourth Power’s system are modular, which will allow customers to start with a smaller system and add storage units to extend storage length later. The company expects to lose about 1 percent of total heat stored per day.

“Customers can buy one storage and one power module, and that’s a 10-hour battery,” Henry explains. “But if they want one power module and two storage modules, that’s a 20-hour battery. Customers can mix and match, which is really advantageous for utilities as renewables scale and storage needs change.”
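The mix-and-match arithmetic follows directly from the module sizes given above (25 megawatts per power module, 250 megawatt-hours per storage module). A simplified sketch that ignores losses, such as the roughly 1 percent of stored heat lost per day:

```python
POWER_MODULE_MW = 25.0      # discharge power per power module (from the article)
STORAGE_MODULE_MWH = 250.0  # stored energy per storage module (from the article)

def storage_hours(power_modules, storage_modules):
    """Discharge duration in hours for a given module mix, ignoring losses."""
    energy = storage_modules * STORAGE_MODULE_MWH
    power = power_modules * POWER_MODULE_MW
    return energy / power

print(storage_hours(1, 1))  # 10.0 -> a 10-hour battery
print(storage_hours(1, 2))  # 20.0 -> a 20-hour battery
```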

Down the line, the system could also be run as a power plant, converting fuel into electricity or using fuel to charge its batteries during stretches with little wind or sun. It could also be used to provide industrial heat.

But for now, Fourth Power is focused on the battery application.

“Utilities need something cheap and they need something reliable,” Henry says. “The only technology that has managed to reach at least one of those requirements is lithium ion. But the world is waiting for something that’s much cheaper than lithium ion and just as reliable, if not better. That’s what we’re focused on demonstrating to the world.”

John Ochsendorf named associate dean for research for the School of Architecture and Planning

Wed, 03/18/2026 - 12:00am

Professor John Ochsendorf, a member of the MIT faculty since 2002, is taking on a new role in support of the research efforts of faculty and students in the MIT School of Architecture and Planning (SA+P). At the start of this year, Ochsendorf was appointed to lead an initiative strengthening research strategy, support, and funding across the school.

“John is a bridge-builder by instinct and practice, and we look forward to the bridges he will build between our school and industry, our school and MIT, and between research and pedagogy in our school,” says SA+P Dean Hashim Sarkis. The appointment comes as sponsored research across SA+P continues to grow, expanding opportunities for graduate research assistantships and interdisciplinary collaboration across MIT.

Ochsendorf is the Class of 1942 Professor with dual appointments in the departments of Architecture and Civil and Environmental Engineering in the MIT School of Engineering. At the center of his work is a deep commitment to students and education through research and making. For example, in close collaboration with students and alumni, he has contributed to projects ranging from the Sean Collier Memorial on campus to a recent Martin Puryear sculpture at Storm King Art Center. Since 2022, Ochsendorf has served as the founding director of the MIT Morningside Academy for Design, where he helped establish new models for design research, interdisciplinary collaboration, and student engagement across the Institute.

Ochsendorf describes the new role as both a “challenge and an opportunity” to support the considerable and increasingly broad portfolio of research across SA+P.

“We want to understand the current landscape of our research funding and identify the challenges and inefficiencies impacting faculty,” he notes. “The ultimate goal is to grow our research capacity for a world that needs the best ideas from MIT.”

The effort is consistent with SA+P’s history of pioneering research and pedagogic exploration. The Department of Architecture was among the first in the United States to establish doctoral programs within a school of architecture, including PhDs in history, theory, and criticism and in building technology. The Department of Urban Studies and Planning is home to the largest urban planning faculty in the country and maintains a variety of research labs, while Media Arts and Sciences and the Media Lab has a broad and deep research culture. Each of the school’s departments enjoys the advantage of operating within the context of MIT’s culture of innovation and interdisciplinary study. As new faculty hires have been increasingly research-driven, the time for developing and supporting robust research portfolios is now. 

Ochsendorf and his students’ research has bridged the spectrum from humanistic research supported by organizations such as the National Endowment for the Humanities and the Graham Foundation for Advanced Studies in the Fine Arts to more scientific research supported by the National Science Foundation. In his new role, he will build on that experience to work with faculty and Institute partners to strengthen grant development, clarify research priorities, and expand research capacity across SA+P.

“I’ve always loved being at MIT because of the team spirit here,” says Ochsendorf. “We’re a place where we try to support each other, and it’s because of this environment that I am excited about this new role.”

Sustaining diplomacy amid competition in US-China relations

Wed, 03/18/2026 - 12:00am

The United States and China “are the two largest emitters of carbon in the world,” said Nicholas Burns, former U.S. ambassador to the People’s Republic of China, at a recent MIT seminar. “We need to work with each other for the good of both of our countries.” 

During the MITEI Presents: Advancing the Energy Transition presentation, Burns gave insight into the evolving state of U.S.-China relations, its implications for the global order, and its impact on global efforts to advance the energy transition and address climate change.

“We are the two largest global economies,” said Burns, who is now the Goodman Professor of the Practice of Diplomacy and International Relations at Harvard University’s Kennedy School of Government. “These are the only two countries that affect everybody else in the international system because of our weight.”

The relationship between the United States and China can be summarized in three words, according to Burns: competitive, tough, and adversarial — a description that rings true on both sides. He listed four primary areas for this competition: military, technology, trade and economics, and values.

Burns described the especially complicated area of trade and economics. “We both want to be number one. Neither of us — to be honest — is willing to be number two,” said Burns. Outside of North America, China is the United States’ largest trade partner. Outright trade wars — like those in April and October 2025 — create friction. “At one point, you’ll remember, 145 percent tariffs by the United States, and 125 percent by China on the United States. That just grinds a relationship. Those levels of tariffs, had they been sustained, would have meant zero trade between the two countries.”

The energy field can be significantly impacted by this area of competition, Burns added. China is dominant in the production and processing of rare earth elements, many of which are critical to products like lithium batteries, solar panels, and electric vehicles. In 2024 and 2025, the United States was not the only country to place tariffs on these products; India, Turkey, South Africa, Mexico, Canada, the EU, and others followed suit. “I think the Trump administration is right, as President Biden was, to try to diversify sources on rare earths,” Burns said.

Burns also noted with interest the dichotomy in the Chinese energy sector between their lead on clean energy technology and their continual use of coal, standing out as an inconsistency in China’s efforts. Burns believes that climate change could be a key area of cooperation between China and the United States, emphasizing the importance of the United States’ participation, both technologically and diplomatically.

Burns also described the significant technological competition between the United States and China — an area of central importance. Throughout his presentation, Burns was quick to praise the emphasis that China puts on education and academic achievement, particularly in STEM fields. Pulling from a recent article in The Economist, he compared the 36 percent of Chinese first-year university students majoring in STEM fields to the 5 percent of American first-year students in STEM. “Think about the volume of graduates and the disparity between our country and China,” he said. “Then think about the percentage of those graduates who go into science and technology.”

Currently, areas like artificial intelligence, quantum computing, and biotechnology are taking center stage in technological innovation. “The Chinese are very skilled in terms of industrial processes and doctrine of adapting quickly,” said Burns. He explained that holding a competitive edge lies not only in who is first on the market, but who adopts the technology first, and who is able to unite that technological progress with policy.

“This is the most important relationship that we have in the world,” said Burns. He believes that the true test is whether the United States and China can manage competition so that interests are protected, while avoiding the use of the massive destructive power both countries possess. “We’ve got to normalize the communication and engagement to prevent the worst from happening,” said Burns.

“We’re at a stage of human history where we’re all linked together, and the fate of everybody in this room and all of our countries is linked together by these huge transnational challenges,” said Burns. “We’ve got to learn to compete and yet live in peace with each other in the process.”

This speaker series highlights energy experts and leaders at the forefront of the scientific, technological, and policy solutions needed to transform our energy systems. Visit MITEI’s Events page for more information on this and additional events.

MIT-IBM Watson AI Lab seed to signal: Amplifying early-career faculty impact

Tue, 03/17/2026 - 4:35pm

The early years of faculty members’ careers are a formative and exciting time in which to establish a firm footing that helps determine the trajectory of researchers’ studies. This includes building a research team, which demands innovative ideas and direction, creative collaborators, and reliable resources. 

For a group of MIT faculty working with and on artificial intelligence, early engagement with the MIT-IBM Watson AI Lab through its projects has played an important role in promoting ambitious lines of inquiry and in shaping prolific research groups.

Building momentum

“The MIT-IBM Watson AI Lab has been hugely important for my success, especially when I was starting out,” says Jacob Andreas — associate professor in the Department of Electrical Engineering and Computer Science (EECS), a member of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), and a researcher with the MIT-IBM Watson AI Lab — who studies natural language processing (NLP). Shortly after joining MIT, Andreas jump-started his first major project through the MIT-IBM Watson AI Lab, working on language representation and structured data augmentation methods for low-resource languages. “It really was the thing that let me launch my lab and start recruiting students.” 

Andreas notes that this occurred during a “pivotal moment” when the field of NLP was undergoing significant shifts to understand language models — a task that required significantly more compute, which was available through the MIT-IBM Watson AI Lab. “I feel like the kind of work that we did under that [first] project, and in collaboration with all of our people on the IBM side, was pretty helpful in figuring out just how to navigate that transition.” Further, the Andreas group was able to pursue multi-year projects on pre-training, reinforcement learning, and calibration for trustworthy responses, thanks to the computing resources and expertise within the MIT-IBM community.

For several other faculty members, timely participation with the MIT-IBM Watson AI Lab proved to be highly advantageous as well. “Having both intellectual support and also being able to leverage some of the computational resources that are within MIT-IBM, that’s been completely transformative and incredibly important for my research program,” says Yoon Kim — associate professor in EECS, CSAIL, and a researcher with the MIT-IBM Watson AI Lab — who has also seen his research field alter trajectory. Before joining MIT, Kim met his future collaborators during an MIT-IBM postdoctoral position, where he pursued neuro-symbolic model development; now, Kim’s team develops methods to improve large language model (LLM) capabilities and efficiency. 

One factor he points to that led to his group’s success is a seamless research process with intellectual partners. This has allowed his MIT-IBM team to apply for a project, experiment at scale, identify bottlenecks, validate techniques, and adapt as necessary to develop cutting-edge methods for potential inclusion in real-world applications. “This is an impetus for new ideas, and that’s, I think, what’s unique about this relationship,” says Kim.

Merging expertise

The nature of the MIT-IBM Watson AI Lab is that it not only brings together researchers in the AI realm to accelerate research, but also blends work across disciplines. Lab researcher and MIT associate professor in EECS and CSAIL Justin Solomon describes his research group as growing up with the lab, and the collaboration as being “crucial … from its beginning until now.” Solomon’s research team focuses on theoretically oriented, geometric problems as they pertain to computer graphics, vision, and machine learning. 

Solomon credits the MIT-IBM collaboration with expanding his skill set as well as applications of his group’s work — a sentiment that’s also shared by lab researchers Chuchu Fan, an associate professor of aeronautics and astronautics and a member of the Laboratory for Information and Decision Systems, and Faez Ahmed, associate professor of mechanical engineering. “They [IBM] are able to translate some of these really messy problems from engineering into the sort of mathematical assets that our team can work on, and close the loop,” says Solomon. This, for Solomon, includes fusing distinct AI models that were trained on different datasets for separate tasks. “I think these are all really exciting spaces,” he says.

“I think these early-career projects [with the MIT-IBM Watson AI Lab] largely shaped my own research agenda,” says Fan, whose research intersects robotics, control theory, and safety-critical systems. Like Kim, Solomon, and Andreas, Fan and Ahmed began projects through the collaboration the first year they were able to at MIT. Constraints and optimization govern the problems that Fan and Ahmed address, and so require deep domain knowledge outside of AI. 

Working with the MIT-IBM Watson AI Lab enabled Fan’s group to combine formal methods with natural language processing, which she says allowed the team to go from developing autoregressive task and motion planning for robots to creating LLM-based agents for travel planning, decision-making, and verification. “That work was the first exploration of using an LLM to translate any free-form natural language into a specification that a robot can understand and execute. That’s something that I’m very proud of, and it was very difficult at the time,” says Fan. Further, through joint investigation, her team has been able to improve LLM reasoning — work that “would be impossible without the IBM support,” she says.

Through the lab, Faez Ahmed and his collaborators developed machine-learning methods to accelerate discovery and design within complex mechanical systems. Their Linkages work, for instance, employs “generative optimization” to solve engineering problems in a way that is both data-driven and precise; more recently, they’re applying multi-modal data and LLMs to computer-aided design. Ahmed states that AI is frequently applied to problems that are already solvable, but could benefit from increased speed or efficiency; however, challenges — like mechanical linkages that were deemed “almost unsolvable” — are now within reach. “I do think that is definitely the hallmark [of our MIT-IBM team],” says Ahmed, praising the achievements of his MIT-IBM group, which is co-led by Akash Srivastava and Dan Gutfreund of IBM.

What began as initial collaborations for each MIT faculty member has evolved into a lasting intellectual relationship, where both parties are “excited about the science,” and “student-driven,” Ahmed adds. Taken together, the experiences of Jacob Andreas, Yoon Kim, Justin Solomon, Chuchu Fan, and Faez Ahmed speak to the impact that a durable, hands-on, academia-industry relationship can have on establishing research groups and ambitious scientific exploration.

Three anesthesia drugs all have the same effect in the brain, MIT researchers find

Tue, 03/17/2026 - 11:00am

When patients undergo general anesthesia, doctors can choose among several drugs. Although each of these drugs acts on neurons in different ways, they all lead to the same result: a disruption of the brain’s balance between stability and excitability, according to a new MIT study.

This disruption causes neural activity to become increasingly unstable, until the brain loses consciousness, the researchers found. The discovery of this common mechanism could make it easier to develop new technologies for monitoring patients while they are undergoing anesthesia.

“What’s exciting about that is the possibility of a universal anesthesia-delivery system that can measure this one signal and tell how unconscious you are, regardless of which drugs they’re using in the operating room,” says Earl Miller, the Picower Professor of Neuroscience and a member of MIT’s Picower Institute for Learning and Memory.

Miller, Edward Hood Taplin Professor of Medical Engineering and Computational Neuroscience Emery Brown, and their colleagues are now working on an automated control system for delivery of anesthesia drugs, which would measure the brain’s stability using EEG and then automatically adjust the drug dose. This could help doctors ensure that patients stay unconscious throughout surgery without becoming too deeply unconscious, which can have negative side effects following the procedure.

Miller and Ila Fiete, a professor of brain and cognitive sciences, the director of the K. Lisa Yang Integrative Computational Neuroscience Center (ICoN), and a member of MIT’s McGovern Institute for Brain Research, are the senior authors of the new study, which appears today in Cell Reports. MIT graduate student Adam Eisen is the paper’s lead author.

Destabilizing the brain

Exactly how anesthesia drugs cause the brain to lose consciousness has been a longstanding question in neuroscience. In 2024, a study from Miller’s and Fiete’s labs suggested an answer for propofol: the drug works by disrupting the balance between stability and excitability in the brain.

When someone is awake, their brain is able to maintain this delicate balance, responding to sensory information or other input and then returning to a stable baseline.

“The nervous system has to operate on a knife’s edge in this narrow range of excitability,” Miller says. “It has to be excitable enough so different parts can influence one another, but if it gets too excited it goes off into chaotic activity.”

In that 2024 study, the researchers found that propofol knocks the brain out of this state, known as “dynamic stability.” As doses of the drug increased, the brain took longer and longer to return to its baseline state after responding to new input. This effect became increasingly pronounced until consciousness was lost.

For that study, the researchers devised a computational model that analyzes neural activity recorded from the brain. This technique allowed them to determine how the brain responds to perturbations such as an auditory tone or other sensory input, and how long it takes to return to its baseline stability.
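The core idea of measuring stability by perturbation recovery can be illustrated with a toy simulation. This is not the study’s actual model, just a minimal sketch: a one-dimensional system relaxes back to baseline at a given decay rate, and as that rate shrinks toward zero (the edge of instability), the time to recover from a perturbation grows.

```python
def recovery_time(decay_rate, dt=0.001, tol=0.05):
    """Time for a perturbed signal obeying dx/dt = -decay_rate * x
    to fall back within `tol` of its baseline (x = 0), starting at x = 1."""
    x, t = 1.0, 0.0
    while abs(x) > tol:
        x += -decay_rate * x * dt  # simple Euler step toward baseline
        t += dt
    return t

# A more stable system (larger decay rate) recovers faster; as the
# decay rate shrinks toward zero, the edge of instability, the
# recovery time grows without bound.
for rate in (10.0, 5.0, 1.0, 0.5):
    print(f"decay rate {rate:4.1f}: recovery in {recovery_time(rate):.2f} time units")
```

In this simplified picture, increasing anesthetic dose corresponds to lowering the decay rate, which is why recovery from sensory perturbations slows as unconsciousness approaches.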

In their new study, the researchers used the same technique to measure how the brain responds not only to propofol but also to two additional anesthesia drugs, ketamine and dexmedetomidine. Animals were given one of the three drugs while their brain activity, including responses to auditory tones, was analyzed.

This study showed that the same destabilization induced by propofol also appears during administration of the other two drugs. This “universal signature” appears even though the three drugs have different molecular mechanisms: propofol binds to GABA receptors, inhibiting neurons that have those receptors; dexmedetomidine blocks the release of norepinephrine; and ketamine blocks NMDA receptors, suppressing neurons with those receptors.

Each of these pathways, the researchers hypothesize, affects the brain’s balance of stability and excitability in a different way, yet each leads to an overall destabilization of that balance.

“All three of these drugs appear to do the exact same thing,” Miller says. “In fact, you could look at the destabilization measure we use and you can’t tell which drug is being applied.”

The researchers now plan to further investigate how each of these drugs may give rise to the same patterns of brain destabilization.

“The molecular mechanisms of ketamine and dexmedetomidine are a bit more involved than propofol mechanisms,” Eisen says. “A future direction is to do a meaningful model of what the biophysical effects of those are and see how that could lead to destabilization.”

Monitoring anesthesia

Now that the researchers have shown that three different anesthesia drugs produce similar destabilization patterns in the brain, they believe that measuring those patterns could offer a valuable way to monitor patients during anesthesia. While anesthesia is overall a very safe procedure, it does carry some risks, especially for very young children and for people over 65.

For adults suffering from dementia, anesthesia can make the condition worse, and it can also exacerbate neuropsychiatric disorders such as depression. These risks are higher if patients go into a deeper state of unconsciousness known as burst suppression.

To help reduce those risks, Miller and Brown, who is also an anesthesiologist at MGH, are developing a prototype device that can measure patients’ EEG readings while under anesthesia and adjust their dose accordingly. Currently, doctors monitor patients’ heart rate, blood pressure, and other vital signs during surgery, but these don’t give as accurate a reading of how deeply unconscious the patient is.

“If you can limit people’s exposure to anesthesia, if you give just enough and no more, you can reduce risks across the board,” Miller says.

Working with researchers at Brown University, the MIT team is now planning to run a small clinical trial of their monitoring device with patients undergoing surgery.

The research was funded by the U.S. Office of Naval Research, the National Institute of Mental Health, the Simons Center for the Social Brain, the Freedom Together Foundation, the Picower Institute, the National Science Foundation Computer and Information Science and Engineering Directorate, the Simons Collaboration on the Global Brain, the McGovern Institute, and the National Institutes of Health.

“We the People” depicts inventors, dreamers, and innovators in all 50 states

Tue, 03/17/2026 - 12:00am

Zora Neale Hurston remains one of America’s best-known authors. Charles Henry Turner developed landmark studies about the behavior of bees and spiders. Brian Wilson founded the Beach Boys. George Nissen invented the trampoline. What do they all have in common?

Well, for one thing, they were all innovative Americans — creators and discoverers, producing work no one anticipated. For another, they are all now celebrated as such, in verse, by Joshua Bennett.

That’s right. Bennett — an MIT professor, lauded poet, and literary scholar — is marking the 250th anniversary of the founding of the U.S. with a book-length work of poetry about the country and some of its distinctive figures. In fact, 50 of them: Bennett has written a substantial work featuring remarkable people or inventions from each of the 50 states, meditating on their place in the cultural fabric of the U.S.

“There’s so much to be said for a country where you and I are possible, and the things we do are possible,” Bennett says.

The book, “We (The People of the United States),” is published today by Penguin Books. Bennett is a professor and the Distinguished Chair of the Humanities at MIT.

Bennett’s new work has some prominent Americans in it, but is no gauzy listing of familiar icons. Many of the 50 people in his book overcame hardship, poverty, rejection, or discrimination; some have already been rescued from obscurity, but others have not received proper acclaim. Few of them had a straightforward, simple connection with their times.

“It’s about feeling that you have a life in this country which is undeniably complex, but also has this remarkable beauty to it,” Bennett says of the work. “A beauty you helped to create, and that no one can take away from you.”

The figures that Bennett writes about are sources of fascination, and inspiration, demonstrating the kinds of lives it is possible to invent in the U.S.

“We’re in a moment that calls for compelling, historically grounded stories about what America is, what it has been, and what it can be,” Bennett adds. “Can we build a life-affirming vision for the future and those who will inherit it? I’m trying to. I work on it every day.”

Taking flight

“We (The People of the United States)” is inspired, in part, by Virgil’s “Georgics,” pastoral poems by the great Roman poet. Bennett encountered them while a PhD student in literature at Princeton University.

“The poet Susan Stewart, my professor at Princeton, introduced me to Virgil’s Georgics,” Bennett says. “I eventually started to think: What would it look like for me to cover Virgil?” Adding to his interest in the concept, one of his favorite poets, Gwendolyn Brooks, had spent time recasting Virgil’s ancient epic, “The Aeneid,” for her Pulitzer Prize-winning work, “Annie Allen.” She also translated the original work from Latin as a teenager. Moreover, Bennett’s writing has long engaged with the subject of people working the land in America.

“I decided to start writing all these poems about agriculture,” Bennett says. “But then I thought, this would be interesting as an epic poem about America.” As he launched the project, its focus shifted some more: “I started to think about the book as an ode to invention.”

Soon Bennett had worked out the structure. An opening section of the work is about his own family background, becoming a father, and the process of building a life here in Massachusetts.

“Where does my influence, my aspiration, end and the child begin?” Bennett writes in one poem. That section prefigures further themes in the collection about the domestic environments many of its figures emerged from. For the rest of the work, with one innovator or innovation for each of the 50 states, Bennett adopted a regular writing schedule, producing at least one new poem per week until he was finished. 

Hurston, one of several famous authors and artists featured in the book, represents Florida. From Ohio, entomologist Charles Henry Turner was the first Black person to receive a PhD from the University of Chicago, in 1907, before conducting a wide range of studies about the cognition and behavior of spiders and bees, among other things.

George Nissen, meanwhile, was a University of Iowa gymnast who built the first trampoline in the 1930s in his home state — something Bennett calls a “magical device” that brings to life “the scene in your mind of the leap/and of the leap itself, where you are airborne, illuminated/quickly immortal.” Whether these innovations emerged through rigorous academic exploration or became mass-market goods that produce flights of fancy, Bennett has a keen eye for people who break new ground and fire our own feelings of wonder.

“We actually are all bound up in it together,” Bennett says. “These different figures, from various fields, eras, and lifelong pursuits are in here together precisely because they helped weave the story of this country together. It’s a story that is still unfolding.”

Bennett is straightforward about the struggles many of his subjects faced. His choice to represent North Carolina is the poet George Moses Horton, an enslaved man who not only learned to read and write in the early 1800s — the state later made that illegal for enslaved persons, in 1830 — but made money selling poems to University of North Carolina students. Indeed, Horton’s work was published in the 1820s. Bennett writes that Horton’s public performance of his poetry was “an ancient art revived in the flesh of a prodigy in chains.”

Bennett’s unblinking regard for historical reality is a motif throughout the work. “To me it’s not only about exploring a history that a reader might feel connected to or want to learn more about,” he says. “It’s about honoring those who lived that history, who helped make some of the most beautiful parts of the present possible, through an engagement with the substance of their lives.”

Just my imagination

Many figures in “We (The People of the United States)” are artists, but of many forms. From watching VH1 as a child, Bennett got into the Beach Boys, and he devotes the California entry in the poem to them. Or as Bennett puts it, he was “newly initiated into a sound/I do not understand until I am old enough to be nostalgic/for windswept locales, and singular moments in time/I never lived through.”

Bennett was learning about the Beach Boys while growing up in Yonkers, New York, far from any California beaches. But then, Brian Wilson wasn’t a surfer either — he grew up in an industrial suburb of Los Angeles. Imagination was the coin of the realm for Wilson, something Bennett understood when Beach Boys songs would veer off in unexpected directions.

“I’ve always been drawn to moments of great surprise, or revelation, in the works of art I love,” Bennett says. “Which is part of why I’ve dedicated my life to poetry. You think one thing is happening in a poem, and suddenly that shock comes, that unexpected turn, or volta. Brian Wilson always had a great understanding of that. It works in pop music. Surprise, sometimes, is a shift in register that takes you higher.”

Various poems in the collection have down-to-earth origins. Bennett remembers his father often fixing things in the family home, from toys to the boiler, saying, “Pass me the Phillips-head,” when he needed a screwdriver. Thus Oregon appears in the book: Portland is where the Phillips-head screwdriver was invented.

In conversation, Bennett notes the hopeful disposition of his father, who, after living through Jim Crow and serving in the Vietnam War, worked 10-hour shifts at the U.S. Postal Service to support his family. Even with all the difficulty he experienced in his life, Bennett’s father always encouraged his son to pursue his dreams.

“I’m grateful that I inherited a profound sense of belonging, and dignity, from my parents,” Bennett says. “There was always this feeling that we were part of a much larger story, and that we had a responsibility to tell the truth about the world as we knew it.”

And that’s really what Bennett’s new book is about.

“We can reckon with our history in its fullness and work, tirelessly, toward a world that’s worthy of the most vulnerable among us,” Bennett says. “Like Toni Morrison, we can ‘dream the world as it ought to be.’ And then make it real. That’s my vision.”

Ocean bacteria team up to break down biodegradable plastic

Mon, 03/16/2026 - 10:00am

Biodegradable plastics could help alleviate the plastic waste crisis that is polluting the environment and harming our health. But how long plastics take to degrade and how environmental bacteria work together to break them down is still largely unknown.

Understanding how plastics are broken down by microbes could help scientists create more sustainable materials and even new microbial recycling systems that convert plastic waste into useful materials.

Now MIT researchers have taken an important first step toward understanding how bacteria work together to break down plastic. In a new paper, the researchers uncovered the role of individual ocean bacteria in the breakdown of a widely used biodegradable plastic. They also showed the complementary processes microbes use to fully consume the plastic, with one microbe cleaving the plastic into its component chemicals and others consuming each chemical.

The researchers say it’s one of the first studies illuminating specific bacterial species’ role in the breakdown of plastic and indicates the speed of plastic degradation can vary widely depending on a few key factors.

“There is a lot of ambiguity about how long these materials actually exist in the environment,” says lead author Marc Foster, a PhD student in the MIT-WHOI Joint Program. “This shows plastic biodegradation is highly dependent on the microbial community where the plastic ends up. It’s also dependent on the plastics — the chemistry of the polymer and how they’re made as a product. It’s important to understand these processes because we’re trying to constrain the environmental lifetime of these materials.”

Joining Foster on the paper are MIT PhD candidate Philip Wasson; former MIT postdoc Andreas Sichert; MIT undergraduate Deborah Madden; Woods Hole Oceanographic Institution researchers Matthew Hayden and Adam Subhas; Chong Becker and Sebastian Gross of the international chemical and plastic company BASF; Otto Cordero, an MIT associate professor of civil and environmental engineering; Darcy McRose, MIT’s Thomas D. and Virginia W. Cabot Career Development Professor; and Desirée Plata, MIT’s School of Engineering Distinguished Climate and Energy Professor. The paper appears in the journal Environmental Science and Technology.

Uncovering collaboration

Scientists hope biodegradable plastic can be used to address the mountains of plastic waste piling up in our oceans and landfills.

“More than half of produced plastic is either sent to landfills or directly released into the environment,” Foster says. “But without knowing the specifics of different degradation processes, we won’t be able to accurately predict the lifetime of these materials and better control that degradation.”

To date, many studies into the biodegradation of plastics have focused on single microbial organisms, but Foster says that’s not representative of how most plastics are broken down in the environment.

“It’s really rare for a single bacterium to carry out the full degradation process because it requires a significant metabolic burden to carry all of the enzymatic functions to depolymerize the polymer and then use those chemical subunits as a carbon and energy source,” Foster says.

Other studies have sought to capture the molecular footprints of groups of bacteria as they degrade plastic, which gives a snapshot of the species involved without uncovering the mechanisms of action.

For this study, the researchers wanted to uncover the roles of specific bacterial species as they fully degraded plastic. They started with a type of biodegradable plastic known as an aromatic aliphatic co-polyester. Such plastic is used in shopping bags and food packaging. It’s also often laid across the soil of farms to prevent weeds and retain moisture.

To begin the study, researchers at BASF, which produces that type of plastic, first placed samples of the product into different depths of the Mediterranean Sea to let bacteria grow as a thin biofilm around the plastic. The company then shipped the samples to researchers at MIT, who isolated as many species of bacteria as possible from the samples. The researchers mixed those isolates and identified 30 bacterial species that continued to grow in abundance on the plastic.

Using carbon dioxide production as a measure of plastic degradation, the researchers isolated each bacterium and found one, Pseudomonas pachastrellae, that could depolymerize the plastic, breaking it into its three chemical components: terephthalic acid, sebacic acid, and butanediol.

But that bacterium couldn’t consume all three components on its own. One by one, the researchers exposed each bacterium to each chemical, finding no bacteria that could consume all three, although they did find some species that could consume one or two chemicals on their own.

Finally, the researchers selected five bacterial species based on their complementary breakdown abilities and showed that this small group could fully degrade the plastic just as well as the 30-member bacterial community.

“I was able to minimize the degradation process to this simplistic set of specific metabolic functions,” Foster says. “And then when I took out one bacterium, the mineralization dropped, which indicated the organism was controlling the degradation of the polymer. Then when I had each one of the bacteria alone in a culture, none of them could reach the same degradation as all five together, indicating there was this complementary function required. It worked much better than I thought it would.”

The researchers also found the five-member bacterial community couldn’t mineralize a different plastic, suggesting a given group of bacteria may be able to mineralize only specific plastics.

“It highlights that the microbes living where this plastic ends up are going to dictate the plastic’s lifetime,” Foster says.

Faster plastic degradation

Foster notes the bacteria in his study are likely specific to the Mediterranean Sea. The study also only involved bacteria that could survive in his lab environment. Still, Foster says it’s one of the first papers that identifies the roles of bacteria in consuming plastic.

“Most studies wouldn’t be able to identify the specific bacteria that’s controlling each complementary mineralization process,” Foster says. “Here we can say this bacteria controls degradation, these bacteria handle mineralization, and then we show the function of each bacteria and show that together, they can remove the entire polymer.”

Foster says the work is an important first step toward creating microbial systems that are better at breaking down plastic or converting it into something useful. In follow-up work for his PhD, he is exploring what makes successful bacterial pairs for faster plastic consumption and how enzymes dock on plastic particles to initiate and continue degradation.

The work was supported by the MIT Climate and Sustainability Consortium and BASF SE. Partial support was provided by the U.S. National Science Foundation Graduate Research Fellowship Program.

New sensor sniffs out pneumonia on a patient’s breath

Mon, 03/16/2026 - 12:00am

Diagnosing some diseases could be as easy as breathing into a tube. MIT engineers have developed a test to detect disease-related compounds in a patient’s breath. The new test could provide a faster way to diagnose pneumonia and other lung conditions. Rather than sit for a chest X-ray or wait hours for a lab result, a patient may one day take a breath test and get a diagnosis within minutes.

The new breath test is a portable, chip-scale sensor that traps and detects synthetic compounds, or “biomarkers,” of disease, which are initially attached to inhalable nanoparticles. The biomarkers serve as tiny tags that can only be unlocked and detached from the nanoparticle by a very particular key, such as a disease-related enzyme.

The idea is that a person would first breathe in the nanoparticles, similar to inhaling asthma medicine. If the person is healthy, the nanoparticles would eventually circulate out of the body intact. If a disease such as pneumonia is present, however, enzymes produced as a result of the infection would snip off the nanoparticles’ biomarkers. These untethered biomarkers would be exhaled and measured, confirming the presence of the disease.

Until now, detecting such exhaled biomarkers required laboratory-grade instruments that are not available in most doctor’s offices. The MIT team has now shown they can detect exhaled biomarkers of pneumonia at extremely low concentrations using the new portable, chip-scale breath test, which they’ve dubbed “PlasmoSniff.”

They plan to incorporate the new sensor into a handheld instrument that could be used in clinical or at-home settings to quickly diagnose pneumonia and other diseases.

“In practice, we envision that a patient would inhale nanoparticles and, within about 10 minutes, exhale a synthetic biomarker that reports on lung status,” says Aditya Garg, a postdoc in MIT’s Department of Mechanical Engineering. “Our new PlasmoSniff technology would enable detection of these exhaled biomarkers within minutes at the point of care.”

Garg is the first author of a study that details the team’s new sensor design. The study appears online in the journal Nano Letters. MIT co-authors include Marissa Morales, Aashini Shah, Daniel Kim, Ming Lei, Jia Dong, Seleem Badawy, Sahil Patel, Sangeeta Bhatia, and Loza Tadesse.

Tailored tags

PlasmoSniff is a project led by Loza Tadesse, an assistant professor of mechanical engineering at MIT. Tadesse’s group builds diagnostic devices that can be used directly in doctors’ offices and other point-of-care settings. Her work specializes in spectroscopy, using light to identify key fingerprints in a chemical or molecule.

Several years ago, Tadesse teamed up with Sangeeta Bhatia, the John and Dorothy Wilson Professor of Health Sciences and Technology and of Electrical Engineering and Computer Science at MIT. Bhatia’s group focuses in part on developing nanoparticle sensors — tiny particles that can be tagged with a synthetic biomarker. Bhatia can tailor these biomarkers to cleave from their nanoparticle only in the presence of specific “protease” enzymes that are produced by certain diseases.

In work that was reported in 2020, Bhatia’s group demonstrated they could detect cleaved biomarkers of pneumonia from the breath of infected mice. The biomarkers were exhaled at extremely low concentrations, of about 10 parts per billion. Nevertheless, the researchers were able to detect the compounds using mass spectrometry — a technology that is highly sensitive but requires bulky and expensive instrumentation that is not widely available in clinical settings.

“We thought, ‘How can we achieve that same sensitivity, in a way that’s accessible, at the point of need, and in a chip format that can be scalable in terms of cost?’” Tadesse says. 

A fingerprint trap

For their new study, Tadesse’s group looked to design a sensitive, portable breath test to quickly detect Bhatia’s biomarkers. Their new design centers on “plasmonics” — the study and manipulation of light and how it interacts with matter at the nanoscale.

The researchers noted that molecules exhibit characteristic vibrational modes, corresponding to the motions of atoms within their chemical bonds. These vibrations can be detected using Raman spectroscopy, an optical technique in which molecules are illuminated with light. A small fraction of the scattered light shifts in energy due to interactions with a molecule’s vibrations. By measuring these energy shifts, researchers can identify molecules based on their distinctive vibrational fingerprints.
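Raman shifts are conventionally reported as a wavenumber difference between the excitation light and the scattered light. As an illustration only (the wavelengths below are made-up values, not ones from the study), the conversion from wavelengths in nanometers to a shift in inverse centimeters looks like this:

```python
def raman_shift_cm1(excitation_nm, scattered_nm):
    """Raman shift, in inverse centimeters, between the excitation
    wavelength and the (longer) Stokes-scattered wavelength, both in nm."""
    return (1.0 / excitation_nm - 1.0 / scattered_nm) * 1e7  # nm^-1 -> cm^-1

# Example: 785 nm excitation scattered to about 851.9 nm corresponds
# to a shift near 1000 cm^-1, in the molecular "fingerprint" region.
shift = raman_shift_cm1(785.0, 851.9)
print(f"{shift:.0f} cm^-1")
```

Because each vibrational mode produces a shift at a characteristic wavenumber, the full pattern of shifts serves as the molecule’s fingerprint.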

To detect Bhatia’s biomarkers, however, they would need to isolate those comparatively few molecules from the dense cloud of other exhaled compounds. They would also need to boost the biomarkers’ vibrational signal, as the Raman signal scattered by an individual molecule is inherently extremely weak.

“This is a needle-in-a-haystack problem,” Tadesse says. “Our method detects that needle that would otherwise be embedded in the noise.”

The team’s new sensor is designed to trap target biomarkers and boost their vibrational signal. The core of the sensor is made from a thin gold film, above which the researchers suspended a layer of gold nanoparticles. The gold nanoparticles are coated with a porous silica shell, generating a 5-nanometer-wide gap between the gold nanoparticles and the gold film. The silica is modified to strongly bond with molecules of water. The hydrogen in water can in turn bond with the target biomarkers. If any biomarkers pass through the sensor’s gap, they stick to the water molecules like Velcro.

The sensor’s gap is engineered to strongly amplify light due to plasmonic resonance, where electrons in the nearby gold structures collectively oscillate in response to incoming light, concentrating the electromagnetic field into the gap. Biomarkers trapped in these gaps experience a greatly enhanced electromagnetic field, which amplifies their Raman scattering signal. The researchers can then measure the Raman scattered light, and compare the pattern to the biomarker’s known “fingerprint,” to confirm its presence.

The team worked with Daniel Kim, a graduate student in Bhatia’s lab, and tested the sensor’s performance on samples of lung fluid that they obtained from healthy mice. They spiked these samples with biomarkers of pneumonia that Bhatia’s group previously designed. They then placed the spiked fluid in a vial and heated it to evaporate the fluid, to simulate exhaled breath. They placed the new sensor on the underside of the vial’s cap and used a Raman spectrometer to measure the scattered light as the fluid vapor passed through the sensor.

Through these experiments, they showed the sensor quickly detected biomarkers of pneumonia at extremely low, clinically relevant concentrations.

“Our next goal is to have a breath collection system, like a mask you can breathe into,” Garg says. “A patient would first use something like an asthma inhaler to inhale the nanoparticles. They could then breathe through the mask sensor for five minutes. We could then integrate a handheld Raman spectrometer to detect whatever biomarker is breathed out, within minutes.”

Breath tests for disease, sometimes referred to as disease breathalyzers, are an emerging technology. Most designs are still in the experimental stage, and take different approaches to detect various conditions such as certain cancers, intestinal infections, and viruses such as Covid-19. The MIT team notes that its design can be used to detect diseases beyond pneumonia, as well as biomarkers that are not related to disease, as long as the biomarker of interest has a known vibrational “fingerprint.”

“It’s not just limited to these biomarkers or even diagnostic applications,” Tadesse says. “It can sniff out industrial chemicals or airborne pollutants as well. If a molecule can form hydrogen bonds with water, we can use its vibrational fingerprint to detect it. It’s a pretty universal platform.”

This work was supported, in part, by funding from Open Philanthropy (now Coefficient Giving). Several characterization and fabrication steps were conducted at MIT.nano.

From Idaho to MIT, on a quest to cut methane emissions

Sun, 03/15/2026 - 12:00am

Amid the hum of milking equipment and the shuffle of cow hooves, PhD student Audrey Parker and her collaborators pull a wagon along a dusty path through a dairy barn, measuring an invisible greenhouse gas drifting through the air. Most engineering students wouldn’t expect their graduate research to take them to a dairy farm, but for Parker, this is where some of the most impactful climate solutions are hiding in plain sight.

The scene was part of the civil and environmental engineering student’s PhD work exploring advanced yet practical technologies for mitigating methane emissions. Methane is far more effective at trapping heat in the atmosphere than carbon dioxide, and dairy farms are a major source of it; Parker’s wagon carried sensors to measure methane concentrations.

Now in her fourth year in the lab of Professor Desirée Plata, Parker looks forward to visiting such farms. When she’s not taking measurements, she can look across the rolling fields and think of home.

Parker grew up in Boise, Idaho. Her childhood was filled with backpacking trips, skiing, horseback riding, and otherwise enjoying what her natural surroundings had to offer.

“Growing up, we were always outside,” she says. “I knew how to cast a fly rod before I knew how to ride a bike.”

That experience motivated Parker to pursue studies related to preserving the environment she loved. She attended Boise State University as an undergraduate, where she studied sustainable materials development under the mentorship of Assistant Dean Paul Davis. In the summer before her senior year, she was accepted to the MIT Summer Research Program (MSRP), which equips students for graduate school by bringing them to MIT to conduct cutting-edge research. That’s where she began working with Plata, MIT’s Distinguished Climate and Energy Professor.

“They do a great job bringing in people of different backgrounds,” Parker says. “It wasn’t until I started working with Desirée that I started applying materials science as a tool to reduce greenhouse gas emissions. That was a profound insight.”

Parker graduated from Boise State University as a Top Ten Scholar, the highest academic honor granted to its graduating seniors, before driving across the country to begin her studies at MIT. She decided to devote her PhD to exploring methane mitigation strategies, building on her experience from MSRP.

Her focus is on methane emissions from two sources: air vented from coal mines, and dairy farms. Together, these account for a large portion of human-driven methane emissions. At both sources, the methane is far more dilute than at an average oil or gas well, which makes it challenging to capture and convert into less environmentally harmful molecules.

Parker also wanted to work with community members in the field during her PhD to ensure whatever technical solutions she developed are practical enough to implement at scale.

“Desirée’s approach is to make sure industry is aware of affordable and sustainable ways to remove methane from their operations, while also incorporating the nuanced expertise stakeholders offer,” Parker says. “I appreciate that she is focused on not just doing work for the chapter of a PhD thesis, but also making our work lead to real-world change.”

Parker’s research explores both quantifying methane at emission sources and designing technologies that could be used to convert methane into carbon dioxide, a molecule with significantly less climate warming potential.

“Methane naturally converts into carbon dioxide over the course of about 12 years in the atmosphere,” Parker explains. “The technology we work on simply speeds up this natural process to achieve near-term climate benefits.”
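
The warming-potential arithmetic behind that conversion can be sketched in a few lines. The 100-year GWP value below is an assumption, not a figure from the article (IPCC assessments put fossil methane at roughly 28–30 kg CO2-equivalent per kg):

```python
# Rough estimate of the climate benefit of oxidizing methane to CO2.
# The GWP figure is an assumption (IPCC 100-year values for methane
# are roughly 28-30); exact numbers vary by assessment report.
GWP100_CH4 = 28.0          # kg CO2-equivalent per kg CH4 (assumed)
M_CO2, M_CH4 = 44.0, 16.0  # molar masses, g/mol

def net_benefit_per_kg_ch4(gwp=GWP100_CH4):
    """CO2e avoided per kg of methane fully converted to CO2."""
    co2_produced = M_CO2 / M_CH4  # 2.75 kg CO2 made per kg CH4 burned
    return gwp - co2_produced     # kg CO2e avoided overall

print(round(net_benefit_per_kg_ch4(), 2))  # -> 25.25
```

In other words, even though the conversion itself emits CO2, each kilogram of methane destroyed avoids roughly 25 kilograms of CO2-equivalent warming under these assumptions.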

The main technology Parker studies is a catalyst made from zeolites, abundant and inexpensive minerals with complex internal structures like honeycombs. Parker dopes the zeolites with copper and explores ways to apply external heat to facilitate complete methane conversion.

Parker and her collaborators assess the durability of the material and its performance under different conditions. Recognizing that real-world deployment environments can often be difficult to replicate in the lab, they test catalyst performance in operating dairy farms. In a 2025 paper, she analyzed the use of thermal energy to sustain methane combustion in catalyst materials, detailing when the approach actually brings net climate benefits.

“If your methane concentrations are low and you’re having to provide so much energy into your system, you could become climate-harmful, but there’s also a context where it’s beneficial,” Parker explains. “Understanding where that trade-off occurs is critical to making sure your mitigation technologies are having the benefits you’re anticipating.”

That kind of systems-level thinking is necessary to understand the long-term impacts of interconnected climate systems.

“It lays a framework that other people can use for their mitigation technologies,” Parker says. “There are trade-offs with every technology, and being transparent about that is important. I think as academics it’s easy to get tunnel vision based on our research. There’s such limited funding for mitigation technologies overall and so making sure those few funding dollars are allocated appropriately is critical for achieving our climate goals.”

Some of Parker’s research findings have informed the design of a pilot-scale methane mitigation system in a coal mine, although she hasn’t gotten a chance to visit it just yet.

Outside of her research, Parker co-chairs the MIT Congressional Visit Days, a program run by the Science Policy Initiative that sends MIT students to Washington to meet with lawmakers and advocate for science-based policies.

“On-the-Hill advocacy teaches you about the policy landscape in unparalleled ways,” Parker says. “Those conversations you have with lawmakers can drive transformational change to bridge the gap between science and policy. It is our job as scientists to communicate our findings clearly so policymakers can design regulations that enable effective solutions.”

This spring, Parker is also leading a workshop for the MIT Climate and Sustainability Consortium around financing the voluntary carbon market. Here, she plans to leverage industry insights to catalyze private capital at the scale needed to meet our climate goals.

Parker also still gets plenty of outdoor time, hiking outside Boston and skiing a bit, though she says the New England ski mountains don’t compare to those out west.

Parker, who expects to complete her PhD next year, says it’s gratifying to be able to devote her research to protecting the environment she loves so much.

“For me it’s about preserving the world I grew up in,” Parker says. “Especially in Idaho, where communities are experiencing more frequent wildfires and more intense droughts. As a child, the natural world provided so much wonder. Today, that same sense of wonder is what drives me to protect it.”

Financial Times ranks MIT Sloan No. 1 in 2026 Global MBA Ranking

Fri, 03/13/2026 - 3:35pm

The Financial Times has placed MIT Sloan School of Management at the top of its recently released 2026 Global MBA Ranking. It is the first time the school has earned the No. 1 spot on the list.

In its announcement of the rankings, the publication noted MIT’s school of management tops the list “at a time of sharpening focus from students on the importance of technology, including artificial intelligence, as they prepare for disruptions in the workplace.”

Global education editor Andrew Jack said in the Financial Times News Briefing podcast that MIT is “very much at the center of the tech revolution that we are seeing.” He added, “there’s no question that we’re talking more and more about artificial intelligence and expertise around some of the technical skills related and notably how you might apply AI in the workplace. That certainly reflects both its technical and engineering computer science skills historically. And [MIT Sloan] is doing a lot with those other departments in the university. So I think that says something very much about how the wider job market and the aspirations of students are evolving.”

“MIT Sloan operates at the intersection of management and technology,” says Richard Locke, the John C Head III Dean of the MIT Sloan School of Management. “Our students and alumni are employing artificial intelligence to solve complex problems in the world and across industries. At MIT Sloan, we focus on doing that work in a way that centers human capabilities, ensuring artificial intelligence extends what humans can do to improve organizations and the world.”

To determine its rankings, the Financial Times considers 21 criteria. Eight of those — accounting for 56 percent of the ranking’s weight — are determined by surveying alumni three years after they have completed their MBA program. School data are used for 34 percent of the rank. The remaining 10 percent measures how often full-time faculty publish in top journals.

MIT Sloan ranked fourth for its alumni network, which measures how effectively alumni support one another through career advice, internships, job opportunities, and recruiting efforts. 

“This ranking underscores the strength of our global alumni community,” says Kathy Hawkes, senior associate dean of external engagement. “'Sloanies Helping Sloanies' isn’t just a phrase — it’s a lived experience. Our 31,000 alumni actively open doors, share expertise, and invest in each other’s success.”

Scientists discover genetics behind leaky brain blood vessels in Rett syndrome

Fri, 03/13/2026 - 3:15pm

MIT researchers have discovered that two common genetic mutations that cause Rett syndrome each set off a molecular chain of events that compromises the structural integrity of developing brain blood vessels, making them leaky. The study traces the problem to overexpression of a particular microRNA (miRNA-126-3p), and shows that tamping down the miRNA’s levels helps to rescue the vascular defect.

Rett syndrome is a severe developmental disorder affecting both the brain and body. It is caused by various mutations in the widely expressed MECP2 gene, but the first symptoms don’t become apparent until affected children (mostly girls) reach 2-3 years of age. Because that’s a critical time in development for the brain’s blood vessels, neuroscientists in The Picower Institute for Learning and Memory at MIT embarked on a study to model how two common but distinct MeCP2 mutations may affect vascular development and contribute to the disease’s profound neurological pathology.

To conduct the research published recently in Molecular Psychiatry, lead author Tatsuya Osaki and senior author Mriganka Sur developed advanced human tissue cultures to model vessel development, with and without the MeCP2 mutations. The cultures not only enabled them to model and closely observe how the mutations affected the vessels, but also allowed them to molecularly dissect the problems they observed and then to test an intervention that helped.

“A role for microRNAs in Rett syndrome has been shown, but now demonstrating that miRNA-126-3p is actually downstream of MeCP2 and directly implicated in the endothelial cell dysfunction is an important piece of the Rett syndrome puzzle,” says Sur, the Newton Professor of Neuroscience in the Picower Institute and MIT’s Department of Brain and Cognitive Sciences.

Building vessels and spotting leaks

Building on years of tissue engineering experience, including time as a postdoc in the lab of co-author and MIT mechanical engineering and biological engineering Professor Roger D. Kamm, Osaki built “3-dimensional microvascular networks” using human induced pluripotent stem cells (iPS cells) donated by patients with Rett syndrome. The donated cells were induced to become stem cells, and then endothelial cells (the backbone of blood vessels). Embedded in a gel and mixed with fibroblast cells, the endothelial cells self-assembled into networks of tubes, which Osaki then hooked up to microfluidics to provide circulation.

One set of the cultures harbored the mutation R306C; Osaki created a control microvasculature that was genetically identical except that it lacked the mutation. Another set of the cultures had the R168X mutation, and again Osaki paired it with a control culture, created using CRISPR, that was identical except for the mutation.

The research team chose these two mutations because they are each relatively common but affect the MeCP2 gene differently, Sur says. The finding that each of these distinct Rett-causing mutations ultimately led to upregulating miRNA-126-3p and undermining blood vessel integrity suggests that vascular problems are indeed a central feature of the disease.

“There is something common across these mutations,” Sur says.

In particular, lab tests showed that the vessels harboring either mutation showed reduced expression of a protein called ZO-1, which is critical for ensuring that the junctions among endothelial cells in blood vessels form a tight seal (like the grout in a tile floor). ZO-1 also didn’t localize to those junctions as well. Sure enough, further tests showed that the Rett-mutation vessel cultures were relatively leaky compared to the controls.

Similar deficiencies were evident in another cell culture the team created, in which they added astrocyte cells to even more closely simulate the blood-brain barrier (BBB), which tightly regulates what can go in or out of blood vessels and into the brain. BBB problems are widely suspected of contributing to neurodegenerative diseases such as Alzheimer’s, Huntington’s, ALS, and frontotemporal dementia.

To gain some insight into how the vascular problems might undermine neural function in Rett syndrome, the researchers exposed neurons to medium from their Rett vasculature cultures. Those nerve cells showed reduced electrical activity, a possible sign that secretions from the Rett endothelial cells disrupted the neurons.

Catching a culprit

Generally speaking, the role of MeCP2 is to repress the expression of other genes. The scientists’ expectation, therefore, was that when MeCP2 is compromised by mutations the result would be overexpression of many genes. Yet ZO-1 was downregulated. Something had to account for that and miRNAs were a suspect, Osaki says, because they function as regulators of gene expression.

“That’s why we hypothesized that we should have some mediator between the MeCP2 mutation and ZO-1 downregulation and the BBB permeability increase,” Osaki says. “We focused on the microRNAs.”

Indeed, by profiling miRNAs in the Rett cultures and the controls, the scientists found that miRNA-126-3p was overexpressed. And by sequencing RNA, the team identified more molecular pathways needed to support vascular integrity that were dysregulated in the Rett cultures.

While the sequencing and profile associated miRNA-126-3p upregulation with the altered molecular chain of events, Osaki and Sur sought more definitive proof. To obtain it, they treated the Rett-mutation cultures with an “antisense” — a molecule that reduces miRNA-126-3p levels. Doing that resulted in an increase in ZO-1 expression and a partial restoration of endothelial cell barrier function — meaning less leakiness — in the vessel cultures. Knocking down the miRNA’s expression also restored the molecular pathways the scientists were tracking to more healthy states.

It turns out that there is a drug that inhibits miR-126 called miRisten that is undergoing clinical testing for leukemia. Osaki and Sur say they are planning on administering it to mice modeling Rett syndrome to see if it helps them.

In addition to Osaki, Sur, and Kamm, the paper’s co-authors are Zhengpeng Wan, Koji Haratani, Ylliah Jin, Marco Campisi, and David Barbie.

Funding for the study came from sources including the National Institutes of Health, a MURI grant, The Freedom Together Foundation, and the Simons Center for the Social Brain.

Next-generation geothermal energy: Promise, progress, and challenges

Fri, 03/13/2026 - 2:55pm

Geothermal energy, a clean, continuous energy source accessible in many locations, has been slow to catch on. Nearly 2,000 years ago, the Romans made extensive use of geothermal energy — heat from the Earth — including at the spa complex at present-day Bath, England. Electricity was first produced from geothermal sources in the early 1900s in Italy. In the United States, the Geysers geothermal field in California began generating electricity at scale in 1960, and routinely produces more than 725 megawatts of baseload power today. 

According to the International Energy Agency (IEA), geothermal energy still supplies less than 1 percent of global electricity demand, although countries like Kenya (more than 40 percent of electricity generation) and Iceland (nearly 30 percent of electricity and 90 percent of the heating) have seen widespread adoption.

In recent years, technological advances, an influx of private capital, and shifting energy and environmental policies have driven renewed interest in expanding development of geothermal energy. If project costs continue to decline, the IEA predicts that geothermal energy could meet 15 percent of the growth in global electricity demand between 2024 and 2050. Many countries, including the United States, Indonesia, New Zealand, and Turkey, are prioritizing an expansion of geothermal energy as part of their broader energy strategies.

Achieving large-scale electricity generation from geothermal sources will depend on a significant expansion of so-called next-generation geothermal. This refers to tapping heat from source rocks at temperatures of 100 degrees Celsius to more than 400 C, often at depths of several kilometers below the surface. Last month, U.S. Congressional Rep. Jake Auchincloss (D-MA) and Rep. Mark Amodei (R-NV) introduced bipartisan legislation to promote research, testing, and development of one type of next-generation geothermal energy known as superhot rock.

Geothermal energy at MIT

Through its leadership in producing the influential 2006 “The Future of Geothermal Energy” report led by former MIT professor Jeff Tester, MIT and the predecessor of the MIT Energy Initiative (MITEI) played an important role in national geothermal strategy two decades ago. In 2008, researchers at the Plasma Science and Fusion Center (PSFC) invented millimeter-wave drilling with support from one of the first MITEI seed innovation grants. The technology, which could be particularly useful for geothermal installations in superhot and deep rock, is being commercialized by MIT spinout Quaise Energy.

MITEI is sponsoring next-generation geothermal projects through its Future Energy Systems Center. A project led by MITEI Research Scientist Pablo Duenas-Martinez focuses on the techno-economics of electricity generation from a geothermal plant co-located with a data center, a timely topic given the proliferation of data center power purchase agreements for electricity generated by geothermal energy. MITEI’s March 4 Spring Symposium focused on next-generation geothermal energy for the generation of firm power, and many of the leading exploration, drilling, reservoir development, and advanced technology companies working in this area sent panelists and speakers. On March 5, MITEI collaborated with the Clean Air Task Force (CATF) to co-host the GeoTech Summit, which explored accelerating technology development for and investment in next-generation geothermal.

To prepare for the recent symposium, MITEI organized a geothermal bootcamp during MIT’s Independent Activities Period (IAP) that introduced more than 40 members of the MIT community to geothermal basics, key technologies, and related MIT research. Carolyn Ruppel, MITEI’s deputy director of science and technology and the organizer of the IAP bootcamp and Spring Symposium, says, “MITEI’s member companies, which represent leading voices on energy, power generation, infrastructure, heavy industry, and digital technology, are increasingly approaching us about their interest in next-generation geothermal. There is also good momentum building across MIT, ranging from projects at the Earth Resources Laboratory to the millimeter-wave testbed being developed by PSFC and its MIT collaborators, individual projects in academic departments, and of course the work MITEI has been funding.”

Geothermal basics

Temperatures a few tens of meters below the ground are typically stable year-round. In some locations, these temperatures are warmer than the surface in winter and cooler in summer, making it possible to use geothermal heat pumps to moderate temperatures in buildings throughout the year. Overlooking the Charles River, Boston University’s 19-story Center for Computing and Data Science meets an estimated 90 percent of its heating and cooling needs using this kind of geothermal system. At the scale of large institutions or whole towns, thermal networks, district heating, and other approaches can efficiently supply heat from shallow geothermal sources without producing greenhouse gas emissions.

Tapping hotter and usually deeper geothermal sources could generate large amounts of electricity for decades at a single site. Next-generation geothermal is the term applied to these higher-temperature systems developed using enhanced, advanced, and superhot technologies. Enhanced geothermal refers to circulating fluids through engineered fracture systems in deep, dry rock with relatively low native permeability. Advanced geothermal adopts a closed loop approach, in which a working fluid is heated by circulating it through pipes embedded in the subsurface. Superhot geothermal, which is in its infancy, will likely use enhanced geothermal technology to circulate supercritical water through rock at almost 400 C.   
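
A back-of-envelope calculation shows why hotter sources matter: the heat a circulation loop carries out of the ground scales with the flow rate and the temperature drop across the loop. The flow rate and temperatures below are illustrative assumptions, not values from the article:

```python
# Thermal power extracted by a geothermal circulation loop:
# P = mass flow * specific heat * temperature drop.
CP_WATER = 4186.0  # J/(kg*K), specific heat of liquid water

def thermal_power_mw(flow_kg_s, t_out_c, t_in_c, cp=CP_WATER):
    """Heat extracted from the formation, in megawatts (thermal)."""
    return flow_kg_s * cp * (t_out_c - t_in_c) / 1e6

# Hypothetical loop: 50 kg/s of water returning at 200 C, injected at 50 C.
print(round(thermal_power_mw(50.0, 200.0, 50.0), 1))  # -> 31.4 MW thermal
```

Doubling the temperature drop doubles the extracted heat for the same flow, which is why superhot resources are so attractive per well drilled.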

Next-generation geothermal

Drill deep enough and higher-temperature resources are nearly ubiquitous beneath the continents, but early-stage development must focus on the most promising sites, where the methods and technologies to routinely reach these hotter rocks can be tested and refined. Locations like Iceland and the southwestern U.S. state of Nevada, where tectonic plates are separating or the Earth’s outer layer is thinning, have hotter temperatures closer to the surface than areas like the northeastern United States, where the Earth’s crust is old, thick, and cooler. Even in the southwestern United States, though, reaching the high temperatures required for generating electricity via geothermal systems will require routinely drilling to depths of greater than 4 kilometers in crystalline rock. This is significantly more challenging than drilling in the sedimentary basins that host most of the world’s oil and gas reserves. 

A location suitable for a next-generation geothermal installation requires not only heat, but also a fluid (usually water) to carry the heat. Water circulated through the rock formation to extract heat can be present naturally or brought from elsewhere and injected into the reservoir. This type of system also requires connected permeability, such as an engineered fracture network oriented to prevent significant fluid losses and to channel fluid toward the extraction well. Closed-loop (advanced) systems replace the freely circulating water with a working fluid that has favorable thermal characteristics and that is confined in piping.

Various geophysical methods are used to find sites with sufficient heat within a few kilometers of the surface, a prerequisite for their development as next-generation geothermal installations. Apart from direct measurements of temperatures in test boreholes, electrical resistivity and magnetotelluric surveys are among the most useful for inferring subsurface temperature regimes. Both techniques infer the electrical conductivity structure beneath the ground, permitting the identification of relatively warmer and more permeable rocks.

Drilling is often the most time-consuming and expensive part of preparing a site for a geothermal plant. This is particularly true for next-generation geothermal, where the targets can be deep, or the system design may require large-scale horizontal drilling. Over the past few years, numerous innovations have increased drilling rates and attainable depths and temperatures and also lowered costs. Nonetheless, even with high-quality geophysical surveys, “you may spend $10 million on an exploratory well and find no heat,” says Andrew Inglis, the geothermal channel venture builder at MIT Proto Ventures. 

Superhot geothermal, a next-generation geothermal approach that is advancing rapidly, presents special challenges. The metal drilling tools, the rocks in the formation, and circulating fluids all behave differently at temperatures of several hundred degrees, and standard practices, materials, and sensors must be significantly modified to tolerate the tough conditions. Once temperatures exceed 374 C in a borehole even ~1 km deep, water reaches a supercritical state. This presents substantial advantages for extracting heat from the formation, but introduces the specter of rapid metal corrosion and precipitation of salts and silica that can quickly foul a borehole. Researchers are investigating substitution of supercritical carbon dioxide for water as a working fluid for superhot geothermal.

MIT innovations advancing next-generation geothermal

The millimeter-wave drilling technology invented at PSFC and being commercialized by Quaise Energy is the highest-profile next-generation geothermal innovation to emerge from MIT so far. Millimeter-wave technology uses microwave energy to vaporize rock and could prove to be several times faster than conventional drilling. PSFC and a multidisciplinary MIT team are devising a dedicated laboratory to study how millimeter-wave drilling interacts with crystalline rock at realistic pressure and temperature conditions, and to test improvements to the existing technology. Steve Wukitch, interim director and principal research scientist at PSFC, notes that “the facility we are building at MIT will allow us to test samples 500 times larger than is currently possible. This is an important step for investigating technologies that could unlock superhot geothermal energy."

MIT Proto Ventures, which focuses on creating startups based on technology invented at MIT, currently hosts a dedicated geothermal energy channel led by Inglis. Since arriving at MIT in late 2024, Inglis has identified inventions and research that could advance next-generation geothermal from disciplines as disparate as mechanical and materials engineering, earth sciences, and chemistry. Examples of technologies originating with MIT researchers include sensors that measure micro-cracking in high-temperature rock, advanced metal alloys that could handle superhot fluids at a fraction of the cost of titanium, and anti-fouling coatings to protect pipes from the caustic geofluids common in hot, deep systems.

MITEI Spring Symposium

At the recent MITEI Spring Symposium, these MIT innovators introduced their technology to MITEI member companies in a session led by Inglis. Wukitch, who moderated a panel on advanced drilling, described the planned millimeter-wave testbed, and Duenas-Martinez led a panel on power generation and storage. Terra Rogers, director for superhot rock geothermal energy at the CATF and the organizer of the joint CATF-MITEI GeoTech Summit on March 5, led a discussion of international and U.S. policies and the regulatory environment for expansion of next-generation geothermal. 

Poster presenters included MIT graduate students and researchers, MIT’s D-Lab, and the Geo@MIT geothermal-focused MIT student group, which was recognized with a 2024 bonus award by the U.S. Department of Energy’s Geothermal Technologies Office in the nationwide EnergyTech University Prize competition.  

How the brain handles the “cocktail party problem”

Fri, 03/13/2026 - 6:00am

MIT neuroscientists have figured out how the brain is able to focus on a single voice among a cacophony of many voices, shedding light on a longstanding neuroscientific puzzle known as the cocktail party problem.

This attentional focus becomes necessary when you’re in any crowded environment, such as a cocktail party, with many conversations going on at once. Somehow, your brain is able to follow the voice of the person you’re talking to, despite all the other voices that you’re hearing in the background.

Using a computational model of the auditory system, the MIT team found that amplifying the activity of the neural processing units that respond to features of a target voice, such as its pitch, allows that voice to be boosted to the forefront of attention.

“That simple motif is enough to cause much of the phenotype of human auditory attention to emerge, and the model ends up reproducing a very wide range of human attentional behaviors for sound,” says Josh McDermott, a professor of brain and cognitive sciences at MIT, a member of MIT’s McGovern Institute for Brain Research and Center for Brains, Minds, and Machines, and the senior author of the study.

The findings are consistent with previous studies showing that when people or animals focus on a specific auditory input, neurons in the auditory cortex that respond to features of the target stimulus amplify their activity. This is the first study to show that this extra boost is enough to explain how the brain solves the cocktail party problem.

Ian Griffith, a graduate student in the Harvard Program in Speech and Hearing Biosciences and Technology, who is advised by McDermott, is the lead author of the paper. MIT graduate student R. Preston Hess is also an author of the paper, which appears today in Nature Human Behaviour.

Modeling attention

Neuroscientists have been studying the phenomenon of selective attention for decades. Many studies in people and animals have shown that when focusing on a particular stimulus like the sound of someone’s voice, neurons that are tuned to features of that voice — for example, high pitch — amplify their activity.

When this amplification occurs, neurons’ firing rates are scaled upward, as though multiplied by a number greater than one. It has been proposed that these “multiplicative gains” allow the brain to focus its attention on certain stimuli. Neurons that aren’t tuned to the target feature exhibit a corresponding reduction in activity.

“The responses of neurons tuned to features that are in the target of attention get scaled up,” Griffith says. “Those effects have been known for a very long time, but what’s been unclear is whether that effect is sufficient to explain what happens when you’re trying to pay attention to a voice or selectively attend to one object.”

This question has remained unanswered because computational models of perception haven’t been able to perform attentional tasks such as picking one voice out of many. Such models can readily perform auditory tasks when there is an unambiguous target sound to identify, but they haven’t been able to perform those tasks when other stimuli are competing for their attention.

“None of our models has had the ability that humans have, to be cued to a particular object or a particular sound and then to base their response on that object or that sound. That’s been a real limitation,” McDermott says.

In this study, the MIT team wanted to see if they could train models to perform those types of tasks by enabling the model to produce neuronal activity boosts like those seen in the human brain.

To do that, they began with a neural network that they and other researchers have used to model audition, and then modified the model to allow each of its stages to implement multiplicative gains. Under this architecture, the activation of processing units within the model can be boosted up or down depending on the specific features they represent, such as pitch.

To train the model, on each trial the researchers first fed it a “cue”: an audio clip of the voice that they wanted the model to pay attention to. The unit activations produced by the cue then determined the multiplicative gains that were applied when the model heard a subsequent stimulus.

“Imagine the cue is an excerpt of a voice that has a low pitch. Then, the units in the model that represent low pitch would get multiplied by a large gain, whereas the units that represent high pitch would get attenuated,” Griffith says.

Then, the model was given clips featuring a mix of voices, including the target voice, and asked to identify the second word said by the target voice. The model activations to this mixture were multiplied by the gains that resulted from the previous cue stimulus. This was expected to cause the target voice to be “amplified” within the model, but it was not clear whether this effect would be enough to yield human-like attentional behavior.
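
The cue-then-gain procedure described above can be sketched in a few lines. This is not the authors' model; the unit responses and the gain rule are hypothetical, meant only to show how cue-driven multiplicative gains can pull a target-matched unit to the forefront:

```python
import numpy as np

def gains_from_cue(cue_act, sharpness=2.0):
    """Map cue activations to per-unit multiplicative gains.

    Units that responded strongly to the cue get gains above 1;
    units that responded weakly get attenuated (gain below 1).
    """
    norm = cue_act / (cue_act.mean() + 1e-8)
    return norm ** sharpness

# Hypothetical responses of four feature-tuned units.
cue = np.array([0.1, 0.2, 1.5, 0.2])      # response to the cued voice alone
mixture = np.array([0.8, 0.9, 1.0, 0.7])  # response to the mixture of voices

attended = mixture * gains_from_cue(cue)   # apply cue-derived gains
print(attended.argmax())  # cue-matched unit now dominates -> 2
```

Before the gains, the mixture activations are nearly uniform; after them, the unit tuned to the cued voice's features dominates, which is the core of the attentional motif the study tests.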

The researchers found that under a variety of conditions, the model performed very similarly to humans, and it tended to make errors similar to those that humans make. For example, like humans, it sometimes made mistakes when trying to focus on one of two male voices or one of two female voices, which are more likely to have similar pitches.

“We did experiments measuring how well people can select voices across a pretty wide range of conditions, and the model reproduces the pattern of behavior pretty well,” Griffith says.

Effects of location

Previous research has shown that in addition to pitch, spatial location is a key factor that helps people focus on a particular voice or sound. The MIT team found that the model also learned to use spatial location for attentional selection, performing better when the target voice was at a different location from distractor voices.

The researchers then used the model to discover new properties of human spatial attention. Using their computational model, the researchers were able to test all possible combinations of target locations and distractor locations, an undertaking that would be hugely time-consuming with human subjects.

“You can use the model as a way to screen large numbers of conditions to look for interesting patterns, and then once you find something interesting, you can go and do the experiment in humans,” McDermott says.

These experiments revealed that the model was much better at correctly selecting the target voice when the target and distractor were at different locations in the horizontal plane. When the sounds were instead separated in the vertical plane, this task became much more difficult. When the researchers ran a similar experiment with human subjects, they observed the same result.

“That was just one example where we were able to use the model as an engine for discovery, which I think is an exciting application for this kind of model,” McDermott says.

Another application the researchers are pursuing is using this kind of model to simulate listening through a cochlear implant. These studies, they hope, could lead to improvements in cochlear implants that could help people with such implants focus their attention more successfully in noisy environments.

The research was funded by the National Institutes of Health.

Can AI help predict which heart-failure patients will worsen within a year?

Thu, 03/12/2026 - 5:30pm

Characterized by weakened or damaged heart musculature, heart failure results in the gradual buildup of fluid in a patient’s lungs, legs, feet, and other parts of the body. The condition is chronic and incurable, often leading to arrhythmias or sudden cardiac arrest. For many centuries, bloodletting and leeches were the treatment of choice, famously practiced by barber surgeons in Europe, during a time when physicians rarely operated on patients. 

In the 21st century, the management of heart failure has become decidedly less medieval: Today, patients are treated with a combination of healthy lifestyle changes, prescription medications, and sometimes pacemakers. Yet heart failure remains one of the leading causes of morbidity and mortality, placing a substantial burden on health-care systems across the globe. 

“About half of the people diagnosed with heart failure will die within five years of diagnosis,” says Teya Bergamaschi, an MIT PhD student in the lab of Nina T. and Robert H. Rubin Professor Collin Stultz and the co-first author of a new paper introducing a deep learning model for predicting heart failure. “Understanding how a patient will fare after hospitalization is really important in allocating finite resources.”

The paper, published in the Lancet journal eClinicalMedicine by a team of researchers at MIT, Mass General Brigham, and Harvard Medical School, shares results from developing and testing PULSE-HF, which stands loosely for “Predict changes in left ventricULar Systolic function from ECGs of patients who have Heart Failure.” The project was conducted in Stultz’s lab, which is affiliated with the MIT Abdul Latif Jameel Clinic for Machine Learning in Health. Developed and retrospectively tested across three different patient cohorts from Massachusetts General Hospital, Brigham and Women’s Hospital, and MIMIC-IV (a publicly available dataset), the deep learning model accurately predicts changes in the left ventricular ejection fraction (LVEF), which is the percentage of blood being pumped out of the left ventricle of the heart.

A healthy human heart pumps out about 50 to 70 percent of blood from the left ventricle with each beat — anything less is considered a sign of a potential problem. “The model takes an [electrocardiogram] and outputs a prediction of whether or not there will be an ejection fraction within the next year that falls below 40 percent,” says Tiffany Yau, an MIT PhD student in Stultz’s lab who is also co-first author of the PULSE-HF paper. “That is the most severe subgroup of heart failure.” 

If PULSE-HF predicts that a patient’s ejection fraction is likely to worsen within a year, the clinician can prioritize the patient for follow-up. Subsequently, lower-risk patients can reduce their number of hospital visits and the amount of time spent getting 10 electrodes adhered to their body for a 12-lead ECG. The model can also be deployed in low-resource clinical settings, including doctors’ offices in rural areas that don’t typically have a cardiac sonographer employed to run ultrasounds on a daily basis.

“The biggest thing that distinguishes [PULSE-HF] from other heart failure ECG methods is instead of detection, it does forecasting,” says Yau. The paper notes that to date, no other methods exist for predicting future LVEF decline among patients with heart failure.

During the testing and validation process, the researchers used a metric known as “area under the receiver operating characteristic curve” (AUROC) to measure PULSE-HF’s performance. AUROC is typically used to measure a model’s ability to discriminate between classes on a scale from 0 to 1, with 0.5 being random and 1 being perfect. PULSE-HF achieved AUROCs ranging from 0.87 to 0.91 across all three patient cohorts.
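The AUROC has an intuitive reading: it is the probability that a randomly chosen positive case receives a higher score than a randomly chosen negative case (ties count as half). A minimal pure-Python version, with made-up labels and risk scores, makes the computation concrete:

```python
# Minimal AUROC: probability a random positive outscores a random negative.

def auroc(labels, scores):
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 1, 1, 0, 0, 0]              # 1 = EF fell below 40% within a year
scores = [0.9, 0.8, 0.4, 0.7, 0.3, 0.2]  # illustrative model risk scores

print(round(auroc(labels, scores), 3))   # prints 0.889
```

A perfect model scores 1.0, and a coin flip scores 0.5, which is what makes the 0.87 to 0.91 range reported for PULSE-HF a strong result.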

Notably, the researchers also built a version of PULSE-HF for single-lead ECGs, meaning only one electrode needs to be placed on the body. While 12-lead ECGs are generally considered superior for being more comprehensive and accurate, the performance of the single-lead version of PULSE-HF was just as strong as the 12-lead version.

The idea behind PULSE-HF is elegantly simple, but, like most clinical AI research, executing it was laborious. “It’s taken years [to complete this project],” Bergamaschi recalls. “It’s gone through many iterations.” 

One of the team’s biggest challenges was collecting, processing, and cleaning the ECG and echocardiogram datasets. While the model aims to forecast a patient’s ejection fraction, the labels for the training data weren’t always readily available. Much like a student learning from a textbook with an answer key, labeling is critical for helping machine-learning models correctly identify patterns in data.
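The labeling step can be sketched as follows. This is a hypothetical simplification, not the study's pipeline: the record format, the 365-day horizon as a hard cutoff, and the function name are assumptions. Each ECG is paired with any echocardiogram in the following year and labeled positive if a measured LVEF falls below 40 percent; ECGs with no follow-up echo have no "answer key" at all.

```python
# Hypothetical label construction for EF forecasting (records invented).
from datetime import date, timedelta

def label_ecg(ecg_date, echos, horizon_days=365, ef_cutoff=40.0):
    """Return 1 if any echo within the horizon shows LVEF < cutoff, else 0;
    return None if no follow-up echo exists (label unavailable)."""
    window = [ef for d, ef in echos
              if ecg_date < d <= ecg_date + timedelta(days=horizon_days)]
    if not window:
        return None                      # no answer key for this ECG
    return int(min(window) < ef_cutoff)

echos = [(date(2023, 6, 1), 55.0), (date(2024, 1, 15), 35.0)]
print(label_ecg(date(2023, 3, 1), echos))   # EF dropped below 40% → 1
print(label_ecg(date(2024, 6, 1), echos))   # no later echo → None
```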

Clean, linear text in the form of TXT files typically works best when training models. But echocardiogram files typically come in the form of PDFs, and when PDFs are converted to TXT files, the text (which gets broken up by line breaks and formatting) becomes difficult for the model to read. The unpredictable nature of real-life scenarios, like a restless patient or a loose lead, also marred the data. “There are a lot of signal artifacts that need to be cleaned,” Bergamaschi says. “It’s kind of a never-ending rabbit hole.”
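The kind of artifact cleaning Bergamaschi describes might look, in its simplest form, like the sketch below. This is illustrative only and not the study's actual pipeline: the amplitude threshold, the smoothing window, and the spike-replacement rule are all invented for the example.

```python
# Illustrative ECG cleaning: reject implausible spikes (e.g. a loose lead),
# then smooth with a short moving average. All thresholds are invented.

def clean_ecg(signal, max_abs=5.0, window=3):
    # Replace out-of-range samples with the previous valid sample.
    despiked, last = [], 0.0
    for s in signal:
        if abs(s) > max_abs:
            s = last
        despiked.append(s)
        last = s
    # Simple moving-average smoothing.
    half = window // 2
    smoothed = []
    for i in range(len(despiked)):
        chunk = despiked[max(0, i - half):i + half + 1]
        smoothed.append(sum(chunk) / len(chunk))
    return smoothed

raw = [0.1, 0.2, 9.9, 0.3, 0.2]   # 9.9 spike from a loose lead
print(clean_ecg(raw))
```

Real pipelines go much further (baseline-wander removal, powerline-noise filtering, beat detection), which is why Bergamaschi calls the cleanup "a never-ending rabbit hole."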

While Bergamaschi and Yau acknowledge that more complicated methods could help filter the data for better signals, there is a limit to the usefulness of these approaches. “At what point do you stop?” Yau asks. “You have to think about the use case — is it easiest to have this model that works on data that is slightly messy? Because it probably will be.”

The researchers anticipate that the next step for PULSE-HF will be testing the model in a prospective study on real patients, whose future ejection fraction is unknown.

Despite the challenges inherent to bringing clinical AI tools like PULSE-HF over the finish line, including the possible risk of prolonging a PhD by another year, the students feel that the years of hard work were worthwhile. 

“I think things are rewarding partially because they’re challenging,” Bergamaschi says. “A friend said to me, ‘If you think you will find your calling after graduation, if your calling is truly calling, it will be there in the one additional year it takes you to graduate.’ … The way we’re measured as researchers in [the ML and health] space is different from other researchers in ML space. Everyone in this community understands the unique challenges that exist here.”

“There’s too much suffering in the world,” says Yau, who joined Stultz’s lab after a health event made her realize the importance of machine learning in health care. “Anything that tries to ease suffering is something that I would consider a valuable use of my time.” 
