MIT Latest News
MIT engineers develop a magnetic transistor for more energy-efficient electronics
Transistors, the building blocks of modern electronics, are typically made of silicon. Because it’s a semiconductor, this material can control the flow of electricity in a circuit. But silicon has fundamental physical limits that restrict how compact and energy-efficient a transistor can be.
MIT researchers have now replaced silicon with a magnetic semiconductor, creating a magnetic transistor that could enable smaller, faster, and more energy-efficient circuits. The material’s magnetism strongly influences its electronic behavior, leading to more efficient control of the flow of electricity.
The team used a novel magnetic material and an optimization process that reduces the material’s defects, which boosts the transistor’s performance.
The material’s unique magnetic properties also allow for transistors with built-in memory, which would simplify circuit design and unlock new applications for high-performance electronics.
“People have known about magnets for thousands of years, but there are very limited ways to incorporate magnetism into electronics. We have shown a new way to efficiently utilize magnetism that opens up a lot of possibilities for future applications and research,” says Chung-Tao Chou, an MIT graduate student in the departments of Electrical Engineering and Computer Science (EECS) and Physics, and co-lead author of a paper on this advance.
Chou is joined on the paper by co-lead author Eugene Park, a graduate student in the Department of Materials Science and Engineering (DMSE); Julian Klein, a DMSE research scientist; Josep Ingla-Aynes, a postdoc in the MIT Plasma Science and Fusion Center; Jagadeesh S. Moodera, a senior research scientist in the Department of Physics; and senior authors Frances Ross, TDK Professor in DMSE; and Luqiao Liu, an associate professor in EECS, and a member of the Research Laboratory of Electronics; as well as others at the University of Chemistry and Technology in Prague. The paper appears today in Physical Review Letters.
Overcoming the limits
In an electronic device, silicon semiconductor transistors act like tiny light switches that turn a circuit on and off, or amplify weak signals in a communication system. They do this using a small input voltage.
But a fundamental physical limit of silicon semiconductors prevents a transistor from operating below a certain voltage, which hinders its energy efficiency.
To make more efficient electronics, researchers have spent decades working toward magnetic transistors that utilize electron spin to control the flow of electricity. Electron spin is a fundamental property that enables electrons to behave like tiny magnets.
So far, scientists have mostly been limited to using certain magnetic materials. These lack the favorable electronic properties of semiconductors, constraining device performance.
“In this work, we combine magnetism and semiconductor physics to realize useful spintronic devices,” Liu says.
The researchers replace the silicon in the surface layer of a transistor with chromium sulfur bromide, a two-dimensional material that acts as a magnetic semiconductor.
Due to the material’s structure, researchers can switch between two magnetic states very cleanly. This makes it ideal for use in a transistor that smoothly switches between “on” and “off.”
“One of the biggest challenges we faced was finding the right material. We tried many other materials that didn’t work,” Chou says.
They discovered that changing these magnetic states modifies the material’s electronic properties, enabling low-energy operation. And unlike many other 2D materials, chromium sulfur bromide remains stable in air.
To make a transistor, the researchers pattern electrodes onto a silicon substrate, then carefully align and transfer the 2D material on top. They use tape to pick up a tiny piece of material, only a few tens of nanometers thick, and place it onto the substrate.
“A lot of researchers will use solvents or glue to do the transfer, but transistors require a very clean surface. We eliminate all those risks by simplifying this step,” Chou says.
Leveraging magnetism
This lack of contamination enables their device to outperform existing magnetic transistors. Most others can only create a weak magnetic effect, changing the flow of current by a few percent or less. Their new transistor can switch or amplify the electric current by a factor of 10.
They use an external magnetic field to change the magnetic state of the material, switching the transistor using significantly less energy than would usually be required.
The material also allows them to control the magnetic states with electric current. This is important because engineers cannot apply magnetic fields to individual transistors in an electronic device. They need to control each one electrically.
The material’s magnetic properties could also enable transistors with built-in memory, simplifying the design of logic or memory circuits.
A typical memory device has a magnetic cell to store information and a transistor to read it out. Their method can combine both into one magnetic transistor.
“Now, not only are transistors turning on and off, they are also remembering information. And because we can switch the transistor with greater magnitude, the signal is much stronger so we can read out the information faster, and in a much more reliable way,” Liu says.
Building on this demonstration, the researchers plan to further study the use of electrical current to control the device. They are also working to make their method scalable so they can fabricate arrays of transistors.
This research was supported, in part, by the Semiconductor Research Corporation, the U.S. Defense Advanced Research Projects Agency (DARPA), the U.S. National Science Foundation (NSF), the U.S. Department of Energy, the U.S. Army Research Office, and the Czech Ministry of Education, Youth, and Sports. The work was partially carried out at the MIT.nano facilities.
Astrocyte diversity across space and time
When it comes to brain function, neurons get a lot of the glory. But healthy brains depend on the cooperation of many kinds of cells. The most abundant of the brain’s non-neuronal cells are astrocytes, star-shaped cells with a lot of responsibilities. Astrocytes help shape neural circuits, participate in information processing, and provide nutrient and metabolic support to neurons. Individual cells can take on new roles throughout their lifetimes, and at any given time, the astrocytes in one part of the brain will look and behave differently than the astrocytes somewhere else.
After an extensive analysis by researchers at MIT, neuroscientists now have an atlas detailing astrocytes’ dynamic diversity. Its maps depict the regional specialization of astrocytes across the brains of both mice and marmosets — two powerful models for neuroscience research — and show how their populations shift as brains develop, mature, and age.
The open-access study, reported in the Nov. 20 issue of the journal Neuron, was led by Guoping Feng, the James W. (1963) and Patricia T. Poitras Professor of Brain and Cognitive Sciences at MIT. This work was supported by the Hock E. Tan and K. Lisa Yang Center for Autism Research, part of the Yang Tan Collective at MIT, and the National Institutes of Health’s BRAIN Initiative.
“It’s really important for us to pay attention to non-neuronal cells’ role in health and disease,” says Feng, who is also the associate director of the McGovern Institute for Brain Research and the director of the Hock E. Tan and K. Lisa Yang Center for Autism Research at MIT. And indeed, these cells — once seen as mere supporting players — have gained more of the spotlight in recent years. Astrocytes are known to play vital roles in the brain’s development and function, and their dysfunction seems to contribute to many psychiatric disorders and neurodegenerative diseases. “But compared to neurons, we know a lot less — especially during development,” Feng adds.
Probing the unknown
Feng and Margaret Schroeder, a former graduate student in his lab, thought it was important to understand astrocyte diversity across three axes: space, time, and species. They knew from earlier work in the lab, done in collaboration with Steve McCarroll’s lab at Harvard University and led by Fenna Krienen in McCarroll’s group, that in adult animals, different parts of the brain have distinctive sets of astrocytes.
“The natural question was, how early in development do we think this regional patterning of astrocytes starts?” Schroeder says.
To find out, she and her colleagues collected brain cells from mice and marmosets at six stages of life, spanning embryonic development to old age. For each animal, they sampled cells from four different brain regions: the prefrontal cortex, the motor cortex, the striatum, and the thalamus.
Then, working with Krienen, who is now an assistant professor at Princeton University, they analyzed the molecular contents of those cells, creating a profile of genetic activity for each one. That profile was based on the mRNA copies of genes found inside the cell, which are known collectively as the cell’s transcriptome. Determining which genes a cell is using, and how active those genes are, gives researchers insight into a cell’s function and is one way of defining its identity.
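In rough terms, that idea can be sketched in a few lines of code: cells with similar gene-activity patterns group together. Everything in the sketch below — the synthetic counts, the gene numbers, the choice of k-means on a PCA embedding — is an illustrative assumption, not the study’s actual single-cell pipeline.

```python
# Toy sketch of clustering cells by transcriptome. All quantities here are
# illustrative assumptions, not the study's actual analysis.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
# 300 synthetic "cells" x 1,000 "genes": two populations that differ in
# which genes are active, standing in for astrocytes from two brain regions.
high, low = np.full(100, 5.0), np.full(900, 0.5)
pop_a = rng.poisson(np.concatenate([high, low]), size=(150, 1000))
pop_b = rng.poisson(np.concatenate([low, high]), size=(150, 1000))
counts = np.vstack([pop_a, pop_b]).astype(float)

log_counts = np.log1p(counts)                       # tame the count skew
embedding = PCA(n_components=10).fit_transform(log_counts)
labels = KMeans(n_clusters=2, n_init=10).fit_predict(embedding)
print(np.bincount(labels))                          # recovers the two populations
```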
Dynamic diversity
After assessing the transcriptomes of about 1.4 million brain cells, the group focused on the astrocytes, analyzing and comparing their patterns of gene expression. At every life stage, from before birth to old age, the team found regional specialization: astrocytes within each brain region shared similar patterns of gene expression, which were distinct from those of astrocytes in other brain regions.
This regional specialization was also apparent in the distinct shapes of astrocytes in different parts of the brain, which the team was able to see with expansion microscopy, a high-resolution imaging method developed by McGovern colleague Edward Boyden that reveals fine cellular features.
Notably, the astrocytes in each region changed as animals matured. “When we looked at our late embryonic time point, the astrocytes were already regionally patterned. But when we compare that to the adult profiles, they had completely shifted again,” Schroeder says. “So there’s something happening over postnatal development.” The most dramatic changes the team detected occurred between birth and early adolescence, a period during which brains rapidly rewire as animals begin to interact with the world and learn from their experiences.
Feng and Schroeder suspect that the changes they observed may be driven by the neural circuits that are sculpted and refined as the brain matures. “What we think they’re doing is kind of adapting to their local neuronal niche,” Schroeder says. “The types of genes that they are up-regulating and changing during development point to their interaction with neurons.” Feng adds that astrocytes may change their genetic programs in response to nearby neurons, or alternatively, they might help direct the development or function of local circuits as they adopt identities best suited to support particular neurons.
Both mouse and marmoset brains exhibited regional specialization of astrocytes and changes in those populations over time. But when the researchers looked at the specific genes whose activity defined various astrocyte populations, the data from the two species diverged. Schroeder calls this a note of caution for scientists who study astrocytes in animal models, and adds that the new atlas will help researchers assess the potential relevance of findings across species.
Beyond astrocytes
With a new understanding of astrocyte diversity, Feng says his team will pay close attention to how these cells are impacted by the disease-related genes they study and how those effects change during development. He also notes that the gene expression data in the atlas can be used to predict interactions between astrocytes and neurons. “This will really guide future experiments: how these cells’ interactions can shift with changes in the neurons or changes in the astrocytes,” he says.
The Feng lab is eager for other researchers to take advantage of the massive amounts of data they generated as they produced their atlas. Schroeder points out that the team analyzed the transcriptomes of all kinds of cells in the brain regions they studied, not just astrocytes. They are sharing their findings so researchers can use them to understand when and where specific genes are used in the brain, or dig in more deeply to further explore the brain’s cellular diversity.
MIT affiliates named 2025 Schmidt Sciences AI2050 Fellows
Two current MIT affiliates and seven additional alumni are among those named to the 2025 cohort of AI2050 Fellows.
Zongyi Li, a postdoc in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), and Tess Smidt ’12, an associate professor of electrical engineering and computer science (EECS), were both named as AI2050 Early Career Fellows.
Seven additional MIT alumni were also honored. AI2050 Early Career Fellows include Brian Hie SM ’19, PhD ’21; Natasha Mary Jaques PhD ’20; Martin Anton Schrimpf PhD ’22; Lindsey Raymond SM ’19, PhD ’24, who will join the MIT faculty in EECS, the Department of Economics, and the MIT Schwarzman College of Computing in 2026; and Ellen Dee Zhong PhD ’22. AI2050 Senior Fellows include Surya Ganguli ’98, MNG ’98; and Luke Zettlemoyer SM ’03, PhD ’09.
AI2050 Fellows are announced annually by Schmidt Sciences, a nonprofit organization founded in 2024 by Eric and Wendy Schmidt that works to accelerate scientific knowledge and breakthroughs with the most promising, advanced tools to support a thriving planet. The organization prioritizes research in areas poised for impact including AI and advanced computing, astrophysics, biosciences, climate, and space — as well as supporting researchers in a variety of disciplines through its science systems program.
Li is a postdoc in CSAIL working with associate professor of EECS Kaiming He. Li's research focuses on developing neural operator methods to accelerate scientific computing. He received his PhD in computing and mathematical sciences from Caltech, where he was advised by Anima Anandkumar and Andrew Stuart. He holds undergraduate degrees in computer science and mathematics from Washington University in St. Louis.
Li's work has been supported by a Kortschak Scholarship, PIMCO Fellowship, Amazon AI4Science Fellowship, Nvidia Fellowship, and MIT-Novo Nordisk AI Fellowship. He has also completed three summer internships at Nvidia. Li will join the NYU Courant Institute of Mathematical Sciences as an assistant professor of mathematics and data science in fall 2026.
Smidt, associate professor of electrical engineering and computer science, is the principal investigator of the Atomic Architects group at the Research Laboratory of Electronics (RLE), where she works at the intersection of physics, geometry, and machine learning to design algorithms that aid in the understanding of physical systems under physical and geometric constraints, with applications to the design of both new materials and new molecules. She has a particular focus on symmetries present in 3D physical systems, such as rotation, translation, and reflection.
Smidt earned her BS in physics from MIT in 2012 and her PhD in physics from the University of California at Berkeley in 2018. Prior to joining the MIT EECS faculty in 2021, she was the 2018 Alvarez Postdoctoral Fellow in Computing Sciences at Lawrence Berkeley National Laboratory, and a software engineering intern on the Google Accelerated Sciences team, where she developed Euclidean symmetry equivariant neural networks that naturally handle 3D geometry and geometric tensor data. Besides the AI2050 fellowship, she has received an Air Force Office of Scientific Research Young Investigator Program award, the EECS Outstanding Educator Award, and a Transformative Research Fund award.
Conceived and co-chaired by Eric Schmidt and James Manyika, AI2050 is a philanthropic initiative aimed at helping to solve hard problems in AI. Within their research, each fellow will contend with the central motivating question of AI2050: “It’s 2050. AI has turned out to be hugely beneficial to society. What happened? What are the most important problems we solved and the opportunities and possibilities we realized to ensure this outcome?”
Prognostic tool could help clinicians identify high-risk cancer patients
Aggressive T-cell lymphoma is a rare and devastating form of blood cancer with a very low five-year survival rate. Patients often relapse after receiving initial therapy, making it especially challenging for clinicians to keep this destructive disease in check.
In a new study, researchers from MIT, in collaboration with researchers involved in the PETAL consortium at Massachusetts General Hospital, identified a practical and powerful prognostic marker that could help clinicians identify high-risk patients early, and potentially tailor treatment strategies to improve survival.
The team found that, when patients relapse within 12 months of initial therapy, their chances of survival decline dramatically. For these patients, targeted therapies might improve their chances for survival, compared to traditional chemotherapy, the researchers say.
According to their analysis, which used data collected from thousands of patients all over the world, the finding holds true across patient subgroups, regardless of the patient’s initial therapy or their score in a commonly used prognostic index.
A causal inference framework called Synthetic Survival Controls (SSC), developed as part of MIT graduate student Jessy (Xinyi) Han’s thesis, was central to this analysis. This versatile framework helps to answer “when-if” questions — to estimate how the timing of outcomes would shift under different interventions — while overcoming the limitations of inconsistent and biased data.
The identification of novel risk groups could guide clinicians as they select therapies to improve overall survival. For instance, a clinician might prioritize early-phase clinical trials over canonical therapies for this cohort of patients. The results could inform inclusion criteria for some clinical trials, according to the researchers.
The causal inference framework for survival analysis can also be applied more broadly. For instance, the MIT researchers have used it in areas like criminal justice to study how structural factors drive recidivism.
“Often we don’t only care about what will happen, but when the target event will happen. These when-if problems have remained under the radar for a long time, but they are common in a lot of domains. We’ve shown here that, to answer these questions with data, you need domain experts to provide insight and good causal inference methods to close the loop,” says Devavrat Shah, the Andrew and Erna Viterbi Professor in Electrical Engineering and Computer Science at MIT, a member of the Institute for Data, Systems, and Society (IDSS) and of the Laboratory for Information and Decision Systems (LIDS), and co-author of the study.
Shah is joined on the paper by many co-authors, including Han, who is co-advised by Shah and Fotini Christia, the Ford International Professor of the Social Sciences in the Department of Political Science and director of IDSS; and corresponding authors Mark N. Sorial, a clinical pharmacist and investigator at the Dana-Farber Cancer Institute, and Salvia Jain, a clinician-investigator at the Massachusetts General Hospital Cancer Center, founder of the global PETAL consortium, and an assistant professor of medicine at Harvard Medical School. The research appears today in the journal Blood.
Estimating outcomes
The MIT researchers have spent the past few years developing the Synthetic Survival Controls causal inference framework, which enables them to answer complex “when-if” questions when using available data is statistically challenging. Their approach estimates when a target event would happen if a certain intervention were used.
In this paper, the researchers investigated an aggressive cancer called nodal mature T-cell lymphoma, and whether a certain prognostic marker led to worse outcomes. The marker, TTR12, signifies that a patient relapsed within 12 months of initial therapy.
They applied their framework to estimate when a patient will die if they have TTR12, and how their survival trajectory would differ if they did not have this prognostic marker.
“No experiment can answer that question because we are asking about two outcomes for the same patient. We have to borrow information from other patients to estimate, counterfactually, what a patient’s survival outcome would have been,” Han explains.
Answering these types of questions is notoriously difficult due to biases in the available observational data. Plus, patient data gathered from an international cohort bring their own unique challenges. For instance, a clinical dataset often contains some historical data about a patient, but at some point the patient may stop treatment, leading to incomplete records.
In addition, if a patient receives a specific treatment, that might impact how long they will survive, adding to the complexity of the data. Plus, for each patient, the researchers only observe one outcome on how long the patient survives — limiting the amount of data available.
Such issues lead to suboptimal performance of many classical methods.
The SSC framework can overcome these challenges. Even though the researchers don’t know all the details for each patient, their method stitches together information from multiple other patients in such a way that it can estimate survival outcomes.
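The core synthetic-control idea can be sketched with toy data: treat other patients’ survival curves as “donors,” fit nonnegative weights on the months before the intervention, and use the weighted combination as the counterfactual. The sketch below is a schematic of that concept only, not the authors’ Synthetic Survival Controls implementation; the curves, time horizon, and fitting method are all illustrative assumptions.

```python
# Minimal sketch of the synthetic-control idea on toy data. Each donor row
# is another patient's survival curve (probability of being alive at each
# month); the counterfactual for a target patient is a nonnegative weighted
# combination of donor curves, fit on pre-intervention months only.
import numpy as np
from scipy.optimize import nnls

def synthetic_curve(donors: np.ndarray, target: np.ndarray, t0: int) -> np.ndarray:
    """Fit weights w >= 0 on months [0, t0), then extrapolate all months."""
    w, _ = nnls(donors[:, :t0].T, target[:t0])
    w /= w.sum() if w.sum() > 0 else 1.0   # normalize into a weighting
    return donors.T @ w

rng = np.random.default_rng(1)
# Five donor patients tracked for 24 months; curves decrease over time.
donors = np.sort(rng.uniform(0.3, 1.0, (5, 24)), axis=1)[:, ::-1]
target = donors.mean(axis=0) + rng.normal(0, 0.01, 24)
counterfactual = synthetic_curve(donors, target, t0=12)
print(np.round(counterfactual[12:], 2))  # estimated survival beyond month 12
```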
Importantly, their method is robust to specific modeling assumptions, making it broadly applicable in practice.
The power of prognostication
The researchers’ analysis revealed that TTR12 patients consistently had a much greater risk of death within five years of initial therapy than patients without the marker. This was true no matter the initial therapy the patients received or which subgroup they fell into.
“This tells us that early relapse is a very important prognosis. This acts as a signal to clinicians so they can think about tailored therapies for these patients that can overcome resistance in second-line or third-line,” Han says.
Moving forward, the researchers are looking to expand this analysis to include high-dimensional genomics data. This information could be used to develop bespoke treatments that can avoid relapse within 12 months.
“Based on our work, there is already a risk calculation tool being used by clinicians. With more information, we can make it a richer tool that can provide more prognostic details,” Shah says.
They are also applying the framework to other domains.
For instance, in a paper recently presented at the Conference on Neural Information Processing Systems, the researchers identified a dramatic difference in the recidivism rate among prisoners of different races that begins about seven months after release. A possible explanation is the different access to long-term support by different racial groups. They are also investigating individuals’ decisions to leave insurance companies, while exploring other domains where the framework could generate actionable insights.
“Partnering with domain experts is crucial because we want to demonstrate that our methods are of value in the real world. We hope these tools can be used to positively impact individuals across society,” Han says.
This work was funded, in part, by Daiichi Sankyo, Secure Bio, Inc., Acrotech Biopharma, Kyowa Kirin, the Center for Lymphoma Research, the National Cancer Institute, Massachusetts General Hospital, the Reid Fund for Lymphoma Research, the American Cancer Society, and the Scarlet Foundation.
NIH Director Jay Bhattacharya visits MIT
National Institutes of Health (NIH) Director Jay Bhattacharya visited MIT on Friday, engaging in a wide-ranging discussion about policy issues and research aims at an event also featuring Rep. Jake Auchincloss MBA ’16 of Massachusetts.
The forum consisted of a dialogue between Auchincloss and Bhattacharya, followed by a question-and-answer session with an audience that included researchers from the greater Boston area. The event was part of a daylong series of stops Bhattacharya and Auchincloss made around Boston, a world-leading hub of biomedical research.
“I was joking with Dr. Bhattacharya that when the NIH director comes to Massachusetts, he gets treated like a celebrity, because we do science, and we take science very seriously here,” Auchincloss quipped at the outset.
Bhattacharya said he was “delighted” to be visiting, and credited the thousands of scientists who participate in peer review for the NIH. “The reason why the NIH succeeds is the willingness and engagement of the scientific community,” he said.
In response to an audience question, Bhattacharya also outlined his overall vision of the NIH’s portfolio of projects.
“You both need investments in ideas that are not tested, just to see if something works. You don’t know in advance,” he said. “And at the same time, you need an ecosystem that tests those ideas rigorously and winnows those ideas to the ones that actually work, that are replicable. A successful portfolio will have both elements in it.”
MIT President Sally A. Kornbluth gave opening remarks at the event, welcoming Bhattacharya and Auchincloss to campus and noting that the Institute’s earliest known NIH grant on record dates to 1948. In recent decades, biomedical research at MIT has boomed, expanding across a wide range of frontier fields.
Indeed, Kornbluth noted, MIT’s federally funded research projects during U.S. President Trump’s first term included a method for making anesthesia safer, especially for children and the elderly; a new type of expanding heart valve for children that eliminates the need for repeated surgeries; and a noninvasive Alzheimer’s treatment using sound and light stimulation, which is currently in clinical trials.
“Today, researchers across our campus pursue pioneering science on behalf of the American people, with profoundly important results,” Kornbluth said.
“The hospitals, universities, startups, investors, and companies represented here today have made greater Boston an extraordinary magnet for talent,” Kornbluth added. “Both as a force for progress in human health and an engine of economic growth, this community of talent is a precious national asset. We look forward to working with Dr. Bhattacharya to build on its strengths.”
The discussion occurred amid uncertainty about future science funding levels and pending changes in the NIH’s grant-review processes. The NIH has announced a “unified strategy” for reviewing grant applications that may lead to more direct involvement in grant decisions by directors of the 27 NIH institutes and centers, along with other changes that could shift the types of awards being made.
Auchincloss asked multiple questions about the ongoing NIH changes; about 10 audience members from a variety of institutions also posed a range of questions to Bhattacharya, often about the new grant-review process and the aims of the changes.
“The unified funding strategy is a way to allow institute directors to look at the full range of scoring, including scores on innovation, and pick projects that look like they are promising,” Bhattacharya said in response to one of Auchincloss’ queries.
One audience member also emphasized concerns about the long-term effects of funding uncertainties on younger scientists in the U.S.
“The future success of the American biomedical enterprise depends on us training the next generation of scientists,” Bhattacharya acknowledged.
Bhattacharya is the 18th director of the NIH, having been confirmed by the U.S. Senate in March. He has served as a faculty member at Stanford University, where he received his BA, MA, MD, and PhD, and is currently a professor emeritus. During his career, Bhattacharya’s work has often examined the economics of health care, though his research has ranged broadly across topics, in over 170 published papers. He has also served as director of the Center on the Demography and Economics of Health and Aging at Stanford University.
Auchincloss is in his third term as the U.S. Representative to Congress from the 4th district in Massachusetts, having first been elected in 2020. He is also a major in the Marine Corps Reserve, and received his MBA from the MIT Sloan School of Management.
Ian Waitz, MIT’s vice president for research, concluded the session with a note of thanks to Auchincloss and Bhattacharya for their “visit to the greater Boston ecosystem which has done so much for so many and contributed obviously to the NIH mission that you articulated.” He added: “We have such a marvelous history in this region in making such great gains for health and longevity, and we’re here to do more to partner with you.”
When companies “go green,” air quality impacts can vary dramatically
Many organizations are taking actions to shrink their carbon footprint, such as purchasing electricity from renewable sources or reducing air travel.
Both actions would cut greenhouse gas emissions, but which offers greater societal benefits?
In a first step toward answering that question, MIT researchers found that even if each activity reduces the same amount of carbon dioxide emissions, the broader air quality impacts can be quite different.
They used a multifaceted modeling approach to quantify the air quality impacts of each activity, using data from three organizations. Their results indicate that air travel causes about three times more damage to air quality than comparable electricity purchases.
Exposure to major air pollutants, including ground-level ozone and fine particulate matter, can lead to cardiovascular and respiratory disease, and even premature death.
In addition, air quality impacts can vary dramatically across regions, because each decarbonization action influences pollution at a different scale. For organizations in the northeast U.S., for example, the air quality impacts of electricity use are felt regionally, while the impacts of air travel are felt globally, because aviation pollutants are emitted at higher altitudes.
Ultimately, the researchers hope this work highlights how organizations can prioritize climate actions to provide the greatest near-term benefits to people’s health.
“If we are trying to get to net zero emissions, that trajectory could have very different implications for a lot of other things we care about, like air quality and health impacts. Here we’ve shown that, for the same net zero goal, you can have even more societal benefits if you figure out a smart way to structure your reductions,” says Noelle Selin, a professor in the MIT Institute for Data, Systems, and Society (IDSS) and the Department of Earth, Atmospheric and Planetary Sciences (EAPS); director of the Center for Sustainability Science and Strategy; and senior author of the study.
Selin is joined on the paper by lead author Yuang (Albert) Chen, an MIT graduate student; Florian Allroggen, a research scientist in the MIT Department of Aeronautics and Astronautics; Sebastian D. Eastham, an associate professor in the Department of Aeronautics at Imperial College London; Evan Gibney, an MIT graduate student; and William Clark, the Harvey Brooks Research Professor of International Science at Harvard University. The research was published Friday in Environmental Research Letters.
A quantification quandary
Climate scientists often focus on the air quality benefits of national or regional policies because the aggregate impacts are more straightforward to model.
Organizations’ efforts to “go green” are much harder to quantify because they exist within larger societal systems and are impacted by these national policies.
To tackle this challenging problem, the MIT researchers used data from two universities and one company in the greater Boston area. They studied whether organizational actions that remove the same amount of CO2 from the atmosphere would have an equivalent benefit on improving air quality.
“From a climate standpoint, CO2 has a global impact because it mixes through the atmosphere, no matter where it is emitted. But air quality impacts are driven by co-pollutants that act locally, so where those emissions occur really matters,” Chen says.
For instance, burning fossil fuels leads to emissions of nitrogen oxides and sulfur dioxide along with CO2. These co-pollutants react with chemicals in the atmosphere to form fine particulate matter and ground-level ozone, which is a primary component of smog.
Different fossil fuels cause varying amounts of co-pollutant emissions. In addition, local factors like weather and existing emissions affect the formation of smog and fine particulate matter. The impacts of these pollutants also depend on the local population distribution and overall health.
“You can’t just assume that all CO2-reduction strategies will have equivalent near-term impacts on sustainability. You have to consider all the other emissions that go along with that CO2,” Selin says.
The researchers used a systems-level approach that involved connecting multiple models. They fed the organizational energy consumption and flight data into this systems-level model to examine local and regional air quality impacts.
Their approach incorporated many interconnected elements, such as power plant emissions data, statistical linkages between air quality and mortality outcomes, and aviation emissions associated with specific flight routes. They fed those data into an atmospheric chemistry transport model to calculate air quality and climate impacts for each activity.
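In schematic form, the chain of calculations looks something like the sketch below, with made-up numbers throughout. The actual study runs a full atmospheric chemistry transport model rather than a fixed table of concentration changes; the regions, exposure factors, and dollar values here are illustrative assumptions only.

```python
# Schematic of how the chained models combine: an activity's emissions
# become pollutant concentration changes per region, which become premature
# deaths via a concentration-response factor, which are then monetized.
import numpy as np

delta_pm25 = np.array([0.02, 0.005, 0.001])  # ug/m^3 PM2.5 change per region (illustrative)
population = np.array([5e6, 2e7, 1e9])       # people exposed in each region
deaths_per_person_ug = 1e-5                  # concentration-response factor (illustrative)
dollars_per_death = 9e6                      # value of statistical life (illustrative)

deaths = delta_pm25 * population * deaths_per_person_ug
print(f"deaths: {deaths.sum():.0f}, damages: ${deaths.sum() * dollars_per_death:,.0f}")
```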
The sheer breadth of the system created many challenges.
“We had to do multiple sensitivity analyses to make sure the overall pipeline was working,” Chen says.
Analyzing air quality
Finally, the researchers monetized air quality impacts to compare them with the climate impacts in a consistent way. Monetized climate impacts of CO2 emissions, based on prior literature, are about $170 per ton (expressed in 2015 dollars), representing the financial cost of damages caused by climate change.
Using the same method used to monetize the impact of CO2, the researchers calculated that air quality damages associated with electricity purchases are an additional $88 per ton of CO2, while the damages from air travel are an additional $265 per ton.
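As a back-of-the-envelope check of those figures, here is the arithmetic in a few lines, under the assumption that climate and air quality damages simply add to give a total social cost per ton:

```python
# Per-ton figures from the study (2015 dollars); additivity is an assumption.
CLIMATE = 170                                   # $/ton CO2, climate damages
AIR = {"electricity": 88, "air travel": 265}    # $/ton CO2, air quality damages

for activity, aq in AIR.items():
    print(f"{activity}: ${CLIMATE + aq} per ton, total")
# 265 / 88 is roughly 3, matching the threefold air quality gap noted above.
print(f"air-quality ratio: {AIR['air travel'] / AIR['electricity']:.1f}x")
```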
This highlights how the air quality impacts of a ton of emitted CO2 depend strongly on where and how the emissions are produced.
“A real surprise was how much aviation impacted places that were really far from these organizations. Not only were flights more damaging, but the pattern of damage, in terms of who is harmed by air pollution from that activity, is very different than who is harmed by energy systems,” Selin says.
Most airplane emissions occur at high altitudes, where differences in atmospheric chemistry and transport can amplify their air quality impacts. These emissions are also carried across continents by atmospheric winds, affecting people thousands of miles from their source.
Nations like India and China face outsized air quality impacts from such emissions due to the higher level of existing ground-level emissions, which exacerbates the formation of fine particulate matter and smog.
The researchers also conducted a deeper analysis of short-haul flights. Their results showed that regional flights have a relatively larger impact on local air quality than longer domestic flights.
“If an organization is thinking about how to benefit the neighborhoods in their backyard, then reducing short-haul flights could be a strategy with real benefits,” Selin says.
Even in electricity purchases, the researchers found that location matters.
For instance, the fine particulate matter from power plants attributable to one university falls over a densely populated region, while the emissions attributable to the corporation fall over less populated areas.
Due to these population differences, the university’s emissions resulted in 16 percent more estimated premature deaths than those of the corporation, even though the climate impacts are identical.
“These results show that, if organizations want to achieve net zero emissions while promoting sustainability, which unit of CO2 gets removed first really matters a lot,” Chen says.
In the future, the researchers want to quantify the air quality and climate impacts of train travel, to see whether replacing short-haul flights with train trips could provide benefits.
They also want to explore the air quality impacts of other energy sources in the U.S., such as data centers.
This research was funded, in part, by Biogen, Inc., the Italian Ministry for Environment, Land, and Sea, and the MIT Center for Sustainability Science and Strategy.
Paula Hammond named dean of the School of Engineering
Paula Hammond ’84, PhD ’93, an Institute Professor and MIT’s executive vice provost, has been named dean of MIT’s School of Engineering, effective Jan. 16. She will succeed Anantha Chandrakasan, the Vannevar Bush Professor of Electrical Engineering and Computer Science, who was appointed MIT’s provost in July.
Hammond, who was head of the Department of Chemical Engineering from 2015 to 2023, has also served as MIT’s vice provost for faculty. She will be the first woman to hold the role of dean of MIT’s School of Engineering.
“From the rigor and creativity of her scientific work to her outstanding record of service to the Institute, Paula Hammond represents the very best of MIT,” says MIT President Sally Kornbluth. “Wise, thoughtful, down-to-earth, deeply curious, and steeped in MIT’s culture and values, Paula will be a highly effective leader for the School of Engineering. I’m delighted she accepted this new challenge.”
Hammond, who is also a member of MIT’s Koch Institute for Integrative Cancer Research, has earned many accolades for her work developing polymers and nanomaterials that can be used for applications including drug delivery, regenerative medicine, noninvasive imaging, and battery technology.
Chandrakasan announced Hammond’s appointment today in an email to the MIT community, writing, “Ever since enrolling at MIT as an undergraduate, Paula has built a remarkable record of accomplishment in scholarship, teaching, and service. Faculty, staff, and students across the Institute praise her wisdom, selflessness, and kindness, especially when it comes to enabling others’ professional growth and success.”
“Paula is a scholar of extraordinary distinction. It is hard to overstate the value of the broad contributions she has made in her field, which have significantly expanded the frontiers of knowledge,” Chandrakasan told MIT News. “Any one of her many achievements could stand as the cornerstone of an outstanding academic career. In addition, her investment in mentoring the next generation of scholars and building community is unparalleled.”
Chandrakasan also thanked Professor Maria Yang, who has served as the school’s interim dean in recent months. “In a testament to her own longstanding contributions to the School of Engineering, Maria took on the deanship even while maintaining leadership roles with the Ideation Lab, D-Lab, and Morningside Academy for Design. For her excellent service and leadership, Maria deserves our deep appreciation,” he wrote to the community.
Building a sense of community
Throughout her career at MIT, Hammond has helped to create a supportive environment in which faculty and students can do their best work. As vice provost for faculty, a role Hammond assumed in 2023, she developed and oversaw new efforts to improve faculty recruitment and retention, mentoring, and professional development. Earlier this year, she took on additional responsibilities as executive vice provost, providing guidance and oversight for a number of Institute-wide initiatives.
As head of the Department of Chemical Engineering, Hammond worked to strengthen the department’s sense of community and initiated a strategic planning process that led to more collaborative research between faculty members. Under her leadership, the department also launched a major review of its undergraduate curriculum and introduced more flexibility into the requirements for a chemical engineering degree.
Another major priority was ensuring that faculty had the support they needed to pursue new research goals. To help achieve that, she established and raised funds for a series of Faculty Research Innovation Fund grants for mid-career faculty who wanted to explore fresh directions.
“I really enjoyed enabling faculty to explore new areas, finding ways to resource them, making sure that they had the right mentoring early in their career and the ‘wind beneath their wings’ that they needed to get where they wanted to go,” she says. “That, to me, was extremely fulfilling.”
Before taking on her official administrative roles, Hammond served the Institute through her work chairing committees that contributed landmark reports on gender and race at MIT: the Initiative for Faculty Race and Diversity and the Academic and Organizational Relationships Working Group.
In her new role as dean, Hammond plans to begin by consulting with faculty across the School of Engineering to learn more about their needs.
“I like to start with conversations,” she says. “I’m very excited about the idea of visiting each of the departments, finding out what’s on the minds of the faculty, and figuring out how we can meaningfully address their needs and continue to build and grow an excellent engineering program.”
One of her goals is to promote greater cross-disciplinarity in MIT’s curriculum, in part by encouraging and providing resources for faculty to develop more courses that bridge multiple departments.
“There are some barriers that exist between departments, because we all need to teach our core requirements,” she says. “I am very interested in collaborating with departments to think about how we can lower barriers to allow faculty to co-teach, or to perhaps look at different course structures that allow us to teach a core component and then have it branch to a more specialized component.”
She also hopes to guide MIT’s engineering departments in finding ways to incorporate artificial intelligence into their curriculum, and to give students greater opportunity for relevant hands-on experiences in engineering.
“I am particularly excited to build from the strong cross-disciplinary efforts and the key strategic initiatives that Anantha launched during his time as dean,” Hammond says. “I believe we have incredible opportunities to build off these critical areas at the interfaces of science, engineering, the humanities, arts, design, and policy, and to create new emergent fields. MIT should be the leader in providing educational foundations that prepare our students for a highly interdisciplinary and AI-enabled world, and a setting that enables our researchers and scholars to solve the most difficult and urgent problems of the world.”
A pioneer in nanotechnology
Hammond grew up in Detroit, where her father was a PhD biochemist who ran the health laboratories for the city of Detroit. Her mother founded a nursing school at Wayne County Community College, and both parents encouraged her interest in science. As an undergraduate at MIT, she majored in chemical engineering with a focus on polymer chemistry.
After graduating in 1984, Hammond spent two years working as a process engineer at Motorola, then earned a master’s degree in chemical engineering from Georgia Tech. She realized that she wanted to pursue a career in academia, and returned to MIT to earn a PhD in polymer science and technology. After finishing her degree in 1993, she spent a year and a half as a postdoc at Harvard University before joining the MIT faculty in 1995.
She became a full professor in 2006, and in 2021, she was named an Institute Professor, the highest honor bestowed by MIT. In 2010, Hammond joined MIT’s Koch Institute for Integrative Cancer Research, where she leads a lab that is developing novel nanomaterials for a variety of applications, with a primary focus on treatments and diagnostics for ovarian cancer.
Early in her career, Hammond developed a technique for generating functional thin-film materials by stacking layers of charged polymeric materials. This approach can be used to build polymers with highly controlled architectures by alternately exposing a surface to positively and negatively charged particles.
She has used this layer-by-layer assembly technique to build ultrathin batteries, fuel cell electrodes, and drug delivery nanoparticles that can be specifically targeted to cancer cells. These particles can be tailored to carry chemotherapy drugs such as cisplatin, immunotherapy agents, or nucleic acids such as messenger RNA.
In recognition of her pioneering research, Hammond was awarded the 2024 National Medal of Technology and Innovation. She was also the 2023-24 recipient of MIT’s Killian Award, which honors extraordinary professional achievements by an MIT faculty member. Her many other awards include the Benjamin Franklin Medal in Chemistry in 2024, the ACS Award in Polymer Science in 2018, the American Institute of Chemical Engineers Charles M. A. Stine Award in Materials Engineering and Science in 2013, and the Ovarian Cancer Research Program Teal Innovator Award in 2013.
Hammond has also been honored for her dedication to teaching and mentoring. As a reflection of her excellence in those areas, she was awarded the Irwin Sizer Award for Significant Improvements to MIT Education, the Henry Hill Lecturer Award in 2002, and the Junior Bose Faculty Award in 2000. She also co-chaired the recent Ad Hoc Committee on Faculty Advising and Mentoring, and has been selected as a “Committed to Caring” honoree for her work mentoring students and postdocs in her research group.
Hammond has served on the President’s Council of Advisors on Science and Technology, as well as the U.S. Secretary of Energy Scientific Advisory Board, the NIH Center for Scientific Review Advisory Council, and the Board of Directors of the American Institute of Chemical Engineers. Additionally, she is one of a small group of scientists who have been elected to the National Academies of Engineering, Sciences, and Medicine.
MADMEC winners develop spray-on coating to protect power lines from ice
A spray-on coating to keep power lines standing through an ice storm may not be the obvious fix for winter outages — but it’s exactly the kind of innovation that happens when MIT students tackle a sustainability challenge.
“The big threat to the power line network is winter icing that causes huge amounts of downed lines every year,” says Trevor Bormann, a graduate student in MIT’s Department of Materials Science and Engineering (DMSE) and member of MITten, the winning team in the 2025 MADMEC innovation contest. Fixing those outages is hugely carbon-intensive, requiring diesel-powered equipment, replacement materials, and added energy use. And as households switch to electric heat pumps, the stakes of a prolonged outage rise.
To address the challenge, the team developed a specialized polymer coating that repels water and can be sprayed onto aluminum power lines. The coating contains nanofillers — particles hundreds of times smaller than a human hair — that give the surface a texture that makes water bead and drip off.
The effect is known as “superhydrophobicity,” says Shaan Jagani, a graduate student in the Department of Aeronautics and Astronautics. “And what that really means is water does not stay on the surface, and therefore water will not have the opportunity to nucleate down into ice.”
MITten — pronounced “mitten” — won the $10,000 first prize in the contest, hosted by DMSE on Nov. 10 at MIT, where audience presentations and poster sessions capped months of design and experimentation. Since 2007, MADMEC (the MIT and Dow Materials Engineering Contest), funded by Dow and Saint-Gobain, has given students a chance to tackle real-world sustainability challenges, with each team receiving $1,000 to build and test their projects. Judges evaluated the teams’ work from conception to prototype.
MADMEC winners have gone on to succeed in major innovation competitions such as MassChallenge, and at least six startups — including personal cooling wristband maker Embr and vehicle-motion-control company ClearMotion — trace their roots to the contest.
Cold inspiration
The idea for the MITten project came in part from Bormann’s experience growing up in South Dakota, where winter outages were common. His home was heated by natural gas, but if grid-reliant heat pumps had warmed it through sub-zero winter months, a days-long outage would have been “really rough.”
“I love the part of sustainability that is focused on developing all these new technologies for electricity generation and usage, but also the distribution side of it shouldn’t be neglected, either,” Bormann says. “It’s important for all those to be growing synergistically, and to be paying attention to all aspects of it.”
And there’s an opportunity to make distribution infrastructure more durable: An estimated 50,000 miles of new power lines are planned over the next decade in the northern United States, where icing is a serious risk.
To test their coating, the team built an icing chamber to simulate rain and freezing conditions, comparing coated versus uncoated aluminum samples at –10 degrees Celsius (14 degrees Fahrenheit). They also dipped samples in liquid nitrogen to evaluate performance in extreme cold and simulated real-world stresses such as lines swaying in windstorms.
“We basically coated aluminum substrates and then bent them to demonstrate that the coating itself could accommodate very long strains,” Jagani says.
The team ran simulations to estimate that a typical outage affecting 20 percent of a region could cost about $7 million to repair. “But if you fully coat, say, 1,000 kilometers of line, you actually can save $1 million in just material costs,” says DMSE grad student Matthew Michalek. The team hopes to further refine the coating with more advanced materials and test them in a professional icing chamber.
Amber Velez, a graduate student in the Department of Mechanical Engineering, emphasized the constraints of the contest — working within a $1,000 budget.
“I feel we did quite good work with quite a lot of legitimacy, but I think moving on, there is a lot of space that we could have more play in,” she says. “We’ve definitely not hit the ceiling yet, and I think there’s a lot of room to keep growing.”
Compostable electrodes, microwavable ceramics
The second-place, $6,000 prize went to Electrodiligent, which is designing a biodegradable, compostable alternative to electrodes used for heart monitoring. Their prototype uses a cellulose paper backing and a conductive gel made from gelatin, glycerin, and sodium chloride to carry the electric signal.
Comparing electrocardiogram (ECG) results, the team found their electrodes performed similarly to the 3M Red Dot standard. “We’re very optimistic about this result,” says Ethan Frey, a DMSE graduate student.
The invention aims to cut into the 3.6 tons of medical waste produced each day, but judges noted that adhesive electrodes are almost always incinerated for health and safety reasons, making the intended application a tough fit.
“But there’s a whole host of other directions the team could go in,” says Mike Tarkanian, senior lecturer in DMSE and coordinator of MADMEC.
The $4,000 third prize went to Cerawave, a team made up mostly of undergraduates, plus one member the team jokingly called a “token grad student,” working to make ceramics in an ordinary kitchen microwave. Traditional ceramic manufacturing requires high-temperature kilns, a major source of energy use and carbon emissions. Cerawave added silicon carbide to their ceramic mix to help it absorb microwave energy and fuse into a durable final product.
“We threw it on the ground a few times, and it didn’t break,” says Merrill Chiang, a junior in DMSE, drawing laughs from the audience. The team now plans to refine their recipe and overall ceramic-making process so that hobbyists — and even users in environments like the International Space Station — could create ceramic parts “without buying really expensive furnaces.”
The power of student innovation
Although it didn’t earn a prize, the contest’s most futuristic project was ReForm Designs, which aims to make reusable children’s furniture — expensive and quickly outgrown — from modular blocks made of mycelium, the root-like vegetative part of a fungus. The team showed they could successfully produce mycelium blocks, but slow growth and sensitivity to moisture and temperature meant they didn’t yet have full furniture pieces to show judges.
The project still impressed DMSE senior David Miller, who calls the blocks “really intriguing,” with potential applications beyond furniture in manufacturing, construction, and consumer products.
“They adapt to the way we consume products, where a lot of us use products for one, two, three years before we throw them out,” Miller says. “Their capacity to be fully biodegradable and molded into any shape fills the need for certain kinds of additive manufacturing that requires certain shapes, while also being extremely sustainable.”
While the contest has produced successful startups, Tarkanian says MADMEC’s original goal — giving students a chance to get their hands dirty and pursue their own ideas — is thriving 18 years on, especially at a time when research budgets are being cut and science is under scrutiny.
“It gives students an opportunity to make things that are real and impactful to society,” he says. “So when you can build a prototype and say, ‘This is going to save X millions of dollars or X million pounds of waste,’ that value is obvious to everyone.”
Attendee Jinsung Kim, a postdoc in mechanical engineering, echoed Tarkanian’s comments, emphasizing the space set aside for innovative thinking.
“MADMEC creates the rare environment where students can experiment boldly, validate ideas quickly, and translate core scientific principles into solutions with real societal impact. To move society forward, we have to keep pushing the boundaries of technology and fundamental science,” he says.
MIT researchers “speak objects into existence” using AI and robotics
Generative AI and robotics are moving us ever closer to the day when we can ask for an object and have it created within a few minutes. In fact, MIT researchers have developed a speech-to-reality system, an AI-driven workflow that allows them to provide input to a robotic arm and “speak objects into existence,” creating things like furniture in as little as five minutes.
With the speech-to-reality system, a robotic arm mounted on a table receives spoken input from a human, such as “I want a simple stool,” and then constructs the object out of modular components. To date, the researchers have used the system to create stools, shelves, chairs, a small table, and even decorative items such as a dog statue.
“We’re connecting natural language processing, 3D generative AI, and robotic assembly,” says Alexander Htet Kyaw, an MIT graduate student and Morningside Academy for Design (MAD) fellow. “These are rapidly advancing areas of research that haven’t been brought together before in a way that you can actually make physical objects just from a simple speech prompt.”
The idea started when Kyaw — a graduate student in the departments of Architecture and Electrical Engineering and Computer Science — took Professor Neil Gershenfeld’s course, “How to Make Almost Anything.” In that class, he built the speech-to-reality system. He continued working on the project at the MIT Center for Bits and Atoms (CBA), directed by Gershenfeld, collaborating with graduate students Se Hwan Jeon of the Department of Mechanical Engineering and Miana Smith of CBA.
The speech-to-reality system begins with speech recognition that processes the user’s request using a large language model, followed by 3D generative AI that creates a digital mesh representation of the object, and a voxelization algorithm that breaks down the 3D mesh into assembly components.
After that, geometric processing modifies the AI-generated assembly to account for fabrication and physical constraints associated with the real world, such as the number of components, overhangs, and connectivity of the geometry. This is followed by creation of a feasible assembly sequence and automated path planning for the robotic arm to assemble physical objects from user prompts.
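The voxelization and sequencing steps can be sketched compactly. The example below is a minimal, self-contained illustration; the 5-centimeter module size and the random stand-in point cloud are assumptions for demonstration, not the team’s actual parameters or code.

```python
# Sketch of voxelization and bottom-up sequencing: surface points sampled
# from an AI-generated mesh are snapped to a grid of cube-shaped modules,
# then ordered so the arm always places cubes onto existing support.
import numpy as np

def voxelize(points: np.ndarray, module: float) -> set:
    """Map 3D surface points (N x 3, meters) to occupied grid cells."""
    return {tuple(c) for c in np.floor(points / module).astype(int)}

def assembly_order(cells: set) -> list:
    """Lowest layers first, so no cube is placed over empty space."""
    return sorted(cells, key=lambda c: (c[2], c[0], c[1]))

rng = np.random.default_rng(0)
seat = rng.uniform(0.0, 0.3, size=(2000, 3))  # crude 0.3 m "stool seat" sample
plan = assembly_order(voxelize(seat, module=0.05))
print(f"{len(plan)} modules; first placement at cell {plan[0]}")
```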
By leveraging natural language, the system makes design and manufacturing more accessible to people without expertise in 3D modeling or robotic programming. And, unlike 3D printing, which can take hours or days, this system builds within minutes.
“This project is an interface between humans, AI, and robots to co-create the world around us,” Kyaw says. “Imagine a scenario where you say ‘I want a chair,’ and within five minutes a physical chair materializes in front of you.”
The team has immediate plans to improve the weight-bearing capability of the furniture by changing the means of connecting the cubes from magnets to more robust connections.
“We’ve also developed pipelines for converting voxel structures into feasible assembly sequences for small, distributed mobile robots, which could help translate this work to structures at any size scale,” Smith says.
The purpose of using modular components is to eliminate the waste that goes into making physical objects: an assembly can be taken apart and its parts reassembled into something different, for instance turning a sofa into a bed when you no longer need the sofa.
Because Kyaw also has experience using gesture recognition and augmented reality to interact with robots in the fabrication process, he is currently working on incorporating both speech and gestural control into the speech-to-reality system.
Leaning into his memories of the replicator in the “Star Trek” franchise and the robots in the animated film “Big Hero 6,” Kyaw explains his vision.
“I want to increase access for people to make physical objects in a fast, accessible, and sustainable manner,” he says. “I’m working toward a future where the very essence of matter is truly in your control. One where reality can be generated on demand.”
The team presented their paper “Speech to Reality: On-Demand Production using Natural Language, 3D Generative AI, and Discrete Robotic Assembly” at the Association for Computing Machinery (ACM) Symposium on Computational Fabrication (SCF ’25) held at MIT on Nov. 21.
Cultivating confidence and craft across disciplines
Both Rohit Karnik and Nathan Wilmers personify the type of mentorship that any student would be fortunate to receive — one rooted in intellectual rigor and grounded in humility, empathy, and personal support. They show that transformative academic guidance is not only about solving research problems, but about lifting up the people working on them.
Whether it’s Karnik’s quiet integrity and commitment to scientific ethics, or Wilmers’ steadfast encouragement of his students in the face of challenges, both professors cultivate spaces where students are not only empowered to grow as researchers, but affirmed as individuals. Their mentees describe feeling genuinely seen and supported; mentored not just in theory or technique, but in resilience. It’s this attention to the human element that leaves a lasting impact.
Professors Karnik and Wilmers are two members of the 2023–25 Committed to Caring cohort who are cultivating confidence and craft across disciplines. The Committed to Caring program recognizes faculty who go above and beyond in mentoring MIT graduate students.
Rohit Karnik: Rooted in rigor, guided by care
Rohit Karnik is Abdul Latif Jameel Professor in the Department of Mechanical Engineering at MIT, where he leads the Microfluidics and Nanofluidics Research Group and serves as director of the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS). His research explores the physics of micro- and nanofluidic flows and systems. Applications of his work include the development of water filters, portable diagnostic tools, and sensors for environmental monitoring.
Karnik is genuinely excited about his students’ ideas, and open to their various academic backgrounds. He validates students by respecting their research, encouraging them to pursue their interests, and showing enthusiasm for their exploration within mechanical engineering and beyond.
One student reflected on the manner in which Karnik helped them feel more confident in their academic journey. When a student from a non-engineering field joined the mechanical engineering graduate program, Karnik never viewed their background as a barrier to success. The student wrote, “from the start, he was enthusiastic about my interdisciplinarity and the perspective I could bring to the lab.”
He allowed the student to take remedial undergraduate classes to learn engineering basics, provided guidance on leveraging their previous academic background, and encouraged them to write grants and apply for fellowships that would support their interdisciplinary work. In addition to these concrete supports, Karnik also provided the student with the freedom to develop their own ideas, offering constructive, realistic feedback on what was attainable.
“This transition took time, and Karnik honored that, prioritizing my growth in a completely new field over getting quick results,” the nominator reflected. Ultimately, Karnik’s mentorship, patience, and thoughtful encouragement led the student to excel in the engineering field.
Karnik encourages his advisees to explore their interests in mechanical engineering and beyond. This holistic approach extends beyond academics and into Karnik’s view of his students as whole individuals. One student wrote that he treats them as complete humans, with ambitions, aspirations, and passions worthy of his respect and consideration — and remains truly selfless in his commitment to their growth and success.
Karnik emphasizes that “it’s important to have dreams,” regularly encouraging his mentees to take advantage of opportunities that align with their goals and values. This sentiment is felt deeply by his students, with one nominator sharing that Karnik “encourag[ed] me to think broadly and holistically about my life, which has helped me structure and prioritize my time at MIT.”
Nathan Wilmers: Cultivating confidence, craft, and care
Nathan Wilmers is the Sarofim Family Career Development Associate Professor of Work and Organizations at MIT Sloan School of Management. His research spans wage and earnings inequality, economic sociology, and the sociology of labor, bringing insights from economic sociology to the study of labor markets and the wage structure. He is also affiliated with the Institute for Work and Employment Research and the Economic Sociology program at Sloan.
A remarkable mentor, Wilmers is known for guiding his students through different projects while also teaching them more broadly about the system of academia. As one nominator illustrates, “he … helped me learn the ‘tacit’ knowledge to understand how to write a paper,” while also emphasizing the learning process of the PhD as a whole, and never reprimanding any mistakes along the way.
Students say that Wilmers “reassures us that making mistakes is a natural part of the learning process and encourages us to continuously check, identify, and rectify them.” He welcomes all questions without judgment, and generously invests his time and patience in teaching students.
Wilmers is a strong advocate for his students, both academically and personally. He emphasizes the importance of learning, growth, and practical experience, rather than solely focusing on scholarly achievements and goals. Students feel this care, describing “an environment that maximizes learning opportunities and fosters the development of skills,” allowing them to truly collaborate rather than simply aim for the “right” answers.
In addition to his role in the classroom and lab, Wilmers also provides informal guidance to advisees, imparting valuable knowledge about the academic system, emphasizing the significance of networking, and sharing insider information.
“Nate’s down-to-earth nature is evident in his accessibility to students,” expressed one nominator, who wrote that “sometimes we can freely approach his office without an appointment and receive valuable advice on both work-related and personal matters.” Moreover, Wilmers prioritizes his advisees’ career advancement, dedicating a substantial amount of time to providing feedback on thesis projects, and even encouraging students to take a lead in publishing research.
True mentorship often lies in the patient, careful transmission of craft — the behind-the-scenes work that forms the backbone of rigorous research. “I care about the details,” says Wilmers, reflecting a philosophy shaped by his own graduate advisors. Wilmers’ mentors instilled in him a deep respect for the less-glamorous but essential elements of scholarly work: data cleaning, thoughtful analysis, and careful interpretation. These technical and analytical skills are where real learning happens, he believes.
By modeling this approach with his own students, Wilmers creates a culture where precision and discipline are valued just as much as innovation. His mentorship is grounded in the belief that becoming a good researcher requires not just vision, but also an intimate understanding of process — of how ideas are sharpened through methodical practice, and how impact comes from doing the small things well. His thoughtful, detail-oriented mentorship leaves a lasting impression on his students.
One nominator wrote, “Nate’s strong enthusiasm for my research, coupled with his expressed confidence and affirmation of its value, served as a significant source of motivation for me to persistently pursue my ideas.”
Robots that spare warehouse workers the heavy lifting
There are some jobs human bodies just weren’t meant to do. Unloading trucks and shipping containers is a repetitive, grueling task — and a big reason warehouse injury rates are more than twice the national average.
The Pickle Robot Company wants its machines to do the heavy lifting. The company’s one-armed robots autonomously unload trailers for warehouses of all types, picking up boxes weighing up to 50 pounds and placing them onto onboard conveyor belts.
The company name, an homage to The Apple Computer Company, hints at the ambitions of founders AJ Meyer ’09, Ariana Eisenstein ’15, SM ’16, and Dan Paluska ’97, SM ’00. The founders want to make the company the technology leader for supply chain automation.
The company’s unloading robots combine generative AI and machine-learning algorithms with sensors, cameras, and machine-vision software to navigate new environments on day one and improve performance over time. Much of the company’s hardware is adapted from industrial partners. You may recognize the arm, for instance, from car manufacturing lines — though you may not have seen it in bright pickle-green.
The company is already working with customers like UPS, Ryobi Tools, and Yusen Logistics to take a load off warehouse workers, freeing them to solve other supply chain bottlenecks in the process.
“Humans are really good edge-case problem solvers, and robots are not,” Paluska says. “How can the robot, which is really good at the brute force, repetitive tasks, interact with humans to solve more problems? Human bodies and minds are so adaptable, the way we sense and respond to the environment is so adaptable, and robots aren’t going to replace that anytime soon. But there’s so much drudgery we can get rid of.”
Finding problems for robots
Meyer and Eisenstein majored in computer science and electrical engineering at MIT, but they didn’t work together until after graduation, when Meyer started the technology consultancy Leaf Labs, which specializes in building embedded computer systems for things like robots, cars, and satellites.
“A bunch of friends from MIT ran that shop,” Meyer recalls, noting it’s still running today. “Ari worked there, Dan consulted there, and we worked on some big projects. We were the primary software and digital design team behind Project Ara, a smartphone for Google, and we worked on a bunch of interesting government projects. It was really a lifestyle company for MIT kids. But 10 years go by, and we thought, ‘We didn’t get into this to do consulting. We got into this to do robots.’”
When Meyer graduated in 2009, problems like robot dexterity seemed insurmountable. By 2018, the rise of algorithmic approaches like neural networks had brought huge advances to robotic manipulation and navigation.
To figure out what problem to solve with robots, the founders talked to people in industries as diverse as agriculture, food prep, and hospitality. At some point, they started visiting logistics warehouses, bringing a stopwatch to see how long it took workers to complete different tasks.
“In 2018, we went to a UPS warehouse and watched 15 guys unloading trucks during a winter night shift,” Meyer recalls. “We spoke to everyone, and not a single person had worked there for more than 90 days. We asked, ‘Why not?’ They laughed at us. They said, ‘Have you tried to do this job before?’”
It turns out warehouse turnover is one of the industry’s biggest problems, limiting productivity as managers constantly grapple with hiring, onboarding, and training.
The founders raised a seed funding round and built robots that could sort boxes because it was an easier problem that allowed them to work with technology like grippers and barcode scanners. Their robots eventually worked, but the company wasn’t growing fast enough to be profitable. Worse yet, the founders were having trouble raising money.
“We were desperately low on funds,” Meyer recalls. “So we thought, ‘Why spend our last dollar on a warm-up task?’”
With money dwindling, the founders built a proof-of-concept robot that could unload trucks reliably for about 20 seconds at a time and posted a video of it on YouTube. Hundreds of potential customers reached out. The interest was enough to get investors back on board to keep the company alive.
The company piloted its first unloading system for a year with a customer in the desert of California, sparing human workers from unloading shipping containers that can reach temperatures up to 130 degrees in the summer. It has since scaled deployments with multiple customers and gained traction among third-party logistics centers across the U.S.
The company’s robotic arm is made by the German industrial robotics giant KUKA. The robots are mounted on a custom mobile base with an onboard computing system so they can navigate to docks and adjust their positions inside trailers autonomously while lifting. The end of each arm features a suction gripper that clings to packages and moves them to the onboard conveyor belt.
The company’s robots can pick up anything from 5-inch cubes to 24-by-30-inch boxes, and can unload 400 to 1,500 cases per hour, depending on size and weight. The company fine-tunes pre-trained generative AI models and uses a number of smaller models to ensure the robot runs smoothly in every setting.
The company is also developing a software platform it can integrate with third-party hardware, from humanoid robots to autonomous forklifts.
“Our immediate product roadmap is load and unload,” Meyer says. “But we’re also hoping to connect these third-party platforms. Other companies are also trying to connect robots. What does it mean for the robot unloading a truck to talk to the robot palletizing, or for the forklift to talk to the inventory drone? Can they do the job faster? I think there’s a big network coming in which we need to orchestrate the robots and the automation across the entire supply chain, from the mines to the factories to your front door.”
“Why not us?”
The Pickle Robot Company employs about 130 people in its office in Charlestown, Massachusetts, where a standard — if green — office gives way to a warehouse where its robots can be seen loading boxes onto conveyor belts alongside human workers and manufacturing lines.
This summer, Pickle will be ramping up production of a new version of its system, with further plans to begin designing a two-armed robot sometime after that.
“My supervisor at Leaf Labs once told me ‘No one knows what they’re doing, so why not us?’” Eisenstein says. “I carry that with me all the time. I’ve been very lucky to be able to work with so many talented, experienced people in my career. They all bring their own skill sets and understanding. That’s a massive opportunity — and it’s the only way something as hard as what we’re doing is going to work.”
Moving forward, the company sees many other robot-shaped problems for its machines.
“We didn’t start out by saying, ‘Let’s load and unload a truck,’” Meyer says. “We said, ‘What does it take to make a great robot business?’ Unloading trucks is the first chapter. Now we’ve built a platform to make the next robot that helps with more jobs, starting in logistics but then ultimately in manufacturing, retail, and hopefully the entire supply chain.”
Alternate proteins from the same gene contribute differently to health and rare disease
Around 25 million Americans have rare genetic diseases, and many of them struggle with not only a lack of effective treatments, but also a lack of good information about their disease. Clinicians may not know what causes a patient’s symptoms or how the disease will progress, or may not even have a clear diagnosis. Researchers have looked to the human genome for answers, and many disease-causing genetic mutations have been identified, but as many as 70 percent of patients still lack a clear genetic explanation.
In a paper published in Molecular Cell on Nov. 7, Whitehead Institute for Biomedical Research member Iain Cheeseman, graduate student Jimmy Ly, and colleagues propose that researchers and clinicians may be able to get more information from patients’ genomes by looking at them in a different way.
The common wisdom is that each gene codes for one protein. Someone studying whether a patient has a mutation or version of a gene that contributes to their disease will therefore look for mutations that affect the “known” protein product of that gene. However, Cheeseman and others are finding that the majority of genes code for more than one protein. That means that a mutation that might seem insignificant because it does not appear to affect the known protein could nonetheless alter a different protein made by the same gene. Now, Cheeseman and Ly have shown that mutations affecting one or multiple proteins from the same gene can contribute differently to disease.
In their paper, the researchers first share what they have learned about how cells make use of the ability to generate different versions of proteins from the same gene. Then, they examine how mutations that affect these proteins contribute to disease. Through a collaboration with co-author Mark Fleming, the pathologist-in-chief at Boston Children’s Hospital, they provide two case studies of patients with atypical presentations of a rare anemia linked to mutations that selectively affect only one of two proteins produced by the gene implicated in the disease.
“We hope this work demonstrates the importance of considering whether a gene of interest makes multiple versions of a protein, and what the role of each version is in health and disease,” Ly says. “This information could lead to better understanding of the biology of disease, better diagnostics, and perhaps one day to tailored therapies to treat these diseases.”
Cells have several ways to make different versions of a protein, but the variation that Cheeseman and Ly study happens during protein production from genetic code. Cellular machines build each protein according to the instructions within a genetic sequence that begins at a “start codon” and ends at a “stop codon.” However, some genetic sequences contain more than one start codon, many of them hiding in plain sight. If the cellular machinery skips the first start codon and detects a second one, it may build a shorter version of the protein. In other cases, the machinery may detect a section that closely resembles a start codon at a point earlier in the sequence than its typical starting place, and build a longer version of the protein.
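To make the mechanism concrete, the toy script below (not from the study) scans an invented coding sequence for start codons and reads from each one to the stop codon, yielding a longer and a shorter “isoform” of the same hypothetical protein.

```python
# Toy illustration (not the study's code): one gene sequence, two start
# codons, two protein isoforms. The sequence below is invented.
seq = "GGATGGCCATGAAACCCGGGTTTTAA"  # hypothetical coding sequence (DNA alphabet)
STOP = {"TAA", "TAG", "TGA"}


def read_from(start: int) -> str:
    """Collect codons from `start` until hitting a stop codon."""
    codons = []
    for i in range(start, len(seq) - 2, 3):
        codon = seq[i:i + 3]
        if codon in STOP:
            break
        codons.append(codon)
    return "-".join(codons)


# Every ATG is a potential start; a downstream start yields a shorter isoform.
for s in (i for i in range(len(seq) - 2) if seq[i:i + 3] == "ATG"):
    print(f"start at position {s}: {read_from(s)}")
```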
These events may sound like mistakes: the cell’s machinery accidentally creating the wrong version of the correct protein. To the contrary, protein production from these alternate starting places is an important feature of cell biology that exists across species. When Ly traced when certain genes evolved to produce multiple proteins, he found that this is a common, robust process that has been preserved throughout evolutionary history for millions of years.
Ly shows that one function this serves is to send versions of a protein to different parts of the cell. Many proteins contain ZIP code-like sequences that tell the cell’s machinery where to deliver them so the proteins can do their jobs. Ly found many examples in which longer and shorter versions of the same protein contained different ZIP codes and ended up in different places within the cell.
In particular, Ly found many cases in which one version of a protein ended up in mitochondria, structures that provide energy to cells, while another version ended up elsewhere. Because of the mitochondria’s role in the essential process of energy production, mutations to mitochondrial genes are often implicated in disease.
Ly wondered what would happen when a disease-causing mutation eliminates one version of a protein but leaves the other intact, causing the protein to only reach one of its two intended destinations. He looked through a database containing genetic information from people with rare diseases to see if such cases existed, and found that they did. In fact, there may be tens of thousands of such cases. However, without access to the people, Ly had no way of knowing what the consequences of this were in terms of symptoms and severity of disease.
Meanwhile, Cheeseman, who is also a professor of biology at MIT, had begun working with Boston Children’s Hospital to foster collaborations between Whitehead Institute and the hospital’s researchers and clinicians to accelerate the pathway from research discovery to clinical application. Through these efforts, Cheeseman and Ly met Fleming.
One group of Fleming’s patients have a type of anemia called SIFD — sideroblastic anemia with B-cell immunodeficiency, periodic fevers, and developmental delay — that is caused by mutations to the TRNT1 gene. TRNT1 is one of the genes Ly had identified as producing a mitochondrial version of its protein and another version that ends up elsewhere: in the nucleus.
Fleming shared anonymized patient data with Ly, and Ly found two cases of interest in the genetic data. Most of the patients had mutations that impaired both versions of the protein, but one patient had a mutation that eliminated only the mitochondrial version of the protein, while another patient had a mutation that eliminated only the nuclear version.
When Ly shared his results, Fleming revealed that both of those patients had very atypical presentations of SIFD, supporting Ly’s hypothesis that mutations affecting different versions of a protein would have different consequences. The patient who only had the mitochondrial version was anemic, but developmentally normal. The patient missing the mitochondrial version of the protein did not have developmental delays or chronic anemia, but did have other immune symptoms, and was not correctly diagnosed until his 50s. There are likely other factors contributing to each patient’s exact presentation of the disease, but Ly’s work begins to unravel the mystery of their atypical symptoms.
Cheeseman and Ly want to make more clinicians aware of the prevalence of genes coding for more than one protein, so they know to check for mutations affecting any of the protein versions that could contribute to disease. For example, several TRNT1 mutations that only eliminate the shorter version of the protein are not flagged as disease-causing by current assessment tools. Cheeseman lab researchers, including Ly and graduate student Matteo Di Bernardo, are now developing a new assessment tool for clinicians, called SwissIsoform, that will identify relevant mutations that affect specific protein versions, including mutations that would otherwise be missed.
“Jimmy and Iain’s work will globally support genetic disease variant interpretation and help with connecting genetic differences to variation in disease symptoms,” Fleming says. “In fact, we have recently identified two other patients with mutations affecting only the mitochondrial versions of two other proteins, who similarly have milder symptoms than patients with mutations that affect both versions.”
Long term, the researchers hope that their discoveries could aid in understanding the molecular basis of disease and in developing new gene therapies: Once researchers understand what has gone wrong within a cell to cause disease, they are better equipped to devise a solution. More immediately, the researchers hope that their work will make a difference by providing better information to clinicians and people with rare diseases.
“As a basic researcher who doesn’t typically interact with patients, there’s something very satisfying about knowing that the work you are doing is helping specific people,” Cheeseman says. “As my lab transitions to this new focus, I’ve heard many stories from people trying to navigate a rare disease and just get answers, and that has been really motivating to us, as we work to provide new insights into the disease biology.”
MIT School of Engineering faculty and staff receive awards in summer 2025
Each year, faculty and researchers across the MIT School of Engineering are recognized with prestigious awards for their contributions to research, technology, society, and education. To celebrate these achievements, the school periodically highlights select honors received by members of its departments, institutes, labs, and centers. The following individuals were recognized in summer 2025:
Iwnetim Abate, the Chipman Career Development Professor and assistant professor in the Department of Materials Science and Engineering, was honored as one of MIT Technology Review’s 2025 Innovators Under 35. He was recognized for his research on sodium-ion batteries and ammonia production.
Daniel G. Anderson, the Joseph R. Mares (1924) Professor in the Department of Chemical Engineering and the Institute of Medical Engineering and Science (IMES), received the 2025 AIChE James E. Bailey Award. The award honors outstanding contributions in biological engineering and commemorates the pioneering work of James Bailey.
Regina Barzilay, the School of Engineering Distinguished Professor for AI and Health in the Department of Electrical Engineering and Computer Science (EECS), was named to Time’s AI100 2025 list, recognizing her groundbreaking work in AI and health.
Richard D. Braatz, the Edwin R. Gilliland Professor in the Department of Chemical Engineering, received the 2025 AIChE CAST Distinguished Service Award. The award recognizes exceptional service and leadership within the Computing and Systems Technology Division of AIChE.
Rodney Brooks, the Panasonic Professor of Robotics, Emeritus in the Department of Electrical Engineering and Computer Science, was elected to the National Academy of Sciences, one of the highest honors in scientific research.
Arup K. Chakraborty, the John M. Deutch (1961) Institute Professor in the Department of Chemical Engineering and IMES, received the 2025 AIChE Alpha Chi Sigma Award. This award honors outstanding accomplishments in chemical engineering research over the past decade.
Connor W. Coley, the Class of 1957 Career Development Professor and associate professor in the departments of Chemical Engineering and EECS, received the 2025 AIChE CoMSEF Young Investigator Award for Modeling and Simulation. The award recognizes outstanding research in computational molecular science and engineering. Coley was also one of 74 highly accomplished, early-career engineers selected to participate in the Grainger Foundation Frontiers of Engineering Symposium, a signature activity of the National Academy of Engineering.
Henry Corrigan-Gibbs, the Douglas Ross (1954) Career Development Professor of Software Technology and associate professor in the Department of EECS, received the Google ML and Systems Junior Faculty Award, presented to assistant professors who are leading the analysis, design, and implementation of efficient, scalable, secure, and trustworthy computing systems.
Christina Delimitrou, the KDD Career Development Professor in Communications and Technology and associate professor in the Department of EECS, received the Google ML and Systems Junior Faculty Award. The award supports assistant professors advancing scalable and trustworthy computing systems for machine learning and cloud computing.
Priya Donti, the Silverman (1968) Family Career Development Professor and assistant professor in the Department of EECS, was named to Time’s AI100 2025 list, which honors innovators reshaping the world through artificial intelligence.
Joel Emer, a professor of the practice in the Department of EECS, received the Alan D. Berenbaum Distinguished Service Award from ACM SIGARCH. He was honored for decades of mentoring and leadership in the computer architecture community.
Roger Greenwood Mark, the Distinguished Professor of Health Sciences and Technology, Emeritus in IMES, received the IEEE Biomedical Engineering Award for leadership in ECG signal processing and global dissemination of curated biomedical and clinical databases, thereby accelerating biomedical research worldwide.
Ali Jadbabaie, the JR East Professor and head of the Department of Civil and Environmental Engineering, received the 2025 Multidisciplinary University Research Initiative (MURI) award for research projects in areas of critical importance to national defense.
Yoon Kim, associate professor in the Department of EECS, received the Google ML and Systems Junior Faculty Award, presented to assistant professors who are leading the analysis, design, and implementation of efficient, scalable, secure, and trustworthy computing systems.
Mathias Kolle, an associate professor in the Department of Mechanical Engineering, received the 2025 Multidisciplinary University Research Initiative (MURI) award for research projects in areas of critical importance to national defense.
Muriel Médard, the NEC Professor of Software Science and Engineering in the Department of EECS, was elected an International Fellow of the United Kingdom's Royal Academy of Engineering. The honor recognizes exceptional contributions to engineering and technology across sectors.
Pablo Parrilo, the Joseph F. and Nancy P. Keithley Professor in Electrical Engineering in the Department of EECS, received the 2025 INFORMS Computing Society Prize. The award honors outstanding contributions at the interface of computing and operations research. Parrilo was recognized for pioneering work on accelerating gradient descent through stepsize hedging, introducing concepts such as Silver Stepsizes and recursive gluing.
Nidhi Seethapathi, the Frederick A. (1971) and Carole J. Middleton Career Development Professor of Neuroscience and assistant professor in the Department of EECS, was named to MIT Technology Review’s “2025 Innovators Under 35” list. The honor celebrates early-career scientists and entrepreneurs driving real-world impact.
Justin Solomon, an associate professor in the Department of EECS, was named a 2025 Schmidt Science Polymath. The award supports novel, early-stage research across disciplines, including acoustics and climate simulation.
Martin Staadecker, a research assistant in the Sustainable Supply Chain Lab, received the MIT-GE Vernova Energy and Climate Alliance Technology and Policy Program Project Award. The award recognizes his work on Scope 3 emissions and sustainable supply chain practices.
Antonio Torralba, the Delta Electronics Professor and faculty head of AI+D in the Department of EECS, received the 2025 Multidisciplinary University Research Initiative (MURI) award for research projects in areas of critical importance to national defense.
Ryan Williams, a professor in the Department of EECS, received the Best Paper Award at STOC 2025 for his paper “Simulating Time With Square-Root Space,” recognized for its technical merit and originality. Williams was also selected as a Member of the Institute for Advanced Study for the 2025–26 academic year. The prestigious fellowship recognizes the significance of members’ work and provides an opportunity to advance their research and exchange ideas with scholars from around the world.
Gioele Zardini, the Rudge (1948) and Nancy Allen Career Development Professor in the Department of Civil and Environmental Engineering, received the 2025 DARPA Young Faculty Award. The award supports rising stars among early-career faculty, helping them develop research ideas aligned with national security needs.
Revisiting a revolution through poetry
There are several narratives surrounding the American Revolution, a well-traveled and -documented series of events leading to the drafting and signing of the Declaration of Independence and the war that followed.
MIT philosopher Brad Skow is taking a new approach to telling this story: a collection of 47 poems about the former American colonies’ journey from England’s imposition of the Stamp Act in 1765 to the war for America’s independence that began in 1775.
When asked why he chose poetry to retell the story, Skow, the Laurence S. Rockefeller Professor in the Department of Linguistics and Philosophy, said he “wanted to take just the great bits of these speeches and writings, while maintaining their intent and integrity.” Poetry, Skow argues, allows for that kind of nuance and specificity.
“American Independence in Verse,” published by Pentameter Press, traces a story of America’s origins through a collection of vignettes featuring some well-known characters, like politician and orator Patrick Henry, alongside some lesser-known but no less important ones, like royalist and former chief justice of North Carolina Martin Howard. Each is rendered in blank verse, a nursery-style rhyme, or free verse.
The book is divided into three segments: “Taxation Without Representation,” “Occupation and Massacre,” and “War and Independence.” Themes like freedom, government, and authority, rendered in a style of writing and oratory seldom seen today, lent themselves to being reimagined as poems. “The options available with poetic license offer opportunities for readers that might prove more difficult with prose,” Skow reports.
Skow based each of the poems on actual speeches, letters, pamphlets, and other printed materials produced by people on both sides of the debate about independence. “While reviewing a variety of primary sources for the book, I began to see the poetry in them,” he says.
In the poem “Everywhere, the spirit of equality prevails,” during an “Interlude” between the “Occupation and Massacre” and “War and Independence” sections of the book, British commissioner of customs Henry Hulton, writing to Robert Nicholson in Liverpool, England, describes the America he experienced during a trip with his wife:
The spirit of equality prevails.
Regarding social differences, they’ve no
Notion of rank, and will show more respect
To one another than to those above them.
They’ll ask a thousand strange impertinent
Questions, sit down when they should wait at a table,
React with puzzlement when you do not
Invite your valet to come share your meal.
Here, Skow, using Hulton’s words, illustrates the tension between agreed-upon social conventions, remnants of the Old World, and the society being built in the New World, a tension that animates part of the disconnect that led the two sides toward war. “These writings are really powerful, and poetry offers a way to convey that power,” Skow says.
The journey to the printed page
Skow’s interest in exploring the American Revolution came, in part, from watching the Tony Award-winning musical “Hamilton.” The book ends where the musical begins. “It led me to want to learn more,” he says of the show and his experience watching it. “Its focus on the Revolution made the era more exciting for me.”
While conducting research for another poetry project, Skow read an interview with American diplomat, inventor, and publisher Benjamin Franklin in the House of Commons conducted in 1766. “There were lots of amazing poetic moments in the interview,” he says. Skow began reading additional pamphlets, letters, and other writings, disconnecting his work as a philosopher from the research that would yield the book.
“I wanted to remove my philosopher hat with this project,” he says. “Poetry can encourage ambiguity and, unlike philosophy, can focus on emotional and non-rational connections between ideas.”
Although eager to approach the work as a poet and author, rather than a philosopher, Skow discovered that more primary sources than he expected were themselves often philosophical treatises. “Early in the resistance movement there were sophisticated arguments, often printed in newspapers, that it was unjust to tax the colonies without granting them representation in Parliament,” he notes.
A series of new perspectives and lessons
Skow made some discoveries that further enhanced his passion for the project. “Samuel Adams is an important figure who isn’t as well-known as he should be,” he says. “I wanted to raise his profile.”
Skow also notes that American separatists used strong-arm tactics to “encourage” support for independence, and that prevailing narratives regarding America and its eventual separation from England are more complex and layered than we might believe. “There were arguments underway about legitimate forms of government and which kind of government was right,” he says, “and many Americans wanted to retain the existing relationship with England.”
Skow says the American Revolution is a useful benchmark when considering subsequent political movements, a notion he hopes readers will take away from the book. “The book is meant to be fun and not just a collection of dry, abstract ideas,” he says.
“There’s a simple version of the independence story we tell when we’re in a hurry; and there is the more complex truth, printed in long history books,” he continues. “I wanted to write something that was both short and included a variety of perspectives.”
Skow believes the book and its subjects are a testament to ideas he’d like to see return to political and practical discourse. “The ideals around which this country rallied for its independence are still good ideals, and the courage the participants exhibited is still worth admiring,” he says.
What’s the best way to expand the US electricity grid?
Growing energy demand means the U.S. will almost certainly have to expand its electricity grid in coming years. What’s the best way to do this? A new study by MIT researchers examines legislation introduced in Congress and identifies relative tradeoffs involving reliability, cost, and emissions, depending on the proposed approach.
The researchers evaluated two policy approaches to expanding the U.S. electricity grid: One would concentrate on regions with more renewable energy sources, and the other would create more interconnections across the country. For instance, some of the best untapped wind-power resources in the U.S. lie in the center of the country, so one type of grid expansion would situate relatively more grid infrastructure in those regions. Alternatively, the other scenario involves building more infrastructure everywhere in roughly equal measure, which the researchers call the “prescriptive” approach. How does each pencil out?
After extensive modeling, the researchers found that a grid expansion could make improvements on all fronts, with each approach offering different advantages. A more geographically unbalanced grid buildout would be 1.13 percent less expensive, and would reduce carbon emissions by 3.65 percent compared to the prescriptive approach. And yet, the prescriptive approach, with more national interconnection, would significantly reduce power outages due to extreme weather, among other things.
“There’s a tradeoff between the two things that are most on policymakers’ minds: cost and reliability,” says Christopher Knittel, an economist at the MIT Sloan School of Management, who helped direct the research. “This study makes it more clear that the more prescriptive approach ends up being better in the face of extreme weather and outages.”
The paper, “Implications of Policy-Driven Transmission Expansion on Costs, Emissions and Reliability in the United States,” is published today in Nature Energy.
The authors are Juan Ramon L. Senga, a postdoc in the MIT Center for Energy and Environmental Policy Research; Audun Botterud, a principal research scientist in the MIT Laboratory for Information and Decision Systems; John E. Parsons, the deputy director for research at MIT’s Center for Energy and Environmental Policy Research; Drew Story, the managing director at MIT’s Policy Lab; and Knittel, who is the George P. Shultz Professor at MIT Sloan and associate dean for climate and sustainability at MIT.
The new study is a product of the MIT Climate Policy Center, housed within MIT Sloan and committed to bipartisan research on energy issues. The center is also part of the Climate Project at MIT, founded in 2024 as a high-level Institute effort to develop practical climate solutions.
In this case, the project was developed from work the researchers did with federal lawmakers who have introduced legislation aimed at bolstering and expanding the U.S. electric grid. One of these bills, the BIG WIRES Act, co-sponsored by Sen. John Hickenlooper of Colorado and Rep. Scott Peters of California, would require each transmission region in the U.S. to be able to send at least 30 percent of its peak load to other regions by 2035.
That would represent a substantial change for the national transmission system, where grids have largely been developed regionally, without much national oversight.
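To see how such a requirement could be checked, consider the minimal sketch below; the region names and megawatt figures are invented for illustration, not taken from the bill or the study.

```python
# Toy check of a BIG WIRES-style rule: each region must be able to send at
# least 30 percent of its peak load to other regions. All numbers invented.
peak_load_mw = {"Region A": 85_000, "Region B": 127_000, "Region C": 155_000}
transfer_capacity_mw = {"Region A": 9_000, "Region B": 41_000, "Region C": 52_000}
REQUIRED_FRACTION = 0.30

for region, peak in peak_load_mw.items():
    required = REQUIRED_FRACTION * peak
    shortfall = max(0.0, required - transfer_capacity_mw[region])
    status = "meets target" if shortfall == 0 else f"needs {shortfall:,.0f} MW more"
    print(f"{region}: {status}")
```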
“The U.S. grid is aging and it needs an upgrade,” Senga says. “Implementing these kinds of policies is an important step for us to get to that future where we improve the grid, lower costs, lower emissions, and improve reliability. Some progress is better than none, and in this case, it would be important.”
To conduct the study, the researchers looked at how policies like the BIG WIRES Act would affect energy distribution. The scholars used GenX, a model of energy generation developed at the MIT Energy Initiative, to examine the changes proposed by the legislation.
With a 30 percent level of interregional connectivity, the study estimates, the number of outages due to extreme cold would drop by 39 percent, for instance, a substantial increase in reliability. That would help avoid scenarios such as the one Texas experienced in 2021, when winter storms damaged distribution capacity.
“Reliability is what we find to be most salient to policymakers,” Senga says.
On the other hand, as the paper details, a future grid that is “optimized” with more transmission capacity near geographic spots of new energy generation would be less expensive.
“On the cost side, this kind of optimized system looks better,” Senga says.
A more geographically imbalanced grid would also have a greater impact on reducing emissions. Globally, the levelized costs of solar and wind power dropped by 89 percent and 69 percent, respectively, from 2010 to 2022, meaning that incorporating less-expensive renewables into the grid would help with both cost and emissions.
“On the emissions side, a priori it’s not clear the optimized system would do better, but it does,” Knittel says. “That’s probably tied to cost, in the sense that it’s building more transmission links to where the good, cheap renewable resources are, because they’re cheap. Emissions fall when you let the optimizing action take place.”
To be sure, these two differing approaches to grid expansion are not the only paths forward. The study also examines a hybrid approach, which involves both national interconnectivity requirements and local buildouts based around new power sources on top of that. Still, the model does show that there may be some tradeoffs lawmakers will want to consider when developing and considering future grid legislation.
“You can find a balance between these factors, where you’re still going to have an increase in reliability while also getting the cost and emission reductions,” Senga observes.
For his part, Knittel emphasizes that working with legislation as the basis for academic studies, while not generally common, can be productive for everyone involved. Scholars get to apply their research tools and models to real-world scenarios, and policymakers get a sophisticated evaluation of how their proposals would work.
“Compared to the typical academic path to publication, this is different, but at the Climate Policy Center, we’re already doing this kind of research,” Knittel says.
A smarter way for large language models to think about hard problems
To make large language models (LLMs) more accurate when answering harder questions, researchers can let the model spend more time thinking about potential solutions.
But common approaches that give LLMs this capability set a fixed computational budget for every problem, regardless of how complex it is. This means the LLM might waste computational resources on simpler questions or be unable to tackle intricate problems that require more reasoning.
To address this, MIT researchers developed a smarter way to allocate computational effort as the LLM solves a problem. Their method enables the model to dynamically adjust its computational budget based on the difficulty of the question and the likelihood that each partial solution will lead to the correct answer.
The researchers found that their new approach enabled LLMs to use as little as half the computation of existing methods, while achieving comparable accuracy on a range of questions with varying difficulty. In addition, their method allows smaller, less resource-intensive LLMs to perform as well as or even better than larger models on complex problems.
By improving the reliability and efficiency of LLMs, especially when they tackle complex reasoning tasks, this technique could reduce the energy consumption of generative AI systems and enable the use of LLMs in more high-stakes and time-sensitive applications.
“The computational cost of inference has quickly become a major bottleneck for frontier model providers, and they are actively trying to find ways to improve computational efficiency per user query. For instance, the recent GPT-5.1 release highlights the efficacy of the ‘adaptive reasoning’ approach our paper proposes. By endowing the models with the ability to know what they don’t know, we can enable them to spend more compute on the hardest problems and most promising solution paths, and use far fewer tokens on easy ones. That makes reasoning both more reliable and far more efficient,” says Navid Azizan, the Alfred H. and Jean M. Hayes Career Development Assistant Professor in the Department of Mechanical Engineering and the Institute for Data, Systems, and Society (IDSS), a principal investigator of the Laboratory for Information and Decision Systems (LIDS), and the senior author of a paper on this technique.
Azizan is joined on the paper by lead author Young-Jin Park, a LIDS/MechE graduate student; Kristjan Greenewald, a research scientist in the MIT-IBM Watson AI Lab; Kaveh Alim, an IDSS graduate student; and Hao Wang, a research scientist at the MIT-IBM Watson AI Lab and the Red Hat AI Innovation Team. The research is being presented this week at the Conference on Neural Information Processing Systems.
Computation for contemplation
A recent approach called inference-time scaling lets a large language model take more time to reason about difficult problems.
Using inference-time scaling, the LLM might generate multiple solution attempts at once or explore different reasoning paths, then choose the best ones to pursue from those candidates.
A separate model, known as a process reward model (PRM), scores each potential solution or reasoning path. The LLM uses these scores to identify the most promising ones.
Typical inference-time scaling approaches assign a fixed amount of computation for the LLM to break the problem down and reason about the steps.
Instead, the researchers’ method, known as instance-adaptive scaling, dynamically adjusts the number of potential solutions or reasoning steps based on how likely they are to succeed, as the model wrestles with the problem.
“This is how humans solve problems. We come up with some partial solutions and then decide, should I go further with any of these, or stop and revise, or even go back to my previous step and continue solving the problem from there?” Wang explains.
To do this, the framework uses the PRM to estimate the difficulty of the question, helping the LLM assess how much computational budget to utilize for generating and reasoning about potential solutions.
At every step in the model’s reasoning process, the PRM looks at the question and partial answers and evaluates how promising each one is for getting to the right solution. If the LLM is more confident, it can reduce the number of potential solutions or reasoning trajectories to pursue, saving computational resources.
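A minimal sketch of that pruning logic might look like the following. The stand-in PRM, the made-up candidate solutions, and the simple keep-fraction rule are all assumptions for illustration; the paper’s actual scoring and budgeting are more sophisticated.

```python
# Sketch of instance-adaptive pruning (details assumed, not the paper's code):
# at each step, keep fewer candidate paths when the PRM is more confident.
import random

random.seed(0)


def prm_score(candidate: str) -> float:
    """Stand-in for a process reward model's estimated success probability."""
    return random.random()


def adaptive_step(candidates: list[str], min_keep: int = 1) -> list[str]:
    scored = sorted(((prm_score(c), c) for c in candidates), reverse=True)
    best_score = scored[0][0]
    # High confidence in the leader -> prune hard; low confidence -> explore.
    keep = max(min_keep, round(len(scored) * (1.0 - best_score)))
    return [c for _, c in scored[:keep]]


paths = [f"partial solution {i}" for i in range(8)]
for step in range(3):
    paths = adaptive_step([p + " + next step" for p in paths])
    print(f"step {step}: {len(paths)} candidate paths kept")
```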
But the researchers found that existing PRMs often overestimate the model’s probability of success.
Overcoming overconfidence
“If we were to just trust current PRMs, which often overestimate the chance of success, our system would reduce the computational budget too aggressively. So we first had to find a way to better calibrate PRMs to make inference-time scaling more efficient and reliable,” Park says.
The researchers introduced a calibration method that enables PRMs to generate a range of probability scores rather than a single value. In this way, the PRM creates more reliable uncertainty estimates that better reflect the true probability of success.
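Conceptually, a calibrated score might be used as in the toy snippet below, where the fixed uncertainty width is an invented placeholder rather than the paper’s actual calibration procedure.

```python
# Toy version of the calibration idea: report a probability range instead of
# a single, possibly overconfident score. The 0.15 width is an invented
# placeholder, not the paper's calibration procedure.
def calibrated_range(raw_score: float, uncertainty: float = 0.15) -> tuple[float, float]:
    return max(0.0, raw_score - uncertainty), min(1.0, raw_score + uncertainty)


raw = 0.92                       # raw PRM output: looks like a near-sure win
lo, hi = calibrated_range(raw)   # calibrated view: (0.77, 1.0)
# Budget decisions key off `lo`, so the system only prunes aggressively when
# even the pessimistic end of the range says the path is likely to succeed.
print(f"raw={raw:.2f}, calibrated range=({lo:.2f}, {hi:.2f})")
```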
With a well-calibrated PRM, their instance-adaptive scaling framework can use the probability scores to effectively reduce computation while maintaining the accuracy of the model’s outputs.
When they compared their method to standard inference-time scaling approaches on a series of mathematical reasoning tasks, it utilized less computation to solve each problem while achieving similar accuracy.
“The beauty of our approach is that this adaptation happens on the fly, as the problem is being solved, rather than happening all at once at the beginning of the process,” says Greenewald.
In the future, the researchers are interested in applying this technique to other applications, such as code generation and AI agents. They are also planning to explore additional uses for their PRM calibration method, like for reinforcement learning and fine-tuning.
“Human employees learn on the job — some CEOs even started as interns — but today’s agents remain largely static pieces of probabilistic software. Work like this paper is an important step toward changing that: helping agents understand what they don’t know and building mechanisms for continual self-improvement. These capabilities are essential if we want agents that can operate safely, adapt to new situations, and deliver consistent results at scale,” says Akash Srivastava, director and chief architect of Core AI at IBM Software, who was not involved with this work.
This work was funded, in part, by the MIT-IBM Watson AI Lab, the MIT-Amazon Science Hub, the MIT-Google Program for Computing Innovation, and MathWorks.
MIT engineers design an aerial microrobot that can fly as fast as a bumblebee
In the future, tiny flying robots could be deployed to aid in the search for survivors trapped beneath the rubble after a devastating earthquake. Like real insects, these robots could flit through tight spaces larger robots can’t reach, while simultaneously dodging stationary obstacles and pieces of falling rubble.
So far, aerial microrobots have only been able to fly slowly along smooth trajectories, far from the swift, agile flight of real insects — until now.
MIT researchers have demonstrated aerial microrobots that can fly with speed and agility comparable to that of their biological counterparts. A collaborative team designed a new AI-based controller for the robotic bug that enabled it to follow gymnastic flight paths, such as executing continuous body flips.
With a two-part control scheme that combines high performance with computational efficiency, the robot’s speed and acceleration increased by about 450 percent and 250 percent, respectively, compared to the researchers’ best previous demonstrations.
The speedy robot was agile enough to complete 10 consecutive somersaults in 11 seconds, even when wind disturbances threatened to push it off course.
“We want to be able to use these robots in scenarios that more traditional quadcopter robots would have trouble flying into, but that insects could navigate. Now, with our bioinspired control framework, the flight performance of our robot is comparable to insects in terms of speed, acceleration, and the pitching angle. This is quite an exciting step toward that future goal,” says Kevin Chen, an associate professor in the Department of Electrical Engineering and Computer Science (EECS), head of the Soft and Micro Robotics Laboratory within the Research Laboratory of Electronics (RLE), and co-senior author of a paper on the robot.
Chen is joined on the paper by co-lead authors Yi-Hsuan Hsiao, an EECS MIT graduate student; Andrea Tagliabue PhD ’24; and Owen Matteson, a graduate student in the Department of Aeronautics and Astronautics (AeroAstro); as well as EECS graduate student Suhan Kim; Tong Zhao MEng ’23; and co-senior author Jonathan P. How, the Ford Professor of Engineering in the Department of Aeronautics and Astronautics and a principal investigator in the Laboratory for Information and Decision Systems (LIDS). The research appears today in Science Advances.
An AI controller
Chen’s group has been building robotic insects for more than five years.
They recently developed a more durable version of their tiny robot, a microcassette-sized device that weighs less than a paperclip. The new version uses larger flapping wings that enable more agile movements. The wings are powered by a set of squishy artificial muscles that flap them at an extremely fast rate.
But the controller — the “brain” of the robot that determines its position and tells it where to fly — was hand-tuned by a human, limiting the robot’s performance.
For the robot to fly quickly and aggressively like a real insect, it needed a more robust controller that could account for uncertainty and perform complex optimizations quickly.
Such a controller would be too computationally intensive to be deployed in real time, especially with the complicated aerodynamics of the lightweight robot.
To overcome this challenge, Chen’s group joined forces with How’s team and, together, they crafted a two-step, AI-driven control scheme that provides the robustness necessary for complex, rapid maneuvers, and the computational efficiency needed for real-time deployment.
“The hardware advances pushed the controller so there was more we could do on the software side, but at the same time, as the controller developed, there was more they could do with the hardware. As Kevin’s team demonstrates new capabilities, we demonstrate that we can utilize them,” How says.
For the first step, the team built what is known as a model-predictive controller. This type of powerful controller uses a dynamic, mathematical model to predict the behavior of the robot and plan the optimal series of actions to safely follow a trajectory.
While computationally intensive, it can plan challenging maneuvers like aerial somersaults, rapid turns, and aggressive body tilting. This high-performance planner is also designed to consider constraints on the force and torque the robot could apply, which is essential for avoiding collisions.
For instance, to perform multiple flips in a row, the robot would need to decelerate in such a way that its initial conditions are exactly right for doing the flip again.
“If small errors creep in, and you try to repeat that flip 10 times with those small errors, the robot will just crash. We need to have robust flight control,” How says.
Through a process called imitation learning, they use this expert planner to train a “policy” based on a deep-learning model that controls the robot in real time. A policy is the robot’s decision-making engine, which tells the robot where and how to fly.
Essentially, the imitation-learning process compresses the powerful controller into a computationally efficient AI model that can run very fast.
The key was having a smart way to create just enough training data, which would teach the policy everything it needs to know for aggressive maneuvers.
“The robust training method is the secret sauce of this technique,” How explains.
The AI-driven policy takes robot positions as inputs and outputs control commands in real time, such as thrust force and torques.
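As a rough picture of that compression, here is a minimal behavior-cloning sketch in Python using PyTorch: a small network is trained to reproduce an expert planner's actions from the states it visited, so a single cheap forward pass replaces a full optimization at flight time. The state and action dimensions, network size, and random placeholder data are assumptions for illustration, not the paper's architecture or training set.

```python
# A minimal behavior-cloning (imitation learning) sketch: distill an
# expensive expert planner into a fast neural-network policy.
# Dimensions, architecture, and data are illustrative assumptions.
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM = 12, 4  # e.g., pose/velocity in; thrust and torques out

policy = nn.Sequential(        # small MLP: cheap enough for real-time control
    nn.Linear(STATE_DIM, 64), nn.ReLU(),
    nn.Linear(64, 64), nn.ReLU(),
    nn.Linear(64, ACTION_DIM),
)

# Placeholder dataset; in practice, the states visited by the expert MPC
# planner and the actions it chose would be collected from trajectories.
states = torch.randn(4096, STATE_DIM)
expert_actions = torch.randn(4096, ACTION_DIM)

opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
for epoch in range(100):
    loss = nn.functional.mse_loss(policy(states), expert_actions)
    opt.zero_grad()
    loss.backward()
    opt.step()

# At flight time, one forward pass yields the control command.
command = policy(torch.randn(1, STATE_DIM))
```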
Insect-like performance
In their experiments, this two-step approach enabled the insect-scale robot to fly 447 percent faster while exhibiting a 255 percent increase in acceleration. The robot completed 10 somersaults in 11 seconds and never strayed more than 4 or 5 centimeters from its planned trajectory.
“This work demonstrates that soft and microrobots, traditionally limited in speed, can now leverage advanced control algorithms to achieve agility approaching that of natural insects and larger robots, opening up new opportunities for multimodal locomotion,” says Hsiao.
The researchers were also able to demonstrate saccade movement, which occurs when insects pitch very aggressively, fly rapidly to a certain position, and then pitch the other way to stop. This rapid acceleration and deceleration help insects localize themselves and see clearly.
“This bio-mimicking flight behavior could help us in the future when we start putting cameras and sensors on board the robot,” Chen says.
Adding sensors and cameras so the microrobots can fly outdoors, without being attached to a complex motion capture system, will be a major area of future work.
The researchers also want to study how onboard sensors could help the robots avoid colliding with one another or coordinate their navigation.
“For the micro-robotics community, I hope this paper signals a paradigm shift by showing that we can develop a new control architecture that is high-performing and efficient at the same time,” says Chen.
“This work is especially impressive because these robots still perform precise flips and fast turns despite the large uncertainties that come from relatively large fabrication tolerances in small-scale manufacturing, wind gusts of more than 1 meter per second, and even its power tether wrapping around the robot as it performs repeated flips,” says Sarah Bergbreiter, a professor of mechanical engineering at Carnegie Mellon University, who was not involved with this work.
“Although the controller currently runs on an external computer rather than onboard the robot, the authors demonstrate that similar, but less precise, control policies may be feasible even with the more limited computation available on an insect-scale robot. This is exciting because it points toward future insect-scale robots with agility approaching that of their biological counterparts,” she adds.
This research is funded, in part, by the National Science Foundation (NSF), the Office of Naval Research, the Air Force Office of Scientific Research, MathWorks, and the Zakhartchenko Fellowship.
Staying stable
With every step we take, our brains are already thinking about the next one. If a bump in the terrain or a minor misstep has thrown us off balance, our stride may need to be altered to prevent a fall. Our two-legged posture makes maintaining stability particularly complex, a challenge our brains solve in part by continually monitoring our bodies and adjusting where we place our feet.
Now, scientists at MIT have determined that animals with very different bodies likely use a shared strategy to balance themselves when they walk.
Nidhi Seethapathi, the Frederick A. and Carole J. Middleton Career Development Assistant Professor in Brain and Cognitive Sciences and Electrical Engineering and Computer Science at MIT, and K. Lisa Yang ICoN Center Fellow Antoine De Comite found that humans, mice, and fruit flies all use an error-correction process to guide foot placement and maintain stability while walking. Their findings, published Oct. 21 in the journal PNAS, could inform future studies exploring how the brain achieves stability during locomotion — bridging the gap between animal models and human balance.
Corrective action
To keep us upright when we walk or run, the brain must integrate many streams of information. Our steps must be continually adjusted according to the terrain, our desired speed, and our body’s current velocity and position in space.
“We rely on a combination of vestibular, proprioceptive, and visual information to build an estimate of our body’s state, determining if we are about to fall. Once we know the body’s state, we can decide which corrective actions to take,” explains Seethapathi, who is also an associate investigator at the McGovern Institute for Brain Research.
While humans are known to adjust where they place their feet to correct for errors, it was not known whether animals whose bodies are more stable do this, too.
To find out, Seethapathi and De Comite, who is a postdoc in Seethapathi’s and Guoping Feng’s labs at the McGovern Institute, turned to locomotion data from mice, fruit flies, and humans shared by other labs, enabling an analysis across species that would otherwise be challenging. Importantly, Seethapathi notes, all the animals they studied were walking in everyday natural environments, such as around a room — not on a treadmill or over unusual terrain.
Even in these ordinary circumstances, missteps and minor imbalances are common, and the team’s analysis showed that these errors predicted where all of the animals placed their feet in subsequent steps, regardless of whether they had two, four, or six legs.
One foot in front of another
By tracking the animals’ bodies and the step-by-step placement of their feet, Seethapathi and De Comite were able to find a measure of error that informs each animal’s next step. “By taking this comparative approach, we’ve forced ourselves to come up with a definition of error that generalizes across species,” Seethapathi says. “An animal moves with an expected body state for a particular speed. If it deviates from that ideal state, that deviation — at any given moment — is the error.”
“It was surprising to find similarities across these three species, which, at first sight, look very different,” says De Comite. “The methods themselves are surprising because we now have a pipeline to analyze foot placement and locomotion stability in any legged species, which could lead to similar analyses in even more species in the future.”
The team’s data suggest that in all of the species in the study, placement of the feet is guided both by an error-correction process and the speed at which an animal is traveling. Steps tend to lengthen and feet spend less time on the ground as animals pick up their pace, while the width of each step seems to change largely to compensate for body-state errors.
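To make that error definition concrete, the sketch below shows, in Python, one way such an analysis could be set up: fit the expected body state as a function of walking speed, treat each deviation from it as the error, and test whether the error on one step predicts foot placement on the next. The synthetic data and simple linear fits are illustrative assumptions, not the study's actual pipeline.

```python
# An illustrative sketch of the error-correction analysis, with synthetic
# data standing in for real locomotion recordings.
import numpy as np

rng = np.random.default_rng(0)
n_steps = 500
speed = rng.uniform(0.5, 1.5, n_steps)  # walking speed at each step
# Hypothetical body-state variable (e.g., lateral velocity) scaling with speed.
body_state = 0.8 * speed + 0.05 * rng.standard_normal(n_steps)

# Expected body state for a given speed: a simple linear trend.
coef = np.polyfit(speed, body_state, deg=1)
error = body_state - np.polyval(coef, speed)  # deviation from expected = "error"

# Synthetic foot placement that corrects for the previous step's error.
foot_width = np.empty(n_steps)
foot_width[0] = 0.1
foot_width[1:] = 0.1 + 0.6 * error[:-1] + 0.02 * rng.standard_normal(n_steps - 1)

# Regress next-step foot placement on the current error; a nonzero slope
# indicates corrective, error-driven stepping.
gain = np.polyfit(error[:-1], foot_width[1:], deg=1)[0]
print(f"error-to-foot-placement gain: {gain:.2f}")
```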
Now, Seethapathi says, we can look forward to future studies to explore how the dual control systems might be generated and integrated in the brain to keep moving bodies stable.
Studying how brains help animals move stably may also guide the development of more-targeted strategies to help people improve their balance and, ultimately, prevent falls.
“In elderly individuals and individuals with sensorimotor disorders, minimizing fall risk is one of the major functional targets of rehabilitation,” says Seethapathi. “A fundamental understanding of the error-correction process that helps us remain stable will provide insight into why this process falls short in populations with neural deficits.”
New bioadhesive strategy can prevent fibrous encapsulation around device implants on peripheral nerves
Peripheral nerves — the network connecting the brain and spinal cord to the rest of the body — transmit sensory information, control muscle movements, and regulate automatic bodily functions. Bioelectronic devices implanted on these nerves offer remarkable potential for the treatment and rehabilitation of neurological and systemic diseases. However, because the body perceives these implants as foreign objects, they often trigger the formation of dense fibrotic tissue at bioelectronic device–tissue interfaces, which can significantly compromise device performance and longevity.
New research published in the journal Science Advances presents a robust bioadhesive strategy that establishes non-fibrotic bioelectronic interfaces on diverse peripheral nerves — including the occipital, vagus, deep peroneal, sciatic, tibial, and common peroneal nerves — for up to 12 weeks.
“We discovered that adhering the bioelectrodes to peripheral nerves can fully prevent the formation of fibrosis on the interfaces,” says Xuanhe Zhao, the Uncas and Helen Whitaker Professor, and professor of mechanical engineering and civil engineering at MIT. “We further demonstrated long-term, drug-free hypertension mitigation using non-fibrotic bioelectronics over four weeks, and ongoing.”
The approach inhibits immune cell infiltration at the device-tissue interface, thereby preventing the formation of fibrous capsules within the inflammatory microenvironment. In preclinical rodent models, the team demonstrated that the non-fibrotic, adhesive bioelectronic device maintained stable, long-term regulation of blood pressure.
“Our long-term blood pressure regulation approach was inspired by traditional acupuncture,” says Hyunmin Moon, lead author of the study and a postdoc in the Department of Mechanical Engineering. “The lower leg has long been used in hypertension treatment, and the deep peroneal nerve lies precisely at an acupuncture point. We were thrilled to see that stimulating this nerve achieved blood pressure regulation for the first time. The convergence of our non-fibrotic, adhesive bioelectronic device with this long-term regulation capability holds exciting promise for translational medicine.”
Importantly, after 12 weeks of implantation with continuous nerve stimulation, only minimal macrophage activity and limited deposition of smooth muscle actin and collagen were detected, underscoring the device’s potential to deliver long-term neuromodulation without triggering fibrosis. “The contrast between the immune response of the adhered device and that of the non-adhered control is striking,” says Bastien Aymon, a study co-author and a PhD candidate in mechanical engineering. “The fact that we can observe immunologically pristine interfaces after three months of adhesive implantation is extremely encouraging for future clinical translation.”
This work offers a broadly applicable strategy for all implantable bioelectronic systems by preventing fibrosis at the device interface, paving the way for more effective and long-lasting therapies such as hypertension mitigation.
Hypertension is a major contributor to cardiovascular diseases, the leading cause of death worldwide. Although medications are effective in many cases, more than 50 percent of patients remain hypertensive despite treatment — a condition known as resistant hypertension. Traditional carotid sinus or vagus nerve stimulation methods are often accompanied by side effects including apnea, bradycardia, cough, and paresthesia.
“In contrast, our non-fibrotic, adhesive bioelectronic device targeting the deep peroneal nerve enables long-term blood pressure regulation in resistant hypertensive patients without metabolic side effects,” says Moon.
Noninvasive imaging could replace finger pricks for people with diabetes
A noninvasive method for measuring blood glucose levels, developed at MIT, could save diabetes patients from having to prick their fingers several times a day.
The MIT team used Raman spectroscopy — a technique that reveals the chemical composition of tissues by shining near-infrared or visible light on them — to develop a shoebox-sized device that can measure blood glucose levels without any needles.
In tests in a healthy volunteer, the researchers found that the measurements from their device were similar to those obtained by commercial continuous glucose monitoring sensors that require a wire to be implanted under the skin. While the device presented in this study is too large to be used as a wearable sensor, the researchers have since developed a wearable version that they are now testing in a small clinical study.
“For a long time, the finger stick has been the standard method for measuring blood sugar, but nobody wants to prick their finger every day, multiple times a day. Naturally, many diabetic patients are under-testing their blood glucose levels, which can cause serious complications,” says Jeon Woong Kang, an MIT research scientist and the senior author of the study. “If we can make a noninvasive glucose monitor with high accuracy, then almost everyone with diabetes will benefit from this new technology.”
MIT postdoc Arianna Bresci is the lead author of the new study, which appears today in the journal Analytical Chemistry. Other authors include Peter So, director of the MIT Laser Biomedical Research Center (LBRC) and an MIT professor of biological engineering and mechanical engineering; and Youngkyu Kim and Miyeon Jue of Apollon Inc., a biotechnology company based in South Korea.
Noninvasive glucose measurement
While most diabetes patients measure their blood glucose levels by drawing blood and testing it with a glucometer, some use wearable monitors, which have a sensor that is inserted just under the skin. These sensors provide continuous glucose measurements from the interstitial fluid, but they can cause skin irritation and they need to be replaced every 10 to 15 days.
In hopes of creating wearable glucose monitors that would be more comfortable for patients, researchers in MIT’s LBRC have been pursuing noninvasive sensors based on Raman spectroscopy. This type of spectroscopy reveals the chemical composition of tissue or cells by analyzing how near-infrared light is scattered, or deflected, as it encounters different kinds of molecules.
In 2010, researchers at the LBRC showed that they could indirectly calculate glucose levels based on a comparison between Raman signals from the interstitial fluid that bathes skin cells and a reference measurement of blood glucose levels. While this approach produced reliable measurements, it wasn’t practical to translate into a glucose monitor.
More recently, the researchers reported a breakthrough that allowed them to directly measure glucose Raman signals from the skin. Normally, this glucose signal is too small to pick out from all of the other signals generated by molecules in tissue. The MIT team found a way to filter out much of the unwanted signal by shining near-infrared light onto the skin at a different angle from the one used to collect the resulting Raman signal.
The researchers obtained those measurements using equipment that was around the size of a desktop printer, and since then, they have been working on further shrinking the footprint of the device.
In their new study, they were able to create a smaller device by analyzing just three bands — spectral regions that correspond to specific molecular features — in the Raman spectrum.
Typically, a Raman spectrum may contain about 1,000 bands. However, the MIT team found that they could determine blood glucose levels by measuring just three: one from glucose plus two background measurements. This approach reduced the amount and cost of the equipment needed, allowing the researchers to perform the measurement with a cost-effective device about the size of a shoebox.
“By refraining from acquiring the whole spectrum, which has a lot of redundant information, we go down to three bands selected from about 1,000,” Bresci says. “With this new approach, we can change the components commonly used in Raman-based devices, and save space, time, and cost.”
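As an illustration of why three bands can be enough, such a readout can be calibrated by regressing reference glucose values onto the glucose-sensitive band plus the two background bands. The simulated intensities and ordinary least-squares model below are toy assumptions, not the team's calibration procedure.

```python
# A toy three-band calibration sketch: map one glucose-sensitive Raman band
# and two background bands to a glucose estimate via least squares.
# All values here are simulated, not device data.
import numpy as np

rng = np.random.default_rng(1)
n = 200                                # number of calibration measurements
glucose_ref = rng.uniform(70, 180, n)  # reference blood glucose, mg/dL

# Simulated band intensities: two pure background bands, plus one band that
# mixes a glucose signal with background and noise.
background = rng.normal(1.0, 0.1, (n, 2))
glucose_band = (0.01 * glucose_ref + 0.5 * background[:, 0]
                + 0.02 * rng.standard_normal(n))

# Design matrix: three band intensities plus an intercept column.
X = np.column_stack([glucose_band, background, np.ones(n)])
w, *_ = np.linalg.lstsq(X, glucose_ref, rcond=None)

estimate = X @ w
rmse = np.sqrt(np.mean((estimate - glucose_ref) ** 2))
print(f"calibration RMS error: {rmse:.1f} mg/dL")
```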
Toward a wearable sensor
In a clinical study performed at the MIT Center for Clinical Translation Research (CCTR), the researchers used the new device to take readings from a healthy volunteer over a four-hour period. As the subject rested their arm on top of the device, a near-infrared beam shone through a small glass window onto the skin to perform the measurement.
Each measurement takes a little more than 30 seconds, and the researchers took a new reading every five minutes.
During the study, the subject consumed two 75-gram glucose drinks, allowing the researchers to monitor significant changes in blood glucose concentration. They found that the Raman-based device showed accuracy levels similar to those of two commercially available, invasive glucose monitors worn by the subject.
Since finishing that study, the researchers have developed a smaller prototype, about the size of a cellphone, that they’re currently testing at the MIT CCTR as a wearable monitor in healthy and prediabetic volunteers. Next year, they plan to run a larger study working with a local hospital, which will include people with diabetes.
The researchers are also working on making the device even smaller, about the size of a watch. Additionally, they are exploring ways to ensure that the device can obtain accurate readings from people with different skin tones.
The research was funded by the National Institutes of Health, the Korean Technology and Information Promotion Agency for SMEs, and Apollon Inc.
