MIT Latest News

MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.
Updated: 5 hours 4 min ago

MIT engineers develop a magnetic transistor for more energy-efficient electronics

Wed, 09/23/2025 - 10:32am

Transistors, the building blocks of modern electronics, are typically made of silicon. Because it’s a semiconductor, this material can control the flow of electricity in a circuit. But silicon has fundamental physical limits that restrict how compact and energy-efficient a transistor can be.

MIT researchers have now replaced silicon with a magnetic semiconductor, creating a magnetic transistor that could enable smaller, faster, and more energy-efficient circuits. The material’s magnetism strongly influences its electronic behavior, leading to more efficient control of the flow of electricity. 

The team used a novel magnetic material and an optimization process that reduces the material’s defects, which boosts the transistor’s performance.

The material’s unique magnetic properties also allow for transistors with built-in memory, which would simplify circuit design and unlock new applications for high-performance electronics.

“People have known about magnets for thousands of years, but there are very limited ways to incorporate magnetism into electronics. We have shown a new way to efficiently utilize magnetism that opens up a lot of possibilities for future applications and research,” says Chung-Tao Chou, an MIT graduate student in the departments of Electrical Engineering and Computer Science (EECS) and Physics, and co-lead author of a paper on this advance.

Chou is joined on the paper by co-lead author Eugene Park, a graduate student in the Department of Materials Science and Engineering (DMSE); Julian Klein, a DMSE research scientist; Josep Ingla-Aynes, a postdoc in the MIT Plasma Science and Fusion Center; Jagadeesh S. Moodera, a senior research scientist in the Department of Physics; and senior authors Frances Ross, TDK Professor in DMSE, and Luqiao Liu, an associate professor in EECS and a member of the Research Laboratory of Electronics; as well as others at the University of Chemistry and Technology in Prague. The paper appears today in Physical Review Letters.

Overcoming the limits

In an electronic device, silicon semiconductor transistors act like tiny light switches that turn a circuit on and off, or amplify weak signals in a communication system. They do this using a small input voltage.

But a fundamental physical limit of silicon semiconductors prevents a transistor from operating below a certain voltage, which hinders its energy efficiency.
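The voltage floor in question is the so-called Boltzmann limit on subthreshold swing: in a conventional silicon transistor, every tenfold change in current costs at least kT/q times ln(10) of gate voltage, or roughly 60 millivolts per decade at room temperature. A quick back-of-the-envelope check (standard SI constants; the calculation is ours, not from the article):

```python
import math

# Boltzmann limit on subthreshold swing for a conventional transistor:
# each tenfold (decade) change in current requires at least (kT/q)*ln(10)
# of gate voltage. Constants below are the exact SI-defined values.
k_B = 1.380649e-23      # Boltzmann constant, J/K
q   = 1.602176634e-19   # elementary charge, C
T   = 300.0             # room temperature, K

thermal_voltage = k_B * T / q                    # about 25.9 mV
swing_mV = thermal_voltage * math.log(10) * 1e3  # mV per decade of current

print(f"minimum subthreshold swing at {T:.0f} K: {swing_mV:.1f} mV/decade")
```

This is why lowering the operating voltage of silicon transistors eventually stalls, and why researchers look to switching mechanisms, such as magnetism, that are not bound by this thermionic limit.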

To make more efficient electronics, researchers have spent decades working toward magnetic transistors that utilize electron spin to control the flow of electricity. Electron spin is a fundamental property that enables electrons to behave like tiny magnets.

So far, scientists have mostly been limited to using certain magnetic materials. These lack the favorable electronic properties of semiconductors, constraining device performance.

“In this work, we combine magnetism and semiconductor physics to realize useful spintronic devices,” Liu says.

The researchers replace the silicon in the surface layer of a transistor with chromium sulfur bromide, a two-dimensional material that acts as a magnetic semiconductor.

Due to the material’s structure, researchers can switch between two magnetic states very cleanly. This makes it ideal for use in a transistor that smoothly switches between “on” and “off.”

“One of the biggest challenges we faced was finding the right material. We tried many other materials that didn’t work,” Chou says.

They discovered that changing these magnetic states modifies the material’s electronic properties, enabling low-energy operation. And unlike many other 2D materials, chromium sulfur bromide remains stable in air.

To make a transistor, the researchers pattern electrodes onto a silicon substrate, then carefully align and transfer the 2D material on top. They use tape to pick up a tiny piece of material, only a few tens of nanometers thick, and place it onto the substrate.

“A lot of researchers will use solvents or glue to do the transfer, but transistors require a very clean surface. We eliminate all those risks by simplifying this step,” Chou says.

Leveraging magnetism

This lack of contamination enables their device to outperform existing magnetic transistors. Most others can only create a weak magnetic effect, changing the flow of current by a few percent or less. Their new transistor can switch or amplify the electric current by a factor of 10.

They use an external magnetic field to change the magnetic state of the material, switching the transistor using significantly less energy than would usually be required.

The material also allows them to control the magnetic states with electric current. This is important because engineers cannot apply magnetic fields to individual transistors in an electronic device. They need to control each one electrically.

The material’s magnetic properties could also enable transistors with built-in memory, simplifying the design of logic or memory circuits.

A typical memory device has a magnetic cell to store information and a transistor to read it out. Their method can combine both into one magnetic transistor.

“Now, not only are transistors turning on and off, they are also remembering information. And because we can switch the transistor with greater magnitude, the signal is much stronger so we can read out the information faster, and in a much more reliable way,” Liu says.

Building on this demonstration, the researchers plan to further study the use of electrical current to control the device. They are also working to make their method scalable so they can fabricate arrays of transistors.

This research was supported, in part, by the Semiconductor Research Corporation, the U.S. Defense Advanced Research Projects Agency (DARPA), the U.S. National Science Foundation (NSF), the U.S. Department of Energy, the U.S. Army Research Office, and the Czech Ministry of Education, Youth, and Sports. The work was partially carried out at the MIT.nano facilities.

Unlocking mysteries of the universe through math

6 hours 37 min ago

GPS navigation, cryptography, quantum computing — while some of humankind’s greatest advancements have been invented by pioneers from various cultures, they were founded upon one common grammar: mathematics.

“Mathematics is the language with which God wrote the universe,” said the famous Italian astronomer, physicist, and philosopher Galileo Galilei, who, among his various scientific contributions, helped provide evidence for the idea that the sun is at the center of the solar system.

Although mostly conveyed through combinations of numbers, letters, and signs that may seem enigmatic to many, math equations hold within them countless stories — playbooks that generations of wonderers and inventors have crafted, refined, and shared in an attempt to make sense of a world full of unknown variables.

“I have faith in mathematics that, when there seems to be something special happening, when there’s some coincidence, that it’s not just a coincidence,” says mathematician Amanda Burcroff, “but that there’s actually some really deep, interesting, and involved reason for why that should be true.”

Burcroff’s research is focused on algebraic combinatorics, an area that provides discrete frameworks for understanding algebraic and geometric spaces that ubiquitously arise across science. This year, she joins MIT’s Department of Mathematics as a postdoc as part of the School of Science Dean’s Fellowship. Working with Professor Alexander Postnikov, Burcroff is building upon her techniques with the goal of applying them to other areas such as theoretical physics — a field that seeks to uncover the fundamental laws governing everything from subatomic particles to the cosmos itself.

“I have trust that if you keep following the path, eventually you’ll find the treasure — that is, whatever theorem or proof — that you’re looking for,” she says.

Exploring possibilities and redefining rules

Like many children, Burcroff once saw math as a subject that entailed lots of memorizing. Although she felt that it came naturally to her, she didn’t always find math very interesting.

In high school, as she came to learn about areas like calculus and geometry, Burcroff started to see the discipline in a different light — a creative approach to exploring what’s possible.

“[In] most other fields, the rules are imposed on you by the world,” she says, “but in math, you get full freedom to lay down those rules and then figure out what the implications of those rules are by using logical consequence.”

In 2015, Burcroff began her bachelor’s degree at the University of Michigan with a major in math and a minor in computer science. There, she entered the world of combinatorics — a branch of math dealing with counting, arranging, and combining objects that forms a crucial basis for understanding the complexity of problems, as well as the limits of computer algorithms.

“When I was starting out, I was just happy to have any mystery that anyone gave me,” she says.

Math was, to Burcroff, like a fun game with levels to complete. But during a study abroad program in Budapest, Hungary — the hometown of Paul Erdős, considered one of the most prolific mathematicians of the 20th century — the game became more exciting when she was handed puzzles no one had yet solved.

“It turns out that if you put down the right set of rules, there’s an infinite number of beautiful things that you can do with it,” she says.

A journey of endless mysteries to unlock

In 2019, Burcroff embarked on a journey to pursue further research in England, completing a master’s degree in pure mathematics at the University of Cambridge, then a research master’s degree at Durham University. In 2021, she returned to the United States and began her PhD at Harvard University under the guidance of Professor Lauren Williams.

Among the several riddles she has unraveled over the years, Burcroff helped unify different mathematical approaches to understanding why certain systems work so reliably. Think of it as discovering that two seemingly different sets of instructions actually lead to the same place. By demonstrating their connections, her work revealed an underlying, overarching mathematical architecture — a finding that later helped Burcroff and her collaborators tackle one of the many enduring riddles in her field.

Generalized cluster algebras form the basis for describing geometries that appear throughout physics. For more than a decade, mathematicians suspected these building blocks were created only by adding up ingredients and never subtracting, although no one was able to prove it. In 2024, Burcroff and her collaborators published a paper demonstrating that these spaces have the conjectured positivity properties by developing a new way to count and organize patterns — helping untangle a long-standing conjecture whose potential implications span from predicting particle-collision outcomes to describing the spaces that appear in string theory.

These findings have earned Burcroff numerous prestigious awards, including a National Science Foundation Graduate Research Fellowship, a British Marshall Scholarship, and a Jack Kent Cooke Graduate Fellowship.

Despite the tremendous number of problems she has answered, new ones keep arising.

“Every time you unlock one of them, it gives you a bunch of paths to new connected mysteries,” Burcroff says.

At MIT, she is working with Postnikov, whose research on combinatorics and positivity-type problems has presented a radically different way to calculate fundamental quantities in quantum field theory.

“Burcroff is conducting research across disciplinary boundaries,” says Postnikov.

He adds: “I am sure that she will have a lot of fruitful interactions with researchers in other MIT departments.”

Burcroff’s goal is to apply combinatorial techniques to broader physical contexts and direct applications, especially those with implications to topics like mirror symmetry, a principle in string theory suggesting that very different-looking geometric spaces can be mathematically equivalent.

While “doing math is 99 percent trying something and failing,” Burcroff says it is this same challenge that keeps her motivated. To her, it is not about reaching a destination, but rather about the continuous “process of discovery,” one she hopes to share beyond the typical classroom.

To make math more accessible, especially among underrepresented groups, Burcroff has worked with mentorship programs including Harvard’s Real Representations and Math Includes, Cambridge Girls’ Angle, and MIT PRIMES. During her time as a postdoc, she hopes to continue this outreach and explore ways to get involved with other support groups at MIT’s Department of Mathematics.

Study: Gene circuits reshape DNA folding and affect how genes are expressed

9 hours 7 min ago

When a gene is turned on in a cell, it creates a ripple effect along the DNA strand, changing the physical structure of the strand. A new study by MIT researchers shows that these ripples can stimulate or suppress neighboring genes.

These effects, which result from the winding or unwinding of neighboring DNA, are determined by the order of genes along a strand of DNA. Genes upstream of the active gene are usually turned up, while those downstream are inhibited.

The new findings offer guidance that could make it easier to control the output of synthetic gene circuits. By altering the relative ordering and arrangement of genes, or “gene syntax,” researchers could create circuits that synergize to maximize their output, or that alternate the output of two different genes.

“This is really exciting because we can coordinate gene expression in ways that just weren’t possible before,” says Katie Galloway, an assistant professor of chemical engineering at MIT. “Syntax will be really useful for dynamic circuits. Now we have the ability to select not only the biochemistry of circuits, but also the physical design to support dynamics.”

Galloway is the senior author of the study, which appears today in Science. MIT postdoc Christopher Johnstone PhD ’26 is the paper’s lead author. Other authors include MIT graduate student Kasey Love, members of the lab of Brandon DeKosky, an MIT associate professor of chemical engineering, and researchers from Peter Zandstra’s lab at the University of British Columbia and the labs of Christine Mummery and Richard Davis at Leiden University Medical Center in the Netherlands.

Gene syntax

When a gene is copied into messenger RNA, or “transcribed,” the double-stranded DNA helix must be unwound so that an enzyme called RNA polymerase can access the DNA and start copying it. That unwinding leads to physical changes in the structure of the DNA strand.

Upstream of the gene, DNA becomes looser, while downstream, it becomes more tightly wound. These changes affect RNA polymerase’s ability to access the DNA: Upstream of an active gene, it’s easier for the enzyme to attach; downstream, it’s more difficult.

In a study published in 2022, Galloway and Johnstone performed computational modeling that explored how these biophysical changes might influence gene expression. They studied three different arrangements, or types of syntax: tandem, divergent, and convergent.

Most synthetic gene circuits are designed in a tandem arrangement, with one gene followed by another downstream. In a divergent arrangement, neighboring genes are transcribed in opposite directions (away from each other), and in convergent syntax, they are transcribed toward each other.

The modeling suggested that the divergent arrangement was most likely to produce circuits where both genes are expressed at a high level. Tandem arrangements were predicted to result in the downstream gene being suppressed by the upstream gene.

In the new study, the researchers wanted to see if they could observe these predicted phenomena in human cells.
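The predicted coupling can be captured in a toy sketch (our illustration, not the authors’ biophysical model): let each gene’s activity boost a neighbor that sits upstream of it and suppress a neighbor that sits downstream, then iterate the mutual influence to a steady state. The three syntaxes then separate exactly as the modeling predicts.

```python
def steady_expression(syntax, base=1.0, k=0.3, steps=200):
    """Toy two-gene model of supercoiling coupling (illustrative only).

    An active gene loosens DNA upstream of itself (boosting a neighbor
    there) and overwinds DNA downstream (suppressing a neighbor there).
    effect_on_a is the sign of gene B's influence on gene A, and vice versa.
    """
    effect_on_a, effect_on_b = {
        "divergent":  (+1, +1),  # each gene sits upstream of the other
        "convergent": (-1, -1),  # each gene sits downstream of the other
        "tandem":     (+1, -1),  # A upstream of B: B boosts A, A suppresses B
    }[syntax]
    a = b = base
    for _ in range(steps):  # relax to a fixed point
        a = max(0.0, base * (1 + k * effect_on_a * b))
        b = max(0.0, base * (1 + k * effect_on_b * a))
    return a, b

for syntax in ("divergent", "tandem", "convergent"):
    a, b = steady_expression(syntax)
    print(f"{syntax:>10}: gene A = {a:.2f}, gene B = {b:.2f}")
```

With these made-up parameters, the divergent pair settles above baseline, the tandem pair splits (upstream gene up, downstream gene down), and the convergent pair settles below baseline, mirroring the qualitative predictions described above.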

“Normally, we think about gene circuits and pieces of DNA as these lines that we draw, but they’re polymers that have physical characteristics,” Galloway says. “The thing that we were trying to solve in this paper was: When you put two genes on the same piece of DNA, how does their physical interaction become coupled?”

The researchers inserted circuits, each containing two genes in either a tandem, divergent, or convergent configuration, into human cell lines and human induced pluripotent stem cells.

The results confirmed what their modeling had predicted: In divergent circuits, expression of both genes was amplified. In tandem circuits, turning on the upstream gene suppressed the expression of the downstream gene.

These effects produced as much as a 25-fold increase or decrease in gene expression, and they could be seen at distances of up to 2,000 base pairs between genes.

Using a high-resolution genome mapping technique called Region Capture Micro-C, the researchers were also able to analyze how the DNA structure changed when nearby genes were being transcribed.

As predicted, they found that the DNA regions downstream from an active gene formed tightly twisted structures known as plectonemes, similar to the tangles seen in a twisted telephone cord. These structures make it harder for RNA polymerase to bind to DNA.

To engineer these cells, the researchers used a new system they developed with the LUMC team called STRAIGHT-IN Dual, which allows them to efficiently insert two genes into the same DNA strand at both alleles. This system is being reported in a second paper published today, in Nature Biomedical Engineering.

Precise control

The new findings could help guide the design of synthetic gene circuits, which are usually designed to be controlled by biochemical interactions with activator or repressor molecules. Now, circuit designers can also perform biophysical manipulations to enhance or repress gene expression.

“Everyone thinks about the components they need, and the biochemical properties they need to build a circuit,” Galloway says. “Now, we have added the physical construction of those components, which is going to change how those biochemical units are interpreted.”

As a demonstration of one potential application, the researchers built synthetic circuits containing the genes for two segments of a novel antibody discovered by the DeKosky lab, used to treat yellow fever, and incorporated them into human cells. As they expected, the divergent syntax produced larger quantities of the yellow fever antibody.

Galloway’s lab has also used this approach to optimize the output of synthetic gene circuits they previously reported that could be used to deliver gene therapy or to reprogram adult cells into other cell types.

This strategy could also be used to build a variety of other types of dynamic synthetic circuits, such as toggle switches, oscillators, or pulse generators, for any application that requires precise control over gene expression.

“If you want coordinated expression, a divergent circuit is great. If you want something that’s either/or, you can imagine using a convergent or tandem circuit, so when one turns on, the other turns off, and you can alternate pulses,” Galloway says. “Now that we understand the syntax, I think this will pave the way for us to program dynamic behaviors.”

The research was funded, in part, by the National Institutes of Health, the National Institute for General Medical Sciences, a National Science Foundation CAREER Award, the Pershing Square Foundation, the Air Force Research Laboratory, and the Koch Institute Support (core) Grant from the National Cancer Institute.

The hidden structure behind a widely used class of materials

9 hours 7 min ago

Materials called relaxor ferroelectrics have been used for decades in technologies like ultrasounds, microphones, and sonar systems. Their unique properties come from their atomic structure, but that structure has stubbornly eluded direct measurement.

Now a team of researchers from MIT and elsewhere has directly characterized the three-dimensional atomic structure of a relaxor ferroelectric for the first time. The findings, reported today in Science, provide a framework for refining models used to design next-generation computing, energy, and sensing devices.

“Now that we have a better understanding of exactly what’s going on, we can better predict and engineer the properties we want materials to achieve,” says corresponding author James LeBeau, MIT’s Kyocera Professor of Materials Science and Engineering. “The research community is still developing methods to engineer these materials, but in order to predict the properties those materials will have, you have to know if your model is right.”

In their paper, the researchers describe how they used an emerging technique to reveal the distribution of electric charges in the material, with a surprising result.

“We realized the chemical disorder we observed in our experiments was not fully considered previously,” say co-first authors Michael Xu PhD ’25 and Menglin Zhu, who are both postdocs at MIT. “Working with our collaborators, we were able to merge the experimental observations with simulations to refine the models and better predict what we see in experiments.”

Joining Zhu, Xu, and LeBeau on the paper are Colin Gilgenbach and Bridget R. Denzer, MIT PhD students in materials science and engineering; Yubo Qi, an assistant professor at the University of Alabama at Birmingham; Jieun Kim, an assistant professor at the Korea Advanced Institute of Science and Technology; Jiahao Zhang, a former PhD student at the University of Pennsylvania; Lane W. Martin, a professor at Rice University; and Andrew M. Rappe, a professor at the University of Pennsylvania.

Probing disordered materials

Leading simulations of relaxor ferroelectrics suggest that when an electric field is applied, the interactions of positively and negatively charged atoms in different nanoregions of the material help give rise to exceptional energy storage and sensing capabilities. The details of those nanoregions have been impossible to directly measure to date.

For their Science paper, the researchers studied lead magnesium niobate-lead titanate, a relaxor ferroelectric alloy used in sensors, actuators, and defense systems. They used an emerging measurement technique, called multi-slice electron ptychography (MEP), in which researchers move a nanoscale probe of high-energy electrons over a material and measure the resulting electron diffraction patterns.

“We do this in a sequential way, and at each position, we acquire a diffraction pattern,” Zhu explains. “That creates regions of overlap, and that overlap has enough information to use an algorithm to iteratively reconstruct three-dimensional information about the object and the electron wave function.”
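The overlap between neighboring probe positions is what makes the reconstruction well-posed: every patch of the sample is seen in several diffraction patterns, giving the algorithm redundant information to work with. A small geometric sketch of that overlap (our illustration with made-up numbers, not the team’s actual scan parameters):

```python
import math

def probe_overlap_fraction(step, diameter):
    """Fraction of a circular probe footprint shared with its nearest
    neighbor in the scan, using the standard circle-circle
    intersection-area formula divided by the probe area."""
    r, d = diameter / 2.0, step
    if d >= 2 * r:
        return 0.0  # probes no longer touch
    area = 2 * r**2 * math.acos(d / (2 * r)) - (d / 2) * math.sqrt(4 * r**2 - d**2)
    return area / (math.pi * r**2)

# Hypothetical example: a 20 nm probe stepped 5 nm between exposures
frac = probe_overlap_fraction(step=5.0, diameter=20.0)
print(f"adjacent probes share {frac:.0%} of their area")
```

Shrinking the step relative to the probe size raises the overlap fraction, and with it the redundancy available to the iterative reconstruction; a step as large as the probe diameter leaves no overlap at all.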

The technique revealed a hierarchy of chemical and polar structures that spanned from atomic to mesoscopic scales. The researchers also found that many regions of differing polarization in the material were much smaller than predicted by the leading simulations. The researchers then fed their new data back into those computer simulations and refined the models to better reflect their findings under different conditions.

“Previously, these models basically had random regions of polarization, but they didn’t tell you how those regions correlate with each other,” Xu says. “Now we can tell you that information, and we can see how individual chemical species modulate polarization depending on the charge state of atoms.”

Toward better materials

Zhu says the paper demonstrates the potential of electron ptychography to study complex materials and opens up new avenues of research into complex, disordered materials.

“This study is the first time in the electron microscope that we’ve been able to directly connect the three-dimensional polar structure of relaxor ferroelectrics with molecular dynamics calculations,” Xu says. “It further proves you can get three-dimensional information out of the sample using this technique.”

The researchers also believe the approach could one day help engineer materials with advanced electronic behaviors for a range of improved memory storage, sensing, and energy technologies.

“Materials science is incorporating more complexity into the material design process — whether that’s for metal alloys or semiconductors — as AI has improved and our computational tools have become more advanced,” LeBeau says. “But if our models aren’t accurate enough and we have no way to validate them, it’s garbage in garbage out. This technique helps us understand why the material behaves the way it does and validate our models.”

The work was supported, in part, by the U.S. Army Research Laboratory, the U.S. Office of Naval Research, the U.S. Department of War, and a National Science Foundation Graduate Research Fellowship. The researchers also used MIT.nano facilities.

How neurons sense bacteria in the gut

9 hours 37 min ago

Recent studies suggest animals and people alike have close and complex relationships with the bacteria around and within them. The human gut microbiome, for instance, has been associated with both depression and Parkinson’s disease. To move beyond association toward an understanding of the actual mechanisms that enable the bacterial microbiome to influence brain function, a new study by neuroscientists in The Picower Institute for Learning and Memory at MIT examines those mechanisms in a model “bacterial specialist,” the nematode Caenorhabditis elegans.

In the new study in Current Biology, the team, led by Picower Fellow Cassi Estrem in the Picower Institute for Learning and Memory lab of Associate Professor Steven Flavell, identifies the specific chemicals that a key neuron in C. elegans senses, both in the bacteria that it eats and in the bacteria that it needs to avoid ingesting.

“In our bodies, our own cells are outnumbered by the bacterial cells living in and on us. There’s an increasing recognition that this has a profound impact on human health,” says Flavell, an investigator of the Howard Hughes Medical Institute and faculty member of MIT’s Department of Brain and Cognitive Sciences. “It’s been clear that there are links for some time. Our study aimed to identify the hard mechanisms of how a host nervous system is affected by bacteria in the alimentary canal.”

Achieving a fundamental mechanistic understanding of how neurons interact with bacteria could help improve attempts to intervene in or manipulate those interactions with therapeutic drugs or supplements, Flavell says.

Mmm … sugar

Flavell calls C. elegans a “bacterial specialist” because the tiny, transparent worm has evolved to eat bacteria as its diet, while also needing to avoid pathogenic bacteria that can prove to be its undoing. This has led it to develop a nervous system especially well-attuned to sorting out what is food and what is foe. In 2019, the lab discovered that the neuron NSM, which projects into the worm’s alimentary canal, employs two “acid sensing ion channels” (ASICs) to detect when certain bacteria have been ingested. Notably, those ion channels are analogous to ones found in neurons in humans. When NSM detects yummy bacteria, it releases serotonin that causes the worm to increase its feeding rate and slow its slithering so that it can stay to dine on the surrounding meal.

To really understand how this works, Flavell and Estrem realized they needed to know exactly what the ion channels are detecting in the bacteria. To get started, they exposed worms to 20 different kinds of bacteria the worms are known to encounter and found that they all activated NSM activity to varying extents. Then they broke the bacteria down into more and more specific chemical components to see which one or ones triggered NSM. The experiments ruled out many components, including DNA, lipids, proteins, and simple sugars, and instead found that it’s specifically the polysaccharide sugars that coat many bacteria that drive NSM activation. In particular, in gram-positive bacteria, a chemical called peptidoglycan activated NSM. In gram-negative bacteria, a different polysaccharide was apparently in play.

Estrem and Flavell’s team also ran experiments showing that polysaccharides from bacteria in general, and peptidoglycan in particular, not only trigger NSM electrical activity, but actually promote the feeding and slowing behaviors. They also showed that genetically knocking out the ASICs abolished these responses. In all, they demonstrated that detection of polysaccharides, and of peptidoglycan specifically, is sufficient to trigger the worm’s behaviors, and that this detection requires the ASICs.

Better not eat this

Having shown what exactly triggers the worms to recognize their bacterial food, the researchers wondered whether they could also pinpoint a danger sign the worm finds in harmful bacteria. For these experiments, they carefully used Serratia marcescens, a bacterium that’s also infectious for humans. Some strains of the bacteria have a red color, while others do not. The red ones, which have a pigment called prodigiosin, tend to be much more lethal for worms. In their testing, the researchers found that when NSM detected the non-pigmented bacteria, the neuron still activated and the worms still ingested the bacteria, but when prodigiosin was present, NSM did not activate and the worm did not pump it in or slow down to eat.

Adding prodigiosin to normally yummy bacteria also suppressed NSM’s usual response. In other words, the worms have evolved their digestive behavior (and the detectors within NSM) to avoid ingesting a chemical specifically associated with danger.

Flavell says it’s likely that some of the fundamental mechanisms highlighted in the new paper will inform studies of similar mechanisms in other animals.

“We developed a way of identifying these pathways by studying this organism that specializes in bacterial detection and displays robust responses,” Flavell explains. “But there’s no reason these pathways should be limited to C. elegans. The molecular players we identified are found in many species, including mammals.”

In addition to Estrem and Flavell, the paper’s other authors are Malvika Dua, Colby Fees, Greg Hoeprich, Matthew Au, Bruce Goode, and Lingyi Deng.

The National Institutes of Health, the McKnight Foundation, the Alfred P. Sloan Foundation, the Howard Hughes Medical Institute, and The Freedom Together Foundation provided support for the study.

A materials scientist’s playground

9 hours 47 min ago

Scientists and engineers around the world are working to improve quantum bits, or qubits, the minuscule building blocks of the quantum computer. Qubits are incredibly sensitive, making it easy for errors to be introduced, lowering device yield. But a new cluster tool at MIT.nano introduces capabilities that will allow researchers to continue advancements in qubit performance.

Passersby outside MIT.nano may have recently noticed a complex-looking piece of equipment being installed in the first-floor cleanroom. What looks like a sci-fi movie prop is actually a state-of-the-art, custom-built molecular beam epitaxy (MBE) system: a physical vapor deposition tool that operates under ultra-high vacuum to produce high-quality thin films. With the ability to grow different crystalline materials on a wafer, the system will support quantum researchers and materials scientists by allowing them to study how film growth affects the properties of the materials used in making qubits.

“To realize the full promise of quantum computing, we need to build qubits that are robust, reproducible, and extensible,” says William D. Oliver, the Henry Ellis Warren (1894) Professor of Electrical Engineering and Computer Science and professor of physics at MIT. “To date, most of the improvements to superconducting qubit performance are traceable to circuit design — essentially, designing qubit circuits that are less sensitive to their environmental noise. However, those improvements have largely run their course. Going forward, we need to address the fundamental materials science and fabrication engineering required to reduce the sources of environmental noise. This multi-chamber, cassette-loaded, 200-millimeter wafer MBE system is exactly the right tool at the right time. And there’s no place better to do this research than at MIT.nano.”

That is because MIT.nano is well prepared to receive this type of system, with the physical space, climate controls, policies and procedures for researchers, and expert staff needed to manage the lab. Through an equipment support plan, Oliver’s Engineering Quantum Systems (EQuS) group is able to install and run the tool inside MIT.nano, a high-performance, safe, and reliable environment.

A controlled environment is essential for the MBE. “Think of this system like an inverted International Space Station (ISS),” explains Patrick Strohbeen, research scientist in the EQuS group. “The ISS is a small chamber of atmosphere surrounded by the vacuum of space. This MBE system is a chamber of space-level vacuum surrounded by atmosphere.” The chamber is held at a steady minus 90 degrees Celsius, enabling precise growth of thin films on an atomic scale. It is the largest single deposition chamber (1-meter diameter) the manufacturer, DCA, has sold in the United States.

The journey of a wafer

The system, which in total takes up 600 square feet, is made up of six chambers. First is the load lock, where the wafer is placed into the system and brought down from atmospheric pressure to near the vacuum level of space. Then, the wafer enters the distribution center. This space acts like a central hub, transferring the wafers to other chambers. Next is the deposition, or “growth,” chamber. This is where the system’s primary function takes place — depositing materials, specifically atoms of superconducting metal, onto a substrate, typically silicon. From there, it moves to the oxidation chamber, which facilitates the growth of key ceramic materials for qubits. A fifth storage chamber can hold an additional 10 wafers within the vacuum.

A unique aspect of this system is its sixth chamber, designed for X-ray photoelectron spectroscopy (XPS). In this chamber, researchers fire X-ray photons at the wafer’s surface; each photon excites an electron inside the material, which escapes and is picked up by a sensor, revealing the chemical environment the electron came from. As individual layers of atoms are put down in the growth chamber, scientists can move the wafer to the XPS chamber to measure changes in the material structure of the film, and back again, all while keeping it inside the vacuum.
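The underlying physics is the photoelectric effect: the photon’s known energy, minus the ejected electron’s measured kinetic energy and the spectrometer’s work function, gives the electron’s binding energy, which fingerprints the element and its chemical state. A minimal sketch of that arithmetic, with illustrative kinetic-energy and work-function values:

```python
# XPS energy bookkeeping via the photoelectric effect:
#   E_binding = E_photon - E_kinetic - work_function
# The binding energy fingerprints the emitting element and its chemical state.

AL_KALPHA_EV = 1486.6   # energy of a common lab X-ray source (Al K-alpha), eV

def binding_energy(photon_ev, kinetic_ev, work_function_ev=4.5):
    """Return the binding energy (eV) of an ejected photoelectron.

    The default work function is an illustrative, instrument-specific value.
    """
    return photon_ev - kinetic_ev - work_function_ev

# An electron detected at 1378.6 eV kinetic energy from an Al K-alpha source
# maps to a ~103.5 eV binding energy, in the range characteristic of
# silicon bound in its oxide.
print(binding_energy(AL_KALPHA_EV, 1378.6))
```

In the real instrument, a full spectrum of kinetic energies is collected and converted this way, peak by peak, as each atomic layer goes down.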

Why is this important? “The quantum community has excellent device physicists and device engineers,” says Strohbeen. “The last piece of the puzzle is: We need to understand the materials platform that we’re using for these devices.” The buried interfaces have so far been understudied because of the difficulty of probing them, he explains.

For those of us who are not MBE experts, think of the snow that fell in Massachusetts this winter. How can you tell how much ice is on the pavement without removing all of the snow on top of it? And without changing the natural setting where the snow, ice, and pavement meet? With this system, specifically the XPS chamber, scientists can study the interfaces of buried materials without disturbing the physical or chemical environments. “It is a materials scientist’s playground,” jokes Strohbeen — a controlled space where researchers can learn about and explore materials’ interactions within layers of atoms.

Why MIT.nano?

When Oliver, who is also the director of the MIT Center for Quantum Engineering, secured the MBE Quantum, the next question was where to put it. Enter MIT.nano. Housing 45,000 square feet of cleanroom, this facility exists at MIT to support complex, sensitive equipment with both the infrastructure and the staff needed to maintain it.

“MIT.nano’s ultra-stable building utilities and lab environment are exactly what is needed to support a system that demands extreme repeatability and purity,” says Nick Menounos, MIT.nano associate director of infrastructure. “The success of this installation grew from the early collaboration. Professor Oliver engaged the MIT.nano team in the procurement process almost two years in advance. That foresight, combined with the infrastructure momentum we gained from the recent CHIPS Act project, meant that we could prepare the cleanroom perfectly. We compressed the installation process that normally takes several months and had this extraordinary machine running in under three weeks.”

“From the very beginning, the MIT.nano staff were helpful, knowledgeable, and willing to go above and beyond to make this happen,” says Oliver. “While the MIT.nano facility is certainly an infrastructural crown jewel at MIT, it’s the MIT.nano staff who make it the national treasure it is today.”

Positioning the MBE Quantum in the cleanroom helps the team focus on scalability and device yield. Humidity and particle count, two things carefully measured and maintained at MIT.nano, can affect the output of the device. Minimizing as many variables as possible is key to improving qubit performance. The cleanroom also allows for new device research because an array of fabrication and metrology tools are available without having to leave the clean environment.

“We’re really excited to see what we can do with it,” says Strohbeen. “We bought it as a materials science tool, and it will also be a device development tool due to the flexibility of having it in the cleanroom.”

The MBE system was purchased through a combination of grants from the Army Research Office (ARO) and from the Laboratory for Physical Sciences (LPS). The ARO grant, a Defense University Research Instrumentation Program grant, is the premier grant from ARO for funding large capital equipment purchases that should prove disruptive in technologically relevant areas. It arrives at an important time on campus, as one of MIT’s strategic initiatives — the MIT Quantum Initiative — aims to apply quantum breakthroughs to the most consequential challenges in science, technology, industry, and national security.

Making the case for curiosity-driven science

23 hours 7 min ago

“The thing that really struck me when I came to MIT and strikes me every single day is the stuff that’s going on here is amazing. The science, the engineering… every day I hear something that makes my jaw drop,” remarked President Sally Kornbluth during a live discussion with Lizzie O’Leary of Slate’s “What Next: TBD” podcast.

Kornbluth spoke about everything from the importance of curiosity-driven science and why basic science is critical to our nation’s future, to AI and education, and even bravely joined O’Leary in a rendition of the Williams College song, “The Mountains,” in honor of their shared alma mater.

“We are in this time of incredible uncertainty,” said Kornbluth of the current state of higher education and funding for scientific research. “What we are trying to do is keep the science robust.”

Harking back to her time at Duke and her love of college basketball, she likened the effort to address skepticism about higher education in Washington, D.C., to a combination of zone coverage and man-to-man defense. She emphasized: “As one of the top institutions in the world, it’s part of our responsibility to articulate the importance of science. Behind the scenes, I am – along with many other [university] presidents – I am in D.C. all the time now. I want to speak to Congressmen and women, Senators, people in the executive branch to explain the importance of what we are doing.”

Kornbluth emphasized that the pipeline of basic science that flows from U.S. universities is a critical asset for the country, cautioning that continuing to strain that pipeline could have enormous negative ramifications for the U.S. down the line.

“If you think about research done in this country, it’s done in universities, it’s done in national labs, and it’s done in industry,” said Kornbluth. Universities are where most of the science with a long pathway to impact, requiring patience, gets its start. She pointed to immunotherapy for cancer, which grew out of basic immunology research begun 30 to 40 years ago, as an example. With that pipeline being drained, what does the future hold for new cancer therapies or new AI and quantum technologies?

Kornbluth also underscored that uncertainty and lost funding are having a “huge impact on the talent pipeline,” delving into the unique role universities play in training graduate students, who are the next generation of scientific researchers. “We hear, ‘Oh it would be okay if research was more in industry.’ I say, ‘Would you fly on a plane with a pilot who had never flown?’ How do they think people learn how to do research? We are training the next generation… and we are losing funding for them.” She added: “I think we are going to see reverberations for many decades if we don’t rectify that issue.”

When asked how she and her colleagues are working to keep research moving forward, Kornbluth explained that at MIT, “we have tried to find alternative ways to elevate the science. We have a series of presidential initiatives that cut across the whole campus in things like health and life sciences, quantum, humanities and social sciences. The notion is that we are trying to create new opportunities.”

Still, she acknowledged that losses from the endowment tax and diminished federal funding are painful. “There are only four schools right now that are subject to the 8% endowment tax, which is a tax on our earnings. For us, that means $240 million a year plus other losses in grants. So, let’s say the whole thing is, we budgeted for a loss of $300 million a year on a $1.7 billion budget… That has definitely had an impact on us. No question about it.

“The other thing about it is again there’s all this uncertainty. Our investigators are writing a ton of grants. They don’t know if they’re going off into the void or they really have the sort of competitive opportunities they’ve always had in the past.”

Asked why universities did not see this moment coming, Kornbluth offered a few thoughts. “Look at MIT – 30,000 companies have come from MIT. When you look at something like that, why would you think any government that wants economic flourishing in their country would come after MIT?” she reflected. “It just never would have occurred to us.”

Turning towards the rapid advances in AI, and how the field is impacting education, Kornbluth noted that at MIT and other universities, “we have to focus on the human element, we have to educate our students, they need to know how to write and do mathematics…they have to view AI as a tool to augment their capabilities. That is how we are thinking about it.”

In the course of the conversation, Kornbluth also expressed her unwavering support for international students, noting that most want the opportunity to stay and contribute to research in the U.S. after graduation. “The talent brought to us through our international community is unbelievable. We can attract the very best in the world. You can bet when they talk about competitiveness with China, for example, in AI, quantum, etc., they are not sitting around in China saying, ‘Oh it’s great America is taking all our students.’ They’re thinking, ‘It’s great that America doesn’t want to take as many of our students anymore because we can train them.’ It’s a competitive issue that we really should lean into.”

Study: Immigrants help address the US eldercare shortage

23 hours 7 min ago

Good caregivers are often in short supply, but after the Covid-19 pandemic hit the U.S. in early 2020, staff levels at nursing homes dropped by 10 percent. What was a simple personnel shortage has moved closer to being a nursing-care crisis.

“We have an aging population, care for them is labor-intensive, and there are shortages everywhere in that supply chain,” says MIT economist Jonathan Gruber.

As it happens, about one-fifth of health care support workers in the U.S. are immigrants. And as a newly published study of the nation’s metro areas shows, changes in immigration levels can affect how much nursing care the elderly receive.

“When immigration rises in a city, it significantly increases the health care workforce,” says Gruber, co-author of the study and a paper detailing its findings.

Overall, Gruber and his colleagues determined that when there is more immigration, registered nurses and other aides work more hours at nursing homes, without displacing already-employed caregivers, while patient outcomes improve. Essentially, a 10 percent increase in female immigrants in a given metro area leads to a 1.1 percent increase in hours that registered nurses spend with elderly patients, while hospitalizations for those patients drop, among other things.

“Even if immigration actually increases labor supply to the medical sector, it was an open question if that would improve outcomes, and it does,” adds Gruber, the Ford Professor of Economics and head of the MIT Department of Economics.

The paper, “Immigration, the Long-Term Care Workforce, and Elder Outcomes in the U.S.,” appears in the American Journal of Health Economics. The authors are Gruber; David C. Grabowski, a professor in the Department of Health Care Policy at Harvard Medical School; and Brian E. McGarry, an assistant professor in the Department of Medicine and the Department of Public Health Sciences at the University of Rochester.

More care, fewer hospitalizations

To conduct the study, the researchers tapped into multiple data sources, including immigration information from 2000 to 2018 appearing in the U.S. Census Bureau’s American Community Survey. Extensive nursing home data came from different types of reports that facilities are required to file in order to maintain Medicare and Medicaid eligibility, allowing the scholars to examine care staffing levels and patient outcomes.

All told, the study encompasses 16 million Medicare beneficiaries in over 13,000 nursing homes in metropolitan statistical areas of the U.S., and evaluates immigration flows over two decades.

“One of the key groups that’s taking care of our nation’s elders is immigrants,” Gruber says. “So I thought it would be fascinating to understand how much does immigration actually matter for elder care.”

More specifically, the scholars find that for every 10 percent increase in immigration above the norm in metro areas, in addition to the 1.1 percent increase in registered nurse hours, there is a 0.7 percent increase in hours of care provided by certified nurse assistants. There is a 0.6 percent decline in hospitalizations for patients making short-term stays, of up to a month, in nursing homes.
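Read as elasticities, these findings scale linearly for small changes. A quick sketch, assuming proportional scaling holds; the baseline hours below are hypothetical, not from the study:

```python
# The study's reported effects per 10 percent increase in immigration:
# +1.1% registered-nurse hours, +0.7% certified-nurse-assistant hours,
# -0.6% short-stay hospitalizations.
EFFECT_PER_10PCT = {
    "rn_hours": 1.1,
    "cna_hours": 0.7,
    "short_stay_hospitalizations": -0.6,
}

def projected_change(immigration_pct_increase, baseline, outcome):
    """Linearly scale the per-10% effect to a given immigration change."""
    effect_pct = EFFECT_PER_10PCT[outcome] * (immigration_pct_increase / 10.0)
    return baseline * (1.0 + effect_pct / 100.0)

# Hypothetical metro area: 1,000 weekly RN hours, 10 percent more immigration
print(projected_change(10, 1000, "rn_hours"))   # ~1,011 hours
```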

Beyond that, the study yielded other markers showing that patient outcomes improve in these situations. The roughly 1 percent increase in hours of care was accompanied by a decline in the use of physical restraints on patients, who also received fewer psychiatric medication prescriptions and had fewer urinary tract infections, among other things.

The fact that those outcomes improved in more immigrant-staffed situations is among the new insights provided by the research.

“There’s a lot of evidence that providing more labor supply to the elderly sector improves patient outcomes,” Gruber says. “But it wasn’t clear whether more immigrants would work the same way, because of language issues or other factors.”

A new lens

The study comes as immigration policy has become a major issue in the U.S., something that Gruber says helped spur his curiosity about its health care implications — although he did not know what the study would reveal, one way or another. In this case, he notes, the impact of immigration on eldercare may be another factor to be considered in the larger debates about the subject.

“I think it provides a new lens on the debate over immigration,” Gruber says. “The debate over immigration has been solely about what will it do to native workers, what will it do to the crime rate, what will it do to tax collection. This adds a new element, which is: What will it do to our citizens’ care? By having more immigration, we provide more care.”

Gruber, Grabowski, and McGarry are continuing to study this issue. In a new working paper, released in February, they found that increases in immigration are consistent with a reduction in the mortality rate, in part by allowing more elderly people the opportunity to receive care at home.

Gruber recognizes that there will continue to be sharp policy disagreements over immigration. Still, as the just-published paper states, to this point, when it comes to nursing care, the “results paint a consistent picture of improved quality of care resulting from increased immigration.”

Solving the “Whac-a-mole dilemma”: A smarter way to debias AI vision models

Wed, 04/29/2026 - 5:40pm

In today’s hospitals and clinics, a dermatologist may use an artificial intelligence model to classify a skin lesion, assessing whether it is benign or at risk of developing into a cancer. But if the model is biased toward certain skin tones, it could fail to identify a high-risk patient.

Perhaps one of the best known and most persistent challenges that AI research continues to reckon with is bias. Bias is often discussed in relation to training data, but model architecture can also contain and amplify bias, negatively influencing model performance in real-world settings. In high-stakes medical scenarios, the very real consequences of poor performance have made bias into a quintessential safety issue.

A new paper from researchers at MIT, Worcester Polytechnic Institute, and Google that was accepted to the 2026 International Conference on Learning Representations proposes a novel debiasing approach called “Weighted Rotational DebiasING” (WRING) that can be applied to vision language models (VLMs), like OpenCLIP, an open-source implementation of OpenAI’s CLIP.

VLMs are multimodal models that can understand and interpret different data modalities, like video, images, and text, simultaneously. While debiasing approaches for VLMs do exist, the most commonly used is known as “projection debiasing,” which leads to what has been termed the “Whac-A-Mole dilemma,” an empirical observation that was formally introduced to AI research in 2023.

Projection debiasing is a post-processing approach that removes undesirable, biased information from model embeddings by “projecting” the biased subspace out of the model’s representation space, thereby cutting out the bias. But the approach has its drawbacks.
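For intuition, here is a minimal numpy sketch of generic projection debiasing (the textbook operation, not the WRING method or this paper’s exact procedure). Given a direction b estimated to capture the biased attribute, each embedding loses its component along b; the second check shows the drawback the researchers point to, that other relationships among the embeddings shift as well:

```python
import numpy as np

def project_out(E, b):
    """Generic projection debiasing: strip the component along direction b
    from every row of the embedding matrix E."""
    b = b / np.linalg.norm(b)          # make b a unit vector
    return E - np.outer(E @ b, b)      # subtract each row's projection on b

rng = np.random.default_rng(0)
E = rng.normal(size=(4, 8))            # toy embeddings, one per row
b = rng.normal(size=8)                 # toy estimate of a "bias direction"

E_debiased = project_out(E, b)

# The bias component is gone:
print(np.allclose(E_debiased @ (b / np.linalg.norm(b)), 0.0))   # True
# But pairwise inner products among embeddings are generally not preserved,
# which is the distortion that motivates rotation-based alternatives:
print(np.allclose(E @ E.T, E_debiased @ E_debiased.T))          # False
```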

“When you do that, you inadvertently squish everything around,” says Walter Gerych, the paper’s first author, who conducted this research last year as a postdoc at MIT. “All the other relationships that the model learns change when you do that.”

Gerych, who is now an assistant professor of computer science at Worcester Polytechnic Institute, is joined on the paper by MIT graduate students Cassandra Parent and Quinn Perian; Google’s Rafiya Javed; and MIT associate professors of electrical engineering Justin Solomon and Marzyeh Ghassemi, who is an affiliate of the Abdul Latif Jameel Clinic for Machine Learning and Health and the Laboratory for Information and Decision Systems. 

While projection debiasing stops the model from acting upon the bias that’s been projected out of the subspace, it can end up amplifying and creating other biases, hence the Whac-A-Mole dilemma. According to Ghassemi, the unintended amplification of model biases is “both a technical and practical challenge. For instance, when debiasing a VLM that retrieves images of clinical staff — if racial bias is removed — it could have the unintended consequence of amplifying gender bias.” 

WRING works by moving certain coordinates within the high-dimensional space of a model — the ones that appear to be responsible for bias — to a different angle, so the model can no longer distinguish between different groups within a certain concept. This changes the representation within a specific space while leaving the model’s other relationships intact. And like projection debiasing, WRING is a post-processing approach, which means it can be applied “on the fly” to a pre-trained VLM. 

“People already spent a lot of resources, a lot of money, training these huge models, and we don’t really want to go in and modify something during training because then you have to start from scratch,” Gerych explains. “[WRING is] very efficient. It doesn’t require more training of the model and it’s minimally invasive.”

In their results, the researchers found that WRING significantly reduced bias for a target concept without increasing bias in other areas. But for now, the approach is somewhat limited to Contrastive Language-Image Pre-training (CLIP) models, a type of VLM that connects images to language for search or classification.

“Extending this to ChatGPT-style generative language models is the reasonable next step for us,” says Gerych.

This work was supported, in part, by a National Science Foundation CAREER Award, an AI2050 Early Career Fellowship, a Sloan Research Fellowship, a Gordon and Betty Moore Foundation Award, and an MIT-Google Computing Innovation Award.

Transforming deep-space signals into cathedral sound

Wed, 04/29/2026 - 2:30pm

A new immersive sound installation at Oulu Cathedral, Finland, brings the research of MIT astrophysicist and associate professor of physics Kiyoshi Masui into a striking sensory form, transforming more than 4,000 cosmic signals into spatial audio.

With its grand opening on April 4, “The Logos” project invites visitors to experience deep-space phenomena not as distant abstractions, but as something immediate and resonant. The work is led by artist and creative technologist Andrew Melchior in collaboration with Masui, philosopher Timothy Morton, and cathedral dean Satu Saarinen. Together, they treat the cathedral, completed in 1777 and rebuilt in 1832 after a fire, not just as a setting but as part of the instrument itself. Its stone surfaces and reverberant acoustics give physical presence to signals that have traveled from distant galaxies.

At the heart of the installation are data gathered by the Canadian Hydrogen Intensity Mapping Experiment (CHIME) radio telescope, which detects fast radio bursts (FRBs). FRBs are immensely energetic flashes lasting only milliseconds and originating in distant galaxies across the observable universe. The Logos represents one of the most extensive artistic sonifications of FRB data to date. Each day at noon, the cathedral is filled with a one-hour procedural composition derived from these bursts. Some bursts are singular events, never repeating, while others pulse again and again from unknown sources. These patterns remain one of astrophysics’ most compelling mysteries.

“The fast flashes will echo as snare-like beats bouncing through the cathedral,” says Masui. “The sweeping dispersion of the signal — where different radio frequencies arrive at slightly different times — creates harmonies between high and low tones. It should feel rich and layered, while also revealing something real about how these signals travel across billions of years of cosmic space before reaching Earth.”
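The “sweeping dispersion” Masui mentions follows a standard radio-astronomy relation: lower frequencies lag behind higher ones by an amount set by the dispersion measure (DM), the column density of free electrons along the line of sight, times the difference of inverse-squared frequencies. A sketch using the textbook constant; the DM value is illustrative, not taken from any specific CHIME burst:

```python
# Cold-plasma dispersion delay between two observing frequencies:
#   delta_t [ms] = 4.149e6 * DM * (nu_lo**-2 - nu_hi**-2)
# with frequencies in MHz and DM in pc cm^-3. Lower frequencies arrive later.
K_DM_MS = 4.149e6  # dispersion constant, ms * MHz^2 / (pc cm^-3)

def dispersion_delay_ms(dm, nu_lo_mhz, nu_hi_mhz):
    """Arrival-time delay (ms) of nu_lo relative to nu_hi for a given DM."""
    return K_DM_MS * dm * (nu_lo_mhz**-2 - nu_hi_mhz**-2)

# An illustrative burst with DM = 500 pc/cm^3 sweeping across CHIME's
# 400-800 MHz band takes roughly ten seconds from top to bottom.
print(dispersion_delay_ms(500, 400, 800) / 1000.0)  # ~9.7 (seconds)
```

It is this frequency-ordered arrival, stretched over seconds, that maps naturally onto a descending musical sweep.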

By converting FRB data into a shared listening experience, the collaboration suggests a different way of understanding the universe: not only through analysis, but through attention.

Running through April 2027 to mark the cathedral’s 250th anniversary, The Logos will feature as part of Oulu2026 European Capital of Culture and the Lumo Art and Tech Festival. 

A month in Panama: Rethinking what real estate development can be

Wed, 04/29/2026 - 1:50pm

Cherry Tang, a master of science in real estate development student at the MIT Center for Real Estate, recently participated in an experiential learning opportunity in Panama working with Conservatorio, a development firm based in Casco Viejo. What began as a modeling exercise quickly became a deeper exploration of how development, community, and environment intersect, shaped as much by people and culture as by the work itself.

“I went in expecting to build a financial model. I didn’t expect that the experience would fundamentally reshape how I think about development,” Tang reflects.

The project centered on Santa Catalina, a remote surf town on Panama’s Pacific coast. The development comprises approximately 140 residential units across condos, villas, and homes, along with vacant lots, four retail spaces, a surf school with a stadium, and a restaurant with a pool — all envisioned as the town’s first true center.

At first glance, Tang says, Santa Catalina didn’t resemble a typical “prime” development market. It had limited infrastructure, low density, and no established core.

“What it does have is something powerful: world-class surf and access to Coiba National Park, a premier diving destination,” Tang says. “Here, the ocean becomes the anchor tenant.”

The project is designed as an open, walkable master-planned community that integrates seamlessly with the existing town. Anchored by surfing and diving, it introduces a diverse product mix and a 600-meter linear park, positioning it as the future heart of Santa Catalina and a differentiated alternative to both local developments and traditional resort-style communities.

Tang saw this as a different vision of place-making. “It wasn’t about building a resort. It was about building a center of gravity for a community that has never really had one.”

Tang’s primary role was to build the project’s financial model from the ground up. The capital structure, with land contributed as equity and sales deposits used to fund construction, required a different way of thinking than the institutional frameworks she had used in previous roles in Toronto and Boston.

“It was more than a technical exercise,” she explains. “It reinforced how financial, physical, and strategic decisions are deeply interconnected, and how thoughtful structuring can unlock projects that might otherwise not be feasible.”
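The structure Tang describes can be illustrated with a simple sources-and-uses check: contributed land counts as equity rather than a cash outlay, and presale deposits shrink the construction debt the project must raise. All figures below are hypothetical, invented for illustration, not drawn from the Santa Catalina project:

```python
# Hypothetical sources-and-uses for a development in which land is
# contributed as equity and buyer deposits help fund construction.
# All numbers are invented for illustration.
uses = {"construction": 30_000_000, "soft_costs": 5_000_000}
sources = {
    "land_equity": 8_000_000,        # land contributed at appraised value
    "presale_deposits": 12_000_000,  # deposits released toward construction
}

total_uses = sum(uses.values())
total_sources = sum(sources.values())
loan_needed = total_uses - total_sources   # gap to be financed with debt

print(f"loan needed: ${loan_needed:,}")                 # loan needed: $15,000,000
print(f"loan-to-cost: {loan_needed / total_uses:.0%}")  # loan-to-cost: 43%
```

Lower debt at this stage is what can make an otherwise marginal project feasible, which is the structuring point Tang highlights.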

Working directly with KC Hardin, founder and CEO of Conservatorio, and the broader leadership team, Tang gained firsthand exposure to real-time development decision-making. She presented her financial model to leadership and prospective investors, and her assumptions helped shape conversations around phasing, design, and construction.

“Development is a feedback loop between underwriting and the built environment,” Tang says.

Throughout the month, Tang and her colleagues met with a range of people shaping the project’s future. They spent time with local developers and brokers, learning about infrastructure improvements and ongoing real estate activity in the region. 

Tang described meeting one family with long-standing ties to the area as one of the more memorable moments.

“Their coastline conservation work in Panama is deeply inspiring,” she says.

They also met with scientists from the Smithsonian Tropical Research Institute, trekking through mangroves and learning about coastal ecosystems and the long-term environmental implications of development.

“It was a vivid reminder that development decisions don’t exist in isolation,” says Tang.

Outside of work, Panama had its way of leaving an impression. Sailing through the Panama Canal ... watching cargo ships pass through landscapes filled with monkeys and sloths ... living in Casco Viejo — each added another layer to the experience for Tang. The neighborhood itself served as a real-life case study in thoughtful, community-oriented development.

“What stayed with me most was Conservatorio’s approach to revitalization, not through displacement, but through deep engagement, trust-building, and creating pathways for local residents to be part of the area’s transformation.”

That same spirit was reflected in everyday moments, like co-workers who went out of their way to make interns feel welcome.

“Strangers greeted us like neighbors,” says Tang. “The level of warmth and hospitality defined the experience as much as the work itself.”

By the end of the month, the experience left her with more than technical skills — she had a shift in perspective.

“I began to see development less as a formula and more as a system,” she explains. “One that sits at the intersection of finance, design, environment, and community.”

Her takeaway is that value can be created in unconventional ways, and leadership in real estate is grounded in trust, curiosity, and a deep respect for place.

Tang arrived in Panama to build a model. She left with a deeper understanding of what it means to build thoughtfully — as a developer, and as a steward of place.

The MIT-IBM Computing Research Lab launches to shape the future of AI and quantum computing

Wed, 04/29/2026 - 6:00am

The following is a joint announcement by the MIT Schwarzman College of Computing and IBM.

IBM and MIT today announced the launch of the MIT-IBM Computing Research Lab, advancing their long-standing collaboration to shape the next era of computing. The new lab expands its scope to include quantum computing, alongside foundational artificial intelligence research, with the goal of unlocking new computational approaches that go beyond the limits of today’s classical systems.

The MIT-IBM Computing Research Lab builds on a distinguished history of scientific excellence at the intersection of research and academia. Evolving from the MIT-IBM Watson AI Lab, which originated in 2017 on MIT’s campus, the new lab reflects a transformed technology landscape — one in which AI has entered mainstream deployment and quantum computing is rapidly advancing toward practical impact. Together, MIT and IBM aim to help lead research in AI and quantum computing and to redefine the mathematical foundations across both domains.

“We expect the MIT-IBM Computing Research Lab to emerge as one of the world’s premier academic and industrial hubs accelerating the future of computing,” says Jay Gambetta, director of IBM Research and IBM Fellow, and IBM chair of the MIT-IBM Computing Research Lab. “Together, the brightest minds at MIT and IBM will rethink how models, algorithms, and systems are designed for an era that will be defined by the sum of what’s possible when AI and quantum computing come together.”

“For a decade, the collaboration between MIT and IBM has produced leading-edge research and innovation, provided mentorship, and supported the professional growth of researchers both at MIT and IBM,” says Anantha Chandrakasan, MIT’s provost, who, as then-dean of the School of Engineering, spearheaded the creation of the MIT-IBM Watson AI Lab and will continue as MIT chair of the lab. “The incredible technical achievements set the bar high for our work together over the next 10 years. I look forward to another decade of impact.”

Addressing the next frontiers in computation

The MIT-IBM Computing Research Lab will serve as a focal point for joint research between MIT and IBM in AI, algorithms, and quantum computing, as well as the integration of these technologies into hybrid computing systems. The lab is designed to accelerate progress toward powerful new computational approaches that take advantage of rapid advances in AI and quantum-centric supercomputing, including those that combine maturing quantum hardware with classical systems and advanced AI methods.

This research initiative will include improving AI capabilities and integrating AI with traditional computing, alongside pursuing advances in small, efficient, modular language model architectures, novel AI computing paradigms, and enterprise-focused AI systems designed for deployment in real-world environments, where reliability, transparency, and trust are essential.

In parallel, the lab will rethink the mathematical and algorithmic foundations that underpin the next era of computing by accelerating the development of novel quantum algorithms for complex problems, with impacts in areas such as materials science, chemistry, and biology.

Additionally, the lab will investigate the mathematical and algorithmic foundations of machine learning, optimization, Hamiltonian simulation, and partial differential equations, which are used to approximate the behavior of dynamical systems that classical computers can currently handle only at limited scale and accuracy. Innovations from the lab could have wide implications for global industries, from more accurate weather and air turbulence prediction to better forecasts of financial market performance. Similarly, with improved optimization approaches, research from the lab could help lower risks in areas like finance, predict protein structures for more targeted medicine, and streamline global supply chains.

With its focus on AI, algorithms, and quantum, the MIT-IBM Computing Research Lab will complement and enhance the work of two of MIT’s strategic initiatives, the MIT Generative AI Impact Consortium and the MIT Quantum Initiative. MIT President Sally Kornbluth launched these strategic initiatives to broaden and deepen MIT’s impact in developing solutions to serious global challenges. The MIT-IBM Computing Research Lab will also leverage IBM’s longtime leadership and expertise in quantum computing. As part of its ambitious roadmap, IBM has laid out a clear path to delivering the world’s first fault-tolerant quantum computer by 2029, and is working across industries to drive value from quantum-centric supercomputing, tightly integrating quantum computers with high-performance computing and AI accelerators to solve the world’s toughest problems.

Deep integration with scientific domains

The MIT-IBM Computing Research Lab will also continue to serve as a foundation for training the next generation of computational scientists and innovators. It will do so by engaging faculty and students across MIT departments, enabling new computational approaches to accelerate discoveries in the physical and life sciences.

The lab will continue to be co-directed by Aude Oliva, senior research scientist at MIT’s Computer Science and Artificial Intelligence Laboratory, and David Cox, vice president of AI Foundations at IBM Research. MIT and IBM have appointed leads for each of the lab’s three focus areas — AI, algorithms, and quantum. Jacob Andreas, associate professor in the Department of Electrical Engineering and Computer Science (EECS), and Kenney Ng, principal research scientist at IBM Research and the MIT-IBM science program manager, will co-lead AI; Vinod Vaikuntanathan, the Ford Foundation Professor of Engineering in EECS, and Vasileios Kalantzis, IBM Research senior research scientist, will co-lead algorithms; and Aram Harrow, professor of physics, and Hanhee Paik, IBM director of Quantum Algorithm Centers, will co-lead quantum.

“The MIT-IBM Computing Research Lab reflects an important expansion of the collaboration between MIT and IBM and the increasing connections across AI, algorithms, and quantum. This deepened focus also underscores a strong alignment with the MIT Schwarzman College of Computing’s mission to advance the forefront of computing and its integration across disciplines,” says Dan Huttenlocher, dean of the MIT Schwarzman College of Computing and MIT co-chair of the lab. “I’m excited about what this next chapter will enable in these three areas, and their impact broadly.”

Building on nearly a decade of collaboration

The MIT-IBM Watson AI Lab helped pioneer a model for academic-industry research collaboration, aligning long-term scientific inquiry with real-world impact. Since its inception, the lab has funded over 210 research projects involving over 150 MIT faculty members and over 200 IBM researchers. Collectively, the projects have led to over 1,500 peer-reviewed articles. The lab also helped shape the career growth of a number of MIT students and junior researchers, funding more than 500 students and postdocs.

“The true measure of this lab is not just innovation, but transformation of a field. Hundreds of students have contributed to thousands of publications in top conferences and journals, demonstrating their capabilities to address meaningful problems,” says Oliva. “The MIT-IBM Computing Research Lab builds on an extraordinary legacy of impact to advance a trusted collaboration that will redefine the future of AI and quantum computing in a way never seen before.”

“By coupling academic rigor with industrial scale, the lab aims to define the computational foundations that will power the next generation of AI, quantum, and scientific breakthroughs,” says Cox. “By bringing together advances in AI, algorithms, and quantum computing under one integrated research effort, we’re creating the conditions to rethink the mathematical and computational foundations of science and engineering.”

The MIT-IBM Computing Research Lab will capitalize on this foundation, expanding both the scientific scope and the ecosystem of collaborators across the Cambridge-Boston region and beyond.

MIT engineers’ virtual violin produces realistic sounds

Wed, 04/29/2026 - 5:00am

There is no question that violin-making is an art form. It requires a musician’s ear, a craftsperson’s skill, and a historian’s appreciation of lessons learned over time. Making a violin also takes trust: Violin makers, or luthiers, often must wait until the instrument is finished before they can hear how all their hard work will sound.

But a new tool developed by MIT engineers could help luthiers play around with a violin’s design and tweak its sound even before a single part is carved.

In a study appearing today in the journal npj Acoustics, the MIT team reports on a new “computational violin” — a computer simulation that captures the detailed physics of the instrument and realistically produces the sound of a violin when its strings are plucked.

While there are software programs and plug-ins that enable users to play around with virtual violins, their sounds are typically the result of sampling and averaging over thousands of notes played by actual violins.

In contrast, the new computational violin takes a physics-based approach: It produces sound based on the way the instrument, including its vibrating strings, physically interacts with the surrounding air.

As a demonstration, the researchers applied the computational violin to play two short excerpts: one from Bach’s “Fugue in G Minor,” and another from “Daisy Bell” — a nod to the first song that was ever produced by a computer-synthesized voice.

The computational violin currently simulates the sound of plucked strings — a type of playing that musicians know as “pizzicato.” Violin bowing, the researchers say, is a much more complicated interaction to model. However, the computational violin represents the first physics-based foundation of a strung violin sound that could one day be paired with a model of bowing to produce realistic, bowed violin music.

For now, the team says the new virtual violin could be used in the initial stages of violin design. Luthiers can tweak certain parameters such as a violin’s wood type or the thickness of its body, and then listen to the sound that the instrument would make in response.

“These days, people try to improve designs little by little by building a violin, comparing the sound, then making a change to the next instrument,” says Yuming Liu, senior research scientist at MIT. “It’s very slow and expensive. Now they can make a change virtually and see what the sound would be.”

“We’re not saying that we can reproduce the artisan’s magic,” adds Nicholas Makris, professor of mechanical engineering at MIT. “We’re just trying to understand the physics of violin sound, and perhaps help luthiers in the design process.”

Makris and Liu’s MIT co-authors include Arun Krishnadas PhD ’23 and former postdoc Bryce Campbell, along with Roman Barnas of the North Bennet Street School.

Sound matrix

The quality of a violin’s sound is determined by its dimensions and design. The instrument is made from thoughtfully crafted parts and materials that all work to generate and amplify sound. In recent years, scientists have sought to understand what artisans have intuited for centuries, in terms of what specific parameters shape a violin’s sound.

In one early effort in 2006, scientists, as part of the Strad3D project, put a rare Stradivarius violin through a CT scanner. The violin was crafted in 1715 by the master violinmaker Antonio Stradivari, during what is considered the “Golden Age” of violin making. To better understand the violin’s anatomy and its relation to sound, the scientists scanned the instrument and produced 600 “slices,” or views, of the violin.

The CT scans are available online for people to view and use as data for their own experiments. For their study, Makris and his colleagues first imported the CT scans into a solid modeling software program to generate a detailed three-dimensional model of the violin. They then ran a finite element simulation, essentially dividing the violin into millions of tiny individual cubes, or “elements.”

For each cube, they noted its material type, such as if a cube from the violin’s back plate is made from maple or spruce, or if a string is made from steel or natural fibers. They then applied physics-based equations of stress and motion to predict how each material element would move in relation to every other element across the instrument.

They also carried out a similar process for the air surrounding the violin, dividing up a roughly cubic-meter volume of air and applying acoustic wave equations to predict how each tiny parcel of air would move and contribute to generating sound.

“The entire thing is a matrix of millions of individual elements,” explains Krishnadas. “And ultimately, you see this whole three-dimensional being, which is the violin and the air all connected and interacting with each other.”

A plucky model

The team then simulated how the new computational violin would sound when plucked. When a violinist plucks a string, they pull the string sideways and let it go, causing the string to vibrate. These vibrations travel across the instrument and inside it; the air’s vibrations are amplified as they travel out of the violin and into the surroundings, where a listener hears the vibrations as sound.

For their purposes, the engineers simulated a simple string pluck by directing one of the virtual violin’s strings to stretch out and then rebound. The simulation computed all the resulting motions and vibrations of the millions of elements in the violin, and the sound that the pluck would produce.

For notes that require pressing down on a violin’s fingerboard, they simulated the same plucking, and in addition, included a condition in which the string is held fixed in the section of the fingerboard where a violinist’s finger would press down.
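The full simulation couples millions of three-dimensional elements with the surrounding air, but the core numerical idea, discretizing a vibrating body and stepping its equations of motion forward in time, can be illustrated at toy scale. The sketch below (a one-dimensional finite-difference string, not the team’s finite element code) releases a triangular pluck between two fixed ends and propagates it with the standard wave-equation update:

```python
import numpy as np

def pluck_string(n_points=100, n_steps=400, courant=0.9, pluck_pos=0.3):
    """Toy 1D wave-equation solver for a plucked string (finite differences).

    A minimal sketch of the discretize-and-step idea, not the paper's 3D
    finite-element model. The fixed endpoints stand in for the nut and bridge.
    """
    u_prev = np.zeros(n_points)   # displacement at step t-1
    u = np.zeros(n_points)        # displacement at step t

    # Initial "pluck": triangular displacement peaked at pluck_pos,
    # released from rest (u_prev equals u, so initial velocity is zero)
    x = np.linspace(0.0, 1.0, n_points)
    u[:] = np.where(x < pluck_pos, x / pluck_pos, (1.0 - x) / (1.0 - pluck_pos))
    u_prev[:] = u

    c2 = courant ** 2             # Courant number < 1 keeps the scheme stable
    history = [u.copy()]
    for _ in range(n_steps):
        u_next = np.zeros(n_points)
        # Central-difference update for the interior points; ends stay fixed
        u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                        + c2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
        u_prev, u = u, u_next
        history.append(u.copy())
    return np.array(history)

motion = pluck_string()
```

Sampling the displacement at one point over time (e.g., `motion[:, 10]`) would give a crude plucked tone; the paper’s model instead radiates the vibrations through a simulated cubic meter of air.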

The researchers carried out this computational process to virtually pluck out the notes in several measures of “Daisy Bell” and Bach’s “Fugue in G Minor.”

“If there’s anything that’s sounding mechanical to it, it’s because we’re using the exact same time function, or standard way of plucking, for each note,” says Makris, who is himself a lute player. “A musician will adapt the way they’re plucking, to put a little more feeling on certain notes than others. But there could be subtleties which we could incorporate and refine.”

As it is, the new computational model is the first to generate realistic sound based on the laws of physics and acoustics. The researchers say that violin makers could use the model to test how a violin might sound when certain dimensions or properties are changed. For instance, when the researchers varied the thickness of the virtual violin’s back plate or changed its wood type, they could hear clear differences in the resulting sounds.

“You can tweak the model, to hear the effect on the sound,” Makris says. “Since everything obeys the laws of physics, including a violin and the music it makes, this approach can deepen our appreciation of what makes a violin’s sound. But ultimately, we get most of our inspiration from the artisans.”

This work was supported, in part, by an MIT Bose Research Fellowship.

Enabling privacy-preserving AI training on everyday devices

Wed, 04/29/2026 - 12:00am

A new method developed by MIT researchers can accelerate a privacy-preserving artificial intelligence training method by about 81 percent. This advance could enable a wider array of resource-constrained edge devices, like sensors and smartwatches, to deploy more accurate AI models while keeping user data secure.

The MIT researchers boosted the efficiency of a technique known as federated learning, which involves a network of connected devices that work together to train a shared AI model.

In federated learning, the model is broadcast from a central server to wireless devices. Each device trains the model using its local data and then transfers model updates back to the server. Data are kept secure because they remain on each device.

But not all devices in the network have enough capacity, computational capability, and connectivity to store, train, and transfer the model back and forth with the server in a timely manner. This causes delays that worsen training performance.
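In outline, one round of the synchronous scheme described above looks like the following. This is a generic FedAvg-style sketch in Python, not the researchers’ code; `local_step` is a placeholder for whatever on-device training a real deployment would run:

```python
import numpy as np

def federated_round(global_params, device_datasets, local_step):
    """One synchronous round of federated learning: broadcast the model,
    train locally on each device's own data, then average the parameter
    updates on the server. Raw data never leaves a device; only parameter
    updates are transmitted. (Generic FedAvg-style sketch.)
    """
    updates = []
    for data in device_datasets:
        local = global_params.copy()           # broadcast to the device
        local = local_step(local, data)        # on-device training
        updates.append(local - global_params)  # device sends only its update
    # Server averages the updates to complete the round
    return global_params + np.mean(updates, axis=0)
```

The server must wait for every device before the average can be computed, which is exactly the bottleneck the MIT method targets.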

The MIT researchers developed a technique to overcome these memory constraints and communication bottlenecks. Their method is designed to handle a heterogeneous network of wireless devices with varied limitations.

This new approach could make it more feasible for AI models to be used in high-stakes applications with strict security and privacy standards, like health care and finance.

“This work is about bringing AI to small devices where it is not currently possible to run these kinds of powerful models. We carry these devices around with us in our daily lives. We need AI to be able to run on these devices, not just on giant servers and GPUs, and this work is an important step toward enabling that,” says Irene Tenison, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on this technique.

Her co-authors include Anna Murphy ’25, a machine-learning engineer at Lincoln Laboratory; Charles Beauville, a visiting student from Ecole Polytechnique Fédérale de Lausanne (EPFL) in Switzerland and a machine-learning engineer at Flower Labs; and senior author Lalana Kagal, a principal research scientist in the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT. The research will be presented at the IEEE International Joint Conference on Neural Networks. 

Reducing lag time

Many federated learning approaches assume all devices in the network have enough memory to train the full AI model, and stable connectivity to transmit updates back to the server quickly.

But these assumptions fall short with a network of heterogeneous devices, like smartwatches, wireless sensors, and mobile phones. These edge devices have limited memory and computational power, and often face intermittent network connectivity.

The central server usually waits to receive model updates from all devices, then averages them to complete the training round. This process repeats until training is complete.

“This lag time can slow down the training procedure or even cause it to fail,” Tenison says.

To overcome these limitations, the MIT researchers developed a new framework called FTTE (Federated Tiny Training Engine) that reduces the memory and communication overhead needed by each mobile device.

Their framework involves three main innovations.

First, rather than broadcasting the entire model to all devices, FTTE sends a smaller subset of model parameters instead, reducing the memory requirement for each device. Parameters are internal variables the model adjusts during training.

FTTE uses a special search procedure to identify parameters that will maximize the model’s accuracy while staying within a certain memory budget. That limit is set based on the most memory-constrained device.
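The article does not detail the search procedure, but selecting trainable parameters under a hard memory budget can be sketched with a greedy heuristic. The importance scores here are a hypothetical proxy (say, accumulated gradient magnitudes), not FTTE’s actual criterion:

```python
import numpy as np

def select_trainable_subset(importance, param_bytes, budget_bytes):
    """Greedily pick parameters to train on-device, by importance per byte,
    without exceeding the memory budget.

    An illustrative stand-in for FTTE's search procedure; the real method's
    selection criterion is not specified in the article.
    """
    order = np.argsort(-importance / param_bytes)  # best value per byte first
    chosen, used = [], 0
    for idx in order:
        if used + param_bytes[idx] <= budget_bytes:
            chosen.append(int(idx))
            used += int(param_bytes[idx])
    return sorted(chosen), used
```

In a real deployment, `budget_bytes` would be set from the most memory-constrained device in the network, as the article describes.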

Second, the server updates the model using an asynchronous approach. Rather than waiting for responses from all devices, the server accumulates incoming updates until it reaches a fixed capacity, then proceeds with the training round.

Third, the server weights updates from each device based on when it received them. In this way, older updates don’t contribute as much to the training process. These outdated data can hold the model back, slowing the training process and reducing accuracy.

“We use this semi-asynchronous approach because we want to involve the least powerful devices in the training process so they can contribute their data to the model, but we don’t want the more powerful devices in the network to stay idle for a long time and waste resources,” Tenison says.
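A minimal sketch of such a semi-asynchronous server step, assuming a fixed-capacity buffer and an illustrative exponential staleness decay (the article specifies only that older updates contribute less):

```python
import numpy as np

def buffered_aggregate(global_params, buffer, current_round, capacity, decay=0.5):
    """Semi-asynchronous server step: once `capacity` updates have arrived,
    average them with staleness-based weights and apply them to the model.

    `buffer` holds (update, round_sent) pairs. The exponential decay is an
    illustrative choice, not the paper's exact weighting scheme.
    """
    if len(buffer) < capacity:
        return global_params, False  # keep waiting for more devices

    # Older updates (sent in earlier rounds) get exponentially smaller weight
    weights = np.array([decay ** (current_round - r) for _, r in buffer])
    weights /= weights.sum()
    avg = sum(w * u for w, (u, _) in zip(weights, buffer))
    buffer.clear()                   # start collecting for the next round
    return global_params + avg, True
```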

Achieving acceleration

The researchers tested their framework in simulations with hundreds of heterogeneous devices and a variety of models and datasets. On average, FTTE enabled the training procedure to reach completion 81 percent faster than standard federated learning approaches.

Their method reduced the on-device memory overhead by 80 percent and the communication payload by 69 percent, while nearly matching the accuracy of other techniques.

“Because we want the model to train as fast as possible to save the battery life of these resource-constrained devices, we do have a tradeoff in accuracy. But a small drop in accuracy could be acceptable in some applications, especially since our method performs so much faster,” she says.

FTTE also demonstrated effective scalability and delivered higher performance gains for larger groups of devices.

In addition to these simulations, the researchers tested FTTE on a small network of real devices with varying computational capabilities.

“Not everyone has the latest Apple iPhone. In many developing countries, for instance, users might have less powerful mobile phones. With our technique, we can bring the benefits of federated learning to these settings,” she says.

In the future, the researchers want to study how their method could be used to increase the personalized performance of AI models on each device, rather than focusing on the average performance of the model. They also want to conduct larger experiments on real hardware.

This work was funded, in part, by a Takeda PhD Fellowship.

With a swipe of a magnet, microscopic “magno-bots” perform complex maneuvers

Tue, 04/28/2026 - 11:00am

Under a microscope, a bouquet of lollipop-like structures, each smaller than a grain of sand, waves gently in a petri dish of liquid. Suddenly, they snap together, like the jaws of a Venus flytrap, as a scientist waves a small magnet over the dish. What was previously an assemblage of tiny passive structures has transformed instantly into an active robotic gripper.

The lollipop gripper is one demonstration of a new type of soft magnetic hydrogel developed by engineers at MIT and their collaborators at the École Polytechnique Fédérale de Lausanne (EPFL) in Switzerland and the University of Cincinnati. In a study appearing today in the journal Matter, the MIT team reports on a new method to print and fabricate the gel, which can be made into complex, magnetically activated three-dimensional structures.

The new gel could be the basis for soft, microscopic, magnetically responsive robots and materials. Such magno-bots could be used in medicine, for instance to release drugs or grab biopsies when directed by an external magnet.

Making objects move with magnets is nothing new, at least at the macroscale. We can, for example, wave a refrigerator magnet over a pile of paper clips that will trail the magnet in response. And at the microscale, scientists have designed a variety of magnetic “micro-swimmers” — components that are smaller than a millimeter and can be directed remotely by a magnet to squeeze through small spaces. For the most part, these designs work by mixing magnetic particles into a printable resin and pulling the entire swimmer in the direction of an external magnet.

In contrast, the MIT team’s new material can be made into even more complex and deformable structures with micron-scale precision. These features could enable a magnetic millibot to move individual features and perform more complex maneuvers.

“We can now make a soft, intricate 3D architecture with components that can move and deform in complex ways within the same microscopic structure,” says study author Carlos Portela, the Robert N. Noyce Career Development Associate Professor of Mechanical Engineering at MIT. “For soft microscopic robotics, or stimuli-responsive matter, that could be a game-changing capability.”

The study’s MIT co-authors include graduate students Rachel Sun and Andrew Chen, along with Yiming Ji and Daryl Yee of EPFL and Eric Stewart of the University of Cincinnati.

In a flash

At MIT, Portela’s group develops new metamaterials — materials engineered with unique, microscopic architectures that give rise to beyond-normal material properties. Portela has fabricated a variety of such metamaterials, including extremely tough and stretchy architectures and designs that can manipulate sound and withstand violent impacts.

Most recently, he’s expanded his research to “programmable” materials, which can be engineered to change their properties in response to stimuli, such as certain chemicals, light, and electric and magnetic fields.

From the team’s perspective, magnetic stimuli stand out from the rest.

“With a magnetically responsive material, we have control at a distance and the response is instantaneous,” says co-lead author Andrew Chen. “We don’t have to wait for a slow chemical reaction or physical process, and we can manipulate the material without touching it.”

For the new study, the team aimed to create a magnetically responsive metamaterial that can be made into structures smaller than a millimeter. Researchers typically fabricate microstructures by using two-photon lithography — a high-resolution 3D printing technique that flashes a laser into a small pool of resin. With repeated flashes, the laser traces a microscopic pattern into the resin, which solidifies into the same pattern, ultimately creating a tiny, three-dimensional structure, layer by layer.

While 3D resin printing produces intricate microstructures, using the same process to print magnetic structures has been a challenge. Researchers have tried to combine the resin with magnetic nanoparticles before printing the mixture. But magnetic particles are essentially bits of metal that inherently scatter light away or agglomerate and sediment unintentionally. Scientists have found that any magnetic particles in the resin can reduce the laser’s power at a given spot and weaken the resulting structure or prevent its printing altogether.

“Directly 3D printing deformable micron-scale structures with a high fraction of magnetic particles is extremely difficult, often involving a tradeoff between magnetic functionality and structural integrity,” says Sun, a co-lead author on the work.

A printed double-dip

The researchers created a new way to fabricate magnetic microstructures, by combining 3D resin printing with a double-dip process. The researchers first applied conventional resin printing to create a microstructure using a typical polymer gel, with no added magnetic particles. Then they dipped the printed gel into a solution containing iron ions, which the gel can absorb. The iron-soaked structure is then dipped again in a second solution of hydroxide ions. The iron ions in the gel bond with the hydroxide ions, creating iron-oxide nanoparticles that are inherently magnetic.

With this new process, the team can print intricate structures smaller than a millimeter, and add magnetic properties to the structures after printing. What’s more, they are able to control how magnetic a structure’s individual features can be. They found that, by tuning the laser’s power as they print certain features, they can set how cross-linked, or “tight,” the gel is when printed. The tighter the gel, the fewer magnetic particles it can form. In this way, the researchers can determine how magnetic each tiny feature can be.

“This provides unprecedented design freedom to print multifunctional structures and materials at the microscale,” Sun says.

As a demonstration, the team fabricated ball-and-stick structures resembling tiny lollipops. The structures were less than a millimeter in height, with balls that were smaller than a grain of sand. The researchers printed the lollipops out of polymer gel and infused each ball with different amounts of magnetic particles, giving them various degrees of magnetism. Under a microscope, they observed that when they passed an ordinary refrigerator magnet over the structures, the lollipops pulled toward the magnet in various degrees, in a configuration that mimicked gripping fingers.

“You could imagine a magnetic architecture like this could act as a small robot that you could guide through the body with an external magnet, and it could latch onto something, for instance to take a biopsy,” Portela says. “That is a vision that others can take from this work.”

The team also fabricated a magnetically responsive, “bistable” switch. They first printed a small, millimeter-long rectangle of polymer gel and attached four tiny, oar-like magnetic structures to either side. Each oar measured about 8 microns thick — about the size of a red blood cell. When the team applied a magnet on one end of the rectangle, the oars flipped toward the magnet, pulling the rectangle in the same direction and locking it in that position. When the magnet was applied to the other side, the oars flipped again, pulling the rectangle, like a switch, in the opposite direction.

“We think this is a new kind of bistable mechanism that could be used, for instance, in a microfluidic device, as a magnetic valve to open or shut some flow,” Portela says. “For now, we’ve figured out how to fabricate magnetic complex architectures at the microscale and also spatially tune their properties. That opens up a lot of interesting ideas for soft miniature robots going forward.”

This research was supported, in part, by the National Science Foundation and the MathWorks seed grant program.

This work was performed, in part, in the MIT.nano fabrication and characterization facilities.

Robotically assembled building blocks could make construction more efficient and sustainable

Tue, 04/28/2026 - 12:00am

Robotically assembled building blocks could be a more environmentally friendly method for erecting large-scale structures than some existing construction techniques, according to a new study by MIT researchers.

The team conducted a feasibility study to evaluate the efficiency of constructing a simple building using “voxels,” which are modular 3D subunits that assemble into complex, durable structures.

After studying the performance of multiple voxels, the researchers developed three new designs intended to streamline building construction. They also produced a robotic assembler and a user-friendly interface for generating voxel-based building layouts and feeding instructions to the robots.

Their results indicate this voxel-based robotic assembly system could reduce embodied carbon — all of the carbon emitted during the lifecycle of building materials — by as much as 82 percent, compared with popular techniques like 3D concrete printing, precast modular concrete, and steel framing. The system would also be competitive in terms of cost and construction time. However, the choice of materials used to manufacture the voxels does play a major role in their carbon footprint and cost.

While scalability, durability, long-term robustness, and important considerations like fire resistance remain to be explored before such a system could be widely deployed, the researchers say these initial results highlight the potential of this approach for automated, on-site construction.

“I’m particularly excited about how the robotic assembly of discrete lattices can enable a practical way to apply digital fabrication to the built environment in a way that can let us build much more efficiently and sustainably,” says Miana Smith, a graduate student in the Center for Bits and Atoms (CBA) at MIT and lead author of the study.

She is joined on the paper by Paul Richard, a graduate student at École Polytechnique Fédérale de Lausanne in Switzerland and former visiting researcher at MIT; Alfonso Parra Rubio, a CBA graduate student; and senior author Neil Gershenfeld, an MIT professor and the director of the CBA. The research appears in Automation in Construction.

Designing better building blocks

Over the past several years, researchers in the Center for Bits and Atoms have been developing voxels, which are lattice-structured building blocks that can be assembled into objects with high strength and stiffness, like airplane wings, wind turbine blades, and space structures.

“Here, we are taking aerospace principles and applying them to buildings. Why don’t we make buildings as efficiently as we make airplanes?” says Gershenfeld, whose lab has previously worked on voxel assembly with NASA, Airbus, and Boeing.

To explore the feasibility of voxel-based assembly strategies for buildings, the researchers first evaluated the mechanical performance and sustainability of eight existing voxel designs, including a cuboctahedron made from glass-reinforced nylon and a Kelvin lattice made from steel.

Based on those evaluations, they developed a set of three voxels using a new geometry that could be more easily assembled robotically into a larger structure. The new design, based on a high-strength and high-stiffness octet lattice, mechanically self-aligns into rigid structures.

“The interlocking nature of these voxels means we can get nice mechanical properties without needing to have a lot of connectors in the system, so the construction process can run a lot faster,” Smith says.

To accelerate construction, they designed a robotic assembly system based on inchworm-like robots that crawl across a voxel structure by anchoring and extending their bodies. These Modular Inchworm Lattice Assembler robots, or MILAbots, use grippers on each end to place voxel building blocks and engage the snap-fit connections.

“The robots can assemble the voxels by dropping them into place and then stepping on them to have the pieces interlock. We can do precise maneuvers based on the mechanical relationship between the robots and the voxels,” Smith explains.

The team studied the embodied carbon needed to fabricate their new voxel designs using three materials: plastic, plywood, and steel. Then they evaluated the throughput and cost of using the robotic assembly system to build a simple, one-story building. The researchers compared these estimates with the performance of other construction methods.

Potential environmental benefits

They found that most existing voxels, and especially those made from plastics, performed poorly compared to existing methods in terms of sustainability, but the steel and wood voxels they designed offered significant environmental benefits.

For instance, utilizing their steel voxels would generate only 36 percent of the embodied carbon required for 3D concrete printing and 52 percent of the embodied carbon of precast concrete. The plywood voxels had the lowest carbon footprint, requiring about 17 percent and 24 percent of the embodied carbon needed, respectively.

“There is still a potentially viable option for a plastics-based voxel approach; we just have to be a bit more strategic about which types of plastics, infills, and geometries we use,” Smith says.

In addition, projected on-site assembly time for the steel and wood voxel approaches averaged 99 hours, whereas existing construction methods averaged 155 hours.

These speed benefits rely on the distributed nature of voxel-based assembly. While one MILAbot working alone is far slower than existing techniques, with a team of 20 robots working in parallel, the system catches up to or surpasses existing automation methods at a lower cost.

“One benefit of this method is how incremental it is. You can start building, and if it turns out you need a new room, you can just add onto the structure. It is also reversible, so if your use changes, you can disassemble the voxels and change the structure,” Gershenfeld says.

The researchers also developed an interface that enables users to input or hand-design a voxelized structure. The automatic system determines the paths the MILAbots should follow for construction and sends commands to the assemblers.
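As a simplified stand-in for that planner, voxel placements can be ordered layer by layer so that every block rests on one below it; the real tool also routes the MILAbots’ crawl paths, which this sketch omits:

```python
def plan_assembly(voxel_grid):
    """Order voxel placements bottom-up, layer by layer, checking that
    each block lands on a supported cell. A simplified stand-in for the
    planner described in the article, ignoring robot path routing and
    the overhangs that interlocking lattices can actually support.

    `voxel_grid` maps (x, y, z) cell coordinates to True for occupied cells.
    """
    # Deterministic sweep: lowest layer first, then row by row
    order = sorted(voxel_grid, key=lambda p: (p[2], p[1], p[0]))
    for x, y, z in order:
        if z > 0 and (x, y, z - 1) not in voxel_grid:
            raise ValueError(f"voxel {(x, y, z)} would be unsupported")
    return order
```

The resulting sequence could then be translated into per-robot place-and-step commands for a team of assemblers working in parallel.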

The next step in this project will be a larger testbed in Bhutan, using the “super fab lab” that CBA helped set up there to replicate the robots to test construction for a planned sustainable city, Gershenfeld says.

Additional areas of future work include studying the stability of voxel structures under lateral loads, improving the design tool to account for the physics of the system, enhancing the MILAbots, and evaluating voxels that have integrated sheeting, insulation, or electrical and plumbing routing.

“Our work helps support why doing this type of distributed robot assembly might be a practical way to bring digital fabrication into building construction,” Smith says.

“This is yet another visionary example from Neil Gershenfeld and his team, of finding ways for buildings to build themselves with the help of tiny robotic machines. I’m now fascinated by how we can harness an idea like this to make it more affordable to make the outsides of buildings more engaging and joyful,” says Thomas Heatherwick, founder of the design and architecture firm Heatherwick Studio, who was not involved with this research.

This work was funded, in part, by the MIT Center for Bits and Atoms Consortia.

Mapping molecular markers of physical fitness

Tue, 04/28/2026 - 12:00am

Patterns of molecular activity in the blood may hold clues not only to how fit someone is, but also to the biological processes that support physical performance. Researchers at MIT, GE HealthCare, and the U.S. Military Academy at West Point have developed a computational model that links thousands of these molecular signals to fitness levels, revealing pathways that could inform future studies to improve fitness training and speed injury or disease recovery.

To develop their model, the researchers analyzed more than 50,000 biomarkers in 86 cadets at the U.S. Military Academy who were training for a military competition. Using these data, the researchers were able to identify molecular pathways that appear to contribute to higher levels of physical fitness.

“We had 50,000 measurements, and we wanted to get it down to about 100 where there’s some likelihood that the markers that we’re measuring are mechanistically linked to physical fitness. So, not just a statistical correlation, of which there will be many, but markers where there’s a likelihood that there is a causal relationship,” says Ernest Fraenkel, the Grover M. Hermann Professor in Health Sciences and Technology in MIT’s Department of Biological Engineering.

These biomarkers can be measured from blood samples, which could offer a simple way to give an athlete, or perhaps someone with a chronic illness or long-term injury, additional information about where to focus their efforts to reduce injury risk, accelerate recovery, or raise their performance ceiling beyond what conventional measures show.

Azar Alizadeh, a principal scientist with GE HealthCare’s Healthcare Technology and Innovation Center, is the paper’s lead author. Fraenkel and Luca Marinelli, a senior principal scientist with GE HealthCare, are the senior authors of the new study, which appears in the journal Communications Biology.

Mapping fitness

To find the genetic basis of a simple trait such as height, scientists can perform large-scale studies known as genome-wide association studies (GWAS), in which genetic markers from thousands of people can be linked with height. However, the picture becomes much more complicated for traits such as physical fitness, which is determined by the interplay of many different genetic, physiological, and environmental factors.

The researchers set out to try to identify some of those factors, working with a group of 86 volunteers at the U.S. Military Academy at West Point who were training for the Sandhurst Military Skills Competition. Alizadeh led the experimental study design and execution, in collaboration with GE HealthCare, GE Research, West Point, and MIT scientists. During the three-month study period, volunteers participated in up to five sessions. At each session, blood samples were taken before and after intense exercise. The researchers also measured other traits such as lean muscle mass and VO2 max (the maximum rate of oxygen consumption during exercise).

From the blood samples, the researchers were able to measure more than 50,000 biomarkers, which they obtained by analyzing DNA methylation patterns, sequencing messenger RNA transcripts, and analyzing thousands of the proteins and small molecules found in the samples.

From their set of 50,000 biomarkers, the researchers hoped to identify a smaller number that could predict overall physical fitness, as measured by performance on the Army Combat Fitness Test (ACFT). This test includes a 2-mile run, the maximum deadlift (the heaviest weight a person can lift for a single repetition, up to 340 pounds), and the sprint-drag-carry, a test that involves sprinting, dragging a sled, and carrying kettlebells.

One way to do this would be to simply train a computational model to identify correlations between fitness and biomarkers. However, with only 86 subjects in the study, that approach would likely yield correlations that were random and did not actually contribute to physical fitness, Fraenkel says.

To take a more targeted approach, the researchers first created a network model that represents the interactions between the markers, based on existing databases that catalog those interactions. These connections might represent proteins that interact with each other in a signaling pathway, or a transcription factor that turns on a set of genes.

“We built a network that you can think of as a city map. You want to find the places in the city map that are lighting up — not just one light going on, but a whole bunch of houses or street lamps going on in the same neighborhood,” Fraenkel says. “We can find neighborhoods on this enormous molecular map that are active at the same time, in a way that correlates with the phenotype that we measure.”

“We built upon the network bioinformatics from the Fraenkel lab to create an end-to-end predictive modeling framework to discover biological expression circuits that drive groups of physical characteristics predictive of ACFT scores, for example, body composition or exercise physiology metrics like VO2 max,” Marinelli says.

After feeding the measurements from the study participants into this predictive model, known as PhenoMol, the researchers were able to identify more than 100 biomarkers linked to performance on the ACFT. Fitness predictions based on these biomarkers were much more accurate than those of a model that correlated biomarkers with performance on the ACFT without taking network connections into account. Additionally, PhenoMol performed similarly to a model that predicted participants’ fitness based on measurements of their VO2 max and lean muscle mass.
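The "lit-up neighborhood" search can be illustrated with a toy graph computation. This is not the PhenoMol algorithm, just a minimal sketch under simplifying assumptions: score each marker by its phenotype correlation, keep markers above a threshold, and report connected "neighborhoods" on the interaction network.

```python
from collections import defaultdict

def neighborhoods(edges, scores, threshold=0.5):
    """Return connected components among markers whose |score| passes threshold."""
    keep = {m for m, s in scores.items() if abs(s) >= threshold}
    # Build adjacency restricted to surviving markers.
    adj = defaultdict(set)
    for a, b in edges:
        if a in keep and b in keep:
            adj[a].add(b)
            adj[b].add(a)
    # Depth-first search to collect each connected "neighborhood."
    seen, comps = set(), []
    for node in sorted(keep):
        if node in seen:
            continue
        stack, comp = [node], set()
        while stack:
            n = stack.pop()
            if n in comp:
                continue
            comp.add(n)
            seen.add(n)
            stack.extend(adj[n] - comp)
        comps.append(comp)
    return comps
```

On a six-marker example with two interaction clusters, a weakly correlated marker is dropped and the remaining markers split into two neighborhoods, mirroring the "houses lighting up in the same neighborhood" intuition.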

Cellular pathways

The researchers found that the biomarkers identified by PhenoMol clustered into several different cellular pathways. Those include pathways involved in blood coagulation and the complement cascade — a part of the immune system involved in clearing damaged cells. Those systems likely help with recovery from tissue injury and stress response during exercise, Fraenkel says.

Another prominent cluster involves molecules related to the urea cycle, which is responsible for eliminating the ammonia that results from the breakdown of proteins. The model also identified biomarkers that are linked with the function of mitochondria (the organelles that generate energy within cells).

Fraenkel now hopes to dig deeper into which markers show someone’s current fitness, and which might reveal what their potential fitness levels could be. This could help to reveal potential strengths that might not show up in traditional fitness tests, he says.

That kind of prediction could be useful not only for athletic training, but also for other people who are recovering from an injury or disease, or people experiencing the effects of aging. For example, using this approach in different populations might provide useful information for an elderly person after a stroke, since such events often require months of therapy to regain significant mobility.

“This has relevance for the military and for sports teams, but also in a lot of normal life situations where maybe someone is going through rehabilitation for some injury or disease and they’ve hit a wall,” Fraenkel says. “Or during aging, you may be able to see when somebody’s losing capacity or when they have more capacity than they’ve been able to actualize.”

Molecular markers of fitness could also be used in clinical trials to rigorously test the potential benefits of popular food supplements and fitness programs, he adds.

To make the testing process simpler, the researchers would like to narrow down their pool of biomarkers to a handful that could be easily measured from a blood sample using a single method suitable for widespread use.

The research was developed with funding from the Defense Advanced Research Projects Agency (DARPA), which states that the views, opinions, or findings expressed are those of the authors and should not be interpreted as representing the official views or policies of the U.S. government.

Six from MIT awarded 2026 Paul and Daisy Soros Fellowships for New Americans

Tue, 04/28/2026 - 12:00am

Six MIT affiliates — Denisse Córdova Carrizales SM ’26; Ria Das ’21, MNG ’22; Ronak Desai; Stacy Godfreey-Igwe ’22; Arya Rao; and Ananthan Sadagopan ’24 — have been named 2026 P.D. Soros Fellows. In addition, P.D. Soros Fellow Avinash Vadali will begin a PhD in condensed-matter physics at MIT this fall.

The fellowship provides immigrants and the children of immigrants up to $90,000 in tuition and stipend support for up to two years of graduate studies. Interested students should contact Kim Benard, associate dean of distinguished fellowships in Career Advising and Professional Development.

Denisse Córdova Carrizales

Córdova Carrizales SM ’26 is a PhD student in nuclear science and engineering in the lab of Professor Mingda Li, where she completed her master’s work earlier this year. She is working on synthesizing and characterizing quantum materials with the goal of bridging fundamental science and industry to make our technology more energy-efficient and sustainable.

Córdova Carrizales, who is of Mexican descent, grew up in Houston, Texas, before attending Harvard University, where she graduated in 2023 with a BA in physics. At Harvard, she dove into experimental condensed-matter research. She also conducted research with the Princeton Plasma Physics Laboratory, Commonwealth Fusion Systems, and VEIR, spanning computational plasma physics and high-temperature superconducting magnet and cable engineering.

Her work includes coauthored papers in Nature Physics, Nature Materials, and Advanced Materials, as well as lead-author publications in Nano Letters and Physical Review Materials. In 2023, she received the LeRoy Apker Award from the American Physical Society.

Beyond research, Córdova Carrizales has advocated in Congress for nuclear disarmament and risk reduction and has written a piece on the nuclear stockpile stewardship program. At Harvard, she founded an organization to support first-generation college students studying physics. In a completely different arena, she performed as the lead in an off-Broadway show in New York.

Ria Das

Das ’21, MNG ’22 is a PhD student in the MIT Department of Electrical Engineering and Computer Science. She graduated from MIT in 2021 with dual BS degrees in mathematics and in electrical engineering and computer science, and received her master of engineering degree in 2022.

The daughter of Indian immigrant parents, Das grew up in Nashua, New Hampshire, where she struggled with issues of belonging and identity. These questions came to the forefront during her PhD studies at Stanford University. Das decided to step off the academic treadmill by taking a leave from her PhD to think more deeply about these topics.

During her leave, she traveled around the country before moving to New York to work at Basis Research Institute, an AI research nonprofit. As a research associate, Das developed an urban data team that worked with federal and municipal government agencies on issues of economic and housing equity, blending her interests in science and social problems. She then returned to MIT to complete her doctoral studies.

Today, Das works with Professor Joshua Tenenbaum in the Department of Brain and Cognitive Sciences to study how people undergo conceptual change to build more robust, accessible systems for automated (social) science and improved educational design. Looking ahead, she hopes to become a professor, collaborating closely with policy practitioners.

Ronak Desai

Desai is currently a student in the Harvard/MIT MD-PhD program, where his PhD focuses on chemistry. The son of immigrants from Gujarat, India, Desai was born in Tyler, Texas, and grew up in nearby Lindale. He earned his undergraduate degree at the University of Texas at Austin.

Desai spent a semester interning at the U.S. House of Representatives as a Bill Archer Fellow. He also completed biomedical research focused on studying and engineering novel polyketide synthases, aspiring to produce next-generation antibiotics by harnessing such newly engineered synthases.

Desai graduated with degrees in chemistry and biochemistry as a first-generation college student, Health Science Scholar, and Dean’s Honored Graduate, receiving nine scholarships throughout college. His research has resulted in publications in journals such as Cell and Nature Communications.

Desai hopes to combine his passions for medicine, science, and public policy in his career to advance the treatment of infectious diseases. He is conducting his doctoral research under Professor James J. Collins in the MIT Department of Biological Engineering and the Harvard-MIT Program in Health Sciences and Technology. Desai’s research centers on using artificial intelligence to discover and design novel antibiotics, an opportunity to advance treatments for patients worldwide.

Stacy Godfreey-Igwe

Godfreey-Igwe ’22 attended MIT as a QuestBridge and Gates Scholar, graduating in 2022 with a BS in mechanical engineering and a concentration in sustainable design. A Burchard Scholar, she also became the first student at MIT to complete a major in African and African diaspora studies. After graduating, she pursued a science policy fellowship in Washington and interned at the U.S. Department of Energy’s Building Technologies Office, where she worked to broaden adoption of heat pump technologies across diverse stakeholders.

Growing up in Richardson, Texas, as the daughter of Nigerian immigrants, Godfreey-Igwe developed an early awareness of structural inequality, particularly in how families like hers managed the burden of the severe Texas heat and high electricity costs. These experiences formed the basis of her lifelong journey seeking to address systemic inequities embedded in everyday systems.

Godfreey-Igwe is currently a doctoral student in the joint engineering and public policy/civil and environmental engineering program at Carnegie Mellon University (CMU), where she was selected for the inaugural CMU Rales Fellowship cohort. At CMU, she studies the impact of extreme heat on household energy use, particularly in vulnerable communities.

Beyond her research, Godfreey-Igwe organizes outreach and programming for local underrepresented students in STEM and participates in institutional efforts to expand access and belonging among graduate students. She aims to be a scholar and advocate whose work, drawing on her personal experiences, informs equitable energy solutions in a warming world.

Arya Rao

Rao is a student in the Harvard/MIT MD-PhD program. She completed her undergraduate degrees in biochemistry and computer science at Columbia University. Working with professors Pardis Sabeti (Harvard University) and Sangeeta Bhatia (MIT), Rao uses evolution as a lens for therapeutic design, developing artificial intelligence methods that read the genetic record and guide new intervention strategies.

Leveraging her dual training in medicine and computer science, Rao also leads the MESH AI Research Group at Mass General Brigham, where she develops simulation-based tools that test clinical AI systems in realistic educational settings before they reach patients.

Rao has been recognized for her work with a Forbes 30 Under 30 honor, the Massachusetts Medical Society Information Technology Award, the Harvard Presidential Public Service Fellowship, a Harvard Medical School Dean’s Innovation Award, and a Ladders to Cures Accelerator Award. She has published more than 30 manuscripts in publications including JAMA, Nature, and NEJM AI.

Growing up in rural northern Michigan, Rao was inspired by her parents, Konkani immigrants from India, who served as two of the area’s only physicians. She has always imagined a career that could leverage scientific innovation to improve patient care, especially for communities without access like her own. Going forward, she envisions a career as a surgeon-scientist that keeps her close to patients while taking on leadership that shapes how new technologies are evaluated, implemented, and made usable in the places that need them most.

Ananthan Sadagopan 

Sadagopan ’24 grew up in Westborough, Massachusetts, as the child of immigrants from Chennai, India. He participated in chemistry competitions, winning the You Be the Chemist Challenge in middle school and earning a gold medal at the International Chemistry Olympiad for the United States in high school. He attended MIT for college, graduating in three years in 2024 with a bachelor’s degree in chemistry and biology.

At MIT, Sadagopan worked with Srinivas Viswanathan on computational biology projects and with William Gibson, Matthew Meyerson, and Stuart Schreiber on chemical biology projects. He led projects characterizing somatic perturbations of X chromosome inactivation in cancer, developing a machine-learning tool for cancer dependency prediction, using small molecules to relocalize proteins in cells, and creating a generalizable strategy to drug the most mutated gene in cancer, TP53. Sadagopan’s work has been patented and published in journals such as Cell and Nature Chemical Biology.

Sadagopan was president of the chemistry undergraduate association and led the events committee for MIT Science Olympiad. He is currently pursuing a PhD in biological and biomedical science at Harvard University as a Hertz Fellow and Herchel Smith Fellow. He is interested in de-risking new therapeutic strategies and hopes that his work will inspire pharma companies to bring first-in-class therapies to patients.

Self-organizing “pencil beam” laser could help scientists design brain-targeted therapies

Mon, 04/27/2026 - 5:00am

MIT researchers discovered a paradoxical phenomenon in optical physics that could enable a new bioimaging method that’s faster and higher-resolution than existing technology.

They discovered that, under the right conditions, a chaotic mess of laser light can spontaneously self-organize into a highly focused “pencil beam.”

Using this self-organized pencil beam, the researchers captured 3D images of the human blood-brain barrier 25 times faster than the gold-standard method, while maintaining comparable resolution.

By showing individual cells absorbing drugs in real time, this technology could help scientists test whether new drugs for neurodegenerative diseases like Alzheimer’s or ALS reach their targets in the brain, with greater speed and resolution.

“The common belief in the field is that if you crank up the power in this type of laser, the light will inevitably become chaotic. But we proved that this is not the case. We followed the evidence, embraced the uncertainty, and found a way to let the light organize itself into a novel solution for bioimaging,” says Sixian You, assistant professor in the MIT Department of Electrical Engineering and Computer Science (EECS), a member of the Research Laboratory for Electronics, and senior author of a paper on this imaging technique.

She is joined on the paper by lead author Honghao Cao, an EECS graduate student; EECS graduate students Li-Yu Yu and Kunzan Liu; postdocs Sarah Spitz, Francesca Michela Pramotton, and Federico Presutti; Zhengyu Zhang PhD ’24; Subhash Kulkarni, an assistant professor at Harvard University and the Beth Israel Deaconess Medical Center; and Roger Kamm, the Cecil and Ida Green Distinguished Professor of Biological and Mechanical Engineering at MIT. The paper appears today in Nature Methods.

A surprising finding

The discovery began with an observation that initially puzzled the researchers.

The team previously developed a precise fiber shaper, a device that enables them to carefully tune the laser light shining through a multimode optical fiber. This type of optical fiber can carry a significant amount of power.

Cao was pushing the multimode fiber toward its limit to see how much power it could take.

Typically, the more power one pumps into the laser, the more disordered and scattered the beam of light becomes due to imperfections in the fiber.

But Cao observed that, as he increased the power almost to the point where it would burn the fiber, the light did the opposite of what was expected: It collapsed into a single, needle-sharp beam.

“Disorder is intrinsic to these fibers. The light engineering you typically need to do to overcome that disorder, especially at high power, is a longstanding hassle. But with this self-organization, you can get a stable, ultrafast pencil beam without the need for custom beam-shaping components,” You says.

To replicate this phenomenon, the researchers found they had to satisfy two simple, but precise conditions.

First, the laser must enter the fiber at a perfect, zero-degree angle. This is a more rigorous requirement than is usually used for these types of fibers. Second, the power must be dialed up until the light begins to interact with the glass of the fiber itself.

“At this critical power, the nonlinearity can counter the intrinsic disorder, creating a balance that transforms the input beam into a self-organized pencil beam,” Cao explains.

Typically, researchers conduct these experiments at much lower power levels for fear of destroying the fiber, in which case they wouldn’t see this self-organization. In addition, such precise on-axis alignment isn’t typically necessary since a multimode fiber can carry so much power.

But taken together, these two conditions generate a stable pencil beam without any complicated light-engineering methods.

“That is the charm of this method — you could do this with a normal, optical setup and without much domain expertise,” You says.

A better beam

When the researchers ran characterization experiments on this pencil beam, they found it was more stable and higher-resolution than many similar beams. Other beams often suffer from “sidelobes” — blurry halos of light that can distort images.

Their beam was more pristine and tightly focused.

Building on those experiments, the researchers demonstrated the use of this pencil beam in biomedical imaging of the human blood-brain barrier.

This barrier is a tightly packed layer of cells that protects the brain from toxins, but it also blocks many medicines. Scientists and clinicians often want to see how drugs flow inside the vasculature of the blood-brain barrier and whether they reach their targets within the brain.

But with standard optical settings, the best one can do is capture one 2D section of the vasculature at a time, and then repeat the process multiple times to generate a fuller image, You explains.

Using this new technique, the researchers created an ultrafast, high-precision pencil beam that enabled them to dynamically track how cells absorb proteins in real time.

“The pharmaceutical industry is especially interested in using human-based models to screen for drugs that effectively cross the barrier, as animal models often fail to predict what happens in humans. That this new method doesn’t require the cells to have a fluorescent tag is a game-changer. For the first time, we can now visualize the time-dependent entry of drugs into the brain and even identify the rate at which specific cell types internalize the drug,” says Kamm.

“Importantly, however, this approach is not limited to the blood-brain barrier but enables time-resolved tracking of diverse compounds and molecular targets across engineered tissue models, providing a powerful tool for biological engineering,” Spitz adds.

The team captured cellular-level 3D images that were higher quality than with other methods, and generated these images about 25 times faster.

“Usually, you have a tradeoff between image resolution and depth of focus — you can only probe so far at a time. But with our method, we can overcome this tradeoff by creating a pencil-beam with both high resolution and a large depth of focus,” You says.

In the future, the researchers want to better understand the fundamental physics of the pencil-beam and the mechanisms behind its self-organization. They also plan to apply the technique to other scenarios, such as imaging neurons in the brain, and work toward commercializing the technology.

“You’s group realized this beam that concentrates energy in time and space could be valuable for microscopy techniques that depend on the intensity of the light that illuminates the sample. They demonstrated just that and found advantages over ordinary laser beams for imaging. It will be scientifically interesting to fully understand the creation of the new pencil beams, which could find use in a variety of imaging applications,” says Frank Wise, the Samuel B. Eckert Professor of Engineering Emeritus at Cornell University, who was not involved with this work.

This work was funded, in part, by MIT startup funds, the National Science Foundation (NSF), the Silicon Valley Community Foundation, Diacomp Foundation, the Harvard Digestive Disease Core, a MathWorks Fellowship, and the Claude E. Shannon Award.

A faster way to estimate AI power consumption

Mon, 04/27/2026 - 12:00am

Due to the explosive growth of artificial intelligence, it is estimated that data centers will consume up to 12 percent of total U.S. electricity by 2028, according to the Lawrence Berkeley National Laboratory. Improving data center energy efficiency is one way scientists are striving to make AI more sustainable.

Toward that goal, researchers from MIT and the MIT-IBM Watson AI Lab developed a rapid prediction tool that tells data center operators how much power will be consumed by running a particular AI workload on a certain processor or AI accelerator chip.

Their method produces reliable power estimates in a few seconds, unlike traditional modeling techniques that can take hours or even days to yield results. Moreover, their prediction tool can be applied to a wide range of hardware configurations — even emerging designs that haven’t been deployed yet.

Data center operators could use these estimates to effectively allocate limited resources across multiple AI models and processors, improving energy efficiency. In addition, this tool could allow algorithm developers and model providers to assess potential energy consumption of a new model before they deploy it.

“The AI sustainability challenge is a pressing question we have to answer. Because our estimation method is fast, convenient, and provides direct feedback, we hope it makes algorithm developers and data center operators more likely to think about reducing energy consumption,” says Kyungmi Lee, an MIT postdoc and lead author of a paper on this technique.

She is joined on the paper by Zhiye Song, an electrical engineering and computer science (EECS) graduate student; Eun Kyung Lee and Xin Zhang, research managers at IBM Research and the MIT-IBM Watson AI Lab; Tamar Eilam, IBM Fellow, chief scientist of sustainable computing at IBM Research, and a member of the MIT-IBM Watson AI Lab; and senior author Anantha P. Chandrakasan, MIT provost, Vannevar Bush Professor of Electrical Engineering and Computer Science, and a member of the MIT-IBM Watson AI Lab. The research is being presented this week at the IEEE International Symposium on Performance Analysis of Systems and Software.

Expediting energy estimation

Inside a data center, thousands of powerful graphics processing units (GPUs) perform operations to train and deploy AI models. The power consumption of a particular GPU will vary based on its configuration and the workload it is handling.

Many traditional methods used to predict energy consumption involve breaking a workload into individual steps and emulating how each module inside the GPU is being utilized one step at a time. But AI workloads like model training and data preprocessing are extremely large and can take hours or even days to simulate in this manner.

“As an operator, if I want to compare different algorithms or configurations to find the most energy-efficient manner to proceed, if a single emulation is going to take days, that is going to become very impractical,” Lee says.

To speed up the prediction process, the MIT researchers sought to use less-detailed information that could be estimated faster. They found that AI workloads often have many repeatable patterns. They could use these patterns to generate the information needed for reliable but quick power estimation.

In many cases, algorithm developers write programs to run as efficiently as possible on a GPU. For instance, they use well-structured optimizations to distribute the work across parallel processing cores and move chunks of data around in the most efficient manner.

“These optimizations that software developers use create a regular structure, and that is what we are trying to leverage,” explains Lee.

The researchers developed a lightweight estimation model, called EnergAIzer, that captures the power usage pattern of a GPU from those optimizations.

An accurate assessment

But while their estimation was fast, the researchers found that it didn’t account for all energy costs. For instance, every time a GPU runs a program, there is a fixed energy cost for setting up and configuring that program. Then each time the GPU runs an operation on a chunk of data, an additional energy cost must be paid.

Due to fluctuations in the hardware or conflicts in accessing or moving data, a GPU might not be able to use all available bandwidth, slowing operations down and drawing more energy over time.

To include these additional costs and variances, the researchers gathered real measurements from GPUs to generate correction terms they applied to their estimation model.
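A minimal sketch of such a model: a fixed setup cost plus a per-operation cost, scaled by a single correction term fitted from real measurements. The function names and the one-parameter correction are hypothetical simplifications, not EnergAIzer's actual formulation.

```python
def estimate_energy(n_ops, e_setup, e_per_op, correction=1.0):
    # Two-term model: fixed setup/configuration cost plus a cost paid
    # per operation, scaled by a measurement-derived correction factor.
    return correction * (e_setup + n_ops * e_per_op)

def calibrate(measured, predicted):
    # Fit one multiplicative correction term so the model's totals
    # match real GPU measurements (absorbs bandwidth stalls, etc.).
    return sum(measured) / sum(predicted)
```

For example, if the uncorrected model predicts 100 J and 200 J for two workloads that actually measured 110 J and 220 J, the fitted correction is 1.1, and subsequent estimates are scaled accordingly.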

“This way, we can get a fast estimation that is also very accurate,” she says.

In the end, a user can provide their workload information, like the AI model they want to run and the number and length of user inputs to process, and EnergAIzer will output an energy consumption estimation in a matter of seconds.

The user can also change the GPU configuration or adjust the operating speed to see how such design choices impact the overall power consumption.

When the researchers tested EnergAIzer using real AI workload information from actual GPUs, it could estimate the power consumption with only about 8 percent error, which is comparable to traditional methods that can take hours to produce results.

Their method could also be used to predict the power consumption of future GPUs and emerging device configurations, as long as the hardware doesn’t change drastically in a short amount of time.

In the future, the researchers want to test EnergAIzer on the newest GPU configurations and scale the model up so it can be applied to many GPUs that are collaborating to run a workload.

“To really make an impact on sustainability, we need a tool that can provide a fast energy estimation solution across the stack, for hardware designers, data center operators, and algorithm developers, so they can all be more aware of power consumption. With this tool, we’ve taken one step toward that goal,” Lee says.

This research was funded, in part, by the MIT-IBM Watson AI Lab.
