MIT Latest News

MIT engineers develop a magnetic transistor for more energy-efficient electronics
Transistors, the building blocks of modern electronics, are typically made of silicon. Because it’s a semiconductor, this material can control the flow of electricity in a circuit. But silicon has fundamental physical limits that restrict how compact and energy-efficient a transistor can be.
MIT researchers have now replaced silicon with a magnetic semiconductor, creating a magnetic transistor that could enable smaller, faster, and more energy-efficient circuits. The material’s magnetism strongly influences its electronic behavior, leading to more efficient control of the flow of electricity.
The team used a novel magnetic material and an optimization process that reduces the material’s defects, which boosts the transistor’s performance.
The material’s unique magnetic properties also allow for transistors with built-in memory, which would simplify circuit design and unlock new applications for high-performance electronics.
“People have known about magnets for thousands of years, but there are very limited ways to incorporate magnetism into electronics. We have shown a new way to efficiently utilize magnetism that opens up a lot of possibilities for future applications and research,” says Chung-Tao Chou, an MIT graduate student in the departments of Electrical Engineering and Computer Science (EECS) and Physics, and co-lead author of a paper on this advance.
Chou is joined on the paper by co-lead author Eugene Park, a graduate student in the Department of Materials Science and Engineering (DMSE); Julian Klein, a DMSE research scientist; Josep Ingla-Aynes, a postdoc in the MIT Plasma Science and Fusion Center; Jagadeesh S. Moodera, a senior research scientist in the Department of Physics; and senior authors Frances Ross, TDK Professor in DMSE, and Luqiao Liu, an associate professor in EECS and a member of the Research Laboratory of Electronics; as well as others at the University of Chemistry and Technology in Prague. The paper appears today in Physical Review Letters.
Overcoming the limits
In an electronic device, silicon semiconductor transistors act like tiny light switches that turn a circuit on and off, or amplify weak signals in a communication system. They do this using a small input voltage.
But a fundamental physical limit of silicon semiconductors prevents a transistor from operating below a certain voltage, which hinders its energy efficiency.
To make more efficient electronics, researchers have spent decades working toward magnetic transistors that utilize electron spin to control the flow of electricity. Electron spin is a fundamental property that enables electrons to behave like tiny magnets.
So far, scientists have mostly been limited to using certain magnetic materials. These lack the favorable electronic properties of semiconductors, constraining device performance.
“In this work, we combine magnetism and semiconductor physics to realize useful spintronic devices,” Liu says.
The researchers replace the silicon in the surface layer of a transistor with chromium sulfur bromide, a two-dimensional material that acts as a magnetic semiconductor.
Due to the material’s structure, researchers can switch between two magnetic states very cleanly. This makes it ideal for use in a transistor that smoothly switches between “on” and “off.”
“One of the biggest challenges we faced was finding the right material. We tried many other materials that didn’t work,” Chou says.
They discovered that changing these magnetic states modifies the material’s electronic properties, enabling low-energy operation. And unlike many other 2D materials, chromium sulfur bromide remains stable in air.
To make a transistor, the researchers pattern electrodes onto a silicon substrate, then carefully align and transfer the 2D material on top. They use tape to pick up a tiny piece of material, only a few tens of nanometers thick, and place it onto the substrate.
“A lot of researchers will use solvents or glue to do the transfer, but transistors require a very clean surface. We eliminate all those risks by simplifying this step,” Chou says.
Leveraging magnetism
This lack of contamination enables their device to outperform existing magnetic transistors. Most others can only create a weak magnetic effect, changing the flow of current by a few percent or less. Their new transistor can switch or amplify the electric current by a factor of 10.
They use an external magnetic field to change the magnetic state of the material, switching the transistor using significantly less energy than would usually be required.
The material also allows them to control the magnetic states with electric current. This is important because engineers cannot apply magnetic fields to individual transistors in an electronic device. They need to control each one electrically.
The material’s magnetic properties could also enable transistors with built-in memory, simplifying the design of logic or memory circuits.
A typical memory device has a magnetic cell to store information and a transistor to read it out. Their method can combine both into one magnetic transistor.
“Now, not only are transistors turning on and off, they are also remembering information. And because we can switch the transistor with greater magnitude, the signal is much stronger so we can read out the information faster, and in a much more reliable way,” Liu says.
Building on this demonstration, the researchers plan to further study the use of electrical current to control the device. They are also working to make their method scalable so they can fabricate arrays of transistors.
This research was supported, in part, by the Semiconductor Research Corporation, the U.S. Defense Advanced Research Projects Agency (DARPA), the U.S. National Science Foundation (NSF), the U.S. Department of Energy, the U.S. Army Research Office, and the Czech Ministry of Education, Youth, and Sports. The work was partially carried out at the MIT.nano facilities.
School of Engineering welcomes new faculty in 2024-25
The MIT School of Engineering welcomes new faculty members across six of its academic units. The members of this new cohort, who recently started their roles at MIT, conduct research across a diverse range of disciplines.
“We are thrilled to welcome these accomplished scholars to the School of Engineering,” says Maria C. Yang, interim dean of engineering and William E. Leonhard (1940) Professor in the Department of Mechanical Engineering. “Each brings unique expertise across a wide range of fields and is advancing knowledge with real-world impact. They all share a deep commitment to research excellence and a passion for teaching and mentorship.”
Faculty with appointments in the Department of Electrical Engineering and Computer Science (EECS) and the Institute for Data, Systems, and Society (IDSS) report into both the School of Engineering and the MIT Stephen A. Schwarzman College of Computing.
The new engineering faculty include:
Masha Folk joined the Department of Aeronautics and Astronautics as an assistant professor in July 2024 and is currently the Charles Stark Draper Career Development Professor. Her research focuses on sustainable aerospace technology driven by a deep desire to accelerate carbon-neutral aviation. She previously worked as an aerodynamics specialist for Rolls-Royce. Folk received her BS in aerospace engineering from Ohio State University, her MS in aerospace engineering from Purdue University, and her PhD in energy, fluids, and turbomachinery from the University of Cambridge.
Sophia Henneberg joined the Department of Nuclear Science and Engineering (NSE) as an assistant professor in September. Her research focuses on developing, utilizing, and extending optimization tools to identify new stellarator designs, a promising path toward fusion energy. Previously, she was the principal investigator of EUROfusion’s Stellarator Optimization Theory, Simulation, Validation, and Verification group. Henneberg received a BS in physics at the Goethe-Universität, an MA in physics at the University of Wisconsin at Madison, and a PhD in physics at the University of York.
Omar Khattab joined the Department of Electrical Engineering and Computer Science as an assistant professor in July. He is also affiliated with the Computer Science and Artificial Intelligence Laboratory (CSAIL). His research develops new algorithms and abstractions for declarative AI programming and for composing retrieval and reasoning. Khattab previously worked as a research scientist at Databricks. He received a BS in computer science from Carnegie Mellon University and a PhD in computer science from Stanford University.
Tania Lopez-Silva joined the Department of Materials Science and Engineering as an assistant professor in July. Her research focuses on supramolecular hydrogels — soft materials made from self-assembling molecules, primarily peptides. Previously, she served as a postdoc at the National Cancer Institute. Lopez-Silva earned her BS in chemistry from Tecnológico de Monterrey and her MA and PhD in chemistry from Rice University.
Ethan Peterson ’13 joined the Department of Nuclear Science and Engineering as an assistant professor in July 2024. His research focuses on improving radiation transport and transmutation methods for the design of fusion technologies, as well as whole-facility modeling for fusion power plants. Previously, he worked as a research scientist at MIT’s Plasma Science and Fusion Center. Peterson received his BS in nuclear engineering and physics from MIT and his PhD in plasma physics from the University of Wisconsin at Madison.
Dean Price joined the Department of Nuclear Science and Engineering as the Atlantic Richfield Career Development Professor in Energy Studies and an assistant professor in September. His work focuses on the simulation and control of advanced reactors, with expertise in uncertainty quantification, scientific machine learning, and artificial intelligence for nuclear applications. Previously, he was the Russell L. Heath Distinguished Postdoctoral Fellow at Idaho National Laboratory. He earned his BS in nuclear engineering from the University of Illinois and his PhD in nuclear engineering from the University of Michigan.
Daniel Varon joined the Department of Aeronautics and Astronautics as the Boeing Assistant Professor, holding an MIT Schwarzman College of Computing shared position with IDSS, in July. Varon’s research focuses on using satellite observations of atmospheric composition to better understand human impacts on the environment and identify opportunities to reduce them. Previously, he held a visiting postdoctoral fellowship at the Princeton School of Public and International Affairs. Varon earned a BS in physics and a BA in English literature from McGill University, and an MS in applied mathematics and PhD in atmospheric chemistry from Harvard University.
Raphael Zufferey joined the Department of Mechanical Engineering as an assistant professor in January. He studies bioinspired methods and unconventional designs to achieve seamless aerial and aquatic locomotion for applications in ocean sciences. Zufferey previously worked as a Marie Curie postdoc at the École Polytechnique Fédérale de Lausanne (EPFL). He received his BA in micro-engineering and MS in robotics from EPFL and a PhD in robotics and aeronautics from Imperial College London.
The School of Engineering is also welcoming a number of faculty in the Department of EECS and the IDSS who hold shared positions with the MIT Schwarzman College of Computing and other departments. These include: Bailey Flanigan, Brian Hedden, Yunha Hwang, Benjamin Lindquist, Paris Smaragdis, Pu “Paul” Liang, Mariana Popescu, and Daniel Varon. For more information about these faculty members, read the Schwarzman College of Computing’s recent article.
Additionally, the School of Engineering has adopted the shared faculty search model to hire its first shared faculty member: Mark Rau. For more information, read the School of Humanities, Arts, and Social Sciences’ recent article.
MIT Schwarzman College of Computing welcomes 11 new faculty for 2025
The MIT Schwarzman College of Computing welcomes 11 new faculty members in core computing and shared positions to the MIT community. They bring varied backgrounds and expertise spanning sustainable design, satellite remote sensing, decision theory, and the development of new algorithms for declarative artificial intelligence programming, among other areas.
“I warmly welcome this talented group of new faculty members. Their work lies at the forefront of computing and its broader impact in the world,” says Dan Huttenlocher, dean of the MIT Schwarzman College of Computing and the Henry Ellis Warren Professor of Electrical Engineering and Computer Science.
College faculty include those with appointments in the Department of Electrical Engineering and Computer Science (EECS) or in the Institute for Data, Systems, and Society (IDSS), which report into both the MIT Schwarzman College of Computing and the School of Engineering. There are also several new faculty members in shared positions between the college and other MIT departments and sections, including Political Science, Linguistics and Philosophy, History, and Architecture.
“Thanks to another successful year of collaborative searches, we have hired six additional faculty in shared positions, bringing the total to 20,” says Huttenlocher.
The new shared faculty include:
Bailey Flanigan is an assistant professor in the Department of Political Science, holding an MIT Schwarzman College of Computing shared position with EECS. Her research combines tools from social choice theory, game theory, algorithms, statistics, and survey methods to advance political methodology and strengthen democratic participation. She is interested in sampling algorithms, opinion measurement, and the design of democratic innovations like deliberative minipublics and participatory budgeting. Flanigan was a postdoc at Harvard University’s Data Science Initiative, and she earned her PhD in computer science from Carnegie Mellon University.
Brian Hedden PhD ’12 is a professor in the Department of Linguistics and Philosophy, holding an MIT Schwarzman College of Computing shared position with EECS. His research focuses on how we ought to form beliefs and make decisions. His works span epistemology, decision theory, and ethics, including ethics of AI. He is the author of “Reasons without Persons: Rationality, Identity, and Time” (Oxford University Press, 2015) and articles on topics such as collective action problems, legal standards of proof, algorithmic fairness, and political polarization. Prior to joining MIT, he was a faculty member at the Australian National University and the University of Sydney, and a junior research fellow at Oxford University. He received his BA from Princeton University and his PhD from MIT, both in philosophy.
Yunha Hwang is an assistant professor in the Department of Biology, holding an MIT Schwarzman College of Computing shared position with EECS. She is also a member of the Laboratory for Information and Decision Systems. Her research interests span machine learning for sustainable biomanufacturing, microbial evolution, and open science. She serves as the co-founder and chief scientist at Tatta Bio, a scientific nonprofit dedicated to advancing genomic AI for biological discovery. She holds a BS in computer science from Stanford University and a PhD in biology from Harvard University.
Ben Lindquist is an assistant professor in the History Section, holding an MIT Schwarzman College of Computing shared position with EECS. Through a historical lens, his work observes the ways that computing has circulated with ideas of religion, emotion, and divergent thinking. His book, “The Feeling Machine” (University of Chicago Press, forthcoming), follows the history of synthetic speech to examine how emotion became a subject of computer science. He was a postdoc in the Science in Human Culture Program at Northwestern University and earned his PhD in history from Princeton University.
Mariana Popescu is an assistant professor in the Department of Architecture, holding an MIT Schwarzman College of Computing shared position with EECS. She is also a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL). A computational architect and structural designer, Popescu has a strong interest and experience in innovative ways of approaching the fabrication process and use of materials in construction. Her area of expertise is computational and parametric design, with a focus on digital fabrication and sustainable design. Popescu earned her doctorate at ETH Zurich.
Paris Smaragdis SM ’97, PhD ’01 is a professor in the Music and Theater Arts Section, holding an MIT Schwarzman College of Computing shared position with EECS. His research focus lies at the intersection of signal processing and machine learning, especially as it relates to sound and music. Prior to coming to MIT, he worked as a research scientist at Mitsubishi Electric Research Labs, a senior research scientist at Adobe Research, and an Amazon Scholar with Amazon’s AWS. He spent 15 years as a professor in the Computer Science Department at the University of Illinois Urbana-Champaign, where he spearheaded the design of the CS+Music program and served as an associate director of the School of Computer and Data Science. He holds a BMus from Berklee College of Music and earned his PhD in perceptual computing from MIT.
Daniel Varon is an assistant professor in the Department of Aeronautics and Astronautics, holding an MIT Schwarzman College of Computing shared position with IDSS. His work focuses on using satellite observations of atmospheric composition to better understand human impacts on the environment and identify opportunities to reduce them. An atmospheric scientist, Varon is particularly interested in greenhouse gasses, air pollution, and satellite remote sensing. He holds an MS in applied mathematics and a PhD in atmospheric chemistry, both from Harvard University.
In addition, the School of Engineering has adopted the shared faculty search model to hire its first shared faculty member:
Mark Rau is an assistant professor in the Music and Theater Arts Section, holding a School of Engineering shared position with EECS. He is involved in developing graduate programming focused on music technology. He has an interest in musical acoustics, vibration and acoustic measurement, audio signal processing, and physical modeling synthesis. His work focuses on musical instruments and creative audio effects. He holds an MA in music, science, and technology from Stanford, as well as a BS in physics and BMus in jazz from McGill University. He earned his PhD at Stanford’s Center for Computer Research in Music and Acoustics.
The new core faculty are:
Mitchell Gordon is an assistant professor in EECS. He is also a member of CSAIL. In his research, Gordon designs interactive systems and evaluation approaches that bridge principles of human-computer interaction with the realities of machine learning. His work has won awards at conferences in human-computer interaction and artificial intelligence, including a best paper award at CHI and an oral presentation at NeurIPS. Gordon received a BS from the University of Rochester, and an MS and PhD from Stanford University, all in computer science.
Omar Khattab is an assistant professor in EECS. He is also a member of CSAIL. His work focuses on natural language processing, information retrieval, and AI systems. His research includes developing new algorithms and abstractions for declarative AI programming and for composing retrieval and reasoning. He received his BS from Carnegie Mellon University and his PhD from Stanford University, both in computer science.
Rachit Nigam will join EECS as an assistant professor in January 2026. He will also be a member of CSAIL and the Microsystems Technology Laboratories. He works on programming languages and computer architecture to address the design, verification, and usability challenges of specialized hardware. He was previously a visiting scholar at MIT. Nigam earned an MS and PhD in computer science from Cornell University.
Lincoln Laboratory and Haystack Observatory team up to unveil hidden parts of the galaxy
For centuries, humans have sought to study the stars and celestial bodies, whether through observations made with the naked eye or with telescopes on the ground and in space that can view the universe across nearly the entire electromagnetic spectrum. Each view unlocks new information about the denizens of space — X-ray pulsars, gamma-ray bursts — but one is still missing: the low-frequency radio sky.
Researchers from MIT Lincoln Laboratory, the MIT Haystack Observatory, and Lowell Observatory are working on a NASA-funded concept study called the Great Observatory for Long Wavelengths, or GO-LoW, that outlines a method to view the universe at as-yet-unseen low frequencies using a constellation of thousands of small satellites. The wavelengths at these frequencies are 15 meters to several kilometers long, which means observing them clearly requires a very large telescope.
"GO-LoW will be a new kind of telescope, made up of many thousands of spacecraft that work together semi-autonomously, with limited input from Earth," says Mary Knapp, the principal investigator for GO-LoW at the MIT Haystack Observatory. "GO-LoW will allow humans to see the universe in a new light, opening up one of the very last frontiers in the electromagnetic spectrum."
The difficulty in viewing the low-frequency radio sky comes from Earth's ionosphere, a layer of the atmosphere that contains charged particles that prevent very low-frequency radio waves from passing through. Therefore, a space-based instrument is required to observe these wavelengths. Another challenge is that long-wavelength observations require correspondingly large telescopes, which would need to be many kilometers in length if built using traditional dish antenna designs. GO-LoW will use interferometry — a technique that combines signals from many spatially separated receivers that, when put together, will function as one large telescope — to obtain highly detailed data from exoplanets and other sources in space. A similar technique was used to make the first image of a black hole and, more recently, an image of the first known extrasolar radiation belts.
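To make the technique concrete, here is a minimal two-receiver sketch of the interferometric principle in Python. The numbers are illustrative assumptions, not GO-LoW parameters, and the baseline is kept short so the correlation phase does not wrap.

```python
import numpy as np

# Two-receiver interferometry sketch (illustrative only, not GO-LoW code).
# A plane wave from angle theta reaches spatially separated receivers at
# slightly different times; the phase of their cross-correlation encodes
# the source direction.

c = 3.0e8                      # speed of light, m/s
f = 10e6                       # 10 MHz tone, i.e., a 30 m wavelength
lam = c / f
d = 10.0                       # short baseline (m) so the phase does not wrap
theta_true = np.deg2rad(20.0)  # source direction from the baseline normal

t = np.arange(0, 1e-4, 1e-8)                   # time samples
delay = d * np.sin(theta_true) / c             # geometric delay between receivers
s1 = np.exp(1j * 2 * np.pi * f * t)            # receiver 1 (complex signal)
s2 = np.exp(1j * 2 * np.pi * f * (t - delay))  # receiver 2, delayed copy

phase = np.angle(np.mean(s1 * np.conj(s2)))    # = 2*pi*d*sin(theta)/lambda
theta_est = np.degrees(np.arcsin(phase * lam / (2 * np.pi * d)))
print(f"recovered source direction: {theta_est:.1f} degrees")  # ~20.0
```

Combining many such baselines across thousands of nodes is what lets a constellation behave as a single, very large telescope.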
Melodie Kao, a member of the team from Lowell Observatory, says the data could reveal details about an exoplanet's makeup and potential for life. "[The radio wave aurora around an exoplanet] carries important information, such as whether or not the planet has a magnetic field, how strong it is, how fast the planet is rotating, and even hints about what's inside," she says. "Studying exoplanet radio aurorae and the magnetic fields that they trace is an important piece of the habitability puzzle, and it's a key science goal for GO-LoW."
Several recent trends and technology developments will make GO-LoW possible in the near future, such as the declining cost of mass-produced small satellites, the rise of mega-constellations, and the return of large, high-capacity launch vehicles like NASA's Space Launch System. GO-LoW would be the first mega-constellation that uses interferometry for scientific purposes.
The GO-LoW constellation will be built through several successive launches, each containing thousands of spacecraft. Once they reach low-Earth orbit, the spacecraft will be refueled before journeying on to their final destination — an Earth-sun Lagrange point where they will then be deployed. Lagrange points are regions in space where the gravitational forces of two large celestial bodies (like the sun and Earth) are in equilibrium, such that a spacecraft requires minimal fuel to maintain its position relative to the two larger bodies. At this long distance from Earth (1 astronomical unit, or approximately 93 million miles), there will also be much less radio-frequency interference that would otherwise obscure GO-LoW’s sensitive measurements.
"GO-LoW will have a hierarchical architecture consisting of thousands of small listener nodes and a smaller number of larger communication and computation nodes (CCNs)," says Kat Kononov, a team member from Lincoln Laboratory's Applied Space Systems Group, who has been working with MIT Haystack staff since 2020, with Knapp serving as her mentor during graduate school. A node refers to an individual small satellite within the constellation. "The listener nodes are small, relatively simple 3U CubeSats — about the size of a loaf of bread — that collect data with their low-frequency antennas, store it in memory, and periodically send it to their communication and computation node via a radio link." In comparison, the CCNs are about the size of a mini-fridge.
Each CCN will keep track of the positions of the listener nodes in its neighborhood; collect and reduce the data from its respective listener nodes (around 100 of them); and then transmit that data back to Earth, where more intensive data processing can be performed.
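As a rough illustration of that hierarchy, the sketch below models listener nodes that buffer samples and a communication and computation node that collects and reduces them. All class and method names are assumptions for illustration, not the mission's actual software.

```python
from dataclasses import dataclass, field

@dataclass
class ListenerNode:
    node_id: int
    buffer: list = field(default_factory=list)  # antenna samples held in memory

    def record(self, sample: float) -> None:
        self.buffer.append(sample)

    def downlink(self) -> list:
        data, self.buffer = self.buffer, []  # periodic transfer over radio link
        return data

@dataclass
class CommComputeNode:
    listeners: list  # roughly 100 ListenerNodes per CCN

    def collect_and_reduce(self) -> float:
        # Toy "reduction" (an average) standing in for the real correlation
        # and compression performed before transmitting to Earth.
        samples = [s for node in self.listeners for s in node.downlink()]
        return sum(samples) / len(samples) if samples else 0.0

ccn = CommComputeNode([ListenerNode(i) for i in range(100)])
for node in ccn.listeners:
    node.record(1.0)
print(ccn.collect_and_reduce())  # 1.0
```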
At full strength, with approximately 100,000 listener nodes, the GO-LoW constellation should be able to see exoplanets with magnetic fields in the solar neighborhood — within 5 to 10 parsecs — many for the very first time.
The GO-LoW research team recently published the findings from Phase I of the study, which identified a type of advanced antenna called a vector sensor as the best type for this application. In 2024, Lincoln Laboratory designed a compact deployable version of the sensor suitable for use in space.
The team is now working on Phase II of the program, which is to build a multi-agent simulation of constellation operations.
"What we learned during the Phase I study is that the hard part for GO-LoW is not any specific technology … the hard part is the system: the system engineering and the autonomy to run the system," says Knapp. "So, how do we build this constellation such that it's a tractable problem? That's what we’re exploring in this next part of the study."
GO-LoW is one of many civil space programs at Lincoln Laboratory that aim to harness advanced technologies originally developed for national security to enable new space missions that support science and society. "By adapting these capabilities to serve new stakeholders, the laboratory helps open novel frontiers of discovery while building resilient, cost-effective systems that benefit the nation and the world," says Laura Kennedy, who is the deputy lead of Lincoln Laboratory's Civil Space Systems and Technology Office.
"Like landing on the moon in 1969, or launching Hubble in the 1990s, GO-LoW is envisioned to let us see something we've never seen before and generate scientific breakthroughs," says Kononov.
GO-LoW is a collaboration between Lincoln Laboratory, Haystack Observatory, and Lowell Observatory, as well as Lenny Paritsky from LeafLabs and Jacob Turner from Cornell University.
New software designs eco-friendly clothing that can reassemble into new items
It’s hard to keep up with the ever-changing trends of the fashion world. What’s “in” one minute is often out of style the next season, potentially causing you to re-evaluate your wardrobe.
Staying current with the latest fashion styles can be wasteful and expensive, though. Roughly 92 million tons of textile waste are produced annually, including the clothes we discard when they go out of style or no longer fit. But what if we could simply reassemble our clothes into whatever outfits we wanted, adapting to trends and the ways our bodies change?
A team of researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Adobe are attempting to bring eco-friendly, versatile garments to life. Their new “Refashion” software system breaks down fashion design into modules — essentially, smaller building blocks — by allowing users to draw, plan, and visualize each element of a clothing item. The tool turns fashion ideas into a blueprint that outlines how to assemble each component into reconfigurable clothing, such as a pair of pants that can be transformed into a dress.
With Refashion, users simply draw shapes and place them together to develop an outline for adaptable fashion pieces. It’s a visual diagram that shows how to cut garments, providing a straightforward way to design things like a shirt with an attachable hood for rainy days. One could also create a skirt that can then be reconfigured into a dress for a formal dinner, or maternity wear that fits during different stages of pregnancy.
“We wanted to create garments that consider reuse from the start,” says Rebecca Lin, MIT Department of Electrical Engineering and Computer Science (EECS) PhD student, CSAIL and Media Lab researcher, and lead author on a paper presenting the project. “Most clothes you buy today are static, and are discarded when you no longer want them. Refashion instead makes the most of our garments by helping us design items that can be easily resized, repaired, or restyled into different outfits.”
Modules à la mode
The researchers conducted a preliminary user study where both designers and novices explored Refashion and were able to create garment prototypes. Participants assembled pieces such as an asymmetric top that could be extended into a jumpsuit, or remade into a formal dress, often within 30 minutes. These results suggest that Refashion has the potential to make prototyping garments more approachable and efficient. But what features might contribute to this ease of use?
Its interface first presents a simple grid in its “Pattern Editor” mode, where users can connect dots to outline the boundaries of a clothing item. Users essentially draw rectangular panels and specify how the different modules will connect to each other.
Users can customize the shape of each component, create a straight design (which might be useful for less form-fitting garments, like chinos), or tinker with one of Refashion’s templates. A user can edit pre-designed blueprints for things like a T-shirt, fitted blouse, or trousers.
Another, more creative route is to change the design of individual modules. For starters, one can choose the “pleat” feature to fold a garment over itself, similar to an accordion. It’s a useful way to design something like a maxi dress. The “gather” option adds an artsy flourish, crumpling the fabric together to create puffy skirts or sleeves. A user might even go with the “dart” module, which removes a triangular piece from the fabric, allowing them to shape a garment at the waist (perhaps for a pencil skirt) or tailor it to the upper body (fitted shirts, for instance).
While it might seem that each of these components needs to be sewn together, Refashion enables users to connect garments through more flexible, efficient means. Edges can be seamed together via double-sided connectors such as metal snaps (like the buttons used to close a denim jacket) or Velcro dots. A user could also fasten them with pins called brads, which have a pointed side that sticks through a hole and splits into two “legs” to attach to another surface; it’s a handy way to secure, say, a picture on a poster board. Both connective methods make it easy to reconfigure modules if they are damaged or a “fit check” calls for a new look.
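A minimal sketch of how such modular pieces might be represented in software, based only on the features described here; the names and structure are hypothetical, not Refashion's actual interface.

```python
from dataclasses import dataclass
from enum import Enum

class Treatment(Enum):
    PLAIN = "plain"
    PLEAT = "pleat"    # accordion-like fold, e.g., for a maxi dress
    GATHER = "gather"  # crumpled fabric for puffy skirts or sleeves
    DART = "dart"      # triangular cutout that shapes waist or bust

class Connector(Enum):
    SNAP = "metal snap"
    VELCRO = "velcro dot"
    BRAD = "brad pin"

@dataclass
class Panel:
    name: str
    width_cm: float
    height_cm: float
    treatment: Treatment = Treatment.PLAIN

@dataclass
class Seam:
    a: Panel
    b: Panel
    connector: Connector  # reconfigurable, unlike a sewn seam

# Two skirt panels snapped together; undo the snaps to restyle into a dress.
front = Panel("front", 60.0, 55.0, Treatment.PLEAT)
back = Panel("back", 60.0, 55.0)
waist_seam = Seam(front, back, Connector.SNAP)
```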
As a user designs their clothing piece, the system automatically creates a simplified diagram of how it can be assembled. The pattern is divided into numbered blocks, which are dragged onto different parts of a 2D mannequin to specify the position of each component. The user can then simulate how their sustainable clothing will look on 3D models of a range of body types (one can also upload a model).
Finally, a digital blueprint for sustainable clothing can be extended, shortened, or combined with other pieces. Thanks to Refashion, a new piece could be emblematic of a potential shift in fashion: Instead of buying new clothes every time we want a new outfit, we can simply reconfigure existing ones. Yesterday’s scarf could be today’s hat, and today’s T-shirt could be tomorrow’s jacket.
“Rebecca’s work is at an exciting intersection between computation and art, craft, and design,” says MIT EECS professor and CSAIL principal investigator Erik Demaine, who advises Lin. “I’m excited to see how Refashion can make custom fashion design accessible to the wearer, while also making clothes more reusable and sustainable.”
Constant change
While Refashion presents a greener vision for the future of fashion, the researchers note that they’re actively improving the system. They intend to revise the interface to support more durable items, stepping beyond standard prototyping fabrics. Refashion may soon support other modules, like curved panels, as well. The CSAIL-Adobe team may also evaluate whether their system can use as few materials as possible to minimize waste, and whether it can help “remix” old store-bought outfits.
Lin also plans to develop new computational tools that help designers create unique, personalized outfits using colors and textures. She’s exploring how to design clothing by patchwork — essentially, cutting out small pieces from materials like decorative fabrics, recycled denim, and crochet blocks and assembling them into a larger item.
“This is a great example of how computer-aided design can also be key in supporting more sustainable practices in the fashion industry,” says Adrien Bousseau, a senior researcher at Inria Centre at Université Côte d'Azur who wasn’t involved in the paper. “By promoting garment alteration from the ground up, they developed a novel design interface and accompanying optimization algorithm that helps designers create garments that can undergo a longer lifetime through reconfiguration. While sustainability often imposes additional constraints on industrial production, I am confident that research like the one by Lin and her colleagues will empower designers in innovating despite these constraints.”
Lin wrote the paper with Adobe Research scientists Michal Lukáč and Mackenzie Leake, who is the paper’s senior author and a former CSAIL postdoc. Their work was supported, in part, by the MIT Morningside Academy for Design, an MIT MAKE Design-2-Making Mini-Grant, and the Natural Sciences and Engineering Research Council of Canada. The researchers presented their work recently at the ACM Symposium on User Interface Software and Technology.
In a surprising discovery, scientists find tiny loops in the genomes of dividing cells
Before cells can divide, they first need to replicate all of their chromosomes, so that each of the daughter cells can receive a full set of genetic material. Until now, scientists had believed that as division occurs, the genome loses the distinctive 3D internal structure that it typically forms.
Once division is complete, it was thought, the genome gradually regains that complex, globular structure, which plays an essential role in controlling which genes are turned on in a given cell.
However, a new study from MIT shows that in fact, this picture is not fully accurate. Using a higher-resolution genome mapping technique, the research team discovered that small 3D loops connecting regulatory elements and genes persist in the genome during cell division, or mitosis.
“This study really helps to clarify how we should think about mitosis. In the past, mitosis was thought of as a blank slate, with no transcription and no structure related to gene activity. And we now know that that’s not quite the case,” says Anders Sejr Hansen, an associate professor of biological engineering at MIT. “What we see is that there’s always structure. It never goes away.”
The researchers also discovered that these regulatory loops appear to strengthen when chromosomes become more compact in preparation for cell division. This compaction brings genetic regulatory elements closer together and encourages them to stick together. This may help cells “remember” interactions present in one cell cycle and carry them into the next.
“The findings help to bridge the structure of the genome to its function in managing how genes are turned on and off, which has been an outstanding challenge in the field for decades,” says Viraat Goel PhD ’25, the lead author of the study.
Hansen and Edward Banigan, a research scientist in MIT’s Institute for Medical Engineering and Science, are the senior authors of the paper, which appears today in Nature Structural and Molecular Biology. Leonid Mirny, a professor in MIT’s Institute for Medical Engineering and Science and the Department of Physics, and Gerd Blobel, a professor at the Perelman School of Medicine at the University of Pennsylvania, are also authors of the study.
A surprising finding
Over the past 20 years, scientists have discovered that inside the cell nucleus, DNA organizes itself into 3D loops. While many loops enable interactions between genes and regulatory regions that may be millions of base pairs away from each other, others are formed during cell division to compact chromosomes. Much of the mapping of these 3D structures has been done using a technique called Hi-C, originally developed by a team that included MIT researchers and was led by Job Dekker at the University of Massachusetts Chan Medical School. To perform Hi-C, researchers use enzymes to chop the genome into many small pieces and biochemically link pieces that are near each other in 3D space within the cell’s nucleus. They then determine the identities of the interacting pieces by sequencing them.
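Conceptually, the computational end of Hi-C reduces to counting: each sequenced ligation product names two genomic fragments that were close together in 3D, and tallying those pairs per genomic bin produces a contact-frequency map. Below is a toy sketch with made-up pairs, omitting the alignment, filtering, and normalization of a real pipeline.

```python
import numpy as np

# Each (i, j) pair stands for one sequenced ligation product linking
# genomic bins i and j that were close in 3D space in the nucleus.
n_bins = 5
ligated_pairs = [(0, 1), (0, 1), (1, 3), (2, 4), (2, 4), (0, 4)]

contacts = np.zeros((n_bins, n_bins), dtype=int)
for i, j in ligated_pairs:
    contacts[i, j] += 1
    contacts[j, i] += 1  # proximity is mutual, so the map is symmetric

print(contacts)  # high counts mark bins that often touch, e.g., loops
```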
However, that technique doesn’t have high enough resolution to pick out all specific interactions between genes and regulatory elements such as enhancers. Enhancers are short sequences of DNA that can help to activate the transcription of a gene by binding to the gene’s promoter — the site where transcription begins.
In 2023, Hansen and others developed a new technique that allows them to analyze 3D genome structures with 100 to 1,000 times greater resolution than was previously possible. This technique, known as Region-Capture Micro-C (RC-MC), uses a different enzyme that cuts the genome into small fragments of similar size. It also focuses on a smaller segment of the genome, allowing for high-resolution 3D mapping of a targeted genome region.
Using this technique, the researchers were able to identify a new kind of genome structure that hadn’t been seen before, which they called “microcompartments.” These are tiny, highly connected loops that form when enhancers and promoters located near each other stick together.
In that paper, experiments revealed that these loops were not formed by the same mechanisms that form other genome structures, but the researchers were unable to determine exactly how they do form. In hopes of answering that question, the team set out to study cells as they undergo cell division. During mitosis, chromosomes become much more compact, so that they can be duplicated, sorted, and divvied up between two daughter cells. As this happens, larger genome structures called A/B compartments and topologically associating domains (TADs) disappear completely.
The researchers believed that the microcompartments they had discovered would also disappear during mitosis. By tracking cells through the entire cell division process, they hoped to learn how the microcompartments appear after mitosis is completed.
“During mitosis, it has been thought that almost all gene transcription is shut off. And before our paper, it was also thought that all 3D structure related to gene regulation was lost and replaced by compaction. It’s a complete reset every cell cycle,” Hansen says.
However, to their surprise, the researchers found that microcompartments could still be seen during mitosis, and in fact they become more prominent as the cell goes through cell division.
“We went into this study thinking, well, the one thing we know for sure is that there’s no regulatory structure in mitosis, and then we accidentally found structure in mitosis,” Hansen says.
Using their technique, the researchers also confirmed that larger structures such as A/B compartments and TADs do disappear during mitosis, as had been seen before.
“This study leverages the unprecedented genomic resolution of the RC-MC assay to reveal new and surprising aspects of mitotic chromatin organization, which we have overlooked in the past using traditional 3C-based assays. The authors reveal that, contrary to the well-described dramatic loss of TADs and compartmentalization during mitosis, fine-scale “microcompartments” — nested interactions between active regulatory elements — are maintained or even transiently strengthened,” says Effie Apostolou, an associate professor of molecular biology in medicine at Weill Cornell Medicine, who was not involved in the study.
A spike in transcription
The findings may offer an explanation for a spike in gene transcription that usually occurs near the end of mitosis, the researchers say. Since the 1960s, it had been thought that transcription ceased completely during mitosis, but in 2016 and 2017, a few studies showed that cells undergo a brief spike of transcription, which is quickly suppressed until the cell finishes dividing.
In their new study, the MIT team found that during mitosis, microcompartments are more likely to be found near the genes that spike during cell division. They also discovered that these loops appear to form as a result of the genome compaction that occurs during mitosis. This compaction brings enhancers and promoters closer together, allowing them to stick together to form microcompartments.
Once formed, the loops that constitute microcompartments may activate gene transcription somewhat by accident, which is then shut off by the cell. When the cell finishes dividing, entering a state known as G1, many of these small loops become weaker or disappear.
“It almost seems like this transcriptional spiking in mitosis is an undesirable accident that arises from generating a uniquely favorable environment for microcompartments to form during mitosis,” Hansen says. “Then, the cell quickly prunes and filters many of those loops out when it enters G1.”
Because chromosome compaction can also be influenced by a cell’s size and shape, the researchers are now exploring how variations in those features affect the structure of the genome and, in turn, gene regulation.
“We are thinking about some natural biological settings where cells change shape and size, and whether we can perhaps explain some 3D genome changes that previously lack an explanation,” Hansen says. “Another key question is how does the cell then pick what are the microcompartments to keep and what are the microcompartments to remove when you enter G1, to ensure fidelity of gene expression?”
The research was funded in part by the National Institutes of Health, a National Science Foundation CAREER Award, the Gene Regulation Observatory of the Broad Institute, a Pew-Steward Scholar Award for Cancer Research, the Mathers Foundation, the MIT Westaway Fund, the Bridge Project of the Koch Institute and Dana-Farber/Harvard Cancer Center, and the Koch Institute Support (core) Grant from the National Cancer Institute.
Book reviews technologies aiming to remove carbon from the atmosphere
Two leading experts in the field of carbon capture and sequestration (CCS) — Howard J. Herzog, a senior research engineer in the MIT Energy Initiative, and Niall Mac Dowell, a professor in energy systems engineering at Imperial College London — explore methods for removing carbon dioxide already in the atmosphere in their new book, “Carbon Removal.” Published in October, the book is part of the Essential Knowledge series from the MIT Press, which consists of volumes “synthesizing specialized subject matter for nonspecialists” and includes Herzog’s 2018 book, “Carbon Capture.”
Burning fossil fuels, as well as other human activities, cause the release of carbon dioxide (CO2) into the atmosphere, where it acts like a blanket that warms the Earth, resulting in climate change. Much attention has focused on mitigation technologies that reduce emissions, but in their book, Herzog and Mac Dowell have turned their attention to “carbon dioxide removal” (CDR), an approach that removes carbon already present in the atmosphere.
In this new volume, the authors explain how CO2 naturally moves into and out of the atmosphere and present a brief history of carbon removal as a concept for dealing with climate change. They also describe the full range of “pathways” that have been proposed for removing CO2 from the atmosphere. Those pathways include engineered systems designed for “direct air capture” (DAC), as well as various “nature-based” approaches that call for planting trees or taking steps to enhance removal by biomass or the oceans. The book offers easily accessible explanations of the fundamental science and engineering behind each approach.
The authors compare the “quality” of the different pathways based on the following metrics:
Accounting. For public acceptance of any carbon-removal strategy, the authors note, the developers need to get the accounting right — and that’s not always easy. “If you’re going to spend money to get CO2 out of the atmosphere, you want to get paid for doing it,” notes Herzog. It can be tricky to measure how much you have removed, because there’s a lot of CO2 going in and out of the atmosphere all the time. Also, if your approach involves, say, burning fossil fuels, you must subtract the amount of CO2 that’s emitted from the total amount you claim to have removed. Then there’s the timing of the removal. With a DAC device, the removal happens right now, and the removed CO2 can be measured. “But if I plant a tree, it’s going to remove CO2 for decades. Is that equivalent to removing it right now?” Herzog queries. How to take that factor into account hasn’t yet been resolved. (The basic subtraction rule is illustrated in the short sketch after this list.)
Permanence. Different approaches keep the CO2 out of the atmosphere for different durations of time. How long is long enough? As the authors explain, this is one of the biggest issues, especially with nature-based solutions, where events such as wildfires or pestilence or land-use changes can release the stored CO2 back into the atmosphere. How do we deal with that?
Cost. Cost is another key factor. Using a DAC device to remove CO2 costs far more than planting trees, but it yields immediate removal of a measurable amount of CO2 that can then be locked away forever. How does one monetize that trade-off?
Additionality. “You’re doing this project, but would what you’re doing have been done anyway?” asks Herzog. “Is your effort additional to business as usual?” This question comes into play with many of the nature-based approaches involving trees, soils, and so on.
Permitting and governance. These issues are especially important — and complicated — with approaches that involve doing things in the ocean. In addition, Herzog points out that some CCS projects could also achieve carbon removal, but they would have a hard time getting permits to build the pipelines and other needed infrastructure.
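As a toy illustration of the accounting arithmetic mentioned above — the numbers are invented, and real accounting must also grapple with timing and permanence:

```python
def net_removal_tonnes(gross_captured: float, process_emissions: float) -> float:
    """Creditable CO2 = what was removed minus what was emitted removing it."""
    return gross_captured - process_emissions

# A DAC plant captures 100 t of CO2 but emits 20 t running its equipment:
print(net_removal_tonnes(100.0, 20.0))  # 80.0 tonnes may be claimed
```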
The authors conclude that none of the CDR strategies now being proposed is a clear winner on all the metrics. However, they stress that carbon removal has the potential to play an important role in meeting our climate change goals — not by replacing our emissions-reduction efforts, but rather by supplementing them. However, as Herzog and Mac Dowell make clear in their book, many challenges must be addressed to move CDR from today’s speculation to deployment at scale, and the book supports the wider discussion about how to move forward. Indeed, the authors have fulfilled their stated goal: “to provide an objective analysis of the opportunities and challenges for CDR and to separate myth from reality.”
Breaking the old model of education with MIT Open Learning
At an age when many kids prefer to play games on their phones, 11-year-old Vivan Mirchandani wanted to explore physics videos. Little did he know that MIT Open Learning’s free online resources would change the course of his life.
Now, at 16, Mirchandani is well on his way to a career as a physics scholar — all because he forged his own unconventional educational journey.
Nontraditional education has granted Mirchandani the freedom to pursue topics he’s personally interested in. This year, he wrote a paper on cosmology that proposes a new framework for understanding Einstein’s general theory of relativity. Other projects include expanding on fluid dynamics laws for cats, training an AI model to resemble the consciousness of his late grandmother, and creating his own digital twin. That’s in addition to his regular studies, regional science fairs, Model United Nations delegation, and a TEDEd Talk.
Mirchandani started down this path between the ages of 10 and 12, when he decided to read books and find online content about physics during the early Covid-19 lockdown in India. He was shocked to find that MIT Open Learning offers free course videos, lecture notes, exams, and other resources from the Institute on sites like MIT OpenCourseWare and the newly launched MIT Learn.
“My first course was 8.01 (Classical Mechanics), and it completely changed how I saw physics,” Mirchandani says. “Physics sounded like elegance. It’s the closest we’ve ever come to have a theory of everything.”
Experiencing “real learning”
Mirchandani discovered MIT Open Learning through OpenCourseWare, which offers free, online, open educational resources from MIT undergraduate and graduate courses. He says MIT Open Learning’s “academically rigorous” content prepares learners to ask questions and think like a scientist.
“Instead of rote memorization, I finally experienced real learning,” Mirchandani says. “OpenCourseWare was a holy grail. Without it, I would still be stuck on the basic concepts.”
Wanting to follow in the footsteps of physicists like Sir Isaac Newton, Albert Einstein, and Stephen Hawking, Mirchandani decided at age 12 he would sacrifice his grade point average to pursue a nontraditional educational path that gave him hands-on experience in science.
“The education system doesn’t prepare you for actual scientific research, it prepares you for exams,” Mirchandani says. “What draws me to MIT Open Learning and OpenCourseWare is it breaks the old model of education. It’s not about sitting in a lecture hall, it’s about access and experimentation.”
With guidance from his physics teacher, Mirchandani built his own curriculum using educational materials on MIT OpenCourseWare to progress from classical physics to computer science to quantum physics. He has completed more than 27 online MIT courses to date.
“The best part of OpenCourseWare is you get to study from the greatest institution in the world, and you don’t have to pay for it,” he says.
Innovating in the real world
6.0001 (Introduction to Computer Science and Programming Using Python) and slides from 2.06 (Fluid Dynamics) gave Mirchandani the foundation to help with the family business, Dynamech Engineers, which sells machinery for commercial snack production. Some of the recent innovations he has assisted with include a zero-oil frying technology that cuts 300 calories per kilogram, a gas-based heat exchange system, and a simplified, singular machine combining the processes of two separate machines. Using the modeling techniques he learned through MIT OpenCourseWare, Mirchandani designed how these products would work without losing efficiency.
But when you ask Mirchandani which achievement he is most proud of, he’ll say it’s being one of 35 students accepted for the inaugural RSI-India cohort, an academic program for high school students modeled after the Research Science Institute program co-sponsored by MIT and the Center for Excellence in Education. Competing against other Indian students who had perfect scores on their board exams and SATs, he didn’t expect to get in, but the program valued the practical research experience he was able to pursue thanks to the knowledge he gained from his external studies.
“None of it would have happened without MIT OpenCourseWare,” he says. “It’s basically letting curiosity get the better of us. If everybody does that, we’d have a better scientific community.”
Method teaches generative AI models to locate personalized objects
Say a person takes their French Bulldog, Bowser, to the dog park. Identifying Bowser as he plays among the other canines is easy for the dog-owner to do while onsite.
But if someone wants to use a generative AI model like GPT-5 to monitor their pet while they are at work, the model could fail at this basic task. Vision-language models like GPT-5 often excel at recognizing general objects, like a dog, but they perform poorly at locating personalized objects, like Bowser the French Bulldog.
To address this shortcoming, researchers from MIT and the MIT-IBM Watson AI Lab have introduced a new training method that teaches vision-language models to localize personalized objects in a scene.
Their method uses carefully prepared video-tracking data in which the same object is tracked across multiple frames. They designed the dataset so the model must focus on contextual clues to identify the personalized object, rather than relying on knowledge it previously memorized.
When given a few example images showing a personalized object, like someone’s pet, the retrained model is better able to identify the location of that same pet in a new image.
Models retrained with their method outperformed state-of-the-art systems at this task. Importantly, their technique leaves the rest of the model’s general abilities intact.
This new approach could help future AI systems track specific objects across time, like a child’s backpack, or localize objects of interest, such as a species of animal in ecological monitoring. It could also aid in the development of AI-driven assistive technologies that help visually impaired users find certain items in a room.
“Ultimately, we want these models to be able to learn from context, just like humans do. If a model can do this well, rather than retraining it for each new task, we could just provide a few examples and it would infer how to perform the task from that context. This is a very powerful ability,” says Jehanzeb Mirza, an MIT postdoc and senior author of a paper on this technique.
Mirza is joined on the paper by co-lead authors Sivan Doveh, a graduate student at the Weizmann Institute of Science, and Nimrod Shabtay, a researcher at IBM Research; James Glass, a senior research scientist and the head of the Spoken Language Systems Group in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL); and others. The work will be presented at the International Conference on Computer Vision.
An unexpected shortcoming
Researchers have found that large language models (LLMs) can excel at learning from context. Fed a few examples of a task, like addition problems, an LLM can learn to answer new addition problems based on the context that has been provided.
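A minimal illustration of such a few-shot prompt (the text is invented for this example):

```python
# The examples alone convey the task; no model weights are updated.
prompt = """Q: 12 + 7 = ?  A: 19
Q: 33 + 9 = ?  A: 42
Q: 25 + 18 = ?  A:"""
# A capable LLM completes the pattern from context alone: "43"
```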
A vision-language model (VLM) is essentially an LLM with a visual component connected to it, so the MIT researchers thought it would inherit the LLM’s in-context learning capabilities. But this is not the case.
“The research community has not been able to find a black-and-white answer to this particular problem yet. The bottleneck could arise from the fact that some visual information is lost in the process of merging the two components together, but we just don’t know,” Mirza says.
The researchers set out to improve VLMs’ ability to perform in-context localization, which involves finding a specific object in a new image. They focused on the data used to retrain existing VLMs for a new task, a process called fine-tuning.
Typical fine-tuning data are gathered from random sources and depict collections of everyday objects. One image might contain cars parked on a street, while another includes a bouquet of flowers.
“There is no real coherence in these data, so the model never learns to recognize the same object in multiple images,” he says.
To fix this problem, the researchers developed a new dataset by curating samples from existing video-tracking data. These data are video clips showing the same object moving through a scene, like a tiger walking across a grassland.
They cut frames from these videos and structured the dataset so each input would consist of multiple images showing the same object in different contexts, with example questions and answers about its location.
“By using multiple images of the same object in different contexts, we encourage the model to consistently localize that object of interest by focusing on the context,” Mirza explains.
Forcing the focus
But the researchers found that VLMs tend to cheat. Instead of answering based on context clues, they will identify the object using knowledge gained during pretraining.
For instance, since the model already learned that an image of a tiger and the label “tiger” are correlated, it could identify the tiger crossing the grassland based on this pretrained knowledge, instead of inferring from context.
To solve this problem, the researchers used pseudo-names rather than actual object category names in the dataset. In this case, they changed the name of the tiger to “Charlie.”
“It took us a while to figure out how to prevent the model from cheating. But we changed the game for the model. The model does not know that ‘Charlie’ can be a tiger, so it is forced to look at the context,” he says.
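As a rough sketch of the idea, a single fine-tuning sample could pair several tracked frames of one object, relabeled with a pseudo-name, with a location question about a held-out frame. The structure below is a hypothetical illustration, not the paper’s actual data format:

```python
# Hypothetical sketch of assembling one fine-tuning sample from a video track.
# Field names and the prompt format are illustrative, not from the paper.
import random

PSEUDO_NAMES = ["Charlie", "Max", "Luna", "Bella"]

def build_sample(frames, boxes):
    """frames: image paths tracking one object; boxes: (x, y, w, h) per frame."""
    name = random.choice(PSEUDO_NAMES)  # mask the category so the model can't lean on priors
    context = [
        {"image": f, "text": f"This is {name}. {name} is located at {b}."}
        for f, b in zip(frames[:-1], boxes[:-1])
    ]
    query = {"image": frames[-1], "text": f"Where is {name} in this image?"}
    target = f"{name} is located at {boxes[-1]}."
    return {"context": context, "query": query, "target": target}

# Frames cut from one tracking clip; the word "tiger" never appears in the text
sample = build_sample(
    ["tiger_f001.jpg", "tiger_f040.jpg", "tiger_f092.jpg"],
    [(120, 88, 200, 140), (300, 95, 210, 150), (40, 60, 190, 135)],
)
```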
The researchers also faced challenges in finding the best way to prepare the data. If the frames are sampled too close together, the background does not change enough to provide data diversity.
In the end, fine-tuning VLMs with this new dataset improved accuracy at personalized localization by about 12 percent on average. When they included the dataset with pseudo-names, the performance gains reached 21 percent.
As model size increases, their technique leads to greater performance gains.
In the future, the researchers want to study possible reasons VLMs don’t inherit in-context learning capabilities from their base LLMs. In addition, they plan to explore additional mechanisms to improve the performance of a VLM without the need to retrain it with new data.
“This work reframes few-shot personalized object localization — adapting on the fly to the same object across new scenes — as an instruction-tuning problem and uses video-tracking sequences to teach VLMs to localize based on visual context rather than class priors. It also introduces the first benchmark for this setting with solid gains across open and proprietary VLMs. Given the immense significance of quick, instance-specific grounding — often without finetuning — for users of real-world workflows (such as robotics, augmented reality assistants, creative tools, etc.), the practical, data-centric recipe offered by this work can help enhance the widespread adoption of vision-language foundation models,” says Saurav Jha, a postdoc at the Mila-Quebec Artificial Intelligence Institute, who was not involved with this work.
Additional co-authors are Wei Lin, a research associate at Johannes Kepler University; Eli Schwartz, a research scientist at IBM Research; Hilde Kuehne, professor of computer science at Tuebingen AI Center and an affiliated professor at the MIT-IBM Watson AI Lab; Raja Giryes, an associate professor at Tel Aviv University; Rogerio Feris, a principal scientist and manager at the MIT-IBM Watson AI Lab; Leonid Karlinsky, a principal research scientist at IBM Research; Assaf Arbelle, a senior research scientist at IBM Research; and Shimon Ullman, the Samy and Ruth Cohn Professor of Computer Science at the Weizmann Institute of Science.
This research was funded, in part, by the MIT-IBM Watson AI Lab.
MIT-Toyota collaboration powers driver assistance in millions of vehicles
A decade-plus collaboration between MIT’s AgeLab and the Toyota Motor Corporation is recognized as a key contributor to advancements in automotive safety and human-machine interaction. Through the AgeLab at the MIT Center for Transportation and Logistics (CTL), researchers have collected and analyzed vast real-world driving datasets that have helped inform Toyota’s vehicle design and safety systems.
Toyota recently marked the completion of its 100th project through the Collaborative Safety Research Center (CSRC), celebrating MIT’s role in shaping technologies that enhance driver-assistance features and continue to forge the path for automated mobility. A key foundation for the 100th project is CSRC’s ongoing support for MIT CTL’s Advanced Vehicle Technology (AVT) Consortium.
Real-world data, real-world impact
“AVT was conceptualized over a decade ago as an academic-industry partnership to promote shared investment in real-world, naturalistic data collection, analysis, and collaboration — efforts aimed at advancing safer, more convenient, and more comfortable automobility,” says Bryan Reimer, founder and co-director of AVT. “Since its founding, AVT has drawn together over 25 organizations — including vehicle manufacturers, suppliers, insurers, and consumer research groups — to invest in understanding how automotive technologies function, how they influence driver behavior, and where further innovation is needed. This work has enabled stakeholders like Toyota to make more-informed decisions in product development and deployment.”
“CSRC’s 100th project marks a significant milestone in our collaboration,” Reimer adds. “We deeply value CSRC’s sustained investment, and commend the organization’s commitment to global industry impact and the open dissemination of research to advance societal benefit.”
“Toyota, through its Collaborative Safety Research Center, is proud to be a founding member of the AVT Consortium,” says Jason Hallman, senior manager of Toyota CSRC. “Since 2011, CSRC has collaborated with researchers such as AVT and MIT AgeLab on projects that help inform future products and policy, and to promote a future safe mobility society for all. The AVT specifically has helped us to study the real-world use of several vehicle technologies now available.”
Among these technologies are lane-centering assistance and adaptive cruise control — widely used technologies that benefit from an understanding of how drivers interact with automation. “AVT uniquely combines vehicle and driver data to help inform future products and highlight the interplay between the performance of these features and the drivers using them,” says Josh Domeyer, principal scientist at CSRC.
Influencing global standards and Olympic-scale innovation
Insights from MIT’s pedestrian-driver interaction research with CSRC also helped shape Toyota’s automated vehicle communication systems. “These data helped develop our foundational understanding that drivers and pedestrians use their movements to communicate during routine traffic encounters,” says Domeyer. “This concept informed the deployment of Toyota’s e-Palette at the Tokyo Olympics, and it has been captured as a best practice in an ISO standard for automated driving system communication.”
The AVT Consortium's naturalistic driving datasets continue to serve as a foundation for behavioral safety strategies. From identifying moments of distraction to understanding how drivers multitask behind the wheel, the work is guiding subtle but impactful design considerations.
“By studying the natural behaviors of drivers and their contexts in the AVT datasets, we hope to identify new ways to encourage safe habits that align with customer preferences,” Domeyer says. “These can include subtle nudges, or modifications to existing vehicle features, or even communication and education partnerships outside of Toyota that reinforce these safe driving habits.”
Professor Yossi Sheffi, director of MIT CTL, comments, “This partnership exemplifies the impact of MIT collaborative research on industry to make real, practical innovation possible.”
A model for industry-academic collaboration
Founded in 2015, the AVT Consortium brings together automotive manufacturers, suppliers, and insurers to accelerate research in driver behavior, safety, and the transition toward automated systems. The consortium’s interdisciplinary approach — integrating engineering, human factors, and data science — has helped generate one of the world’s richest and most actionable real-world driving datasets.
As Toyota celebrates its research milestone, MIT reflects on a partnership that exemplifies the power of industry-academic collaboration to shape safer, smarter mobility.
MIT engineers solve the sticky-cell problem in bioreactors and other industries
To help mitigate climate change, companies are using bioreactors to grow algae and other microorganisms that are hundreds of times more efficient at absorbing CO2 than trees. Meanwhile, in the pharmaceutical industry, cell culture is used to manufacture biologic drugs and other advanced treatments, including lifesaving gene and cell therapies.
Both processes are hampered by cells’ tendency to stick to surfaces, which leads to a huge amount of waste and downtime for cleaning. A similar problem slows down biofuel production, interferes with biosensors and implants, and makes the food and beverage industry less efficient.
Now, MIT researchers have developed an approach for detaching cells from surfaces on demand, using electrochemically generated bubbles. In an open-access paper published in Science Advances, the researchers demonstrated their approach in a lab prototype and showed it could work across a range of cells and surfaces without harming the cells.
“We wanted to develop a technology that could be high-throughput and plug-and-play, and that would allow cells to attach and detach on demand to improve the workflow in these industrial processes,” says Professor Kripa Varanasi, senior author of the study. “This is a fundamental issue with cells, and we’ve solved it with a process that can scale. It lends itself to many different applications.”
Joining Varanasi on the study are co-first authors Bert Vandereydt, a PhD student in mechanical engineering, and former postdoc Baptiste Blanc.
Solving a sticky problem
The researchers began with a mission.
“We’ve been working on figuring out how we can efficiently capture CO2 across different sources and convert it into valuable products for various end markets,” Varanasi says. “That’s where this photobioreactor and cell detachment comes into the picture.”
Photobioreactors are used to grow carbon-absorbing algae cells by creating tightly controlled environments involving water and sunlight. They feature long, winding tubes with clear surfaces to let in the light algae need to grow. When algae stick to those surfaces, they block out the light, requiring cleaning.
“You have to shut down and clean up the entire reactor as frequently as every two weeks,” Varanasi says. “It’s a huge operational challenge.”
The researchers realized that other industries face a similar problem due to many cells’ natural adhesion, or stickiness. Each industry has its own solution for cell adhesion, depending on how important it is that the cells survive. Some scrape the surfaces clean, while others use special coatings that are toxic to cells.
In the pharmaceutical and biotech industries, cell detachment is typically carried out using enzymes. However, this method poses several challenges — it can damage cell membranes, is time-consuming, and requires large amounts of consumables, resulting in millions of liters of biowaste.
To create a better solution, the researchers began by studying other efforts to clear surfaces with bubbles, which mainly involved spraying bubbles onto surfaces and had been largely ineffective.
“We realized we needed the bubbles to form on the surfaces where we don’t want these cells to stick, so when the bubbles detach it creates a local fluid flow that creates shear stress at the interface and removes the cells,” Varanasi explains.
Electric currents generate bubbles by splitting water into hydrogen and oxygen. But previous attempts at using electricity to detach cells were hampered because the cell culture mediums contain sodium chloride, which turns into bleach when combined with an electric current. The bleach damages the cells, making it impractical for many applications.
“The culprit is the anode — that’s where the sodium chloride turns to bleach,” Vandereydt says. “We figured if we could separate that electrode from the rest of the system, we could prevent bleach from being generated.”
To make a better system, the researchers built a 3-square-inch glass surface and deposited a gold electrode on top of it. The layer of gold is so thin it doesn’t block out light. To keep the other electrode separate, the researchers integrated a special membrane that only allows protons to pass through. The setup allowed the researchers to send a current through without generating bleach.
To test their setup, they allowed algae cells from a concentrated solution to stick to the surfaces. When they applied a voltage, the bubbles separated the cells from the surfaces without harming them.
The researchers also studied the interaction between the bubbles and the cells, finding that the higher the current density, the more bubbles were created and the more algae were removed. They developed a model of how much current would be needed to remove algae in different settings, and matched it with results from experiments involving algae as well as ovarian cancer and bone cells.
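The paper’s quantitative model isn’t reproduced here, but the basic electrochemistry linking applied current to bubble production is Faraday’s law of electrolysis. A minimal sketch, assuming hydrogen evolution at the cathode (the current density value is purely illustrative):

```python
# Back-of-the-envelope sketch (not the authors' model): Faraday's law of
# electrolysis gives the gas generation rate for a given current density.
F = 96485.0  # Faraday constant, C/mol
Z_H2 = 2     # electrons transferred per H2 molecule at the cathode

def h2_generation_rate(current_density):
    """Moles of H2 produced per square meter of electrode per second,
    for a current density given in A/m^2."""
    return current_density / (Z_H2 * F)

j = 100.0  # illustrative current density, A/m^2
print(f"H2 generation at {j} A/m^2: {h2_generation_rate(j):.2e} mol/m^2/s")
# ~5.2e-04 mol/m^2/s; more current means more bubbles, and thus more shear
```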
“Mammalian cells are orders of magnitude more sensitive than algae cells, but even with those cells, we were able to detach them with no impact to the viability of the cell,” Vandereydt says.
Getting to scale
The researchers say their system could represent a breakthrough in applications where bleach or other chemicals would harm cells. That includes pharmaceutical and food production.
“If we can keep these systems running without fouling and other problems, then we can make them much more economical,” Varanasi says.
For cell culture plates used in the pharmaceutical industry, the team envisions a version of the system in which an electrode is robotically moved from one culture plate to the next, detaching cells as they’re grown. It could also be coiled around algae harvesting systems.
“This has general applicability because it doesn’t rely on any specific biological or chemical treatments, but on a physical force that is system-agnostic,” Varanasi says. “It’s also highly scalable to a lot of different processes, including particle removal.”
Varanasi cautions there is much work to be done to scale up the system. But he hopes it can one day make algae and other cell harvesting more efficient.
“The burning problem of our time is to somehow capture CO2 in a way that’s economically feasible,” Varanasi says. “These photobioreactors could be used for that, but we have to overcome the cell adhesion problem.”
The work was supported, in part, by Eni S.p.A. through the MIT Energy Initiative, the Belgian American Educational Foundation Fellowship, and the Maria Zambrano Fellowship.
Blending neuroscience, AI, and music to create mental health innovations
Computational neuroscientist and singer/songwriter Kimaya (Kimy) Lecamwasam, who also plays electric bass and guitar, says music has been a core part of her life for as long as she can remember. She grew up in a musical family and played in bands all through high school.
“For most of my life, writing and playing music was the clearest way I had to express myself,” says Lecamwasam. “I was a really shy and anxious kid, and I struggled with speaking up for myself. Over time, composing and performing music became central to both how I communicated and to how I managed my own mental health.”
Along with equipping her with valuable skills and experiences, she credits her passion for music as the catalyst for her interest in neuroscience.
“I got to see firsthand not only the ways that audiences reacted to music, but also how much value music had for musicians,” she says. “That close connection between making music and feeling well is what first pushed me to ask why music has such a powerful hold on us, and eventually led me to study the science behind it.”
Lecamwasam earned a bachelor’s degree in 2021 from Wellesley College, where she studied neuroscience — specifically in the Systems and Computational Neuroscience track — and also music. During her first semester, she took a class in songwriting that she says made her more aware of the connections between music and emotions. While studying at Wellesley, she participated in the MIT Undergraduate Research Opportunities Program for three years. Working in the Department of Brain and Cognitive Sciences lab of Emery Brown, the Edward Hood Taplin Professor of Medical Engineering and Computational Neuroscience, she focused primarily on classifying consciousness in anesthetized patients and training brain-computer interface-enabled prosthetics using reinforcement learning.
“I still had a really deep love for music, which I was pursuing in parallel to all of my neuroscience work, but I really wanted to try to find a way to combine both of those things in grad school,” says Lecamwasam. Brown recommended that she look into the graduate programs at the MIT Media Lab within the Program in Media Arts and Sciences (MAS), which turned out to be an ideal fit.
“One thing I really love about where I am is that I get to be both an artist and a scientist,” says Lecamwasam. “That was something that was important to me when I was picking a graduate program. I wanted to make sure that I was going to be able to do work that was really rigorous, validated, and important, but also get to do cool, creative explorations and actually put the research that I was doing into practice in different ways.”
Exploring the physical, mental, and emotional impacts of music
Informed by her years of neuroscience research as an undergraduate and her passion for music, Lecamwasam focused her graduate research on turning the emotional potency of music into scalable, non-pharmacological mental health tools. Her master’s thesis focused on “pharmamusicology,” looking at how music might positively affect the physiology and psychology of those with anxiety.
The overarching theme of Lecamwasam’s research is exploring the various impacts of music and affective computing — physically, mentally, and emotionally. Now in the third year of her doctoral program in the Opera of the Future group, she is currently investigating the impact of large-scale live music and concert experiences on the mental health and well-being of both audience members and performers. She is also working to clinically validate music listening, composition, and performance as health interventions, in combination with psychotherapy and pharmaceutical interventions.
Her recent work, in collaboration with Professor Anna Huang’s Human-AI Resonance Lab, assesses the emotional resonance of AI-generated music compared to human-composed music; the aim is to identify more ethical applications of emotion-sensitive music generation and recommendation that preserve human creativity and agency, and can also be used as health interventions. She has co-led a wellness and music workshop at the Wellbeing Summit in Bilbao, Spain, and has presented her work at the 2023 CHI Conference on Human Factors in Computing Systems in Hamburg, Germany, and the 2024 Audio Mostly conference in Milan, Italy.
Lecamwasam has collaborated with organizations near and far to implement real-world applications of her research. She worked with Carnegie Hall's Weill Music Institute on its Well-Being Concerts and is currently partnering on a study assessing the impact of lullaby writing on perinatal health with the North Shore Lullaby Project in Massachusetts, an offshoot of Carnegie Hall’s Lullaby Project. Her main international collaboration is with a company called Myndstream, working on projects comparing the emotional resonance of AI-generated music to human-composed music and thinking of clinical and real-world applications. She is also working on a project with the companies PixMob and Empatica (an MIT Media Lab spinoff), centered on assessing the impact of interactive lighting and large-scale live music experiences on emotional resonance in stadium and arena settings.
Building community
“Kimy combines a deep love for — and sophisticated knowledge of — music with scientific curiosity and rigor in ways that represent the Media Lab/MAS spirit at its best,” says Professor Tod Machover, Lecamwasam’s research advisor, Media Lab faculty director, and director of the Opera of the Future group. “She has long believed that music is one of the most powerful and effective ways to create personalized interventions to help stabilize emotional distress and promote empathy and connection. It is this same desire to establish sane, safe, and sustaining environments for work and play that has led Kimy to become one of the most effective and devoted community-builders at the lab.”
Lecamwasam has participated in the SOS (Students Offering Support) program in MAS for a few years, which assists students from a variety of life experiences and backgrounds during the process of applying to the Program in Media Arts and Sciences. She will soon be the first MAS peer mentor as part of a new initiative through which she will establish and coordinate programs including a “buddy system,” pairing incoming master’s students with PhD students as a way to help them transition into graduate student life at MIT. She is also part of the Media Lab’s Studcom, a student-run organization that promotes, facilitates, and creates experiences meant to bring the community together.
“I think everything that I have gotten to do has been so supported by the friends I’ve made in my lab and department, as well as across departments,” says Lecamwasam. “I think everyone is just really excited about the work that they do and so supportive of one another. It makes it so that even when things are challenging or difficult, I’m motivated to do this work and be a part of this community.”
Why some quantum materials stall while others scale
People tend to think of quantum materials — whose properties arise from quantum mechanical effects — as exotic curiosities. But some quantum materials have become a ubiquitous part of our computer hard drives, TV screens, and medical devices. Still, the vast majority of quantum materials never accomplish much outside of the lab.
What makes certain quantum materials commercial successes and others commercially irrelevant? If researchers knew, they could direct their efforts toward more promising materials — a big deal since they may spend years studying a single material.
Now, MIT researchers have developed a system for evaluating the scale-up potential of quantum materials. Their framework combines a material’s quantum behavior with its cost, supply chain resilience, environmental footprint, and other factors. The researchers used their framework to evaluate over 16,000 materials, finding that the materials exhibiting the strongest quantum behavior also tend to be more expensive and environmentally damaging. The researchers also identified a set of materials that achieve a balance between quantum functionality and sustainability for further study.
The team hopes their approach will help guide the development of more commercially viable quantum materials that could be used for next generation microelectronics, energy harvesting applications, medical diagnostics, and more.
“People studying quantum materials are very focused on their properties and quantum mechanics,” says Mingda Li, associate professor of nuclear science and engineering and the senior author of the work. “For some reason, they have a natural resistance during fundamental materials research to thinking about the costs and other factors. Some told me they think those factors are too ‘soft’ or not related to science. But I think within 10 years, people will routinely be thinking about cost and environmental impact at every stage of development.”
The paper appears in Materials Today. Joining Li on the paper are co-first authors and PhD students Artittaya Boonkird, Mouyang Cheng, and Abhijatmedhi Chotrattanapituk, along with PhD students Denisse Cordova Carrizales and Ryotaro Okabe; former graduate research assistants Thanh Nguyen and Nathan Drucker; postdoc Manasi Mandal; Instructor Ellan Spero of the Department of Materials Science and Engineering (DMSE); Professor Christine Ortiz, also of DMSE; Professor Liang Fu of the Department of Physics; Professor Tomas Palacios of the Department of Electrical Engineering and Computer Science (EECS); Associate Professor Farnaz Niroui of EECS; Assistant Professor Jingjie Yeo of Cornell University; and PhD student Vsevolod Belosevich and Assistant Professor Qiong Ma of Boston College.
Materials with impact
Cheng and Boonkird say that materials science researchers often gravitate toward quantum materials with the most exotic quantum properties rather than the ones most likely to be used in products that change the world.
“Researchers don’t always think about the costs or environmental impacts of the materials they study,” Cheng says. “But those factors can make them impossible to do anything with.”
Li and his collaborators wanted to help researchers focus on quantum materials with more potential to be adopted by industry. For this study, they developed methods for evaluating factors like the materials’ price and environmental impact using their elements and common practices for mining and processing those elements. At the same time, they quantified the materials’ level of “quantumness” using an AI model created by the same group last year, based on a concept proposed by MIT professor of physics Liang Fu, termed quantum weight.
“For a long time, it’s been unclear how to quantify the quantumness of a material,” Fu says. “Quantum weight is very useful for this purpose. Basically, the higher the quantum weight of a material, the more quantum it is.”
The researchers focused on a class of quantum materials with exotic electronic properties known as topological materials, eventually assigning over 16,000 materials scores on environmental impact, price, import resilience, and more.
For the first time, the researchers found a strong correlation between a material’s quantum weight and how expensive and environmentally damaging it is.
“That’s useful information because the industry really wants something very low-cost,” Spero says. “We know what we should be looking for: high quantum weight, low-cost materials. Very few materials being developed meet those criteria, and that likely explains why they don’t scale to industry.”
The researchers identified 200 environmentally sustainable materials and further refined the list down to 31 material candidates that achieved an optimal balance of quantum functionality and high-potential impact.
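As a minimal sketch of what this kind of multi-criteria screen could look like in code (the field names, score scales, and thresholds below are hypothetical; the paper’s actual scoring is richer):

```python
# Illustrative multi-criteria screen for candidate quantum materials.
# All field names, scores, scales, and thresholds are made up.
from dataclasses import dataclass

@dataclass
class Material:
    formula: str
    quantum_weight: float     # higher = stronger quantum response
    cost_score: float         # 0 (cheap) to 1 (expensive)
    env_impact_score: float   # 0 (benign) to 1 (damaging)
    import_resilience: float  # 0 (fragile supply chain) to 1 (resilient)

def shortlist(materials, min_qw=0.5, max_cost=0.3, max_env=0.3, min_res=0.5):
    """Keep materials that balance quantum functionality with practicality."""
    return [
        m for m in materials
        if m.quantum_weight >= min_qw
        and m.cost_score <= max_cost
        and m.env_impact_score <= max_env
        and m.import_resilience >= min_res
    ]

candidates = [
    Material("Bi2Se3", 0.90, 0.60, 0.70, 0.40),        # quantum-rich but costly
    Material("HypotheticalX", 0.70, 0.20, 0.20, 0.80),  # balanced candidate
]
print([m.formula for m in shortlist(candidates)])  # ['HypotheticalX']
```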
The researchers also found that several widely studied materials exhibit high environmental impact scores, indicating they will be hard to scale sustainably. “Considering the scalability of manufacturing and environmental availability and impact is critical to ensuring practical adoption of these materials in emerging technologies,” says Niroui.
Guiding research
Many of the topological materials evaluated in the paper have never been synthesized, which limited the accuracy of the study’s environmental and cost predictions. But the authors are already working with companies to study some of the promising materials identified in the paper.
“We talked with people at semiconductor companies that said some of these materials were really interesting to them, and our chemist collaborators also identified some materials they find really interesting through this work,” Palacios says. “Now we want to experimentally study these cheaper topological materials to understand their performance better.”
“Solar cells have an efficiency limit of 34 percent, but many topological materials have a theoretical limit of 89 percent. Plus, you can harvest energy across all electromagnetic bands, including our body heat,” Fu says. “If we could reach those limits, you could easily charge your cell phone using body heat. These are performances that have been demonstrated in labs, but could never scale up. That’s the kind of thing we’re trying to push forward.”
This work was supported, in part, by the National Science Foundation and the U.S. Department of Energy.
Earthquake damage at deeper depths occurs long after initial activity
Earthquakes often bring to mind images of destruction, of the Earth breaking open and altering landscapes. But after an earthquake, the area around it undergoes a period of post-seismic deformation, where areas that didn’t break experience new stress as a result of the sudden change in the surroundings. Once it has adjusted to this new stress, it reaches a state of recovery.
Geologists have often thought that this recovery period was a smooth, continuous process. But MIT research published recently in Science has found evidence that while healing occurs quickly at shallow depths — roughly above 10 km — deeper depths recover more slowly, if at all.
“If you were to look before and after in the shallow crust, you wouldn’t see any permanent change. But there’s this very permanent change that persists in the mid-crust,” says Jared Bryan, a graduate student in the MIT Department of Earth, Atmospheric and Planetary Sciences (EAPS) and lead author on the paper.
The paper’s other authors include EAPS Professor William Frank and Pascal Audet from the University of Ottawa.
Everything but the quakes
In order to assemble a full understanding of how the crust behaves before, during, and after an earthquake sequence, the researchers looked at seismic data from the 2019 Ridgecrest earthquakes in California. This immature fault zone experienced the largest earthquake in the state in 20 years, and tens of thousands of aftershocks over the following year. They then removed seismic data created by the sequence and only looked at waves generated by other seismic activity around the world to see how their paths through the Earth changed before and after the sequence.
“One person’s signal is another person’s noise,” says Bryan. They also used general ambient noise from sources like ocean waves and traffic that is picked up by seismometers. Then, using a technique called a receiver function, they were able to see how fast the waves traveled and how that speed changed due to conditions in the Earth, such as rock density and porosity, much the same way sonar reveals how acoustic waves change when they interact with objects. With all this information, they were able to construct basic maps of the Earth around the Ridgecrest fault zone before and after the sequence.
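The study’s receiver-function analysis is more involved than can be shown here, but a related standard technique conveys the flavor: ambient-noise monitoring often estimates a relative seismic velocity change (dv/v) by finding the time “stretch” that best aligns a waveform recorded after an event with a reference recorded before it. A toy sketch, with synthetic waveforms standing in for real seismograms:

```python
# Toy sketch of the "stretching" method from ambient-noise monitoring:
# estimate a relative velocity change (dv/v) by grid search over stretch factors.
# This illustrates the general idea, not the study's receiver-function pipeline.
import numpy as np

def estimate_dvv(reference, current, t, trials=np.linspace(-0.05, 0.05, 201)):
    """Return the stretch factor that best aligns `current` with `reference`."""
    best_dvv, best_cc = 0.0, -np.inf
    for dvv in trials:
        # Resample the current waveform on a stretched time axis
        stretched = np.interp(t, t * (1.0 + dvv), current)
        cc = np.corrcoef(reference, stretched)[0, 1]
        if cc > best_cc:
            best_dvv, best_cc = dvv, cc
    return best_dvv, best_cc

t = np.linspace(0.0, 10.0, 2000)
ref = np.sin(2 * np.pi * t) * np.exp(-0.2 * t)         # "before" waveform
cur = np.sin(2 * np.pi * t * 1.01) * np.exp(-0.2 * t)  # medium ~1% faster
print(estimate_dvv(ref, cur, t))  # best stretch comes out near +0.01
```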
What they found was that the shallow crust, extending about 10 km into the Earth, recovered over the course of a few months. In contrast, deeper depths in the mid-crust didn’t experience immediate damage, but rather changed over the same timescale as shallow depths recovered.
“What was surprising is that the healing in the shallow crust was so quick, and then you have this complementary accumulation occurring, not at the time of the earthquake, but instead over the post-seismic phase,” says Bryan.
Balancing the energy budget
Understanding how recovery plays out at different depths is crucial for determining how energy is spent during different parts of the seismic process, which includes the release of energy as waves, the creation of new fractures, and the elastic storage of energy in the surrounding areas. Collectively, this is known as the energy budget, and it is a useful tool for understanding how damage accumulates and recovers over time.
What remains unclear is the timescale on which deeper depths recover, if they do at all. The paper presents two possible scenarios to explain why: one in which the deep crust recovers over a much longer timescale than the researchers observed, and one in which it never recovers at all.
“Either of those are not what we expected,” says Frank. “And both of them are interesting.”
Further research will require more observations to build out a more detailed picture to see at what depth the change becomes more pronounced. In addition, Bryan wants to look at other areas, such as more mature faults that experience higher levels of seismic activity, to see if it changes the results.
“We’ll let you know in 1,000 years whether it’s recovered,” says Bryan.
Darcy McRose and Mehtaab Sawhney ’20, PhD ’24 named 2025 Packard Fellows for Science and Engineering
The David and Lucile Packard Foundation has announced that two MIT affiliates have been named 2025 Packard Fellows for Science and Engineering. Darcy McRose, the Thomas D. and Virginia W. Cabot Career Development Assistant Professor in the MIT Department of Civil and Environmental Engineering, has been honored, along with Mehtaab Sawhney ’20, PhD ’24, a graduate of the Department of Mathematics who is now at Columbia University.
The honorees are among 20 junior faculty recognized as the nation’s most innovative early-career scientists and engineers. Each Packard Fellow receives an unrestricted research grant of $875,000 over five years to support their pursuit of pioneering research and bold new ideas.
“I’m incredibly grateful and honored to be awarded a Packard Fellowship,” says McRose. “It will allow us to continue our work exploring how small molecules control microbial communities in soils and on plant roots, with much-appreciated flexibility to follow our imagination wherever it leads us.”
McRose and her lab study secondary metabolites — small organic molecules that microbes and plants release into soils. Often known as antibiotics, these compounds do far more than fight infections; they can help unlock soil nutrients, shape microbial communities around plant roots, and influence soil fertility.
“Antibiotics made by soil microorganisms are widely used in medicine, but we know surprisingly little about what they do in nature,” explains McRose. “Just as healthy microbiomes support human health, plant microbiomes support plant health, and secondary metabolites can help to regulate the microbial community, suppressing pathogens and promoting beneficial microbes.”
Her lab integrates techniques from genetics, chemistry, and geosciences to investigate how these molecules shape interactions between microbes and plants in soil — one of Earth’s most complex and least-understood environments. By using secondary metabolites as experimental tools, McRose aims to uncover the molecular mechanisms that govern processes like soil fertility and nutrient cycling that are foundational to sustainable agriculture and ecosystem health.
Studying antibiotics in the environments where they evolved could also yield new strategies for combating soil-borne pathogens and improving crop resilience. “Soil is a true scientific frontier,” McRose says. “Studying these environments has the potential to reveal fascinating, fundamental insights into microbial life — many of which we can’t even imagine yet.”
A native of California, McRose earned her bachelor’s and master’s degrees from Stanford University, followed by a PhD in geosciences from Princeton University. Her graduate thesis focused on how bacteria acquire trace metals from the environment. Her postdoctoral research on secondary metabolites at Caltech was supported by multiple fellowships, including the Simons Foundation Marine Microbial Ecology Postdoctoral Fellowship, the L’Oréal USA For Women in Science Fellowship, and a Division Fellowship from Biology and Biological Engineering at Caltech.
McRose joined the MIT faculty in 2022. In 2025, she was named a Sloan Foundation Research Fellow in Earth System Science and awarded the Maseeh Excellence in Teaching Award.
Past Packard Fellows have gone on to earn the highest honors, including Nobel Prizes in chemistry and physics, the Fields Medal, Alan T. Waterman Awards, Breakthrough Prizes, Kavli Prizes, and election to the National Academies of Sciences, Engineering, and Medicine. Each year, the foundation reviews 100 nominations from 50 invited institutions. The Packard Fellowships Advisory Panel, a group of 12 internationally recognized scientists and engineers, evaluates the nominations and recommends 20 fellows for approval by the Packard Foundation Board of Trustees.
Engineering next-generation fertilizers
Born in Palermo, Sicily, Giorgio Rizzo spent his childhood curious about the natural world. “I have always been fascinated by nature and how plants and animals can adapt and survive in extreme environments,” he says. “Their highly tuned biochemistry, and their incredible ability to create some of the most complex and beautiful structures in chemistry that we still can’t achieve in our laboratories.”
As an undergraduate student, he watched as a researcher mounted a towering chromatography column layered with colorful plant chemicals in a laboratory. When the researcher switched on a UV light, the colors turned into fluorescent shades of blue, green, red, and pink. “I realized in that exact moment that I wanted to be the same person, separating new unknown compounds from a rare plant with potential pharmaceutical properties,” he recalls.
These experiences set him on a path from a master’s degree in organic chemistry to his current work as a postdoc in the MIT Department of Civil and Environmental Engineering, where he focuses on developing sustainable fertilizers and studying how rare earth elements can boost plant resilience, with the aim of reducing agriculture’s environmental impact.
In the lab of MIT Professor Benedetto Marelli, Rizzo studies plant responses to environmental stressors, such as heat, drought, and prolonged UV irradiation. This includes developing new fertilizers that can be applied as seed coating to help plants grow stronger and enhance their resistance.
“We are working on new formulations of fertilizers that aim to reduce the huge environmental impact of classical practices in agriculture based on NPK inorganic fertilizers,” Rizzo explains. Although these fertilizers are fundamental to crop yields, their tendency to accumulate in soil is detrimental to soil health and the microbiome living in it. In addition, producing NPK (nitrogen, phosphorus, and potassium) fertilizers is one of the most energy-consuming and polluting chemical processes in the world.
“It is mandatory to reshape our conception of fertilizers and try to rely, at least in part, on alternative products that are safer, cheaper, and more sustainable,” he says.
Recently, Rizzo was awarded a Kavanaugh Fellowship, a program that gives MIT graduate students and postdocs entrepreneurial training and resources to bring their research from the lab to the market. “This prestigious fellowship will help me build a concrete product for a company, adding more value to our research,” he says.
Rizzo hopes their work will help farmers increase their crop yields without compromising soil quality or plant health. A major barrier to adopting new fertilizers is cost, as many farmers rely heavily on each growing season’s output and cannot risk investing in products that may underperform compared to traditional NPK fertilizers. The fertilizers being developed in the Marelli Lab address this challenge by using chitin and chitosan, abundant natural materials that make them far less expensive to produce, which Rizzo hopes will encourage farmers to try them.
“Through the Kavanaugh Fellowship, I will spend this year trying to bring the technology outside the lab to impact the world and meet the need for farmers to support their prosperity,” he says.
Mentorship has been a defining part of his postdoc experience. Rizzo describes Professor Benedetto Marelli as “an incredible mentor” who values his research interests and supports him through every stage of his work. The lab spans a wide range of projects — from plant growth enhancement and precision chemical delivery to wastewater treatment, vaccine development for fish, and advanced biochemical processes. “My colleagues have created a stimulating environment with different research topics,” he notes. He is also grateful for the work he does with international institutions, which has helped him build a network of researchers and academics around the world.
Rizzo enjoys the opportunity to mentor students in the lab and appreciates their curiosity and willingness to learn. “It is one of the greatest qualities you can have as a scientist because you must be driven by curiosity to discover the unexpected,” he says.
He describes MIT as a “dynamic and stimulating experience,” but also acknowledges how overwhelming it can be. “You will feel like a small fish in a big ocean,” he says. “But that is exactly what MIT is: an ocean full of opportunities and challenges that are waiting to be solved.”
Beyond his professional work, Rizzo enjoys nature and the arts. An avid reader, he balances his scientific work with literature and history. “I never read about science-related topics — I read about them a lot already for my job,” he says. “I like classic literature, novels, essays, history of nations, and biographies. Often you can find me wandering in museums’ art collections.” Classical, Renaissance, and Pre-Raphaelite art are his favorite movements.
Looking ahead, Rizzo hopes to shift his professional pathway toward startups or companies focused on agrotechnical improvement. His immediate goal is to contribute to initiatives where research has a direct, tangible impact on everyday life.
“I want to pursue the option of being part of a spinout process that would enable my research to have a direct impact in everyday life and help solve agricultural issues,” he adds.
Optimizing food subsidies: Applying digital platforms to maximize nutrition
Oct. 16 is World Food Day, a global campaign to celebrate the founding of the Food and Agriculture Organization 80 years ago, and to work toward a healthy, sustainable, food-secure future. More than 670 million people in the world are facing hunger. Millions more face rising obesity rates and struggle to get the healthy food needed for proper nutrition.
World Food Day calls on not only world governments, but also business, academia, the media, and young people to take action to promote resilient food systems and combat hunger. This year, the Abdul Latif Jameel Water and Food Systems Laboratory (J-WAFS) is spotlighting an MIT researcher who is working toward this goal by studying food and water systems in the Global South.
J-WAFS seed grants provide funding to early-stage research projects that are distinct from prior work. In the 11th round of seed grant funding in 2025, 10 MIT faculty members received support to carry out their cutting-edge water and food research. Ali Aouad PhD ’17, assistant professor of operations management at the MIT Sloan School of Management, was one of those grantees. “I had searched before joining MIT what kind of research centers and initiatives were available that tried to coalesce research on food systems,” Aouad says. “And so, I was very excited about J-WAFS.”
Aouad gathered more information about J-WAFS at the new faculty orientation session in August 2024, where he spoke to J-WAFS staff and learned about the program’s grant opportunities for water and food research. Later that fall semester, he attended a few J-WAFS seminars on agricultural economics and water resource management. That’s when Aouad knew that his project was perfectly aligned with the J-WAFS mission of securing humankind’s water and food.
Aouad’s seed project focuses on food subsidies. With a background in operations research and an interest in digital platforms, much of his work has centered on aligning supply-side operations with heterogeneous customer preferences. Past projects include ones on retail and matching systems. “I started thinking that these types of demand-driven approaches may be also very relevant to important social challenges, particularly as they relate to food security,” Aouad says. Before starting his PhD at MIT, Aouad worked on projects that looked at subsidies for smallholder farmers in low- and middle-income countries. “I think in the back of my mind, I’ve always been fascinated by trying to solve these issues,” he notes.
His seed grant project, “Optimal subsidy design: Application to food assistance programs,” aims to leverage data on preferences and purchasing habits from local grocery stores in India to inform food assistance policy and optimize the design of subsidies. Typical data collection systems, like point-of-sale terminals, are not readily available in India’s local groceries, making this type of data hard to come by for low-income individuals. “Mom-and-pop stores are extremely important last-mile operators when it comes to nutrition,” he explains.
For this project, the research team gave local grocers point-of-sale scanners to track purchasing habits. “We aim to develop an algorithm that converts these transactions into some sort of ‘revelation’ of the individuals’ latent preferences,” says Aouad. “As such, we can model and optimize the food assistance programs — how much variety and flexibility is offered, taking into account the expected demand uptake.” He continues, “Now, of course, our ability to answer detailed design questions [across various products and prices] depends on the quality of our inference from the data, and so this is where we need more sophisticated and robust algorithms.”
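To give a flavor of what inferring latent preferences from transactions can involve, here is a textbook multinomial-logit sketch, not the project’s actual algorithm; the products, features, and observed choices are all made up:

```python
# Toy sketch: recover latent taste weights from observed purchases with a
# multinomial-logit choice model. Data and features are entirely hypothetical.
import numpy as np
from scipy.optimize import minimize

# Each product described by two features: [price, nutrition_score]
products = np.array([[1.0, 0.2],
                     [0.8, 0.5],
                     [1.5, 0.9]])

# Observed choices across shopping trips (row indices into `products`)
choices = np.array([1, 1, 2, 0, 1, 2, 2, 1])

def neg_log_likelihood(beta):
    utilities = products @ beta  # linear utility for each product
    log_probs = utilities - np.logaddexp.reduce(utilities)  # log-softmax
    return -log_probs[choices].sum()

beta_hat = minimize(neg_log_likelihood, x0=np.zeros(2)).x
print("Estimated taste weights (price, nutrition):", beta_hat)
```

With weights like these in hand, a subsidy designer could simulate how demand shifts as subsidized prices or product assortments change, which is the kind of optimization the project describes.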
Following the data collection and model development, the ultimate goal of this research is to inform policy surrounding food assistance programs through an “optimization approach.” Aouad describes the complexities of using optimization to guide policy. “Policies are often informed by domain expertise, legacy systems, or political deliberation. A lot of researchers build rigorous evidence to inform food policy, but it’s fair to say that the kind of approach that I’m proposing in this research is not something that is commonly used. I see an opportunity for bringing a new approach and methodological tradition to a problem that has been central for policy for many decades.”
The overall health of consumers is the reason food assistance programs exist, yet measuring long-term nutritional impacts and shifts in purchase behavior is difficult. Aouad notes that in past research, the short-term effects of food assistance interventions can be significant, but these effects are often short-lived. “This is a fascinating question that I don’t think we will be able to address within the space of interventions that we will be considering. However, I think it is something I would like to capture in the research, and maybe develop hypotheses for future work around how we can shift nutrition-related behaviors in the long run.”
While his project develops a new methodology to calibrate food assistance programs, large-scale applications are not promised. “A lot of what drives subsidy mechanisms and food assistance programs is also, quite frankly, how easy it is and how cost-effective it is to implement these policies in the first place,” comments Aouad. Cost and infrastructure barriers are unavoidable in this kind of policy research, as is the challenge of sustaining these programs. Aouad’s effort will provide insights into customer preferences and subsidy optimization in a pilot setup, but replicating this approach at real scale may be costly. Aouad hopes to gather proxy information from customers that would both feed into the model and provide insight into a more cost-effective way to collect data for large-scale implementation.
There is still much work to be done to ensure food security for all, whether it’s advances in agriculture, food-assistance programs, or ways to boost adequate nutrition. As the 2026 seed grant deadline approaches, J-WAFS will continue its mission of supporting MIT faculty as they pursue innovative projects that have practical and real impacts on water and food system challenges.
Checking the quality of materials just got easier with a new AI tool
Manufacturing better batteries, faster electronics, and more effective pharmaceuticals depends on the discovery of new materials and the verification of their quality. Artificial intelligence is helping with the former, with tools that comb through catalogs of materials to quickly tag promising candidates.
But once a material is made, verifying its quality still involves scanning it with specialized instruments to validate its performance — an expensive and time-consuming step that can hold up the development and distribution of new technologies.
Now, a new AI tool developed by MIT engineers could help clear the quality-control bottleneck, offering a faster and cheaper option for certain materials-driven industries.
In a study appearing today in the journal Matter, the researchers present “SpectroGen,” a generative AI tool that turbocharges scanning capabilities by serving as a virtual spectrometer. The tool takes in “spectra,” or measurements of a material in one scanning modality, such as infrared, and generates what that material’s spectra would look like if it were scanned in an entirely different modality, such as X-ray. The AI-generated spectral results match, with 99 percent accuracy, the results obtained from physically scanning the material with the new instrument.
Certain spectroscopic modalities reveal specific properties in a material: Infrared reveals a material’s molecular groups, while X-ray diffraction visualizes the material’s crystal structures, and Raman scattering illuminates a material’s molecular vibrations. Each of these properties is essential in gauging a material’s quality and typically requires tedious workflows on multiple expensive and distinct instruments to measure.
With SpectroGen, the researchers envision that a diversity of measurements can be made using a single and cheaper physical scope. For instance, a manufacturing line could carry out quality control of materials by scanning them with a single infrared camera. Those infrared spectra could then be fed into SpectroGen to automatically generate the material’s X-ray spectra, without the factory having to house and operate a separate, often more expensive X-ray-scanning laboratory.
The new AI tool generates spectra in less than one minute, a thousand times faster than traditional approaches, which can take several hours to days to measure and validate.
“We think that you don’t have to do the physical measurements in all the modalities you need, but perhaps just in a single, simple, and cheap modality,” says study co-author Loza Tadesse, assistant professor of mechanical engineering at MIT. “Then you can use SpectroGen to generate the rest. And this could improve productivity, efficiency, and quality of manufacturing.”
The study’s lead author is former MIT postdoc Yanmin Zhu.
Beyond bonds
Tadesse’s interdisciplinary group at MIT pioneers technologies that advance human and planetary health, developing innovations for applications ranging from rapid disease diagnostics to sustainable agriculture.
“Diagnosing diseases, and material analysis in general, usually involves scanning samples and collecting spectra in different modalities, with different instruments that are bulky and expensive and that you might not all find in one lab,” Tadesse says. “So, we were brainstorming about how to miniaturize all this equipment and how to streamline the experimental pipeline.”
Zhu noted the increasing use of generative AI tools for discovering new materials and drug candidates, and wondered whether AI could also be harnessed to generate spectral data. In other words, could AI act as a virtual spectrometer?
A spectroscope probes a material’s properties by sending light of a certain wavelength into the material. That light causes molecular bonds in the material to vibrate in ways that scatter the light back out to the scope, where the light is recorded as a pattern of waves, or spectra, that can then be read as a signature of the material’s structure.
For AI to generate spectral data, the conventional approach would involve training an algorithm to recognize connections between physical atoms and features in a material, and the spectra they produce. Given the complexity of molecular structures within just one material, Tadesse says such an approach can quickly become intractable.
“Doing this even for just one material is impossible,” she says. “So, we thought, is there another way to interpret spectra?”
The team found an answer in math. They realized that a spectral pattern, which is a sequence of waveforms, can be represented mathematically. For instance, a spectrum made up of a series of bell curves follows a “Gaussian” distribution, which is associated with one mathematical expression, while a series of narrower peaks follows a “Lorentzian” distribution, described by a separate, distinct expression. And as it turns out, for most materials, infrared spectra characteristically contain more Lorentzian waveforms, Raman spectra are more Gaussian, and X-ray spectra are a mix of the two.
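Written out as code, the two line shapes are simple functions, and a spectrum can be composed as a mixture of them. This is a minimal illustration of that mathematical picture, not SpectroGen’s internals:

```python
# The two line shapes described above: a Gaussian bell curve and a Lorentzian,
# which has a narrower core and heavier tails. A spectrum mixes such peaks.
import numpy as np

def gaussian(x, center, amplitude, width):
    return amplitude * np.exp(-((x - center) ** 2) / (2.0 * width ** 2))

def lorentzian(x, center, amplitude, width):
    return amplitude * width ** 2 / ((x - center) ** 2 + width ** 2)

# A synthetic spectrum mixing both shapes (peak positions/widths are made up)
x = np.linspace(0.0, 100.0, 1000)
spectrum = gaussian(x, 30.0, 1.0, 3.0) + lorentzian(x, 65.0, 0.8, 2.0)
```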
Tadesse and Zhu worked this mathematical interpretation of spectral data into an algorithm that they then incorporated into a generative AI model.
“It’s a physics-savvy generative AI that understands what spectra are,” Tadesse says. “And the key novelty is, we interpreted spectra not as how it comes about from chemicals and bonds, but that it is actually math — curves and graphs, which an AI tool can understand and interpret.”
Data co-pilot
The team demonstrated their SpectroGen AI tool on a large, publicly available dataset of over 6,000 mineral samples. Each sample includes information on the mineral’s properties, such as its elemental composition and crystal structure. Many samples in the dataset also include spectral data in different modalities, such as X-ray, Raman, and infrared. Of these samples, the team fed several hundred to SpectroGen, in a process that trained the AI tool, also known as a neural network, to learn correlations between a mineral’s different spectral modalities. This training enabled SpectroGen to take in spectra of a material in one modality, such as infrared, and generate what a spectrum in a totally different modality, such as X-ray, should look like.
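As a minimal sketch of what such a spectrum-to-spectrum mapping could look like (the architecture, sizes, and training loop below are hypothetical; SpectroGen’s actual generative model is more sophisticated):

```python
# Hypothetical sketch of a spectrum-to-spectrum network: predict an X-ray
# spectrum from an infrared spectrum. Not SpectroGen's actual architecture.
import torch
import torch.nn as nn

N_BINS = 1024  # assumed number of bins per digitized spectrum

model = nn.Sequential(
    nn.Linear(N_BINS, 512), nn.ReLU(),
    nn.Linear(512, 512), nn.ReLU(),
    nn.Linear(512, N_BINS),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

def train_step(infrared_batch, xray_batch):
    """One gradient step toward predicting X-ray spectra from infrared."""
    optimizer.zero_grad()
    loss = loss_fn(model(infrared_batch), xray_batch)
    loss.backward()
    optimizer.step()
    return loss.item()

# Dummy batch of 8 paired spectra, just to show the shapes involved
print(train_step(torch.rand(8, N_BINS), torch.rand(8, N_BINS)))
```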
Once they trained the AI tool, the researchers fed SpectroGen spectra from a mineral in the dataset that was not included in the training process. They asked the tool to generate a spectrum in a different modality, based on this “new” input. The AI-generated spectrum, they found, was a close match to the mineral’s real spectrum, which had originally been recorded by a physical instrument. The researchers carried out similar tests with a number of other minerals and found that the AI tool quickly generated spectra with 99 percent correlation.
“We can feed spectral data into the network and can get another totally different kind of spectral data, with very high accuracy, in less than a minute,” Zhu says.
The team says that SpectroGen can generate spectra for any type of mineral. In a manufacturing setting, for instance, mineral-based materials used to make semiconductors and battery technologies could first be quickly scanned by an infrared laser. The spectra from this infrared scanning could be fed into SpectroGen, which would then generate the corresponding X-ray spectra, which operators or a multiagent AI platform could check to assess the material’s quality.
“I think of it as having an agent or co-pilot, supporting researchers, technicians, pipelines and industry,” Tadesse says. “We plan to customize this for different industries’ needs.”
The team is exploring ways to adapt the AI tool for disease diagnostics, and for agricultural monitoring through an upcoming project funded by Google. Tadesse is also advancing the technology to the field through a new startup and envisions making SpectroGen available for a wide range of sectors, from pharmaceuticals to semiconductors to defense.
Helping scientists run complex data analyses without writing code
As costs for diagnostic and sequencing technologies have plummeted in recent years, researchers have collected an unprecedented amount of data around disease and biology. Unfortunately, scientists hoping to go from data to new cures often require help from someone with experience in software engineering.
Now, Watershed Bio is helping scientists and bioinformaticians run experiments and get insights with a platform that lets users analyze complex datasets regardless of their computational skills. The cloud-based platform provides workflow templates and a customizable interface to help users explore and share data of all types, including whole-genome sequencing, transcriptomics, proteomics, metabolomics, high-content imaging, protein folding, and more.
“Scientists want to learn about the software and data science parts of the field, but they don’t want to become software engineers writing code just to understand their data,” co-founder and CEO Jonathan Wang ’13, SM ’15 says. “With Watershed, they don’t have to.”
Watershed is being used by large and small research teams across industry and academia to drive discovery and decision-making. When new advanced analytic techniques are described in scientific journals, they can be added to Watershed’s platform immediately as templates, making cutting-edge tools more accessible and collaborative for researchers of all backgrounds.
“The data in biology is growing exponentially, and the sequencing technologies generating this data are only getting better and cheaper,” Wang says. “Coming from MIT, this issue was right in my wheelhouse: It’s a tough technical problem. It’s also a meaningful problem because these people are working to treat diseases. They know all this data has value, but they struggle to use it. We want to help them unlock more insights faster.”
No-code discovery
Wang expected to major in biology at MIT, but he quickly got excited by computer science and the possibility of building solutions that scale to millions of people. He ended up earning both his bachelor’s and master’s degrees from the Department of Electrical Engineering and Computer Science (EECS). Wang also interned in an MIT biology lab, where he was surprised by how slow and labor-intensive experiments were.
“I saw the difference between biology and computer science, where you had these dynamic environments [in computer science] that let you get feedback immediately,” Wang says. “Even as a single person writing code, you have so much at your fingertips to play with.”
While working on machine learning and high-performance computing at MIT, Wang also co-founded a high-frequency trading firm with some classmates. His team hired researchers with PhD backgrounds in areas like math and physics to develop new trading strategies, but they quickly ran into a bottleneck in their process.
“Things were moving slowly because the researchers were used to building prototypes,” Wang says. “These were small approximations of models they could run locally on their machines. To put those approaches into production, they needed engineers to make them work in a high-throughput way on a computing cluster. But the engineers didn’t understand the nature of the research, so there was a lot of back and forth. It meant ideas you thought could have been implemented in a day took weeks.”
To solve the problem, Wang’s team developed a software layer that made building production-ready models as easy as building prototypes on a laptop. Then, a few years after graduating from MIT, Wang noticed technologies like DNA sequencing had become cheap and ubiquitous.
“The bottleneck wasn’t sequencing anymore, so people said, ‘Let’s sequence everything,’” Wang recalls. “The limiting factor became computation. People didn’t know what to do with all the data being generated. Biologists were waiting for data scientists and bioinformaticians to help them, but those people didn’t always understand the biology at a deep enough level.”
The situation looked familiar to Wang.
“It was exactly like what we saw in finance, where researchers were trying to work with engineers, but the engineers never fully understood, and you had all this inefficiency with people waiting on the engineers,” Wang says. “Meanwhile, I learned the biologists are hungry to run these experiments, but there is such a big gap that they felt they had to become software engineers or just focus on the science.”
Wang officially founded Watershed in 2019 with physician Mark Kalinich ’13, a former classmate at MIT who is no longer involved in day-to-day operations of the company.
Wang has since heard from biotech and pharmaceutical executives about the growing complexity of biology research. Unlocking new insights increasingly involves analyzing data from entire genomes, population studies, RNA sequencing, mass spectrometry, and more. Developing personalized treatments or selecting patient populations for a clinical study can also require huge datasets, and there are new ways to analyze data being published in scientific journals all the time.
Today, companies can run large-scale analyses on Watershed without having to set up their own servers or cloud computing accounts. Researchers can use ready-made templates that work with all the most common data types to accelerate their work. Popular AI-based tools like AlphaFold and Geneformer are also available, and Watershed’s platform makes sharing workflows and digging deeper into results easy.
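To illustrate what a reusable workflow template might look like in spirit, here is a deliberately hypothetical sketch in Python. None of these names or fields reflect Watershed’s actual product, which the article does not describe at a technical level; the sketch only shows the general pattern of a parameterized analysis recipe that a scientist can adjust without writing pipeline code.

```python
# A purely hypothetical sketch of the "workflow template" pattern: a
# reusable analysis recipe whose parameters a scientist can adjust
# without touching pipeline code. None of these names come from
# Watershed's actual platform.
from dataclasses import dataclass, field

@dataclass
class WorkflowTemplate:
    name: str
    steps: list[str] = field(default_factory=list)  # ordered pipeline stages
    params: dict = field(default_factory=dict)      # user-adjustable knobs

# A template a platform might ship for a common analysis type.
rnaseq = WorkflowTemplate(
    name="bulk-rnaseq-differential-expression",
    steps=["qc", "align", "quantify", "differential_expression"],
    params={"genome": "GRCh38", "min_read_quality": 20},
)

# A researcher customizes a parameter through a form-like interface
# rather than editing code; the platform then runs the steps on managed
# cloud compute and returns shareable results.
rnaseq.params["min_read_quality"] = 30
for step in rnaseq.steps:
    print(f"[{rnaseq.name}] would run step: {step}")
```

The design point this pattern captures is the one Wang describes: the analysis logic is written once, while each researcher only touches the parameters relevant to their experiment.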
“The platform hits a sweet spot of usability and customizability for people of all backgrounds,” Wang says. “No science is ever truly the same. I avoid the word product because that implies you deploy something and then you just run it at scale forever. Research isn’t like that. Research is about coming up with an idea, testing it, and using the outcome to come up with another idea. The faster you can design, implement, and execute experiments, the faster you can move on to the next one.”
Accelerating biology
Wang believes Watershed is helping biologists keep up with the latest advances in biology and accelerating scientific discovery in the process.
“If you can help scientists unlock insights not a little bit faster, but 10 or 20 times faster, it can really make a difference,” Wang says.
Watershed is being used by researchers in academia and in companies of all sizes. Executives at biotech and pharmaceutical companies also use Watershed to make decisions about new experiments and drug candidates.
“We’ve seen success in all those areas, and the common thread is people understanding research but not being an expert in computer science or software engineering,” Wang says. “It’s exciting to see this industry develop. For me, it’s great being from MIT and now to be back in Kendall Square where Watershed is based. This is where so much of the cutting-edge progress is happening. We’re trying to do our part to enable the future of biology.”
New MIT initiative seeks to transform rare brain disorders research
More than 300 million people worldwide are living with rare disorders — many of which have a genetic cause and affect the brain and nervous system — yet the vast majority of these conditions lack an approved therapy. Because each rare disorder affects fewer than 65 out of every 100,000 people, studying these disorders and creating new treatments for them is especially challenging.
Thanks to a generous philanthropic gift from Ana Méndez ’91 and Rajeev Jayavant ’86, EE ’88, SM ’88, MIT is now poised to fill gaps in this research landscape. By establishing the Rare Brain Disorders Nexus — or RareNet — at MIT’s McGovern Institute for Brain Research, the alumni aim to convene leaders in neuroscience research, clinical medicine, patient advocacy, and industry to streamline the lab-to-clinic pipeline for rare brain disorder treatments.
“Ana and Rajeev’s commitment to MIT will form crucial partnerships to propel the translation of scientific discoveries into promising therapeutics and expand the Institute’s impact on the rare brain disorders community,” says MIT President Sally Kornbluth. “We are deeply grateful for their pivotal role in advancing such critical science and bringing attention to conditions that have long been overlooked.”
Building new coalitions
Several hurdles have slowed the lab-to-clinic pipeline for rare brain disorder research. It is difficult to secure a sufficient number of patients per study, and current research efforts are fragmented, since each study typically focuses on a single disorder (there are more than 7,000 known rare disorders, according to the World Health Organization). Pharmaceutical companies are often reluctant to invest in emerging treatments due to a limited market size and the high costs associated with preparing drugs for commercialization.
Méndez and Jayavant envision that RareNet will finally break down these barriers. “Our hope is that RareNet will allow leaders in the field to come together under a shared framework and ignite scientific breakthroughs across multiple conditions. A discovery for one rare brain disorder could unlock new insights that are relevant to another,” says Jayavant. “By congregating the best minds in the field, we are confident that MIT will create the right scientific climate to produce drug candidates that may benefit a spectrum of uncommon conditions.”
Guoping Feng, the James W. (1963) and Patricia T. Poitras Professor in Neuroscience and associate director of the McGovern Institute, will serve as RareNet’s inaugural faculty director. Feng holds a strong record of advancing studies on therapies for neurodevelopmental disorders, including autism spectrum disorders, Williams syndrome, and uncommon forms of epilepsy. His team’s gene therapy for Phelan-McDermid syndrome, a rare and profound autism spectrum disorder, has been licensed to Jaguar Gene Therapy and is currently undergoing clinical trials. “RareNet pioneers a unique model for biomedical research — one that is reimagining the role academia can play in developing therapeutics,” says Feng.
RareNet plans to deploy two major initiatives: a global consortium and a therapeutic pipeline accelerator. The consortium will form an international network of researchers, clinicians, and patient groups from the outset. It seeks to connect siloed research efforts, secure more patient samples, promote data sharing, and build a strong sense of trust and goal alignment across the RareNet community. Partnerships within the consortium will support the aim of the therapeutic pipeline accelerator: to de-risk early lab discoveries and expedite their translation to the clinic. By fostering more targeted collaborations — especially between academia and industry — the accelerator will prepare potential treatments for clinical use as efficiently as possible.
MIT labs are focusing on four uncommon conditions in the first wave of RareNet projects: Rett syndrome, prion disease, disorders linked to SYNGAP1 mutations, and Sturge-Weber syndrome. The teams are working to develop novel therapies that can slow, halt, or reverse dysfunctions in the brain and nervous system.
These efforts will build new bridges to connect key stakeholders across the rare brain disorders community and disrupt conventional research approaches. “Rajeev and I are motivated to seed powerful collaborations between MIT researchers, clinicians, patients, and industry,” says Méndez. “Guoping Feng clearly understands our goal to create an environment where foundational studies can thrive and seamlessly move toward clinical impact.”
“Patient and caregiver experiences, and our foreseeable impact on their lives, will guide us and remain at the forefront of our work,” Feng adds. “For far too long, the rare brain disorders community has been deprived of life-changing treatments — and, importantly, hope. RareNet gives us the opportunity to transform how we study these conditions, and to do so at a moment when it’s needed more than ever.”