MIT Latest News

MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.

MIT engineers develop a magnetic transistor for more energy-efficient electronics

Wed, 09/23/2025 - 10:32am

Transistors, the building blocks of modern electronics, are typically made of silicon. Because it’s a semiconductor, this material can control the flow of electricity in a circuit. But silicon has fundamental physical limits that restrict how compact and energy-efficient a transistor can be.

MIT researchers have now replaced silicon with a magnetic semiconductor, creating a magnetic transistor that could enable smaller, faster, and more energy-efficient circuits. The material’s magnetism strongly influences its electronic behavior, leading to more efficient control of the flow of electricity. 

The team used a novel magnetic material and an optimization process that reduces the material’s defects, which boosts the transistor’s performance.

The material’s unique magnetic properties also allow for transistors with built-in memory, which would simplify circuit design and unlock new applications for high-performance electronics.

“People have known about magnets for thousands of years, but there are very limited ways to incorporate magnetism into electronics. We have shown a new way to efficiently utilize magnetism that opens up a lot of possibilities for future applications and research,” says Chung-Tao Chou, an MIT graduate student in the departments of Electrical Engineering and Computer Science (EECS) and Physics, and co-lead author of a paper on this advance.

Chou is joined on the paper by co-lead author Eugene Park, a graduate student in the Department of Materials Science and Engineering (DMSE); Julian Klein, a DMSE research scientist; Josep Ingla-Aynes, a postdoc in the MIT Plasma Science and Fusion Center; Jagadeesh S. Moodera, a senior research scientist in the Department of Physics; and senior authors Frances Ross, the TDK Professor in DMSE, and Luqiao Liu, an associate professor in EECS and a member of the Research Laboratory of Electronics; as well as others at the University of Chemistry and Technology in Prague. The paper appears today in Physical Review Letters.

Overcoming the limits

In an electronic device, silicon semiconductor transistors act like tiny light switches that turn a circuit on and off, or amplify weak signals in a communication system. They do this using a small input voltage.

But a fundamental physical limit of silicon semiconductors prevents a transistor from operating below a certain voltage, which hinders its energy efficiency.
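The voltage floor alluded to here is usually called the thermionic or "Boltzmann" limit: in a conventional transistor, the gate voltage must change by at least (kT/q)·ln 10, roughly 60 mV at room temperature, for every tenfold change in current. As a rough illustration (standard device physics, not a calculation from the MIT paper):

```python
import math

# Boltzmann constant divided by the electron charge (k/q), in volts per kelvin
K_OVER_Q = 8.617e-5  # V/K

def subthreshold_swing_mv_per_decade(temp_kelvin: float) -> float:
    """Minimum gate-voltage change (mV) needed to change the drain current
    tenfold in a conventional, thermionically limited transistor."""
    return K_OVER_Q * temp_kelvin * math.log(10) * 1000.0

print(round(subthreshold_swing_mv_per_decade(300.0), 1))  # prints 59.5
```

Because this floor scales with temperature rather than with device geometry, shrinking a silicon transistor cannot push it below roughly 60 mV per decade at room temperature, which is why researchers look to alternative switching mechanisms such as magnetism.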

To make more efficient electronics, researchers have spent decades working toward magnetic transistors that utilize electron spin to control the flow of electricity. Electron spin is a fundamental property that enables electrons to behave like tiny magnets.

So far, scientists have mostly been limited to using certain magnetic materials. These lack the favorable electronic properties of semiconductors, constraining device performance.

“In this work, we combine magnetism and semiconductor physics to realize useful spintronic devices,” Liu says.

The researchers replace the silicon in the surface layer of a transistor with chromium sulfur bromide, a two-dimensional material that acts as a magnetic semiconductor.

Due to the material’s structure, researchers can switch between two magnetic states very cleanly. This makes it ideal for use in a transistor that smoothly switches between “on” and “off.”

“One of the biggest challenges we faced was finding the right material. We tried many other materials that didn’t work,” Chou says.

They discovered that changing these magnetic states modifies the material’s electronic properties, enabling low-energy operation. And unlike many other 2D materials, chromium sulfur bromide remains stable in air.

To make a transistor, the researchers pattern electrodes onto a silicon substrate, then carefully align and transfer the 2D material on top. They use tape to pick up a tiny piece of material, only a few tens of nanometers thick, and place it onto the substrate.

“A lot of researchers will use solvents or glue to do the transfer, but transistors require a very clean surface. We eliminate all those risks by simplifying this step,” Chou says.

Leveraging magnetism

This lack of contamination enables their device to outperform existing magnetic transistors. Most others can only create a weak magnetic effect, changing the flow of current by a few percent or less. Their new transistor can switch or amplify the electric current by a factor of 10.

They use an external magnetic field to change the magnetic state of the material, switching the transistor using significantly less energy than would usually be required.

The material also allows them to control the magnetic states with electric current. This is important because engineers cannot apply magnetic fields to individual transistors in an electronic device. They need to control each one electrically.

The material’s magnetic properties could also enable transistors with built-in memory, simplifying the design of logic or memory circuits.

A typical memory device has a magnetic cell to store information and a transistor to read it out. Their method can combine both into one magnetic transistor.

“Now, not only are transistors turning on and off, they are also remembering information. And because we can switch the transistor with greater magnitude, the signal is much stronger so we can read out the information faster, and in a much more reliable way,” Liu says.

Building on this demonstration, the researchers plan to further study the use of electrical current to control the device. They are also working to make their method scalable so they can fabricate arrays of transistors.

This research was supported, in part, by the Semiconductor Research Corporation, the U.S. Defense Advanced Research Projects Agency (DARPA), the U.S. National Science Foundation (NSF), the U.S. Department of Energy, the U.S. Army Research Office, and the Czech Ministry of Education, Youth, and Sports. The work was partially carried out at the MIT.nano facilities.

Games people — and machines — play: Untangling strategic reasoning to advance AI

Tue, 05/05/2026 - 5:00pm

Gabriele Farina grew up in a small town in a hilly winemaking region of northern Italy. Neither of his parents had college degrees, and although both were convinced they “didn’t understand math,” Farina says, they bought him the technical books he wanted and didn’t discourage him from attending the science-oriented, rather than the classical, high school.

By around age 14, Farina had focused on an idea that would prove foundational to his career.

“I was fascinated very early by the idea that a machine could make predictions or decisions so much better than humans,” he says. “The fact that human-made mathematics and algorithms could create systems that, in some sense, outperform their creators, all while building on simple building blocks, has always been a major source of awe for me.”

At age 16, Farina wrote code to solve a board game he played with his 13-year-old sister.

“I used it game after game to compute the optimal move and prove to my sister that she had already lost long before either of us could see it ourselves,” Farina says, adding that his sister was less enthralled with his new system.

Now an assistant professor in MIT’s Department of Electrical Engineering and Computer Science (EECS) and a principal investigator at the Laboratory for Information and Decision Systems (LIDS), Farina combines concepts from game theory with such tools as machine learning, optimization, and statistics to advance theoretical and algorithmic foundations for decision-making.

Enrolling at Politecnico di Milano for college, Farina studied automation and control engineering. Over time, however, he realized that what activated his interest was not “just applying known techniques, but understanding and extending their foundations,” he says. “I gradually shifted more and more toward theory, while still caring deeply about demonstrating concrete applications of that theory.”

Farina’s advisor at Politecnico di Milano, Nicola Gatti, professor and researcher in computer science and engineering, introduced Farina to research questions in computational game theory and encouraged him to apply for a PhD. At the time, being the first in his immediate family to earn a college degree and living in Italy, where doctoral degrees are handled differently, Farina says he didn’t even know what a PhD was.

Nevertheless, one month after graduating with his undergraduate degree, Farina began a doctoral degree in computer science at Carnegie Mellon University. There, he won distinctions for his research and dissertation, as well as a Facebook Fellowship in Economics and Computation.

As he was finishing his doctorate, Farina worked for a year as a research scientist in Meta’s Fundamental AI Research Labs. One of his major projects was helping to develop Cicero, an AI that was able to beat human players in a game that involves forming alliances, negotiating, and detecting when other players are bluffing.

Farina says, “When we built Cicero, we designed it so that it would not agree to form an alliance if it was not in its interest, and it likewise understood whether a player was likely lying, because for them to do as they proposed would be against their own incentives.”

A 2022 article in the MIT Technology Review said Cicero could represent advancement toward AIs that can solve complex problems requiring compromise.

After his year at Meta, Farina joined the MIT faculty. In 2025, he was distinguished with the National Science Foundation CAREER Award. His work — based on game theory and its mathematical language describing what happens when different parties have different objectives, and then quantifying the “equilibrium” where no one has a reason to change their strategy — aims to simplify massive, complex real-world scenarios where calculating such an equilibrium could take a billion years.
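The equilibrium idea can be made concrete with a textbook example that is not drawn from Farina's own work: in the prisoner's dilemma, a pair of strategies is a Nash equilibrium exactly when neither player can improve their payoff by unilaterally deviating. A minimal check of that definition:

```python
# Prisoner's dilemma payoffs (a standard textbook game, used here purely
# for illustration). Actions: 0 = cooperate, 1 = defect.
# PAYOFFS[player][row_action][col_action] gives that player's payoff.
PAYOFFS = [
    [[3, 0], [5, 1]],  # row player's payoffs
    [[3, 5], [0, 1]],  # column player's payoffs
]

def is_nash_equilibrium(row_action: int, col_action: int) -> bool:
    """True if neither player can gain by unilaterally switching actions."""
    row_best = max(PAYOFFS[0][a][col_action] for a in (0, 1))
    col_best = max(PAYOFFS[1][row_action][a] for a in (0, 1))
    return (PAYOFFS[0][row_action][col_action] == row_best and
            PAYOFFS[1][row_action][col_action] == col_best)

print(is_nash_equilibrium(1, 1))  # mutual defection: True (the equilibrium)
print(is_nash_equilibrium(0, 0))  # mutual cooperation: False (each can gain)
```

For a two-player, two-action game this check is instant; the difficulty Farina's research targets is that real multi-agent interactions have astronomically many strategies, so equilibria must be approximated rather than enumerated.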

“I research how we can use optimization and algorithms to actually find these stable points efficiently,” he says. “Our work tries to shed new light on the mathematical underpinnings of the theory, better control and predict these complex dynamical systems, and uses these ideas to compute good solutions to large multi-agent interactions.”

Farina is especially interested in settings with “imperfect information,” which means that some agents have information that is unknown to other participants. In such scenarios, information has value, and participants must be strategic about acting on the information they possess so as not to reveal it and reduce its value. An everyday example occurs in the game of poker, where players bluff in order to conceal information about their cards.

According to Farina, “we now live in a world in which machines are far better at bluffing than humans.”

A situation with “massive amounts of imperfect information” has brought Farina back to his board-game beginnings. Stratego is a military strategy game that has inspired research efforts costing millions of dollars to produce systems capable of beating human players. Requiring complex risk calculation and misdirection, or bluffing, it was possibly the only classical game for which major efforts had failed to produce superhuman performance, Farina says.

With new algorithms and training costing less than $10,000, rather than millions, Farina and his research team were able to beat the best player of all time — with 15 wins, four draws, and one loss. Farina says he is thrilled to have produced such results so economically, and he hopes “these new techniques will be incorporated into future pipelines.”

“We have seen constant progress towards constructing algorithms that can reason strategically and make sound decisions despite large action spaces or imperfect information. I am excited about seeing these algorithms incorporated into the broader AI revolution that’s happening around us.”

MIT marks first Robert R. Taylor Day with Tuskegee University

Tue, 05/05/2026 - 4:35pm

On April 10, MIT marked its first official Robert R. Taylor Day with a program centered on the life and work of Robert Robinson Taylor (Class of 1892), the Institute’s first Black graduate and the first academically trained Black architect in the United States.

After graduating from MIT, Taylor joined Tuskegee Institute (now Tuskegee University), where he designed campus buildings, developed a curriculum, and helped establish an approach to architectural education grounded in making and community life — an orientation that continues to shape the relationship between MIT and Tuskegee today. 

Taylor returned to MIT on April 10, 1911, to speak at the 50th anniversary of the Institute’s founding — the date now observed as Robert R. Taylor Day. Reflecting on his education, he credited MIT with the “methods and plans” he carried to Tuskegee Institute. “Certainly the spirit,” he said, was found “in the love of doing things correctly, of putting logical ways of thinking into the humblest task … to build up the immediate community in which the persons live.”

One hundred fifteen years later, at the MIT Museum, students and faculty gathered around Taylor’s original thesis, “A Soldiers’ Home.” The work was presented alongside archival materials from Taylor’s time at MIT by Jonathan Duval, assistant curator of architecture and design. Rather than framing Taylor as a distant historical figure, the encounter with the work itself — its drawings, assumptions, and ambitions — set the terms for the day, bringing forward not only his accomplishments but the ideas and methods that continue to inform teaching and collaboration today. Attendees then gathered for a lunch-and-learn session including a hybrid panel involving MIT and Tuskegee University faculty.

“It is so important to continue to develop the MIT-Tuskegee relationship begun by Robert R. Taylor,” says Kwesi Daniels, associate professor and head of the architecture department at Tuskegee University. “MIT students are provided an opportunity to experience the campus Taylor designed and his ethos of social architecture. For the Tuskegee students, they are able to appreciate the foundation Taylor received at MIT. The engagement epitomizes the ‘mind and hand’ philosophy of MIT and the head, hand, heart philosophy of Tuskegee.”

An ongoing exchange

Student and faculty exchanges, launched by the architecture departments at both institutions, have extended these connections in recent years. MIT students travel to Tuskegee for work in historic preservation and community engagement, sampling Daniels’ scanning and drone equipment, while Tuskegee students come to MIT to engage with digital fabrication and entrepreneurship.

For Nicholas de Monchaux, professor and head of the Department of Architecture at MIT, the relationship reflects continuity. “We are not uniting. We’re reuniting,” he says. “This year’s celebration should really be seen as the kickoff of a year of reflecting on Robert Taylor’s legacy and imagining what the day, and his legacy, can become over time.”

The day’s program — the vision for which originally emerged from a suggestion made by MIT literature professor Joshua Bennett during a meeting at Tuskegee with de Monchaux, Daniels, and Tuskegee President Mark Brown — grew into a broader effort among faculty and collaborators across architecture, history, and the humanities. As Bennett put it, “The primary aim of Robert R. Taylor Day is to lift up not only Taylor’s accomplishments, but his ideas — and the fact that his ideas live on in those of us who have inherited his legacy.”

That emphasis is also visible in the dedicated coursework and research that has accompanied the exchange since 2022. In class 4.S12 (Brick x Brick: Drawing a Particular Survey), taught by Carrie Norman, assistant professor in architecture at MIT, students document buildings on the Tuskegee campus through measured drawings and archival interpretation. Working from limited historical material, they reconstruct both form and intent.

“My role has been to structure this work pedagogically,” Norman says, “guiding students in methods of close looking, measured drawing, and archival interpretation.” She describes Taylor’s work as “an ongoing research agenda,” adding that “the broader aim is not only to deepen engagement with Taylor’s legacy, but to build on it through new forms of design research.”

Related work has contributed to a recent exhibition on the Tuskegee Chapel at the National Building Museum, curated by Helen Bechtel of the Yale School of Architecture. Building on research conducted in Norman’s course, students developed large-scale models that form part of the exhibition. New 3D fabrications use a limited set of archival materials to reconstruct the chapel, originally designed by Taylor and the first electrified building in Alabama’s Macon County, which was destroyed by fire in 1957.

Looking ahead

Timothy Hyde, professor in the MIT Department of Architecture, has also been involved in the ongoing MIT–Tuskegee collaboration and in efforts to situate Taylor’s work within a broader historical context. He notes that Taylor’s training at MIT helped shape the curriculum he later developed at Tuskegee. “The other influence I would like to mention is the city of Boston itself,” Hyde adds. “Boston was a prosperous city with a wealth of civic architecture that Taylor would have seen and studied.” 

A documentary project on Taylor’s life, supported by the MIT Human Insight Collaborative and led by Hyde and historian Christopher Capozzola, senior associate dean for MIT Open Learning, is currently in development.

For some students, these encounters shape longer trajectories. As an undergraduate at Tuskegee, Myles Sampson participated in the MIT Summer Research Program (MSRP), where he began to connect architecture with a growing interest in computation. He later enrolled in MIT’s Master of Science in Architecture Studies (SMArchS) computation program, working with Professor Larry Sass, who introduced him to robotic fabrication.

“I never looked back,” Sampson says. “Without that hands-on research experience, I would never have looked past contemporary architectural practice.” He is now pursuing a doctorate in computational design at Carnegie Mellon University, focused on the role of automation in architecture and construction.

Sampson contributed significant work to the National Building Museum’s exhibition. His installation, Brick Parable, brings together historical reference and robotic construction. As de Monchaux notes, the project reflects the long arc of Taylor’s legacy: “bricks were fired by students as part of Taylor’s training program … Myles [Sampson]’s piece, made with a robotic assembly of bricks, explores the architectural idea of the chapel in contemporary form.”

For Daniels, the continued circulation of students between the two institutions remains central. Viewing Taylor’s thesis in particular offers a shared point of reference. “Whether the student is from Tuskegee or MIT, they are able to appreciate the quality of work Taylor completed as a student,” he says, “and how he built on that work by creating a college campus, beginning at age 25.”

Across these activities, Taylor’s work is approached not as a fixed legacy, but as a set of methods and commitments that continue to be tested. As Catherine Armwood, dean of Tuskegee University’s Robert R. Taylor School of Architecture and Construction Science, describes it: “While our students leverage [the design and entrepreneurship program] MITdesignX to turn architectural concepts into social enterprises through advanced fabrication and venture mentorship, MIT students come to Tuskegee for an immersion in historic preservation. By surveying buildings handcrafted by our founding students, they learn a legacy of self-reliance and community impact that can’t be found anywhere else,” Armwood says. “Together, we are bridging technical innovation with deep-rooted heritage to train a new generation of visionary leaders.”

Astronomers pin down the origins of a planetary odd couple

Tue, 05/05/2026 - 12:00am

Across the Milky Way galaxy, a planetary odd couple is circling a star some 190 light years from Earth. A normally “lonely” hot Jupiter is sharing space with a mini-Neptune, in a rare and unlikely pairing that’s had astronomers puzzled since the system’s discovery in 2020.

Now MIT scientists have caught a glimpse into the atmosphere of the mini-Neptune, which is circling inside the orbit of its Jupiter-sized companion, and discovered clues to explain the origins of this unusual planetary system.

In a study appearing today in Astrophysical Journal Letters, the scientists report on new measurements of the mini-Neptune’s atmosphere, made using NASA’s James Webb Space Telescope (JWST). It is the first time astronomers have measured the composition of a mini-Neptune that resides inside the orbit of a hot Jupiter.

Their measurements reveal that the smaller planet has a “heavy” atmosphere that is rich with water vapor, carbon dioxide, sulfur dioxide, and hints of methane. Such a heavy atmosphere would not have been acquired by the planet if it had formed in its current location, very close to its star.

Instead, the scientists say their findings point to an alternate origin story: Both the mini-Neptune and the hot Jupiter may have formed much farther away, in the colder region of the protoplanetary disk. There, the planets could slowly build up atmospheres of ice and other volatiles. Over time, the planets were likely drawn in toward the star in a gradual process that kept them close, with their atmospheres intact.

The team’s results are the first to show that mini-Neptunes can form beyond a star’s “frost line.” This boundary refers to the minimum distance from a star where the temperature is low enough for water to condense into ice.
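As a rough illustration of where that boundary falls (textbook blackbody physics, not a calculation from the paper), a disk's equilibrium temperature drops off as the inverse square root of distance, so one can estimate the frost-line distance by solving for where the temperature reaches the water condensation point, around 170 K:

```python
import math

T_1AU = 279.0    # approximate blackbody equilibrium temperature at 1 AU
                 # around a Sun-like star, in kelvin
T_FROST = 170.0  # approximate temperature where water condenses in a disk

def frost_line_au(luminosity_solar: float) -> float:
    """Distance (AU) where the disk equilibrium temperature falls to T_FROST,
    using T(d) = T_1AU * luminosity**0.25 / sqrt(d)."""
    return (T_1AU / T_FROST) ** 2 * math.sqrt(luminosity_solar)

print(round(frost_line_au(1.0), 2))  # prints 2.69 (AU, for a Sun-like star)
```

For the Sun this lands near the 2.7 AU value often quoted for the solar system's snow line, far outside the four-day orbit where TOI-1130b resides today.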

“This is the first time we’ve observed the atmosphere of a planet that is inside the orbit of a hot Jupiter,” says Saugata Barat, a postdoc in MIT’s Kavli Institute for Astrophysics and Space Research and the lead author of the study. “This measurement tells us this mini-Neptune indeed formed beyond the frost line, giving confirmation that this formation channel does exist.”

The team consists of astronomers around the world, including Andrew Vanderburg, a visiting assistant professor at MIT, and co-authors from multiple other institutions including the Center for Astrophysics | Harvard & Smithsonian, the University of Southern Queensland, the University of Texas at Austin, and Lund University.

A “one-of-a-kind” system

As their name implies, mini-Neptunes are planets that are less massive than Neptune. They are considered to be gas dwarfs, which are made mostly of gas, with an inner, rocky core. Mini-Neptunes are the most common type of planet found in the Milky Way, though, interestingly, no such world exists in our own solar system. Astronomers have observed many of them circling a wide variety of stars in a range of planetary systems. Mini-Neptunes, then, are generally considered to be garden-variety planets.

But in 2020, Chelsea X. Huang, then a Torres Postdoctoral fellow at MIT (now on the faculty at the University of Southern Queensland), discovered a mini-Neptune in a rare and puzzling circumstance: The planet appeared to be circling its star with an unlikely companion — a hot Jupiter.

The astronomers made their discovery using NASA’s Transiting Exoplanet Survey Satellite (TESS). They analyzed TESS’ measurements of TOI-1130, a star located 190 light years from Earth, and detected signs of a mini-Neptune and a hot Jupiter, orbiting the star every four and eight days, respectively.

“This was a one-of-a-kind system,” says Huang. “Hot Jupiters are ‘lonely,’ meaning they don’t have companion planets inside their orbits. They are so massive, and their gravity is so strong, that whatever is inside their orbit just gets scattered away. But somehow, with this hot Jupiter, an inner companion has survived. And that raises questions about how such a system could form.”

A spot-on snapshot

The 2020 discovery of TOI-1130 and its odd planetary pair inspired Huang, Vanderburg, and their colleagues to take a closer look at the planets, and specifically, their atmospheres, with JWST. In its new study, the team reports its analysis of TOI-1130b — the inner-orbiting mini-Neptune.

Catching the planet at just the right time was their first challenge. Most planets circle their star with a regular, predictable period, like the tick of a clock. But the mini-Neptune and the hot Jupiter were found to be in “mean motion resonance,” meaning that each can affect the other’s motion, pulling and tugging, and slightly varying the time each takes to orbit their star. This made it tricky to predict when JWST could get a clear view.
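The prediction problem can be sketched with a toy model. Every number below is an illustrative assumption, not a measured TOI-1130 value: the transits tick along at a fixed period, plus a slow sinusoidal wobble standing in for the tug of the resonant companion:

```python
import math

# Illustrative parameters only -- invented for demonstration, not the
# measured values for TOI-1130.
PERIOD_DAYS = 4.07        # assumed mini-Neptune orbital period
TTV_AMP_MIN = 10.0        # assumed timing-wobble amplitude, in minutes
SUPERPERIOD_DAYS = 50.0   # assumed period of the resonant wobble

def transit_time(n: int) -> float:
    """Time (days) of the n-th transit: a fixed ticking period plus a
    sinusoidal transit-timing variation from the resonance."""
    wobble = (TTV_AMP_MIN / 1440.0) * math.sin(
        2 * math.pi * n * PERIOD_DAYS / SUPERPERIOD_DAYS)
    return n * PERIOD_DAYS + wobble

# How far a naive, strictly periodic "clock" model drifts from the truth
# over 100 orbits, in minutes:
max_offset_min = max(
    abs(transit_time(n) - n * PERIOD_DAYS) * 1440.0 for n in range(100))
print(round(max_offset_min, 1))  # up to ~10 minutes off
```

Even a wobble of minutes matters here: a telescope window scheduled from the naive clock model could miss the sharp ingress or egress of a transit, which is why the team needed a model of the resonance itself to time the JWST observations.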

The team, led by Judith Korth of Lund University, assembled as many past observations of the system as they could, and developed a model to predict when each planet would pass by the star at an angle that JWST could observe.

“It was a challenging prediction, and we had to be spot-on,” Barat says.

In the end, the team was able to catch a direct and detailed snapshot of both planets.

“The beauty of JWST is that it does not observe just in one color, but at different colors, or wavelengths,” Barat explains. “And the specific wavelengths that a planet absorbs can tell you a lot about the composition of its atmosphere.”

From JWST’s measurements, the team found that the planet absorbed wavelengths characteristic of water, carbon dioxide, sulfur dioxide, and, to a lesser degree, methane. These molecules are heavier than hydrogen and helium, which constitute lighter atmospheres. Astronomers had assumed that, if mini-Neptunes formed very close to their star, they should have light atmospheres.

But the team’s new results counter that assumption and offer a new way that mini-Neptunes could form. Since heavier molecules were found in the atmosphere of TOI-1130b, which resides very close to its star, the scientists say the only possible explanation for its composition is that the planet formed much farther out than its current location.

The planet likely accumulated its heavy atmosphere of water and other volatiles such as carbon dioxide and sulfur dioxide in the icy region beyond the star’s frost line. In this much colder environment, water condenses onto bits of dust to form icy pebbles, which an infant planet can draw into its atmosphere. This ice then evaporates as the planet slowly migrates in closer to its star.

Barat says the team’s detection of heavy molecules in the atmosphere of TOI-1130b confirms that the planet — and likely its hot Jupiter companion — formed in the outskirts of the system. Through gradual migration, the two planets would be able to stay close together and keep their atmospheres intact.

“This system represents one of the rarest architectures that astronomers have ever found,” Barat says. “The observations of TOI-1130b provide the first hint that such mini-Neptunes that form beyond the water/ice line are indeed present in nature.”

This work was supported, in part, by NASA.

The tech revolution that wasn’t

Tue, 05/05/2026 - 12:00am

In 1960, engineers at India’s Tata Institute of Fundamental Research (TIFR) built what they called an “Automatic Calculator,” the country’s first working computer. It had the same type of ferrite-core memory as IBM’s world-leading machines, and at a glance, appeared to herald a new age of tech advances in India.

Constructed with a fraction of the resources Western computer engineers had, the TIFRAC, as they called it, was a remarkable feat.

“The people working on it had never really seen an actual functioning computer,” says Dwai Banerjee, an associate professor of science, technology, and society, and the author of a new book about computing in India. “You had this ambitious group of engineers building a state-of-the-art machine with very, very limited resources. The fact they could build this is staggering.”

However, the TIFRAC was never even replicated, let alone produced at scale. The visionaries behind it wanted to turn India into an independent computing nation: a place that would produce its own equipment and become an industry power. Instead, the TIFRAC became a technological cul-de-sac, and India’s tech industry took on a very different shape. Instead of exporting equipment, it exports talent, sending skilled engineers and executives around the globe.

Now Banerjee explores those issues in the book, “Computing in the Age of Decolonization: India’s Lost Technological Revolution,” published by Princeton University Press. In it, he examines the country’s pursuit of technological self-sufficiency, and the global forces that prevailed against this vision. As a result, the country is “the world’s leading provider of inexpensive outsourcing and offshoring services, yet enjoys minimal benefits from more profitable advances in research, manufacturing, and development,” Banerjee writes.

“This book is about understanding how the current landscape of technological power came to be and the unequal way in which power is distributed across the world when it comes to anything to do with computing,” Banerjee says. “Basically, the historical conditions of the mid-20th century period are essential to understanding why the world of computing looks the way it does today.”

Computing and the geopolitics of knowledge

When India became a sovereign nation in 1947, many of its leaders believed “rapid technology-driven industrialization was the only way out of centuries of colonial underdevelopment,” as Banerjee writes. Some leapt into action, such as the remarkable nuclear physicist Homi J. Bhabha, who helped establish the TIFR.

Initially, Indian leaders hoped to gain cooperation from the U.S. and international organizations in making technological advances, but quickly ran into Cold War politics. Computing was heavily bound up with defense matters; India was not always fully aligned with U.S. political interests, so the flow of knowledge from the U.S. to India was distinctly limited.

“This is very much an external constraint story,” Banerjee says. “You need blueprints and not just working papers, and that’s what was guarded by the U.S. for a very long time.”

Still, the TIFR research team toiled away at its computing projects until the TIFRAC was up and running — making national headlines.

“The achievement it represents is mind-boggling,” Banerjee emphasizes. “A computer in the U.S. would have cost more to run than this entire institute in India.”

As Banerjee details in the book, the TIFRAC machine was built to grow. Its engineers matched the speed of IBM machines and planned to import larger ferrite-core memory stacks as their workload expanded. But when IBM released the FORTRAN programming language in 1957, it required four times the memory the TIFRAC machine was equipped with. India’s 1958 foreign exchange crisis then shaped the machine’s fate: The World Bank convened a U.S.-led creditor consortium that conditioned rescue loans on the opening of Indian markets to Western capital. Importing larger memory stacks became unaffordable, rendering the TIFRAC obsolete almost as soon as it was completed.

“It’s a geopolitics-of-knowledge question, not that they made a mistake,” Banerjee says of the Indian engineers. “They didn’t know IBM was about to reshape software.”

Exit IBM, enter services

Though IBM’s jump forward after the release of FORTRAN left the TIFRAC project stalled out, Indian advocates for computer manufacturing did not give up their dream. For one thing, they looked around for partnerships and other ways of moving their domestic tech industry forward. And then in 1978, India, uniquely, banned IBM from the country on account of its business practices.

That might have set the stage for India’s computer manufacturing industry to flourish. But at the same moment, countervailing forces took hold, including a widespread turn toward the private sector, rather than public-private enterprises, as the main source of activity.

“For a moment you have this imagination come to a sort of fruition,” Banerjee observes. “But by the late 1970s and 1980s, there is a new group of people arguing for quick profits through software services, saying that this route feels less painful than setting up manufacturing, R&D, and firms for a decade or more.”

This turn toward private-sector services rather than government-involved manufacturing ultimately became a decisive factor in shaping India’s tech-sector trajectory. Rather than seeking to make machines domestically, the country became part of the global tech-services sector, while many of its engineers migrated to Silicon Valley and other tech hotspots. Global tech firms used their reach to discourage the idea that many countries would develop independent industries. This is not the outcome India’s leaders and technologists once envisioned.

“It still surprises me because of the one thing India did that no other country in the world managed to do, and that’s kick out IBM,” Banerjee says. “The fact that this vision fades is part of changing government ambition.”

Beyond the mavericks

In writing the book, Banerjee has multiple goals. One is simply shedding more light on the rich details of India’s initial computing efforts. Another is contesting the idea that India somehow naturally found a role providing services and exporting talent; that is not what many people once hoped.

Still another motif in Banerjee’s work is that the history of computing too often centers on innovators who are cast as mavericks, shrugging off conventions to upend business and society — whereas the large-scale forces of global capital and geopolitics matter greatly in technological development.

“This book suggests we often overplay those stories of individual genius, because you can be a genius with all the right ideas, but if you don’t have all the institutions supporting you, it means nothing,” Banerjee says.

Other scholars have praised “Computing in the Age of Decolonization.” Matthew L. Jones, a professor of history at Princeton University, has stated that Banerjee’s book is a “scrupulous accounting of ultimately failed Indian efforts to secure technological sovereignty in the wake of independence,” which “joins the best recent accounts of computing worldwide and transforms how we think through diverse national trajectories through the Cold War and beyond.”

For his part, Banerjee hopes a wide variety of readers will be interested in the book — and recognize that the specific case of India and computing can tell us a lot about the challenges of new types of economic growth in many places.

“India stands in for a lot of countries in the mid-20th century that had recently gained formal political independence and were thinking of ways to catch up with the rest of the advanced industrialized world,” Banerjee says. “But the power structures tied to technological and scientific advancement did not disappear. They were replaced by newer structures, including foreign policy with very specific ideas about what different countries should be doing with regard to technology. That’s where the story starts.”

Biologist Joey Davis explores how cells build complex structures

Tue, 05/05/2026 - 12:00am

Ribosomes, the cellular machines that assemble proteins, are made from dozens of proteins and RNA molecules. Putting all of those pieces together is a complex puzzle — one that MIT Associate Professor Joey Davis PhD ’10 revels in trying to solve.

Understanding how these structures form and later break down could help researchers learn more about how disruptions of these fundamental processes can lead to disease. But, as Davis points out, it’s also an interesting biological question.

“Our long-term goal is to really understand how the natural world assembles these huge complexes rapidly and efficiently. It’s a fundamentally interesting question to think about how these things get put together,” he says.

His work has helped reveal that unlike building a house, which happens in a prescribed sequence of steps — pouring the foundation, building the frame, putting on the roof, then doing electrical and plumbing work — ribosomes can be assembled in a more flexible way. Cells can even skip an assembly step and then come back to it later.

“In these natural systems, it seems like the assembly pathways are much more dynamic and flexible,” he says. “It appears that evolution has selected pathways that aren’t strictly ordered in the way we would think about an assembly line, where you always put in one component, then the next, and then the next. We’re excited to understand the selective advantages of such approaches.”

A love of discovery

Davis’ interest in how things are put together developed early in life, inspired by his father, a carpenter who framed houses. During the mid-1980s, the family moved from Colorado to Southern California, where his father worked in construction during a housing boom there.

“I was always interested in building things, which I think probably came from being around my dad and other builders,” Davis says.

As an undergraduate at the University of California at Berkeley, where he majored in computer science and biological engineering, Davis’ interests turned toward smaller scales, in the realm of cells and molecules. During his junior year, he started working in the lab of chemistry professor Michael Marletta, who studies molecular-level biological interactions.

In the lab, Davis investigated how enzymes that contain heme are able to preferentially bind to either oxygen or nitric oxide, two gases that are very similar in structure. That work kindled a love of studying the natural world and pursuing discoveries in fundamental science.

“Being in the Marletta lab and seeing students and postdocs that were really passionate about these problems had a big impact on me,” Davis says. “The goal was to understand the fundamentals of how molecular discrimination works, and the idea of discovery for the sake of discovery was thrilling.”

After graduating from Berkeley, Davis spent another year working in Marletta’s lab, and then a year working odd jobs, before heading to MIT to pursue a PhD in biology. There, he worked with Professor Bob Sauer, now emeritus, who studied the relationship between protein structure and function, with a particular focus on the molecular machines that degrade or remodel proteins.

Davis’ thesis research centered on enzymes called AAA proteases, which remove damaged proteins from cellular membranes and send them to cell organelles that break them down. In addition to studying the structure and function of the proteases, Davis worked on ways to engineer them to tag specific proteins for destruction.

That work led him into synthetic biology, which he used to develop genetic parts that drive production of proteins of interest. Some of those parts ended up being used by the biotech startup Ginkgo Bioworks, where Davis took a job as a senior scientist after graduating.

Working at Ginkgo Bioworks allowed Davis to stay in Boston while his partner finished her PhD. The couple then moved back to California, where Davis worked as a postdoc at Scripps Research, which was home to one of the first direct electron detection cameras for cryo-electron microscopy (cryo-EM). These detectors allow researchers to generate structures with near atomic resolution. At Scripps, Davis began using them to study ribosomes as they were being assembled.

Peering into the ribosome

After joining the MIT faculty in 2017, Davis continued his work on ribosomes and assembled a lab group that includes students from a variety of backgrounds who work together to develop new ways to explore biological phenomena.

“I have a mix of method developers and biologists in the group, and the work from each of them informs each other,” Davis says. “My lab goes back and forth between building sets of tools to answer biological questions, and then as we’re answering those questions, it motivates the next generation of tool development.”

During ribosome assembly, RNA molecules fold themselves into the correct shapes, creating docking sites for proteins to attach. Then, more RNA molecules come in and fold themselves into the structure.

“It’s a beautifully coupled process by which the cell folds hundreds of RNA helices and binds on the order of 50 proteins, and it does it in two minutes from start to finish. E. coli does this 100,000 times per hour, and it’s amazing how rapid and efficient the process is,” Davis says.

Cryo-EM allows scientists to capture this process in minute detail. It can be used to take hundreds of thousands of two-dimensional images of ribosome samples frozen in a thin layer of ice, from different angles. Computer algorithms then piece together these images into a three-dimensional representation of the ribosome.
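The core computational idea, combining many views taken at different angles into one reconstruction, can be illustrated with a toy example. Real cryo-EM reconstruction works in 3D, estimates particle orientations, and uses filtered methods; the sketch below is only unfiltered back-projection in 2D on a synthetic blob, with nearest-neighbor rotation for simplicity.

```python
import numpy as np

def rotate_nn(image, theta):
    """Nearest-neighbor rotation of a square image about its center."""
    n = image.shape[0]
    c = (n - 1) / 2.0
    ys, xs = np.indices(image.shape)
    xr = np.cos(theta) * (xs - c) - np.sin(theta) * (ys - c) + c
    yr = np.sin(theta) * (xs - c) + np.cos(theta) * (ys - c) + c
    xi = np.clip(np.round(xr).astype(int), 0, n - 1)
    yi = np.clip(np.round(yr).astype(int), 0, n - 1)
    return image[yi, xi]

def project(image, theta):
    """A 1D 'view' of the object: sum the rotated image along its columns."""
    return rotate_nn(image, theta).sum(axis=0)

def back_project(projections, thetas, n):
    """Smear each 1D view back across the grid along its own angle and sum."""
    c = (n - 1) / 2.0
    ys, xs = np.indices((n, n))
    recon = np.zeros((n, n))
    for p, theta in zip(projections, thetas):
        # each pixel reads the projection bin it contributed to (inverse rotation)
        xi = np.clip(np.round(np.cos(theta) * (xs - c)
                              + np.sin(theta) * (ys - c) + c).astype(int), 0, n - 1)
        recon += p[xi]
    return recon / len(thetas)

# a single blurry "particle" sitting off-center in a 33x33 field
n = 33
ys, xs = np.indices((n, n))
obj = np.exp(-((xs - 22) ** 2 + (ys - 10) ** 2) / 4.0)

thetas = np.linspace(0, np.pi, 36, endpoint=False)
views = [project(obj, t) for t in thetas]
recon = back_project(views, thetas, n)
peak = np.unravel_index(np.argmax(recon), recon.shape)
```

Even this crude version recovers the particle's position from the 1D views alone; the reconstruction's brightest pixel lands back at the blob's center.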

To gain insight into how ribosomes are assembled, researchers can stall the process at different points and then analyze the resulting structures. In 2021, Davis’ lab developed a new method called CryoDRGN, which uses neural networks to analyze cryo-EM data and generate the full ensemble of structures that were present in the sample.

This work has shown that when certain steps of ribosome assembly are blocked, many different structures result, suggesting that the assembly can occur in a variety of ways.

In future work, Davis aims to dramatically increase the throughput of cryo-EM to generate datasets of protein structures that could help improve the AI-based models that are now used to predict protein structures.

“There are still huge swaths of sequence space that these models are very poor at predicting, but if we could collect data on those sequences en masse, that could potentially serve as key training data for a next-generation protein structure prediction method that could fill out that space,” he says.

Rett syndrome study highlights potential for personalized treatments

Mon, 05/04/2026 - 2:00pm

Although many studies approach the developmental disorder Rett syndrome as a single condition arising from general loss of function in the gene MECP2, a new study by neuroscientists in The Picower Institute for Learning and Memory at MIT shows that two different mutations of the gene caused many distinct abnormalities in lab cultures. Moreover, correcting key differences made by each mutation required different treatments.

“Individual mutations matter,” says Mriganka Sur, senior author of the new open-access study in Nature Communications and the Newton Professor in the Picower Institute and the Department of Brain and Cognitive Sciences. “This is an approach to personalizing treatment, even for a single-gene disorder.”

The study employed advanced 3D human brain tissue cultures called “organoids” or “minibrains” derived from skin cells or blood cells donated by Rett syndrome patients with each mutation. Lead author Tatsuya Osaki, a Picower Institute research scientist, says that the organoids’ ability to model the specific consequences of each mutation enabled him to gain mutation-specific insights that haven’t emerged in prior studies, where scientists just knocked out MECP2 overall. The organoids also provided a novel opportunity to understand how each mutation affected different cell types and their interactions.

Distinct effects

More than 800 mutations in MECP2 can cause Rett syndrome, but just eight account for more than 60 percent of cases. Sur and Osaki chose one of these, R306C, which involves a difference of just one DNA base pair (916C>T), because it represents 7-8 percent of Rett syndrome cases. The other mutation they chose, V247X, is much rarer and more severe: a single DNA base deletion (705Gdel) truncates production of the gene’s protein product, leaving the protein not just errant, but incomplete.

In organoids cultured for three months, each mutation produced some common but also sometimes distinct consequences compared to control organoids with non-mutated MECP2. For many of their experiments, the team used “three-photon” microscopes capable of cellular-level resolution all the way through the organoids’ approximate 1 millimeter thickness, resolving both their structure (via “third-harmonic generation” imaging), and the live activity patterns of their neurons (via calcium fluorescence).

For instance, the scientists observed that the V247X organoids exhibited several structural differences from their controls — they were larger and had different thicknesses of various layers — but the R306C ones were much more like their controls. Organoids harboring either mutation exhibited less-developed axon projections from their neurons, compared to their control comparators.

Looking at properties of neural activity and connectivity in the organoids, the scientists found some similar deficits across both mutations. Both showed reduced spiking activity and synchronicity between neurons compared to their controls.

But when the scientists looked at other properties, the organoids started to diverge from each other. In particular, an indication of the efficiency of their network structure called “small-world propensity” (SWP) was decreased in R306C organoids, and increased in V247X ones, compared to controls. This means that both mutations altered the development of typical network structures for information processing, but in different directions.
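Small-world propensity quantifies how much a network combines the high local clustering of a lattice with the short paths of a random graph. As a rough illustration of the metric's logic (following the form of the published SWP measure, not the paper's actual organoid or EEG pipeline; the toy ring-lattice graphs and parameters below are invented for the example):

```python
import numpy as np
from collections import deque

def ring_lattice(n, k):
    """Ring of n nodes, each linked to its k nearest neighbors."""
    A = np.zeros((n, n), dtype=bool)
    for d in range(1, k // 2 + 1):
        i = np.arange(n)
        A[i, (i + d) % n] = True
        A[(i + d) % n, i] = True
    return A

def rewire(A, p, rng):
    """Watts-Strogatz-style rewiring: move each edge with probability p."""
    A = A.copy()
    for i in range(len(A)):
        for j in np.flatnonzero(A[i]):
            if j > i and rng.random() < p:
                free = np.flatnonzero(~A[i])
                free = free[free != i]
                m = rng.choice(free)
                A[i, j] = A[j, i] = False
                A[i, m] = A[m, i] = True
    return A

def clustering(A):
    """Mean fraction of each node's neighbor pairs that are themselves linked."""
    M = A.astype(int)
    deg = M.sum(1)
    tri = np.diag(M @ M @ M) / 2
    pairs = deg * (deg - 1) / 2
    return np.mean(np.where(pairs > 0, tri / np.maximum(pairs, 1), 0.0))

def avg_path(A):
    """Average shortest-path length over reachable node pairs (BFS from each node)."""
    n = len(A)
    total, count = 0, 0
    for s in range(n):
        dist = np.full(n, -1)
        dist[s] = 0
        q = deque([s])
        while q:
            u = q.popleft()
            for v in np.flatnonzero(A[u]):
                if dist[v] < 0:
                    dist[v] = dist[u] + 1
                    q.append(v)
        reach = dist > 0
        total += dist[reach].sum()
        count += reach.sum()
    return total / count

def swp(A, A_latt, A_rand):
    """Small-world propensity: 1 minus the normalized distance from ideal."""
    dC = (clustering(A_latt) - clustering(A)) / (clustering(A_latt) - clustering(A_rand))
    dL = (avg_path(A) - avg_path(A_rand)) / (avg_path(A_latt) - avg_path(A_rand))
    dC = float(np.clip(dC, 0, 1))
    dL = float(np.clip(dL, 0, 1))
    return 1 - np.sqrt((dC ** 2 + dL ** 2) / 2)

rng = np.random.default_rng(1)
latt = ring_lattice(100, 6)
rand = rewire(latt, 1.0, rng)    # fully rewired: the random null
small = rewire(latt, 0.1, rng)   # lightly rewired: the small-world regime
phi_small = swp(small, latt, rand)
phi_latt = swp(latt, latt, rand)  # lattice: high clustering but long paths
phi_rand = swp(rand, latt, rand)  # random: short paths but little clustering
```

The lightly rewired graph scores highest because it inherits both desirable properties at once; shifts of this score in either direction, as seen in the two organoid types, indicate a network drifting away from that efficient middle ground.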

To ensure that their results were meaningful for Rett syndrome patients, the team collaborated with Charles Nelson at Boston Children’s Hospital, whose team measured EEG in several children with different Rett mutations. Although the sample was small, the researchers measured indications that the SWP property in the EEG readings was altered in the volunteers, much like in the organoids.

Finally, by labeling excitatory neurons to flash in one color and inhibitory neurons to flash in a different color, the scientists were able to see that connectivity between the different neural types differed significantly from controls in the V247X organoids.

Treatment tests

All the testing showed that each mutation caused several changes in organoid structure, activity, and connectivity, and that the deviations were often particular to the specific mutation.

To understand how these differences emerged, and how they might be corrected, Sur and Osaki’s team turned to examining how the cells in each kind of organoid might be expressing their genes differently than controls. Differences in gene expression often lead to alterations of key molecular pathways in cells that can disrupt their activity and function. Analysis with a technique called single cell RNA sequencing indeed yielded hundreds of differences in each organoid type, where some genes were expressed more than in controls while others were underexpressed.

For instance, the analyses revealed that in R306C organoids a gene called HDAC2 was overexpressed. That protein is known for repressing expression of other genes. Meanwhile, in the V247X organoids, the scientists found reduced expression of genes for some receptors of the inhibitory neurotransmitter GABA. These organoids also showed defects in the function of astrocyte cells, which support many aspects of neural function.

Organoids with either mutation also exhibited aberrations in molecular pathways that enable the development of circuit connections between neurons, called synapses.

Given the specific defects they observed, the scientists decided to treat the organoids with a drug that can inhibit HDAC2 activity and another that increases GABA’s efficacy. The HDAC2 inhibitor restored neuronal activity and SWP to normal levels in the R306C organoids, and the GABA “agonist” baclofen restored SWP to control levels in the V247X organoids.

Osaki notes each of the treatment drugs has already been studied in other disease contexts, meaning they are well-understood drugs that could be repurposed.

Now that the researchers have developed an organoid platform for dissecting individual mutations’ consequences, both identifying their roots and testing treatments, they plan to apply it to studying four more mutations, Sur says, comparing all of them against a standardized control organoid.

In addition to Sur, Osaki, and Nelson, the paper’s other authors are Chloe Delepine, Yuma Osako, Devorah Kranz, April Levin, and Michela Fagiolini.

The National Institutes of Health, a MURI grant, The Freedom Together Foundation, and the Simons Foundation provided support for the research.

Powering 160,000 hours of discovery at MIT.nano

Mon, 05/04/2026 - 1:50pm

Each year, more than 1,500 researchers rely on over 200 tools and instruments at MIT.nano to pursue experiments that span MIT’s disciplines, collectively generating 160,000 hours of work across 88,000 instances of tool use. Behind this activity is an operational framework that must discreetly coordinate access, maintain fairness, and keep research moving without friction.

Managing such a dynamic environment requires more than a scheduling calendar. An automated reservation system serves as the connective tissue of the facility, balancing demand across diverse user needs while supporting the practical realities of a shared lab space. Researchers arrive at MIT.nano with different workflows, safety requirements, and administrative needs, yet the system must present a seamless experience. Integration with MIT’s broader digital infrastructure, from onboarding and authentication to safety training and billing, ensures that access is both efficient and compliant, reducing barriers so researchers can focus on their work.

A system for the modern era

Over the past three years, during a period of rapid growth in both equipment and facility usage, MIT.nano undertook a transition to a new platform designed to scale with demand while maintaining operational continuity. The effort reflects an ongoing commitment to evolving infrastructure that supports the pace, complexity, and collaborative spirit of modern research.

The importance of robust laboratory management systems has long been recognized at MIT. For decades, researchers in the Microsystems Technology Laboratories (MTL) and the Materials Research Laboratory relied on the CORAL lab management platform to reserve and manage shared instrumentation. Jointly developed by MIT and Stanford University and introduced in 2003, CORAL represented a significant advance over the text-based system it replaced. But by the time MIT.nano adopted CORAL in 2018, active development had slowed, and the platform was beginning to show its age, most visibly through the absence of modern web and mobile interfaces expected by today’s users.

To address these limitations, MIT.nano has transitioned to NEMO, an open-source laboratory management system originally developed at the National Institute of Standards and Technology. NEMO centralizes scheduling, communication, and operational logistics into a single platform that manages tool reservations and user access while supporting facility growth. Its modular architecture and plugin framework allow for extensive customization, enabling the system to evolve alongside the needs of a large, shared research environment.

“Over time, NEMO was replicating core functionalities of CORAL while introducing new features that CORAL simply could not support,” explains Thomas Lohman, senior software and systems manager at MTL and a long-time contributor to CORAL’s development. “The question became whether to continue patching the old system or adopt this new platform that already had a lot of the features we use daily, as well as an active community continually improving it.”

For MIT.nano leadership, modernization was about more than replacing an aging tool. “We needed a system that centralizes everything a facility user depends on — policies, tool documentation, training workflows, and communications — within a user-friendly, mobile-accessible environment,” says Anna Osherov, associate director for Characterization.nano, who led the evaluation and transition effort. “Just as important was making sure the platform enhances the experience for both users and staff.”

Collaborating at MIT and with shared access facilities

MIT.nano collaborated closely with Mathieu Rampant, NEMO project lead and CEO of Atlantis Labs, to adopt the community edition of NEMO, an extended version enriched by contributions from a growing global user base. The open-source model ensures that improvements developed at MIT.nano benefit the broader research community, reinforcing a shared ecosystem of innovation. “The NEMO community is expanding rapidly, and many new features originate directly from facility users and administrators,” says Rampant. “That collaborative model allows improvements to propagate quickly while giving institutions a sense of ownership in the platform’s evolution.”

NEMO introduces modern features long requested by MIT.nano researchers, including mobile access, improved transparency, and streamlined workflows. Facility users can now monitor their own tool usage and consumables, customize notifications, register for training, join real-time equipment waitlists, report issues, and communicate with staff, all through a unified dashboard. What was once distributed across multiple systems is now centralized, reducing friction in day-to-day lab operations.
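At its core, any such reservation system has to detect scheduling conflicts and offer a fallback such as a waitlist. A toy sketch of that logic is below; the class names, fields, and tool name are invented for illustration and do not reflect NEMO's actual data model or API.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class Reservation:
    user: str
    start: datetime
    end: datetime

@dataclass
class Tool:
    name: str
    reservations: list = field(default_factory=list)
    waitlist: list = field(default_factory=list)

    def reserve(self, req: Reservation) -> bool:
        """Book the slot if free; otherwise add the user to the waitlist."""
        # two intervals overlap iff each one starts before the other ends
        if any(req.start < r.end and r.start < req.end for r in self.reservations):
            if req.user not in self.waitlist:
                self.waitlist.append(req.user)
            return False
        self.reservations.append(req)
        return True

# hypothetical tool and users, for demonstration only
sem = Tool("sem-1")
ok1 = sem.reserve(Reservation("alice", datetime(2026, 5, 4, 9), datetime(2026, 5, 4, 11)))
ok2 = sem.reserve(Reservation("bob", datetime(2026, 5, 4, 10), datetime(2026, 5, 4, 12)))
```

A production system layers much more on top of this kernel, such as training prerequisites, billing, and notifications, but the overlap check and waitlist are the connective tissue the article describes.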

Launching a new platform at the scale of MIT.nano required careful planning and sustained collaboration. The system needed to support multiple facility types, integrate with existing MIT infrastructure, and accommodate a diverse set of instrumentation workflows. “Features that work well in a typical characterization lab can quickly become a burden in a more chemically active environment like the cleanroom,” explains Jorg Scholvin, associate director of Fab.nano. “Relying on researchers to log in using personal devices and Duo authentication, for example, would be impractical in that setting.”

To address these challenges, MIT.nano collaborated with MIT Information Systems and Technology Associate Vice President Olu Brown and Senior Director for Infrastructure Operations Marco Gomes and their teams to streamline integration between MIT systems and NEMO for cleanroom users. “The availability of modern APIs allowed us to connect very different systems efficiently and deliver a convenient, seamless, and productive experience in the lab,” says Scholvin.

The result is a platform that now processes thousands of reservations, communications, and operational actions daily. “We truly value the partnership with MIT.nano and appreciate the collaboration throughout this effort,” says Gomes. “It’s been a great example of teams working together to deliver something meaningful for the research community.”

As one of the largest shared-access facilities deploying NEMO, MIT.nano has played a central role in advancing the platform’s capabilities, both by helping shape its development and by demonstrating a model that is scalable and effective for other facilities and research centers nationwide. Enhancements first created to meet MIT.nano’s needs are now leveraged by other facilities adopting NEMO across the globe. 

It took 40 years for technology to catch up to this zipper design

Mon, 05/04/2026 - 1:45pm

In 1985, the Innovative Design Fund placed an ad in Scientific American offering up to $10,000 to support clever prototypes for clothing, home decor, and textiles. William Freeman PhD ’92, then an electrical engineer at Polaroid and now an MIT professor, saw it and submitted a novel idea: a three-sided zipper. Instead of fastening pants, it’d be like a switch that seamlessly flips chairs, tents, and purses between soft and rigid states, making them easier to pack and put together.

Freeman’s blueprint was much like a regular zipper, except triangular. On each side, he nailed a belt to connect narrow wooden “teeth” together. A slider wrapping around the device could be moved up to fasten the three strips into place, straightening them into a triangular tube. His proposal was rejected, but Freeman patented his prototype and stored it in his garage in the hopes it might come in handy one day.

Nearly 40 years later, MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) researchers wanted to revive the project to create items with “tunable stiffness.” Prior attempts at tunable stiffness either weren’t easily reversible or required manual assembly, so CSAIL built an automated design tool and adaptable fastener called the “Y-zipper.” The scientists’ software program helps users customize three-sided zippers, which it then builds on its own in a 3D printer using plastics. These devices can be attached or embedded into camping equipment, medical gear, robots, and art installations for more convenient assembly.

“A regular zipper is great for closing up flat objects, like a jacket, but Freeman ideated something more dynamic. Using current fabrication technology, his mechanism can transform more complex items,” says MIT postdoc and CSAIL researcher Jiaji Li, who is a lead author on an open-access paper presenting the project. “We’ve developed a process that builds objects you can rapidly shift from flexible to rigid, and you can be confident they’ll work in the real world.”

Why zippers?

In CSAIL’s software program, users can customize how the fasteners behave once zipped up: they can select the length of each strip, as well as the direction and angle at which the strips will bend. They can also choose from one of four motion “primitives” for the zipped-up shape: straight, bent (similar to an arch), coiled (resembling a spring), or twisted (like a screw).
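As a sketch of what such a design specification might contain, the parameters can be grouped into a small data structure; the class and field names below are hypothetical and not taken from CSAIL's actual tool.

```python
from dataclasses import dataclass
from enum import Enum

class Primitive(Enum):
    """The four zipped-up motion primitives described in the article."""
    STRAIGHT = "straight"
    BENT = "bent"        # arch-like
    COILED = "coiled"    # spring-like
    TWISTED = "twisted"  # screw-like

@dataclass
class StripSpec:
    length_mm: float
    bend_angle_deg: float  # direction/angle the strip takes when zipped

@dataclass
class YZipperDesign:
    strips: tuple          # exactly three StripSpec, one per side
    primitive: Primitive

    def validate(self) -> bool:
        assert len(self.strips) == 3, "a Y-zipper has exactly three strips"
        return True

# illustrative design: three equal strips forming an arch when zipped
design = YZipperDesign(
    strips=(StripSpec(120.0, 30.0), StripSpec(120.0, 30.0), StripSpec(120.0, 30.0)),
    primitive=Primitive.BENT,
)
ok = design.validate()
```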

The Y-zipper that results will appear to “shape-shift” in the real world. When unzipped, it can look like a squid with three sprawling tentacles, and when you close it up, it becomes a more compact structure (like a rod, for instance). This flexibility could be useful when you’re traveling — take pitching a tent, for example. The process can take up to six minutes to do alone, but with the Y-zipper’s help, it can be done in one minute and 20 seconds. You simply attach each arm to a side of the tent, supporting the structure from the top so that the zipper seemingly pops the canopy into place. 

This seamless transition could also unlock more flexible wearables, often useful in medical scenarios. The team wrapped the Y-zipper around a wrist cast, so that a user could loosen it during the day, and zip it up at night to prevent further injuries. In turn, a seemingly stiff device can be made more comfortable, adjusting to a patient’s needs.

The system can also aid users in crafting technology that moves at the push of a button. One can attach a motor to the Y-zipper after fabrication to automate the zipping process, which helps build things like an adaptive robotic quadruped. The robot could potentially change the size of its legs, tightening up into taller limbs and unzipping when it needs to be lower to the ground. Eventually, such rapid adjustments could help the robot explore the uneven terrain of places like canyons or forests. Actuated Y-zippers can also build dynamic art installations — for example, the team created a long, winding flower that “bloomed” thanks to a static motor zipping up the device.

Mastering the material

While Li and his colleagues saw the creative potential of the Y-zipper, it wasn’t yet clear how durable it would be. Could it sustain daily use?

The team ran a series of stress tests to find out. First, they evaluated the strength and flexibility of polylactic acid (PLA) and thermoplastic polyurethane (TPU), two plastics commonly used in 3D printing. Using a machine that bent the Y-zippers down, they found that PLA could handle heavier loads, while TPU was more pliable.

In another experiment, CSAIL researchers used an actuator to continuously open and close the Y-zipper to see how long it’d take to snap. Some 18,000 cycles of zipping and unzipping later, the device finally broke. The Y-zipper’s secret to durability, according to 3D simulations: its elastic structure, which helps distribute the stress of heavy loads.

Building on these findings, Li envisions an even more durable three-sided zipper using stronger materials, like metal. The team may also make the zippers bigger for larger-scale projects, but that’s not yet possible with their current 3D printing platform.

Li also notes that some applications remain unexplored, like space exploration, where the Y-zipper’s tentacles could be built into a spacecraft to grab nearby rock samples. Likewise, the zippers could be embedded into structures that can be assembled rapidly, helping relief workers quickly set up shelters or medical tents during natural disasters and rescues.

“Reimagining an everyday zipper to tackle 3D morphological transitions is a brilliant approach to dynamic assembly,” says Zhejiang University assistant professor Guanyun Wang, who wasn’t involved in the paper. “More importantly, it effectively bridges the gap between soft and rigid states, offering a highly scalable and innovative fabrication approach that will greatly benefit the future design of embodied intelligence.”

Li and Freeman wrote the paper with Tianjin University PhD student Xiang Chang and MIT CSAIL colleagues: PhD student Maxine Perroni-Scharf; undergraduate Dingning Cao; recent visiting researchers Mingming Li (Zhejiang University), Jeremy Mrzyglocki (Technical University of Munich), and Takumi Yamamoto (Keio University); and MIT Associate Professor Stefanie Mueller, who is a CSAIL principal investigator and senior author on the work. Their research was supported, in part, by a postdoctoral research fellowship from Zhejiang University and the MIT-GIST Program.

The researchers’ work was presented in April at the ACM CHI Conference on Human Factors in Computing Systems.

How chromatin movement helps control gene expression

Mon, 05/04/2026 - 5:00am

Gene expression is controlled, in part, by the interactions between genes and regulatory elements located along the genome. Those interactions depend on the ability of chromatin — a mix of DNA and proteins — to move around within a crowded space.

In a new study, MIT researchers have measured chromatin movement at timescales ranging from hundreds of microseconds to hours, allowing them to rigorously quantify those dynamics for the first time.

Their analysis revealed that chromatin can exist in two distinct states: In one, chromatin moves in a constrained way that allows it to primarily contact only neighboring regions of the genome; in the other, chromatin moves more freely and contacts regions that are farther away, but only over longer timescales.

The findings offer insight into how gene expression is regulated, as well as how chromatin segments come together for other processes such as DNA repair, the researchers say.

“Because we were able to look at chromatin dynamics for the first time at these very fast timescales, and also for the first time across the full dynamic range, we were able to observe chromatin motion over a range that just wasn’t possible before,” says Anders Sejr Hansen, an associate professor of biological engineering at MIT and the senior author of the new study, which appears today in Nature Structural and Molecular Biology.

The paper’s lead authors are MIT postdoc Matteo Mazzocca, Domenic Narducci PhD ’25, and Simon Grosse-Holz PhD ’23. Jessica Matthias, chief commercial officer of Abberior Instruments, and Tatiana Karpova, manager of the National Cancer Institute Optical Microscopy Core, are also authors of the paper.

Constrained movement

In textbooks, chromatin is often depicted as a static structure within the cell nucleus, but in reality, it is constantly moving. Those movements are necessary for genes to interact with DNA regulatory sequences such as enhancers, which can be as far as 1 million base pairs away. They also ensure that when DNA breaks occur, the two ends of DNA can encounter each other to be repaired.

“Chromatin dynamics are foundational to all processes in the nucleus, and especially processes that involve two things finding each other. That’s important in DNA repair, gene regulation, recombination, or moving a particular gene to the right compartment of the nucleus,” Hansen says.

The movement of any particular location on the genome, or locus, is constrained by the fact that DNA is a polymer. After moving in any direction, a locus will be pulled back by the DNA on either side of it.

“Chromosomes are polymers. They’re held together by many nucleotides of DNA. Being part of DNA is a little bit like running while holding hands with other people. If a hundred people are holding hands and you, in the middle of the chain, try to run in one direction, you’ll get pulled back,” Hansen says.

This type of behavior is known as subdiffusive movement. Previous studies have yielded conflicting reports on how subdiffusive chromatin is, mainly because the studies were not able to track the movement over a long enough period of time to obtain statistically robust measurements. Because the movements are so small, on the order of nanometers, data needs to be obtained over long dynamic ranges — from milliseconds to hours.
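The polymer pull-back described above can be illustrated with a toy simulation. The sketch below (an illustration with arbitrary parameters, not the study's analysis) simulates a simple Rouse chain, an idealized bead-and-spring polymer model, and estimates the scaling exponent alpha of a middle bead's mean squared displacement, MSD ~ t^alpha. For a bead tethered to its neighbors, alpha comes out well below the value of 1 expected for free diffusion:

```python
import numpy as np

# Toy Rouse chain: beads joined by harmonic springs, overdamped 1D
# Langevin dynamics. Tracking the middle bead shows subdiffusive motion,
# MSD ~ t^alpha with alpha near 0.5 rather than 1. All parameter values
# are arbitrary illustrative choices, not values from the study.
rng = np.random.default_rng(1)
N, k, D, dt, steps = 64, 1.0, 1.0, 0.05, 20_000

x = np.zeros(N)                  # bead positions
mid = np.empty(steps)            # trajectory of the middle bead
for t in range(steps):
    lap = np.zeros(N)            # discrete Laplacian = net spring force
    lap[1:-1] = x[2:] - 2.0 * x[1:-1] + x[:-2]
    lap[0], lap[-1] = x[1] - x[0], x[-2] - x[-1]   # free chain ends
    x += k * lap * dt + np.sqrt(2.0 * D * dt) * rng.standard_normal(N)
    mid[t] = x[N // 2]

# Time-averaged MSD over lag times in the intermediate (subdiffusive) regime
lags = np.unique(np.geomspace(100, 2000, 12).astype(int))
msd = np.array([np.mean((mid[L:] - mid[:-L]) ** 2) for L in lags])
alpha = np.polyfit(np.log(lags), np.log(msd), 1)[0]   # log-log slope
print(f"alpha ~ {alpha:.2f}")    # markedly less than 1 for a chain bead
```

A free (unconnected) bead simulated the same way would instead give alpha near 1, which is the contrast that makes chromatin motion measurably "subdiffusive."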

In those earlier studies, researchers used imaging techniques that can track the position of a single molecule over time by comparing images frame by frame. These are useful but can only be used over a small dynamic range because of the limitations of conventional microscopy.

To generate more statistically robust data, the MIT team used MINFLUX — a super-resolution light microscopy technique that can track the movement of tiny objects such as proteins over longer periods of time. This technique was recently developed by Stefan Hell of the Max Planck Institute, a Nobel laureate for his work in super-resolution microscopy. In this study, the MIT team became the first to apply this technique to chromatin in living cells.

“MINFLUX allowed us to get around the limitations of conventional microscopy, letting us measure chromatin movement faster and for a longer period of time than ever before,” Narducci says. “To our knowledge, it’s the first time this technique has been used this way.”

Using MINFLUX, the researchers were able to study cells over timescales that covered four orders of magnitude — from 200 microseconds to 10 seconds. And by combining MINFLUX with two traditional imaging techniques, they could track chromatin movement over seven orders of magnitude across time, from hundreds of microseconds to several hours.

“Region of influence”

These studies, performed across several different mouse and human cell types, allowed the researchers to identify two distinct classes of chromatin dynamics. In both classes, over short and intermediate timescales (up to 200 seconds), any given locus tends to move only within about 200 nanometers. This suggests that the subdiffusive pull is stronger than had been previously thought.

“One of the main takeaways is that you have this region of influence where a genomic locus has access to other genomic loci, and this is roughly a couple hundred nanometers large,” Grosse-Holz says. “If loci are much closer together than a couple hundred nanometers, they’re effectively in contact all the time. You get a cutoff at a couple hundred nanometers where everything within that region around a given locus can see that locus, and everything outside cannot.”

This constant contact is likely beneficial for DNA repair, as the broken strands remain in close proximity to each other. The findings also suggest that genes and regulatory elements within about 100,000 base pairs don’t need any extra help to find each other — they will do so routinely through their normal movement.

“If they are closer than 100,000 bases, and most regulatory elements are, then those elements are going to find their target gene within a few milliseconds or a few minutes,” Mazzocca says. “These are timescales that are completely consistent with transcription.”

In the other class of chromatin dynamics that the researchers identified, chromatin is able to move over a wider range, but only at longer timescales (a few minutes to hours). This class of chromatin appeared in some types of cells but not others, for reasons that are not yet understood.

“It would be reasonable to assume that the behavior would be more or less the same in all cell types, but that’s not at all what we found,” Hansen says. “It’s very different in different cell types, with no obvious way of categorizing things.”

He adds that the strength of the subdiffusive pull that the researchers found in this study can’t be explained with existing models that have been developed to study chromatin dynamics — the Rouse model and the fractal globule model. This suggests that the models may need to incorporate factors that were previously left out, such as the interactions between chromatin and the crowded nucleoplasm it sits within.

“These findings are significant for two key reasons,” says Luca Giorgetti, a group leader at the Friedrich Miescher Institute for Biomedical Research in Switzerland, who was not involved in the study. “First, they rigorously confirm longstanding but anecdotal observations that chromatin motion is strongly subdiffusive. Second, they demonstrate that this behavior is consistent across multiple cell types and persists across all measured timescales.”

The research was funded, in part, by the National Institutes of Health, a National Science Foundation CAREER Award, a Pew-Stewart Scholar for Cancer Research Award, and the Bridge Project, a partnership between the Koch Institute for Integrative Cancer Research at MIT and the Dana-Farber/Harvard Cancer Center.

Found Industries aims to strengthen America’s industrial supply chains

Sun, 05/03/2026 - 12:00am

Found Industries has gone through several distinct phases in the four years since it was originally formed as Found Energy. There was the scrappy startup stage, in which the company was primarily housed in the basement of founder Peter Godart ’15, SM ’19, PhD ’21. Then there was the demonstration phase, in which the company worked to productize its technology for transforming aluminum into high-density fuel for industrial operations.

Now, after confronting supply chain vulnerabilities related to critical metals in its aluminum fuel business, the company is launching a new division, Found Metals, to extract the critical metal gallium from mineral refineries — a move that builds on its original technology while addressing a major national security need.

Gallium is a critical material in the defense, semiconductor, and energy sectors. In 2024, China produced 99 percent of the world’s primary supply — market dominance the country takes advantage of through export controls.

Godart’s company developed an electrochemical gallium extraction technology for internal use after realizing how dependent it would be on China for the catalyst material at the center of its aluminum fuel reactors. Now, with support from the U.S. Department of Energy, Found is hoping to use that technology to create a new domestic supply chain for gallium and a host of other important metals.

Found Industries is still committed to its aluminum fuel operations, now under its Found Energy division. It is already running a 100-kilowatt-class demonstration plant and is preparing for industrial pilot deployments next year. But with its expansion, which was announced April 21, the company is also working to meet the moment for critical metals production.

“Gallium is the world’s most critical metal, as it’s 99 percent controlled by China,” Godart says. “When you produce 99 percent of something, you also produce 99 percent of the tools required to extract it. We couldn’t get our hands on some of those tools, so we were forced to come up with a new technology. Now we believe we can deploy this at scale to become one of the first major Western suppliers of these metals.”

From fuel to metals

Godart focused on robotics as an undergraduate in MIT’s Department of Mechanical Engineering and Department of Electrical Engineering and Computer Science. Following graduation, he worked at NASA’s Jet Propulsion Laboratory, where he explored systems for tapping into high-density fuels like aluminum on other planets.

“I had this crazy idea that you could use aluminum, which is already a common construction material for aerospace, as a fuel on other planets,” Godart says. “You don’t need most of the aluminum on a spacecraft once you land on another planet. Aluminum is around 40 times more energy-dense than lithium-ion batteries, and if you have an oxidizer, like water on an icy moon for example, then you can react that aluminum with water and extract energy as heat and hydrogen.”
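The rough 40x figure Godart cites can be sanity-checked with textbook thermochemistry. The sketch below is a back-of-the-envelope calculation, not Found's data; it uses standard enthalpies of formation for the aluminum-water reaction and an assumed 200 Wh/kg lithium-ion pack energy density:

```python
# Back-of-the-envelope check of the "~40 times lithium-ion" claim.
# Standard enthalpies of formation in kJ/mol; the 200 Wh/kg battery
# figure is an assumed typical pack-level energy density.
H_F_AL2O3 = -1675.7      # Al2O3(s)
H_F_H2O = -285.8         # H2O(l); its magnitude is also the HHV of H2 per mole
M_AL = 26.98e-3          # kg per mol of aluminum

# Reaction: 2 Al + 3 H2O -> Al2O3 + 3 H2
heat = -(H_F_AL2O3 - 3 * H_F_H2O)      # kJ released as heat per 2 mol Al
h2_energy = 3 * (-H_F_H2O)             # kJ recoverable by burning the 3 mol H2
total_mj_per_kg = (heat + h2_energy) / (2 * M_AL) / 1e3   # per kg of Al

li_ion_mj_per_kg = 200 * 3600 / 1e6    # 200 Wh/kg -> 0.72 MJ/kg
ratio = total_mj_per_kg / li_ion_mj_per_kg
print(round(total_mj_per_kg, 1), round(ratio))   # ~31 MJ/kg, a ratio in the 40s
```

The exact multiple depends on the assumed battery energy density and on how much of the heat and hydrogen a real system captures, but the order of magnitude matches the article's claim.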

Luckily for people who might spill water on aluminum while cooking, the metal is normally very stable when exposed to air. In order to tap into aluminum’s stored energy, it needs to undergo a chemical reaction. Godart began exploring catalyst materials to create that reaction at NASA. He continued that work with professor of mechanical engineering Douglas Hart when he returned to MIT in 2017, this time for applications a little closer to home.

“If we want to think about moving humanity to other planets, we have some problems to solve here first,” Godart says. “That was the impetus for me to go back to MIT to study using aluminum as a fuel for energy distribution on Earth.”

Around 70 million tons of aluminum are already transported around the globe every year. Godart says that gives aluminum an easier path to scale. During his PhD, he created a process for coating aluminum with a gallium-containing alloy to help tap into aluminum’s embodied energy.

“We found a catalyst that, when mixed with aluminum scraps, enabled aluminum to react with water very rapidly and at orders of magnitude higher power density than what had been possible before,” Godart says. “That meant you could use aluminum as a fuel and get megawatt-scale power from compact reactor systems.”

By the time he finished his PhD in 2021, Godart and his collaborators had developed a system that mixes aluminum fuel with those catalysts to continuously produce electricity at the kilowatt scale through a hydrogen fuel cell.

Godart launched Found Energy in 2022, licensing part of his research from MIT’s Technology Licensing Office and receiving support from MIT’s Venture Mentoring Service. The company received an Activate fellowship, and after quickly outgrowing Godart’s basement, moved into its current 20,000-square-foot facility in Charlestown, Massachusetts.

Today, Found Energy is working with industrial companies that have abundant aluminum scrap.

“When you invent a fuel, you then have to invent the engine,” Godart says. “Our engine is called a catalyzed aluminum water reactor. You feed in aluminum that’s been treated with the catalyst and water, and you get a steam-hydrogen gas mixture. We call that our power stream. We use it to cogenerate industrial heat and electricity. The reaction byproduct is a hydrated aluminum oxide that can be sold into various industries or recycled back into aluminum, which is the long-term vision.”

As Godart worked to build more of the systems, he became concerned about Found’s reliance on Chinese supply chains for its catalyst material. So, in 2024, he developed a new way to extract gallium from Bayer liquor, an industrial process stream from alumina refining, a key step in aluminum production. Traditional methods for extracting gallium rely on foreign-controlled organic chemicals or resins to bind and concentrate the gallium.

Found uses a continuous electrochemical process to recover the gallium directly from Bayer liquor and other industrial feedstocks, even at low concentrations.

“We thought of it as a way to future-proof what we were doing,” Godart says. “Necessity was the mother of invention.”

Then, toward the end of 2024, China began restricting the export of critical metals including gallium.

“We realized we had already developed a technique for producing these restricted metals that could be very quickly adapted,” Godart recalls.

Scaling for national security

On April 14, the Department of Energy’s Office of Critical Minerals and Energy Innovation selected Found as part of its $5.4 million program to recover gallium from domestic feedstocks. The company plans to start extracting gallium, along with other critical metals like indium and germanium, by the end of 2027.

Meanwhile, Found is already running a 100-kilowatt-class aluminum fuel demonstration system in Charlestown and is working through orders totaling several megawatts from large public companies.

“For our fuel technology, the vision is to go as big as possible,” Godart says. “We envision major power plants. Aluminum refineries today, for example, consume hundreds of megawatts of continuous thermal power. That’s what we aim to deliver.”

Godart says he spends most of his time now on gallium extraction, but both branches of the business could make supply chains more secure in the future.

“The big focus now is critical metals, because the government needs this,” Godart says. “We’re also making these metals for ourselves, so we’re vertically integrating our own supply chain, which is table stakes now for companies that deal in physical goods. You need to be able to control your inputs. By focusing on metals, it improves the likelihood of success for our aluminum fuel business.”

MIT affiliates awarded 2026 Guggenheim Fellowships

Fri, 05/01/2026 - 5:15pm

MIT Research Scientist Afreen Siddiqi ’99, SM ’01, PhD ’06; MIT professors Kathleen Thelen and Vinod Vaikuntanathan SM ’05, PhD ’09; as well as Kate Manne PhD ’11 are among 223 scientists, artists, and scholars awarded 2026 fellowships from the John Simon Guggenheim Memorial Foundation. Working across 55 disciplines, the fellows were selected from almost 5,000 applicants for “prior career achievement and exceptional promise.”

Each fellow receives a monetary stipend to pursue independent work at the highest level under “the freest possible conditions.” Since its founding in 1925, the Guggenheim Foundation has awarded nearly $450 million in fellowships to more than 19,000 fellows. This year, MIT faculty and staff were recognized in the categories of geography and environmental studies, political science, and computer science.

Afreen Siddiqi is a research scientist in the Engineering Systems Laboratory in the Department of Aeronautics and Astronautics. Her expertise is in the development of systems-theoretic analytical methods and quantitative modeling for technical systems in space and on Earth that need to operate and adapt in changing environments. Her work has focused on space exploration, satellite Earth observation for informing decisions, and critical infrastructure planning. She has served as a contributing author to the 2022 sixth assessment report of the Intergovernmental Panel on Climate Change (IPCC) on implications of water, energy, and food interconnections for climate change adaptation. She has received teaching awards and fellowships, including the Amelia Earhart Fellowship, the Richard D. DuPont Fellowship, and the Rene H. Miller Prize in Systems Engineering.

Kathleen Thelen is the Ford International Professor of Political Science. Her work focuses on the political economy of the rich democracies, with a current emphasis on the study of American capitalism in comparative perspective. Her most recent book, “Attention Shoppers! American Retail Capitalism and the Origins of the Amazon Economy,” was published by Princeton University Press in 2025. Her awards include the Friedrich Schiedel-Award for Politics and Technology, the Aaron Wildavsky Enduring Contribution Prize, and the Michael Endres Research Prize (2019). She was elected to the American Academy of Arts and Sciences in 2015.

Vinod Vaikuntanathan is the Ford Foundation Professor of Engineering in the Department of Electrical Engineering and Computer Science. A principal investigator at the Computer Science and Artificial Intelligence Laboratory, he focuses his research on the foundations of cryptography and its applications to theoretical computer science at large. He is known for his work on fully homomorphic encryption (a powerful cryptographic primitive that enables complex computations on encrypted data), as well as lattice-based cryptography (which lays down a new mathematical foundation for cryptography in the post-quantum world). His awards include the Harold E. Edgerton Faculty Award, the Gödel Prize, the Simons Investigator Award, the Distinguished Alumnus Award from the Indian Institute of Technology Madras, a Best Paper Award from CRYPTO 2024, and test of time awards from the IEEE Symposium on Foundations of Computer Science and CRYPTO conferences. He was named a MacVicar Faculty Fellow in 2024 and an International Association for Cryptologic Research Fellow in 2026.

Kate Manne, who earned her PhD in philosophy at MIT in 2011, is now a professor at Cornell University.

“Our new class of Guggenheim Fellows is representative of the world’s best thinkers, innovators, and creators in art, science, and scholarship,” says Edward Hirsch, award-winning poet and president of the Guggenheim Foundation. “As the foundation enters its second century and looks to the future, I feel confident that this new class of 223 individuals will do bold and inspiring work, undaunted by the challenges ahead. We are honored to support their visionary contributions.”

Testing sustainable agriculture in Barcelona

Fri, 05/01/2026 - 12:30pm

A dozen MIT students recently set out for Barcelona — not just to study climate resilience, but to experience it firsthand. As part of STS.S22 (How to Grow Resilient Futures: Regenerative Agriculture and Economies in Catalunya, Spain), an Independent Activities Period course taught by Kate Brown, the Thomas M. Siebel Distinguished Professor in the History of Science, they stepped beyond the classroom and into living systems of sustainability.

Offered as a Global Classroom through MIT International Science and Technology (MISTI), the course reimagined what learning could look like. Instead of working their way through a syllabus containing texts about sustainable farming and the power of cooperatives, Brown’s students got their hands dirty. 

In fact, quite literally: They visited local farms and slaughterhouses; prepped, cooked, and served a cooperative dinner to migrants; and constructed a working greenhouse. In the process, they built a lasting community and forged their own visions about sustainability and how they are compelled to confront climate change — as MIT students now, and eventually as alumni. 

“I wanted the students to think about alternatives to the notion of capitalist development, where the latest technology is seen as the solution to every social problem that emerges. I wanted them to see ways people are solving problems in a place like Barcelona, where communities and ecologies are centered as part of the solution,” Brown says.

Through Brown’s partnerships at the Barcelona Urban Research Institute and Research and Degrowth (R&D) — and MISTI Spain’s infrastructure — the group of eight undergraduates and four graduate students had the opportunity to examine the historical roots of cooperative movements in the region while simultaneously experiencing today’s iteration of co-op work. 

Brown intentionally designed the three-week syllabus to push students beyond the classroom walls and get them face-to-face with local MISTI Spain collaborators from across the farming and ecology sectors. For example, the class met with Pino Delàs, a pig farmer who left the industrial system to start his own localized, cyclical operation, called Llavora, which supports community farming and generates significantly less waste. 

Rooted in community 

With more than a century of creating cooperatives — among both workers and farms — Barcelona and its Catalan surroundings provided an ideal environment for the students to consider Brown’s questions through fieldwork rooted in community. 

Within their first week on the ground, they collaborated with volunteers at the Agora Squat. The small “pocket park” was initially slated to be developed into a luxury hotel, but a local group of 200 neighborhood residents came together to protest the plan, instead exercising their legal right to use the land — a provision in Spanish law that allows neighbors to make a case for possessing land that isn’t being used productively. Now, the lush green square boasts a community kitchen and gardens. One night a week, volunteers provide dinner for upward of 60 recent North African migrants, using ingredients sourced from local fruiterias and shops that would have otherwise gone to waste at the close of business. 

On this particular Thursday, Brown’s students became nonprofit managers and chefs, but they also became community members themselves. In just a few hours from start to finish, the students had to source donated produce from the local vendors, come up with a recipe using what they’d gathered, and then prepare a meal in the rudimentary kitchen. “They received a lot of turnips and had to create a recipe to use them,” Brown says. In the end, a flavorful stew simmered in a massive metal pan over propane burners, brought alive with fresh chilies picked from the garden. 

“This was way outside some students’ comfort zones,” Brown says. Yet, that was exactly the point of the activity. By the end of the evening, the students discovered that sometimes the most profound educational moments take shape only after challenging the limits of learning. 

“Many of us do not consider ourselves chefs, so it was empowering to discover that, together, we had the capacity to create a nourishing meal for 70 people, with produce that would have otherwise gone to waste. This meal that we created on the spot, in combination with many of the other workshops during the program, was a strong reminder of how much agency each of us has to effect change within isolating and constraining systems, especially in community with like-minded individuals,” says Sonia Torres Rodriguez, a first-year PhD student in urban studies and planning.

Torres Rodriguez focuses her doctoral research on affordable and climate-resilient housing. She was drawn to the IAP program's exploration of innovative approaches to more equitably distributing the means of producing housing and food, and was excited to be learning in person in Spain. “Cooking together, admiring healthy regenerative soil, foraging, learning traditional methods to braid grass, installing mini solar panels, and hosting poetry circles, would have been impossible to replicate on Zoom,” she says. 

Calvin Macatantan, a senior in computer science and urban studies and planning, was initially drawn to the program because of his interest in resilient economies and how they support the communities they serve. Other than visiting family in the Philippines, he’d never left the United States before. He was especially moved by the group’s stay at La Bruguera, an eco-resort partnered with R&D that serves as a “living laboratory.” The cohort heard from local experts in regenerative agriculture, soil health, and low-tech agroforestry, alongside hands-on activities such as eco-art sessions, weaving lessons, and the rebuilding of a greenhouse. 

As part of a final project for the course, Macatantan and another student wrote and illustrated a children’s book that explains La Bruguera’s work by making the soil come to life as the main protagonist for young readers. 

Brown’s course pushed Sofia Espindola de La Mora to think more critically about everyday systems and their environmental impact. Originally from Puerto Rico, the first-year student has watched helplessly in recent years as climate change has increased the frequency and magnitude of natural disasters at home.

She came to MIT looking for answers and wanting to make a difference, and signed up for Brown’s course as part of that quest. “It was fascinating to see firsthand that the degrowth movement doesn’t mean slowing down is a bad thing, but instead that the constant striving for more is what has led us to many of the predicaments we now face as a society. It forced me to think about whether it would even be possible for me to sustain the life I have now using renewable energy,” Espindola de La Mora says. The course convinced her to focus her studies on climate system science and engineering. 

A climate context

Broadening students’ perspectives was a priority for Brown, whose research lies at the intersection of history, science, technology, and bio-politics. She’s known on campus for courses like STS.038 (Risky Business: Food Production, Environment, and Health). Her 2026 book, “Tiny Gardens Everywhere: The Past, Present and Future of the Self-Provisioning City,” examines urban systems, including gardens. 

When Brown was designing the Global Classroom — made possible through MISTI, with additional support from the MIT Energy Initiative — she centered a value she considers imperative in any course today: addressing climate and other human-driven environmental challenges.

“I’m focused on training students to approach these problems at the local level, so they see what happens when they’re working through communities, rather than prescribing to them something to scale all over the world,” Brown says.

That localized, individualized approach helped expand on what the students initially believed was possible, and compelled them to become part of the solution through their studies and in their professional lives. 

Since their return to campus, Brown’s students have continued to lean on one another and build community, one meal at a time. Many Tuesday nights, they come together to cook dinner, Barcelona squat style. Each individual brings their ingredients, and together they create a recipe that nourishes and sustains.  

“I was losing a lot of faith in the world before this trip,” Macatantan admits. “We’re constantly surrounded by consumption and the drive to do more. This experience helped me realize that I want to do something that impacts people. For me, that will look like research. I want to become an expert in a subject and become someone who can help communicate that knowledge to people who need it.” 

“MISTI Global Classrooms like this show what happens when learning extends beyond the MIT campus,” says Alicia Goldstein Raun, associate director of MISTI and managing director of the MIT-Spain Program at the Center for International Studies. “I was excited when Professor Brown approached me to help shape this new class, knowing it would resonate with students. The students tackled global challenges like climate change and explored the degrowth movement while immersing themselves in Spanish communities and culture.”

For faculty interested in designing a MISTI Global Classroom, more information is available through MISTI.

Beacon Biosignals is mapping the brain during sleep

Fri, 05/01/2026 - 12:00am

The human brain remains one of the most fascinating and perplexing mysteries in medicine. Scientists still struggle to match neurological activity with brain function and detect problems early, slowing efforts to treat neurological disorders and other diseases.

Beacon Biosignals is working to make sense of the brain by monitoring its activity while people sleep. The company, which was founded by Jake Donoghue PhD ’19 and former MIT researcher Jarrett Revels, developed a lightweight headband that uses electroencephalogram (EEG) technology to measure brain activity while people enjoy their normal sleep routines at home. Those data are processed by machine-learning algorithms to monitor the effects of novel treatments, find new signs of disease progression, and create patient cohorts for clinical trials.

“There’s a step-change in what becomes possible when you remove the sleep lab and bring clinical-grade EEG into the home,” says Donoghue, who serves as Beacon’s CEO. “It turns sleep from a constrained, facility-based test into a scalable source of high-quality data for diagnostics, drug development, and longitudinal brain health.”

Beacon partners with pharmaceutical companies to accelerate its path to patients. The company’s FDA 510(k)-cleared medical device has already been used in over 40 clinical trials across the globe as part of studies aimed at treating conditions including major depressive disorder, schizophrenia, narcolepsy, idiopathic hypersomnia, Alzheimer’s disease, and Parkinson’s disease.

With each deployment, Beacon learns more about how the brain works — insights it is using to create a “foundation model” of the brain.

“It’s our belief that the dataset that’s going to transform brain health doesn’t exist yet — but we are rapidly creating it,” Donoghue says. “Our platform can characterize the heterogeneity of disease progression, generating dynamic insights that are impossible to fully capture through static modalities like sequencing or imaging. The brain is an electric organ and changes through synaptic plasticity, so tracking brain function across many diseases at scale will allow us to discover novel subgroups of diseases and map them over time.”

Illuminating the brain

Donoghue trained in the Harvard-MIT Program in Health Sciences and Technology, conducting clinical training for an MD while completing his PhD in neuroscience at MIT under the guidance of Earl Miller, the Picower Professor of Neuroscience in MIT’s Department of Brain and Cognitive Sciences and The Picower Institute for Learning and Memory. While in the program, Donoghue trained at Massachusetts General Hospital and Boston Children’s Hospital, where he helped care for patients, including in oncology, during the rise of genomic sequencing to guide precision cancer therapies. He later worked in neurology and psychiatry, where care often relied on more iterative approaches — highlighting an opportunity to bring similarly data-driven precision to brain health.

“What struck me most was the inability to measure brain function in the ways that cardiologists can longitudinally monitor cardiac function in patients from home,” Donoghue says. “At MIT, I built this conviction that processing a lot of brain data and working to correlate that with brain function would be transformative to how these neurological diseases are identified and treated.”

Toward the end of his training, Donoghue began developing his ideas further, engaging with mentors including HST and Harvard Medical School professors Sydney Cash and Brandon Westover. He had met Revels, who was working as a research software engineer in MIT’s Julia Lab, during his PhD, and convinced him to co-found Beacon with him in 2019.

“We decided building a business to understand the organ of interest — the brain — would be a great start to understanding heterogeneous neuropsychiatric diseases and building better treatments,” Donoghue recalls.

Beacon began as a computation and analytics company, later building wearable devices to expand its clinical impact and reach. From its early days, Beacon has been partnering with large pharmaceutical companies running clinical trials, offering a less invasive way to watch brain activity and learn how their drugs are impacting the brain as well as how patients sleep.

“It was clear sleep was the right window to understand the brain,” Donoghue says. “Neural activity during sleep can be an order of magnitude higher and more structured, almost like a language. It’s a great surface area for understanding brain function and how different drugs affect the brain.”

Donoghue says Beacon’s devices can collect lab-grade data on each patient for multiple sequential nights, resulting in higher quality assessment. The company uses machine learning to extract insights, such as the time patients spend in different sleep stages and the number of small awakenings that occur throughout the night. It can also detect subtle sleep architecture changes that might lead to cognitive decline.
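As a purely illustrative sketch of the kind of summary metrics such a pipeline might compute (the stage labels, 30-second epoch, and awakening rule below are standard sleep-scoring conventions, but this is not Beacon's actual method):

```python
# Hypothetical sketch: summary metrics derived from a per-epoch hypnogram.
# Stage labels and the awakening rule are illustrative assumptions.
from collections import Counter

EPOCH_SECONDS = 30  # one scored epoch

def summarize_hypnogram(stages):
    """Return (minutes per stage, number of awakenings).

    `stages` is a sequence of per-epoch labels such as
    "W" (wake), "N1", "N2", "N3" (slow-wave), and "REM".
    An awakening is counted whenever a sleep epoch is followed by "W".
    """
    minutes = {s: n * EPOCH_SECONDS / 60 for s, n in Counter(stages).items()}
    awakenings = sum(
        1 for prev, cur in zip(stages, stages[1:]) if prev != "W" and cur == "W"
    )
    return minutes, awakenings

night = ["W", "N1", "N2", "N3", "N3", "W", "N2", "REM", "REM", "W"]
mins, wakes = summarize_hypnogram(night)  # mins["N3"] == 1.0, wakes == 2
```

Real systems score stages from raw signals with machine learning; the point here is only how stage durations and awakening counts fall out of a labeled night.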

“We’re starting to take features of sleep activity and link them to outcomes in a way that’s never been done with this level of precision,” Donoghue says.

To date, Beacon has taken part in clinical trials for sleep and psychiatric disorders as well as neurodegenerative diseases, where sleep changes can emerge years before the presentation of symptoms.

“We do a lot of work in areas like Alzheimer’s disease and Parkinson’s, which affected my grandfather,” Donoghue says. “We’re analyzing features of rapid-eye-movement and slow-wave sleep to detect early changes that precede clinical symptoms. It’s an opportunity to move these diseases from late recognition to much earlier, data-driven detection.”

Improving brain treatments for millions

Last year, Beacon acquired an at-home sleep apnea testing company that serves more than 100,000 patients each year across the U.S., accelerating access to high-quality, comprehensive testing in the home and expanding the reach of its platform. Then in November, the company raised $97 million to accelerate that expansion.

“The vision has always been to reach patients and help people at scale,” Donoghue says. “What’s powerful is that we’re building a longitudinal record of brain function over time. A patient might come in for sleep apnea screening, but if they develop Parkinson’s years later, that earlier data becomes a window into the disease before symptoms emerged. That turns routine testing into a foundation for entirely new prognostic biomarkers — and a path to detecting and intervening in brain disease earlier, potentially before symptoms ever begin.”

Unlocking mysteries of the universe through math

Thu, 04/30/2026 - 4:30pm

GPS navigation, cryptography, quantum computing — while some of humankind’s greatest advancements have been invented by pioneers from various cultures, they were founded upon one common grammar: mathematics.

“Mathematics is the language with which God wrote the universe,” said the famous Italian astronomer, physicist, and philosopher Galileo Galilei, who, among his various scientific contributions, helped provide evidence for the idea that the sun is at the center of the solar system.

Although mostly conveyed through combinations of numbers, letters, and signs that may seem enigmatic to many, math equations hold within them countless stories — playbooks that generations of wonderers and inventors have crafted, refined, and shared in an attempt to make sense of a world full of unknown variables.

“I have faith in mathematics that, when there seems to be something special happening, when there’s some coincidence, that it’s not just a coincidence,” says mathematician Amanda Burcroff, “but that there’s actually some really deep, interesting, and involved reason for why that should be true.”

Burcroff’s research is focused on algebraic combinatorics, an area that provides discrete frameworks for understanding algebraic and geometric spaces that ubiquitously arise across science. This year, she joins MIT’s Department of Mathematics as a postdoc as part of the School of Science Dean’s Fellowship. Working with Professor Alexander Postnikov, Burcroff is building upon her techniques with the goal of applying them to other areas such as theoretical physics — a field that seeks to uncover the fundamental laws governing everything from subatomic particles to the cosmos itself.

“I have trust that if you keep following the path, eventually you’ll find the treasure — that is, whatever theorem or proof — that you’re looking for,” she says.

Exploring possibilities and redefining rules

Like many children, Burcroff once saw math as a subject that entailed lots of memorizing. Although she felt that it came naturally to her, she didn’t always find math very interesting.

In high school, as she came to learn about areas like calculus and geometry, Burcroff started to see the discipline in a different light — a creative approach to exploring what’s possible.

“[In] most other fields, the rules are imposed on you by the world,” she says, “but in math, you get full freedom to lay down those rules and then figure out what the implications of those rules are by using logical consequence.”

In 2015, Burcroff began her bachelor’s degree at the University of Michigan with a major in math and a minor in computer science. There, she entered the world of combinatorics — a branch of math dealing with counting, arranging, and combining objects that forms a crucial basis for understanding the complexity of problems, as well as the limits of computer algorithms.

“When I was starting out, I was just happy to have any mystery that anyone gave me,” she says.

Math was, to Burcroff, like a fun game with levels to complete. But during a study abroad program in Budapest, Hungary — the hometown of Paul Erdős, considered one of the most prolific mathematicians of the 20th century — the game became more exciting when she was handed puzzles no one had yet solved.

“It turns out that if you put down the right set of rules, there’s an infinite number of beautiful things that you can do with it,” she says.

A journey of endless mysteries to unlock

In 2019, Burcroff embarked on a journey to pursue further research in England, completing a master’s degree in pure mathematics at the University of Cambridge and then a research master’s degree at Durham University. In 2021, she returned to the United States and began her PhD at Harvard University under the guidance of Professor Lauren Williams.

Among several riddles she has unraveled over the years, Burcroff helped unify different mathematical approaches to understand why systems work so reliably. Think of it as finding out that two seemingly different sets of instructions actually lead to the same place. By demonstrating their connections, her work has revealed an underlying, overarching mathematical architecture — a finding that later helped Burcroff and her collaborators tackle one of the many enduring riddles in her field.

Generalized cluster algebras form the basis for describing geometries that appear throughout physics. For more than a decade, mathematicians suspected these building blocks were created only by adding up ingredients and never subtracting, although no one was able to prove it. In 2024, Burcroff and her collaborators published a paper demonstrating that these spaces have nice positivity properties by developing a new way to count and organize patterns — helping untangle a long-standing conjecture, whose potential implications span from predicting particle collision outcomes to describing the spaces appearing in string theory.
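For a flavor of the objects involved, here is a minimal illustration drawn from the standard rank-2 cluster algebra setting (not the paper's generalized construction): new cluster variables are produced by exchange relations, and positivity asserts that the results are Laurent polynomials with nonnegative coefficients.

```latex
% Rank-2 exchange relation (b > 0 an integer):
x_1 \, x_1' = x_2^{\,b} + 1
\quad\Longrightarrow\quad
x_1' = \frac{x_2^{\,b} + 1}{x_1}.
% The Laurent phenomenon: every cluster variable is a Laurent polynomial
% in the initial variables x_1, x_2; positivity says its coefficients are
% all nonnegative, i.e., the variables are built "without subtracting."
```

The conjecture Burcroff and her collaborators settled asserts an analogous no-subtraction property in the broader, generalized setting.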

These findings have earned Burcroff numerous prestigious awards including a National Science Foundation Graduate Research Fellowship, a British Marshall Scholarship, and a Jack Kent Cooke Graduate Fellowship.

Despite the tremendous number of problems she has solved, new ones keep arising.

“Every time you unlock one of them, it gives you a bunch of paths to new connected mysteries,” Burcroff says.

At MIT, she is working with Postnikov, whose research on combinatorics and positivity-type problems has presented a radically different way to calculate fundamental quantities in quantum field theory.

“Burcroff is conducting research across disciplinary boundaries,” says Postnikov.

He adds: “I am sure that she will have a lot of fruitful interactions with researchers in other MIT departments.”

Burcroff’s goal is to apply combinatorial techniques to broader physical contexts and direct applications, especially those with implications for topics like mirror symmetry, a principle in string theory suggesting that very different-looking geometric spaces can be mathematically equivalent.

While “doing math is 99 percent trying something and failing,” Burcroff says it is this same challenge that keeps her motivated. To her, it is not about reaching a destination, but rather about the continuous “process of discovery,” one she hopes to share beyond the typical classroom.

To make math more accessible, especially among underrepresented groups, Burcroff has worked with mentorship programs including Harvard’s Real Representations and Math Includes, Cambridge Girls’ Angle, and MIT PRIMES. During her time as a postdoc, she hopes to continue this outreach and explore ways to get involved with other support groups at MIT’s Department of Mathematics.

Study: Gene circuits reshape DNA folding and affect how genes are expressed

Thu, 04/30/2026 - 2:00pm

When a gene is turned on in a cell, it creates a ripple effect along the DNA strand, changing the physical structure of the strand. A new study by MIT researchers shows that these ripples can stimulate or suppress neighboring genes.

These effects, which result from the winding or unwinding of neighboring DNA, are determined by the order of genes along a strand of DNA. Genes upstream of the active gene are usually turned up, while those downstream are inhibited.

The new findings offer guidance that could make it easier to control the output of synthetic gene circuits. By altering the relative ordering and arrangement of genes, or “gene syntax,” researchers could create circuits that synergize to maximize their output, or that alternate the output of two different genes.

“This is really exciting because we can coordinate gene expression in ways that just weren’t possible before,” says Katie Galloway, an assistant professor of chemical engineering at MIT. “Syntax will be really useful for dynamic circuits. Now we have the ability to select not only the biochemistry of circuits, but also the physical design to support dynamics.”

Galloway is the senior author of the study, which appears today in Science. MIT postdoc Christopher Johnstone PhD ’26 is the paper’s lead author. Other authors include MIT graduate student Kasey Love, members of the lab of Brandon DeKosky, an MIT associate professor of chemical engineering, and researchers from Peter Zandstra’s lab at the University of British Columbia and the labs of Christine Mummery and Richard Davis at Leiden University Medical Center in the Netherlands.

Gene syntax

When a gene is copied into messenger RNA, or “transcribed,” the double-stranded DNA helix must be unwound so that an enzyme called RNA polymerase can access the DNA and start copying it. That unwinding leads to physical changes in the structure of the DNA strand.

Upstream of the gene, DNA becomes looser, while downstream, it becomes more tightly wound. These changes affect RNA polymerase’s ability to access the DNA: Upstream of an active gene, it’s easier for the enzyme to attach; downstream, it’s more difficult.

In a study published in 2022, Galloway and Johnstone performed computational modeling that explored how these biophysical changes might influence gene expression. They studied three different arrangements, or types of syntax: tandem, divergent, and convergent.

Most synthetic gene circuits are designed in a tandem arrangement, with one gene followed by another downstream. In a divergent arrangement, neighboring genes are transcribed in opposite directions (away from each other), and in convergent syntax, they are transcribed toward each other.
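As a toy sketch of the orientation rule described above (transcription loosens DNA upstream of an active gene, enhancing expression there, and overwinds it downstream, suppressing it), one can tabulate the sign of each neighbor effect per syntax. The +1/-1 encoding is an illustrative assumption, not the authors' model:

```python
# Toy encoding: +1 = neighbor's expression enhanced, -1 = suppressed,
# following the upstream-loosened / downstream-overwound rule in the text.

def neighbor_effects(syntax):
    """(effect of gene A on gene B, effect of gene B on gene A)."""
    if syntax == "divergent":   # genes transcribed away from each other:
        return (+1, +1)         # each neighbor sits upstream -> mutual boost
    if syntax == "convergent":  # genes transcribed toward each other:
        return (-1, -1)         # each neighbor sits downstream -> mutual suppression
    if syntax == "tandem":      # A then B in the same direction:
        return (-1, +1)         # B is downstream of A (suppressed);
                                # A is upstream of B (boosted)
    raise ValueError(f"unknown syntax: {syntax}")
```

Under this encoding, only the divergent arrangement boosts both genes, matching the modeling result described below.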

The modeling suggested that the divergent arrangement was most likely to produce circuits where both genes are expressed at a high level. Tandem arrangements were predicted to result in the downstream gene being suppressed by the upstream gene.

In the new study, the researchers wanted to see if they could observe these predicted phenomena in human cells.

“Normally, we think about gene circuits and pieces of DNA as these lines that we draw, but they’re polymers that have physical characteristics,” Galloway says. “The thing that we were trying to solve in this paper was: When you put two genes on the same piece of DNA, how does their physical interaction become coupled?”

The researchers engineered circuits that each contained two genes, in either a tandem, divergent, or convergent configuration, into human cell lines and human induced pluripotent stem cells.

The results confirmed what their modeling had predicted: In divergent circuits, expression of both genes was amplified. In tandem circuits, turning on the upstream gene suppressed the expression of the downstream gene.

These effects produced as much as a 25-fold increase or decrease in gene expression, and they could be seen at distances of up to 2,000 base pairs between genes.

Using a high-resolution genome mapping technique called Region Capture Micro-C, the researchers were also able to analyze how the DNA structure changed when nearby genes were being transcribed.

As predicted, they found that the DNA regions downstream from an active gene formed tightly twisted structures known as plectonemes, similar to the tangles seen in a twisted telephone cord. These structures make it harder for RNA polymerase to bind to DNA.

To engineer these cells, the researchers used a new system they developed with the LUMC team called STRAIGHT-IN Dual, which allows them to efficiently insert two genes into the same DNA strand at both alleles. This system is being reported in a second paper published today, in Nature Biomedical Engineering.

Precise control

The new findings could help guide the design of synthetic gene circuits, which are usually designed to be controlled by biochemical interactions with activator or repressor molecules. Now, circuit designers can also perform biophysical manipulations to enhance or repress gene expression.

“Everyone thinks about the components they need, and the biochemical properties they need to build a circuit,” Galloway says. “Now, we have added the physical construction of those components, which is going to change how those biochemical units are interpreted.”

As a demonstration of one potential application, the researchers built synthetic circuits containing the genes for two segments of a novel antibody discovered by the DeKosky lab, used to treat yellow fever, and incorporated them into human cells. As they expected, the divergent syntax produced larger quantities of the yellow fever antibody.

Galloway’s lab has also used this approach to optimize the output of synthetic gene circuits they previously reported that could be used to deliver gene therapy or to reprogram adult cells into other cell types.

This strategy could also be used to build a variety of other types of dynamic synthetic circuits, such as toggle switches, oscillators, or pulse generators, for any application that requires precise control over gene expression.

“If you want coordinated expression, a divergent circuit is great. If you want something that’s either/or, you can imagine using a convergent or tandem circuit, so when one turns on, the other turns off, and you can alternate pulses,” Galloway says. “Now that we understand the syntax, I think this will pave the way for us to program dynamic behaviors.”

The research was funded, in part, by the National Institutes of Health, the National Institute for General Medical Sciences, a National Science Foundation CAREER Award, the Pershing Square Foundation, the Air Force Research Laboratory, and the Koch Institute Support (core) Grant from the National Cancer Institute.

The hidden structure behind a widely used class of materials

Thu, 04/30/2026 - 2:00pm

Materials called relaxor ferroelectrics have been used for decades in technologies like ultrasounds, microphones, and sonar systems. Their unique properties come from their atomic structure, but that structure has stubbornly eluded direct measurement.

Now a team of researchers from MIT and elsewhere has directly characterized the three-dimensional atomic structure of a relaxor ferroelectric for the first time. The findings, reported today in Science, provide a framework for refining models used to design next-generation computing, energy, and sensing devices.

“Now that we have a better understanding of exactly what’s going on, we can better predict and engineer the properties we want materials to achieve,” says corresponding author James LeBeau, MIT’s Kyocera Professor of Materials Science and Engineering. “The research community is still developing methods to engineer these materials, but in order to predict the properties those materials will have, you have to know if your model is right.”

In their paper, the researchers describe how they used an emerging technique to reveal the distribution of electric charges in the material, with a surprising result.

“We realized the chemical disorder we observed in our experiments was not fully considered previously,” say co-first authors Michael Xu PhD ’25 and Menglin Zhu, who are both postdocs at MIT. “Working with our collaborators, we were able to merge the experimental observations with simulations to refine the models and better predict what we see in experiments.”

Joining Zhu, Xu, and LeBeau on the paper are Colin Gilgenbach and Bridget R. Denzer, MIT PhD students in materials science and engineering; Yubo Qi, an assistant professor at the University of Alabama at Birmingham; Jieun Kim, an assistant professor at the Korea Advanced Institute of Science and Technology; Jiahao Zhang, a former PhD student at the University of Pennsylvania; Lane W. Martin, a professor at Rice University; and Andrew M. Rappe, a professor at the University of Pennsylvania.

Probing disordered materials

Leading simulations of relaxor ferroelectrics suggest that when an electric field is applied, the interactions of positively and negatively charged atoms in different nanoregions of the material help give rise to exceptional energy storage and sensing capabilities. The details of those nanoregions have been impossible to directly measure to date.

For their Science paper, the researchers studied a relaxor ferroelectric material used in sensors, actuators, and defense systems that is a lead magnesium niobate-lead titanate alloy. They used an emerging measurement technique, called multi-slice electron ptychography (MEP), in which researchers move a nanoscale-sized probe of high-energy electrons over a material and measure the resulting electron diffraction patterns.

“We do this in a sequential way, and at each position, we acquire a diffraction pattern,” Zhu explains. “That creates regions of overlap, and that overlap has enough information to use an algorithm to iteratively reconstruct three-dimensional information about the object and the electron wave function.”
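The scan geometry in the quote can be sketched in a few lines; the probe diameter and step size below are made-up numbers, and real MEP reconstruction is far more involved:

```python
# Minimal sketch of a ptychographic raster scan: a probe of diameter d is
# stepped by s < d, so neighboring positions overlap. That redundancy is
# what lets the iterative algorithm reconstruct the object.

def scan_positions(n_x, n_y, step_nm):
    """Grid of (x, y) probe positions; one diffraction pattern per position."""
    return [(i * step_nm, j * step_nm) for j in range(n_y) for i in range(n_x)]

def linear_overlap_fraction(probe_diameter_nm, step_nm):
    """Fraction of the probe width shared by adjacent positions along a row."""
    return max(0.0, 1.0 - step_nm / probe_diameter_nm)

positions = scan_positions(4, 4, step_nm=2.0)  # 16 diffraction patterns
overlap = linear_overlap_fraction(probe_diameter_nm=10.0, step_nm=2.0)  # 0.8
```

With no overlap (step equal to or larger than the probe width), each diffraction pattern stands alone and the reconstruction problem becomes ill-posed.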

The technique revealed a hierarchy of chemical and polar structures that spanned from atomic to mesoscopic scales. The researchers also found that many regions of differing polarization in the material were much smaller than predicted by the leading simulations. The researchers then fed their new data back into those computer simulations and refined the models to better reflect their findings under different conditions.

“Previously, these models basically had random regions of polarization, but they didn’t tell you how those regions correlate with each other,” Xu says. “Now we can tell you that information, and we can see how individual chemical species modulate polarization depending on the charge state of atoms.”

Toward better materials

Zhu says the paper demonstrates the potential of electron ptychography to study complex materials and opens up new avenues of research into complex, disordered materials.

“This study is the first time in the electron microscope that we’ve been able to directly connect the three-dimensional polar structure of relaxor ferroelectrics with molecular dynamics calculations,” Xu says. “It further proves you can get three-dimensional information out of the sample using this technique.”

The researchers also believe the approach could one day help engineer materials with advanced electronic behaviors for a range of improved memory storage, sensing, and energy technologies.

“Materials science is incorporating more complexity into the material design process — whether that’s for metal alloys or semiconductors — as AI has improved and our computational tools have become more advanced,” LeBeau says. “But if our models aren’t accurate enough and we have no way to validate them, it’s garbage in garbage out. This technique helps us understand why the material behaves the way it does and validate our models.”

The work was supported, in part, by the U.S. Army Research Laboratory, the U.S. Office of Naval Research, the U.S. Department of War, and a National Science Foundation Graduate Research Fellowship. The researchers also used MIT.nano facilities.

How neurons sense bacteria in the gut

Thu, 04/30/2026 - 1:30pm

Recent studies suggest animals and people alike have close and complex relationships with the bacteria around and within them. The human gut microbiome, for instance, has been associated with both depression and Parkinson’s disease. To move beyond association toward an understanding of the actual mechanisms that enable the bacterial microbiome to influence brain function, a new study by neuroscientists in The Picower Institute for Learning and Memory at MIT examines the mechanisms at work in a model “bacterial specialist,” the nematode Caenorhabditis elegans.

In the new study in Current Biology, the team, led by Picower Fellow Cassi Estrem in the Picower Institute for Learning and Memory lab of Associate Professor Steven Flavell, identifies the specific chemicals that a key neuron in C. elegans senses, both in the bacteria that it eats and in the bacteria that it needs to avoid ingesting.

“In our bodies, our own cells are outnumbered by the bacterial cells living in and on us. There’s an increasing recognition that this has a profound impact on human health,” says Flavell, an investigator of the Howard Hughes Medical Institute and faculty member of MIT’s Department of Brain and Cognitive Sciences. “It’s been clear that there are links for some time. Our study aimed to identify the hard mechanisms of how a host nervous system is affected by bacteria in the alimentary canal.”

Achieving a fundamental mechanistic understanding of how neurons interact with bacteria could help improve attempts to intervene in or manipulate those interactions with therapeutic drugs or supplements, Flavell says.

Mmm … sugar

Flavell calls C. elegans a “bacterial specialist” because the tiny, transparent worm has evolved to eat bacteria as its diet, while also needing to avoid pathogenic bacteria that can prove to be its undoing. This has led it to develop a nervous system especially well-attuned to sorting out what is food and what is foe. In 2019, the lab discovered that the neuron NSM, which projects into the worm’s alimentary canal, employs two “acid sensing ion channels” (ASICs) to detect when certain bacteria have been ingested. Notably, those ion channels are analogous to ones found in neurons in humans. When NSM detects yummy bacteria, it releases serotonin that causes the worm to increase its feeding rate and slow its slithering so that it can stay to dine on the surrounding meal.

To really understand how this works, Flavell and Estrem realized they needed to know exactly what the ion channels are detecting in the bacteria. To get started, they exposed worms to 20 different kinds of bacteria the worms are known to encounter and found that they all activated NSM activity to varying extents. Then they broke the bacteria down into more and more specific chemical components to see which one or ones triggered NSM. The experiments ruled out many components, including DNA, lipids, proteins, and simple sugars, and instead found that it’s specifically the polysaccharide sugars that coat many bacteria that drive NSM activation. In particular, in gram-positive bacteria, a chemical called peptidoglycan activated NSM. In gram-negative bacteria, a different polysaccharide was apparently in play.

Estrem and Flavell’s team also ran experiments showing that polysaccharides from bacteria in general, and peptidoglycan in particular, not only trigger NSM electrical activity, but actually promote the feeding and slowing behaviors. They also showed that genetically knocking out the ASICs abolished these responses. In all, they demonstrated that polysaccharide and peptidoglycan detection is sufficient to trigger the worm’s behaviors, and requires the ASICs.

Better not eat this

Having shown what exactly triggers the worms to recognize their bacterial food, the researchers wondered whether they could also pinpoint a danger sign the worm finds in harmful bacteria. For these experiments, they carefully used Serratia marcescens, a bacterium that’s also infectious for humans. Some strains of the bacteria have a red color, while others do not. The red ones, which have a pigment called prodigiosin, tend to be much more lethal for worms. In their testing, the researchers found that when NSM detected the non-pigmented bacteria, the neuron still activated and the worms still ingested the bacteria, but when prodigiosin was present, NSM did not activate and the worm did not pump it in or slow down to eat.

Adding prodigiosin to normally yummy bacteria also suppressed NSM’s usual response. In other words, the worms have evolved their digestive behavior (and the detectors within NSM) to avoid ingesting a chemical specifically associated with danger.

Flavell says it’s likely that some of the fundamental mechanisms highlighted in the new paper will inform studies of similar mechanisms in other animals.

“We developed a way of identifying these pathways by studying this organism that specializes in bacterial detection and displays robust responses,” Flavell explains. “But there’s no reason these pathways should be limited to C. elegans. The molecular players we identified are found in many species, including mammals.”

In addition to Estrem and Flavell, the paper’s other authors are Malvika Dua, Colby Fees, Greg Hoeprich, Matthew Au, Bruce Goode, and Lingyi Deng.

The National Institutes of Health, the McKnight Foundation, the Alfred P. Sloan Foundation, the Howard Hughes Medical Institute, and The Freedom Together Foundation provided support for the study.

A materials scientist’s playground

Thu, 04/30/2026 - 1:20pm

Scientists and engineers around the world are working to improve quantum bits, or qubits, the minuscule building blocks of the quantum computer. Qubits are incredibly sensitive, making it easy for errors to be introduced, lowering device yield. But a new cluster tool at MIT.nano introduces capabilities that will allow researchers to continue advancements in qubit performance.

Passersby outside MIT.nano may have recently noticed a complex-looking piece of equipment being installed in the first-floor cleanroom. What looks like a sci-fi movie prop is actually a state-of-the-art, custom-built molecular beam epitaxy (MBE) system: a physical vapor deposition tool that operates under ultra-high vacuum to produce high-quality thin films. With the ability to grow different crystalline materials on a wafer, the tool will support quantum researchers and materials scientists by allowing them to study how film growth affects the properties of the materials used in making qubits.

“To realize the full promise of quantum computing, we need to build qubits that are robust, reproducible, and extensible,” says William D. Oliver, the Henry Ellis Warren (1894) Professor of Electrical Engineering and Computer Science and professor of physics at MIT. “To date, most of the improvements to superconducting qubit performance are traceable to circuit design — essentially, designing qubit circuits that are less sensitive to their environmental noise. However, those improvements have largely run their course. Going forward, we need to address the fundamental materials science and fabrication engineering required to reduce the sources of environmental noise. This multi-chamber, cassette-loaded, 200-millimeter wafer MBE system is exactly the right tool at the right time. And there’s no place better to do this research than at MIT.nano.”

That is because MIT.nano is well-prepared to receive this type of system, offering the physical space, climate controls, policies and procedures for researchers, and expert staff to manage the lab. Through an equipment support plan, Oliver’s Engineering Quantum Systems (EQuS) group is able to install and run the tool inside MIT.nano, a high-performance, safe, and reliable environment.

A controlled environment is essential for the MBE. “Think of this system like an inverted International Space Station (ISS),” explains Patrick Strohbeen, research scientist in the EQuS group. “The ISS is a small chamber of atmosphere surrounded by the vacuum of space. This MBE system is a chamber of space-level vacuum surrounded by atmosphere.” That vacuum of space is kept at a steady negative 90 degrees Celsius, which enables precise growth of thin films on an atomic scale. It is the largest single deposition chamber (1-meter diameter) the manufacturer, DCA, has sold in the United States.

The journey of a wafer

The system, which in total takes up 600 square feet, is made up of six chambers. First is the load lock, where the wafer is placed into the system and brought down from atmospheric pressure to near the vacuum level of space. Then, the wafer enters the distribution center. This space acts like a central hub, transferring the wafers to other chambers. Next is the deposition, or “growth,” chamber. This is where the system’s primary function takes place — depositing materials, specifically atoms of superconducting metal, onto a substrate, typically silicon. From there, it moves to the oxidation chamber, which facilitates the growth of key ceramic materials for qubits. A fifth storage chamber can hold an additional 10 wafers within the vacuum.

A unique aspect of this system is its sixth chamber, designed for X-ray photoelectron spectroscopy (XPS). Using this chamber, researchers can shoot a photon in the form of X-rays at the surface and, when it hits the surface, it will excite the electron inside the material so that the electron jumps out and is picked up by a sensor that then tells the researcher about the environment the electron came from. As individual layers of atoms are put down in the growth chamber, scientists can move the wafer to the XPS chamber to measure changes in the material structure of the film and back again, all while keeping it inside the vacuum space.
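The XPS measurement just described rests on the standard photoelectric energy balance: the electron's binding energy is the photon energy minus its measured kinetic energy, minus the spectrometer's work function. A small sketch with illustrative numbers (an Al K-alpha lab source; none of these values are from the article):

```python
# Standard photoelectric energy balance behind XPS:
# binding energy = photon energy - measured kinetic energy - work function.
# The specific numbers below are illustrative assumptions.

def binding_energy_eV(photon_eV, kinetic_eV, work_function_eV):
    """Electron binding energy inferred from an XPS measurement."""
    return photon_eV - kinetic_eV - work_function_eV

# Al K-alpha photons at 1486.6 eV; a detected electron at 1000.0 eV kinetic
# energy with a 4.5 eV work function implies a ~482.1 eV binding energy,
# which is what identifies the chemical environment the electron came from.
e_b = binding_energy_eV(photon_eV=1486.6, kinetic_eV=1000.0, work_function_eV=4.5)
```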

Why is this important? “The quantum community has excellent device physicists and device engineers,” says Strohbeen. “The last piece of the puzzle is: We need to understand the materials platform that we’re using for these devices.” The buried interfaces, so far, have been understudied due to the difficulty in probing them, he explains.

For those of us who are not MBE experts, think of the snow that fell in Massachusetts this winter. How can you tell how much ice is on the pavement without removing all of the snow on top of it? And without changing the natural setting where the snow, ice, and pavement meet? With this system, specifically the XPS chamber, scientists can study the interfaces of buried materials without disturbing the physical or chemical environments. “It is a materials scientist’s playground,” jokes Strohbeen — a controlled space where researchers can learn about and explore materials’ interactions within layers of atoms.

Why MIT.nano?

When Oliver, who is also the director of the MIT Center for Quantum Engineering, secured the MBE Quantum, the next question was where to put it. Enter MIT.nano. Housing 45,000 square feet of cleanroom, this facility exists at MIT to support complex, sensitive equipment with both the infrastructure and the staff needed to maintain it.

“MIT.nano’s ultra-stable building utilities and lab environment are exactly what is needed to support a system that demands extreme repeatability and purity,” says Nick Menounos, MIT.nano associate director of infrastructure. “The success of this installation grew from the early collaboration. Professor Oliver engaged the MIT.nano team in the procurement process almost two years in advance. That foresight, combined with the infrastructure momentum we gained from the recent CHIPS Act project, meant that we could prepare the cleanroom perfectly. We compressed the installation process that normally takes several months and had this extraordinary machine running in under three weeks.”

“From the very beginning, the MIT.nano staff were helpful, knowledgeable, and willing to go above and beyond to make this happen,” says Oliver. “While the MIT.nano facility is certainly an infrastructural crown jewel at MIT, it’s the MIT.nano staff who make it the national treasure it is today.”

Positioning the MBE Quantum in the cleanroom helps the team focus on scalability and device yield. Humidity and particle count, two things carefully measured and maintained at MIT.nano, can affect the output of the device. Minimizing as many variables as possible is key to improving qubit performance. The cleanroom also allows for new device research because an array of fabrication and metrology tools are available without having to leave the clean environment.

“We’re really excited to see what we can do with it,” says Strohbeen. “We bought it as a materials science tool, and it will also be a device development tool due to the flexibility of having it in the cleanroom.”

The MBE system was purchased through a combination of grants from the Army Research Office (ARO) and from the Laboratory for Physical Sciences (LPS). The ARO grant, a Defense University Research Instrumentation Program grant, is the premier grant from ARO for funding large capital equipment purchases that should prove disruptive in technologically relevant areas. It arrives at an important time on campus, as one of MIT’s strategic initiatives — the MIT Quantum Initiative — aims to apply quantum breakthroughs to the most consequential challenges in science, technology, industry, and national security.

Making the case for curiosity-driven science

Thu, 04/30/2026 - 12:00am

“The thing that really struck me when I came to MIT and strikes me every single day is the stuff that’s going on here is amazing. The science, the engineering… every day I hear something that makes my jaw drop,” remarked President Sally Kornbluth during a live discussion with Lizzie O’Leary of Slate’s “What Next: TBD” podcast.

Kornbluth spoke about everything from the importance of curiosity-driven science and why basic science is critical to our nation’s future, to AI and education, and even bravely joined O’Leary in a rendition of the Williams College song, “The Mountains,” in honor of their shared alma mater.

“We are in this time of incredible uncertainty,” said Kornbluth of the current state of higher education and funding for scientific research. “What we are trying to do is keep the science robust.”

Harking back to her time at Duke and her love of college basketball, she likened addressing skepticism about higher education in Washington, D.C., to a combination of zone coverage and man-to-man defense. She emphasized: “As one of the top institutions in the world, it’s part of our responsibility to articulate the importance of science. Behind the scenes, I am – along with many other [university] presidents – I am in D.C. all the time now. I want to speak to Congressmen and women, Senators, people in the executive branch to explain the importance of what we are doing.”

Kornbluth emphasized that the pipeline of basic science that flows from U.S. universities is a critical asset for the country, cautioning that continuing to strain this pipeline could have enormous negative ramifications for the U.S. down the line.

“If you think about research done in this country, it’s done in universities, it’s done in national labs, and it’s done in industry,” said Kornbluth. Universities are where most of the science with a long pathway to impact, requiring patience, starts. She pointed to immunotherapy for cancer, which began 30-40 years ago in basic immunology research, as an example. With that pipeline being drained, what does the future hold for new cancer therapies or new AI and quantum technologies?

Kornbluth also underscored that uncertainty and lost funding are having a “huge impact on the talent pipeline,” delving into the unique role universities play in training graduate students, who are the next generation of scientific researchers. “We hear, ‘Oh it would be okay if research was more in industry.’ I say, ‘Would you fly on a plane with a pilot who had never flown?’ How do they think people learn how to do research? We are training the next generation… and we are losing funding for them.” She added: “I think we are going to see reverberations for many decades if we don’t rectify that issue.”

When asked how she and her colleagues are working to keep research moving forward, Kornbluth explained that at MIT, “we have tried to find alternative ways to elevate the science. We have a series of presidential initiatives that cut across the whole campus in things like health and life sciences, quantum, humanities and social sciences. The notion is that we are trying to create new opportunities.”

Still, she acknowledged that losses from the endowment tax and diminished federal funding are painful. “There are only four schools right now that are subject to the 8% endowment tax, which is a tax on our earnings. For us, that means $240 million dollars a year plus other losses in grants. So, let’s say the whole thing is, we budgeted for a loss of $300 million a year on a $1.7 billion budget… That has definitely had an impact on us. No question about it. 

“The other thing about it is again there’s all this uncertainty. Our investigators are writing a ton of grants. They don’t know if they’re going off into the void or they really have the sort of competitive opportunities they’ve always had in the past.”

Asked why universities did not see this moment coming, Kornbluth offered a few thoughts. “Look at MIT – 30,000 companies have come from MIT. When you look at something like that, why would you think any government that wants economic flourishing in their country would come after MIT?” she reflected. “It just never would have occurred to us.”

Turning towards the rapid advances in AI, and how the field is impacting education, Kornbluth noted that at MIT and other universities, “we have to focus on the human element, we have to educate our students, they need to know how to write and do mathematics…they have to view AI as a tool to augment their capabilities. That is how we are thinking about it.”

In the course of the conversation, Kornbluth also expressed her unwavering support for international students, noting that most want the opportunity to stay and contribute to research in the U.S. after graduation. “The talent brought to us through our international community is unbelievable. We can attract the very best in the world. You can bet when they talk about competitiveness with China, for example, in AI, quantum, etc., they are not sitting around in China saying, ‘Oh it’s great America is taking all our students.’ They’re thinking, ‘It’s great that America doesn’t want to take as many of our students anymore because we can train them.’ It’s a competitive issue that we really should lean into.”
