MIT Latest News

MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.

MIT engineers develop a magnetic transistor for more energy-efficient electronics

Wed, 09/23/2025 - 10:32am

Transistors, the building blocks of modern electronics, are typically made of silicon. Because it’s a semiconductor, this material can control the flow of electricity in a circuit. But silicon has fundamental physical limits that restrict how compact and energy-efficient a transistor can be.

MIT researchers have now replaced silicon with a magnetic semiconductor, creating a magnetic transistor that could enable smaller, faster, and more energy-efficient circuits. The material’s magnetism strongly influences its electronic behavior, leading to more efficient control of the flow of electricity. 

The team used a novel magnetic material and an optimization process that reduces the material’s defects, which boosts the transistor’s performance.

The material’s unique magnetic properties also allow for transistors with built-in memory, which would simplify circuit design and unlock new applications for high-performance electronics.

“People have known about magnets for thousands of years, but there are very limited ways to incorporate magnetism into electronics. We have shown a new way to efficiently utilize magnetism that opens up a lot of possibilities for future applications and research,” says Chung-Tao Chou, an MIT graduate student in the departments of Electrical Engineering and Computer Science (EECS) and Physics, and co-lead author of a paper on this advance.

Chou is joined on the paper by co-lead author Eugene Park, a graduate student in the Department of Materials Science and Engineering (DMSE); Julian Klein, a DMSE research scientist; Josep Ingla-Aynes, a postdoc in the MIT Plasma Science and Fusion Center; Jagadeesh S. Moodera, a senior research scientist in the Department of Physics; and senior authors Frances Ross, TDK Professor in DMSE; and Luqiao Liu, an associate professor in EECS, and a member of the Research Laboratory of Electronics; as well as others at the University of Chemistry and Technology in Prague. The paper appears today in Physical Review Letters.

Overcoming the limits

In an electronic device, silicon semiconductor transistors act like tiny light switches that turn a circuit on and off, or amplify weak signals in a communication system. They do this using a small input voltage.

But a fundamental physical limit of silicon semiconductors prevents a transistor from operating below a certain voltage, which hinders its energy efficiency.

To make more efficient electronics, researchers have spent decades working toward magnetic transistors that utilize electron spin to control the flow of electricity. Electron spin is a fundamental property that enables electrons to behave like tiny magnets.

So far, scientists have mostly been limited to using certain magnetic materials. These lack the favorable electronic properties of semiconductors, constraining device performance.

“In this work, we combine magnetism and semiconductor physics to realize useful spintronic devices,” Liu says.

The researchers replace the silicon in the surface layer of a transistor with chromium sulfur bromide, a two-dimensional material that acts as a magnetic semiconductor.

Due to the material’s structure, researchers can switch between two magnetic states very cleanly. This makes it ideal for use in a transistor that smoothly switches between “on” and “off.”

“One of the biggest challenges we faced was finding the right material. We tried many other materials that didn’t work,” Chou says.

They discovered that changing these magnetic states modifies the material’s electronic properties, enabling low-energy operation. And unlike many other 2D materials, chromium sulfur bromide remains stable in air.

To make a transistor, the researchers pattern electrodes onto a silicon substrate, then carefully align and transfer the 2D material on top. They use tape to pick up a tiny piece of material, only a few tens of nanometers thick, and place it onto the substrate.

“A lot of researchers will use solvents or glue to do the transfer, but transistors require a very clean surface. We eliminate all those risks by simplifying this step,” Chou says.

Leveraging magnetism

This lack of contamination enables their device to outperform existing magnetic transistors. Most others can only create a weak magnetic effect, changing the flow of current by a few percent or less. Their new transistor can switch or amplify the electric current by a factor of 10.

They use an external magnetic field to change the magnetic state of the material, switching the transistor using significantly less energy than would usually be required.

The material also allows them to control the magnetic states with electric current. This is important because engineers cannot apply magnetic fields to individual transistors in an electronic device. They need to control each one electrically.

The material’s magnetic properties could also enable transistors with built-in memory, simplifying the design of logic or memory circuits.

A typical memory device has a magnetic cell to store information and a transistor to read it out. Their method can combine both into one magnetic transistor.

“Now, not only are transistors turning on and off, they are also remembering information. And because we can switch the transistor with greater magnitude, the signal is much stronger so we can read out the information faster, and in a much more reliable way,” Liu says.

Building on this demonstration, the researchers plan to further study the use of electrical current to control the device. They are also working to make their method scalable so they can fabricate arrays of transistors.

This research was supported, in part, by the Semiconductor Research Corporation, the U.S. Defense Advanced Research Projects Agency (DARPA), the U.S. National Science Foundation (NSF), the U.S. Department of Energy, the U.S. Army Research Office, and the Czech Ministry of Education, Youth, and Sports. The work was partially carried out at the MIT.nano facilities.

“This is science!” – MIT president talks about the importance of America’s research enterprise on GBH’s Boston Public Radio

Fri, 02/06/2026 - 12:38pm

In a wide-ranging conversation, MIT President Sally Kornbluth joined Jim Braude and Margery Eagan live in studio for GBH’s Boston Public Radio on Thursday, February 5. They talked about MIT, the pressures facing America’s research enterprise, the importance of science, the 2023 Congressional hearing on antisemitism, and more – including Kornbluth’s experience as a Type 1 diabetic.

Reflecting on how research and innovation in the treatment of diabetes have advanced over decades of work, leading to markedly better patient care, Kornbluth exclaims: “This is science!”

With new financial pressures facing universities, increased competition from outside the U.S. for talented students and scholars, and unprecedented pressures on university leaders and campuses, co-host Eagan asks Kornbluth what she thinks will happen in years to come.

“For us, one of the hardest things now is the endowment tax,” remarks Kornbluth. “That is $240 million a year. Think about how much science you can get for $240 million a year. Are we managing it? Yes. Are we still forging ahead on all of our exciting initiatives? Yes. But we’ve had to reconfigure things. We’ve had to merge things. And it’s not the way we should be spending our time and money.”   

Watch and listen to the full episode on YouTube. President Kornbluth appears one hour and seven minutes into the broadcast.

Following Kornbluth’s appearance, MIT Assistant Professor John Urschel – also a former offensive lineman for the Baltimore Ravens – joined Edgar B. Herwick III, host of GBH’s newest show, The Curiosity Desk, to talk about his love of his family, linear algebra, and football.

On how he eventually chose math over football, Urschel quips: “Well, I hate to break it to you, I like math better… let me tell you, when I started my PhD at MIT, I just fell in love with the place. I fell in love with this idea of being in this environment [where] everyone loves math, everyone wants to learn. I was just constantly excited every day showing up.”

Prof. Urschel appears about 2 hours and 40 minutes into the webcast on YouTube.

Coming up on Curiosity Desk later this month…

Airing weekday afternoons from 1-2 p.m., The Curiosity Desk will welcome additional MIT guests in the coming weeks. On Thursday, Feb. 12, Anette “Peko” Hosoi, Pappalardo Professor of Mechanical Engineering, and Jerry Lu MFin ’24, a former researcher at the MIT Sports Lab, visit The Curiosity Desk to discuss their work using AI to help Olympic figure skaters improve their jumps.

Then, on Thursday, Feb. 19, Professors Sangeeta Bhatia and Angela Belcher talk with Herwick about their research to improve diagnostics for ovarian cancer. We learn that ovarian cancer starts in the fallopian tubes about 80 percent of the time, and how this points the way to a whole new approach to diagnosing and treating the disease.


I’m walking here! A new model maps foot traffic in New York City

Fri, 02/06/2026 - 5:00am

Early in the 1969 film “Midnight Cowboy,” Dustin Hoffman, playing the character of Ratso Rizzo, crosses a Manhattan street and angrily bangs on the hood of an encroaching taxi. Hoffman’s line — “I’m walking here!” — has since been repeated by thousands of New Yorkers. Where cars and people mix, tensions rise.

And yet, governments and planners across the U.S. haven’t thoroughly tracked where cars and people mix. Officials have long measured vehicle traffic closely while largely ignoring pedestrian traffic. Now, an MIT research group has assembled a routable dataset of sidewalks, crosswalks, and footpaths for all of New York City — a massive mapping project that underpins the first complete model of pedestrian activity in any U.S. city.

The model could help planners decide where to make pedestrian infrastructure and public space investments, and illuminate how development decisions could affect non-motorized travel in the city. The study also helps pinpoint locations throughout the city where there are both lots of pedestrians and high pedestrian hazards, such as traffic crashes, and where streets or intersections are most in need of upgrades.

“We now have a first view of foot traffic all over New York City and can check planning decisions against it,” says Andres Sevtsuk, an associate professor in MIT’s Department of Urban Studies and Planning (DUSP), who led the study. “New York has very high densities of foot traffic outside of its most well-known areas.”

Indeed, one upshot of the model is that while Manhattan has the most foot traffic per block, the city’s other boroughs contain plenty of pedestrian-heavy stretches of sidewalk and could probably use more investment on behalf of walkers.

“Midtown Manhattan has by far the most foot traffic, but we found there is a probably unintentional Manhattan bias when it comes to policies that support pedestrian infrastructure,” Sevtsuk says. “There are a whole lot of streets in New York with very high pedestrian volumes outside of Manhattan, whether in Queens or the Bronx or Brooklyn, and we’re able to show, based on data, that a lot of these streets have foot-traffic levels similar to many parts of Manhattan.”

And, in an advance that could help cities anywhere, the model was used to quantify vehicle crashes involving pedestrians not only as raw totals, but on a per-pedestrian basis.

“A lot of cities put real investments behind keeping pedestrians safe from vehicles by prioritizing dangerous locations,” Sevtsuk says. “But that’s not only where the most crashes occur. Here we are able to calculate accidents per pedestrian, the risk people face, and that broadens the picture in terms of where the most dangerous intersections for pedestrians really are.”
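
To make that arithmetic concrete, here is a minimal sketch with invented numbers (the locations and counts below are illustrative, not the study’s data): a busy location can log more total crashes yet be far safer per pedestrian than a quiet one.

```python
# Illustrative only: invented numbers showing why per-pedestrian risk can
# rank locations differently than raw crash totals do.
locations = {
    "busy square":  {"crashes": 30, "peds_per_hour": 1500},
    "highway ramp": {"crashes": 12, "peds_per_hour": 60},
}
for name, d in locations.items():
    risk = d["crashes"] / d["peds_per_hour"]  # crashes per pedestrian-hour
    print(f"{name}: {d['crashes']} crashes, per-pedestrian risk {risk:.3f}")
# The square has more crashes in total (30 vs. 12), but the ramp is ten
# times riskier per pedestrian (0.200 vs. 0.020).
```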

The paper, “Spatial Distribution of Foot-traffic in New York City and Applications for Urban Planning,” is published today in Nature Cities.

The authors are Sevtsuk, the Charles and Ann Spaulding Associate Professor of Urban Science and Planning in DUSP and head of the City Design and Development Group; Rounaq Basu, an assistant professor at Georgia Tech; Liu Liu, a PhD student at the City Form Lab in DUSP; Abdulaziz Alhassan, a PhD student at MIT’s Center for Complex Engineering Systems; and Justin Kollar, a PhD student at MIT’s Leventhal Center for Advanced Urbanism in DUSP.

Walking everywhere

The current study continues work Sevtsuk and his colleagues have conducted charting and modeling pedestrian traffic around the world, from Melbourne to MIT’s Kendall Square neighborhood in Cambridge, Massachusetts. Many cities collect some pedestrian count data — but not much. And while officials usually request vehicle traffic impact assessments for new development plans, they rarely study how new developments or infrastructure proposals affect pedestrians.

However, New York City does devote part of its Department of Transportation (DOT) to pedestrian issues, and about 41 percent of trips city-wide are made on foot, compared to just 28 percent by vehicle, likely the highest such ratio in any big U.S. city. To calibrate the model, the MIT team used pedestrian counts that New York City’s DOT recorded in 2018 and 2019, covering up to 1,000 city sidewalk segments on weekdays and roughly 450 segments on weekends.

The researchers were able to test the model — which incorporates a wide range of factors — against New York City’s pedestrian-count data. Once calibrated, the model could expand foot-traffic estimates throughout the whole city, not just the points where pedestrian counts were observed.

The results showed that in Midtown Manhattan, there are about 1,697 pedestrians, on average, per sidewalk segment per hour during the evening peak of foot traffic, the highest in the city. The financial district in lower Manhattan comes in second, at 740 pedestrians per hour, with Greenwich Village third at 656.

Other parts of Manhattan register lower levels of foot traffic, however. Morningside Heights and East Harlem register 226 and 227 pedestrians per block per hour. And that’s similar to, or lower than, some parts of other boroughs. Brooklyn Heights has 277 pedestrians per sidewalk segment per hour; University Heights in the Bronx has 263; Borough Park in Brooklyn and the Grand Concourse in the Bronx average 236; and a slice of Queens in the Corona area averages 222. Many other spots are over 200.

The model overlays many different types of pedestrian journeys for each time period and shows that people are generally headed to work and schools in the morning, but conduct more varied types of trips in mid-day and the evening, as they seek out amenities or conduct social or recreational visits.

“Because of jobs, transit stops are the biggest generators of foot traffic in the morning peak,” Liu observes. “In the evening peak, of course people need to get home too, but patterns are much more varied, and people are not just returning from work or school. More social and recreational travel happens after work, whether it’s getting together with friends or running errands for family or family care trips, and that’s what the model detects too.”

On the safety front, pedestrians face danger in many places, not just the intersections with the most total accidents. On a per-pedestrian basis, many parts of the city turn out to be riskier than the locations that record the most pedestrian-related crashes.

“Places like Times Square and Herald Square in Manhattan may have numerous crashes, but they have very high pedestrian volumes, and it’s actually relatively safe to walk there,” Basu says. “There are other parts of the city, around highway off-ramps and heavy car infrastructure, including the relatively low-density borough of Staten Island, which turn out to have a disproportionate number of crashes per pedestrian.”

Taking the model across the U.S.

The MIT model stands a solid chance of being applied in New York City policy and planning circles, since officials there are aware of the research and have been regularly communicating with the MIT team about it.

For his part, Sevtsuk emphasizes that, as distinct as New York City might be, the MIT model can be applied to cities and towns anywhere in the U.S. As it happens, the team is working with municipal officials in two other places at the moment. One is Los Angeles, where city officials are not only trying to upgrade pedestrian and public transit mobility for regular daily trips, but also making plans to handle an influx of visitors for the 2028 Summer Olympics.

Meanwhile, the state of Maine is working with the MIT team to evaluate pedestrian movement in over 140 of its cities and towns, to better understand the kinds of upgrades and safety improvements it could make for pedestrians across the state. Sevtsuk hopes that still other places will take notice of the New York City study and recognize that the tools are in place to analyze foot traffic more broadly in U.S. cities, to address the urgent need to decarbonize cities, and to start balancing what he views as the disproportionate focus on car travel prevalent in 20th-century urban planning.

“I hope this can inspire other cities to invest in modeling foot traffic and mapping pedestrian infrastructure as well,” Sevtsuk says. “Very few cities make plans for pedestrian mobility or examine rigorously how future developments will impact foot traffic. But they can. Our models serve as a test bed for making future changes.”

Some early life forms may have breathed oxygen well before it filled the atmosphere

Fri, 02/06/2026 - 12:00am

Oxygen is a vital and constant presence on Earth today. But that hasn’t always been the case. It wasn’t until around 2.3 billion years ago that oxygen became a permanent fixture in the atmosphere, during a pivotal period known as the Great Oxidation Event (GOE), which set the evolutionary course for oxygen-breathing life as we know it today.

A new study by MIT researchers suggests some early forms of life may have evolved the ability to use oxygen hundreds of millions of years before the GOE. The findings may represent some of the earliest evidence of aerobic respiration on Earth.

In a study appearing today in the journal Palaeogeography, Palaeoclimatology, Palaeoecology, MIT geobiologists traced the evolutionary origins of a key enzyme that enables organisms to use oxygen. The enzyme is found in the vast majority of aerobic, oxygen-breathing life forms today. The team discovered that this enzyme evolved during the Mesoarchean — a geological period that predates the Great Oxidation Event by hundreds of millions of years.

The team’s results may help to explain a longstanding puzzle in Earth’s history: Why did it take so long for oxygen to build up in the atmosphere?

The very first producers of oxygen on the planet were cyanobacteria — microbes that evolved the ability to use sunlight and water to photosynthesize, releasing oxygen as a byproduct. Scientists have determined that cyanobacteria emerged around 2.9 billion years ago. The microbes, then, were presumably churning out oxygen for hundreds of millions of years before the Great Oxidation Event. So, where did all of cyanobacteria’s early oxygen go?

Scientists suspect that rocks may have drawn down a large portion of oxygen early on, through various geochemical reactions. The MIT team’s new study now suggests that biology may have also played a role.

The researchers found that some organisms may have evolved the enzyme to use oxygen hundreds of millions of years before the Great Oxidation Event. This enzyme may have enabled the organisms living near cyanobacteria to gobble up any small amounts of oxygen that the microbes produced, in turn delaying oxygen’s accumulation in the atmosphere for hundreds of millions of years.

“This does dramatically change the story of aerobic respiration,” says study co-author Fatima Husain, a postdoc in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “Our study adds to this very recently emerging story that life may have used oxygen much earlier than previously thought. It shows us how incredibly innovative life is at all periods in Earth’s history.”

The study’s other co-authors include Gregory Fournier, associate professor of geobiology at MIT, along with Haitao Shang and Stilianos Louca of the University of Oregon.

First respirers

The new study adds to a long line of work at MIT aiming to piece together oxygen’s history on Earth. This body of research has helped to pin down the timing of the Great Oxidation Event as well as the first evidence of oxygen-producing cyanobacteria. The overall understanding that has emerged is that oxygen was first produced by cyanobacteria around 2.9 billion years ago, while the Great Oxidation Event — when oxygen finally accumulated enough to persist in the atmosphere — took place much later, around 2.33 billion years ago.

For Husain and her colleagues, this apparent delay between oxygen’s first production and its eventual persistence inspired a question.

“We know that the microorganisms that produce oxygen were around well before the Great Oxidation Event,” Husain says. “So it was natural to ask, was there any life around at that time that could have been capable of using that oxygen for aerobic respiration?”

If there were in fact some life forms that were using oxygen, even in small amounts, they might have played a role in keeping oxygen from building up in the atmosphere, at least for a while.

To investigate this possibility, the MIT team looked to heme-copper oxygen reductases, a set of enzymes that are essential for aerobic respiration. The enzymes reduce oxygen to water, and they are found in the majority of aerobic, oxygen-breathing organisms today, from bacteria to humans.

“We targeted the core of this enzyme for our analyses because that’s where the reaction with oxygen is actually taking place,” Husain explains.

Tree dates

The team aimed to trace the enzyme’s evolution backward in time to see when the enzyme first emerged to enable organisms to use oxygen. They first identified the enzyme’s genetic sequence and then used an automated search tool to look for this same sequence in databases containing the genomes of millions of different species of organisms.

“The hardest part of this work was that we had too much data,” Fournier says. “This enzyme is just everywhere and is present in most modern living organisms. So we had to sample and filter the data down to a dataset that was representative of the diversity of modern life and also small enough to do computation with, which is not trivial.”

The team ultimately isolated the enzyme’s sequence from several thousand modern species and mapped these sequences onto an evolutionary tree of life, based on what scientists know about when each species likely evolved and branched off. They then looked through this tree for species whose fossil records might offer independent information about their origins.

If, for instance, there is a fossil record for a particular organism on the tree, that record would include an estimate of when that organism appeared on Earth. The team would use that fossil’s age to “pin” a date to that organism on the tree. In a similar way, they could place pins across the tree to effectively tighten their estimates for when in time the enzyme evolved from one species to the next.
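
A toy sketch of that pinning logic, with invented lineages and dates (not the study’s data): an undated ancestral node must be at least as old as the oldest fossil-dated lineage beneath it, so each pin tightens the bounds on when the enzyme arose.

```python
# Toy illustration of fossil "pins" on a tree; all names and dates invented.
tree = {
    "enzyme_ancestor": ["lineage_A", "lineage_B"],
    "lineage_A": [],
    "lineage_B": [],
}
fossil_pins_mya = {"lineage_A": 2900, "lineage_B": 2500}  # millions of years ago

def min_age(node: str) -> float:
    """Lower bound on a node's age: the oldest pin among it and its descendants."""
    pinned = fossil_pins_mya.get(node, 0.0)
    children = tree.get(node, [])
    return max([pinned] + [min_age(child) for child in children])

print(min_age("enzyme_ancestor"))  # 2900.0 in this toy example
```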

In the end, the researchers were able to trace the enzyme as far back as the Mesoarchean — a geological era that lasted from 3.2 to 2.8 billion years ago. It’s around this time that the team suspects the enzyme — and organisms’ ability to use oxygen — first emerged. This period predates the Great Oxidation Event by several hundred million years.

The new findings suggest that, shortly after cyanobacteria evolved the ability to produce oxygen, other living things evolved the enzyme to use that oxygen. Any such organism that happened to live near cyanobacteria would have been able to quickly take up the oxygen that the bacteria churned out. These early aerobic organisms may have then played some role in preventing oxygen from escaping to the atmosphere, delaying its accumulation for hundreds of millions of years.

“Considered all together, MIT research has filled in the gaps in our knowledge of how Earth’s oxygenation proceeded,” Husain says. “The puzzle pieces are fitting together and really underscore how life was able to diversify and live in this new, oxygenated world.”

This research was supported, in part, by the Research Corporation for Science Advancement Scialog program.

T. Alan Hatton receives Bernard M. Gordon Prize for Innovation in Engineering and Technology Education

Thu, 02/05/2026 - 5:30pm

The National Academy of Engineering (NAE) has announced T. Alan Hatton, MIT’s Ralph Landau Professor of Chemical Engineering Practice, Post-Tenure, as the recipient of the 2026 Bernard M. Gordon Prize for Innovation in Engineering and Technology Education, recognizing his transformative leadership of the Institute’s David H. Koch School of Chemical Engineering Practice. The award citation highlights his efforts to advance “an immersive, industry-integrated educational model that has produced thousands of engineering leaders, strengthening U.S. technological competitiveness and workforce readiness.”

The Gordon Prize recognizes “new modalities and experiments in education that develop effective engineering leaders.” The prize is awarded annually and carries a $500,000 cash award, half granted to the recipient and the remainder granted to their institution to support the recognized innovation.

“As engineering challenges become more complex and interdisciplinary, education must evolve alongside them,” says Paula Hammond, Institute Professor and dean of the School of Engineering. “Under Alan’s leadership, the Practice School has demonstrated how rigorous academics, real industrial problems, and student responsibility can be woven together into an educational experience that is both powerful and adaptable. His work offers a compelling blueprint for the future of engineering education.”

Hatton served as director of the Practice School for 36 years, from 1989 until his retirement in 2025. When he assumed the role, the program worked with a limited number of host companies, largely within traditional chemical industries. Over time, Hatton reshaped the program’s scope and structure, enabling it to operate across continents and sectors to offer students exposure to diverse technologies, organizational cultures, and geographic settings.

“The MIT Chemical Engineering Practice School represents a level of experiential learning that few programs anywhere can match,” says Kristala L. J. Prather, the Arthur D. Little Professor and head of the Department of Chemical Engineering. “This recognition reflects not only Alan’s extraordinary personal contributions, but also the enduring value of a program that prepares students to deliver impact from their very first day as engineers.”

Central to Hatton’s approach was a deliberate strategy of adaptability. He introduced a model in which new companies are recruited regularly as Practice School hosts, broadening participation while keeping the program aligned with emerging technologies and industry needs. He also strengthened on-campus preparation by launching an intensive project management course during MIT’s Independent Activities Period (IAP) — training that has since become foundational for students entering complex, team-based industrial environments.

This forward-looking vision is shared by current Practice School leadership. Fikile Brushett, Ralph Landau Professor of Chemical Engineering Practice and director of the program, emphasizes that Hatton’s legacy is not a static one. “Alan consistently positioned the Practice School to respond to change — whether in technology, industry expectations, or educational practice,” Brushett says. “The Gordon Prize provides an opportunity to further evolve the program while staying true to its core principles of immersion, rigor, and partnership.”

In recognition of Hatton’s service, the department established the T. Alan Hatton Fund in fall 2025 with support from Practice School alumni. The fund is dedicated to helping launch new Practice School stations, lowering barriers for emerging partners and sustaining the program’s ability to engage with a broad and diverse set of industries.

Learning that delivers value on both sides

The Practice School’s impact extends well beyond the classroom. Student teams are embedded directly within host organizations — often in manufacturing plants or research and development centers — where they tackle open-ended technical problems under real operational constraints. Sponsors routinely cite tangible outcomes from these projects, including improved processes, reduced costs, and new technical directions informed by MIT-level analysis.

For students, the experience offers something difficult to replicate in traditional academic settings: sustained responsibility for complex work, direct interaction with industry professionals, and repeated opportunities to present, defend, and refine their ideas. The result is a training environment that closely mirrors professional engineering practice, while retaining the reflective depth of an academic program.

A program shaped by history — and by change

The Practice School was established in 1916 to complement classroom instruction with hands-on industrial experience, an idea that was unconventional at the time. More than a century later, the program has not only endured but continually reinvented itself, expanding far beyond its early focus on regional chemical manufacturing.

Today, Practice School students work with companies around the world in fields that include pharmaceuticals, food production, energy, advanced materials, software, and finance. The program remains a defining feature of graduate education in MIT’s Department of Chemical Engineering, linking research strengths with the practical demands of industry.

Participation in the Practice School is a required component of the department’s Master of Science in Chemical Engineering Practice (MSCEP) and PhD/ScD Chemical Engineering Practice (CEP) programs. After completing coursework, students attend two off-campus stations, spending two months at each site. Teams of two or three students work on month-long projects, culminating in formal presentations and written reports delivered to host organizations. Recent stations have included placements with Evonik in Germany, AstraZeneca in Maryland, EGA in the United Arab Emirates, AspenTech in Massachusetts, and Shell Technology Center and Dimensional Energy in Texas.

“I’m deeply honored by this recognition,” Hatton says. “The Practice School has always been about learning through responsibility — placing students in situations where their work matters. This award will help MIT build on that foundation and explore ways to extend the model so it can serve even more students and partners in the years ahead.”

Hatton obtained his BS and MS degrees in chemical engineering at the University of Natal in Durban, South Africa, before spending three years as a researcher at the Council for Scientific and Industrial Research in Pretoria. He later earned his PhD at the University of Wisconsin at Madison and joined the MIT faculty in 1982 as an assistant professor.

Over the course of his career at MIT, Hatton helped extend the Practice School model beyond campus through his involvement in the Singapore–MIT Alliance for Research and Technology and the Cambridge–MIT Institute, contributing to the development of practice-based engineering education in international settings. He also served as co-director of the MIT Energy Initiative’s Low-Carbon Energy Center focused on carbon capture, utilization, and storage.

Hatton has long been recognized for his commitment to education and service. From 1983 to 1986, he served as a junior faculty housemaster (now known as an associate head of house) in MacGregor House and received MIT’s Everett Moore Baker Teaching Award in 1983. His professional honors include being named a founding fellow of the American Institute of Medical and Biological Engineering and an honorary professorial fellow at the University of Melbourne in Australia.

In addition to his educational leadership, Hatton has made substantial contributions to the broader engineering community, chairing multiple national and international conferences in the areas of colloids and separation processes and delivering numerous plenary, keynote, and invited lectures worldwide.

Hatton will formally receive the Bernard M. Gordon Prize at a ceremony hosted by the National Academy of Engineering at MIT on April 30.

A satellite language network in the brain

Thu, 02/05/2026 - 5:10pm

The ability to use language to communicate is one of the things that makes us human. At MIT’s McGovern Institute for Brain Research, scientists led by Evelina Fedorenko have defined an entire network of areas within the brain dedicated to this ability, which work together when we speak, listen, read, write, or sign.

Much of the language network lies within the brain’s neocortex, where many of our most sophisticated cognitive functions are carried out. Now, Fedorenko’s lab, which is part of MIT’s Department of Brain and Cognitive Sciences, has identified language-processing regions within the cerebellum, extending the language network to a part of the brain better known for helping to coordinate the body’s movements. Their findings are reported Jan. 21 in the journal Neuron.

“It’s like there’s this region in the cerebellum that we’ve been forgetting about for a long time,” says Colton Casto, a graduate student at Harvard and MIT who works in Fedorenko’s lab. “If you’re a language researcher, you should be paying attention to the cerebellum.”

Imaging the language network

There have been hints that the cerebellum makes important contributions to language. Some functional imaging studies detected activity in this area during language use, and people who suffer damage to the cerebellum sometimes experience language impairments. But no one had been able to pin down exactly which parts of the cerebellum were involved, or tease out their roles in language processing.

To get some answers, Fedorenko’s lab took a systematic approach, applying the same methods they used to map the language network in the neocortex. For 15 years, the lab has captured functional brain imaging data as volunteers carried out various tasks inside an MRI scanner. By monitoring brain activity as people engaged in different kinds of language tasks, like reading sentences or listening to spoken words, as well as non-linguistic tasks, like listening to noise or memorizing spatial patterns, the team has been able to identify parts of the brain that are exclusively dedicated to language processing.

Their work shows that everyone’s language network uses the same neocortical regions. The precise anatomical location of these regions varies, however, so to study the language network in any individual, Fedorenko and her team must map that person’s network inside an MRI scanner using their language-localizer tasks.

Satellite language network

While the Fedorenko lab has largely focused on how the neocortex contributes to language processing, their brain scans also capture activity in the cerebellum. So Casto revisited those scans, analyzing cerebellar activity from more than 800 people to look for regions involved in language processing. Fedorenko points out that teasing out the individual anatomy of the language network turned out to be particularly vital in the cerebellum, where neurons are densely packed and areas with different functional specializations sit very close to one another. Ultimately, Casto was able to identify four cerebellar areas that were consistently engaged during language use.

Three of these regions were clearly involved in language use, but also reliably became engaged during certain kinds of non-linguistic tasks. Casto says this was a surprise, because all the core language areas in the neocortex are dedicated exclusively to language processing. The researchers speculate that the cerebellum may be integrating information from different parts of the cortex — a function that could be important for many cognitive tasks.

“We’ve found that language is distinct from many, many other things — but at some point, complex cognition requires everything to work together,” Fedorenko says. “How do these different kinds of information get connected? Maybe parts of the cerebellum serve that function.”

The researchers also found a spot in the right posterior cerebellum with activity patterns that more closely echoed those of the language network in the neocortex. This region stayed silent during non-linguistic tasks, but became active during language use. For all of the linguistic activities that Casto analyzed, this region exhibited patterns of activity that were very similar to what the lab has seen in neocortical components of the language network. “Its contribution to language seems pretty similar,” Casto says. The team describes this area as a “cerebellar satellite” of the language network.

Still, the researchers think it’s unlikely that neurons in the cerebellum, which are organized very differently than those in the neocortex, replicate the precise function of other parts of the language network. Fedorenko’s team plans to explore the function of this satellite region more deeply, investigating whether it may participate in different kinds of tasks.

The researchers are also exploring the possibility that the cerebellum is particularly important for language learning — playing an outsized role during development, or when people learn languages later in life.

Fedorenko says the discovery may also have implications for treating language impairments caused when an injury or disease damages the brain’s neocortical language network. “This area may provide a very interesting potential target to help recovery from aphasia,” Fedorenko says.

Currently, researchers are exploring the possibility that non-invasively stimulating language-associated parts of the brain might promote language recovery. “This right cerebellar region may be just the right thing to potentially stimulate to up-regulate some of that function that’s lost,” Fedorenko says.

Helping AI agents search to get the best results out of large language models

Thu, 02/05/2026 - 4:30pm

Whether you’re a scientist brainstorming research ideas or a CEO hoping to automate a task in human resources or finance, you’ll find that artificial intelligence tools are becoming the assistants you didn’t know you needed. In particular, many professionals are tapping into the talents of semi-autonomous software systems called AI agents, which can call on AI at specific points to solve problems and complete tasks.

AI agents are particularly effective when they use large language models (LLMs) because those systems are powerful, efficient, and adaptable. One way to program such technology is by describing in code what you want your system to do (the “workflow”), including when it should use an LLM. If you were a software company trying to revamp your old codebase to use a more modern programming language for better optimizations and safety, you might build a system that uses an LLM to translate the codebase one file at a time, testing each file as you go.
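
To make that concrete, here is a minimal Python sketch of such a workflow. The `call_llm` helper is an assumption, a placeholder for whatever LLM provider you use, and the file-by-file loop mirrors the codebase-translation example described above.

```python
# Minimal sketch of the workflow described above, assuming a hypothetical
# call_llm() helper that stands in for your LLM provider's API.
from pathlib import Path
import subprocess

def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to an LLM and return its text reply."""
    raise NotImplementedError("wire up an LLM provider here")

def translate_codebase(src_dir: str, dst_dir: str) -> None:
    """Translate a Java codebase to Python one file at a time, testing as we go."""
    for src_file in sorted(Path(src_dir).rglob("*.java")):
        translated = call_llm(
            "Translate this Java file to idiomatic Python:\n" + src_file.read_text()
        )
        dst_file = Path(dst_dir) / (src_file.stem + ".py")
        dst_file.write_text(translated)
        # Test each file as we go; here, just check that it parses.
        check = subprocess.run(["python", "-m", "py_compile", str(dst_file)])
        if check.returncode != 0:
            print(f"{src_file.name}: translation failed its check")
```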

But what happens when LLMs make mistakes? You’ll want the agent to backtrack to make another attempt, incorporating lessons it learned from previous mistakes. Coding this up can take as much effort as implementing the original agent; if your system for translating a codebase contained thousands of lines of code, then you’d be making thousands of lines of code changes or additions to support the logic for backtracking when LLMs make mistakes. 

To save programmers time and effort, researchers with MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Asari AI have developed a framework called “EnCompass.” 

With EnCompass, you no longer have to make these changes yourself. Instead, when EnCompass runs your program, it automatically backtracks if LLMs make mistakes. EnCompass can also clone the program runtime to make multiple attempts in parallel in search of the best solution. In full generality, EnCompass searches over the possible execution paths your agent could take, given the different possible outputs of all its LLM calls, looking for the path that yields the best solution.

All you have to do is annotate the locations where you may want to backtrack or clone the program runtime, and record any information that may be useful to the strategy used to search over the different possible execution paths of your agent (the search strategy). You can then specify the search strategy separately, either using one that EnCompass provides out of the box or, if desired, implementing your own.

“With EnCompass, we’ve separated the search strategy from the underlying workflow of an AI agent,” says lead author Zhening Li ’25, MEng ’25, who is an MIT electrical engineering and computer science (EECS) PhD student, CSAIL researcher, and research consultant at Asari AI. “Our framework lets programmers easily experiment with different search strategies to find the one that makes the AI agent perform the best.” 

EnCompass was used for agents implemented as Python programs that call LLMs, where it demonstrated noticeable code savings. EnCompass reduced the coding effort of implementing search by up to 80 percent across agents, such as agents for translating code repositories and for discovering transformation rules of digital grids. In the future, EnCompass could enable agents to tackle large-scale tasks, including managing massive code libraries, designing and carrying out science experiments, and creating blueprints for rockets and other hardware.

Branching out

When programming your agent, you mark particular operations — such as calls to an LLM — where results may vary. These annotations are called “branchpoints.” If you imagine your agent program as generating a single plot line of a story, then adding branchpoints turns the story into a choose-your-own-adventure story game, where branchpoints are locations where the plot branches into multiple future plot lines. 
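
As an illustration only, marking a branchpoint might look something like the following; the `branchpoint` decorator here is invented for exposition (it is not EnCompass’s actual interface), and `call_llm` is the placeholder from the earlier sketch.

```python
# Invented for illustration; not EnCompass's actual API.
def branchpoint(fn):
    """Mark `fn` as a point where execution may branch (no-op stub here)."""
    return fn

@branchpoint
def translate_file(java_source: str) -> str:
    # Each LLM call is a spot where the "story" can branch: different
    # outputs correspond to different future plot lines the search explores.
    return call_llm("Translate this Java file to Python:\n" + java_source)
```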

You can then specify the strategy that EnCompass uses to navigate that story game, in search of the best possible ending to the story. This can include launching parallel threads of execution or backtracking to a previous branchpoint when you get stuck in a dead end.

Users can also plug in a few common search strategies that EnCompass provides out of the box, or define their own custom strategy. For example, you could opt for Monte Carlo tree search, which builds a search tree by balancing exploration and exploitation, or beam search, which keeps the best few outputs from every step. EnCompass makes it easy to experiment with different approaches to find the strategy that maximizes the likelihood of successfully completing your task.
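
For reference, here is what the beam-search idea itself looks like, sketched generically in Python and independently of EnCompass: expand each candidate into several successors, score them, and keep only the best few at every step. The `expand` and `score` callables are assumptions standing in for, say, sampling several LLM outputs and running tests on each.

```python
# Generic beam search, sketched independently of any framework.
from typing import Callable, List, Tuple

def beam_search(
    initial: str,
    expand: Callable[[str], List[str]],  # e.g., sample several LLM outputs
    score: Callable[[str], float],       # e.g., fraction of tests that pass
    steps: int,
    width: int = 3,
) -> str:
    beam: List[Tuple[float, str]] = [(score(initial), initial)]
    for _ in range(steps):
        candidates = [(score(nxt), nxt) for _, cur in beam for nxt in expand(cur)]
        if not candidates:
            break
        # Keep only the `width` highest-scoring candidates for the next step.
        candidates.sort(key=lambda pair: pair[0], reverse=True)
        beam = candidates[:width]
    return max(beam, key=lambda pair: pair[0])[1]
```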

The coding efficiency of EnCompass

So just how code-efficient is EnCompass for adding search to agent programs? According to researchers’ findings, the framework drastically cut down how much programmers needed to add to their agent programs to add search, helping them experiment with different strategies to find the one that performs the best.

For example, the researchers applied EnCompass to an agent that translates a repository of code from the Java programming language, which is commonly used to program apps and enterprise software, to Python. They found that implementing search with EnCompass — mainly involving adding branchpoint annotations and annotations that record how well each step did — required 348 fewer lines of code (about 82 percent) than implementing it by hand. They also demonstrated how EnCompass enabled them to easily try out different search strategies, identifying the best strategy to be a two-level beam search algorithm, achieving an accuracy boost of 15 to 40 percent across five different repositories at a search budget of 16 times the LLM calls made by the agent without search.

“As LLMs become a more integral part of everyday software, it becomes more important to understand how to efficiently build software that leverages their strengths and works around their limitations,” says co-author Armando Solar-Lezama, who is an MIT professor of EECS and CSAIL principal investigator. “EnCompass is an important step in that direction.”

The researchers add that EnCompass targets agents where a program specifies the steps of the high-level workflow; the current iteration of their framework is less applicable to agents that are entirely controlled by an LLM. “In those agents, instead of having a program that specifies the steps and then using an LLM to carry out those steps, the LLM itself decides everything,” says Li. “There is no underlying programmatic workflow, so you can execute inference-time search on whatever the LLM invents on the fly. In this case, there’s less need for a tool like EnCompass that modifies how a program executes with search and backtracking.”

Li and his colleagues plan to extend EnCompass to more general search frameworks for AI agents. They also plan to test their system on more complex tasks to refine it for real-world uses, including at companies. What’s more, they’re evaluating how well EnCompass helps agents work with humans on tasks like brainstorming hardware designs or translating much larger code libraries. For now, EnCompass is a powerful building block that enables humans to tinker with AI agents more easily, improving their performance.

“EnCompass arrives at a timely moment, as AI-driven agents and search-based techniques are beginning to reshape workflows in software engineering,” says Carnegie Mellon University Professor Yiming Yang, who wasn’t involved in the research. “By cleanly separating an agent’s programming logic from its inference-time search strategy, the framework offers a principled way to explore how structured search can enhance code generation, translation, and analysis. This abstraction provides a solid foundation for more systematic and reliable search-driven approaches to software development.”  

Li and Solar-Lezama wrote the paper with two Asari AI researchers: Caltech Professor Yisong Yue, an advisor at the company; and senior author Stephan Zheng, who is the founder and CEO. Their work was supported by Asari AI.

The team’s work was presented at the Conference on Neural Information Processing Systems (NeurIPS) in December.

New vaccine platform promotes rare protective B cells

Thu, 02/05/2026 - 2:00pm

A longstanding goal of immunotherapies and vaccine research is to induce antibodies in humans that neutralize deadly viruses such as HIV and influenza. Of particular interest are antibodies that are “broadly neutralizing,” meaning they can in principle eliminate multiple strains of a virus such as HIV, which mutates rapidly to evade the human immune system.

Researchers at MIT and the Scripps Research Institute have now developed a vaccine that generates a significant population of rare precursor B cells that are capable of evolving to produce broadly neutralizing antibodies. Expanding these cells is the first step toward a successful HIV vaccine.

The researchers’ vaccine design uses DNA instead of protein as a scaffold to fabricate a virus-like particle (VLP) displaying numerous copies of an engineered HIV immunogen called eOD-GT8, which was developed at Scripps. This vaccine generated substantially more precursor B cells in a humanized mouse model compared to a protein-based virus-like particle that has shown significant success in human clinical trials.

Preclinical studies showed that the DNA-VLP generated eight times more of the desired, or “on-target,” B cells than the clinical product, which was already shown to be highly potent.

“We were all surprised that this already outstanding VLP from Scripps was significantly outperformed by the DNA-based VLP,” says Mark Bathe, an MIT professor of biological engineering and an associate member of the Broad Institute of MIT and Harvard. “These early preclinical results suggest a potential breakthrough as an entirely new, first-in-class VLP that could transform the way we think about active immunotherapies, and vaccine design, across a variety of indications.”

The researchers also showed that the DNA scaffold doesn’t induce an immune response of its own when displaying the engineered HIV antigen. This means the DNA VLP might be used to deliver multiple antigens when boosting strategies are needed, as with challenging diseases such as HIV.

“The DNA-VLP allowed us for the first time to assess whether B cells targeting the VLP itself limit the development of ‘on target’ B cell responses — a longstanding question in vaccine immunology,” says Darrell Irvine, a professor of immunology and microbiology at the Scripps Research Institute and a Howard Hughes Medical Institute Investigator.

Bathe and Irvine are the senior authors of the study, which appears today in Science. The paper’s lead author is Anna Romanov PhD ’25.

Priming B cells

The new study is part of a major ongoing global effort to develop active immunotherapies and vaccines that expand specific lineages of B cells. All humans have the necessary genes to produce the right B cells that can neutralize HIV, but they are exceptionally rare and require many mutations to become broadly neutralizing. If exposed to the right series of antigens, however, these cells can in principle evolve to eventually produce the requisite broadly neutralizing antibodies.

In the case of HIV, one such target antibody, called VRC01, was discovered by National Institutes of Health researchers in 2010 when they studied humans living with HIV who did not develop AIDS. This set off a major worldwide effort to develop an HIV vaccine that would induce this target antibody, but this remains an outstanding challenge.

Generating HIV-neutralizing antibodies is believed to require three stages of vaccination, each one initiated by a different antigen that helps guide B cell evolution toward the correct target, the native HIV envelope protein gp120.

In 2013, William Schief, a professor of immunology and microbiology at Scripps, reported an engineered antigen called eOD-GT6 that could be used for the first step in this process, known as priming. His team subsequently upgraded the antigen to eOD-GT8. Vaccination with eOD-GT8 arrayed on a protein VLP generated early antibody precursors to VRC01 both in mice and more recently in humans, a key first step toward an HIV vaccine.

However, the protein VLP also generated substantial “off-target” antibodies that bound the irrelevant, and potentially highly distracting, protein VLP itself. This could have unknown consequences on propagating target B cells of interest for HIV, as well as other challenging immunotherapy applications.

The Bathe and Irvine labs set out to test if they could use a particle made from DNA, instead of protein, to deliver the priming antigen. These nanoscale particles are made using DNA origami, a method that offers precise control over the structure of synthetic DNA and allows researchers to attach viral antigens at specific locations.

In 2024, Bathe and Daniel Lingwood, an associate professor at Harvard Medical School and a principal investigator at the Ragon Institute, showed this DNA VLP could be used to deliver a SARS-CoV-2 vaccine in mice to generate neutralizing antibodies. From that study, the researchers learned that the DNA scaffold does not induce antibodies to the VLP itself, unlike proteins. They wondered whether this might also enable a more focused antibody response.

Building on these results, Romanov, co-advised by Bathe and Irvine, set out to apply the DNA VLP to the Scripps HIV priming vaccine, based on eOD-GT8.

“Our earlier work with SARS-CoV-2 antigens on DNA-VLPs showed that DNA-VLPs can be used to focus the immune response on an antigen of interest. This property seemed especially useful for a case like HIV, where the B cells of interest are exceptionally rare. Thus, we hypothesized that reducing the competition among other irrelevant B cells (by delivering the vaccine on a silent DNA nanoparticle) may help these rare cells have a better chance to survive,”  Romanov says.

Initial studies in mice, however, showed the vaccine did not induce sufficient early B cell response to the first, priming dose.

After redesigning the DNA VLPs, Romanov and colleagues found that a smaller-diameter version with 60 instead of 30 copies of the engineered antigen dramatically outperformed the clinical protein VLP construct, both in the overall number of antigen-specific B cells and in the fraction of B cells that were on-target to the specific HIV domain of interest. This was a result of improved retention of the particles in B cell follicles in lymph nodes and better collaboration with helper T cells, which promote B cell survival.

Overall, these improvements enabled the particles to generate eightfold more on-target B cells than the vaccine consisting of eOD-GT8 carried by a protein scaffold. Another key finding, elucidated by the Lingwood lab, was that the DNA particles promoted VRC01 precursor B cells toward the VRC01 antibody more efficiently than the protein VLP.

“In the field of vaccine immunology, the question of whether B cell responses to a targeted protective epitope on a vaccine antigen might be hindered by responses to neighboring off-target epitopes on the same antigen has been under intense investigation,” says Schief, who is also vice president for protein design at Moderna. “There are some data from other studies suggesting that off-target responses might not have much impact, but this study shows quite convincingly that reducing off-target responses by using a DNA VLP can improve desired on-target responses.”

“While nanoparticle formulations have been great at boosting antibody responses to various antigens, there is always this nagging question of whether competition from B cells specific for the particle’s own structural antigens won’t get in the way of antibody responses to targeted epitopes,” says Gabriel Victora, a professor of immunology, virology, and microbiology at Rockefeller University, who was not involved in the study. “DNA-based particles that leverage B cells’ natural tolerance to nucleic acids are a clever idea to circumvent this problem, and the research team’s elegant experiments clearly show that this strategy can be used to make difficult epitopes easier to target.”

A “silent” scaffold

The fact that the DNA-VLP scaffold doesn’t induce scaffold-specific antibodies means that it could be used to carry second and potentially third antigens needed in the vaccine series, as the researchers are currently investigating. It also might offer significantly improved on-target antibodies for numerous antigens that are outcompeted and dominated by off-target, irrelevant protein VLP scaffolds in this or other applications.

“A breakthrough of this paper is the rigorous, mechanistic quantification of how DNA-VLPs can ‘focus’ antibody responses on target antigens of interest, which is a consequence of the silent nature of this DNA-based scaffold we’ve previously shown is stealth to the immune system,” Bathe says.

More broadly, this new type of VLP could be used to generate other kinds of protective antibody responses against pandemic threats such as flu, or potentially against chemical warfare agents, the researchers suggest. Alternatively, it might be used as an active immunotherapy to generate antibodies that target amyloid beta or tau protein to treat degenerative diseases such as Alzheimer’s, or to generate antibodies that target noxious chemicals such as opioids or nicotine to help people suffering from addiction.

The research was funded by the National Institutes of Health; the Ragon Institute of MGH, MIT, and Harvard; the Howard Hughes Medical Institute; the National Science Foundation; the Novo Nordisk Foundation; a Koch Institute Support (core) Grant from the National Cancer Institute; the National Institute of Environmental Health Sciences; the Gates Foundation Collaboration for AIDS Vaccine Discovery; the IAVI Neutralizing Antibody Center; the National Institute of Allergy and Infectious Diseases; and the U.S. Army Research Office through MIT’s Institute for Soldier Nanotechnologies.

“Essential” torch heralds the start of the 2026 Winter Olympics

Thu, 02/05/2026 - 8:00am

Before the thrill of victory; before the agony of defeat; before the gold medalist’s national anthem plays, there is the Olympic torch. A symbol of unity, friendship, and the spirit of competition, the torch links today’s Olympic Games to their heritage in ancient Greece.

The torch for the 2026 Milano Cortina Olympic Games and Paralympic Games was designed by Carlo Ratti, a professor of the practice in the MIT Department of Urban Studies and Planning and the director of the Senseable City Lab in the MIT School of Architecture and Planning.

A native of Turin, Italy, and a designer and architect respected worldwide, Ratti has seen his work and that of his firm, Carlo Ratti Associati, featured at international expositions such as the French Pavilion at the Osaka Expo (World’s Fair) in 2025 and the Italian Pavilion at the Dubai Expo in 2020. Their design for The Cloud, a 400-foot-tall spherical structure that would have served as a unique observation deck, was a finalist for the 2012 Olympic Games in London but was ultimately not built.

Ratti relishes the opportunity to participate in these events.

“You can push the boundaries more at these [venues] because you are building something that is temporary,” says Ratti. “They allow for more creativity, so it’s a good moment to experiment.”

Based on his previous work, Ratti was invited to design the torch by the Olympic organizers. He approached the project much as he instructs his students working in his lab.

“It is about what the object or the design is to convey,” Ratti says. “How it can touch people, how it can relate to people, how it can transmit emotions. That’s the most important thing.”

To Ratti, the fundamental aspect of the torch is the flame. A few months before the games begin, the torch is lit in Olympia, Greece, using a parabolic mirror that concentrates the sun’s rays. In ancient Greece, the flame was considered “sacred” and was to remain lit throughout the competition. Ratti, familiar with the history of the Olympic torch, is less impressed with designs he deems overwrought: many past torches added superfluous ornamentation to their exteriors, he says, when a torch should be built around its flame much as a car is designed around its engine. He decided to strip away everything that wasn’t essential to the flame itself.

What is “essential”

“Essential” — the official name for the 2026 Winter Olympic torch — was designed to perform regardless of the weather, wind, or altitude it would encounter on its journey from Olympia to Milan. The process took three years with many designs created, considered, and discussed with the local and global Olympic committees and Olympic sponsor Versalis. And, as with Ratti’s work at MIT, researchers and engineers collaborated in the effort.

“Each design pushed the boundaries in different directions, but all of them with the key principle to put the flame at the center,” says Ratti, who wanted the torch to embody “an ethos of frugality.”

At the core of Ratti’s torch is a high-performance burner powered by bio-LPG produced by the energy company ENI from 100 percent renewable feedstocks. Furthermore, each torch can be refueled 10 times; in previous games, torches were used only once, so this allows a roughly tenfold reduction in the number of torches created.

Also unique to this torch is its internal mechanism, which is visible via a vertical opening along its side, allowing audiences to see the burner in action. This reinforces the desire to keep the emphasis on the flame instead of the object.

In keeping with the emphasis on minimalism and sustainability, the torch is primarily composed of recycled aluminum. It is the lightest torch ever created for the Olympics, weighing just under 2.5 pounds. The body is finished with a heat-resistant PVD coating that lets it shift colors by reflecting the environments — such as the mountains and the city lights — through which it is carried. The Olympic torch is a blue-green shade, while the Paralympic torch is gold.

The torch won an honorable mention in Italy’s most prestigious industrial design award, the Compasso d’Oro.

The Olympic Relay

The torch relay is considered an event itself, drawing thousands as it is carried to the host city by hundreds of volunteers. Its journey for the 2026 Olympics started in late November and, after visiting cities across Greece, will have covered all 110 Italian provinces before arriving in Milan for the opening ceremony on Feb. 6.

Ratti carried the torch for a portion of its journey through Turin in mid-January — another joyful invitation to this quadrennial event. He says winter sports are his favorite; he grew up skiing where these games are being held, and has since skied around the world — from Utah to the Himalayas.

In addition to a highly sustainable torch, there was another statement Ratti wanted to make: He wanted to showcase the Italy of today and of the future. It is the same issue he confronted as curator of the 2025 Biennale Architettura in Venice, titled “Intelligens. Natural. Artificial. Collective,” an architecture exhibition infused with technology for the future.

“When people think about Italy, they often think about the past, from ancient Romans to the Renaissance or Baroque period,” he says. “Italy does indeed have a significant past. But the reality is that it is also the second-largest industrial powerhouse in Europe and is leading in innovation and tech in many fields. So, the 2026 torch aims to combine both past and future. It draws on Italian design from the past, but also on future-forward technologies.”

“There should be some kind of architectural design always translating into form some kind of ethical principles or ideals. It’s not just about a physical thing. Ultimately, it’s about the human dimension. That applies to the work we do at MIT or the Olympic torch.”

Brian Hedden named co-associate dean of Social and Ethical Responsibilities of Computing

Wed, 02/04/2026 - 1:25pm

Brian Hedden PhD ’12 has been appointed co-associate dean of the Social and Ethical Responsibilities of Computing (SERC) at MIT, a cross-cutting initiative in the MIT Schwarzman College of Computing, effective Jan. 16.

Hedden is a professor in the Department of Linguistics and Philosophy, holding an MIT Schwarzman College of Computing shared position with the Department of Electrical Engineering and Computer Science (EECS). He joined the MIT faculty last fall from the Australian National University and the University of Sydney, where he previously served as a faculty member. He earned his BA from Princeton University and his PhD from MIT, both in philosophy.

“Brian is a natural and compelling choice for SERC, as a philosopher whose work speaks directly to the intellectual challenges facing education and research today, particularly in computing and AI. His expertise in epistemology, decision theory, and ethics addresses questions that have become increasingly urgent in an era defined by information abundance and artificial intelligence. His scholarship exemplifies the kind of interdisciplinary inquiry that SERC exists to advance,” says Dan Huttenlocher, dean of the MIT Schwarzman College of Computing and the Henry Ellis Warren Professor of Electrical Engineering and Computer Science.

Hedden’s research focuses on how we ought to form beliefs and make decisions, and it explores how philosophical thinking about rationality can yield insights into contemporary ethical issues, including ethics of AI. He is the author of “Reasons without Persons: Rationality, Identity, and Time” (Oxford University Press, 2015) and articles on topics such as collective action problems, legal standards of proof, algorithmic fairness, and political polarization.

Joining co-associate dean Nikos Trichakis, the J.C. Penney Professor of Management at the MIT Sloan School of Management, Hedden will help lead SERC and advance the initiative’s ongoing research, teaching, and engagement efforts. He succeeds professor of philosophy Caspar Hare, who stepped down at the conclusion of his three-year term on Sept. 1, 2025.

Since its inception in 2020, SERC has launched a range of programs and activities designed to cultivate responsible “habits of mind and action” among those who create and deploy computing technologies, while fostering the development of technologies in the public interest.

The SERC Scholars Program invites undergraduate and graduate students to work alongside postdoctoral mentors to explore interdisciplinary ethical challenges in computing. The initiative also hosts an annual prize competition that challenges MIT students to envision the future of computing, publishes a twice-yearly series of case studies, and collaborates on coordinated curricular materials, including active-learning projects, homework assignments, and in-class demonstrations. In 2024, SERC introduced a new seed grant program to support MIT researchers investigating ethical technology development; to date, two rounds of grants have been awarded to 24 projects.

Antonio Torralba, three MIT alumni named 2025 ACM fellows

Wed, 02/04/2026 - 1:15pm

Antonio Torralba, Delta Electronics Professor of Electrical Engineering and Computer Science and faculty head of artificial intelligence and decision-making at MIT, has been named to the 2025 cohort of Association for Computing Machinery (ACM) Fellows. He shares the honor of an ACM Fellowship with three MIT alumni: Eytan Adar ’97, MEng ’98; George Candea ’97, MEng ’98; and Gookwon Edward Suh SM ’01, PhD ’05.

A principal investigator within both the Computer Science and Artificial Intelligence Laboratory and the Center for Brains, Minds, and Machines, Torralba received his BS in telecommunications engineering from Telecom BCN, Spain, in 1994, and a PhD in signal, image, and speech processing from the Institut National Polytechnique de Grenoble, France, in 2000. At different points in his MIT career, he has been director of both the MIT Quest for Intelligence (now the MIT Siegel Family Quest for Intelligence) and the MIT-IBM Watson AI Lab. 

Torralba’s research focuses on computer vision, machine learning, and human visual perception; as he puts it, “I am interested in building systems that can perceive the world like humans do.” Alongside Phillip Isola and William Freeman, he recently co-authored “Foundations of Computer Vision,” an 800-plus page textbook exploring the foundations and core principles of the field. 

Among other awards and recognitions, he is the recipient of the 2008 National Science Foundation CAREER Award; the 2010 J. K. Aggarwal Prize from the International Association for Pattern Recognition; the 2017 Frank Quick Faculty Research Innovation Fellowship; the Louis D. Smullin (’39) Award for Teaching Excellence; and the 2020 PAMI Mark Everingham Prize. In 2021, he was awarded the inaugural Thomas Huang Memorial Prize by the Pattern Analysis and Machine Intelligence Technical Committee and was named a fellow of the Association for the Advancement of Artificial Intelligence. In 2022, he received an honorary doctoral degree from the Universitat Politècnica de Catalunya — BarcelonaTech (UPC). 

An ACM fellowship, the highest honor bestowed by the professional organization, recognizes members of the society selected by their peers for outstanding accomplishments in computing and information technology and/or outstanding service to ACM and the larger computing community.

3 Questions: Using AI to accelerate the discovery and design of therapeutic drugs

Wed, 02/04/2026 - 1:00pm

In the pursuit of solutions to complex global challenges including disease, energy demands, and climate change, scientific researchers, including at MIT, have turned to artificial intelligence, and to quantitative analysis and modeling, to design and construct engineered cells with novel properties. The engineered cells can be programmed to become new therapeutics — battling, and perhaps eradicating, diseases.

James J. Collins is one of the founders of the field of synthetic biology, and is also a leading researcher in systems biology, the interdisciplinary approach that uses mathematical analysis and modeling of complex systems to better understand biological systems. His research has led to the development of new classes of diagnostics and therapeutics, including in the detection and treatment of pathogens like Ebola, Zika, SARS-CoV-2, and antibiotic-resistant bacteria. Collins, the Termeer Professor of Medical Engineering and Science and professor of biological engineering at MIT, is a core faculty member of the Institute for Medical Engineering and Science (IMES), the director of the MIT Abdul Latif Jameel Clinic for Machine Learning in Health, as well as an institute member of the Broad Institute of MIT and Harvard, and core founding faculty at the Wyss Institute for Biologically Inspired Engineering, Harvard.

In this Q&A, Collins speaks about his latest work and goals for this research.

Q.  You’re known for collaborating with colleagues across MIT, and at other institutions. How have these collaborations and affiliations helped you with your research? 

A: Collaboration has been central to the work in my lab. At the MIT Jameel Clinic for Machine Learning in Health, I formed a collaboration with Regina Barzilay [the Delta Electronics Professor in the MIT Department of Electrical Engineering and Computer Science and affiliate faculty member at IMES] and Tommi Jaakkola [the Thomas Siebel Professor of Electrical Engineering and Computer Science and the Institute for Data, Systems, and Society] to use deep learning to discover new antibiotics. This effort combined our expertise in artificial intelligence, network biology, and systems microbiology, leading to the discovery of halicin, a potent new antibiotic effective against a broad range of multidrug-resistant bacterial pathogens. Our results were published in Cell in 2020 and showcased the power of bringing together complementary skill sets to tackle a global health challenge.

At the Wyss Institute, I’ve worked closely with Donald Ingber [the Judah Folkman Professor of Vascular Biology at Harvard Medical School and the Vascular Biology Program at Boston Children’s Hospital, and Hansjörg Wyss Professor of Biologically Inspired Engineering at Harvard], leveraging his organs-on-chips technology to test the efficacy of AI-discovered and AI-generated antibiotics. These platforms allow us to study how drugs behave in human tissue-like environments, complementing traditional animal experiments and providing a more nuanced view of their therapeutic potential.

The common thread across our many collaborations is the ability to combine computational predictions with cutting-edge experimental platforms, accelerating the path from ideas to validated new therapies.

Q. Your research has led to many advances in designing novel antibiotics, using generative AI and deep learning. Can you talk about some of the advances you’ve been a part of in the development of drugs that can battle multi-drug-resistant pathogens, and what you see on the horizon for breakthroughs in this arena?

A: In 2025, our lab published a study in Cell demonstrating how generative AI can be used to design completely new antibiotics from scratch. We used genetic algorithms and variational autoencoders to generate millions of candidate molecules, exploring both fragment-based designs and entirely unconstrained chemical space. After computational filtering, retrosynthetic modeling, and medicinal chemistry review, we synthesized 24 compounds and tested them experimentally. Seven showed selective antibacterial activity. One lead, NG1, was highly narrow-spectrum, eradicating multi-drug-resistant Neisseria gonorrhoeae, including strains resistant to first-line therapies, while sparing commensal species. Another, DN1, targeted methicillin-resistant Staphylococcus aureus (MRSA) and cleared infections in mice through broad membrane disruption. Both were non-toxic and showed low rates of resistance.
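As a rough illustration of the generative loop described above (score candidates, keep the best, and mutate them to produce the next generation), here is a deliberately minimal genetic-algorithm sketch in Python. The scoring and mutation functions are placeholders invented for this example; the actual pipeline relies on learned activity predictors, chemistry-aware edits to molecular structures, and the extensive filtering described in the answer.

```python
# Skeleton of a genetic algorithm for molecule search. Everything here is
# a toy stand-in: score() and mutate() replace the learned antibacterial-
# activity models and chemistry-aware edits a real pipeline would use.
import random

def score(molecule: str) -> int:
    return len(set(molecule))                  # placeholder "fitness"

def mutate(molecule: str) -> str:
    return molecule + random.choice("CNOS")    # placeholder structural edit

population = [f"candidate{i}" for i in range(100)]   # stand-in molecules

for generation in range(10):
    ranked = sorted(population, key=score, reverse=True)
    parents = ranked[: len(ranked) // 2]             # keep the fittest half
    children = [mutate(random.choice(parents)) for _ in parents]
    population = parents + children

# A computational shortlist like this is what would move on to synthesis
# and lab testing (24 compounds, in the study described above).
shortlist = sorted(population, key=score, reverse=True)[:24]
print(shortlist[:3])
```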

Looking ahead, we are using deep learning to design antibiotics with drug-like properties that make them stronger candidates for clinical development. By integrating AI with high-throughput biological testing, we aim to accelerate the discovery and design of antibiotics that are novel, safe, and effective, ready for real-world therapeutic use. This approach could transform how we respond to drug-resistant bacterial pathogens, moving from a reactive to a proactive strategy in antibiotic development.

Q. You’re a co-founder of Phare Bio, a nonprofit organization that uses AI to discover new antibiotics, and the Collins Lab has helped to launch the Antibiotics-AI Project in collaboration with Phare Bio. Can you tell us more about what you hope to accomplish with these collaborations, and how they tie back to your research goals?

A: We founded Phare Bio as a nonprofit to take the most promising antibiotic candidates emerging from the Antibiotics-AI Project at MIT and advance them toward the clinic. The idea is to bridge the gap between discovery and development by collaborating with biotech companies, pharmaceutical partners, AI companies, philanthropies, other nonprofits, and even nation states. Akhila Kosaraju has been doing a brilliant job leading Phare Bio, coordinating these efforts and moving candidates forward efficiently.

Recently, we received a grant from ARPA-H to use generative AI to design 15 new antibiotics and develop them as pre-clinical candidates. This project builds directly on our lab’s research, combining computational design with experimental testing to create novel antibiotics that are ready for further development. By integrating generative AI, biology, and translational partnerships, we hope to create a pipeline that can respond more rapidly to the global threat of antibiotic resistance, ultimately delivering new therapies to patients who need them most.

3D-printed metamaterials that stretch and fail by design

Wed, 02/04/2026 - 12:35pm

Metamaterials — materials whose properties are primarily dictated by their internal microstructure, and not their chemical makeup — have been redefining the engineering materials space for the last decade. To date, however, most metamaterials have been lightweight options designed for stiffness and strength.

New research from the MIT Department of Mechanical Engineering introduces a computational design framework to support the creation of a new class of soft, compliant, and deformable metamaterials. These metamaterials, termed 3D woven metamaterials, consist of building blocks that are composed of intertwined fibers that self-contact and entangle to endow the material with unique properties.

“Soft materials are required for emerging engineering challenges in areas such as soft robotics, biomedical devices, or even for wearable devices and functional textiles,” explains Carlos Portela, the Robert N. Noyce Career Development Professor and associate professor of mechanical engineering.

In an open-access paper published Jan. 26 in the journal Nature Communications, researchers from Portela’s lab provide a universal design framework that generates complex 3D woven metamaterials with a wide range of properties. The work also provides open-source code that lets users create designs to fit their specifications and generate files either for simulating the material or for fabricating it on a 3D printer.

“Normal knitting or weaving have been constrained by the hardware for hundreds of years — there’s only a few patterns that you can make clothes out of, for example — but that changes if hardware is no longer a limitation,” Portela says. “With this framework, you can come up with interesting patterns that completely change the way the textile is going to behave.”

Possible applications include wearable sensors that move with human skin, fabrics for aerospace or defense needs, flexible electronic devices, and a variety of other printable textiles.

The team developed general design rules — in the form of an algorithm — that first provide a graph representation of the metamaterial. The attributes of this graph eventually dictate how each fiber is placed and connected within the metamaterial. The fundamental building blocks are woven unit cells that can be functionally graded via control of various design parameters, such as the radius and pitch of the fibers that make up the woven struts.
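To make the idea of a graph representation concrete, here is a minimal Python sketch of how a woven unit cell might be encoded, with each strut carrying the radius and pitch parameters the team describes. The class and field names are hypothetical, for illustration only; the authors’ open-source code defines its own, richer data structures.

```python
# Illustrative encoding of a woven unit cell as a graph: nodes are fiber
# junctions, edges are woven struts. Names and units are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Strut:
    start: int      # node id of one junction
    end: int        # node id of the other junction
    radius: float   # fiber radius, e.g., in micrometers
    pitch: float    # pitch of the intertwined fibers along the strut

@dataclass
class WovenUnitCell:
    nodes: dict = field(default_factory=dict)   # node id -> (x, y, z)
    struts: list = field(default_factory=list)

    def add_strut(self, start: int, end: int, radius: float, pitch: float):
        self.struts.append(Strut(start, end, radius, pitch))

# Functional grading: vary radius and pitch across the cell so one region
# is compliant (thin fibers, loose weave) and another is stiffer.
cell = WovenUnitCell(nodes={0: (0, 0, 0), 1: (1, 0, 0), 2: (1, 1, 0)})
cell.add_strut(0, 1, radius=5.0, pitch=40.0)    # softer strut
cell.add_strut(1, 2, radius=12.0, pitch=15.0)   # stiffer strut
```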

“Because this framework allows these metamaterials to be tailored to be softer in one place and stiffer in another, or to change shape as they stretch, they can exhibit an exceptional range of behaviors that would be hard to design using conventional soft materials,” says Molly Carton, lead author of the study. Carton, a former postdoc in Portela’s lab, is now an assistant research professor in mechanical engineering at the University of Maryland.

Further, the simulation framework allows users to predict the deformation response of these materials, capturing complex phenomena such as fiber self-contact and entanglement, and to design structures that resist particular deformation or tearing patterns.

“The most exciting part was being able to tailor failure in these materials and design arbitrary combinations,” says Portela. “Based on the simulations, we were able to fabricate these spatially varying geometries and experiment on them at the microscale.”

This work is the first to provide a tool for users to design, print, and simulate an emerging class of metamaterials that are extensible and tough. It also demonstrates that through tuning of geometric parameters, users can control and predict how these materials will deform and fail, and presents several new design building blocks that substantially expand the property space of woven metamaterials.

“Until now, these complex 3D lattices have been designed manually, painstakingly, which limits the number of designs that anyone has tested,” says Carton. “We’ve been able to describe how these woven lattices work and use that to create a design tool for arbitrary woven lattices. With that design freedom, we’re able to design the way that a lattice changes shape as it stretches, how the fibers entangle and knot with each other, as well as how it tears when stretched to the limit.”

Carton says she believes the framework will be useful across many disciplines. “In releasing this framework as a software tool, our hope is that other researchers will explore what’s possible using woven lattices and find new ways to use this design flexibility,” she says. “I’m looking forward to seeing what doors our work can open.”

The paper, “Design framework for programmable three-dimensional woven metamaterials,” is available now in the journal Nature Communications. Its other MIT-affiliated authors are James Utama Surjadi, Bastien F. G. Aymon, and Ling Xu.

This work was performed, in part, through the use of MIT.nano’s fabrication and characterization facilities.

Terahertz microscope reveals the motion of superconducting electrons

Wed, 02/04/2026 - 11:00am

You can tell a lot about a material based on the type of light you shine at it: Optical light illuminates a material’s surface, while X-rays reveal its internal structures and infrared captures a material’s radiating heat.

Now, MIT physicists have used terahertz light to reveal inherent, quantum vibrations in a superconducting material, which have not been observable until now.

Terahertz light is a form of energy that lies between microwaves and infrared radiation on the electromagnetic spectrum. It oscillates over a trillion times per second — just the right pace to match how atoms and electrons naturally vibrate inside materials. Ideally, this makes terahertz light the perfect tool to probe these motions.

But while the frequency is right, the wavelength — the distance over which the wave repeats in space — is not. Terahertz waves have wavelengths hundreds of microns long. Because the smallest spot that any kind of light can be focused into is limited by its wavelength, terahertz beams cannot be tightly confined. As a result, a focused terahertz beam is physically too large to interact effectively with microscopic samples, simply washing over these tiny structures without revealing fine detail.
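The numbers behind that mismatch are easy to check. The short Python sketch below uses only the textbook relation wavelength = c / frequency and the rule of thumb that a focused spot cannot be much smaller than one wavelength; the frequencies chosen are generic examples, not values from the study.

```python
# Why terahertz beams are "too big": wavelength = c / frequency, and the
# diffraction limit keeps a focused spot near one wavelength across.
c = 3.0e8  # speed of light in m/s

for f_thz in (0.5, 1.0, 3.0):            # representative THz frequencies
    wavelength_um = c / (f_thz * 1e12) * 1e6
    print(f"{f_thz:3.1f} THz -> wavelength ~ {wavelength_um:5.0f} um")

# At 1 THz the wavelength is about 300 um, so a diffraction-limited spot
# dwarfs a ~10-um sample and mostly measures the space around it.
```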

In a paper appearing today in the journal Nature, the scientists report that they have developed a new terahertz microscope that compresses terahertz light down to microscopic dimensions. This pinpoint of terahertz light can resolve quantum details in materials that were previously inaccessible.

The team used the new microscope to send terahertz light into a sample of bismuth strontium calcium copper oxide, or BSCCO (pronounced “BIS-co”) — a material that superconducts at relatively high temperatures. With the terahertz scope, the team observed a frictionless “superfluid” of superconducting electrons that were collectively jiggling back and forth at terahertz frequencies within the BSCCO material.

“This new microscope now allows us to see a new mode of superconducting electrons that nobody has ever seen before,” says Nuh Gedik, the Donner Professor of Physics at MIT.

By using terahertz light to probe BSCCO and other superconductors, scientists can gain a better understanding of properties that could lead to long-coveted room-temperature superconductors. The new microscope can also help to identify materials that emit and receive terahertz radiation. Such materials could be the foundation of future wireless, terahertz-based communications that could potentially transmit more data at faster rates than today’s microwave-based communications.

“There’s a huge push to take Wi-Fi or telecommunications to the next level, to terahertz frequencies,” says Alexander von Hoegen, a postdoc in MIT’s Materials Research Laboratory and lead author of the study. “If you have a terahertz microscope, you could study how terahertz light interacts with microscopically small devices that could serve as future antennas or receivers.”

In addition to Gedik and von Hoegen, the study’s MIT co-authors include Tommy Tai, Clifford Allington, Matthew Yeung, Jacob Pettine, Alexander Kossak, Byunghun Lee, and Geoffrey Beach, along with collaborators at Harvard University, the Max Planck Institute for the Structure and Dynamics of Matter, the Max Planck Institute for the Physics of Complex Systems, and Brookhaven National Laboratory.

Hitting a limit

Terahertz light is a promising yet largely untapped imaging tool. It occupies a unique spectral “sweet spot”: Like microwaves, radio waves, and visible light, terahertz radiation is nonionizing and therefore does not carry enough energy to cause harmful radiation effects, making it safe for use in humans and biological tissues. At the same time, much like X-rays, terahertz waves can penetrate a wide range of materials, including fabric, wood, cardboard, plastic, ceramics, and even thin brick walls.

Owing to these distinctive properties, terahertz light is being actively explored for applications in security screening, medical imaging, and wireless communications. In contrast, far less effort has been devoted to applying terahertz radiation to microscopy and the illumination of microscopic phenomena. The primary reason is a fundamental limitation shared by all forms of light: the diffraction limit, which restricts spatial resolution to roughly the wavelength of the radiation used.

With wavelengths on the order of hundreds of microns, terahertz radiation is far larger than atoms, molecules, and many other microscopic structures. As a result, its ability to directly resolve microscale features is fundamentally constrained.

“Our main motivation is this problem that, you might have a 10-micron sample, but your terahertz light has a 100-micron wavelength, so what you would mostly be measuring is air, or the vacuum around your sample,” von Hoegen explains. “You would be missing all these quantum phases that have characteristic fingerprints in the terahertz regime.”

Zooming in

The team found a way around the terahertz diffraction limit by using spintronic emitters — a recent technology that produces sharp pulses of terahertz light. Spintronic emitters are made from multiple ultrathin metallic layers. When a laser illuminates the multilayered structure, the light triggers a cascade of effects in the electrons within each layer, such that the structure ultimately emits a pulse of energy at terahertz frequencies.

By holding a sample close to the emitter, the team trapped the terahertz light before it had a chance to spread, essentially squeezing it into a space much smaller than its wavelength. In this regime, the light can bypass the diffraction limit to resolve features that were previously too small to see.

The MIT team adapted this technology to observe microscopic, quantum-scale phenomena. For their new study, the team developed a terahertz microscope using spintronic emitters interfaced with a Bragg mirror. This multilayered structure of reflective films successively filters out certain, undesired wavelengths of light while letting through others, protecting the sample from the “harmful” laser which triggers the terahertz emission.

As a demonstration, the team used the new microscope to image a small, atomically thin sample of BSCCO. They placed the sample very close to the terahertz source and imaged it at temperatures close to absolute zero — cold enough for the material to become a superconductor. To create the image, they scanned the laser beam, sending terahertz light through the sample and looking for the specific signatures left by the superconducting electrons.

“We see the terahertz field gets dramatically distorted, with little oscillations following the main pulse,” von Hoegen says. “That tells us that something in the sample is emitting terahertz light, after it got kicked by our initial terahertz pulse.”

With further analysis, the team concluded that the terahertz microscope was observing the natural, collective terahertz oscillations of superconducting electrons within the material.

“It’s this superconducting gel that we’re sort of seeing jiggle,” von Hoegen says.

This jiggling superfluid was expected, but never directly visualized until now. The team is now applying the microscope to other two-dimensional materials, where they hope to capture more terahertz phenomena.

“There are a lot of the fundamental excitations, like lattice vibrations and magnetic processes, and all these collective modes that happen at terahertz frequencies,” von Hoegen says. “We can now resonantly zoom in on these interesting physics with our terahertz microscope.”

This research was supported, in part, by the U.S. Department of Energy and by the Gordon and Betty Moore Foundation.

MIT winter club sports energized by the Olympics

Wed, 02/04/2026 - 9:00am

With the Milano Cortina 2026 Winter Olympics officially kicking off today, several of MIT’s winter sports clubs are hosting watch parties to cheer on their favorite players, events, and teams.

Members of MIT’s Curling Club are hosting a gathering to support their favorite teams. Co-presidents Polly Harrington and Gabi Wojcik are rooting for the United States.

“I’m looking forward to watching the Olympics and cheering for Team USA. I grew up in Seattle, and during the Vancouver Olympics, we took a family trip to the games. The most affordable tickets were to the curling events, and that was my first exposure to the sport. Seeing it live was really cool. I was hooked,” says Harrington.

Wojcik says, “It’s a very analytical and strategic sport, so it’s perfect for MIT students. Physicists still don't entirely agree on why the rocks behave the way they do. Everyone in the club is welcoming and open to teaching new people to play. I’d never played before and learned from scratch. The other advantage of playing is that it is a lifelong sport.”

The two say the biggest misconception about curling, other than that it is easy, is that it is played on ice skates. It’s neither easy nor played on skates. The stone, or rock, as it is often called, weighs 43 pounds, and is always made from the same weathered granite from Scotland so that the playing field, or in this case, ice, is even.

Both agree that playing is a great way to meet other MIT students they might not otherwise have the chance to know.

Having seen the American team at a recent tournament, Wojcik is hoping the team does well, but admits that if Scotland wins, she’ll also be happy. Harrington met members of the U.S. men's curling team, Luc Violette and Ben Richardson, when curling in Seattle in high school, and will be cheering for them.

The Curling Club practices and competes in tournaments around New England from late September until mid-March, and it always welcomes new members; no previous experience is necessary to join.

Figure Skating Club

The MIT Figure Skating Club is also excited for the 2026 Olympics and has been watching preliminary events (nationals) leading up to the games with great anticipation. Eleanor Li, the current club president, and Amanda (Mandy) Paredes Rioboo, former president, say holding small gatherings to watch the Olympics is a great way for the team to bond further.

Li began taking skating lessons at age 14, fell in love with the sport right away, and has been skating ever since. Paredes Rioboo started lessons at age 5 and practices in the mornings with other club members, saying, “there is no better way to start the day.”

The Figure Skating Club currently has 120 members and offers a great way to meet friends who share the same passion. Any MIT student, regardless of skill level, is welcome to join the club.

Li says, “We have members ranging from former national and international competitors to people who are completely new to the ice.” Her favorite part of skating, she adds, is “the freeing feeling of wind coming at you when you’re gliding across the ice! And all the life lessons learned — time management, falling again and again, and getting up again and again, the artistry and expressiveness of this beautiful sport, and most of all the community.”

Paredes Rioboo agrees. “The sport taught me discipline, to work at something and struggle with it until I got good at it. It taught me to be patient with myself and to be unafraid of failure.”

“The Olympics always bring a lot of buzz and curiosity around skating, and we’re excited to hopefully see more people come to our Saturday free group lessons, try skating for the first time, and maybe even join the club,” says Li.

Li and Paredes Rioboo are ready to watch the games with other club members. Li says, “I’m especially excited for women’s singles skating. All of the athletes have trained so hard to get there, and I’m really looking forward to watching all the beautiful skating. Especially Kaori Sakamoto.”

“I’m excited to watch Alysa Liu and Ami Nakai,” adds Paredes Rioboo.

Students interested in joining the Figure Skating Club can find more information on the club’s website.

Katie Spivakovsky wins 2026 Churchill Scholarship

Tue, 02/03/2026 - 5:25pm

MIT senior Katie Spivakovsky has been selected as a 2026-27 Churchill Scholar and will undertake an MPhil in biological sciences at the Wellcome Sanger Institute at Cambridge University in the U.K. this fall.

Spivakovsky, who is double-majoring in biological engineering and artificial intelligence, with minors in mathematics and biology, aims to integrate computation and bioengineering in an academic research career focused on developing robust, scalable solutions that promote equitable health outcomes.

At MIT’s Bathe BioNanoLab, Spivakovsky investigates therapeutic applications of DNA origami and DNA-scaffolded nanoparticles for gene and mRNA delivery, and she has co-authored a manuscript in press at Science. She leads the development of an immune therapy for cancer cachexia with a team supported by MIT’s BioMakerSpace; this work earned a silver medal at the international synthetic biology competition iGEM and was published in the MIT Undergraduate Research Journal. Previously, she worked on Merck’s Modeling & Informatics team, characterizing a cancer-associated protein mutation, and at the New York Structural Biology Center, where she improved cryogenic electron microscopy particle detection models.

On campus, Spivakovsky serves as director of the Undergraduate Initiative in the MIT Biotech Group. She is deeply committed to teaching and mentoring, and has served as a lecturer and co-director for class 6.S095 (Probability Problem Solving), a teaching assistant for classes 20.309 (Bioinstrumentation) and 20.A06 (Hands-on Making in Biological Engineering), a lab assistant for 6.300 (Signal Processing), and as an associate advisor.

“Katie is a brilliant researcher who has a keen intellectual curiosity that will make her a leader in biological engineering in the future. We are proud that she will be representing MIT at Cambridge University,” says Kim Benard, associate dean of distinguished fellowships.

The Churchill Scholarship is a highly competitive fellowship that annually offers 16 American students the opportunity to pursue a funded graduate degree in science, mathematics, or engineering at Churchill College within Cambridge University. The scholarship, established in 1963, honors former British Prime Minister Winston Churchill’s vision for U.S.-U.K. scientific exchange. Since 2017, two Kanders Churchill Scholarships have also been awarded each year for studies in science policy.

MIT students interested in learning more about the Churchill Scholarship should contact Kim Benard in MIT Career Advising and Professional Development.

Counter intelligence

Tue, 02/03/2026 - 5:00pm

How can artificial intelligence step out of a screen and become something we can physically touch and interact with?

That question formed the foundation of class 4.043/4.044 (Interaction Intelligence), an MIT course focused on designing a new category of AI-driven interactive objects. Known as large language objects (LLOs), these physical interfaces extend large language models into the real world. Their behaviors can be deliberately generated for specific people or applications, and their interactions can evolve from simple to increasingly sophisticated — providing meaningful support for both novice and expert users.

“I came to the realization that, while powerful, these new forms of intelligence still remain largely ignorant of the world outside of language,” says Marcelo Coelho, associate professor of the practice in the MIT Department of Architecture, who has been teaching the design studio for several years and directs the Design Intelligence Lab. “They lack real-time, contextual understanding of our physical surroundings, bodily experiences, and social relationships to be truly intelligent. In contrast, LLOs are physically situated and interact in real time with their physical environment. The course is an attempt to both address this gap and develop a new kind of design discipline for the age of AI.”

Given the assignment to design an interactive device that they would want in their lives, students Jacob Payne and Ayah Mahmoud focused on the kitchen. While they each enjoy cooking and baking, their design inspiration came from the first home computer: the Honeywell 316 Kitchen Computer, marketed by Neiman Marcus in 1969. Priced at $10,000, the machine left no record of a single unit ever being sold.

“It was an ambitious but impractical early attempt at a home kitchen computer,” says Payne, an architecture graduate student. “It made an intriguing historical reference for the project.”

“As somebody who likes learning to cook — especially now, in college as an undergrad — the thought of designing something that makes cooking easy for those who might not have a cooking background and just want a nice meal that satisfies their cravings was a great starting point for me,” says Mahmoud, a senior design major.

“We thought about the leftover ingredients you have in the refrigerator or pantry, and how AI could help you find new creative uses for things that you may otherwise throw away,” says Payne.

Generative cuisine

The students designed their device — named Kitchen Cosmo — with instructions to function as a “recipe generator.” One challenge was prompting the LLM to consistently acknowledge real-world cooking parameters, such as heating, timing, and temperature. Another was having the LLM recognize flavor profiles and spices accurate to regional and cultural dishes around the world, to support a wider range of cuisines. Troubleshooting included taste-testing the recipes Kitchen Cosmo generated; not every early recipe produced a winning dish.

“There were lots of small things that AI wasn't great at conceptually understanding,” says Mahmoud. “An LLM needs to fundamentally understand human taste to make a great meal.”

They fine-tuned their device to allow for the myriad ways people approach preparing a meal. Is this breakfast, lunch, dinner, or a snack? How advanced of a cook are you? How much meal prep time do you have? How many servings will you make? Dietary preferences were also programmed, as well as the type of mood or vibe you want to achieve. Are you feeling nostalgic, or are you in a celebratory mood? There’s a dial for that.

“These selections were the focal point of the device because we were curious to see how the LLM would interpret subjective adjectives as inputs and use them to transform the type of recipe outputs we would get,” says Payne.
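A device like this ultimately has to translate its physical dials into text an LLM can act on. The hypothetical Python sketch below shows one simple way such a mapping could work; none of the function or parameter names come from the students’ actual implementation.

```python
# Hypothetical mapping from Kitchen Cosmo-style dial settings to an LLM
# prompt. Every name here is invented for illustration.
def build_prompt(ingredients, meal, skill, prep_minutes, servings, diet, vibe):
    return (
        f"Create a {meal} recipe using only: {', '.join(ingredients)}, "
        f"plus common household spices and condiments. "
        f"Cook skill level: {skill}. Time available: {prep_minutes} minutes. "
        f"Servings: {servings}. Dietary preferences: {diet}. "
        f"The meal should feel {vibe}. "
        f"Give realistic temperatures and timings for every step."
    )

print(build_prompt(
    ingredients=["leftover rice", "eggs", "scallions"],
    meal="dinner", skill="beginner", prep_minutes=20,
    servings=2, diet="vegetarian", vibe="nostalgic",
))
```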

Unlike most AI interactions that tend to be invisible, Payne and Mahmoud wanted their device to be more of a “partner” in the kitchen. The tactile interface was intentionally designed to structure the interaction, giving users a physical control over how the AI responded.

“While I’ve worked with electronics and hardware before, this project pushed me to integrate the components with a level of precision and refinement that felt much closer to a product-ready device,” says Payne of the course work.

Retro and red

After the electronics work was completed, the students designed a series of cardboard models before settling on the final look, which Payne describes as “retro.” The body was designed in 3D modeling software and printed. In a nod to the original Honeywell computer, they painted it red.

A thin, rectangular device about 18 inches in height, Kitchen Cosmo has a webcam that hinges open to scan ingredients set on a counter. It translates these into a recipe that takes into consideration general spices and condiments common in most households. An integrated thermal printer delivers a printed recipe that is torn off. Recipes can be stored in a plastic receptacle on its base.

While Kitchen Cosmo made a modest splash in design magazines, both students have ideas where they will take future iterations.

Payne would like to see it “take advantage of a lot of the data we have in the kitchen and use AI as a mediator, offering tips for how to improve on what you’re cooking at that moment.”

Mahmoud is looking at how to optimize Kitchen Cosmo for her thesis. Classmates have given feedback to upgrade its abilities. One suggestion is to provide multi-person instructions that give several people tasks needed to complete a recipe. Another idea is to create a “learning mode” in which a kitchen tool — for example, a paring knife — is set in front of Kitchen Cosmo, and it delivers instructions on how to use the tool. Mahmoud has been researching food science history as well.

“I’d like to get a better handle on how to train AI to fully understand food so it can tailor recipes to a user’s liking,” she says.

Having begun her MIT education as a geologist, Mahmoud says her pivot to design has been a revelation. Each design class has been inspiring, and Coelho’s course was her first to include designing with AI. Referencing the often-mentioned analogy of “drinking from a firehose” as an MIT student, Mahmoud says the course helped define a path for her in product design.

“For the first time, in that class, I felt like I was finally drinking as much as I could and not feeling overwhelmed. I see myself doing design long-term, which is something I didn’t think I would have said previously about technology.” 

SMART launches new Wearable Imaging for Transforming Elderly Care research group

Tue, 02/03/2026 - 10:00am

What if ultrasound imaging were no longer confined to hospitals? Patients with chronic conditions, such as hypertension and heart failure, could be monitored continuously in real time at home or on the move, giving health care practitioners ongoing clinical insights instead of occasional snapshots — a scan here and a check-up there. This shift from reactive, hospital-based care to preventative, community- and home-based care could enable earlier detection, timely intervention, and truly personalized care.

Bringing this vision to reality, the Singapore-MIT Alliance for Research and Technology (SMART), MIT’s research enterprise in Singapore, has launched a new collaborative research project: Wearable Imaging for Transforming Elderly Care (WITEC). 

WITEC marks a pioneering effort spanning wearable technology, medical imaging research, and materials science. It will be dedicated to foundational research and development of the world’s first wearable ultrasound imaging system capable of 48-hour intermittent cardiovascular imaging for continuous, real-time monitoring and diagnosis of chronic conditions such as hypertension and heart failure. 

This multi-million dollar, multi-year research program, supported by the National Research Foundation (NRF) Singapore under its Campus for Research Excellence and Technological Enterprise program, brings together top researchers and expertise from MIT, Nanyang Technological University (NTU Singapore), and the National University of Singapore (NUS). Tan Tock Seng Hospital (TTSH) is WITEC’s clinical collaborator and will conduct patient trials to validate long-term heart imaging for chronic cardiovascular disease management.

“Addressing society’s most pressing challenges requires innovative, interdisciplinary thinking. Building on SMART’s long legacy in Singapore as a hub for research and innovation, WITEC will harness interdisciplinary expertise — from MIT and leading institutions in Singapore — to advance transformative research that creates real-world impact and benefits Singapore, the U.S., and societies all over. This is the kind of collaborative research that not only pushes the boundaries of knowledge, but also redefines what is possible for the future of health care,” says Bruce Tidor, chief executive officer and interim director of SMART, who is also an MIT professor of biological engineering and electrical engineering and computer science.

Industry-leading precision equipment and capabilities

To support this work, WITEC’s laboratory is equipped with advanced tools, including Southeast Asia’s first sub-micrometer 3D printer and the latest Verasonics Vantage NXT 256 ultrasonic imaging system, which is the first unit of its kind in Singapore.

Unlike conventional 3D printers that operate at millimeter or micrometer scales, WITEC’s 3D printer can achieve sub‑micrometer resolution, allowing components to be fabricated at the level of single cells or tissue structures. With this capability, WITEC researchers can prototype bioadhesive materials and device interfaces with unprecedented accuracy — essential to ensuring skin‑safe adhesion and stable, long‑term imaging quality.

Complementing this is the latest Verasonics ultrasonic imaging system. Equipped with a new transducer adapter and supporting a significantly larger number of probe control channels than existing systems, it gives researchers the freedom to test highly customized imaging methods. This allows more complex beamforming, higher‑resolution image capture, and integration with AI‑based diagnostic models — opening the door to long‑duration, real‑time cardiovascular imaging not possible with standard hospital equipment.
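For a sense of why channel count matters, the standard delay-and-sum method, the textbook baseline for ultrasound beamforming, sums one delayed echo signal per probe channel, so more independently controlled channels mean finer focusing. The Python sketch below illustrates that generic principle on synthetic data; it is not based on the Verasonics system’s software.

```python
# Textbook delay-and-sum beamforming on synthetic data: delay each
# channel so echoes from the chosen focus align, then sum. Generic
# illustration only; not the Verasonics API.
import numpy as np

def delay_and_sum(channel_data, delays_s, fs):
    """channel_data: (n_channels, n_samples); delays_s: per-channel delays."""
    n_channels, n_samples = channel_data.shape
    out = np.zeros(n_samples)
    for ch in range(n_channels):
        shift = int(round(delays_s[ch] * fs))      # delay in samples
        out += np.roll(channel_data[ch], -shift)   # align echoes at the focus
    return out / n_channels

fs = 40e6                                  # illustrative 40 MHz sampling rate
echoes = np.random.randn(256, 2048)        # 256 channels of simulated echoes
delays = np.linspace(0.0, 1e-6, 256)       # toy focal-law delays
focused = delay_and_sum(echoes, delays, fs)
```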

Together, these technologies allow WITEC to accelerate the design, prototyping, and testing of its wearable ultrasound imaging system, and to demonstrate imaging quality on phantoms and healthy subjects.

Transforming chronic disease care through wearable innovation 

Chronic diseases are rising rapidly in Singapore and globally, especially among the aging population and individuals with multiple long-term conditions. This trend highlights the urgent need for effective home-based care and easy-to-use monitoring tools that go beyond basic wellness tracking.

Current consumer wearables, such as smartwatches and fitness bands, offer limited physiological data like heart rate or step count. While useful for general health, they lack the depth needed to support chronic disease management. Traditional ultrasound systems, although clinically powerful, are bulky, operator-dependent, deployed only episodically within hospitals, and limited to snapshots in time, making them unsuitable for long-term, everyday use.

WITEC aims to bridge this gap with its wearable ultrasound imaging system that uses bioadhesive technology to enable up to 48 hours of uninterrupted imaging. Combined with AI-enhanced diagnostics, the innovation is aimed at supporting early detection, home-based pre-diagnosis, and continuous monitoring of chronic diseases.

Beyond improving patient outcomes, this innovation could help ease labor shortages by freeing up ultrasound operators, nurses, and doctors to focus on more complex care, while reducing demand for hospital beds and resources. By shifting monitoring to homes and communities, WITEC’s technology will enable patient self-management and timely intervention, potentially lowering health-care costs and alleviating the increasing financial and manpower pressures of an aging population.

Driving innovation through interdisciplinary collaboration

WITEC is led by the following co-lead principal investigators: Xuanhe Zhao, professor of mechanical engineering and professor of civil and environmental engineering at MIT; Joseph Sung, senior vice president of health and life sciences at NTU Singapore and dean of the Lee Kong Chian School of Medicine (LKCMedicine); Cher Heng Tan, assistant dean of clinical research at LKCMedicine; Chwee Teck Lim, NUS Society Professor of Biomedical Engineering at NUS and director of the Institute for Health Innovation and Technology at NUS; and Xiaodong Chen, distinguished university professor at the School of Materials Science and Engineering within NTU. 

“We’re extremely proud to bring together an exceptional team of researchers from Singapore and the U.S. to pioneer core technologies that will make wearable ultrasound imaging a reality. This endeavor combines deep expertise in materials science, data science, AI diagnostics, biomedical engineering, and clinical medicine. Our phased approach will accelerate translation into a fully wearable platform that reshapes how chronic diseases are monitored, diagnosed and managed,” says Zhao, who serves as a co-lead PI of WITEC.

Research roadmap with broad impact across health care, science, industry, and economy

Bringing together leading experts across interdisciplinary fields, WITEC will advance foundational work in soft materials, transducers, microelectronics, data science and AI diagnostics, clinical medicine, and biomedical engineering. As a deep-tech R&D group, it has the potential to drive innovation in health-care technology and manufacturing, wearable ultrasonic imaging, metamaterials, diagnostics, and AI-powered health analytics. WITEC’s work is also expected to accelerate growth in high-value jobs across research, engineering, clinical validation, and health-care services, and to attract strategic investments that foster biomedical innovation and industry partnerships in Singapore, the United States, and beyond.

“Chronic diseases present significant challenges for patients, families, and health-care systems, and with aging populations such as Singapore, those challenges will only grow without new solutions. Our research into a wearable ultrasound imaging system aims to transform daily care for those living with cardiovascular and other chronic conditions — providing clinicians with richer, continuous insights to guide treatment, while giving patients greater confidence and control over their own health. WITEC’s pioneering work marks an important step toward shifting care from episodic, hospital-based interventions to more proactive, everyday management in the community,” says Sung, who serves as co‑lead PI of WITEC.

Led by Violet Hoon, senior consultant at TTSH, clinical trials are expected to commence this year to validate long-term heart monitoring in the management of chronic cardiovascular disease. Over the next three years, WITEC aims to develop a fully integrated platform capable of 48-hour intermittent imaging through innovations in bioadhesive couplants, nanostructured metamaterials, and ultrasonic transducers.

As MIT’s research enterprise in Singapore, SMART is committed to advancing breakthrough technologies that address pressing global challenges. WITEC adds to SMART’s existing research endeavors that foster a rich exchange of ideas through collaboration with leading researchers and academics from the United States, Singapore, and around the world in key areas such as antimicrobial resistance, cell therapy development, precision agriculture, AI, and 3D-sensing technologies.

New tissue models could help researchers develop drugs for liver disease

Tue, 02/03/2026 - 5:00am

More than 100 million people in the United States suffer from metabolic dysfunction-associated steatotic liver disease (MASLD), characterized by a buildup of fat in the liver. This condition can lead to the development of more severe liver disease that causes inflammation and fibrosis.

In hopes of discovering new treatments for these liver diseases, MIT engineers have designed a new type of tissue model that more accurately mimics the architecture of the liver, including blood vessels and immune cells.

Reporting their findings today in Nature Communications, the researchers showed that this model could accurately replicate the inflammation and metabolic dysfunction that occur in the early stages of liver disease. Such a device could help researchers identify and test new drugs to treat those conditions.

This is the latest study in a larger effort by this team to use these types of tissue models, also known as microphysiological systems, to explore human liver biology, which cannot be easily replicated in mice or other animals.

In another recent paper, the researchers used an earlier version of their liver tissue model to explore how the liver responds to resmetirom. This drug is used to treat an advanced form of liver disease called metabolic dysfunction-associated steatohepatitis (MASH), but it is only effective in about 30 percent of patients. The team found that the drug can induce an inflammatory response in liver tissue, which may help to explain why it doesn’t help all patients.

“There are already tissue models that can make good preclinical predictions of liver toxicity for certain drugs, but we really need to better model disease states, because now we want to identify drug targets, we want to validate targets. We want to look at whether a particular drug may be more useful early or later in the disease,” says Linda Griffith, the School of Engineering Professor of Teaching Innovation at MIT, a professor of biological engineering and mechanical engineering, and the senior author of both studies.

Former MIT postdoc Dominick Hellen is the lead author of the resmetirom paper, which appeared Jan. 14 in Communications Biology. Erin Tevonian PhD ’25 and PhD candidate Ellen Kan, both in the Department of Biological Engineering, are the lead authors of today’s Nature Communications paper on the new microphysiological system.

Modeling drug response

In the Communications Biology paper, Griffith’s lab worked with a microfluidic device that she originally developed in the 1990s, known as the LiverChip. This chip offers a simple scaffold for growing 3D models of liver tissue from hepatocytes, the primary cell type in the liver.

This chip is widely used by pharmaceutical companies to test whether their new drugs have adverse effects on the liver, which is an important step in drug development because most drugs are metabolized by the liver.

For the new study, Griffith and her students modified the chip so that it could be used to study MASLD.

Patients with MASLD, a buildup of fat in the liver, can eventually develop MASH, a more severe disease that occurs when scar tissue called fibrosis forms in the liver. Currently, resmetirom and the GLP-1 drug semaglutide are the only medications that are FDA-approved to treat MASH. Finding new drugs is a priority, Griffith says.

“You’re never declaring victory with liver disease with one drug or one class of drugs, because over the long term there may be patients who can’t use them, or they may not be effective for all patients,” she says.

To create a model of MASLD, the researchers exposed the tissue to high levels of insulin, along with large quantities of glucose and fatty acids. This led to a buildup of fat in the tissue and the development of insulin resistance, a trait that is often seen in MASLD patients and can lead to type 2 diabetes.

Once that model was established, the researchers treated the tissue with resmetirom, a drug that works by mimicking the effects of thyroid hormone, which stimulates the breakdown of fat.

To their surprise, the researchers found that this treatment could also lead to an increase in immune signaling and markers of inflammation.

“Because resmetirom is primarily intended to reduce hepatic fibrosis in MASH, we found the result quite paradoxical,” Hellen says. “We suspect this finding may help clinicians and scientists alike understand why only a subset of patients respond positively to the thyromimetic drug. However, additional experiments are needed to further elucidate the underlying mechanism.”

A more realistic liver model

In the Nature Communications paper, the researchers reported a new type of chip that allows them to more accurately reproduce the architecture of the human liver. The key advance was developing a way to induce blood vessels to grow into the tissue. These vessels can deliver nutrients and also allow immune cells to flow through the tissue.

“Making more sophisticated models of liver that incorporate features of vascularity and immune cell trafficking that can be maintained over a long time in culture is very valuable,” Griffith says. “The real advance here was showing that we could get an intimate microvascular network through liver tissue and that we could circulate immune cells. This helped us to establish differences between how immune cells interact with the liver cells in a type two diabetes state and a healthy state.”

As the liver tissue matured, the researchers induced insulin resistance by exposing the tissue to increased levels of insulin, glucose, and fatty acids.

As this disease state developed, the researchers observed changes in how hepatocytes clear insulin and metabolize glucose, as well as narrower, leakier blood vessels that reflect microvascular complications often seen in diabetic patients. They also found that insulin resistance leads to an increase in markers of inflammation that attract monocytes into the tissue. Monocytes are the precursors of macrophages, immune cells that help with tissue repair during inflammation and are also observed in the liver of patients with early-stage liver disease.

“This really shows that we can model the immune features of a disease like MASLD, in a way that is all based on human cells,” Griffith says.

The research was funded by the National Institutes of Health, the National Science Foundation Graduate Research Fellowship Program, Novo Nordisk, the Massachusetts Life Sciences Center, and the Siebel Scholars Foundation.

Your future home might be framed with printed plastic

Tue, 02/03/2026 - 12:00am

The plastic bottle you just tossed in the recycling bin could provide structural support for your future house.

MIT engineers are using recycled plastic to 3D print construction-grade beams, trusses, and other structural elements that could one day offer lighter, modular, and more sustainable alternatives to traditional wood-based framing.

In a paper published in the Solid Freeform Fabrication Symposium Proceedings, the MIT team presents the design for a 3D-printed floor truss system made from recycled plastic.

A traditional floor truss is made from wood beams connected by metal plates in a pattern resembling a ladder with diagonal rungs. Set on its edge and combined with other parallel trusses, each truss helps support flooring material, such as plywood, that lies over the trusses.

The MIT team printed four long trusses out of recycled plastic and configured them into a conventional plywood-topped floor frame, then tested the structure’s load-bearing capacity. The printed flooring held over 4,000 pounds, exceeding key building standards set by the U.S. Department of Housing and Urban Development.

The plastic-printed trusses weigh about 13 pounds each, which is lighter than a comparable wood-based truss, and they can be printed on a large-scale industrial printer in under 13 minutes. In addition to floor trusses, the group is working on printing other elements and combining them into a full frame for a modest-sized home.

The researchers envision that as global demand for housing eclipses the supply of wood in the coming years, single-use plastics such as water bottles and food containers could get a second life as recycled framing material to alleviate both a global housing crisis and the overwhelming demand for timber.

“We’ve estimated that the world needs about 1 billion new homes by 2050. If we try to make that many homes using wood, we would need to clear-cut the equivalent of the Amazon rainforest three times over,” says AJ Perez, a lecturer in the MIT School of Engineering and research scientist in the MIT Office of Innovation. “The key here is: We recycle dirty plastic into building products for homes that are lighter, more durable, and sustainable.”

Perez’s co-authors on the study are graduate students Tyler Godfrey, Kenan Sehnawi, and Arjun Chandar, along with David Hardt, professor of mechanical engineering; all are members of the MIT Laboratory for Manufacturing and Productivity.

Printing dirty

In 2019, Perez and Hardt started MIT HAUS, a group within the Laboratory for Manufacturing and Productivity that aims to produce homes from recycled polymer products using large-scale additive manufacturing, a family of technologies capable of producing big structures, layer by layer, on relatively short timescales.

Today, some companies are exploring large-scale additive manufacturing to 3D-print modest-sized homes. These efforts have mainly focused on printing with concrete or clay, materials whose production carries a large environmental footprint. The house structures printed so far are also largely limited to walls. The MIT HAUS group is among the first to consider printing structural framing elements such as foundation pilings, floor trusses, stair stringers, roof trusses, wall studs, and joists.

What’s more, they are seeking to do so not with cement, but with recycled “dirty” plastic — plastic that doesn’t have to be cleaned and preprocessed before reuse. The researchers envision that one day, used bottles and food containers could be fed directly into a shredder, pelletized, then fed into a large-scale additive manufacturing machine to become structural composite construction components. The plastic composite parts would be light enough to transport via pickup truck rather than a traditional lumber-hauling 18-wheeler. At the construction site, the elements could be quickly fitted into a lightweight yet sturdy home frame.

“We are starting to crack the code on the ability to process and print really dirty plastic,” Perez says. “The questions we’ve been asking are, what is the dirty, unwanted plastic good for, and how do we use the dirty plastic as-is?”

Weight class

The team’s new study is one step toward that overall goal of sustainable, recycled construction. In this work, they developed a design for a printed floor truss made from recycled plastic. They designed the truss with a high stiffness-to-weight ratio, meaning that it should be able to support a given amount of weight with minimal deflection, or bending. (Think of being able to walk across a floor without it sagging between the joists.)

The researchers first explored a handful of possible truss designs in simulation, and put each design through a simulated load-bearing test. Their modeling showed that one design in particular exhibited the highest stiffness-to-weight ratio and was therefore the most promising pattern to print and physically test. The design is close to the traditional wood-based floor truss pattern resembling a ladder with diagonal, triangular rungs. The team made a slight adjustment to this design, adding small reinforcing elements to each node where a “rung” met the main truss frame.
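The article does not detail the simulations themselves, but the screening logic can be illustrated with simple beam theory: for a simply supported span of length L under a center point load P, Euler-Bernoulli theory gives a midspan deflection of δ = PL³/(48EI), so each candidate can be scored by its stiffness (P/δ) divided by its weight. Below is a minimal Python sketch of such a comparison; the candidate names, EI values, and weights are invented for illustration and are not the team’s data.

```python
# Illustrative sketch only: the team's actual simulations are not described
# in the article. This ranks hypothetical truss candidates using simple
# Euler-Bernoulli beam theory for a center point load on a simply supported
# span: deflection = P * L**3 / (48 * E * I).

# Candidates: (name, effective bending stiffness E*I [lb*in^2], weight [lb]).
# All numbers are invented for illustration.
CANDIDATES = [
    ("plain ladder",      3.0e7, 15.0),
    ("diagonal rungs",    4.5e7, 14.0),
    ("reinforced nodes",  5.2e7, 13.0),
]

SPAN_IN = 96.0   # 8-foot truss, in inches
LOAD_LB = 300.0  # center point load used for the comparison

def center_deflection(ei: float, span: float, load: float) -> float:
    """Midspan deflection of a simply supported beam under a center point load."""
    return load * span**3 / (48.0 * ei)

for name, ei, weight in CANDIDATES:
    d = center_deflection(ei, SPAN_IN, LOAD_LB)
    stiffness = LOAD_LB / d        # pounds per inch of deflection
    score = stiffness / weight     # stiffness-to-weight figure of merit
    print(f"{name:18s} deflection = {d:.3f} in, stiffness/weight = {score:.1f}")
```

Whatever the team’s actual simulation setup, the selection rule is the same: the design that carries the most load per pound with the least bending wins.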

To print the design, Perez and his colleagues went to MIT’s Bates Research and Engineering Center, which houses the group’s 3D printer — a room-sized industrial machine capable of printing large structures at rates of up to 80 pounds of material per hour. For their preliminary study, the researchers used pellets made of a combination of recycled PET polymers and glass fibers — a mixture that improves the material’s printability and durability. They obtained the material from an aerospace materials company, then fed the pellets into the printer as composite “ink.”

The team printed four trusses, each measuring 8 feet long, 1 foot high, and about 1 inch wide. Each truss took about 13 minutes to print. Perez and Godfrey spaced the trusses apart in a parallel configuration similar to traditional wood-based trusses, and screwed them into a sheet of plywood to mimic a 4-x-8-foot floor frame. They placed bags of sand and concrete of increasing weight in the center of the flooring system and measured the amount of deflection that the trusses experienced underneath.

The trusses easily withstood loads of 300 pounds while staying within the deflection standards set by the U.S. Department of Housing and Urban Development. The researchers didn’t stop there, continuing to add weight. Only when the loads exceeded 4,000 pounds did the trusses finally buckle and crack.
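The article doesn’t say which HUD criterion was applied, but residential floor codes commonly cap live-load deflection at span/360, which for an 8-foot (96-inch) span works out to about 0.27 inches. The short Python sketch below checks hypothetical test readings against that assumed limit; the load-deflection numbers are invented for illustration.

```python
# Hypothetical load-deflection readings (lb, inches) from a center-load test
# like the one described above; the numbers are invented for illustration.
readings = [(100, 0.06), (200, 0.13), (300, 0.20), (500, 0.35)]

span_in = 96.0            # 8-foot trusses
limit = span_in / 360.0   # span/360 is a common code limit; applying it to the
                          # HUD standard here is an assumption (about 0.27 in)

for load, deflection in readings:
    status = "within limit" if deflection <= limit else "exceeds limit"
    print(f"{load:4d} lb -> {deflection:.2f} in ({status})")
```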

In terms of stiffness, the printed trusses meet existing building codes in the U.S. To make them ready for wide adoption, Perez says the cost of producing the structures will have to be brought down to compete with the price of wood. The trusses in the new study were printed using recycled plastic, but from a source that he describes as the “crème de la crème of recycled feedstocks.” The plastic is factory-discarded material, but it is not quite the “dirty” plastic that he ultimately aims to shred, print, and build with.

The current study demonstrates that it is possible to print structural building elements from recycled plastic. Perez is now working with dirtier plastics, such as used soda bottles that still hold a bit of liquid residue, to see how such contaminants affect the quality of the printed product.

If dirty plastics can be made into durable housing structures, Perez says “the idea is to bring shipping containers close to where you know you’ll have a lot of plastic, like next to a football stadium. Then you could use off-the-shelf shredding technology and feed that dirty shredded plastic into a large-scale additive manufacturing system, which could exist in micro-factories, just like bottling centers, around the world. You could print the parts for entire buildings that would be light enough to transport on a moped or pickup truck to where homes are most needed.”

This research was supported, in part, by the Gerstner Foundation, the Chandler Health of the Planet grant, and Cincinnati Incorporated.
