MIT Latest News

MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.
Updated: 13 hours 9 min ago

MIT engineers develop a magnetic transistor for more energy-efficient electronics

Wed, 09/23/2026 - 10:32am

Transistors, the building blocks of modern electronics, are typically made of silicon. Because it’s a semiconductor, this material can control the flow of electricity in a circuit. But silicon has fundamental physical limits that restrict how compact and energy-efficient a transistor can be.

MIT researchers have now replaced silicon with a magnetic semiconductor, creating a magnetic transistor that could enable smaller, faster, and more energy-efficient circuits. The material’s magnetism strongly influences its electronic behavior, leading to more efficient control of the flow of electricity. 

The team used a novel magnetic material and an optimization process that reduces the material’s defects, which boosts the transistor’s performance.

The material’s unique magnetic properties also allow for transistors with built-in memory, which would simplify circuit design and unlock new applications for high-performance electronics.

“People have known about magnets for thousands of years, but there are very limited ways to incorporate magnetism into electronics. We have shown a new way to efficiently utilize magnetism that opens up a lot of possibilities for future applications and research,” says Chung-Tao Chou, an MIT graduate student in the departments of Electrical Engineering and Computer Science (EECS) and Physics, and co-lead author of a paper on this advance.

Chou is joined on the paper by co-lead author Eugene Park, a graduate student in the Department of Materials Science and Engineering (DMSE); Julian Klein, a DMSE research scientist; Josep Ingla-Aynes, a postdoc in the MIT Plasma Science and Fusion Center; Jagadeesh S. Moodera, a senior research scientist in the Department of Physics; and senior authors Frances Ross, TDK Professor in DMSE; and Luqiao Liu, an associate professor in EECS, and a member of the Research Laboratory of Electronics; as well as others at the University of Chemistry and Technology in Prague. The paper appears today in Physical Review Letters.

Overcoming the limits

In an electronic device, silicon semiconductor transistors act like tiny light switches that turn a circuit on and off, or amplify weak signals in a communication system. They do this using a small input voltage.

But a fundamental physical limit of silicon semiconductors prevents a transistor from operating below a certain voltage, which hinders its energy efficiency.

To make more efficient electronics, researchers have spent decades working toward magnetic transistors that utilize electron spin to control the flow of electricity. Electron spin is a fundamental property that enables electrons to behave like tiny magnets.

So far, scientists have mostly been limited to using certain magnetic materials. These lack the favorable electronic properties of semiconductors, constraining device performance.

“In this work, we combine magnetism and semiconductor physics to realize useful spintronic devices,” Liu says.

The researchers replace the silicon in the surface layer of a transistor with chromium sulfur bromide, a two-dimensional material that acts as a magnetic semiconductor.

Due to the material’s structure, researchers can switch between two magnetic states very cleanly. This makes it ideal for use in a transistor that smoothly switches between “on” and “off.”

“One of the biggest challenges we faced was finding the right material. We tried many other materials that didn’t work,” Chou says.

They discovered that changing these magnetic states modifies the material’s electronic properties, enabling low-energy operation. And unlike many other 2D materials, chromium sulfur bromide remains stable in air.

To make a transistor, the researchers pattern electrodes onto a silicon substrate, then carefully align and transfer the 2D material on top. They use tape to pick up a tiny piece of material, only a few tens of nanometers thick, and place it onto the substrate.

“A lot of researchers will use solvents or glue to do the transfer, but transistors require a very clean surface. We eliminate all those risks by simplifying this step,” Chou says.

Leveraging magnetism

This lack of contamination enables their device to outperform existing magnetic transistors. Most others can only create a weak magnetic effect, changing the flow of current by a few percent or less. Their new transistor can switch or amplify the electric current by a factor of 10.

They use an external magnetic field to change the magnetic state of the material, switching the transistor using significantly less energy than would usually be required.

The material also allows them to control the magnetic states with electric current. This is important because engineers cannot apply magnetic fields to individual transistors in an electronic device. They need to control each one electrically.

The material’s magnetic properties could also enable transistors with built-in memory, simplifying the design of logic or memory circuits.

A typical memory device has a magnetic cell to store information and a transistor to read it out. Their method can combine both into one magnetic transistor.

“Now, not only are transistors turning on and off, they are also remembering information. And because we can switch the transistor with greater magnitude, the signal is much stronger so we can read out the information faster, and in a much more reliable way,” Liu says.

Building on this demonstration, the researchers plan to further study the use of electrical current to control the device. They are also working to make their method scalable so they can fabricate arrays of transistors.

This research was supported, in part, by the Semiconductor Research Corporation, the U.S. Defense Advanced Research Projects Agency (DARPA), the U.S. National Science Foundation (NSF), the U.S. Department of Energy, the U.S. Army Research Office, and the Czech Ministry of Education, Youth, and Sports. The work was partially carried out at the MIT.nano facilities.

A new approach to cancer vaccination yields more powerful T cells

13 hours 12 min ago

MIT engineers have developed a new way to amplify the T-cell response to mRNA vaccines — an advance that could lead to much more powerful cancer vaccines and stronger protection against infectious diseases.

Most vaccines generate both antibodies and T cells that can target the vaccine antigen by activating antigen-presenting cells, such as dendritic cells. In this study, the researchers boosted the T-cell response with a new type of vaccine adjuvant (a material that can help stimulate the immune system). The new adjuvant consists of mRNA molecules encoding genes that turn on immune signaling pathways and promote a supercharged T-cell response. 

In studies in mice, this mRNA-encoded adjuvant enabled the immune system to completely eradicate most tumors, either on its own or delivered along with a tumor antigen. The adjuvant also boosted the T-cell response to vaccines against influenza and Covid-19.

“When these adjuvant mRNAs are included in the vaccines, the number of antigen-targeted T cells is substantially increased. These T cells play an important role in the immune response, assisting in the clearance of virally infected cells or, in the case of cancer, killing cancerous cells,” says Daniel Anderson, a professor in MIT’s Department of Chemical Engineering and a member of MIT’s Koch Institute for Integrative Cancer Research and Institute for Medical Engineering and Science.

Anderson and Christopher Garris, an assistant professor at Harvard Medical School and Massachusetts General Hospital, are the senior authors of the study, which appears today in Nature Biotechnology. The paper’s lead authors are Akash Gupta, a former Koch Institute research scientist who is now an assistant professor at the University of Houston; Kaelan Reed, an MIT graduate student; and Riddha Das, a research fellow at Harvard Medical School and MGH. Robert Langer, the David H. Koch Institute Professor at MIT, and Ralph Weissleder, a professor of radiology and systems biology at MGH and Harvard Medical School, are also authors.

More powerful vaccines

Vaccines that stimulate the body’s immune system to attack tumors have shown promise in clinical trials, and a handful have been FDA-approved for certain cancers. In some patients, these vaccines stimulate a strong response, but in others, a weak response fails to kill the cancerous cells.

The MIT-MGH team wanted to find a way to make those immune responses more powerful. One way to do that is to deliver immune-stimulating molecules called cytokines along with a vaccine. However, cytokines can overstimulate the immune system, leading to potentially severe side effects.

As an alternative approach, the researchers decided to deliver mRNA strands encoding two genes, IRF8 and NIK, which are involved in antigen presentation and can switch immune cells into a more active state.

NIK is an enzyme that activates a signaling pathway involved in immunity and inflammation, while IRF8 is a transcription factor that helps program dendritic cells, particularly a subset called cDC1, which are especially effective at activating T cells. These antigen-presenting cells can digest foreign antigens and present them to T cells, stimulating the T cells to mount an immune response against the antigen.

“We see that the dendritic cells start shifting toward a more cDC1 phenotype, which is the most important dendritic cell phenotype and can generate a stronger T-cell response,” Gupta says. 

The researchers packaged the mRNA in lipid nanoparticles similar to those used to deliver mRNA Covid vaccines, but with a different chemical composition that promotes their delivery to the spleen after being injected intravenously. 

Inside the spleen, the particles encounter antigen-presenting cells, including dendritic cells. Within 24 hours, these cells begin expressing IRF8 and NIK, and both of these pathways help drive dendritic cells to mature and become activated so that they can prime an anti-tumor response. 

Over a few days to a week, the T-cell population expands. These T cells, along with other immune cells such as natural killer (NK) cells, can then recognize and attack tumors.

“Most cancer immunotherapies rely on external signals to activate immune cells. We take a different approach — reprogramming immune cells from within by targeting their internal signaling machinery, enabling a more potent and durable anti-tumor response,” Das says. 

Stronger T cells

The researchers tested the immune-remodeling mRNAs in several mouse models of cancer, including an aggressive bladder cancer, colon carcinoma, melanoma, and metastatic lung cancer. In nearly all of these mice, the injected mRNA stimulated a strong T-cell response that significantly slowed tumor growth and in many cases completely eradicated the tumors. This happened even when the mice were not given a vaccine against a specific cancer antigen. When they were, the response was even stronger.

“We showed that you can get an anti-cancer response with these adjuvants without including the antigen, just by activating the immune system. However, cancer-specific antigens with the adjuvants in a vaccine further improved the responses,” Anderson says.

The mRNA adjuvant also enhanced the immune response to immunotherapy drugs called checkpoint blockade inhibitors. These drugs, which work by lifting a brake that tumor cells put on T cells, are FDA-approved to treat several kinds of cancer. These drugs don’t work for all patients, but combining them with the mRNA vaccine adjuvant could offer a way to make them more effective, the researchers say.

“The microenvironment of solid tumors is often hostile to T cells and represents a major barrier to effective immunotherapy. We find that immune remodeling with these adjuvants creates a T cell–permissive environment and promotes tumor rejection,” Garris says.

The researchers also explored whether their new adjuvant could boost the immune response to vaccination against viral infection. When they delivered the mRNA particles along with Covid or flu vaccines, they found that the vaccine generated a 10-to-15-fold stronger T cell response in the mice.

The researchers now plan to test this approach in additional animal models, in hopes of developing it for use in both cancer and infectious diseases. 

“While there are differences between the mouse systems that we’ve worked in and humans, we are optimistic that these adjuvants will work in humans and could improve a range of different vaccines,” Anderson says.

The research was funded by Sanofi, the National Institutes of Health, the Marble Center for Cancer Nanomedicine, and the Koch Institute Support (core) Grant from the National Cancer Institute.

3 Questions: Shedding light on why power grids go dark

Tue, 05/12/2026 - 5:25pm

On April 28, 2025, the power grid serving continental Spain and Portugal went down, causing gridlock in cities, cutting communications networks, and stranding people on trains, in airports, and in elevators all across the Iberian peninsula and, briefly, in a small area of southwest France close to the Spanish border. The unprecedented, massive blackout lasted as long as 12 hours in some areas, including the capital city, Madrid. Not surprisingly, blame was assigned quickly: early reactions pointed to cyberattack, sabotage, and natural phenomena such as solar flares.

But such theories were quickly laid to rest, and a panel of experts was formed to determine exactly what caused the blackout. A year after the outage — and after much analysis by many experts — there is still no simple answer: in short, no one technology was to blame. While solar and wind generation was high that day, experts agree that the renewables weren’t at fault.

In this Q&A, Pablo Duenas-Martinez, a research scientist at the MIT Energy Initiative and an assistant professor at Universidad Pontificia Comillas in Madrid, provides an update.

Q: How does a proper, well-functioning power grid behave, and what does the system operator do to help?

A: There are two components to the flows on a power grid. One is “active power” — the part that lights up our light bulbs and runs our engines. With active power, the demand on the grid must always equal supply. The other component is “reactive power,” the part we can’t see but controls the voltage at which the power is delivered so it suits our devices. If voltage is too low, lights will flicker. If voltage is too high, devices may not only fail to work, but be damaged beyond repair.
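The two components described above can be sketched with a few lines of arithmetic. In the sketch below (illustrative only; the voltage and current values are invented for the example, not taken from the Spanish grid), active power P and reactive power Q fall out of the complex “apparent” power S = V · conj(I):

```python
import cmath
import math

# Invented example values: 230 V supply, 10 A current lagging by 30 degrees
# (a typical inductive load). Not data from any real grid.
V = cmath.rect(230.0, 0.0)
I = cmath.rect(10.0, math.radians(-30.0))

S = V * I.conjugate()        # apparent power, volt-amperes
P = S.real                   # active power (watts): must always match demand
Q = S.imag                   # reactive power (vars): positive means the load absorbs it
power_factor = P / abs(S)

print(f"P = {P:.1f} W, Q = {Q:.1f} var, power factor = {power_factor:.3f}")
```

A generator providing “reactive power control” is, in these terms, adjusting the Q it injects or absorbs so that voltage stays inside its safe band, while P continues to track demand.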

The operator of the transmission system — the TSO — must control both components, and that can be tricky. Active power supply and demand are largely coordinated through markets. But controlling reactive power is harder. The main way the TSO can control it is to call on operators of conventional power generators, meaning plants burning natural gas or coal, or nuclear plants. Those systems can be adjusted to either absorb or inject reactive power as needed to control voltage on the power grid — indeed, they are typically required by law to provide “reactive power control.”

In contrast, solar and wind generators always absorb reactive power. The large solar and wind sources can provide reactive power control when it’s needed, but doing so is costly for them — and in Spain, unlike in most countries, it’s not mandated by law, so they typically don’t do it. Meanwhile, there are many small solar systems — imagine lots of rooftop solar installations and small solar farms. Those small systems are directly connected to the distribution system. As a result, they’re not controlled by the TSO; the TSO may not even know whether they’ve shut down or are still running and absorbing reactive power.

Sometimes, fluctuations in voltage called “oscillations” can happen on a power grid: for example, when a transmission line or a generator is connected or disconnected. Oscillations can increase and decrease the voltage rapidly, and if voltage gets too high, generators and user devices can start “tripping” — that is, automatically disconnecting to prevent being damaged. Operators have standard protocols to follow to bring oscillations under control.

Q: So what happened on April 28 of last year?

A: The Spanish grid is loosely connected to the French grid and in practice is merged with the grid serving Portugal. Within Spain, we have many large solar and wind farms and lots of small installations of solar systems, many located in the southwestern area of the country. On April 28 — as on most spring days, when demand is low — about two-thirds of the power on the grid came from renewable sources. The rest came from a mix of nuclear and natural gas plants.

The day before the blackout, the TSO confirmed that there were no conventional generators scheduled to run. So, to ensure safe operation the next day, the TSO took steps that included dispatching 12 conventional generators, 10 of them to provide reactive power control. One of the units in the south called back and said, “I won’t be available. I cannot switch on tomorrow.” The TSO judged that things were still under control and continued operations with only nine units available to provide reactive power control.

During the morning on April 28, several small oscillations on the power grid were detected coming from Europe, plus one from Spain. To stabilize the weakened grid, the TSO connected additional transmission lines and took other technical actions.

At 12:19 p.m., a major oscillation was detected on the grid, again coming from Europe. In response, the TSO — again following standard protocol — reduced exports to Portugal, switched the flows to France from alternating current to direct current, and connected five more transmission lines within Spain. While those steps stabilized the voltage, the TSO recognized that there was now limited capacity on the system to control voltage. So, the operator called on a different conventional generator to begin running. But that unit wouldn’t be available for an hour.

Suddenly, as a consequence of the previous actions, the voltage increased dramatically, and generating units began to trip. Within half a second, many of the small solar generators — especially prone to damage from high voltages — automatically shut down. Twenty milliseconds later, a big solar plant in southwestern Spain tripped. Because the solar plants were no longer absorbing reactive power, voltage on the system went up even more, and more systems shut down. The grid went into what some have called a death spiral, resulting in a total blackout across the Iberian peninsula and some areas of southern France.

Q: What have we learned from this Iberian blackout, and have changes been implemented to ensure that the same won’t happen again — or happen elsewhere?

A: A resilient power system must prevent, mitigate, respond, and recover. In this case, the first three components clearly failed. Preventive mechanisms were insufficient; they initially mitigated the oscillatory events, but left the system in a weakened state, and the response triggered the death spiral that led to the final blackout.

The good news is that the recovery was quick. The northern and southern sections of the peninsula had power back within a few hours. I live in the suburbs of Madrid, and I had power back just six hours later. My parents live downtown, so that was far more challenging — a big city with a large, complex load. Even so, they had power back in 12 hours — and 12 hours is quick for such a major, widespread blackout.

In the end, experts and analysts have agreed that the blackout was caused by a series of events that were all happening in the same place, at the same time. And the experience did provide a number of valuable learnings:

Lesson 1

The experience clearly demonstrated the importance of having a sufficient number of conventional power plants prepared to provide reactive power control, or to turn on right away when called on. There’s a recommendation calling for a set ratio between conventional generators and renewables on a power grid. Conventional facilities such as nuclear, hydroelectric, and fossil fuel plants rely on heavy metal wheels to generate electricity. Those massive rotating wheels have high inertia, so they’ll keep running and can help stabilize frequency and voltage even when solar and wind plants shut down. Before the blackout, Spain had a sufficient number of “rotating units” to meet the recommended ratio. However, in southern Spain, there was just one such unit — well below the recommended number, given the huge number of small solar units plus several large solar units in the area.

The message here is that you can't just look at the country as a whole. You have to look at regions. Voltage is a local problem that can propagate at the system level. Before the blackout, southern Spain typically had at most three conventional power plants. Now the region usually has six or seven at the ready to help with reactive power control.

Lesson 2

The rules or protocols for controlling reactive power and dealing with oscillations were not well designed. By law, rotating generators must automatically — and without being paid — do a defined amount of reactive power control. But making the needed operational change costs money, and a plant can do less than the required amount and not incur any kind of penalty. However, the TSO doesn’t know in advance how much reactive power control a given plant will actually do. Now that loophole in the law has been reviewed by the regulator.

The main rules have been updated, and now also require large solar and wind power plants — those above 5 megawatts — to provide reactive power control. More importantly, voltage control will be auctioned and remunerated, incentivizing rotating conventional generators and bringing in a new money stream for solar and wind power plants. Those power plants that do not upgrade their installation for voltage control might be disconnected by the TSO if local voltage issues arise.

Lesson 3

Another learning concerns the many small solar power generators and the protections that cause them to trip. The TSO doesn’t know in advance when this may happen because the small solar sources are directly connected to the distribution system, and therefore are under the umbrella of the distribution system operator. So, the learning here is that there should be more communication and coordination between the operator of the transmission system — the TSO — and the operator of the distribution system.

Lesson 4

In most countries, laws dictate a range of voltage that is approved. In Spain, the upper limit is high — in fact, it’s very near a voltage at which equipment may be damaged. And the Spanish grid tends to hover close to that upper limit, even during normal operation, and that can be a big problem: If there are strong oscillations — as there were leading up to the blackout — voltage can reach that upper limit, and protections on devices will automatically trip. The panel of experts has strongly recommended to lower this upper limit in Spain and align it with the rules in neighboring countries, including Portugal and France. The TSO is still studying the recommended change.

Lesson 5

During normal operation, the TSO controls voltage by activating rotating generators that can provide reactive power control. But as we saw in conditions leading up to the blackout, the TSO doesn’t always have rotating generators available.

Theoretically, TSOs have two more ways to control voltage. They can connect a device called a shunt reactor, which absorbs reactive power — a means of dealing with voltage rise. And they can regulate voltage directly using a “STATCOM,” a special device that provides rapid, dynamic voltage control.

However, neither the shunt reactors nor the STATCOM could help prevent the blackout. The shunt reactors available at that time were operated manually, and collapse of the grid happened so quickly that the TSO didn’t have time to connect them. And at that time, there was a single STATCOM device on the Spanish system. Planning was under way to install three more devices — and that installation is being rapidly completed.

From newspaper articles and off-the-record conversations, I’ve learned that the system has — due to similar external circumstances — been close to blackout again during the past year. But in part due to the learnings and to changes that have been implemented as a result, it didn’t happen again.

A new unit of measurement to honor an influential MIT alumnus

Tue, 05/12/2026 - 5:15pm

The hallowed history of student pranks (often known as hacks) at MIT includes the annual Baker House Piano Drop and the MIT weather balloon at the Harvard-Yale football game in 1982. One hack that has shown remarkable staying power in local lore is the 1958 measurement of the Massachusetts Ave. Bridge in “smoots,” a now-accepted unit of measurement named for the 5-foot, 7-inch Oliver R. Smoot Jr. ’62. Then a first-year pledge at the Lambda Chi Alpha fraternity, Smoot famously laid down hundreds of times across the span one storied night as his peers painted markers across the bridge, totaling 364.4 smoots (plus 1 ear). Nearly 70 years later, the smoot markings remain.

On April 4, an MIT team set out on a similar journey across the Charles River to pull off a new hack, this time measuring the Longfellow Bridge in “kleins.” This new measurement is named after Smoot’s classmate Martin Klein ’62. One klein (4 feet, 9.5 inches) is equal to 0.85820896 smoots. The expedition was undertaken in honor of both Smoot and the 85th birthday of Klein.

Known as the father of commercial side-scan sonar, Martin Klein serves on the MIT Sea Grant Advisory Board and the MIT Museum Collections Committee. He is a life fellow of both the Marine Technology Society and the Explorers Club, an international organization dedicated to the advancement of field exploration and scientific inquiry. His sonar technology has been used worldwide to help locate countless famous shipwrecks, including the Titanic, the World War I ocean liner RMS Lusitania, and the treasure-laden Nuestra Señora de Atocha.

Appropriately, the MIT team used a “side-scan” method to survey the Longfellow Bridge. Reclined on a custom-engineered wooden cart topped with a mission-specific chaise lounge pillow, Klein himself acted as the official observation device — by looking to the sides — as the team pulled him along the bridge. Some of the noted anomalies and discoveries included a Duck Boat passing underneath, a mermaid tail, a kayak paddle, a sleeping goose, and a tenacious survey team.

The initiative was spearheaded by Makenna Reilly, a second-year undergraduate in mechanical engineering, and Andrew Bennett ’85, PhD ’97, MIT Sea Grant education administrator and senior lecturer in the Department of Mechanical Engineering (MechE). Over a dozen surveyors joined the expedition, including alumni, faculty, and staff from MechE, MIT Sea Grant, MIT Edgerton Center, MIT Museum Hart Nautical Collections, Harvard Extension School, and Woods Hole Oceanographic Institution. MIT students also joined the effort, including senior Teagan Sullivan, junior Adrienne Lai, and graduate students Ansel Garcia-Langley, Erin Menezes, Manuel Valencia, and Gerardo Berlanga Molina.

The Longfellow Bridge was determined to be 442 kleins (plus 2 legs) and was celebrated as the “Shortfellow Bridge” in a ceremony following the event. 

One klein = 57.5 inches = 146.05 centimeters = 1.4605 meters = 0.0009075126 miles = 1.597222 yards = 4.791667 feet = 0.0007886069 nautical miles = 0.007260087 furlongs = 0.7986111 fathoms = 172.5 barleycorns = 292,100,000 beard-seconds = 647.4421 lignes = 14.375 horse hands = 4.819655 shaku = 0.85820896 smoots.
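Every entry in the table above follows from a single definition: a klein is 57.5 inches, and a smoot is 67 inches (5 feet 7 inches). A quick sketch for readers who want to check the arithmetic (the two constants are the only inputs):

```python
# One klein is 57.5 inches; one smoot is 67 inches (5 ft 7 in).
KLEIN_INCHES = 57.5
SMOOT_INCHES = 67.0

conversions = {
    "centimeters": KLEIN_INCHES * 2.54,   # exactly 2.54 cm per inch
    "feet": KLEIN_INCHES / 12,
    "yards": KLEIN_INCHES / 36,
    "horse hands": KLEIN_INCHES / 4,      # 1 hand = 4 inches
    "barleycorns": KLEIN_INCHES * 3,      # 1 inch = 3 barleycorns
    "smoots": KLEIN_INCHES / SMOOT_INCHES,
}

for unit, value in conversions.items():
    print(f"1 klein = {value:.7g} {unit}")

# The Longfellow Bridge, restated in Smoot's own unit:
print(f"442 kleins = {442 * KLEIN_INCHES / SMOOT_INCHES:.1f} smoots")
```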

Additional participants in the event include:

  • Elisabeth (Libby) Meier, assistant curator for the Hart Nautical Collections at the MIT Museum;
  • Dana Yoerger, PhD ’82, senior scientist in applied ocean physics and engineering at WHOI;
  • Professor George Buckley, assistant director of sustainability at Harvard University Extension School and diver of the year of the Boston Sea Rovers;
  • Paul K. Matthias, senior program manager of the Ocean Observatories Initiative at WHOI;
  • Jim Bales, associate director of the Edgerton Center at MIT;
  • John Freidah of MechE; and
  • Joice Himawan ’83.

A new way to spot signs of dark matter

Tue, 05/12/2026 - 1:00pm

Dark matter is thought to make up most of the matter in the universe, but the only way it interacts with its surroundings is through gravity. If two colliding black holes spiral through a dense region of dark matter and merge, gravitational waves rippling across space and time could carry an imprint of that dark matter.

Now, physicists may be able to spot such imprints of dark matter in gravitational waves that are detected on Earth. 

Researchers at MIT and in Europe have developed a method that makes predictions for what a gravitational wave should look like if it were produced by black holes that moved through dark matter, rather than empty space. They applied the technique to publicly available gravitational-wave data previously recorded by LIGO-Virgo-KAGRA (LVK), the global network of observatories that detect gravitational waves from black hole mergers and other far-off astrophysical sources.

The researchers looked through the gravitational-wave signals recorded over the LVK’s first three observing runs. From 28 of the clearest signals, the team found that 27 originated from black holes that merged in a vacuum, as physicists expected. But the pattern of one signal, GW190728, showed possible signs of a dark matter imprint. 

The scientists emphasize that they have not detected dark matter. Rather, the new method offers a new way to screen gravitational-wave data for hints of dark matter, which physicists can then follow up and confirm with other techniques. 

“We know that dark matter is around us. It just has to be dense enough for us to see its effects,” says Josu Aurrekoetxea, a postdoc in the MIT Department of Physics. “Black holes provide a mechanism to enhance this density, which we can now search for by analyzing the gravitational waves emitted when they merge.”

Aurrekoetxea and his colleagues report their results in a study appearing today in Physical Review Letters. The study’s co-authors are LVK member Soumen Roy of Université Catholique de Louvain (UCLouvain) in Belgium, Rodrigo Vicente of the University of Amsterdam, Katy Clough of Queen Mary University of London, and Pedro Ferreira of Oxford University. 

A dark pull

Dark matter is an invisible, hypothetical form of matter that, unlike normal everyday matter, has no interactions with the electromagnetic force. Dark matter can pass through light, magnetic fields, and any other form of energy along the electromagnetic spectrum without leaving a trace. The only evidence that dark matter exists is through its apparent interaction with one other force: gravity. 

By observing how light bends as it passes distant galaxies, astronomers have surmised that there must be an extra source of gravity, beyond the galaxies’ visible matter, to explain the bending, or “lensing.” That extra source, physicists suspect, is dark matter, which could account for over 85 percent of the matter in the universe. But exactly what dark matter is remains a matter of huge debate, with candidate particles that range widely in mass and properties.

One class of proposed dark matter consists of “light scalar” particles, whose masses are many orders of magnitude lighter than an electron. Theorists predict that such dark matter should behave not just as particles, but also as coordinated waves when moving near black holes.

When waves of dark matter come in contact with a rapidly spinning black hole, physicists predict that the black hole's rotational energy can be transferred to the dark matter, amplifying it. This phenomenon, known as superradiance, would whip up the waves to extremely high densities of dark matter, akin to churning cream into butter.
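For reference, this amplification condition has a compact standard form in the black-hole superradiance literature (it is not spelled out in the article itself): a bosonic wave mode with angular frequency $\omega$ and azimuthal number $m$ is amplified when

```latex
0 < \omega < m\,\Omega_H
```

where $\Omega_H$ is the angular frequency of the black hole's horizon. Modes satisfying this inequality extract rotational energy from the spinning black hole, building up the dense cloud the researchers describe.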

At high enough densities, light scalar dark matter, which is invisible by all other accounts, should leave an imprint on the gravitational waves that reverberate from the colliding black holes. 

But exactly what would that imprint look like? And could such an imprint be detectable in gravitational waves that arrive on Earth, from black holes that merged many millions of light years away? 

For answers to those questions, Aurrekoetxea and his colleagues developed a model to predict the gravitational waveform, or the pattern of gravitational waves that two black holes would produce, if they collided in an environment of dark matter, versus in a vacuum (empty space, with no dark matter). 

An imprint’s prediction

For their new study, the team performed detailed numerical simulations to predict the gravitational wave that would be produced given various properties of two colliding black holes — a system known as a “black hole binary.” They considered black hole binaries across a range of scenarios and properties, for example, varying the size and mass of each black hole, the environment of dark matter that the black holes might pass through, and the density of the dark matter that the black holes would spin up. 

They designed the model to predict what a gravitational wave from a black hole binary would look like if it carried an imprint of dark matter, and furthermore, what that wave would look like if it traveled a given distance across space and time, to eventually arrive at a detector on Earth.

With their model, they looked to see whether any gravitational-wave signals detected on Earth match their predicted patterns of dark matter imprints. To do so, they applied the model to publicly available data recorded by LVK over the observatories’ first three observing runs, during which the observatories picked up hundreds of gravitational-wave signals. The researchers focused on the clearest of these, comprising gravitational waves from 28 separate events. 

For each event, the team compared the pattern of the actual gravitational wave against their model of what the signal would look like if it were generated by the same event in an environment of dark matter. They also compared the gravitational wave to the more expected scenario in which the signal was produced in a vacuum. 
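The model-comparison step can be illustrated with a toy calculation. The sketch below uses hypothetical signals and a plain inner product; the actual LVK analysis works in the frequency domain with noise-weighted Bayesian inference. It shows how simulated data are scored against a vacuum template and a dephased "dark matter" template:

```python
import numpy as np

def match(data, template):
    """Normalized overlap between a data stream and a template.

    A toy stand-in for the noise-weighted inner product used in
    gravitational-wave analysis; real pipelines work in the frequency
    domain and weight by the detector's noise power spectral density.
    """
    num = np.dot(data, template)
    den = np.sqrt(np.dot(data, data) * np.dot(template, template))
    return num / den

# Hypothetical waveforms (NOT real templates): a vacuum-merger chirp, and a
# version with extra late-time dephasing standing in for a dark-matter model.
t = np.linspace(0, 1, 4096)
vacuum_model = np.sin(2 * np.pi * (30 * t + 40 * t**2))
dm_model = np.sin(2 * np.pi * (30 * t + 40 * t**2 + 3 * t**3))

# Pretend the observed data follow the dark-matter model plus detector noise.
rng = np.random.default_rng(0)
data = dm_model + 0.2 * rng.standard_normal(t.size)

# Score the data against both models; the higher match is "preferred."
preferred = ("dark matter" if match(data, dm_model) > match(data, vacuum_model)
             else "vacuum")
print(preferred)
```

In the real analysis this comparison is done statistically for each of the 28 events, yielding a quantitative preference rather than a binary verdict.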

Of the 28 clearest signals that they analyzed, 27 were solidly within the predictions for having been produced in a vacuum. However, the pattern of one event, GW190728, showed a “preference,” or an agreement with the team’s dark matter model. In other words, the signal may carry an imprint of dark matter. 

GW190728 is a gravitational wave named after the date it was detected: July 28, 2019. Scientists previously determined that it originated from a black hole binary with a total mass of about 20 times that of the sun. With their model, the team showed that such a system could have merged through a dense cloud of dark matter and produced a gravitational wave closely matching GW190728. 

“The statistical significance of this is not high enough to claim a detection of dark matter, and further checks should be performed by independent groups,” Aurrekoetxea says. “What we think is important to highlight is that without waveform models like ours, we could be detecting black hole mergers in dark matter environments, but systematically classifying them as having occurred in vacuum.”

“We now have the potential to discover dark matter around black holes as the LVK detectors keep collecting data in the coming years,” says co-author Soumen Roy, who led the data analysis part of the work. “It is an exciting time to search for new physics using gravitational waves.”

“Using black holes to look for dark matter would be fantastic,” adds co-author Rodrigo Vicente, who developed the analytical model of the signal. “We would be able to probe dark matter at scales much smaller than ever before.”

This work was supported, in part, by the U.S. National Science Foundation and MIT’s Center for Theoretical Physics — a Leinweber Institute.

Powerful shrinking technique could enable devices that compute with light

Tue, 05/12/2026 - 5:00am

Using a new technique that can create vacancies at any site across a material and then shrink it to about 1/2,000 of its original volume, MIT researchers have designed nanotechnology devices that could be used for optical computing and other applications involving the manipulation of visible light.

The new fabrication technique, known as “implosion carving,” allows researchers to imprint features throughout a hydrogel using photopatterning. If patterned with a resolution of about 800 nanometers, these features can then be shrunk to less than 100 nanometers. 

Because that resolution is smaller than the wavelength of light, the devices can bend light in specific ways that allow them to perform optical computations.

“In order to enable nanophotonic applications in visible light, we need to make nanostructures with feature sizes with a resolution less than 100 nanometers. Only in that way can we precisely create the structure that can manipulate visible light,” says Quansan Yang, a former MIT postdoc, now an assistant professor at the University of Washington, and one of the lead authors of the new study.

In their paper, the researchers demonstrated a photonic device that can perform a simple digit-classification task, but future versions could be used for high-speed imaging and information processing, they say.

Gaojie Yang, a former MIT postdoc, is the co-lead author of the paper, which appears today in Nature Photonics. The paper’s senior authors are Peter So, director of the MIT Laser Biomedical Research Center (LBCR) and an MIT professor of biological engineering and mechanical engineering, and Edward Boyden, the Y. Eva Tan Professor in Neurotechnology at MIT and a professor of biological engineering, media arts and sciences, and brain and cognitive sciences. Boyden is also a Howard Hughes Medical Institute investigator and a member of MIT’s McGovern Institute for Brain Research, the Yang Tan Collective, and the Koch Institute for Integrative Cancer Research.

Nanoscale feature sizes

Photonic devices, which transmit and manipulate light, hold potential for use as optical computer chips that could offer an energy-efficient alternative to semiconductor chips. However, existing techniques for creating 3D photonic devices haven’t yet achieved the 100-nanometer resolution that is needed to channel visible light, which has wavelengths between 380 and 750 nanometers.

Using an additive manufacturing technique called two-photon lithography, researchers can use light to create 3D nanoscale features, but only at resolutions larger than 100 nanometers. Another technique, electron-beam lithography, can etch finer features onto a silicon chip, but it doesn’t generate 3D structures. 

To make 3D devices with the necessary feature size, the researchers extended the concept of “implosion fabrication,” which Boyden’s lab developed in 2018, to create a new variant called “implosion carving.” In implosion carving, a laser creates vacancies — tiny voids where the hydrogel material has been removed — at precisely targeted locations. These vacancies exhibit different optical properties than the surrounding hydrogel. The hydrogel is then shrunk to bring the patterned features down to the nanoscale.

The carving process begins with immersing the hydrogel in a photosensitizing dye. Then, the researchers use a laser to excite the photosensitizer at specific places in the gel, which in turn generates reactive oxygen species that cut the bonds holding the hydrogel together. This creates a vacancy in that spot.

Once the desired vacancy pattern has been carved into the hydrogel, the researchers shrink it using a two-step process. First, they soak it in a solution containing ions, which causes it to shrink about tenfold in each dimension. To shrink it a little more, and to remove the watery solution, the hydrogel then undergoes a process called supercritical drying, which can remove liquid from a gel without damaging it.

At the end of the process, the hydrogel has been shrunk more than tenfold in each dimension, leading to a 2,000-fold reduction in volume. 
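The quoted shrink factors compose multiplicatively, which this back-of-the-envelope sketch checks (the per-step numbers are approximate, taken from the description above):

```python
# Back-of-the-envelope check of the shrink factors quoted in the article
# (illustrative arithmetic only; the exact per-step factors are approximate).

volume_reduction_total = 2000          # ~2,000-fold reduction in volume
linear_shrink_total = volume_reduction_total ** (1 / 3)
print(f"overall linear shrink: {linear_shrink_total:.1f}x per dimension")  # ~12.6x

ionic_linear = 10                      # ionic soak: ~tenfold in each dimension
drying_linear = linear_shrink_total / ionic_linear
print(f"implied drying shrink: {drying_linear:.2f}x per dimension")        # ~1.26x

patterned_feature_nm = 800             # photopatterning resolution before shrink
final_feature_nm = patterned_feature_nm / linear_shrink_total
print(f"final feature size: {final_feature_nm:.0f} nm")                   # < 100 nm
```

The numbers are self-consistent: a tenfold linear shrink from the ionic step accounts for a 1,000-fold volume reduction, and the modest extra shrinkage from supercritical drying brings the total to roughly 2,000-fold, landing 800-nanometer features below 100 nanometers.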

Computing with light

To demonstrate the versatility of this technique, the researchers used it to create several 3D shapes, including a helix and a structure inspired by a butterfly wing. Some of these structures are too thin, and have too high an aspect ratio, to be stably created using conventional two-photon lithography.

The researchers also created a device that performs a simple task known as digit classification, which is traditionally used to test the performance of neural networks. The device was presented with a digit, such as a 1 or a 5, and had to light up a specific location to indicate which number it detected.

To achieve this, the researchers patterned vacancies throughout the device so that it would act like a neural network. The pattern of vacancies would diffract input light as it passed through many layers of patterned hydrogel, so that the output light was determined by the shape of the digit that was entered into the system.
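A minimal numerical sketch of this idea follows: light passes through a stack of masks, diffracting between layers via angular-spectrum propagation. The masks here are random placeholders and all parameters are hypothetical, not from the paper; in the actual device, the vacancy pattern is optimized so output light concentrates at the detector spot matching the input digit.

```python
import numpy as np

def propagate(field, distance, wavelength, pixel):
    """Angular-spectrum free-space propagation of a 2-D complex field."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pixel)                      # spatial frequencies
    fx2 = fx[:, None] ** 2 + fx[None, :] ** 2
    kz = 2 * np.pi * np.sqrt(np.maximum(0, 1 / wavelength**2 - fx2))
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * distance))

# Toy diffractive stack: random fixed phase masks stand in for the trained
# vacancy patterns (all dimensions below are hypothetical).
rng = np.random.default_rng(1)
n, pixel, wavelength, gap = 64, 1e-6, 633e-9, 50e-6
masks = [np.exp(1j * 2 * np.pi * rng.random((n, n))) for _ in range(3)]

field = np.zeros((n, n), complex)
field[24:40, 24:40] = 1.0              # input "digit" illumination pattern
for mask in masks:                     # each layer: modulate, then diffract
    field = propagate(field * mask, gap, wavelength, pixel)

intensity = np.abs(field) ** 2
# Phase-only masks and unitary propagation conserve total optical power while
# redistributing it across the output plane; training would steer that power
# toward a class-specific detector region.
print(f"total output power: {intensity.sum():.3f}")
```

The design problem the researchers describe is choosing the mask values (here random) over millions of locations, which is why they turn to deep-learning-style optimization.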

“This is a purely optical system that effectively performs optical computing,” So says. 

“One of the very attractive features of this technology is that you can manipulate the property of the material at every tiny location,” says Dushan Wadduwage, an assistant professor at Old Dominion University and former MIT postdoc, who is also an author of the paper. “You have millions of different locations that you need to decide the property of, and that turns into a really interesting design problem where we can use deep-learning algorithms to find designs over these millions of parameters and come up with parts that go into optical systems in new ways.”

The researchers now plan to use the same principles to build optical devices that could classify cells based on their state as they flow through a microfluidic device. This could help identify rare cells such as circulating tumor cells in a blood sample, they say. 

This approach could also enable the creation of high-throughput imaging techniques for applications such as analyzing tissue samples from biopsies or surgical specimens. And, if adapted to work with other materials such as hydrophobic polymers, it could also be used to create channels within 3D nanofluidic devices. 

Other authors of the paper include Gaojie Yang, Takahiro Nambara, Hiroyuki Kusaka, Yuichiro Kunai, Alex Matlock, Corban Swain, Brett Pryor, Yannick Salamin, Daniel Oran, Hasindu Kariyawasam, Ramith Hettiarachchi, and Marin Soljacic. 

The research was funded, in part, by the MIT-Fujikura Partnership Fund, the U.S. Army Research Office through the Institute for Soldier Nanotechnologies at MIT, Lisa Yang and Y. Eva Tan, John Doerr, the Open Philanthropy Project, the Howard Hughes Medical Institute, and the U.S. National Institutes of Health.

Improving the reliability of circuits for quantum computers

Tue, 05/12/2026 - 5:00am

Quantum computers could someday solve pressing problems that are too convoluted for classical computers, such as modeling complex molecular interactions to streamline drug discovery and materials development. 

But to build a superconducting quantum computer that is large and resilient enough for real-world applications, scientists must precisely engineer thousands of quantum circuits so they perform operations with the lowest possible error rate.

To help scientists design more predictable circuits, researchers from MIT and Lincoln Laboratory developed a technique to measure a property that can unexpectedly cause a superconducting quantum circuit to deviate from its expected behavior. Their analysis revealed the source of these distortions, known as second-order harmonic corrections, which can lead to underperforming circuit architectures.

The MIT researchers fabricated a device to detect second-order harmonic corrections, identify their origin, and precisely measure their strength. This technique could help scientists deliberately design quantum circuits that can counteract the effects of these deviations.

This is especially important in larger and more complicated quantum circuits, where the negative impact of second-order harmonic corrections can be amplified. 

“As we make our quantum computers bigger and we want to have more precise control over the parameters of these devices, identifying and measuring these effects is going to be important for us to have a precise understanding of how these systems are constructed. It is always important to keep diving down into the circuit to see if there is an effect you didn’t expect, which impacts how your device is performing,” says Max Hays, a research scientist in the Engineering Quantum Systems (EQuS) group of the Research Laboratory of Electronics (RLE) and co-lead author of a paper on this research.

Hays is joined on the paper by co-lead author Junghyun Kim, an electrical engineering and computer science (EECS) graduate student in the EQuS group; senior author William D. Oliver, the Henry Ellis Warren (1894) Professor of EECS and professor of physics, leader of the EQuS group, director of the Center for Quantum Engineering, and associate director of RLE; as well as others at MIT and Lincoln Laboratory. The research appears today in Nature Physics.

A pair-wise problem

In a quantum computer built from superconducting circuits, one of several candidate computing platforms, Josephson junctions are critical elements that enable the transfer and manipulation of information. These devices consist of two superconducting wires brought very close together, with a nanometer-scale barrier between them. As in a traditional circuit, the electric charge in Josephson junctions is carried by electrons. 

But in a superconducting circuit, charge-carrying electrons pair up, forming what are called Cooper pairs. These Cooper pairs can “quantum tunnel” through the barrier between the two wires, transporting current from one wire to the other.

Cooper pairs usually tunnel only one at a time, a key property that makes quantum computation possible. 

“If you try to force more Cooper pairs through, it just doesn’t work. This non-linear effect is extremely important for all our circuits. If we didn’t have that effect, then we wouldn’t be able to control or manipulate any quantum information that we store in these circuits,” Hays explains.

But sometimes, Cooper pairs can unexpectedly squeeze through the barrier two at a time, an effect that is known as a second-order harmonic correction. This effect limits the performance of a quantum circuit that has been configured to only allow single-pair tunneling.
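The two tunneling processes can be summarized in the junction's current-phase relation, a textbook form not stated in the article: single-pair tunneling gives the ideal $\sin\varphi$ supercurrent, and simultaneous two-pair tunneling adds a second harmonic,

```latex
I(\varphi) = I_{c1}\sin\varphi + I_{c2}\sin 2\varphi ,
```

where $\varphi$ is the superconducting phase difference across the junction and the small $\sin 2\varphi$ amplitude $I_{c2}$ corresponds to the second-order harmonic correction the team set out to measure.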

“If two Cooper pairs tunnel at the same time, then the assumption we used to build our circuit doesn’t apply anymore. We need to fix the circuit so it can handle that,” Kim says.

But before they can fix the circuit, scientists need to know the source and strength of these distortions.

To obtain this information, the MIT researchers fabricated a quantum circuit so it would be very sensitive to these effects. Essentially, the device is designed to suppress the quantum tunneling process of single Cooper pairs, while allowing the two-pair tunneling process to continue. 

In this way, they can detect the presence of second-order harmonic corrections and precisely measure their strength. 

Straight to the source

They can also use this circuit to pinpoint the source of these harmonics, which helps researchers identify the best way to correct for them. 

There are two potential sources of second-order harmonics — one source is intrinsic to the dynamics of the Josephson junction and the other is caused by the wires connecting the junction to other circuit elements. 

While prior research had indicated the second-order harmonics could be due to the dynamics of the junction, the MIT researchers found that additional inductance — the tendency to oppose changes in the flow of electric current — from wires in the circuit was the actual source in their devices. 

“This is important because, if we know where the second-order harmonic correction is coming from, we can predict how strong it is likely to be, and use that information to engineer more predictable circuits that will hopefully perform better,” Hays says.

In the future, the researchers want to design experiments that more accurately predict how a device will perform when second-order harmonic corrections occur. They also want to study other sources of second-order harmonic corrections and whether those sources could have negative impacts on a circuit under different fabrication conditions.

This work is funded, in part, by the U.S. Department of Energy, the U.S. Co-design Center for Quantum Advantage, the U.S. Air Force, the Korea Foundation for Advanced Studies, and the Intelligence Community Postdoctoral Research Fellowship Program at MIT. 

For most US drivers, EVs offer emissions benefits and cost savings

Tue, 05/12/2026 - 12:00am

Despite regional variability in climate, electricity sources, and congestion, and the wide variation in individual driving patterns, electric vehicles generate lower greenhouse gas emissions and cost no more than comparable gas-powered vehicles for drivers and vehicle fleet owners in most parts of the United States, according to a new study by MIT researchers.

The team’s approach captures many key factors that contribute to regional and individual differences in the life-cycle emissions and ownership cost of electric vehicles, including meteorological data, the distance and duration of trips, and fuel prices.

To paint a fuller picture of emissions and costs than was previously available, the researchers sourced data from thousands of U.S. zip codes and drilled down to the level of individual drivers within those locations. Their study considers time-averaged fuel prices so as not to be overly influenced by price fluctuations at any one point in time. They finalized their analysis in late 2024 and early 2025.

Their results indicate that a person’s driving behaviors can matter as much as regional factors like the local electricity mix when it comes to the emissions savings of an electric vehicle, compared to a similar gas-powered vehicle. In most locations, a battery-electric vehicle reduces emissions between 40 and 60 percent, with larger impacts in urban areas. 
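As a sanity check on how a 40 to 60 percent reduction can arise, here is a per-mile comparison with round placeholder numbers (these are illustrative values, not the study's data, and vehicle manufacturing emissions are omitted for simplicity):

```python
# Illustrative per-mile emissions comparison (round placeholder numbers,
# NOT the study's data) showing how a 40-60 percent reduction can arise.

# Gas car: grams CO2 per mile from burning gasoline, plus upstream fuel supply.
mpg = 30                               # assumed fuel economy
g_co2_per_gallon = 8900 * 1.2          # tailpipe CO2 plus ~20% upstream supply
ice_g_per_mile = g_co2_per_gallon / mpg

# EV: grid emissions per kWh times energy use per mile, plus charging losses.
kwh_per_mile = 0.30                    # assumed EV efficiency
grid_g_per_kwh = 450                   # assumed regional grid intensity
ev_g_per_mile = kwh_per_mile * grid_g_per_kwh / 0.9   # ~10% charging loss

reduction = 1 - ev_g_per_mile / ice_g_per_mile
print(f"EV: {ev_g_per_mile:.0f} g/mi, gas: {ice_g_per_mile:.0f} g/mi, "
      f"reduction: {reduction:.0%}")
```

Swapping in a cleaner or dirtier grid, a larger vehicle, or different driving patterns moves the result around, which is exactly the regional and individual variation the study quantifies.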

They also found that colder climates do not reduce overall emission benefits as much as some media reports assume.

The researchers utilized this detailed analysis to update a public tool they previously developed, carboncounter.com, which enables individuals to compare the life-cycle emissions and total ownership costs of nearly any car on the market. A new version of carboncounter.com is also being released today.

“There are a lot of statements being thrown around, like that electric vehicles don’t reduce emissions very much in cool climates, and we wanted to analyze these factors systematically and evaluate these statements against one another simultaneously. Rather than simply asking, ‘Are EVs better?’, this paper helps answer ‘better for whom, and under what conditions?’” says Marco Miotti PhD ’20, a senior researcher at ETH Zurich who completed this research while a graduate student in the Institute for Data, Systems, and Society (IDSS) at MIT. 

He is joined on the paper by senior author Jessika Trancik, a professor in IDSS. The research appears today in Environmental Research Letters.

A holistic approach

Many prior studies comparing the emissions and costs of electric vehicles (EVs) to those of combustion-engine vehicles cover only a few factors, such as the amount of renewable energy in the grid or how gas prices affect affordability, Miotti says.

“To our knowledge, there have been few efforts so far that bring all these factors together. But if someone wants to buy a car and have a better understanding of the factors that affect emissions and costs, this holistic approach is important,” he adds.

The researchers focused on two types of EVs: battery-electric vehicles, which only operate on electricity, and plug-in hybrid electric vehicles, which also have a combustion engine that works in tandem with the battery to optimize fuel savings.

The team expanded and improved a set of previously developed vehicle cost and emissions models to incorporate a wider variety of factors and data types.

For instance, they refined an existing model that estimates energy use and gas mileage so it could capture more nuances of local climate variability. 

“But the real effort was not just in extending these different models, but in bringing together all these different data and making them work with the models in a consistent manner,” Miotti says.

The team sourced data on a wide variety of factors for each U.S. zip code, such as typical drive cycles, the amount of traffic, local gas and electricity prices, makeup of the regional electricity mix, meteorological profiles, and more. They used statistical approaches to amalgamate different types of data. 

For example, the team used a probabilistic matching technique to combine data on how often people drive, which was drawn from nationwide travel surveys, with more detailed GPS data that includes factors like drivers’ acceleration patterns and the distance they usually drive on each day of the week.

The researchers designed their analysis to focus on the spatial picture of emissions and costs, based on U.S. zip codes, while simultaneously considering the impact of the size and features of each specific vehicle model.

“At the end of the day, it’s the vehicle and fleet owners who make decisions about vehicle purchases. So, we wanted to make sure to consider their wide-ranging individual perspectives rather than simply performing a region-by-region comparison,” says Trancik.

Lower emissions, comparable costs

In the end, their modeling framework revealed that all factors they analyzed matter about equally in determining emissions-reduction potential of EVs compared to internal combustion vehicles. 

EVs reduce emissions the most in areas with a cleaner electricity mix, denser traffic, higher annual travel distances, and a mild climate, in decreasing order of importance. In each area, emission reductions increase for drivers who drive more often, drive larger vehicles, and are more frequently stuck in traffic. 

In a colder area like North Dakota, fuel economy of battery-electric vehicles might be reduced by as much as 50 percent on a particularly frigid night, but the effect on annual emission benefits is minimal. 
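The arithmetic behind that finding is simple: a large efficiency penalty matters little if only a small share of annual miles is driven in extreme cold. A sketch with illustrative shares (not the study's data):

```python
# Why extreme-cold efficiency losses barely move annual totals: weight each
# penalty by the share of annual miles driven in those conditions
# (illustrative shares and penalties, NOT the study's data).

base_kwh_per_mile = 0.30
conditions = {                     # share of annual miles -> energy penalty
    "mild":         (0.80, 1.00),  # no penalty
    "cold":         (0.17, 1.25),  # 25% more energy per mile
    "frigid night": (0.03, 2.00),  # 50% range loss = 2x energy per mile
}

annual_kwh_per_mile = sum(
    share * base_kwh_per_mile * penalty
    for share, penalty in conditions.values()
)
increase = annual_kwh_per_mile / base_kwh_per_mile - 1
print(f"annual energy use rises only {increase:.1%} "
      "despite doubled use on frigid nights")
```

Even doubling per-mile energy use on the coldest 3 percent of miles raises the annual average by well under 10 percent, which is why the annual emission benefit barely moves.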

“We even did a sensitivity study to see if the range is reduced in very cold climates, and we found that, even in the most unfavorable conditions, EVs still reduce emissions by a substantial amount,” Miotti says.

On the cost side, the models show that, in most places across the U.S., EVs are competitive with comparable combustion-engine vehicles in terms of lifetime ownership cost, even without clean vehicle tax credits. And in areas where electricity is relatively affordable, battery-electric vehicles tend to cost less than their plug-in hybrid or combustion-engine counterparts.

In the future, the researchers want to expand this analysis to include a temporal dimension, so the framework also considers how changes in vehicle, fuel, and electricity prices affect emissions and costs over time. 

“While we found that the electricity mix is a big driver of the spatial variation in emissions savings of EVs, the electricity grid is decarbonizing everywhere. As that happens, emissions savings across space will become more homogeneous for EVs, but the differences across one driver to another will remain,” Miotti says.

They could also use the framework to explore regions outside the United States or incorporate data on hybrid-electric vehicles that cannot be plugged in.

This work was funded, in part, by the MIT Martin Family Society of Fellows for Sustainability.

Solving hard problems in soft electronics

Tue, 05/12/2026 - 12:00am

A crepe cake.

That’s how Camille Cunin describes the polymer-metal “sandwiches” that became a highlight of her doctoral thesis at MIT’s Department of Materials Science and Engineering (DMSE). Over close to five years, these composites were a key component of her research on bioelectronics — devices designed to interface with the human body.

Cunin completed her PhD in February — she’ll attend commencement in May — but traces her interest in bioelectronics to a formative summer internship at Massachusetts General Hospital (MGH) in Boston in 2019. There, she saw a patient with Parkinson’s disease struggle to swallow a tethered “capsule” intended to function as an exploratory gut probe. The device failed, and the gap between lab-based design and real life became all too apparent.

The incident validated the career path Cunin had already begun to pursue: to make usable products that have a positive impact on people’s lives. It’s a purpose that hasn’t gone unnoticed. “Some might be happy with a sketch of a concept and no actual demonstration, but Camille has a remarkable ability in that she wants to do materials science that can translate to real-world applications,” says her advisor, Aristide Gumyusenge.

Building blocks

The daughter of a psychologist and an engineer, Cunin grew up in Paris, encouraged by her parents to be curious about the world around her. LEGO blocks featured prominently in her childhood. When her father found some old lights in a box in the attic, 9-year-old Camille strung them to decorate her LEGO castle by creating a circuit, complete with a fuse.

Strong grades earned her a spot in France’s elite post-secondary preparatory classes for admission to the country’s prestigious grandes écoles. The intensive and competitive prep classes, however, left Cunin with a sour aftertaste — “for a while I hated science, because the environment was too competitive for me,” she says — and a bit rudderless in engineering school.

It was the research internship thousands of miles from home, at MGH — part of her master’s in engineering at École Centrale de Marseille in France — that rebooted her love of science. The open-ended nature of research appealed to her curiosity and helped her regain confidence in solving problems. She was delighted to be accepted at MIT DMSE for her doctoral studies. “In Boston, I thrived in collaborative environments, and it felt like anything was possible,” she says.

Stretching possibilities

Before starting at MIT, Cunin had a wealth of interdisciplinary experience, from internships and her graduate studies. Unsure about how to slot it all together, she was looking for an advisor at a time when Gumyusenge, Henry L. Doherty Career Development Professor in Ocean Utilization and assistant professor of materials science and engineering, was himself just establishing his lab at DMSE.

When Gumyusenge shared plans to work on projects to turn biological signals into electronic data, Cunin was excited to build on her prior research in biomedical devices. “Here was a chance to fine-tune the materials and to optimize the performance of bioelectronic devices. I really felt I could leverage my strengths in Aristide’s lab,” she remembers.

Gumyusenge proved a great fit, supporting Cunin’s broad research ambitions while helping her shape and integrate them into a coherent doctoral project. She tackled everything from developing and characterizing new materials to fabricating transistors and learning surgery to test the devices in animal models. The final dissertation focused on organic transistors, which boost body signals for easier detection in soft electronics.

Biological signals, like those from nerves in the body, are weak, and transistors amplify them so they can be measured. The challenge with developing bioelectronic devices is that traditional components are hard and rigid, while the human body is not. Devices must perform as needed and be soft and flexible to avoid irritating human tissue.

Another complication: Biological processes involve charged ions moving through fluids, while electronics rely on electrons moving through materials. Before transistors can amplify signals, they first have to convert biological signals into electronic ones for circuits to pick up.

Cunin’s transistor design needed to solve two major challenges: first, to facilitate the movement of electrons and ions in the “channel,” the hub of all signal activity, in soft, hydrated environments; and second, to be pliable enough to conform to the human body.

It was no easy task.

Elegant simplicity

Gumyusenge’s lab typically uses chemistry to modify material behavior, but Cunin took a different tack. Since polymers are soft, and metals are good conductors, she looked to the classic French pastry mille-feuille, which inspired the layered design: thin metal sheets sandwiched between layers of porous elastomer. The metal stretches with the elastomer and forms microcracks. Charges get trapped in the cracks but can still flow through the stack, while the elastomer’s strong adhesion keeps the layers together.

Her approach won Cunin high marks from her advisor. “Camille was working on a complex problem, but she found a way to simplify it with a straightforward approach,” Gumyusenge says.

Of course, even an elegant solution needs test drives. “The more crystalline the polymers are, the better the charges percolate and travel in the material,” Cunin points out, referring to how ordered the semiconducting polymers in the transistor channel are. But if they’re packed too tightly, ions don’t move freely, and the transistor channel can’t switch properly. The arrangement of the spaghetti-like polymer chains controls this balance, so Cunin studied the composites’ structure to optimize both ionic and electronic performance.

Professor Polina Anikeeva, who co-advised Cunin with Gumyusenge and calls her “unstoppable,” says her innovation in the lab was remarkable — but not surprising.

“She didn’t have to be pushed into trying something new,” says Anikeeva, head of DMSE. “I would have higher and higher expectations, and she would consistently meet those higher and higher expectations.”

That drive continues in industry. Cunin now works at the Cambridge-based neurotechnology startup Axoft — just minutes from her former lab at MIT — researching soft electrodes that can be implanted in the brain. The electrodes detect electrical signals that can shed light on the brain’s many functions. “By understanding the brain better, we can eventually develop therapies and treatments that improve patient outcomes,” Cunin says.

Creative outlets

During her time at MIT, Cunin also made time for activities outside the lab, driven by the same curiosity that fueled her research. Committed to sharing her love of materials science and engineering, she was a leading member of the Polymer Graduate Student Association and organized several editions of MIT Polymer Day, a one-day symposium connecting students, faculty, and industry to showcase cutting-edge polymer research.

She also pursued creative outlets. After learning to use 3D graphics software Blender, Cunin illustrated some of the journal covers featuring her work.

She is also a diehard salsa fan and teaches the dance style a couple of times a week. Salsa’s social and collaborative forms appeal to Cunin, who enjoys sharing her passion, experimenting with choreography, and helping fellow dancers improve. “Salsa is fast — I love the mental challenge it brings. I also like that it exposes you to different aspects of the community; it pushes you out of your bubble,” she says.

Gumyusenge appreciates that Cunin made time for other pursuits throughout the grueling demands of a doctoral degree. “She’d work 14 hours a day in the lab, but also go do some hiking and take a break. I love that — it’s something that other PhD students seem to forget sometimes,” he says.

That balance reflects her determination and resolve. “Camille has never been shy about facing challenging research problems,” he says. “She had a research vision and was dedicated to learning the lessons she needed to get it all done. I learned to not get in her way because when Camille told you she would learn how to do something, she would.”

Mapping the ocean with autonomous sensors

Fri, 05/08/2026 - 12:00am

In late October 2025, Tropical Storm Melissa moved through the Caribbean Sea with moderate winds that didn’t get much attention. But on Oct. 25, aided by a patch of warm ocean, the storm rapidly intensified. By the time it made landfall in Jamaica, it was one of the strongest Atlantic hurricanes on record, uprooting trees, tearing the roofs from buildings, and causing catastrophic flooding and power outages.

Ravi Pappu SM ’95, PhD ’01 blames the surprise on our inability to gather high-quality ocean data.

“The storm intensified because of a small pool of hot water in the Caribbean Ocean that fed it energy,” Pappu explains. “These pools are everywhere. They can be hundreds of kilometers wide and are literally invisible to us. If we knew about that pool, we could say very precisely how the hurricane would intensify and better deal with it.”

Pappu thinks he has a way to solve that problem. He is the founder of Apeiron Labs, a company deploying low-cost autonomous ocean sensors to capture more data, in more places, and at a lower cost than is possible today. The company’s devices roam the ocean up to a quarter mile below the surface and continuously gather data on temperature, acoustics, salinity, and more, providing a real-time look at one of the planet’s last great mysteries. He says the sensors can do for the ocean what small, modular CubeSat satellites did for Earth observation from space.

When the devices are ready to be recharged, trackers make it easy to scoop them from the ocean surface. Pappu envisions the recovery process being done by autonomous boats in the future.

“Humanity needs ocean measurements, and we need them at a scale that has never been attempted before,” Pappu says. “It’s a massively hard problem. In the last century, oceanographers resigned themselves to calling it the century of undersampling. If we are successful, we will have a much more fine-grained understanding of our oceans and how they impact humans. That’s what drives us.”

Homework

Pappu came to MIT after completing a 10-year homework assignment. It started when he was a child in India in the 1980s, when he saw a hologram on the cover of National Geographic for the first time.

“I was so taken by it that I decided I needed to learn how to make those three-dimensional images,” Pappu recalls. “I learned what I could by reading books and papers. I didn’t know who invented the hologram until I read a book about MIT’s Media Lab. The book named the person who invented the rainbow hologram, so I wrote him a letter. I didn’t know his address, so I just wrote on the envelope, ‘Steve Benton, holography researcher, MIT, USA.’”

To Pappu’s surprise, the letter reached Benton, and the former Media Lab professor even wrote back with some further topics he needed to learn about.

Pappu never forgot that. He earned a bachelor’s degree in electrical engineering in India, then earned his master’s degree at Villanova University, taking all the optics classes he could.

“Eventually, about 10 years after I saw my first hologram, I wrote to Steve and I said, ‘I did all these things you asked me, now I want to study with you,’” Pappu says. “That’s how I got into MIT.”

Pappu studied under Benton for the next three years. He also studied under Professor Neil Gershenfeld as part of his PhD. Following graduation, Pappu and four classmates started ThingMagic, a consulting company that eventually sold RFID readers. ThingMagic was acquired in 2010. Pappu returned to MIT for two years as a visiting scientist around the time of the acquisition.

Following that experience, Pappu worked at In-Q-Tel, an organization that invested in ThingMagic and other companies with potential to advance national security. It was there that Pappu realized how badly the world needed large-scale, inexpensive ocean sensing.

“All of the ocean sensing up to that point, and even today, was about making a really expensive thing that costs $20 million, goes to the bottom of the ocean, and stays there for five years,” Pappu says. “We needed things that are cheap and scalable to deploy wherever you need them for as long as you want.”

Pappu officially founded Apeiron Labs in 2022.

“What we’re focused on is figuring out how the ocean works,” Pappu says. “How warm is it? What is the pH? How salty is it? These things vary from place to place every 10 kilometers or so. They vary over time, and they vary by season. If we knew the details of the ocean with the same fidelity we have for the atmosphere, we would be able to tell exactly when and where hurricanes hit. It would mean less uncertainty.”

Apeiron’s ocean-sensing devices are each 3 feet long and weigh about 20 pounds. They’re designed to be dropped off a boat or plane with biodegradable parachutes and stay in the ocean for six months. Each device continuously sends data to the cloud, is controllable through a cloud-based ocean operating system, and is accessible on a mobile phone.

“We lower the carbon footprint and cost of gathering ocean data because everything else needs a diesel ship — and a fully crewed ship costs $100,000 a day,” Pappu says. “By the time you collect the first data in the old model, you’ve already committed a lot of money, in addition to millions of dollars for the sensors.”

The company’s devices currently carry two types of sensors: one that measures salinity, temperature, and depth, and another that uses a hydrophone to passively listen for things like submarines and whales.

The hydrophone could be used to detect the low-frequency calls and clicks of endangered whales and other marine species. Currently, fishermen must look for whales manually, with spotters on ships or planes. The data could also be used to improve weather forecasts, monitor noise from offshore energy projects, and track currents.

“Currents are determined by temperature and salinity, so if there’s an oil spill, our data could help determine where that spill is going,” Pappu says. “Or if you’re a fisherman, knowing where the water changes from warm to cold, which is where the fish hang out, is very useful.”

An ocean of possibilities

Apeiron Labs has worked with government defense agencies including the U.S. Navy over the last two years. The company has also tested its devices off the coast of California and in Boston Harbor.

“The most important thing is, when we show people our approach and what we’ve demonstrated so far, they are no longer asking, ‘Can it be done?’ they’re asking, ‘What can we do with it?’” Pappu says. “Our customers have spent decades working in the ocean and they understand how novel these capabilities are.”

Of all the possibilities, improved storm forecasting could be the one Pappu is most excited about.

“Our mission is to lower the barriers to ocean data,” Pappu says. “The ocean is a huge determinant of weather, climate, and short-term forecasting. Despite our best efforts to predict the intensity of storms, sudden changes are still the norm, and much of that comes down to a lack of understanding of our oceans. If we were monitoring these things over long periods of time and finer spatial scales, we could see these storms coming much earlier with more certainty.”

MIT student Jack Carson named 2026 Udall Scholar

Thu, 05/07/2026 - 3:50pm

Jack Carson, a second-year undergraduate at MIT majoring in electrical engineering and computer science, has been named a 2026 Udall Scholar, one of up to 65 undergraduates nationally to receive the prestigious $7,500 award. 

The Udall Scholarship honors students who have demonstrated a commitment to the environment, Indigenous health care, or tribal public policy. Carson is only the third MIT student to win this award, and the first to win for tribal policy.

Carson, a member of the Cherokee Nation and resident of Oklahoma, exemplifies the multidisciplinary approach to problem-solving that the Udall Scholarship seeks to honor. His work spans artificial intelligence, biomedical research, Indigenous community development, and ethics.

"Jack is the type of leader the Udall Foundation exists to support," says Kim Benard, associate dean for distinguished fellowships. "He's not only conducting cutting-edge research, but he's actively creating opportunities for Indigenous students to enter tech fields."

At MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), Carson works in the Barzilay Lab, developing multiomics models for personalized therapeutic target identification. His work on deep learning and statistical physics has resulted in a sole-author paper published at the International Conference on Machine Learning (ICML).

Carson founded Code.Tulsa, a summer technology program designed to introduce Indigenous high school students to computer science and tech careers. The initiative addresses a significant gap: Indigenous communities remain highly underrepresented in technology fields, despite the potential for tech to advance tribal sovereignty and economic development.

This year, Carson won the Elie Wiesel Prize in Ethics Essay Contest. He is an accomplished musician who has performed at Carnegie Hall and with the National Opera, a motorcycle racer, and a self-described philosopher deeply committed to questions of justice and responsibility.

MIT School of Engineering faculty receive awards in winter 2026

Thu, 05/07/2026 - 12:40pm

Each year, faculty and researchers across the MIT School of Engineering are recognized with prestigious awards for their contributions to research, technology, society, and education. To celebrate these achievements, the school periodically highlights select honors received by members of its departments, institutes, labs, and centers. The following individuals were recognized in winter 2026:

Arup K. Chakraborty, the John M. Deutch (1961) Institute Professor in the departments of Chemical Engineering, Chemistry, and Physics, and the founding director of the Institute for Medical Engineering and Science, as well as James J. Collins, the Termeer Professor of Medical Engineering and Science in the Department of Biological Engineering and IMES, were named 2026 laureates of the Tel Aviv University International Prize in Biophysics. The prize recognizes outstanding scientists whose work has significantly advanced the understanding of biological systems through physical principles.

Anantha Chandrakasan, MIT provost and the Vannevar Bush Professor in the Department of Electrical Engineering and Computer Science, received the 2025 IEEE Journal of Solid-State Circuits Test of Time Award. The award recognizes an outstanding paper published in the IEEE Journal of Solid-State Circuits at least 10 years prior that has had significant impact on its field.

Charles Harvey, a professor in the Department of Civil and Environmental Engineering; Piotr Indyk, the Thomas D. and Virginia W. Cabot Professor in the Department of Electrical Engineering and Computer Science; John Henry Lienhard, the Abdul Latif Jameel Professor of Water and Mechanical Engineering in the Department of Mechanical Engineering; Frances Ross, the TDK Professor in Materials Science and Engineering; Zoltán Sandor Spakovszky, the T. Wilson (1953) Professor in Aeronautics; and Ram Sasisekharan, the Alfred H. Caspary Professor of Biological Physics and Physics in the Department of Biological Engineering, were elected to the National Academy of Engineering for 2026. One of the highest professional distinctions for engineers, membership in the NAE is given to individuals who have made outstanding contributions to “engineering research, practice, or education,” and to “the pioneering of new and developing fields of technology, making major advancements in traditional fields of engineering, or developing/implementing innovative approaches to engineering education.”

Michael Howland, the Jeffrey Cheah Career Development Professor and assistant professor in the Department of Civil and Environmental Engineering, received a 2026 Faculty Early Career Development (CAREER) Award from the National Science Foundation. The award supports early-career faculty who have the potential to serve as academic role models in research and education and to lead advances in the mission of their department or organization.

Yoon Kim, associate professor in the Department of Electrical Engineering and Computer Science; Anand Natarajan, an associate professor in the Department of Electrical Engineering and Computer Science; and Mengjia Yan, ITT Career Development Professor in Computer Technology and associate professor in the Department of Electrical Engineering and Computer Science, were named 2026 Sloan Research Fellows. Sloan Research Fellowships support fundamental research conducted by early-career scientists, and they are awarded annually to early-career researchers whose creativity, innovation, and research accomplishments make them stand out as the next generation of leaders.

Carlos Portela, the Robert N. Noyce Career Development Professor and associate professor in the Department of Mechanical Engineering, has received a 2026 Young Investigator Award from the Office of Naval Research. The Young Investigator Program seeks to identify and support academic scientists and engineers who are in their first or second full-time tenure-track or tenure-track-equivalent academic appointment, who have received their doctorate or equivalent degree in the past seven years, and who show exceptional promise for doing creative research.

Ellen Roche, the Abby Rockefeller Mauzé Professor and associate department head for research in the Department of Mechanical Engineering and a professor in the Institute for Medical Engineering and Science, received the 2026 Sony Women in Technology Award with Nature. The award recognizes exceptional early- to mid-career women researchers in technology who through their research are driving a positive impact on society and the planet.

Tess Smidt, an associate professor in the Department of EECS, was named co–principal investigator on a National Science Foundation (NSF) AI Research Institute award and also received a 2025 Department of Energy Office of Science Early Career Research Program Award. The NSF AI Materials Institute (AI-MI) aims to propel foundational AI research past the limitations of existing AI algorithms by pursuing materials discovery and conquering knowledge- and data-centric challenges. The DoE Early Career Research Program provides five-year awards to exceptional early career researchers at U.S. academic institutions, DoE National Laboratories, and Office of Science User Facilities to stimulate new research directions in mission critical areas supported by DoE’s Office of Science.

Antonio Torralba, the Delta Electronics Professor and faculty head of AI+D in the Department of EECS, was elected to the 2025 cohort of Association for Computing Machinery Fellows. ACM Fellows, the highest honor bestowed by the professional organization, are registered members of the society selected by their peers for outstanding accomplishments in computing and information technology and/or outstanding service to ACM and the larger computing community.

Harry L. Tuller, a professor in the Department of Materials Science and Engineering, received The Senior Scientist Award from the International Society for Solid State Ionics. The Senior Scientist Award, the most prestigious award of the International Society for Solid State Ionics, is presented to a senior solid-state ionics researcher who has made outstanding contributions to the science and engineering of solid-state ionics.

Vinod Vaikuntanathan, the Ford Foundation Professor of Engineering in the Department of Electrical Engineering and Computer Science, was named a 2026 fellow of the International Association for Cryptologic Research (IACR). The IACR established its Fellows Program to recognize outstanding IACR members for technical and professional contributions.

Celebrating dorm-to-market social entrepreneurship at MIT

Thu, 05/07/2026 - 11:20am

Over 200 students, alumni, faculty, staff, funders, and community collaborators gathered at the MIT Media Lab on April 15 for the 25th annual IDEAS Social Innovation Incubator Showcase and Awards, hosted by the Priscilla King Gray (PKG) Center for Social Impact.

Since its founding in 2001, the PKG Center’s IDEAS Incubator has launched hundreds of social ventures in over 60 countries, guiding MIT’s technical talent toward urgent social challenges — from energy and climate to health care, education, and economic development. 

“Global and local challenges are increasingly complex and interconnected,” said Lauren Tyger, assistant dean for social innovation at the PKG Center and director of IDEAS. “IDEAS educates technical founders in systems thinking and community-based innovation, helping students develop business models that achieve both measurable social outcomes and financial sustainability.” 

IDEAS alumni celebrated

The event celebrated the many successful social ventures launched by IDEAS alumni with a 25-Year Impact Report and a keynote speech from IDEAS alumnus Bill Thies ’01, ’02, MNG ’02, PhD ’09. 

Thies traced his tuberculosis medication adherence work in India from a low-cost electronic pillbox through multiple iterations that helped shift India’s treatment policies toward patient autonomy. Ultimately, his work led to Nikshay, a national electronic medical records platform now supporting 150 million people, which recently transitioned to full government control. 

“Innovations can open doors for much more important changes than the innovations themselves,” Thies said. Limitations to technical interventions surface important questions, such as “what policies do we want to change, to become more supportive and human-centered? And how can technology be a bridge to that new world we would envision?”

Thinking back on the influence of IDEAS on his own path, Thies reflected: “I always assumed that in IDEAS we were incubating projects. But what I’ve come to realize is that it’s actually the other way around: the projects are incubating us. We are the ones who will ultimately drive the change we hope to see in the world.”

Vision for scaling social entrepreneurship at MIT and catalytic gift announced  

Thies’ message was echoed by Chancellor for Academic Advancement Eric Grimson, who explained how IDEAS aligns with MIT’s strategic initiatives, including MIT’s Generative AI Impact Consortium (MGAIC), Health and Life Sciences Collaborative (MIT HEALS), and the Climate Project, as well as President Sally Kornbluth’s and Provost Anantha Chandrakasan’s recent call to accelerate entrepreneurship. “Many of the current presidential initiatives naturally include an opportunity for social entrepreneurship,” said Grimson, who applauded IDEAS alumni pursuing ventures in climate, health, and AI-powered social enterprises. 

The PKG Center’s director, Alison Badgett, shared the center’s vision for the future of IDEAS. “As MIT’s only student entrepreneurship program focused solely on social impact,” said Badgett, “we recognize the need to both scale social entrepreneurship programming at MIT and to better position our student founders for scale after graduating.” 

Badgett announced a first-in gift of $150,000 from the Morgridge Family Foundation to help realize the center’s vision. The foundation’s gift will enable the PKG Center to develop a robust social impact investor ecosystem at MIT, connecting student- and alumni-led ventures with potential funders and helping more aspiring entrepreneurs see social impact as a viable path. 

This year’s award-winning social ventures

This year’s top $20,000 award winner was Beyond Words, an assistive application for iPhone and Apple Watch that gives nonverbal individuals a layer of support by passively capturing biometrics, audio, and location, and communicating that information to caregivers. 

Other award winners were:

  • AyuConnect ($10,000) uses WhatsApp-native, voice-first electronic health records to enhance care access while reducing clinician burnout in India.
  • PEAR ($7,500) offers a hands-on STEM research program for Nigerian and other African students, equipping them with technical skills to solve community problems.
  • CommonGround ($5,000) connects Bostonians to tailored and hyper-local climate actions through an online platform, replacing eco-anxiety with collective resilience.
  • Sehat Screen ($5,000) is an AI-powered cervical cancer screening device for women in Afghanistan and other resource-constrained countries.
  • Breakthrough Health ($2,500) is a care coordination platform that links hepatitis C patients in recovery centers to health care.
  • Sero ($2,500) is a voice-first AI tool that helps rural borrowers in Nepal understand loan contracts and access fair credit in their own language, with no dependency on literacy.

During the event, Shane Kosinski, executive director of the Office of the Vice President for Energy and Climate, announced inaugural Climate Student Innovators awards, funded by the MIT Climate Project. Four IDEAS teams received this award, which will be presented annually.

“The MIT Climate Project is an all-of-MIT initiative with the ambitious goal to make a measurable difference on climate change within a decade. We reach this global impact not by top-down mandates, but by testing good ideas where they are needed most and supporting them to succeed,” explained Kosinski. “This vision is also hardwired into the character, history, and purpose of PKG IDEAS.”

This year’s IDEAS teams awarded by the MIT Climate Project were:

  • Q’ochas Resilientes ($15,000) co-designs climate-resilient water technology in the Peruvian Andes to uplift ancestral knowledge and support agricultural livelihoods.
  • NECTICA ($15,000) tackles urban flooding in Lagos by empowering women-led cooperatives with a low-tech sorter bin to separate and monetize composite waste.
  • MittiNav ($15,000) designs production and supply-chain systems to scale biochar technology that restores soil and stores carbon.
  • Resilient Grid ($5,000) collects and processes food waste through anaerobic digestion on skid platforms to produce biogas for electricity and heat in Caribbean island nations.

“The Climate Project is thrilled to present the first-ever Climate Student Innovators Awards to these teams,” said Vice President for Energy and Climate Evelyn Wang. “We applaud this year’s IDEAS winners for developing systemic interventions in partnership with affected communities.” 

Several additional teams received $1,000 awards: 

  • 1for1Health is a fertility platform offering education, testing, and insights to expand access and reduce disparities in reproductive health decisions.
  • Ceed CRM brings cutting-edge AI to mission-driven organizations that have been stuck with tools built for sales teams, not social impact.
  • CerviSeal created a medical device that reduces pain, tissue trauma, and risk during cervical manipulation for women undergoing hysteroscopy.
  • FoodLoop connects farms and restaurants through matchmaking, demand forecasting, and forward contracts to strengthen local food systems.
  • Homeroom Hero is an AI tool for teachers that instantly grades short-form assessments, reducing workload and improving student learning without putting tech in front of kids.
  • Gees Health is a noninvasive, at-home hormone monitor that helps women with polycystic ovary syndrome track and manage their health with continuous insights.
  • Illume makes discreet wearables that are a safe way for recovering victims of human trafficking to contact trusted people, building their support network.
  • Longevia is an AI-powered platform that translates complex medical data into personalized, actionable insights for chronic kidney disease patients.
  • Opta is an AI talent refinery upskilling Brazil's low-socioeconomic status students for small and medium business jobs, driving economic mobility.
  • Recover Hospitality scales recovery-informed wellness coaching for hospitality workers through AI-powered motivational interviewing and benefits navigation.

The event closed with Tyger thanking the vast network of alumni, mentors, funders, and campus partners who make IDEAS possible, and the 104 volunteers who supported this year’s incubator challenge. “IDEAS builds more than social enterprises — we’re building the infrastructure and community needed for alumni and their ventures to achieve long-lasting impact. Our vision is a future where MIT entrepreneurship is not only groundbreaking, but fundamentally grounded in social impact.” 

Rethinking how our brains use categories to make sense of the world

Thu, 05/07/2026 - 10:55am

In a new review article, “Categorization is Baked into the Brain,” cognitive scientists Earl K. Miller, Picower Professor of Neuroscience at MIT, and Lisa Feldman Barrett, university distinguished professor at Northeastern University, contend that categorization is part of a predictive process the brain uses to efficiently meet the body’s needs in a fast-paced, otherwise overwhelming sensory world. In that sense, their paper in Nature Reviews Neuroscience challenges decades of dogma about how and why the brain boils down what it sees, hears, smells, tastes, and feels.

Categories are groups of things that are similar enough to be considered functionally equivalent. When you walk through a neighborhood, you’ll naturally experience the furry, four-legged, barking animal ahead of you as a “dog.” In the classic view of cognition, your brain arrives at that categorization by soaking in lots of basic sensory features of the hound — its shape, its size, the sounds it makes, its behavior — and compares that to some prototype “dog” stored in your memory. Hundreds of milliseconds after the first sensory inputs, you can then decide what you might want to do about the dog.

Barrett and Miller argue that that’s wrong. Instead, they propose that your brain comes prepared for sensory patterns with predictions of the motor action plans that are most likely to achieve the needs and goals you bring to the moment. Those prediction signals can be described as a momentary category that the brain constructs to shape the processing of sensory signals. 

From the very start, incoming sensory signals are compressed and abstracted into that category to efficiently select the best predicted plan. If you are in an unfamiliar neighborhood, your brain might construct the category “dog” to avoid being bitten, resulting in: “Back away slowly while saying nice doggie.” If you are on your own block and encounter a familiar dog, your brain might construct a category to kneel and open up your arms to summon your neighbor’s adorable pup for some satisfying petting.

In either case, the category “dog” arises in the context of your needs and your prediction from a menu of learned action plans for similar situations, not from an intellectual exercise of neutrally regarding sensory inputs, comparing them to a fixed prototype, and then planning from there. If the brain really worked the classically believed way, you’d be on the back foot when the unfamiliar dog lunged at you.

“One of the main things your brain has to do is predict the world,” says Miller, a faculty member of The Picower Institute for Learning and Memory and the Department of Brain and Cognitive Sciences at MIT. “It takes several hundred milliseconds to process things, and meanwhile the world is moving on. Your brain has to anticipate things.”

The most pragmatic and efficient way to survive and thrive in such a world, Barrett says, is to have your needs and potential plans ready for the sensory situation. If your predictions are right, you’re prepared in time. If they are wrong, you adjust and learn from it.

“The stimulus, cognition, response model of the brain is wrong,” says Barrett, a faculty member in Northeastern’s Department of Psychology and co-director of the Interdisciplinary Affective Science Laboratory. “The brain prepares for a response and then perceives a stimulus. A brain is not reactive. It’s predictive. Action planning comes first. Perception comes second, as a function of the action plan.”

Anatomical and functional evidence

Throughout the review, Barrett and Miller ground the provocative proposal in copious anatomical, electrophysiological, and imaging evidence about the brain. They cite numerous experimental results that show how the brain is structured to broadcast memories to create motor plans that flow back toward signals that arrive from the body’s sensory surfaces, actively whittling them down and shaping them to give them meaning.

“The capacity to create similarities from differences — to abstract — is embedded in the architecture of the nervous system, and you can see that by looking at what is connected to what and by observing signal flow,” Barrett says.

For example, as circuits feed signals “forward” from sensory surfaces (such as the retina) to regions of the cerebral cortex that are focused on sensory processing (such as the visual cortex) toward the areas that are important for executive control (the prefrontal cortex) and control of the body (limbic cortex), information passes from many small, barely connected neurons to fewer, bigger, and more well-connected neurons. Such an architecture compresses sensory details into increasingly abstract representations that group many different features into smaller groups of similar features, and in doing so helps to select a predicted action plan from the broader category that’s already there.

“Your brain is a big funnel to take the outside world and turn it into an output,” Miller says.

Moreover, anatomical evidence shows that neurons in the cortex maintain many more connections that carry feedback from memory to sensory regions than connections that feed sensory information forward. As much as 90 percent of synapses in the visual cortex are “feedback” instead of “feedforward,” Barrett and Miller wrote. In other words, the brain is built to use memory to filter incoming sensory signals, consistent with imposing needs and goals on what would otherwise be a deluge of sights, sounds, and other sensations.

Yet another line of evidence comes from numerous studies in Miller’s own lab showing that, at the broad network level of information flow in the cortex, the brain uses beta-frequency waves, which carry information about goals and plans, to constrain the expression of gamma-frequency waves, which carry information about specific sensory inputs.

Finally, the dominance of “feedback” over “feedforward” signals in the cortical architecture allows for the possibility that sensory signals are made meaningful in terms of predicted plans. When these plans are wrong, the resulting surprise can be integrated for future use.

“In science, there is a special name for that: learning,” Barrett says.

Implications for human thought and disease

In the end, Barrett and Miller’s proposal completely changes the idea of categorization, shifting it from being a particular intellectual skill to being a fundamental function for predictively meeting the body’s needs (or, “allostasis”).

“A category may not be a representation that an animal has, but a signal processing event that an animal does, predictively, to constrain the meaning of a high-dimensional ensemble of signals in a particular situation,” the authors wrote. “Categorization renders these signals meaningful — similar to one another and to past allostatic events — in terms of some goal or function.”

Humans, Barrett says, have a relatively massive amount of the neural network architecture to perform these pragmatic abstractions, and therefore can make categorizations that seem outright metaphorical (e.g., a functional similarity between “climbing the career ladder” and climbing a literal physical ladder).

But these processes can also go awry in disease, Barrett and Miller note. Depression can be seen as a disorder in which the brain imposes overly broad categories, such as “threat” or “criticism” on sensory episodes that don’t have to be perceived that way. By contrast, autism can manifest with features of inadequate compression of incoming sensory signals, not generalizing enough to recognize when a situation is similar enough to a prior one to select the appropriate plan.

Funding to support the paper came from the National Institutes of Health, The U.S. Army Research Institute for the Behavioral and Social Sciences, the Office of Naval Research, the Unlikely Collaborators Foundation, The Freedom Together Foundation, and The Picower Institute for Learning and Memory.

Photonics advance could enable compact, high-performance lidar sensors

Thu, 05/07/2026 - 5:00am

Lidar systems use pulses of infrared light to measure distance and map a 3D scene with high resolution, allowing autonomous vehicles to rapidly react to obstacles that appear in their path. But traditional lidar sensors are expensive, bulky systems with many moving parts that degrade over time, limiting how the sensors can be deployed.

A new study from MIT researchers could help to enable next-generation lidar sensors that are compact, durable, and have no moving parts. The key advance is a novel design for a silicon-photonics chip, which is a semiconductor device that manipulates light rather than electricity. 

Typically, such silicon-photonics chip-based systems have a restricted field of view, so a silicon-photonics-based lidar would not be able to scan angles in the periphery. Existing workarounds to this problem increase noise and hamper precision.

To avoid these drawbacks, the MIT researchers designed and demonstrated an array of integrated antennas that minimizes unwanted crosstalk between the antennas. Their innovation allows a lidar chip to scan a wider field of view while maintaining low-noise operation compared to other silicon-photonics-based approaches.

This novel demonstration could fuel the development of advanced lidar sensors for demanding applications like autonomous vehicle navigation, aerial surveying, and construction site monitoring.

“The functionality we demonstrated in this work solves a fundamental problem for integrated optical-phased-array technology, enabling future lidar sensors that can achieve significantly higher performance than we could demonstrate previously,” says Jelena Notaros, the Robert J. Shillman Career Development Associate Professor of Electrical Engineering and Computer Science (EECS) at MIT, a member of the Research Laboratory of Electronics, and senior author of a paper on this innovation.

She is joined on the paper by lead author and EECS graduate student Henry Crawford-Eng as well as EECS graduate students Andres Garcia Coleto, Benjamin M. Mazur, Daniel M. DeSantis, and Tal Sneh. The research appears today in Nature Communications.

Adjusting an antenna array

Many traditional lidar systems map a scene using a bulky box that spins to send pulses of light in multiple directions. The light bounces off nearby objects and returns to the sensor, providing data that are used to reconstruct the environment. 

Instead, silicon-photonics-based lidar sensors systematically scan an emitted light beam in multiple directions non-mechanically using a system called an integrated optical phased array (OPA).

Key to an OPA is an array of integrated antennas with tiny perturbations, or corrugations, placed periodically along their length. These corrugations allow each antenna to scatter light from an input source up and out of the photonic chip.

By adjusting the phase of light routed to each antenna, the researchers can change the angle at which the light is emitted out of the array. In this way, they can steer the beam with no moving parts.
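As a rough sketch of the steering principle (a textbook phased-array relation, not the team’s actual design), the phase applied to each antenna in a uniform linear array grows linearly with the antenna’s position; the element spacing and 1,550-nanometer wavelength below are illustrative values only:

```python
import math

def steering_phases(n_antennas, spacing, wavelength, angle_deg):
    """Phase shift (radians) for each antenna in a uniform linear array
    so the emitted beam points `angle_deg` away from broadside.

    Standard phased-array relation: adjacent elements differ in phase by
    2*pi*spacing*sin(theta)/wavelength.
    """
    theta = math.radians(angle_deg)
    dphi = 2 * math.pi * spacing * math.sin(theta) / wavelength
    return [n * dphi for n in range(n_antennas)]

# Steering to broadside needs no phase gradient; steering off-axis applies
# a linearly increasing phase across the array.
broadside = steering_phases(8, spacing=0.775e-6, wavelength=1.55e-6, angle_deg=0)
steered = steering_phases(8, spacing=0.775e-6, wavelength=1.55e-6, angle_deg=20)
```

Sweeping `angle_deg` and reapplying the resulting phases is the non-mechanical analogue of spinning a lidar head.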

But if engineers place the antennas too close together, the antennas will couple with each other and the light they emit will get jumbled. To avoid this, scientists typically space the antennas farther apart, but this also has downsides.

If the antennas are spaced too far apart, the array will emit multiple copies of the light beam at different angles. The researchers can only steer the primary beam so far in either direction before it becomes indistinguishable from its neighboring copies.

“This limits our field of view, so the autonomous vehicle now only knows what is in front of it for a certain angular range,” Garcia Coleto explains.

These beam copies, known as grating lobes, can cause false positives by confusing the sensor. They also waste power.
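The spacing tradeoff can be made concrete with the textbook grating-lobe condition: beam copies appear wherever the sine of the emission angle differs from the steered beam’s by an integer multiple of wavelength/spacing. This toy calculation (with illustrative values, not the paper’s geometry) shows why half-wavelength spacing yields a single beam while wide spacing produces unwanted copies:

```python
import math

def beam_angles(steer_deg, spacing, wavelength):
    """Return the angles (degrees) at which a uniform array emits beams:
    the steered main beam plus any grating-lobe copies. Copies occur at
    sin(theta) = sin(steer) + m * wavelength / spacing for integer m,
    whenever that value is physically realizable (|sin| <= 1)."""
    s0 = math.sin(math.radians(steer_deg))
    angles = []
    for m in range(-3, 4):  # a few diffraction orders on either side
        s = s0 + m * wavelength / spacing
        if -1 <= s <= 1:
            angles.append(round(math.degrees(math.asin(s)), 2))
    return sorted(angles)

# Half-wavelength spacing: a single beam and a wide steering range.
tight = beam_angles(0, spacing=0.775e-6, wavelength=1.55e-6)
# Spacing of two wavelengths: the main beam plus four unwanted copies.
sparse = beam_angles(0, spacing=3.1e-6, wavelength=1.55e-6)
```

Because the copies at wide spacing sit only 30 degrees from the main beam, the beam cannot be steered far before colliding with a copy, which is the field-of-view limit the article describes.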

The MIT researchers solved this problem by designing a set of reduced-crosstalk antennas that can be placed close together without causing a significant coupling effect.

In a standard OPA, all the antennas have the same design, meaning the same arrangement of corrugations. These identical antennas couple very strongly when placed close together.

To address this fundamental roadblock, the MIT researchers designed a set of three antennas with different geometries, varying the width of each antenna and the size and arrangement of corrugations. With varied geometries, each antenna has a different propagation coefficient, which determines how light travels down the antenna.

“Because the antennas have very different propagation coefficients, when we put them close together, essentially each antenna doesn’t ‘see’ the antenna next to it. Therefore, it won’t couple with its neighbor,” Garcia Coleto says. 

A photonic balancing act

But even though the antennas have different propagation coefficients, the researchers still need them to emit light in the same way. 

They achieved this by carefully designing the antennas to meet three parameters. 

First, each antenna must emit the same amount of light. Second, each antenna must emit a beam at the same angle for the same wavelength of light. Third, the emission angle must change uniformly across the array as the researchers steer it.

“We have this challenge where we require the antennas to have different geometries to reduce the crosstalk, but we need to simultaneously design the antennas to have the same emission characteristics. While it is possible to engineer this, it is extremely difficult because, typically, when antennas are designed with different geometries, they tend to behave differently,” Crawford-Eng says.

The researchers first developed the fundamental electromagnetic theory behind how radiative modes couple. They used that theory as a guide to design and simulate their antennas.

Building on those analyses, they fabricated the OPA with reduced-crosstalk antennas spaced significantly closer than they would be in a traditional OPA, then experimentally tested the system.

While a typical OPA would have coupling of about 100 percent in this experiment, their OPA reduced coupling to about 1 percent while generating a single, precise beam. Using this design, they demonstrated accurate beam steering across a wide field of view without any grating lobes. 

In the future, the researchers plan to further improve their technique to enable an even wider field of view. In addition, they are exploring a new potential solution to wide field-of-view functionality that they discovered while developing the underlying theory.

“This work addresses a longstanding challenge in integrated optical phased arrays: simultaneously achieving both a wide field of view, which requires dense antenna spacing, and high beam quality, which requires low crosstalk between neighboring antennas. The authors solve this problem with an elegant antenna design. Their innovation is an important step forward for chip-scale, solid-state beam-steering technology,” says Joyce Poon, professor of electrical and computer engineering at the University of Toronto and director of the Max Planck Institute of Microstructure Physics, who was not involved with this work.

This research was supported, in part, by the Semiconductor Research Corporation, the National Science Foundation, an MIT MathWorks Fellowship, the U.S. Department of War, and the MIT Rolf G. Locher Endowed Fellowship.

Study: Firms often use automation to control certain workers’ wages

Thu, 05/07/2026 - 12:00am

When we hear about automation and artificial intelligence replacing jobs, it may seem like a tsunami of technology is going to wipe out workers broadly, in the name of greater efficiency. But a study co-authored by an MIT economist shows markedly different dynamics in the U.S. since 1980. 

Rather than implement automation in pursuit of maximal productivity, firms have often used automation to replace employees who specifically receive a “wage premium,” earning higher salaries than other comparable workers. In practice, that means automation has frequently reduced the earnings of non-college-educated workers who had obtained better salaries than most employees with similar qualifications. 

This finding has at least two big implications. For one thing, automation has affected the growth in U.S. income inequality even more than many observers realize. At the same time, automation has yielded a mediocre productivity boost, plausibly due to the focus of firms on controlling wages rather than finding more tech-driven ways to enhance efficiency and long-term growth.

“There has been an inefficient targeting of automation,” says MIT’s Daron Acemoglu, co-author of a published paper detailing the study’s results. “The higher the wage of the worker in a particular industry or occupation or task, the more attractive automation becomes to firms.” In theory, he notes, firms could automate efficiently. But they have not, instead emphasizing automation as a tool for shedding salaries, which helps their short-term numbers without building an optimal path for growth.

The study estimates that automation is responsible for 52 percent of the growth in income inequality from 1980 to 2016, and that about 10 percentage points derive specifically from firms replacing workers who had been earning a wage premium. This inefficient targeting of certain employees has offset 60-90 percent of the productivity gains from automation during the time period.

“It’s one of the possible reasons productivity improvements have been relatively muted in the U.S., despite the fact that we’ve had an amazing number of new patents, and an amazing number of new technologies,” Acemoglu says. “Then you look at the productivity statistics, and they are fairly pitiful.”

The paper, “Automation and Rent Dissipation: Implications for Wages, Inequality, and Productivity,” appears in the May print issue of the Quarterly Journal of Economics. The authors are Acemoglu, who is an Institute Professor at MIT; and Pascual Restrepo, an associate professor of economics at Yale University.

Inequality implications

Dating back to the 2010s, Acemoglu and Restrepo have combined to conduct many studies about automation and its effects on employment, wages, productivity, and firm growth. In general, their findings have suggested that the effects of automation on the workforce after 1980 are more significant than many other scholars have believed. 

To conduct the current study, the researchers used data from many sources, including U.S. Census Bureau statistics, data from the bureau’s American Community Survey, industry numbers, and more. Acemoglu and Restrepo analyzed 500 detailed demographic groups, sorted by five levels of education, as well as gender, age, and ethnic background. The study links this information to an analysis of changes in 49 U.S. industries, for a granular look at the way automation affected the workforce. 

Ultimately, the analysis allowed the scholars to estimate not just the overall number of jobs erased due to automation, but how much of that erasure consisted of firms very specifically trying to remove the wage premium accruing to some of their workers. 

Among other findings, the study shows that within groups of workers affected by automation, the biggest effects occur for workers in the 70th-95th percentile of the salary range, indicating that higher-earning employees bear much of the brunt of this process. 

And as the analysis indicates, about one-fifth of the overall growth in income inequality is attributable to this sole factor.

“I think that is a big number,” says Acemoglu, who shared the 2024 Nobel Prize in economic sciences with his longtime collaborators Simon Johnson of MIT and James Robinson of the University of Chicago.

He adds: “Automation, of course, is an engine of economic growth and we’re going to use it, but it does create very large inequalities between capital and labor, and between different labor groups, and hence it may have been a much bigger contributor to the increase in inequality in the United States over the last several decades.” 

The productivity puzzle

The study also illuminates a basic choice for firm managers, but one that gets overlooked. Imagine a type of automation — call-center technology, for instance — that might actually be inefficient for a business. Even so, firm managers have incentive to adopt it, reduce wages, and oversee a less productive business with increased net profits.

Writ large, some version of this seems to have been happening to the U.S. economy since 1980: Greater profitability is not the same as increased productivity.

“Those two things are different,” says Acemoglu. “You can reduce costs while reducing productivity.” 

Indeed, the current study by Acemoglu and Restrepo calls to mind an observation by the late MIT economist Robert M. Solow, who in 1987 wrote, “You can see the computer age everywhere but in the productivity statistics.” 

In that vein, Acemoglu observes, “If managers can reduce productivity by 1 percent but increase profits, many of them might be happy with that. It depends on their priorities and values. So the other important implication of our paper is that good automation at the margins is being bundled with not-so-good automation.” 

To be clear, the study does not necessarily imply that less automation is always better. Certain types of automation can boost productivity and feed a virtuous cycle in which a firm makes more money and hires more workers. 

But currently, Acemoglu believes, the complexities of automation are not yet recognized clearly enough. Perhaps seeing the broad historical pattern of U.S. automation, since 1980, will help people better grasp the tradeoffs involved — and not just economists, but firm managers, workers, and technologists. 

“The important thing is whether it becomes incorporated into people’s thinking and where we land in terms of the overall holistic assessment of automation, in terms of inequality, productivity and labor market effects,” Acemoglu says. “So we hope this study moves the dial there.”

Or, as he concludes, “We could be missing out on potentially even better productivity gains by calibrating the type and extent of automation more carefully, and in a more productivity-enhancing way. It’s all a choice, 100 percent.”

MIT BrainTrust supports neighbors living with brain injuries

Wed, 05/06/2026 - 2:25pm

Since 1998, members of MIT’s BrainTrust club have helped Boston-area residents with brain injuries or other neurological disorders through their buddy program. The organization’s members also visit nursing home patients suffering from neurological issues.

BrainTrust is one of the founding chapters of Synapse National, an organization created by MIT alumna Alissa Totman ’13. Synapse’s goal is to provide social support for individuals living with brain injuries and to educate and inspire student leaders in the field of brain injury.

“Learning directly from individuals who had experienced brain injury during my time in BrainTrust gave me an appreciation of the gaps in resources and opportunities for improvement in brain injury care, which ultimately motivated me to pursue a career in brain injury medicine. My experience in BrainTrust continues to shape my approach to patient care and my professional goal of improving access to specialized care for individuals with brain injury by serving as a consulting provider in the acute care hospital, as well as by training the next generation of leaders in the field,” says Totman.

The club’s president, junior Karie Shen, who is pursuing a double major in biology (Course 7) and brain and cognitive science (Course 9), says, “BrainTrust is a student-run service organization that provides support for individuals with brain injury and other neurological disorders. I joined BrainTrust because it seemed like the perfect intersection of community service and neuroscience, and I care about these two things deeply.”

BrainTrust volunteers participate in training and then are paired with a local buddy who has experienced a brain injury. Members can also spend time on the weekends with patients in nursing homes who have dementia, Alzheimer’s disease, or who have had a stroke.

Shen, along with Elizabeth Zhang, president of the MIT Pre-Med Society, recently developed a program that allows BrainTrust members to visit patients in hospice. “It’s an experience that is deeply valuable for students. We work through a third-party organization called Compassus. Because the pairing process is HIPAA-protected, our role as BrainTrust executive members is to recruit students and connect them with the hospice volunteer coordinator for training. We also provide funding for transportation, generously supported by the UA Community Service Committee,” says Shen.

Shen, who plans to go to medical school and specialize in neurology, neuro-oncology, or geriatric medicine after completing her degree, finds the experience rewarding and at times difficult, and says it offers a glimpse into the reality of working with people with brain injuries.

“Visiting the people in hospice or a nursing home is hard. I’ve seen residents cry for no apparent reason that the nurses or I can understand. But I have also come to understand that caring for a patient’s quality of life and dignity is equally important. What I came to realize is that my presence itself mattered. That perspective has shaped how I think about the kind of physician I want to become,” says Shen.

First-year student Jordan Lacsamana heard about the club during Campus Preview Weekend and was immediately interested. Lacsamana, who will major in brain and cognitive sciences, is a volunteer in the Buddy Program and meets with her buddy at least once a month.

“I joined the club because it aligned with my interests academically, but I also wanted to support someone in the Boston community. I’m pre-med, and I’m interested in surgery, possibly neurosurgery or cardiovascular surgery. But I also think it’s nice to have someone outside of MIT to talk with. It’s great to learn more about them and have that one-on-one friendship, which really is the goal,” says Lacsamana.

Lacsamana says she enjoys spending time with Amanda, her buddy, and exploring Boston and Harvard Square, meeting for coffee or meals, and getting as much out of the relationship as Amanda does.

“I see her as a mentor because coming to Boston from Dallas was such a big change, so I’ve also been able to look to her for advice. But I think one of the great things about the program is that you get to learn more about them as an individual, instead of seeing them as just a person with an injury,” says Lacsamana.

“Many of our brain injury buddies simply enjoy being around students, staying connected to what we are learning and doing. Some have been with the club for years, even upwards of a decade, and still keep up with former student members long after they graduate. It is really wonderful to see how BrainTrust has created this web of friendships between people who would otherwise never have met,” says Shen.

“Amanda has stayed in touch with her former buddy since she graduated from MIT and is going to her wedding,” says Lacsamana. “I think it’s a testament to how amazing this program is at forming those connections.”

MIT students who seek real-world opportunities in fields such as cognitive science, health care, medicine, and cognitive/neurological prosthetics, or who want to help a local resident, can join BrainTrust. Email braintrust-exec@mit.edu for more information.

Method for stress-testing cloud computing algorithms helps avoid network failures

Wed, 05/06/2026 - 12:00am

Researchers from MIT and elsewhere have developed a more user-friendly and efficient method to help networking engineers identify potential system failures before they cause major problems, like a cloud service outage that leaves millions of users unable to access applications. 

The technique uncovers hidden blind spots that might cause a shortcut algorithm to fail unexpectedly when it is deployed. 

This new approach can identify worst-case scenarios that an engineer might miss when using a traditional method that compares an algorithm against a set of human-designed past test cases. It is also less labor-intensive than other verification tools that require engineers to rewrite an algorithm in complex mathematical code each time they want to test it.

Instead of needing a mathematical reformulation, the new method reads the algorithm’s source code directly and automatically searches for the worst-case scenarios that lead to the highest level of underperformance.

By helping engineers quickly and easily stress-test a networking algorithm before deployment, the method could catch failure modes that might otherwise only appear in a real outage. The technique could also be used to analyze the risks of deploying AI-generated code.

“We need to have good tools to measure the worst-case performance of our algorithms so we know what could happen before we put them into production. This is an easy-to-use tool that can be plugged into current systems so we can find the best algorithm to use and ensure the worst-case scenarios are identified in advance,” says Pantea Karimi, an electrical engineering and computer science (EECS) graduate student and lead author of a paper on this new technique. 

She is joined on the paper by senior authors Mohammad Alizadeh, an associate professor of EECS and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); and Behnaz Arzani, a principal researcher at Microsoft Research; along with Ryan Beckett, Siva Kesava Reddy Karkarla, and Pooria Namyar, researchers at Microsoft Research; and Santiago Segarra, a professor at Rice University. The research will be presented at the USENIX Symposium on Networked Systems Design and Implementation. 

Assessing algorithms

In large systems like cloud servers, the tried-and-true algorithms that route data from one place to another are often too computationally intensive to run in a feasible amount of time. 

So, engineers and researchers develop suboptimal algorithms called heuristics that can run much faster. However, there could be unexpected but plausible circumstances that will cause a heuristic to underperform or fail when deployed.

A heuristic can route millions of data requests across a cloud network in seconds, but under the wrong conditions — like an unusual traffic pattern or a sudden spike in demand — the shortcut can break down in ways the designer never anticipated.

When these problems occur, a company may have no choice but to drop some requests that can’t be processed. 

The firm could also deliberately allocate more resources in advance to head off a potential disaster, leading to higher overall costs and wasted electricity from underutilization.

“This is really bad for a company because, either way, they are going to lose a lot of money. If this particular scenario hasn’t happened before and was never tested, how would a developer know in advance before it happens?” Karimi says.

Stress-testing heuristics typically involves running a new algorithm in simulation using a set of human-designed test cases and manually comparing the performance with a previous algorithm. But this is time-consuming and can leave blind spots if an engineer doesn’t know to test for certain situations.

Alternatively, engineers could use a verification tool to evaluate the performance of their heuristic more systematically. However, these tools require the engineer to encode the algorithm into a complex, mathematical formula that can take days to flesh out. The process, which doesn’t work for every type of heuristic, must be repeated each time the engineer changes the code.

Instead, the researchers developed a more user-friendly and efficient verification tool, called MetaEase, that analyzes the heuristic’s existing implementation code directly to identify the biggest risks of deploying it.

“This would reduce the friction of using these heuristic analysis tools,” Karimi says.

She began this work during an internship at Microsoft Research, where the team previously developed MetaOpt, a heuristic analyzer that requires engineers to rewrite their algorithms as formal optimization models. MetaEase grew out of the desire to remove that barrier.

Maximizing the gap

MetaEase is driven by two key innovations. First, it uses a technique called symbolic execution to map out the different decision points in the heuristic's code. These are places where the algorithm might behave differently depending on the input.

This technique produces a set of representative starting points, each corresponding to a distinct behavior the heuristic could exhibit.

Second, from these starting points, MetaEase utilizes a guided search to systematically move toward inputs that make the heuristic perform as poorly as possible, compared to the optimal algorithm.

In machine learning, for instance, an input could be a set of user queries to an AI chatbot at a given time.

“In this way, we have exploited every possible heuristic behavior and used special techniques to move in the direction where we think the performance gap is going to increase,” Karimi explains.

In the end, MetaEase identifies the input that maximizes the performance gap between the heuristic and an optimal benchmark.
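The idea of searching input space for the widest heuristic-versus-optimal gap can be illustrated with a deliberately simple stand-in (this is a toy, not MetaEase itself, and it omits the symbolic-execution stage): a greedy two-machine scheduling heuristic compared against a brute-force optimum, with a random hill-climb nudging the input toward worse heuristic behavior:

```python
import random

def greedy_makespan(jobs):
    """Heuristic: assign each job, in arrival order, to the machine with
    the lighter current load (classic online list scheduling)."""
    loads = [0, 0]
    for j in jobs:
        loads[loads.index(min(loads))] += j
    return max(loads)

def optimal_makespan(jobs):
    """Exact benchmark: brute-force every assignment of jobs to the two
    machines (fine at toy sizes, infeasible at cloud scale)."""
    total = sum(jobs)
    best = total
    for mask in range(2 ** len(jobs)):
        one = sum(j for i, j in enumerate(jobs) if mask >> i & 1)
        best = min(best, max(one, total - one))
    return best

def search_worst_case(n_jobs=4, iters=2000, seed=0):
    """Hill-climb over job lists, keeping any mutation that widens the
    heuristic/optimal gap -- the flavor of MetaEase's guided search."""
    rng = random.Random(seed)
    jobs = [rng.randint(1, 10) for _ in range(n_jobs)]
    gap = greedy_makespan(jobs) / optimal_makespan(jobs)
    for _ in range(iters):
        cand = jobs[:]
        cand[rng.randrange(n_jobs)] = rng.randint(1, 10)
        g = greedy_makespan(cand) / optimal_makespan(cand)
        if g > gap:
            jobs, gap = cand, g
    return jobs, gap
```

For this particular heuristic, theory caps the gap at 1.5 (the known 2 - 1/m bound for list scheduling on m = 2 machines), so a search result approaching 1.5 signals that the search has found the genuine worst-case regime.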

With this information, a heuristic developer could inspect the input to understand what went wrong and incorporate safeguards that will prevent the problem from happening during deployment.

In simulated experiments, MetaEase often identified inputs with larger performance gaps than traditional methods did, pinpointing more catastrophic worst-case scenarios. And it did so much more efficiently. 

It was also able to analyze a recent networking heuristic that no state-of-the-art method could handle.

In the future, the researchers want to enhance MetaEase so it can process additional types of data, like categorical inputs. They also want to improve the scalability of their method and adapt MetaEase to evaluate more complex heuristics.

“Reasoning about the worst-case performance of deployed heuristics is a hard and longstanding problem. MetaEase makes tangible progress by analyzing heuristics directly from source code, eliminating the need for formal models that have historically limited who can use such analysis tools. I was pleasantly surprised that it handles non-convex and randomized heuristics by combining symbolic execution with gradient-based search in a practical and effective way,” says Ratul Mahajan of the University of Washington Paul G. Allen School of Computer Science and Engineering, who was not involved with this research.

This research was funded, in part, by a Microsoft Research internship and the U.S. National Science Foundation (NSF).

Games people — and machines — play: Untangling strategic reasoning to advance AI

Tue, 05/05/2026 - 5:00pm

Gabriele Farina grew up in a small town in a hilly winemaking region of northern Italy. Neither of his parents had college degrees, and although both were convinced they “didn’t understand math,” Farina says, they bought him the technical books he wanted and didn’t discourage him from attending the science-oriented, rather than the classical, high school.

By around age 14, Farina had focused on an idea that would prove foundational to his career.

“I was fascinated very early by the idea that a machine could make predictions or decisions so much better than humans,” he says. “The fact that human-made mathematics and algorithms could create systems that, in some sense, outperform their creators, all while building on simple building blocks, has always been a major source of awe for me.”

At age 16, Farina wrote code to solve a board game he played with his 13-year-old sister.

“I used it game after game to compute the optimal move and prove to my sister that she had already lost long before either of us could see it ourselves,” Farina says, adding that his sister was less enthralled with his new system.

Now an assistant professor in MIT’s Department of Electrical Engineering and Computer Science (EECS) and a principal investigator at the Laboratory for Information and Decision Systems (LIDS), Farina combines concepts from game theory with such tools as machine learning, optimization, and statistics to advance theoretical and algorithmic foundations for decision-making.

Enrolling at Politecnico di Milano for college, Farina studied automation and control engineering. Over time, however, he realized that what activated his interest was not “just applying known techniques, but understanding and extending their foundations,” he says. “I gradually shifted more and more toward theory, while still caring deeply about demonstrating concrete applications of that theory.”

Farina’s advisor at Politecnico di Milano, Nicola Gatti, professor and researcher in computer science and engineering, introduced Farina to research questions in computational game theory and encouraged him to apply for a PhD. At the time, being the first in his immediate family to earn a college degree and living in Italy, where doctoral degrees are handled differently, Farina says he didn’t even know what a PhD was.

Nevertheless, one month after graduating with his undergraduate degree, Farina began a doctoral degree in computer science at Carnegie Mellon University. There, he won distinctions for his research and dissertation, as well as a Facebook Fellowship in Economics and Computation.

As he was finishing his doctorate, Farina worked for a year as a research scientist in Meta’s Fundamental AI Research Labs. One of his major projects was helping to develop Cicero, an AI that was able to beat human players in a game that involves forming alliances, negotiating, and detecting when other players are bluffing.

Farina says, “When we built Cicero, we designed it so that it would not agree to form an alliance if it was not in its interest, and it likewise understood whether a player was likely lying, because for them to do as they proposed would be against their own incentives.”

A 2022 article in the MIT Technology Review said Cicero could represent advancement toward AIs that can solve complex problems requiring compromise.

After his year at Meta, Farina joined the MIT faculty. In 2025, he received the National Science Foundation CAREER Award. His work draws on game theory and its mathematical language for describing what happens when different parties have different objectives, and for quantifying the “equilibrium” where no one has a reason to change their strategy; it aims to make tractable massive, complex real-world scenarios where calculating such an equilibrium directly could take a billion years.

“I research how we can use optimization and algorithms to actually find these stable points efficiently,” he says. “Our work tries to shed new light on the mathematical underpinnings of the theory, better control and predict these complex dynamical systems, and uses these ideas to compute good solutions to large multi-agent interactions.”
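The flavor of this idea can be sketched in code. The snippet below is a toy illustration rather than anything from Farina’s papers: it runs regret matching, a classic no-regret learning rule due to Hart and Mas-Colell, in self-play on a small two-player zero-sum game. The averaged strategies approach the equilibrium at which neither player benefits from switching. The game matrix and all names here are invented for illustration.

```python
def regret_matching_strategy(regrets):
    """Turn cumulative regrets into a mixed strategy (regret matching)."""
    positives = [max(r, 0.0) for r in regrets]
    total = sum(positives)
    n = len(regrets)
    return [p / total for p in positives] if total > 0 else [1.0 / n] * n

def solve_zero_sum(payoff, iters=20000):
    """Self-play with regret matching on a two-player zero-sum matrix game.

    payoff[i][j] is the row player's payoff; the column player gets its
    negative.  The *average* strategies converge to a Nash equilibrium.
    """
    n, m = len(payoff), len(payoff[0])
    r_row, r_col = [0.0] * n, [0.0] * m
    avg_row, avg_col = [0.0] * n, [0.0] * m
    for _ in range(iters):
        s_row = regret_matching_strategy(r_row)
        s_col = regret_matching_strategy(r_col)
        # Expected payoff of each pure action against the opponent's mix.
        u_row = [sum(payoff[i][j] * s_col[j] for j in range(m)) for i in range(n)]
        u_col = [-sum(payoff[i][j] * s_row[i] for i in range(n)) for j in range(m)]
        v_row = sum(s * u for s, u in zip(s_row, u_row))
        v_col = sum(s * u for s, u in zip(s_col, u_col))
        # Regret: how much better each action would have done than the mix.
        for i in range(n):
            r_row[i] += u_row[i] - v_row
            avg_row[i] += s_row[i]
        for j in range(m):
            r_col[j] += u_col[j] - v_col
            avg_col[j] += s_col[j]
    return [x / iters for x in avg_row], [x / iters for x in avg_col]

# A biased matching-pennies game: its unique equilibrium has both players
# mixing roughly (0.4, 0.6), so neither can gain by deviating.
biased_pennies = [[2, -1], [-1, 1]]
row, col = solve_zero_sum(biased_pennies)
```

Each player here only tracks how much better each action would have done in hindsight; playing in proportion to that positive regret is enough, on average, to find the stable point.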

Farina is especially interested in settings with “imperfect information,” which means that some agents have information that is unknown to other participants. In such scenarios, information has value, and participants must be strategic about acting on the information they possess so as not to reveal it and reduce its value. An everyday example occurs in the game of poker, where players bluff in order to conceal information about their cards.

According to Farina, “we now live in a world in which machines are far better at bluffing than humans.”

A situation with “massive amounts of imperfect information” has brought Farina back to his board-game beginnings. Stratego is a military strategy game that has inspired research efforts costing millions of dollars to produce systems capable of beating human players. Requiring complex risk calculation and misdirection, or bluffing, it was possibly the only classical game for which major efforts had failed to produce superhuman performance, Farina says.

With new algorithms and training costing less than $10,000, rather than millions, Farina and his research team were able to beat the best player of all time — with 15 wins, four draws, and one loss. Farina says he is thrilled to have produced such results so economically, and he hopes “these new techniques will be incorporated into future pipelines.”

“We have seen constant progress towards constructing algorithms that can reason strategically and make sound decisions despite large action spaces or imperfect information. I am excited about seeing these algorithms incorporated into the broader AI revolution that’s happening around us.”

MIT marks first Robert R. Taylor Day with Tuskegee University

Tue, 05/05/2026 - 4:35pm

On April 10, MIT marked its first official Robert R. Taylor Day with a program centered on the life and work of Robert Robinson Taylor (Class of 1892), the Institute’s first Black graduate and the first academically trained Black architect in the United States.

After graduating from MIT, Taylor joined Tuskegee Institute (now Tuskegee University), where he designed campus buildings, developed a curriculum, and helped establish an approach to architectural education grounded in making and community life — an orientation that continues to shape the relationship between MIT and Tuskegee today. 

Taylor returned to MIT on April 10, 1911, to speak at the 50th anniversary of the Institute’s founding — the date now observed as Robert R. Taylor Day. Reflecting on his education, he credited MIT with the “methods and plans” he carried to Tuskegee Institute. “Certainly the spirit,” he said, was found “in the love of doing things correctly, of putting logical ways of thinking into the humblest task … to build up the immediate community in which the persons live.”

One hundred fifteen years later, at the MIT Museum, students and faculty gathered around Taylor’s original thesis, “A Soldiers Home.” The work was presented alongside archival materials from Taylor’s time at MIT by Jonathan Duval, assistant curator of architecture and design. Rather than framing Taylor as a distant historical figure, the encounter with the work itself — its drawings, assumptions, and ambitions — set the terms for the day, bringing forward not only his accomplishments but the ideas and methods that continue to inform teaching and collaboration today. Attendees then gathered for a lunch-and-learn session including a hybrid panel involving MIT and Tuskegee University faculty. 

“It is so important to continue to develop the MIT-Tuskegee relationship begun by Robert R. Taylor,” says Kwesi Daniels, associate professor and head of the architecture department at Tuskegee University. “MIT students are provided an opportunity to experience the campus Taylor designed and his ethos of social architecture. For the Tuskegee students, they are able to appreciate the foundation Taylor received at MIT. The engagement epitomizes the ‘mind and hand’ philosophy of MIT and the head, hand, heart philosophy of Tuskegee.”

An ongoing exchange

Student and faculty exchanges, launched by the architecture departments at both institutions, have extended these connections in recent years. MIT students travel to Tuskegee for work in historic preservation and community engagement, working with Daniels’ scanning and drone equipment, while Tuskegee students come to MIT to engage with digital fabrication and entrepreneurship.

For Nicholas de Monchaux, professor and head of the Department of Architecture at MIT, the relationship reflects continuity. “We are not uniting. We’re reuniting,” he says. “This year’s celebration should really be seen as the kickoff of a year of reflecting on Robert Taylor’s legacy and imagining what the day, and his legacy, can become over time.”

The day’s program — the vision for which originally emerged from a suggestion made by MIT literature professor Joshua Bennett during a meeting at Tuskegee with de Monchaux, Daniels, and Tuskegee President Mark Brown — grew into a broader effort among faculty and collaborators across architecture, history, and the humanities. As Bennett put it, “The primary aim of Robert R. Taylor Day is to lift up not only Taylor’s accomplishments, but his ideas — and the fact that his ideas live on in those of us who have inherited his legacy.”

That emphasis is also visible in the dedicated coursework and research that has accompanied the exchange since 2022. In class 4.s12 (Brick x Brick: Drawing a Particular Survey), taught by Carrie Norman, assistant professor in architecture at MIT, students document buildings on the Tuskegee campus through measured drawings and archival interpretation. Working from limited historical material, they reconstruct both form and intent.

“My role has been to structure this work pedagogically,” Norman says, “guiding students in methods of close looking, measured drawing, and archival interpretation.” She describes Taylor’s work as “an ongoing research agenda,” adding that “the broader aim is not only to deepen engagement with Taylor’s legacy, but to build on it through new forms of design research.”

Related work has contributed to a recent exhibition on the Tuskegee Chapel at the National Building Museum, curated by Helen Bechtel of the Yale School of Architecture. Building on research conducted in Norman’s course, students developed large-scale models that form part of the exhibition. New 3D fabrications use a limited set of archival materials to reconstruct the chapel originally designed by Taylor as the first electrified building in Alabama’s Macon County, which was destroyed by fire in 1957.

Looking ahead

Timothy Hyde, professor in the MIT Department of Architecture, has also been involved in the ongoing MIT–Tuskegee collaboration and in efforts to situate Taylor’s work within a broader historical context. He notes that Taylor’s training at MIT helped shape the curriculum he later developed at Tuskegee. “The other influence I would like to mention is the city of Boston itself,” Hyde adds. “Boston was a prosperous city with a wealth of civic architecture that Taylor would have seen and studied.” 

A documentary project on Taylor’s life, supported by the MIT Human Insight Collaborative and led by Hyde and historian Christopher Capozzola, senior associate dean for MIT Open Learning, is currently in development.

For some students, these encounters shape longer trajectories. As an undergraduate at Tuskegee, Myles Sampson participated in the MIT Summer Research Program (MSRP), where he began to connect architecture with a growing interest in computation. He later enrolled in MIT’s Master of Science in Architecture Studies (SMArchS) computation program, working with Professor Larry Sass, who introduced him to robotic fabrication.

“I never looked back,” Sampson says. “Without that hands-on research experience, I would never have looked past contemporary architectural practice.” He is now pursuing a doctorate in computational design at Carnegie Mellon University, focused on the role of automation in architecture and construction.

Sampson contributed significant work to the National Building Museum’s exhibition. His installation, Brick Parable, brings together historical reference and robotic construction. As de Monchaux notes, the project reflects the long arc of Taylor’s legacy: “bricks were fired by students as part of Taylor’s training program … Myles [Sampson]’s piece, made with a robotic assembly of bricks, explores the architectural idea of the chapel in contemporary form.”

For Daniels, the continued circulation of students between the two institutions remains central. Viewing Taylor’s thesis in particular offers a shared point of reference. “Whether the student is from Tuskegee or MIT, they are able to appreciate the quality of work Taylor completed as a student,” he says, “and how he built on that work by creating a college campus, beginning at age 25.”

Across these activities, Taylor’s work is approached not as a fixed legacy, but as a set of methods and commitments that continue to be tested. As Catherine Armwood, dean of Tuskegee University’s Robert R. Taylor School of Architecture and Construction Science, describes it: “While our students leverage [the design and entrepreneurship program] MITdesignX to turn architectural concepts into social enterprises through advanced fabrication and venture mentorship, MIT students come to Tuskegee for an immersion in historic preservation. By surveying buildings handcrafted by our founding students, they learn a legacy of self-reliance and community impact that can’t be found anywhere else,” Armwood says. “Together, we are bridging technical innovation with deep-rooted heritage to train a new generation of visionary leaders.” 
