Feed aggregator
MIT engineers develop a magnetic transistor for more energy-efficient electronics
Transistors, the building blocks of modern electronics, are typically made of silicon. Because it’s a semiconductor, this material can control the flow of electricity in a circuit. But silicon has fundamental physical limits that restrict how compact and energy-efficient a transistor can be.
MIT researchers have now replaced silicon with a magnetic semiconductor, creating a magnetic transistor that could enable smaller, faster, and more energy-efficient circuits. The material’s magnetism strongly influences its electronic behavior, leading to more efficient control of the flow of electricity.
The team used a novel magnetic material and an optimization process that reduces the material’s defects, which boosts the transistor’s performance.
The material’s unique magnetic properties also allow for transistors with built-in memory, which would simplify circuit design and unlock new applications for high-performance electronics.
“People have known about magnets for thousands of years, but there are very limited ways to incorporate magnetism into electronics. We have shown a new way to efficiently utilize magnetism that opens up a lot of possibilities for future applications and research,” says Chung-Tao Chou, an MIT graduate student in the departments of Electrical Engineering and Computer Science (EECS) and Physics, and co-lead author of a paper on this advance.
Chou is joined on the paper by co-lead author Eugene Park, a graduate student in the Department of Materials Science and Engineering (DMSE); Julian Klein, a DMSE research scientist; Josep Ingla-Aynes, a postdoc in the MIT Plasma Science and Fusion Center; Jagadeesh S. Moodera, a senior research scientist in the Department of Physics; and senior authors Frances Ross, TDK Professor in DMSE; and Luqiao Liu, an associate professor in EECS, and a member of the Research Laboratory of Electronics; as well as others at the University of Chemistry and Technology in Prague. The paper appears today in Physical Review Letters.
Overcoming the limits
In an electronic device, silicon semiconductor transistors act like tiny light switches that turn a circuit on and off, or amplify weak signals in a communication system. They do this using a small input voltage.
But a fundamental physical limit of silicon semiconductors prevents a transistor from operating below a certain voltage, which hinders its energy efficiency.
To make more efficient electronics, researchers have spent decades working toward magnetic transistors that utilize electron spin to control the flow of electricity. Electron spin is a fundamental property that enables electrons to behave like tiny magnets.
So far, scientists have mostly been limited to using certain magnetic materials. These lack the favorable electronic properties of semiconductors, constraining device performance.
“In this work, we combine magnetism and semiconductor physics to realize useful spintronic devices,” Liu says.
The researchers replace the silicon in the surface layer of a transistor with chromium sulfur bromide, a two-dimensional material that acts as a magnetic semiconductor.
Due to the material’s structure, researchers can switch between two magnetic states very cleanly. This makes it ideal for use in a transistor that smoothly switches between “on” and “off.”
“One of the biggest challenges we faced was finding the right material. We tried many other materials that didn’t work,” Chou says.
They discovered that changing these magnetic states modifies the material’s electronic properties, enabling low-energy operation. And unlike many other 2D materials, chromium sulfur bromide remains stable in air.
To make a transistor, the researchers pattern electrodes onto a silicon substrate, then carefully align and transfer the 2D material on top. They use tape to pick up a tiny piece of material, only a few tens of nanometers thick, and place it onto the substrate.
“A lot of researchers will use solvents or glue to do the transfer, but transistors require a very clean surface. We eliminate all those risks by simplifying this step,” Chou says.
Leveraging magnetism
This lack of contamination enables their device to outperform existing magnetic transistors. Most others can only create a weak magnetic effect, changing the flow of current by a few percent or less. Their new transistor can switch or amplify the electric current by a factor of 10.
They use an external magnetic field to change the magnetic state of the material, switching the transistor using significantly less energy than would usually be required.
The material also allows them to control the magnetic states with electric current. This is important because engineers cannot apply magnetic fields to individual transistors in an electronic device. They need to control each one electrically.
The material’s magnetic properties could also enable transistors with built-in memory, simplifying the design of logic or memory circuits.
A typical memory device has a magnetic cell to store information and a transistor to read it out. Their method can combine both into one magnetic transistor.
“Now, not only are transistors turning on and off, they are also remembering information. And because we can switch the transistor with greater magnitude, the signal is much stronger so we can read out the information faster, and in a much more reliable way,” Liu says.
Building on this demonstration, the researchers plan to further study the use of electrical current to control the device. They are also working to make their method scalable so they can fabricate arrays of transistors.
This research was supported, in part, by the Semiconductor Research Corporation, the U.S. Defense Advanced Research Projects Agency (DARPA), the U.S. National Science Foundation (NSF), the U.S. Department of Energy, the U.S. Army Research Office, and the Czech Ministry of Education, Youth, and Sports. The work was partially carried out at the MIT.nano facilities.
Friday Squid Blogging: Squid Fishing Tips
This is a video of advice for squid fishing in Puget Sound.
As usual, you can also use this squid post to talk about the security stories in the news that I haven’t covered.
I Am in the Epstein Files
Once. Someone named “Vincenzo lozzo” wrote to Epstein in an email in 2016: “I wouldn’t pay too much attention to this, Schneier has a long tradition of dramatizing and misunderstanding things.” The topic of the email is DDoS attacks, and it is unclear what I am dramatizing and misunderstanding.
Rabbi Schneier is also mentioned, also incidentally, also once. As far as either of us know, we are not related.
“This is science!” – MIT president talks about the importance of America’s research enterprise on GBH’s Boston Public Radio
In a wide-ranging conversation, MIT President Sally Kornbluth joined Jim Braude and Margery Eagan live in studio for GBH’s Boston Public Radio on Thursday, February 5. They talked about MIT, the pressures facing America’s research enterprise, the importance of science, the 2023 congressional hearing on antisemitism, and more – including Kornbluth’s experience as a Type 1 diabetic.
Reflecting on how research and innovation in the treatment of diabetes has advanced over decades of work, leading to markedly better patient care, Kornbluth exclaims: “This is science!”
With new financial pressures facing universities, increased competition for talented students and scholars from outside the U.S., as well as unprecedented pressures on university leaders and campuses, co-host Eagan asks Kornbluth what she thinks will happen in years to come.
“For us, one of the hardest things now is the endowment tax,” remarks Kornbluth. “That is $240 million a year. Think about how much science you can get for $240 million a year. Are we managing it? Yes. Are we still forging ahead on all of our exciting initiatives? Yes. But we’ve had to reconfigure things. We’ve had to merge things. And it’s not the way we should be spending our time and money.”
Watch and listen to the full episode on YouTube. President Kornbluth appears one hour and seven minutes into the broadcast.
Following Kornbluth’s appearance, MIT Assistant Professor John Urschel – also a former offensive lineman for the Baltimore Ravens – joined Edgar B. Herwick III, host of GBH’s newest show, The Curiosity Desk, to talk about his love of his family, linear algebra, and football.
On how he eventually chose math over football, Urschel quips: “Well, I hate to break it to you, I like math better… let me tell you, when I started my PhD at MIT, I just fell in love with the place. I fell in love with this idea of being in this environment [where] everyone loves math, everyone wants to learn. I was just constantly excited every day showing up.”
Prof. Urschel appears about 2 hours and 40 minutes into the webcast on YouTube.
Coming up on The Curiosity Desk later this month…
Airing weekday afternoons from 1-2 p.m., The Curiosity Desk will welcome additional MIT guests in the coming weeks. On Thursday, Feb. 12, Anette “Peko” Hosoi, Pappalardo Professor of Mechanical Engineering, and Jerry Lu MFin ’24, a former researcher at the MIT Sports Lab, visit The Curiosity Desk to discuss their work using AI to help Olympic figure skaters improve their jumps.
Then, on Thursday, Feb. 19, Professors Sangeeta Bhatia and Angela Belcher talk with Herwick about their research to improve diagnostics for ovarian cancer. We learn that ovarian cancer starts in the fallopian tubes about 80 percent of the time, and how this insight points the way to a whole new approach to diagnosing and treating the disease.
MIT News · Curiosity Desk Preview (Source: GBH)
iPhone Lockdown Mode Protects Washington Post Reporter
404Media is reporting that the FBI could not access a reporter’s iPhone because it had Lockdown Mode enabled:
The court record shows what devices and data the FBI was able to ultimately access, and which devices it could not, after raiding the home of the reporter, Hannah Natanson, in January as part of an investigation into leaks of classified information. It also provides rare insight into the apparent effectiveness of Lockdown Mode, or at least how effective it might be before the FBI may try other techniques to access the device.
“Because the iPhone was in Lockdown mode, CART could not extract that device,” the court record reads, referring to the FBI’s Computer Analysis Response Team, a unit focused on performing forensic analyses of seized devices. The document is written by the government, and is opposing the return of Natanson’s devices...
Here’s what could happen when the endangerment finding dies
Equinor CEO: Energy investments becoming ‘politicalized and polarized’
Swedish youth sue to force government to act on climate change
State Farm seeks to block prosecutor access to internal records
Prominent environmental groups revive ‘superbill’ priority list
EU to soften emissions curbs on companies in flagship market
As winter comes, a river in Bosnia chokes in tons of waste each year
Fund managers saw historic withdrawals from ESG labels last year
I’m walking here! A new model maps foot traffic in New York City
Early in the 1969 film “Midnight Cowboy,” Dustin Hoffman, playing the character of Ratso Rizzo, crosses a Manhattan street and angrily bangs on the hood of an encroaching taxi. Hoffman’s line — “I’m walking here!” — has since been repeated by thousands of New Yorkers. Where cars and people mix, tensions rise.
And yet, governments and planners across the U.S. haven’t thoroughly tracked where it is that cars and people mix. Officials have long measured vehicle traffic closely while largely ignoring pedestrian traffic. Now, an MIT research group has assembled a routable dataset of sidewalks, crosswalks, and footpaths for all of New York City — a massive mapping project and the first complete model of pedestrian activity in any U.S. city.
The model could help planners decide where to make pedestrian infrastructure and public space investments, and illuminate how development decisions could affect non-motorized travel in the city. The study also helps pinpoint locations throughout the city where there are both lots of pedestrians and high pedestrian hazards, such as traffic crashes, and where streets or intersections are most in need of upgrades.
“We now have a first view of foot traffic all over New York City and can check planning decisions against it,” says Andres Sevtsuk, an associate professor in MIT’s Department of Urban Studies and Planning (DUSP), who led the study. “New York has very high densities of foot traffic outside of its most well-known areas.”
Indeed, one upshot of the model is that while Manhattan has the most foot traffic per block, the city’s other boroughs contain plenty of pedestrian-heavy stretches of sidewalk and could probably use more investment on behalf of walkers.
“Midtown Manhattan has by far the most foot traffic, but we found there is a probably unintentional Manhattan bias when it comes to policies that support pedestrian infrastructure,” Sevtsuk says. “There are a whole lot of streets in New York with very high pedestrian volumes outside of Manhattan, whether in Queens or the Bronx or Brooklyn, and we’re able to show, based on data, that a lot of these streets have foot-traffic levels similar to many parts of Manhattan.”
And, in an advance that could help cities anywhere, the model was used to quantify vehicle crashes involving pedestrians not only as raw totals, but on a per-pedestrian basis.
“A lot of cities put real investments behind keeping pedestrians safe from vehicles by prioritizing dangerous locations,” Sevtsuk says. “But that’s not only where the most crashes occur. Here we are able to calculate accidents per pedestrian, the risk people face, and that broadens the picture in terms of where the most dangerous intersections for pedestrians really are.”
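The normalization Sevtsuk describes reduces to simple arithmetic: divide crash counts by pedestrian exposure. A minimal sketch of the idea, using invented numbers rather than the study's data:

```python
# Raw crash totals favor busy areas; normalizing by pedestrian volume
# reveals where an individual walker faces the most risk. All counts
# below are hypothetical, chosen only to illustrate the contrast.

locations = {
    # name: (crashes per year, pedestrians per hour on a typical segment)
    "Times Square":     (60, 1500),
    "Highway off-ramp": (12, 40),
}

for name, (crashes, peds) in locations.items():
    risk = crashes / peds  # crashes per unit of pedestrian exposure
    print(f"{name}: {crashes} crashes/year, relative risk {risk:.3f}")
```

The off-ramp has far fewer total crashes but a much higher per-pedestrian risk, which is the broadened picture the researchers describe.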
The paper, “Spatial Distribution of Foot-traffic in New York City and Applications for Urban Planning,” is published today in Nature Cities.
The authors are Sevtsuk, the Charles and Ann Spaulding Associate Professor of Urban Science and Planning in DUSP and head of the City Design and Development Group; Rounaq Basu, an assistant professor at Georgia Tech; Liu Liu, a PhD student at the City Form Lab in DUSP; Abdulaziz Alhassan, a PhD student at MIT’s Center for Complex Engineering Systems; and Justin Kollar, a PhD student at MIT’s Leventhal Center for Advanced Urbanism in DUSP.
Walking everywhere
The current study continues work Sevtsuk and his colleagues have conducted charting and modeling pedestrian traffic around the world, from Melbourne to MIT’s Kendall Square neighborhood in Cambridge, Massachusetts. Many cities collect some pedestrian count data — but not much. And while officials usually request vehicle traffic impact assessments for new development plans, they rarely study how new developments or infrastructure proposals affect pedestrians.
However, New York City does devote part of its Department of Transportation (DOT) to pedestrian issues, and about 41 percent of trips city-wide are made on foot, compared to just 28 percent by vehicle, likely the highest such ratio in any big U.S. city. To calibrate the model, the MIT team used pedestrian counts that New York City’s DOT recorded in 2018 and 2019, covering up to 1,000 city sidewalk segments on weekdays and up to roughly 450 segments on weekends.
The researchers were able to test the model — which incorporates a wide range of factors — against New York City’s pedestrian-count data. Once calibrated, the model could expand foot-traffic estimates throughout the whole city, not just the points where pedestrian counts were observed.
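The calibration step can be illustrated with a toy computation: scores from a model are fit to observed counts on surveyed segments, and the fitted scaling then extends estimates to unsurveyed segments. The actual study's model incorporates far more structure; every number and the one-parameter fit below are invented for illustration:

```python
# Model scores (arbitrary units) and observed DOT counts on a handful
# of surveyed sidewalk segments -- all values hypothetical.
scores   = [2.0, 5.0, 9.0, 3.5]
observed = [210, 540, 980, 400]

# Least-squares scale factor through the origin: k = sum(s*o) / sum(s*s)
k = sum(s * o for s, o in zip(scores, observed)) / sum(s * s for s in scores)

# Once calibrated, the model extrapolates to segments with no counts.
unsurveyed_score = 6.2
estimate = round(k * unsurveyed_score)  # estimated pedestrians per hour
print(estimate)
```

The design point is that only a modest sample of ground-truth counts is needed to anchor citywide estimates, which is why the 2018-19 DOT counts sufficed to calibrate a model of every sidewalk segment in the city.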
The results showed that in Midtown Manhattan, there are about 1,697 pedestrians, on average, per sidewalk segment per hour during the evening peak of foot traffic, the highest in the city. The financial district in lower Manhattan comes in second, at 740 pedestrians per hour, with Greenwich Village third at 656.
Other parts of Manhattan register lower levels of foot traffic, however. Morningside Heights and East Harlem register 226 and 227 pedestrians per block per hour. And that’s similar to, or lower than, some parts of other boroughs. Brooklyn Heights has 277 pedestrians per sidewalk segment per hour; University Heights in the Bronx has 263; Borough Park in Brooklyn and the Grand Concourse in the Bronx average 236; and a slice of Queens in the Corona area averages 222. Many other spots are over 200.
The model overlays many different types of pedestrian journeys for each time period and shows that people are generally headed to work and schools in the morning, but conduct more varied types of trips in mid-day and the evening, as they seek out amenities or conduct social or recreational visits.
“Because of jobs, transit stops are the biggest generators of foot traffic in the morning peak,” Liu observes. “In the evening peak, of course people need to get home too, but patterns are much more varied, and people are not just returning from work or school. More social and recreational travel happens after work, whether it’s getting together with friends or running errands for family or family care trips, and that’s what the model detects too.”
On the safety front, the danger to pedestrians is not confined to the intersections with the most total crashes: on a per-pedestrian basis, many parts of the city turn out to be riskier than the locations with the highest raw crash counts.
“Places like Times Square and Herald Square in Manhattan may have numerous crashes, but they have very high pedestrian volumes, and it’s actually relatively safe to walk there,” Basu says. “There are other parts of the city, around highway off-ramps and heavy car-infrastructure, including the relatively low-density borough of Staten Island, which turn out to have a disproportionate number of crashes per pedestrian.”
Taking the model across the U.S.
The MIT model stands a solid chance of being applied in New York City policy and planning circles, since officials there are aware of the research and have been regularly communicating with the MIT team about it.
For his part, Sevtsuk emphasizes that, as distinct as New York City might be, the MIT model can be applied to cities and towns anywhere in the U.S. As it happens, the team is working with municipal officials in two other places at the moment. One is Los Angeles, where city officials are not only trying to upgrade pedestrian and public transit mobility for regular daily trips, but also making plans to handle an influx of visitors for the 2028 Summer Olympics.
Meanwhile the state of Maine is working with the MIT team to evaluate pedestrian movement in over 140 of its cities and towns, to better understand the kinds of upgrades and safety improvements it could make for pedestrians across the state. Sevtsuk hopes that still other places will take notice of the New York City study and recognize that the tools are in place to analyze foot traffic more broadly in U.S. cities, to address the urgent need to decarbonize cities, and to start balancing what he views as the disproportionate focus on car travel prevalent in 20th century urban planning.
“I hope this can inspire other cities to invest in modeling foot traffic and mapping pedestrian infrastructure as well,” Sevtsuk says. “Very few cities make plans for pedestrian mobility or examine rigorously how future developments will impact foot traffic. But they can. Our models serve as a test bed for making future changes.”
Some early life forms may have breathed oxygen well before it filled the atmosphere
Oxygen is a vital and constant presence on Earth today. But that hasn’t always been the case. It wasn’t until around 2.3 billion years ago that oxygen became a permanent fixture in the atmosphere, during a pivotal period known as the Great Oxidation Event (GOE), which set the evolutionary course for oxygen-breathing life as we know it today.
A new study by MIT researchers suggests some early forms of life may have evolved the ability to use oxygen hundreds of millions of years before the GOE. The findings may represent some of the earliest evidence of aerobic respiration on Earth.
In a study appearing today in the journal Palaeogeography, Palaeoclimatology, Palaeoecology, MIT geobiologists traced the evolutionary origins of a key enzyme that enables organisms to use oxygen. The enzyme is found in the vast majority of aerobic, oxygen-breathing life forms today. The team discovered that this enzyme evolved during the Mesoarchean — a geological period that predates the Great Oxidation Event by hundreds of millions of years.
The team’s results may help to explain a longstanding puzzle in Earth’s history: Why did it take so long for oxygen to build up in the atmosphere?
The very first producers of oxygen on the planet were cyanobacteria — microbes that evolved the ability to use sunlight and water to photosynthesize, releasing oxygen as a byproduct. Scientists have determined that cyanobacteria emerged around 2.9 billion years ago. The microbes, then, were presumably churning out oxygen for hundreds of millions of years before the Great Oxidation Event. So, where did all of cyanobacteria’s early oxygen go?
Scientists suspect that rocks may have drawn down a large portion of oxygen early on, through various geochemical reactions. The MIT team’s new study now suggests that biology may have also played a role.
The researchers found that some organisms may have evolved the enzyme to use oxygen hundreds of millions of years before the Great Oxidation Event. This enzyme may have enabled the organisms living near cyanobacteria to gobble up any small amounts of oxygen that the microbes produced, in turn delaying oxygen’s accumulation in the atmosphere for hundreds of millions of years.
“This does dramatically change the story of aerobic respiration,” says study co-author Fatima Husain, a postdoc in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “Our study adds to this very recently emerging story that life may have used oxygen much earlier than previously thought. It shows us how incredibly innovative life is at all periods in Earth’s history.”
The study’s other co-authors include Gregory Fournier, associate professor of geobiology at MIT, along with Haitao Shang and Stilianos Louca of the University of Oregon.
First respirers
The new study adds to a long line of work at MIT aiming to piece together oxygen’s history on Earth. This body of research has helped to pin down the timing of the Great Oxidation Event as well as the first evidence of oxygen-producing cyanobacteria. The overall understanding that has emerged is that oxygen was first produced by cyanobacteria around 2.9 billion years ago, while the Great Oxidation Event — when oxygen finally accumulated enough to persist in the atmosphere — took place much later, around 2.33 billion years ago.
For Husain and her colleagues, this apparent delay between oxygen’s first production and its eventual persistence inspired a question.
“We know that the microorganisms that produce oxygen were around well before the Great Oxidation Event,” Husain says. “So it was natural to ask, was there any life around at that time that could have been capable of using that oxygen for aerobic respiration?”
If there were in fact some life forms that were using oxygen, even in small amounts, they might have played a role in keeping oxygen from building up in the atmosphere, at least for a while.
To investigate this possibility, the MIT team looked to heme-copper oxygen reductases, a set of enzymes that are essential for aerobic respiration. The enzymes reduce oxygen to water, and they are found in the majority of aerobic, oxygen-breathing organisms today, from bacteria to humans.
“We targeted the core of this enzyme for our analyses because that’s where the reaction with oxygen is actually taking place,” Husain explains.
Tree dates
The team aimed to trace the enzyme’s evolution backward in time to see when the enzyme first emerged to enable organisms to use oxygen. They first identified the enzyme’s genetic sequence and then used an automated search tool to look for this same sequence in databases containing the genomes of millions of different species of organisms.
“The hardest part of this work was that we had too much data,” Fournier says. “This enzyme is just everywhere and is present in most modern living organisms. So we had to sample and filter the data down to a dataset that was representative of the diversity of modern life and also small enough to do computation with, which is not trivial.”
The team ultimately isolated the enzyme’s sequence from several thousand modern species and mapped these sequences onto an evolutionary tree of life, based on what scientists know about when each respective species likely evolved and branched off. They then looked through this tree for specific species whose origins could be dated independently.
If, for instance, there is a fossil record for a particular organism on the tree, that record would include an estimate of when that organism appeared on Earth. The team would use that fossil’s age to “pin” a date to that organism on the tree. In a similar way, they could place pins across the tree to effectively tighten their estimates for when in time the enzyme evolved from one species to the next.
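The "pinning" logic can be sketched as a toy computation: a fossil-dated descendant gives a lower bound on the age of every node above it, since an ancestor must be at least as old as its oldest dated descendant. All taxa and ages below are invented, and real analyses use probabilistic molecular-clock models rather than this hard bound, but the constraint logic is the same:

```python
# A tiny hypothetical tree. Each node maps to (children, fossil age in
# billions of years, or None if that lineage has no dated fossil).
tree = {
    "root":   (["cladeA", "cladeB"], None),
    "cladeA": (["taxon1", "taxon2"], None),
    "cladeB": ([], 2.5),   # fossil-dated lineage
    "taxon1": ([], 2.9),   # fossil-dated lineage
    "taxon2": ([], None),
}

def min_age(node):
    """Oldest fossil age among this node's descendants: a lower bound
    on how long ago the node's lineage must already have existed."""
    children, age = tree[node]
    ages = [age] if age is not None else []
    ages += [min_age(c) for c in children]
    ages = [a for a in ages if a is not None]
    return max(ages) if ages else None

print(min_age("root"))  # the enzyme's origin is at least this old
```

Propagating many such pins across a tree of thousands of species is what lets the estimates tighten enough to place the enzyme's origin in a specific geological era.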
In the end, the researchers were able to trace the enzyme as far back as the Mesoarchean — a geological era that lasted from 3.2 to 2.8 billion years ago. It’s around this time that the team suspects the enzyme — and organisms’ ability to use oxygen — first emerged. This period predates the Great Oxidation Event by several hundred million years.
The new findings suggest that, shortly after cyanobacteria evolved the ability to produce oxygen, other living things evolved the enzyme to use that oxygen. Any such organism that happened to live near cyanobacteria would have been able to quickly take up the oxygen that the bacteria churned out. These early aerobic organisms may have then played some role in preventing oxygen from escaping to the atmosphere, delaying its accumulation for hundreds of millions of years.
“Considered all together, MIT research has filled in the gaps in our knowledge of how Earth’s oxygenation proceeded,” Husain says. “The puzzle pieces are fitting together and really underscore how life was able to diversify and live in this new, oxygenated world.”
This research was supported, in part, by the Research Corporation for Science Advancement Scialog program.
Expert agreement on key elements of transformational adaptation to climate risks
Nature Climate Change, Published online: 06 February 2026; doi:10.1038/s41558-025-02548-y
Despite the growing literature and widespread interest in transformational adaptation, its definition remains contested. The results of a global expert survey reveal broad agreement on 13 key elements that should be included in defining transformational adaptation.
Yes to the “ICE Out of Our Faces Act”
Immigration and Customs Enforcement (ICE) and Customs and Border Protection (CBP) have descended into utter lawlessness, most recently in Minnesota. The violence is shocking. So are the intrusions on digital rights and civil liberties. For example, immigration agents are routinely scanning faces of people they suspect of unlawful presence in the country – 100,000 times, according to the Wall Street Journal. The technology has already misidentified at least one person, according to 404 Media.
Face recognition technology is so dangerous that government should not use it at all—least of all these out-of-control immigration agencies.
To combat these abuses, EFF is proud to support the “ICE Out of Our Faces Act.” This new federal bill would ban ICE and CBP agents, and some local police working with them, from acquiring or using biometric surveillance systems, including face recognition technology, or information derived from such systems by another entity. This bill would be enforceable, among other ways, by a strong private right of action.
The bill’s lead author is Senator Ed Markey. We thank him for his longstanding leadership on this issue, including introducing similar legislation that would ban all federal law enforcement agencies, and some federally funded state agencies, from using biometric surveillance systems (a bill that EFF also supported). The new “ICE Out of Our Faces Act” is also sponsored by Senator Merkley, Senator Wyden, and Representative Jayapal.
As EFF explains in the new bill’s announcement:
It’s past time for the federal government to end its use of this abusive surveillance technology. A great place to start is its use for immigration enforcement, given ICE and CBP’s utter disdain for the law. Face surveillance in the hands of the government is a fundamentally harmful technology, even under strict regulations or if the technology was 100% accurate. We thank the authors of this bill for their leadership in taking steps to end this use of this dangerous and invasive technology.
You can read the bill here, and the bill’s announcement here.
T. Alan Hatton receives Bernard M. Gordon Prize for Innovation in Engineering and Technology Education
The National Academy of Engineering (NAE) has announced T. Alan Hatton, MIT’s Ralph Landau Professor of Chemical Engineering Practice, Post-Tenure, as the recipient of the 2026 Bernard M. Gordon Prize for Innovation in Engineering and Technology Education, recognizing his transformative leadership of the Institute’s David H. Koch School of Chemical Engineering Practice. The award citation highlights his efforts to advance “an immersive, industry-integrated educational model that has produced thousands of engineering leaders, strengthening U.S. technological competitiveness and workforce readiness.”
The Gordon Prize recognizes “new modalities and experiments in education that develop effective engineering leaders.” The prize is awarded annually and carries a $500,000 cash award, half granted to the recipient and the remainder granted to their institution to support the recognized innovation.
“As engineering challenges become more complex and interdisciplinary, education must evolve alongside them,” says Paula Hammond, Institute Professor and dean of the School of Engineering. “Under Alan’s leadership, the Practice School has demonstrated how rigorous academics, real industrial problems, and student responsibility can be woven together into an educational experience that is both powerful and adaptable. His work offers a compelling blueprint for the future of engineering education.”
Hatton served as director of the Practice School for 36 years, from 1989 until his retirement in 2025. When he assumed the role, the program worked with a limited number of host companies, largely within traditional chemical industries. Over time, Hatton reshaped the program’s scope and structure, enabling it to operate across continents and sectors to offer students exposure to diverse technologies, organizational cultures, and geographic settings.
“The MIT Chemical Engineering Practice School represents a level of experiential learning that few programs anywhere can match,” says Kristala L. J. Prather, the Arthur D. Little Professor and head of the Department of Chemical Engineering. “This recognition reflects not only Alan’s extraordinary personal contributions, but also the enduring value of a program that prepares students to deliver impact from their very first day as engineers.”
Central to Hatton’s approach was a deliberate strategy of adaptability. He introduced a model in which new companies are recruited regularly as Practice School hosts, broadening participation while keeping the program aligned with emerging technologies and industry needs. He also strengthened on-campus preparation by launching an intensive project management course during MIT’s Independent Activities Period (IAP) — training that has since become foundational for students entering complex, team-based industrial environments.
This forward-looking vision is shared by current Practice School leadership. Fikile Brushett, Ralph Landau Professor of Chemical Engineering Practice and director of the program, emphasizes that Hatton’s legacy is not a static one. “Alan consistently positioned the Practice School to respond to change — whether in technology, industry expectations, or educational practice,” Brushett says. “The Gordon Prize provides an opportunity to further evolve the program while staying true to its core principles of immersion, rigor, and partnership.”
In recognition of Hatton’s service, the department established the T. Alan Hatton Fund in fall 2025 with support from Practice School alumni. The fund is dedicated to helping launch new Practice School stations, lowering barriers for emerging partners and sustaining the program’s ability to engage with a broad and diverse set of industries.
Learning that delivers value on both sides
The Practice School’s impact extends well beyond the classroom. Student teams are embedded directly within host organizations — often in manufacturing plants or research and development centers — where they tackle open-ended technical problems under real operational constraints. Sponsors routinely cite tangible outcomes from these projects, including improved processes, reduced costs, and new technical directions informed by MIT-level analysis.
For students, the experience offers something difficult to replicate in traditional academic settings: sustained responsibility for complex work, direct interaction with industry professionals, and repeated opportunities to present, defend, and refine their ideas. The result is a training environment that closely mirrors professional engineering practice, while retaining the reflective depth of an academic program.
A program shaped by history — and by change
The Practice School was established in 1916 to complement classroom instruction with hands-on industrial experience, an idea that was unconventional at the time. More than a century later, the program has not only endured but continually reinvented itself, expanding far beyond its early focus on regional chemical manufacturing.
Today, Practice School students work with companies around the world in fields that include pharmaceuticals, food production, energy, advanced materials, software, and finance. The program remains a defining feature of graduate education in MIT’s Department of Chemical Engineering, linking research strengths with the practical demands of industry.
Participation in the Practice School is a required component of the department’s Master of Science in Chemical Engineering Practice (MSCEP) and PhD/ScD Chemical Engineering Practice (CEP) programs. After completing coursework, students attend two off-campus stations, spending two months at each site. Teams of two or three students work on month-long projects, culminating in formal presentations and written reports delivered to host organizations. Recent stations have included placements with Evonik in Germany, AstraZeneca in Maryland, EGA in the United Arab Emirates, AspenTech in Massachusetts, and Shell Technology Center and Dimensional Energy in Texas.
“I’m deeply honored by this recognition,” Hatton says. “The Practice School has always been about learning through responsibility — placing students in situations where their work matters. This award will help MIT build on that foundation and explore ways to extend the model so it can serve even more students and partners in the years ahead.”
Hatton obtained his BS and MS degrees in chemical engineering at the University of Natal in Durban, South Africa, before spending three years as a researcher at the Council for Scientific and Industrial Research in Pretoria. He later earned his PhD at the University of Wisconsin at Madison and joined the MIT faculty in 1982 as an assistant professor.
Over the course of his career at MIT, Hatton helped extend the Practice School model beyond campus through his involvement in the Singapore–MIT Alliance for Research and Technology and the Cambridge–MIT Institute, contributing to the development of practice-based engineering education in international settings. He also served as co-director of the MIT Energy Initiative’s Low-Carbon Energy Center focused on carbon capture, utilization, and storage.
Hatton has long been recognized for his commitment to education and service. From 1983 to 1986, he served as a junior faculty housemaster (now known as an associate head of house) in MacGregor House and received MIT’s Everett Moore Baker Teaching Award in 1983. His professional honors include being named a founding fellow of the American Institute of Medical and Biological Engineering and an honorary professorial fellow at the University of Melbourne in Australia.
In addition to his educational leadership, Hatton has made substantial contributions to the broader engineering community, chairing multiple national and international conferences in the areas of colloids and separation processes and delivering numerous plenary, keynote, and invited lectures worldwide.
Hatton will formally receive the Bernard M. Gordon Prize at a ceremony hosted by the National Academy of Engineering at MIT on April 30.
A satellite language network in the brain
The ability to use language to communicate is one of the things that makes us human. At MIT’s McGovern Institute for Brain Research, scientists led by Evelina Fedorenko have defined an entire network of areas within the brain dedicated to this ability, which work together when we speak, listen, read, write, or sign.
Much of the language network lies within the brain’s neocortex, where many of our most sophisticated cognitive functions are carried out. Now, Fedorenko’s lab, which is part of MIT's Department of Brain and Cognitive Sciences, has identified language-processing regions within the cerebellum, extending the language network to a part of the brain better known for helping to coordinate the body’s movements. Their findings are reported Jan. 21 in the journal Neuron.
“It’s like there’s this region in the cerebellum that we’ve been forgetting about for a long time,” says Colton Casto, a graduate student at Harvard and MIT who works in Fedorenko’s lab. “If you’re a language researcher, you should be paying attention to the cerebellum.”
Imaging the language network
There have been hints that the cerebellum makes important contributions to language. Some functional imaging studies detected activity in this area during language use, and people who suffer damage to the cerebellum sometimes experience language impairments. But no one had been able to pin down exactly which parts of the cerebellum were involved, or tease out their roles in language processing.
To get some answers, Fedorenko’s lab took a systematic approach, using methods they have used to map the language network in the neocortex. For 15 years, the lab has captured functional brain imaging data as volunteers carried out various tasks inside an MRI scanner. By monitoring brain activity as people engaged in different kinds of language tasks, like reading sentences or listening to spoken words, as well as non-linguistic tasks, like listening to noise or memorizing spatial patterns, the team has been able to identify parts of the brain that are exclusively dedicated to language processing.
Their work shows that everyone’s language network uses the same neocortical regions. The precise anatomical location of these regions varies, however, so to study the language network in any individual, Fedorenko and her team must map that person’s network inside an MRI scanner using their language-localizer tasks.
Satellite language network
While the Fedorenko lab has largely focused on how the neocortex contributes to language processing, their brain scans also capture activity in the cerebellum. So Casto revisited those scans, analyzing cerebellar activity from more than 800 people to look for regions involved in language processing. Fedorenko points out that teasing out the individual anatomy of the language network turned out to be particularly vital in the cerebellum, where neurons are densely packed and areas with different functional specializations sit very close to one another. Ultimately, Casto was able to identify four cerebellar areas that were consistently engaged during language use.
Three of these regions were clearly involved in language use, but also reliably became engaged during certain kinds of non-linguistic tasks. Casto says this was a surprise, because all the core language areas in the neocortex are dedicated exclusively to language processing. The researchers speculate that the cerebellum may be integrating information from different parts of the cortex — a function that could be important for many cognitive tasks.
“We’ve found that language is distinct from many, many other things — but at some point, complex cognition requires everything to work together,” Fedorenko says. “How do these different kinds of information get connected? Maybe parts of the cerebellum serve that function.”
The researchers also found a spot in the right posterior cerebellum with activity patterns that more closely echoed those of the language network in the neocortex. This region stayed silent during non-linguistic tasks, but became active during language use. For all of the linguistic activities that Casto analyzed, this region exhibited patterns of activity that were very similar to what the lab has seen in neocortical components of the language network. “Its contribution to language seems pretty similar,” Casto says. The team describes this area as a “cerebellar satellite” of the language network.
Still, the researchers think it’s unlikely that neurons in the cerebellum, which are organized very differently than those in the neocortex, replicate the precise function of other parts of the language network. Fedorenko’s team plans to explore the function of this satellite region more deeply, investigating whether it may participate in different kinds of tasks.
The researchers are also exploring the possibility that the cerebellum is particularly important for language learning — playing an outsized role during development, or when people learn languages later in life.
Fedorenko says the discovery may also have implications for treating language impairments caused when an injury or disease damages the brain’s neocortical language network. “This area may provide a very interesting potential target to help recovery from aphasia,” Fedorenko says.
Currently, researchers are exploring the possibility that non-invasively stimulating language-associated parts of the brain might promote language recovery. “This right cerebellar region may be just the right thing to potentially stimulate to up-regulate some of that function that’s lost,” Fedorenko says.
Helping AI agents search to get the best results out of large language models
Whether you’re a scientist brainstorming research ideas or a CEO hoping to automate a task in human resources or finance, you’ll find that artificial intelligence tools are becoming the assistants you didn’t know you needed. In particular, many professionals are tapping into the talents of semi-autonomous software systems called AI agents, which can call on AI at specific points to solve problems and complete tasks.
AI agents are particularly effective when they use large language models (LLMs) because those systems are powerful, efficient, and adaptable. One way to program such technology is by describing in code what you want your system to do (the “workflow”), including when it should use an LLM. If you were a software company trying to revamp your old codebase to use a more modern programming language for better optimizations and safety, you might build a system that uses an LLM to translate the codebase one file at a time, testing each file as you go.
But what happens when LLMs make mistakes? You’ll want the agent to backtrack to make another attempt, incorporating lessons it learned from previous mistakes. Coding this up can take as much effort as implementing the original agent; if your system for translating a codebase contained thousands of lines of code, then you’d be making thousands of lines of code changes or additions to support the logic for backtracking when LLMs make mistakes.
To save programmers time and effort, researchers with MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Asari AI have developed a framework called “EnCompass.”
With EnCompass, you no longer have to make these changes yourself. Instead, when EnCompass runs your program, it automatically backtracks if LLMs make mistakes. EnCompass can also make clones of the program runtime to make multiple attempts in parallel in search of the best solution. In full generality, EnCompass searches over the different possible paths your agent could take as a result of the different possible outputs of all the LLM calls, looking for the path where the LLM finds the best solution.
Then, all you have to do is annotate the locations where you may want to backtrack or clone the program runtime, and record any information that may be useful to the strategy used to search over your agent’s possible execution paths (the search strategy). You can then specify the search strategy separately: you could either use one that EnCompass provides out of the box or, if desired, implement your own custom strategy.
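The mechanics can be sketched in plain Python. This is illustrative only: the names (`choose`, `Dead`, `dfs_search`) and the structure are assumptions for the sketch, not EnCompass's actual API, and the sketch re-runs the workflow from the start on each attempt rather than cloning the program runtime as the framework does. The key idea it does capture is the separation: the workflow is written once and marks its branchpoints, while a separate, swappable driver handles all the backtracking.

```python
class Dead(Exception):
    """Raised by the workflow when the current path cannot succeed."""

class NeedChoice(Exception):
    """Signals an unexplored branchpoint with `n` candidate outputs."""
    def __init__(self, n):
        self.n = n

def attempt(wf, prefix):
    """Run wf once, answering each branchpoint from `prefix` (a tuple of
    candidate indices); raise NeedChoice when a new branchpoint is reached."""
    pos = 0
    def choose(candidates):
        nonlocal pos
        if pos == len(prefix):
            raise NeedChoice(len(candidates))
        picked = candidates[prefix[pos]]
        pos += 1
        return picked
    return wf(choose)

def dfs_search(wf):
    """One swappable strategy: depth-first search with backtracking."""
    stack = [()]                         # choice-index prefixes to try
    while stack:
        prefix = stack.pop()
        try:
            return attempt(wf, prefix)   # workflow finished: success
        except Dead:
            continue                     # dead end: backtrack
        except NeedChoice as nc:
            for i in range(nc.n):        # expand the new branchpoint
                stack.append(prefix + (i,))
    return None

def workflow(choose):
    """Agent workflow written once; `choose` marks each branchpoint.
    The candidate lists stand in for possible LLM translation outputs."""
    out = []
    for name in ["a.java", "b.java"]:
        t = choose([f"{name}: broken", f"{name}: ok"])
        if "broken" in t:   # e.g. the translated file fails its tests
            raise Dead()    # declare a dead end; the driver backtracks
        out.append(t)
    return out

print(dfs_search(workflow))  # ['a.java: ok', 'b.java: ok']
```

Because `workflow` contains no retry logic of its own, swapping `dfs_search` for a different driver changes the search behavior without touching the agent's code, which is the separation the researchers describe.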
“With EnCompass, we’ve separated the search strategy from the underlying workflow of an AI agent,” says lead author Zhening Li ’25, MEng ’25, who is an MIT electrical engineering and computer science (EECS) PhD student, CSAIL researcher, and research consultant at Asari AI. “Our framework lets programmers easily experiment with different search strategies to find the one that makes the AI agent perform the best.”
EnCompass was used for agents implemented as Python programs that call LLMs, where it demonstrated noticeable code savings: it reduced the coding effort of implementing search by up to 80 percent across agents, such as one that translates code repositories and one that discovers transformation rules of digital grids. In the future, EnCompass could enable agents to tackle large-scale tasks, including managing massive code libraries, designing and carrying out science experiments, and creating blueprints for rockets and other hardware.
Branching out
When programming your agent, you mark particular operations — such as calls to an LLM — where results may vary. These annotations are called “branchpoints.” If you imagine your agent program as generating a single plot line of a story, then adding branchpoints turns the story into a choose-your-own-adventure story game, where branchpoints are locations where the plot branches into multiple future plot lines.
You can then specify the strategy that EnCompass uses to navigate that story game, in search of the best possible ending to the story. This can include launching parallel threads of execution or backtracking to a previous branchpoint when you get stuck in a dead end.
Users can also plug-and-play a few common search strategies provided by EnCompass out of the box, or define their own custom strategy. For example, you could opt for Monte Carlo tree search, which builds a search tree by balancing exploration and exploitation, or beam search, which keeps the best few outputs from every step. EnCompass makes it easy to experiment with different approaches to find the best strategy to maximize the likelihood of successfully completing your task.
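As a concrete illustration of one such strategy, here is a generic beam search. This is a sketch of the textbook algorithm, not EnCompass's implementation: at each step, every surviving partial solution is expanded into several candidates (standing in for sampled LLM outputs), and only the `width` highest-scoring ones are kept.

```python
def beam_search(start, expand, score, steps, width=2):
    """Keep the `width` best partial solutions after each expansion step."""
    beam = [start]
    for _ in range(steps):
        # Expand every survivor into its candidate continuations.
        candidates = [c for s in beam for c in expand(s)]
        # Prune: retain only the top `width` by score.
        beam = sorted(candidates, key=score, reverse=True)[:width]
    return max(beam, key=score)

# Toy example: build a 3-digit number one digit at a time, scoring by value.
best = beam_search(
    start=0,
    expand=lambda n: [n * 10 + d for d in (1, 5, 9)],  # candidate next digits
    score=lambda n: n,
    steps=3,
)
print(best)  # 999
```

With a width of 2, the search never holds more than two partial numbers at once, yet it still reaches the best ending here; in an agent, the scoring function would instead measure how promising each partial LLM output looks.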
The coding efficiency of EnCompass
So just how code-efficient is EnCompass for adding search to agent programs? According to the researchers’ findings, the framework drastically cut down how much code programmers needed to add to their agent programs to implement search, helping them experiment with different strategies to find the one that performs best.
For example, the researchers applied EnCompass to an agent that translates a repository of code from the Java programming language, which is commonly used to program apps and enterprise software, to Python. They found that implementing search with EnCompass — mainly involving adding branchpoint annotations and annotations that record how well each step did — required 348 fewer lines of code (about 82 percent) than implementing it by hand. They also demonstrated how EnCompass enabled them to easily try out different search strategies, identifying the best strategy to be a two-level beam search algorithm, achieving an accuracy boost of 15 to 40 percent across five different repositories at a search budget of 16 times the LLM calls made by the agent without search.
“As LLMs become a more integral part of everyday software, it becomes more important to understand how to efficiently build software that leverages their strengths and works around their limitations,” says co-author Armando Solar-Lezama, who is an MIT professor of EECS and CSAIL principal investigator. “EnCompass is an important step in that direction.”
The researchers add that EnCompass targets agents where a program specifies the steps of the high-level workflow; the current iteration of their framework is less applicable to agents that are entirely controlled by an LLM. “In those agents, instead of having a program that specifies the steps and then using an LLM to carry out those steps, the LLM itself decides everything,” says Li. “There is no underlying programmatic workflow, so you can execute inference-time search on whatever the LLM invents on the fly. In this case, there’s less need for a tool like EnCompass that modifies how a program executes with search and backtracking.”
Li and his colleagues plan to extend EnCompass to more general search frameworks for AI agents. They also plan to test their system on more complex tasks to refine it for real-world uses, including at companies. What’s more, they’re evaluating how well EnCompass helps agents work with humans on tasks like brainstorming hardware designs or translating much larger code libraries. For now, EnCompass is a powerful building block that enables humans to tinker with AI agents more easily, improving their performance.
“EnCompass arrives at a timely moment, as AI-driven agents and search-based techniques are beginning to reshape workflows in software engineering,” says Carnegie Mellon University Professor Yiming Yang, who wasn’t involved in the research. “By cleanly separating an agent’s programming logic from its inference-time search strategy, the framework offers a principled way to explore how structured search can enhance code generation, translation, and analysis. This abstraction provides a solid foundation for more systematic and reliable search-driven approaches to software development.”
Li and Solar-Lezama wrote the paper with two Asari AI researchers: Caltech Professor Yisong Yue, an advisor at the company; and senior author Stephan Zheng, who is the founder and CEO. Their work was supported by Asari AI.
The team’s work was presented at the Conference on Neural Information Processing Systems (NeurIPS) in December.
