MIT Latest News

MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.

MIT engineers develop a magnetic transistor for more energy-efficient electronics

Wed, 09/23/2026 - 10:32am

Transistors, the building blocks of modern electronics, are typically made of silicon. Because it’s a semiconductor, this material can control the flow of electricity in a circuit. But silicon has fundamental physical limits that restrict how compact and energy-efficient a transistor can be.

MIT researchers have now replaced silicon with a magnetic semiconductor, creating a magnetic transistor that could enable smaller, faster, and more energy-efficient circuits. The material’s magnetism strongly influences its electronic behavior, leading to more efficient control of the flow of electricity. 

The team used a novel magnetic material and an optimization process that reduces the material’s defects, which boosts the transistor’s performance.

The material’s unique magnetic properties also allow for transistors with built-in memory, which would simplify circuit design and unlock new applications for high-performance electronics.

“People have known about magnets for thousands of years, but there are very limited ways to incorporate magnetism into electronics. We have shown a new way to efficiently utilize magnetism that opens up a lot of possibilities for future applications and research,” says Chung-Tao Chou, an MIT graduate student in the departments of Electrical Engineering and Computer Science (EECS) and Physics, and co-lead author of a paper on this advance.

Chou is joined on the paper by co-lead author Eugene Park, a graduate student in the Department of Materials Science and Engineering (DMSE); Julian Klein, a DMSE research scientist; Josep Ingla-Aynes, a postdoc in the MIT Plasma Science and Fusion Center; Jagadeesh S. Moodera, a senior research scientist in the Department of Physics; and senior authors Frances Ross, TDK Professor in DMSE; and Luqiao Liu, an associate professor in EECS, and a member of the Research Laboratory of Electronics; as well as others at the University of Chemistry and Technology in Prague. The paper appears today in Physical Review Letters.

Overcoming the limits

In an electronic device, silicon semiconductor transistors act like tiny light switches that turn a circuit on and off, or amplify weak signals in a communication system. They do this using a small input voltage.

But a fundamental physical limit of silicon semiconductors prevents a transistor from operating below a certain voltage, which hinders its energy efficiency.

To make more efficient electronics, researchers have spent decades working toward magnetic transistors that utilize electron spin to control the flow of electricity. Electron spin is a fundamental property that enables electrons to behave like tiny magnets.

So far, scientists have mostly been limited to using certain magnetic materials. These lack the favorable electronic properties of semiconductors, constraining device performance.

“In this work, we combine magnetism and semiconductor physics to realize useful spintronic devices,” Liu says.

The researchers replace the silicon in the surface layer of a transistor with chromium sulfur bromide, a two-dimensional material that acts as a magnetic semiconductor.

Due to the material’s structure, researchers can switch between two magnetic states very cleanly. This makes it ideal for use in a transistor that smoothly switches between “on” and “off.”

“One of the biggest challenges we faced was finding the right material. We tried many other materials that didn’t work,” Chou says.

They discovered that changing these magnetic states modifies the material’s electronic properties, enabling low-energy operation. And unlike many other 2D materials, chromium sulfur bromide remains stable in air.

To make a transistor, the researchers pattern electrodes onto a silicon substrate, then carefully align and transfer the 2D material on top. They use tape to pick up a tiny piece of material, only a few tens of nanometers thick, and place it onto the substrate.

“A lot of researchers will use solvents or glue to do the transfer, but transistors require a very clean surface. We eliminate all those risks by simplifying this step,” Chou says.

Leveraging magnetism

This lack of contamination enables their device to outperform existing magnetic transistors. Most others can only create a weak magnetic effect, changing the flow of current by a few percent or less. Their new transistor can switch or amplify the electric current by a factor of 10.

They use an external magnetic field to change the magnetic state of the material, switching the transistor using significantly less energy than would usually be required.

The material also allows them to control the magnetic states with electric current. This is important because engineers cannot apply magnetic fields to individual transistors in an electronic device. They need to control each one electrically.

The material’s magnetic properties could also enable transistors with built-in memory, simplifying the design of logic or memory circuits.

A typical memory device has a magnetic cell to store information and a transistor to read it out. Their method can combine both into one magnetic transistor.

“Now, not only are transistors turning on and off, they are also remembering information. And because we can switch the transistor with greater magnitude, the signal is much stronger so we can read out the information faster, and in a much more reliable way,” Liu says.

Building on this demonstration, the researchers plan to further study the use of electrical current to control the device. They are also working to make their method scalable so they can fabricate arrays of transistors.

This research was supported, in part, by the Semiconductor Research Corporation, the U.S. Defense Advanced Research Projects Agency (DARPA), the U.S. National Science Foundation (NSF), the U.S. Department of Energy, the U.S. Army Research Office, and the Czech Ministry of Education, Youth, and Sports. The work was partially carried out at the MIT.nano facilities.

2026 MIT Sloan Sports Analytics Conference shows why data make a difference

Tue, 03/10/2026 - 5:30pm

With time dwindling in the Olympic women’s ice hockey gold medal game on Feb. 19, players for Team USA and Team Canada lined up for a key faceoff in Canada’s end. Canada had a 1-0 lead. USA had 2:23 left, and an ace up their sleeve: analytics.

USA Coach John Wroblewski pulled the goalie to gain a player advantage, and had forward Alex Carpenter take the faceoff. Statistics show that Carpenter is not only very good at winning faceoffs; she also wins a lot of them cleanly. That allows her team to quickly regain possession, without too many teammates nearby. Knowing that, Wroblewski directed the USA players to spread out, largely away from the faceoff circle, in position to circulate the puck as soon as they got it back.

Carpenter won the faceoff, and Team USA quickly started a passing move. Laila Edwards soon launched a shot that longtime star Hilary Knight deflected in for the crucial, game-tying goal with 2:04 left. Team USA then won in overtime. And data-driven decision-making had also won big; indeed, it helped change the Olympics.

“What it does for a coach, the other thing these analytics do, is … it allows you to move forward with this confidence level,” Wroblewski said on Saturday at the 20th annual MIT Sloan Sports Analytics Conference (SSAC), during a hockey analytics panel where he detailed his decision-making for that faceoff, and in the gold medal game generally.

Using the data, he added, lets coaches “limit the emotion” that might cloud their in-game decisions.

“By the time you get to that decision, you’re then allowed the freedom to step away from the decision, to allow the players to go earn their medal,” Wroblewski added.

You don’t usually find coaches divulging their tactical secrets just three weeks after a big game has been played. But then, this is the MIT Sloan conference, a trailblazing forum that has helped analytics ideas spread throughout sports. Coaches, players, and analysts know any data-driven discussion will find an interested audience.

“Analytics was massive for us going into the gold medal game,” Wroblewski said.

20 years on: From classrooms to convention halls

The 20th edition of SSAC was a strong one, with many substantive panel discussions and interviews; the annual research paper, hackathon, and case study contests; mentorship events and informal networking opportunities; and more. Over 2,500 people attended the two-day event, held at Boston’s Menino Conference and Exhibition Center (MCEC). The conference was founded in 2007 by Daryl Morey, now president of basketball operations for the NBA Philadelphia 76ers, and Jessica Gelman, now CEO of the Kraft Analytics Group.

The first three editions of the conference were held on the MIT campus. In 2010, it first moved to the MCEC (one of two regular convention-center sites it uses), and starting in 2011, the conference became a two-day event.

Today people attend for the panels, the career opportunities, and, in some cases, to make news. NBA Commissioner Adam Silver was on hand this year, engaging in an on-stage conversation with former WNBA great Sue Bird, publicly addressing some of the key issues facing his league, and drawing wide media coverage.

First, though, Silver reflected about attending the second edition of the conference on the MIT campus in 2008, when he was deputy commissioner.

“It was literally a classroom of 20 people we were talking to,” Silver recalled. “I think it was the beginning of the moment when people were taking sports as a discipline more seriously. … I give Jessica and Daryl a lot of credit [for that].”

Addressing tanking and gambling

A core part of Silver’s comments focused on two big issues in pro basketball: tanking and gambling. About eight NBA teams appear to be tanking this season, that is, losing games in order to increase their chances of getting a high draft pick.

“We are going to make substantial changes for next year,” Silver said, although he also added: “I am an incrementalist. I think we’ve got to be a little bit careful about how huge a change we make at once. I’m not ruling anything out. But I am paying attention to that.”

To be sure, tanking has long been a part of professional basketball, as Bird noted during the conversation.

“We did it in Seattle, to be honest,” Bird said. “Breanna Stewart was coming out of college. We were in a ‘rebuild.’”

Still, in this NBA season, tanking has become an epidemic, in “a little bit of a perfect storm,” as Silver put it on Friday. And almost every proposed solution seems to have drawbacks. Perhaps the simplest cure for tanking, actually, would be robust analytical studies showing that it is not a very effective team-building strategy. If that is what the numbers reveal, of course.

Meanwhile, multiple arrests of NBA players and coaches at the beginning of the season show further that sports gambling continues to present challenges to professional sports leagues.

“I personally think there should be more regulation now, not less,” Silver said on Friday, suggesting that federal rules would simplify things in the U.S., where 39 states allow sports gambling to some extent. He also said the NBA can continue to work on monitoring data to protect against gambling scandals.

“I think there are some large-platform companies that are looking at a business opportunity to come in and, in a much more sophisticated way, work as a detection service with the league,” Silver said.

Through it all, Silver said, the NBA will continue to be a data-driven operation. Have you watched a game with a long instant-replay review, and gotten a little impatient? Still, did you keep watching? Almost everyone does.

“For years people would tell us, ‘Don’t use instant replay, because you’ll turn fans off,’” Silver said. However, he added, “The data suggests, in terms of ratings and what servers tell us, you almost never lose a fan when you’re going to replay. Because they want to see the replay and they want to see what happened.”

The minnows got big

Sports analytics took root in baseball, with its discrete pitcher-hitter actions. Legendary MLB general manager Branch Rickey employed a statistician for the great Brooklyn Dodgers of the 1950s; the famous manager Earl Weaver thought analytically with the Baltimore Orioles in the 1970s. Baseball analyst Bill James made sports analytics a viable pursuit with his annual “Baseball Abstract” bestsellers in the 1980s, and Michael Lewis’ “Moneyball” popularized it.

But data can be applied to all sports — and sometimes is most valuable when only some teams are interested in it. Take soccer. In the English Premier League, about three clubs have been heavily oriented around analytics over the last decade: Liverpool FC, Brighton FC, and Brentford FC. That has helped Liverpool win multiple titles, while Brighton and Brentford, smaller clubs, have startled many with their success.

Saturday at SSAC, Brentford’s majority owner Matthew Benham made one of his most visible public appearances, in an onstage interview with podcaster Roger Bennett. Benham first made money wagering on soccer, then invested in Brentford, his childhood club.

“The information we used in the early days was really, really rudimentary,” Benham said. In his account, his success building an analytics-based club has only partly been about the numbers.

“A lot of the success has just been in running things efficiently,” Benham said. He prefers to have management discussions that are an “exchange of views, rather than debate,” since the latter implies an interaction with a clear winner and loser. Instead, he finds it more important to compile independent-minded views from his executives.

Brentford also uses “a combination of old-style scouting and data” for its player acquisition decisions, Benham said. Not every decision works. Brentford could have signed current Arsenal FC star Eberechi Eze for a mere £4 million in 2019, and passed; Crystal Palace FC acquired Eze, then realized a windfall when Arsenal purchased his services.

Still, pressed by Bennett to specify a little more about his analytical thinking, Benham implied that strikers are valuable not only for their finishing skills, but for consistently getting open for shots on goal. Fans tend to focus too much on a player’s misses, rather than how many chances are created by their off-ball work.

“Getting in position is way, way more informative than finishing,” Benham said.

A similar insight seems to have guided Liverpool’s thinking. As it happens, a Friday panel at SSAC featured Ian Graham, who ran Liverpool’s analytics operations from 2012 to 2023, and weighed in on a number of subjects. Among other things, Graham noted, teams are too cautious when tied late in a match; soccer grants three points for a win, one for a draw, and zero for a loss, so from a tied position, the reward for winning is twice as great as the penalty for losing.

“Teams don’t go for it enough,” Graham said. “Teams think a draw is an okay result.”
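Graham’s arithmetic is easy to check with a quick sketch. The win/draw/loss probabilities below are hypothetical, chosen only to illustrate soccer’s 3–1–0 scoring, not taken from his analysis:

```python
# Sketch: expected league points for "go for the win" vs. "settle for a
# draw" late in a tied match. Probabilities are illustrative assumptions.

def expected_points(p_win, p_draw, p_loss):
    """Soccer awards 3 points for a win, 1 for a draw, 0 for a loss."""
    assert abs(p_win + p_draw + p_loss - 1.0) < 1e-9
    return 3 * p_win + 1 * p_draw + 0 * p_loss

# Settling: the team plays safe and a draw is very likely.
settle = expected_points(p_win=0.05, p_draw=0.90, p_loss=0.05)

# Attacking: more wins, but symmetrically more losses.
attack = expected_points(p_win=0.30, p_draw=0.40, p_loss=0.30)

print(settle, attack)
```

Because a win is worth two points more than a draw while a loss costs only one point less, attacking comes out ahead here (1.30 expected points versus 1.05) even though the added risk of losing exactly matches the added chance of winning.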

The limits of knowledge

Sports, of course, are ultimately played by imperfect, injury-prone, and sometimes exhausted athletes. One consistent lesson from the MIT Sloan conference involves the limits of data and plans.

“We think the data is giving us an answer, when actually it’s giving us some information, and we still have to make a choice,” said Ariana Andonian, vice president of player personnel for the Philadelphia 76ers, during a basketball panel on Saturday.

Asked about the promise of artificial intelligence for sports analytics, Sonia Raman, head coach of the WNBA’s Seattle Storm, noted that its insights might always be limited by circumstances.

“It’s not like you can just get an AI report in the middle of the game that says, ‘Get some shooting in,’” said Raman, who, prior to coaching in the WNBA and NBA, served for 12 years as head coach of the MIT women’s basketball team.

“You can have a great plan, but if it’s poorly executed, it’s way worse than a poor plan that’s well executed,” added Steven Adams, a center for the NBA’s Houston Rockets (who is currently not playing due to injury), during the same panel.

And yet, in some games and matches, the analytics do work, the plans do come to fruition, and the numbers do make a difference. When that happens, as John Wroblewski can now attest, the results are golden. 

3 Questions: Building predictive models to characterize tumor progression

Tue, 03/10/2026 - 4:50pm

Just as Darwin’s finches evolved in response to natural selection in order to endure, the cells that make up a cancerous tumor similarly counter selective pressures in order to survive, evolve, and spread. Tumors are, in fact, complex sets of cells with their own unique structure and ability to change. 

Today, artificial intelligence and machine learning tools offer an unparalleled opportunity to illuminate the generalizable rules governing tumor progression on the genetic, epigenetic, metabolic, and microenvironmental levels. 

Matthew G. Jones, an assistant professor in the MIT Department of Biology, the Koch Institute for Integrative Cancer Research, and the Institute for Medical Engineering and Science, hopes to use computational approaches to build predictive models — to play a game of chess with cancer, making sense of a tumor’s ability to evolve and resist treatment with the ultimate goal of improving patient outcomes. In this interview, he describes his current work.

Q: What aspect of tumor progression are you working to explore and characterize? 

A: A very common story with cancer is that patients will respond to a therapy at first, and then eventually that treatment will stop working. The reason this largely happens is that tumors have an incredible, and very challenging, ability to evolve: the ability to change their genetic makeup, protein signaling composition, and cellular dynamics. The tumor as a system also evolves at a structural level. Oftentimes, the reason why a patient succumbs to a tumor is because either the tumor has evolved to a state we can no longer control, or it evolves in an unpredictable manner. 

In many ways, cancers can be thought of as, on the one hand, incredibly dysregulated and disorganized, and on the other hand, as having their own internal logic, which is constantly changing. The central thesis of my lab is that tumors follow stereotypical patterns in space and time, and we’re hoping to use computation and experimental technology to decode the molecular processes underlying these transformations.  

We’re focused on one specific way tumors are evolving through a form of DNA amplification called extrachromosomal DNA. Excised from the chromosome, these ecDNAs are circularized and exist as their own separate pool of DNA particles in the nucleus. 

Initially discovered in the 1960s, ecDNA were thought to be a rare event in cancer. However, as researchers began applying next-generation sequencing to large patient cohorts in the 2010s, it became clear that these ecDNA amplifications were not only allowing tumors to adapt to stresses and therapies faster, but were also far more prevalent than initially thought.

We now know these ecDNA amplifications are apparent in about 25 percent of cancers, including some of the most aggressive: brain, lung, and ovarian cancers. We have found that, for a variety of reasons, ecDNA amplifications are able to change the rule book by which tumors evolve, allowing them to accelerate toward more aggressive disease in very surprising ways. 

Q: How are you using machine learning and artificial intelligence to study ecDNA amplifications and tumor evolution? 

A: There’s a mandate to translate what I’m doing in the lab to improve patients’ lives. I want to start with patient data to discover how various evolutionary pressures are driving disease and the mutations we observe. 

One of the tools we use to study tumor evolution is single-cell lineage tracing technologies. Broadly, they allow us to study the lineages of individual cells. When we sample a particular cell, not only do we know what that cell looks like, but we can (ideally) pinpoint exactly when aggressive mutations appeared in the tumor’s history. That evolutionary history gives us a way of studying these dynamic processes that we otherwise wouldn’t be able to observe in real time, and helps us make sense of how we might be able to intercept that evolution. 
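The core idea behind lineage tracing, that shared heritable marks imply shared ancestry, can be sketched in a few lines. The cells, edit labels, and greedy pairing below are toy assumptions for illustration, not the lab’s actual data or pipeline:

```python
# Toy sketch of lineage reconstruction from heritable barcode edits.
# Cells accumulate irreversible "edits" at barcode sites over divisions;
# the more edits two cells share, the more recent their common ancestor.
from itertools import combinations

# Hypothetical cells and the barcode edits observed in each.
cells = {
    "cell_A": {"e1", "e2"},
    "cell_B": {"e1", "e2", "e3"},
    "cell_C": {"e1", "e4"},
}

def shared_edits(a, b):
    """Count barcode edits two cells have in common."""
    return len(cells[a] & cells[b])

# Score every pair and pick the closest relatives.
pairs = [(shared_edits(a, b), a, b) for a, b in combinations(cells, 2)]
closest = max(pairs)
print(closest)  # (2, 'cell_A', 'cell_B')
```

Here cell_A and cell_B share two edits, so they group together, with cell_C branching off earlier; real lineage-tracing methods build full trees from thousands of cells by formalizing this same intuition.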

I hope we’re going to get better at stratifying patients who will respond to certain drugs, to anticipate and overcome drug resistance, and to identify new therapeutic targets.

Q: What excited you about joining the MIT community?

A: One of the things that I was really attracted to was the integration of excellence in both engineering and biological sciences. At the Koch Institute, every floor is structured to promote this interface between engineers and basic scientists, and beyond campus, we can connect with all the biomedical research enterprises in the greater Boston area. 

Another thing that drew me to MIT was the fact that it places such a strong emphasis on education, training, and investing in student success. I’m a personal believer that what distinguishes academic research from industry research is that academic research is fundamentally a service job, in that we are training the next generation of scientists. 

It was always a mission of mine to bring excellence to both computational and experimental technology disciplines. The types of trainees I’m hoping to recruit are those who are eager to collaborate and solve big problems that require both disciplines. The KI [Koch Institute] is uniquely set up for this type of hybrid lab: my dry lab is right next to my wet lab, and it’s a source of collaboration and connection, and that reflects the KI’s general vision. 

How Joseph Paradiso’s sensing innovations bridge the arts, medicine, and ecology

Tue, 03/10/2026 - 4:25pm

Joseph Paradiso thinks that the most engaging research questions usually span disciplines. 

Paradiso was trained as a physicist and completed his PhD in experimental high-energy physics at MIT in 1981. His father was a photographer and filmmaker working at MIT, MIT Lincoln Laboratory, and the MITRE Corporation, so he grew up in a house where artists, scientists, and engineers regularly gathered and interesting music was always playing. 

That mix of influences led him to the MIT Media Lab, where he is the Alexander W. Dreyfoos Professor, academic head of the Program in Media Arts and Sciences, and director of the Responsive Environments research group.

At the Media Lab, Paradiso conducts research that engages sensing of different kinds and applies it across diverse and often extreme applications. He works on developing technologies that can efficiently capture and process multiple sensing modalities, and leverages this capability in application domains like the internet of things, medicine, environmental sensing, space exploration, and artistic expression. These efforts use that information to help people better understand the world, express themselves, and connect with one another.

Early in his career, Paradiso helped pioneer the field of wireless wearable sensing. He built many systems with multiple embedded sensors that could send information from the human body in real time. One of his early flagship projects in this area was a pair of shoes fielded in 1997 for real-time augmented dance performance that embedded 16 sensors in each shoe, allowing wearers’ movements to directly generate music through algorithmic mapping. And Paradiso’s research at the Media Lab has consistently focused on sensing and using that information in new ways. 

“When I would list all the sensors … people would laugh. But now, my watch is measuring most of these things,” Paradiso notes. “The world has moved.” 

That progression from early prototypes to everyday technology helped lay the groundwork for devices people now use regularly to track activity, health, and performance.

As sensing systems improved, Paradiso expanded his work from individuals to groups. He developed platforms that allowed dance ensembles to create music together through their collective motion. Achieving this required Paradiso and his team to develop new ways for compact wearable devices to communicate wirelessly at high speed, as well as new approaches to real-time data processing and extending the range of available microelectromechanical systems (MEMS) sensors.

Those same sensing platforms were later adapted for sports medicine in 2006. Working with doctors who support elite athletes, his array of compact, wearable sensors captured large amounts of high-speed motion data from multiple points on the body, aimed at helping clinicians assess injury risk, performance, and recovery on the go, without the complex equipment typically associated with biomechanical monitoring and clinical settings.

More recently, Paradiso’s research has extended beyond humans. Through collaborations with National Geographic Explorers, his team has deployed sensors in remote environments to study animal behavior, including low-power compact wearable devices to detect the environmental conditions around the animal as well as track them (currently on lions and hyenas in Botswana and goats in Chile), and acoustic sensors with onboard AI to detect and monitor populations of endangered honeybees in Patagonia. This work provides new ways to understand how ecosystems function and how the planet is changing.

Paradiso was named an IEEE Fellow in January, recognizing his achievement in wireless wearable sensing and mobile energy harvesting. This is the highest grade of membership in IEEE, the world’s leading professional association dedicated to advancing technology for the benefit of humanity.

Across art, health, and the natural world, Paradiso’s work reflects how foundational research at MIT can seed technologies that ripple outward over time, shaping new applications and opening new fields. As advances in wearable technologies drive the rush toward the ever-more-connected human, a persistent existential question lurks. 

“Where do I stop, versus others begin?” Paradiso asks. 

For him, the aim is not novelty for its own sake, but amplification: using technology to help people become more perceptive, better connected, and more aware of their place in a larger system.

MIT School of Engineering faculty receive awards in fall 2025

Tue, 03/10/2026 - 4:00pm

Each year, faculty and researchers across the MIT School of Engineering are recognized with prestigious awards for their contributions to research, technology, society, and education. To celebrate these achievements, the school periodically highlights select honors received by members of its departments, institutes, labs, and centers. The following individuals were recognized in fall 2025:

Hal Abelson, the Class of 1922 Professor in the Department of Electrical Engineering and Computer Science, received the 2025 Lifetime Achievement Award for Excellence from Open Education Global. The award honors his foundational impact on open education, Creative Commons, and open knowledge movements.

Faez Ahmed, the Henry L. Doherty Career Development Professor in Ocean Utilization in the Department of Mechanical Engineering, received an Amazon Research Award for his project “AutoDA‑Sim: A Multi‑Agent Framework for Safe, Aesthetic, and Aerodynamic Vehicle Design.” Amazon Research Awards provide unrestricted funds and AWS Promotional Credits to academic researchers investigating various research topics in multiple disciplines.

Pulkit Agrawal, an associate professor in the Department of Electrical Engineering and Computer Science, received the 2025 IROS Toshio Fukuda Young Professional Award for contributions to robot learning, policy learning, agile locomotion, and dexterous manipulation. The award recognizes outstanding contributions of an individual of the IROS community who has pioneered activities in robotics and intelligent systems.

Ahmad Bahai, a professor of the practice in the Department of Electrical Engineering and Computer Science, was elected to the 2025 class of Fellows of the National Academy of Inventors for contribution to innovation in new semiconductor devices with extensive applications in clinical grade personal sensors for a variety of biomarkers. The honor recognizes inventors whose patented work has made a meaningful global impact.

Yufeng (Kevin) Chen, an associate professor in the Department of Electrical Engineering and Computer Science, received the 2025 IROS Toshio Fukuda Young Professional Award for contributions to insect‑scale multimodal robots and soft‑actuated aerial systems. The award recognizes outstanding contributions of an individual of the IROS community who has pioneered activities in robotics and intelligent systems.

Angela Koehler, the Charles W. and Jennifer C. Johnson Professor in the Department of Biological Engineering, received the 2025 Sato Memorial International Award from the Pharmaceutical Society of Japan, recognizing advancements in pharmaceutical sciences and U.S.–Japan scientific collaboration.

Dina Katabi, the Thuan (1990) and Nicole Pham Professor in the Department of Electrical Engineering and Computer Science, was elected to the National Academy of Medicine for pioneering digital health technology that enables noninvasive, off-body remote health monitoring via AI and wireless signals, and for developing digital biomarkers for Parkinson’s progression and detection. Election to the academy is considered one of the highest honors in the fields of health and medicine, and recognizes individuals who have demonstrated outstanding professional achievement and commitment to service.

Darcy McRose, the Thomas D. and Virginia W. Cabot Career Development Professor in the Department of Civil and Environmental Engineering, was selected as a 2025 Packard Fellow for Science and Engineering. The Packard Foundation established the Packard Fellowships for Science and Engineering to allow the nation’s most promising early-career scientists and engineers flexible funding to take risks and explore new frontiers in their fields of study.

Muriel Médard, the NEC Professor of Software Science and Engineering in the Department of Electrical Engineering and Computer Science, received the 2026 IEEE Richard W. Hamming Medal for contributions to coding for reliable communications and networking. Recognized for breakthroughs in network coding and information theory, Médard’s innovations improve the reliability of data transmission in applications such as streaming video, wireless networks, and satellite communications. The award is given for exceptional contributions to information sciences, systems and technology.

Tess Smidt, an associate professor in the Department of Electrical Engineering and Computer Science, was selected as a 2025 AI2050 Fellow by Schmidt Sciences for her project, “Hierarchical Representations of Complex Physical Systems with Euclidean Neural Networks.” The program supports research that aims to help AI benefit humanity by mid‑century.

MIT undergraduates help US high schoolers tackle calculus

Tue, 03/10/2026 - 12:00am

This year in a rural school district in southeastern Montana, one high school student is taking calculus. For many people, calculus is daunting enough, even when teachers are used to offering it and peers are around to help. Studying it solo can be even harder. Yet this lone student has an unusual source of support: weekly tutoring directly from an MIT undergraduate, by Zoom, a long-distance but helpful way to stay on track.

It’s part of a new program called the MIT4America Calculus Project, launched from the Institute last summer, in which MIT undergraduates and alumni work with school districts across the U.S., from Montana to Texas to New York, to tutor high school students. The logic is compelling: MIT students are highly proficient in calculus, which is all but a prerequisite for admission to and success at the Institute. The new civic-minded outreach program lets them share that knowledge and skill, preparing high schoolers for further studies and even careers, especially in STEM fields. 

“Calculus is a gateway for many students into STEM higher education and careers,” says MIT Professor Eric Klopfer, a co-director of the MIT4America Calculus Project. “We can help more students, in more places, fulfill requirements and get into great universities across the country, whether MIT or others, and then into STEM careers. We want to make sure they have the skills to do that.”

At this point, the project is working closely with 14 school districts across the U.S., deploying 30 current MIT undergraduates and seven alumni as tutors. The weekly sessions are carefully coordinated with school administrators and teachers, and the MIT tutors have all received training. The program started with an in-person summer calculus camp in 2025; by next summer, the goal is to be collaborating with about 20 school districts.

“We want it to have a lasting impact,” says Claudia Urrea, an education scholar and co-director of the MIT4America Calculus Project. “It’s not just about students passing an exam, but having tutors who look like what the students want to be in the future, who are mentors, have conversations, and make sure the high school students are learning.” 

Klopfer and Urrea bring substantial experience to the project. Klopfer is a professor and director of the Scheller Teacher Education Program and the Education Arcade at MIT; Urrea is executive director for the PreK-12 Initiative at MIT Open Learning.

The MIT4America Calculus Project is supported through a gift from the Siegel Family Endowment and was developed as a project in consultation with David Siegel SM ’86, PhD ’91, a computer scientist and entrepreneur who is chairman of the firm Two Sigma.

“David Siegel came to us with two powerful questions: How can we spread the educational impact of MIT beyond our walls? And how can we open doors to STEM careers for U.S. high school students who don’t have access to calculus?” says MIT President Sally Kornbluth.

She adds: “The MIT4America Calculus Project answers those questions in a perfectly MIT way: Reflecting the Institute’s longstanding commitment to national service, the MIT4America Calculus Project supplies an innovative answer to a hard practical problem, and it taps the uncommon skill of the people of MIT to create opportunity for others. We’re enormously grateful to David for his inspiration and guidance, and to the Siegel Family Endowment for the financial support that brought this idea to life.”

The U.S. has more than 13,000 school districts, and about half of them offer calculus classes. The MIT effort aims to work with districts that already offer calculus but are striving to add educational support for those programs, often while facing funding constraints or other limitations.

In contrast to the one-student calculus situation in Montana, the project is also working with a 5,000-student district in Texas, south of Dallas, where about 60 high school students take calculus; currently five Institute undergraduates are tutoring 15 students from the district’s schools.

“Other organizations are involved in efforts like this, but I think MIT brings some unique things to it,” Klopfer says. “I think involving our undergraduates in this is an awesome contribution. Our students really do come from all over the place, and are sometimes connecting back to their home states and communities, and that makes a difference on both sides.”

He adds: “I see benefits for our students, too. They develop good ways of communicating, working with other people and building skills. They can gain a lot of great experience.”

In addition to the in-person summer calculus camp, which is expected to continue, and the weekly video tutoring, the MIT4America Calculus Project is working on developing online tools that help guide high school students as well. Still, Urrea emphasizes, the project is built around “the importance of people. A community of support is very important, to have connections that build over time.  The human aspect of the program is irreplaceable.”

The MIT tutors must complete rigorous training sessions that cover pedagogy and other aspects of working with high school students, and they make a substantial commitment of time and effort.

It has been worth it, as teachers say their high school students have been responding very well to the MIT tutors.

“For students to be able to see themselves in their tutors is a really cool thing,” says Shilpa Agrawal ’15, director of computer science and an AP calculus AB teacher at Comp Sci High in the Bronx, New York, where 15 students are participating in the project.

“It’s led to a lot of success for my students,” adds Agrawal, who majored in computer science at MIT. She is part of the national network of MIT-connected teachers who have been helping the program grow organically, having reached out to Jenny Gardony, manager of the MIT4America Calculus Project.

Gardony, who is also the math project manager in MIT’s Scheller Teacher Education Program, has been receiving enthusiastic emails from teachers in other participating districts since the project started.

“I have to start by saying thank you,” one teacher wrote to Gardony, adding that one student “was so excited in class today. The session she had with you made her so confident. She’s always nervous, but today she was smiling and helping others, and that was 100 percent because of you.”

Gardony adds: “The fact that a busy teacher takes the time to send that email, I’m touched they would do that.” 

Understanding how “marine snow” acts as a carbon sink

Mon, 03/09/2026 - 3:00pm

In some parts of the deep ocean, it can look like it’s snowing. This “marine snow” is the dust and detritus that organisms slough off as they die and decompose. Marine snow can fall several kilometers to the deepest parts of the ocean, where the particles are buried in the seafloor for millennia.

Now, researchers at MIT and their collaborators have found that as marine snow falls, tiny hitchhikers may limit how deep the particles can sink before dissolving away. The team shows that when bacteria hitch a ride on marine snow particles, the microbes can eat away at calcium carbonate, which is an essential ballast that helps particles sink.

The findings, which appear this week in the Proceedings of the National Academy of Sciences, could explain how calcium carbonate dissolves in shallow layers of the ocean, where scientists had assumed it should remain intact. The results could also change scientists’ understanding of how quickly the ocean can sequester carbon from the atmosphere.

Marine snow is a main vehicle by which the ocean stores carbon. At the ocean’s surface, phytoplankton absorb carbon dioxide from the atmosphere and convert the gas into other forms of carbon, including calcium carbonate — the same stuff that’s found in shells and corals. When they die, bits of phytoplankton drift down through the ocean as marine snow, carrying the carbon with them. If the particles make it to the deep ocean, the carbon they carry can be buried and locked away for hundreds to thousands of years.

But the new study suggests bacteria may be working against the ocean’s ability to sequester carbon. By eroding the particles’ calcium carbonate, bacteria can significantly slow the sinking of marine snow. The more they linger, the more likely the particles are to be respired quickly, releasing carbon dioxide into the shallow ocean, and possibly back into the atmosphere.

“What we’ve shown is that carbon may not sink as deep or as fast as one may expect,” says study co-author Andrew Babbin, an associate professor in the Department of Earth, Atmospheric and Planetary Sciences and a mission director at the Climate Project at MIT. “As humanity tries to design our way out of the problem of having so much CO2 in the atmosphere, we have to take into account these natural microbial mechanisms and feedbacks.”

The study’s primary author is Benedict Borer, a former MIT postdoc who is now an assistant professor of marine and coastal sciences at the Rutgers School of Environmental and Biological Sciences; co-authors include Adam Subhas and Matthew Hayden at the Woods Hole Oceanographic Institution and Ryan Woosley, a principal research scientist at MIT’s Center for Sustainability Science and Strategy.

Losing weight

Marine snow acts as the ocean’s main “biological pump,” the process by which the ocean pulls carbon from the surface down into the deep ocean. Scientists estimate that marine snow is responsible for drawing down billions of tons of carbon each year. Marine snow’s ability to sink comes mainly from minerals such as calcium carbonate embedded within the particles. The mineral is a dense ballast that weighs down the particle. The more calcium carbonate a particle has, the faster it sinks.

Scientists had assumed, based on thermodynamics, that calcium carbonate should not dissolve within the ocean’s upper layers, given the general temperature and pH conditions in the surface ocean. Any calcium carbonate that is bound up in marine snow should then safely sink to depths greater than 1,000 meters without dissolving along the way.

But oceanographers have long observed signs of dissolved calcium carbonate in the upper layers of the ocean, suggesting that something other than the ocean’s macroscale conditions was dissolving the mineral and slowing down the ocean’s biological pump.

And indeed, the MIT team has found that what is dissolving calcium carbonate in shallow waters is a microscale process that occurs within the immediate environment of an individual particle.

“Most oceanographers think about the macroscale, and in this instance what’s happening in microscopic particles is what is actually controlling bulk seawater chemistry,” Borer says. “Consequences abound for the ocean’s carbon dioxide sequestration capacity.”

A sinking sweet spot

In their new study, the researchers set up an experiment to simulate a sinking particle of marine snow and its interactions at the microscale. The team synthesized particles similar to marine snow from varying concentrations of calcium carbonate and bacteria — organisms that are often found feasting on the particles in the ocean.

“The ocean is a fairly dilute medium with respect to organic matter,” Babbin says. “So organisms like bacteria have to search for food. And particles of marine snow are like cheeseburgers for bacteria.”

The team designed a small microfluidic chip to contain the particles, and flowed seawater through the chip at various rates to simulate different sinking speeds in the ocean. Their experiments revealed that whenever particles hosted any bacteria, they also rapidly lost some calcium carbonate, which dissolved into the surrounding seawater. As bacteria feed on the particles’ organic material, the microbes excrete acidic waste products that act to dissolve the particles’ inorganic, ballasting calcium carbonate.

The researchers also found that the amount of calcium carbonate that dissolves depends on how fast the particles sink. They flowed seawater around the particles at slow, intermediate, and fast speeds and found that both slow and fast sinking limit the amount of calcium carbonate that’s dissolved. With slow sinking, particles don’t receive as much oxygen from their surroundings, which essentially suffocates any hitchhiking bacteria. When particles sink quickly, bacteria may be sufficiently oxygenated, but any waste products that they produce can be easily flushed away before they can dissolve the particles’ calcium carbonate.

At intermediate speeds, there is a sweet spot: Bacteria are sufficiently oxygenated and can also build up enough waste, enabling the microbes to efficiently dissolve calcium carbonate.
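The tradeoff behind this sweet spot can be captured in a toy model. To be clear, the functional forms below are invented for illustration and are not the study’s model; they simply encode the two competing effects the experiments revealed: oxygen supply rises with flow, while waste retention falls with it.

```python
# Toy qualitative model (illustrative only, not the study's model): the
# dissolution rate is the product of two competing factors of sinking speed u.
def dissolution_rate(u):
    oxygen_supply = u / (u + 1.0)       # rises with flow, then saturates
    waste_retention = 1.0 / (u + 1.0)   # falls as flow flushes waste away
    return oxygen_supply * waste_retention

# The product u / (u + 1)^2 peaks at an intermediate speed (u = 1 here),
# mirroring the sweet spot described above.
for u in [0.1, 0.5, 1.0, 2.0, 10.0]:
    print(u, round(dissolution_rate(u), 3))
```

Any pair of saturating and decaying factors would produce the same qualitative shape: slow and fast extremes both suppress dissolution, and the maximum sits in between.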

Overall, the work shows that bacteria can have a significant effect on marine snow’s ability to sink and sequester carbon in the deep ocean. Bacteria can be found everywhere, and particularly in the shallower ocean regions. Even if macroscale conditions in these upper layers should not dissolve calcium carbonate, the study finds bacteria working at the microscale most likely do.

The findings could explain oceanographers’ observations of dissolved calcium carbonate in shallow ocean regions. They also illustrate that bacteria and other microbes may be working against the ocean’s natural ability to sequester carbon, by dissolving marine snow’s ballast and slowing its descent into the deep ocean. As humans consider climate solutions that involve enhancing the ocean’s biological pump, the researchers emphasize that bacteria’s role must be taken into account.

“Insights from this work are vital to predict how ecosystems will respond to marine carbon dioxide removal attempts, and overall how the oceans will change in response to future climate scenarios,” says Borer, who carried out the study’s experiments as a postdoc in MIT’s Department of Earth, Atmospheric and Planetary Sciences.

This work was supported, in part, by the Simons Foundation, the National Science Foundation, and the Climate Project at MIT.

Neurons receive precisely tailored teaching signals as we learn

Mon, 03/09/2026 - 12:50pm

When we learn a new skill, the brain has to decide — cell by cell — what to change. New research from MIT suggests it can do that with surprising precision, sending targeted feedback to individual neurons so each one can adjust its activity in the right direction.

The finding echoes a key idea from modern artificial intelligence. Many AI systems learn by comparing their output to a target, computing an “error” signal, and using it to fine-tune connections within the network. A long-standing question has been whether the brain also uses that kind of individualized feedback. In an open-access study published in the Feb. 25 issue of the journal Nature, MIT researchers report evidence that it does.

A research team led by Mark Harnett, a McGovern Institute for Brain Research investigator and associate professor in the Department of Brain and Cognitive Sciences at MIT, discovered these instructive signals in mice by training animals to control the activity of specific neurons using a brain-computer interface (BCI). Their approach, the researchers say, can be used to further study the relationships between artificial neural networks and real brains, in ways that are expected to both improve understanding of biological learning and enable better brain-inspired artificial intelligence.

The changing brain

Our brains are constantly changing as we interact with the world, modifying their circuitry as we learn and adapt. “We know a lot from 50 years of studies that there are many ways to change the strength of connections between neurons,” Harnett says. “What the field really lacks is a way of understanding how those changes are orchestrated to actually produce efficient learning.”

Some actions — and the neural connections that enable them — are reinforced with the release of neuromodulators like dopamine or norepinephrine in the brain. But those signals are broadcast to large groups of neurons, without discriminating between cells’ individual contributions to a failure or a success. “Reinforcement learning via neuromodulators works, but it’s inefficient, because all the neurons and all the synapses basically get only one signal,” Harnett says.

Machine learning uses an alternative, and extremely powerful, way to learn from mistakes. Using a method called backpropagation, artificial neural networks compute an error signal and use it to adjust their individual connections. They do this over and over, learning from experience how to fine-tune their networks for success. “It works really well and it’s computationally very effective,” Harnett says.
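As a concrete sketch of what a per-connection error signal looks like, here is a minimal backpropagation example: a two-weight network learning y = 2x. Every number and function here is invented for illustration and has nothing to do with the paper’s biological data.

```python
# Minimal backpropagation sketch (illustrative, not the paper's model):
# the single global output error is routed back through the chain rule,
# so each individual weight receives its own tailored update.
def forward(w1, w2, x):
    h = max(0.0, w1 * x)      # ReLU hidden unit
    return h, w2 * h          # hidden activity, network output

def train(w1=0.5, w2=0.5, steps=200, lr=0.05):
    for _ in range(steps):
        for x, y in [(1.0, 2.0), (2.0, 4.0)]:   # targets follow y = 2x
            h, y_hat = forward(w1, w2, x)
            err = y_hat - y                      # one global error...
            g_w2 = err * h                       # ...becomes a signal for w2
            g_w1 = err * w2 * (x if w1 * x > 0 else 0.0)  # and one for w1
            w2 -= lr * g_w2
            w1 -= lr * g_w1
    return w1, w2

w1, w2 = train()
_, pred = forward(w1, w2, 3.0)
print(round(pred, 2))  # → 6.0, since the network has learned y = 2x
```

Note that g_w1 and g_w2 differ for every weight; the vectorized instructive signals reported in the paper would be the biological analogue of such per-connection feedback.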

It seemed likely that brains might use similar error signals for learning. But neuroscientists were skeptical that brains would have the precision to send tailored signals to individual neurons, due to the constraints imposed by using living cells and circuits instead of software and equations. A major problem for testing this idea was how to find the signals that provide personalized instructions to neurons, which are called vectorized instructive signals. The challenge, explains Valerio Francioni, first author of the Nature paper and a former postdoc in Harnett’s lab, is that scientists don’t know how individual neurons contribute to specific behaviors.

“If I was recording your brain activity while you were learning to play piano,” Francioni explains, “I would learn that there is a correlation between the changes happening in your brain and you learning piano. But if you asked me to make you a better piano player by manipulating your brain activity, I would not be able to do that, because we don’t know how the activity of individual neurons maps to that ultimate performance.”

Without knowing which neurons need to become more active and which ones should be reined in, it is impossible to look for signals directing those changes.

Understanding neuron function

To get around this problem, Harnett’s team developed a brain-computer interface task to directly link neural activity and reward outcome — akin to linking the keys of the piano directly to the activity of single neurons. To succeed at the task, certain neurons needed to increase their activity, whereas others were required to decrease their activity.

They set up a BCI to directly link activity in those neurons — just eight to 10 of the millions of neurons in a mouse’s brain — to a visual readout, providing sensory feedback to the mice about their performance. Success was accompanied by delivery of a sugary reward.
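In spirit, the reward rule works as follows. The actual readout and threshold are not described at this level of detail in the article, so everything in this sketch is hypothetical.

```python
# Hypothetical sketch of a BCI reward rule (numbers invented): reward
# requires one group of neurons to be active while the other stays quiet.
def bci_readout(up_group, down_group):
    # Activity of neurons that should increase, minus those that should decrease.
    return sum(up_group) - sum(down_group)

def rewarded(up_group, down_group, threshold=3.0):
    return bci_readout(up_group, down_group) > threshold

print(rewarded([2.0, 2.5], [0.2, 0.1]))  # True: the "up" neurons dominate
print(rewarded([0.5, 0.4], [2.0, 2.2]))  # False: the wrong neurons are active
```

The key point the sketch makes is that the reward is a single scalar, yet earning it reliably requires each neuron to change in its own specific direction, which is exactly why neuron-specific instructive signals matter.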

“Now if you ask me, ‘How does the mouse get more rewards? Which neuron do you have to activate and which neuron do you have to inhibit?’ I know exactly what the answer to that question is,” says Francioni, whose work was supported by a Y. Eva Tan Fellowship from the Yang Tan Collective at MIT.

The scientists didn’t know the exact function of the particular neurons they linked to the BCI, but the cells were active enough that mice received occasional rewards whenever the signals happened to be right. Within a week, mice learned to switch on the right neurons while leaving the other set of neurons inactive, earning themselves more rewards.

Francioni monitored the target neurons daily during this learning process using a powerful microscope to visualize fluorescent indicators of neural activity. He zeroed in on the neurons’ branching dendrites, where the appropriate feedback signals have long been suspected to arrive. At the same time, he tracked activity in the parent cell bodies of those neurons. The team used these data to examine the relationship between signals received at a neuron’s dendrites and its activity, as well as how these changed when mice were rewarded for activating the right neurons or when they failed at their task.

Vectorized neural signals

They concluded that the two groups of neurons whose activity controlled the BCI in opposite ways also received opposing error signals at their dendrites as the mice learned. Some were told to ramp up their activity during the task, while others were instructed to dial it down. What’s more, when the team manipulated the dendrites to inhibit these instructive signals, mice failed to learn the task. “This is the first biological evidence that vectorized [neuron-specific] signal-based instructive learning is taking place in the cortex,” Harnett says.

The discovery of vectorized signals in the brain — and the team’s ability to find them — should promote more back-and-forth between neuroscientists and machine learning researchers, says postdoc Vincent Tang. “It provides further incentive for the machine learning community to keep developing models and proposing new hypotheses along this direction,” he says. “Then we can come back and test them.”

The researchers say they are just as excited about applying their approach to future experiments as they are about their current discovery.

“Machine learning offers a robust, mathematically tractable way to really study learning. The fact that we can now translate at least some of this directly into the brain is very powerful,” Francioni says.

Harnett says the approach opens new opportunities to investigate possible parallels between the brain and machine learning. “Now we can go after figuring out, how does cortex learn? How do other brain regions learn? How similar or how different is it to this particular algorithm? Can we figure out how to build better, more brain-inspired models from what we learn from the biology?” he says. “This feels like a really big new beginning.” 

Improving AI models’ ability to explain their predictions

Mon, 03/09/2026 - 12:00am

In high-stakes settings like medical diagnostics, users often want to know what led a computer vision model to make a certain prediction, so they can determine whether to trust its output.

Concept bottleneck modeling is one method that enables artificial intelligence systems to explain their decision-making process. It forces a deep-learning model to use a set of concepts that humans can understand to make a prediction. In new research, MIT computer scientists developed a method that coaxes the model to achieve better accuracy and clearer, more concise explanations.

The concepts the model uses are usually defined in advance by human experts. For instance, a clinician could suggest the use of concepts like “clustered brown dots” and “variegated pigmentation” to predict that a medical image shows melanoma.

But previously defined concepts could be irrelevant or lack sufficient detail for a specific task, reducing the model’s accuracy. The new method extracts concepts the model has already learned while it was trained to perform that particular task, and forces the model to use those, producing better explanations than standard concept bottleneck models.

The approach utilizes a pair of specialized machine-learning models that automatically extract knowledge from a target model and translate it into plain-language concepts. In the end, their technique can convert any pretrained computer vision model into one that can use concepts to explain its reasoning.

“In a sense, we want to be able to read the minds of these computer vision models. A concept bottleneck model is one way for users to tell what the model is thinking and why it made a certain prediction. Because our method uses better concepts, it can lead to higher accuracy and ultimately improve the accountability of black-box AI models,” says lead author Antonio De Santis, a graduate student at Polytechnic University of Milan who completed this research while a visiting graduate student in the Computer Science and Artificial Intelligence Laboratory (CSAIL) at MIT.

He is joined on a paper about the work by Schrasing Tong SM ’20, PhD ’26; Marco Brambilla, professor of computer science and engineering at Polytechnic University of Milan; and senior author Lalana Kagal, a principal research scientist in CSAIL. The research will be presented at the International Conference on Learning Representations.

Building a better bottleneck

Concept bottleneck models (CBMs) are a popular approach for improving AI explainability. These techniques add an intermediate step by forcing a computer vision model to predict the concepts present in an image, then use those concepts to make a final prediction.

This intermediate step, or “bottleneck,” helps users understand the model’s reasoning.

For example, a model that identifies bird species could select concepts like “yellow legs” and “blue wings” before predicting a barn swallow.
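The structure of a concept bottleneck can be sketched in a few lines. The concept names, weights, and decision rule below are invented for illustration, not taken from any real model:

```python
# Sketch of a concept bottleneck (all names and numbers invented): the label
# head sees ONLY the predicted concept scores, never the raw image features.
CONCEPTS = ["yellow legs", "blue wings", "forked tail"]

def predict_concepts(features):
    # Stand-in for a trained concept predictor; identity weights for simplicity.
    weights = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]
    return {c: sum(f * w for f, w in zip(features, ws))
            for c, ws in zip(CONCEPTS, weights)}

def predict_label(concept_scores):
    # The "bottleneck": the final decision is expressed purely over concepts.
    if concept_scores["blue wings"] > 0.5 and concept_scores["forked tail"] > 0.5:
        return "barn swallow"
    return "other"

scores = predict_concepts([0.2, 0.9, 0.8])   # toy image features
print(predict_label(scores))                 # → barn swallow
```

Because the label head can only consume the intermediate concept scores, a user can inspect exactly which concepts drove a given prediction.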

But because these concepts are often generated in advance by humans or large language models (LLMs), they might not fit the specific task. In addition, even if given a set of pre-defined concepts, the model sometimes utilizes undesirable learned information anyway, which is a problem known as information leakage.

“These models are trained to maximize performance, so the model might secretly use concepts we are unaware of,” De Santis explains.

The MIT researchers had a different idea: Since the model has been trained on a vast amount of data, it may have learned the concepts needed to generate accurate predictions for the particular task at hand. They sought to build a CBM by extracting this existing knowledge and converting it into text a human can understand.

In the first step of their method, a specialized deep-learning model called a sparse autoencoder selectively takes the most relevant features the model learned and reconstructs them into a handful of concepts. Then, a multimodal LLM describes each concept in plain language.

This multimodal LLM also annotates images in the dataset by identifying which concepts are present and absent in each image. The researchers use this annotated dataset to train a concept bottleneck module to recognize the concepts.

They incorporate this module into the target model, forcing it to make predictions using only the set of learned concepts the researchers extracted.

Controlling the concepts

They overcame many challenges as they developed this method, from ensuring the LLM annotated concepts correctly to determining whether the sparse autoencoder had identified human-understandable concepts.

To prevent the model from using unknown or unwanted concepts, they restrict it to use only five concepts for each prediction. This also forces the model to choose the most relevant concepts and makes the explanations more understandable.
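That restriction amounts to a top-k filter over concept activations. A minimal sketch, with invented concept names and scores, and k=2 rather than the paper’s five just to keep the example short:

```python
# Keep only the k strongest concept activations and zero the rest, so the
# label head cannot quietly rely on anything outside the chosen set.
def top_k_concepts(scores, k=2):
    kept = sorted(scores, key=lambda c: abs(scores[c]), reverse=True)[:k]
    return {c: (scores[c] if c in kept else 0.0) for c in scores}

scores = {
    "clustered brown dots": 0.9,
    "variegated pigmentation": 0.7,
    "uniform color": 0.2,
    "regular border": 0.1,
}
print(top_k_concepts(scores))
# → only the two strongest concepts survive; the rest are zeroed
```

Zeroing rather than dropping the weaker concepts keeps the output shape fixed, which makes the surviving concepts easy to compare across predictions.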

When they compared their approach to state-of-the-art CBMs on tasks like predicting bird species and identifying skin lesions in medical images, their method achieved the highest accuracy while providing more precise explanations.

Their approach also generated concepts that were more applicable to the images in the dataset. 

“We’ve shown that extracting concepts from the original model can outperform other CBMs, but there is still a tradeoff between interpretability and accuracy that needs to be addressed. Black-box models that are not interpretable still outperform ours,” De Santis says.

In the future, the researchers want to study potential solutions to the information leakage problem, perhaps by adding additional concept bottleneck modules so unwanted concepts can’t leak through. They also plan to scale up their method by using a larger multimodal LLM to annotate a bigger training dataset, which could boost performance.

“I’m excited by this work because it pushes interpretable AI in a very promising direction and creates a natural bridge to symbolic AI and knowledge graphs,” says Andreas Hotho, professor and head of the Data Science Chair at the University of Würzburg, who was not involved with this work. “By deriving concept bottlenecks from the model’s own internal mechanisms rather than only from human-defined concepts, it offers a path toward explanations that are more faithful to the model and opens many opportunities for follow-up work with structured knowledge.”

This research was supported by the Progetto Rocca Doctoral Fellowship, the Italian Ministry of University and Research under the National Recovery and Resilience Plan, Thales Alenia Space, and the European Union under the NextGenerationEU project.

Personal tech, social media, and the “decline of humanity”

Fri, 03/06/2026 - 2:00pm

Social psychologist Jonathan Haidt presented a forceful analysis of the damage smartphones and social media are doing to our cognition, our civic fabric, and our children’s wellbeing, while calling for renewed action to ward off their effects, in the latest of MIT’s Compton Lectures on Wednesday.

“Around the world, people are getting diminished,” Haidt said. “Less intelligent, less happy, less competent. And it’s happening very fast … My argument is that if we continue with current trends as AI is coming in, it’s going to accelerate. The decline of humanity is going to accelerate.”

Haidt is the Thomas Cooley Professor of Ethical Leadership at New York University’s Stern School of Business and the author of the recent bestseller “The Anxious Generation,” which suggests that the widespread adoption of social media in the 2010s has been especially damaging to young women, making them prone to anxiety and depression.

But as Haidt has continued to examine the effects of social media on society, he has started focusing on additional issues. Our inability to put our phones away, our compulsion to check social media, and the way we spend hours a day watching short-form videos may be causing problems that go far beyond any rise in anxiety and depression.

“It turns out, it’s not the biggest thing,” Haidt said. “There’s something bigger. It is the destruction of the human capacity to pay attention. Because this is affecting most people, including most adults. And if you imagine humanity with 10 to 50 percent of its attentional ability sucked out of it, there’s not much left. We’re not very capable of doing things if we can’t focus or stay on a task for more than 30 seconds.”

Whatever solution may emerge to these problems, Haidt declared, is going to have to come from “human agency. People see a problem, they figure out a way around it. That’s what I’m hoping to promote here [to] this very important audience. So please consider what I’m saying, these trends, and then work to change them.”

Haidt’s lecture, titled, “Life After Babel: Democracy and Human Development in the Fractured, Lonely World That Technology Gave Us,” was delivered before a capacity audience of over 400 people in MIT’s Huntington Hall (Room 10-250).

The lecture spanned a variety of related topics, with Haidt presenting chart after chart showing declines in cognition, educational achievement, and happiness, all of which seemed to begin soon after the widespread adoption of smartphones in the 2010s. The individual adoption of smartphones, he noted, has been compounded by the way schools brought internet-connected computing devices into classrooms around the same time.

“The biggest, the most costly mistake we’ve ever made in the history of American education [was] to put computers and high tech on people’s desks,” Haidt said.

Distractible students with shorter attention spans are reading fewer books, he noted; some cinema students cannot sit through films. The top quartile of students is continuing to do well, he noted, but for most students, proficiency levels have dipped notably since the 2010s.

“Fifty years of progress in education, 50 years of progress, up in smoke, gone,” Haidt said. “We’re back to where we were 50 years ago. That’s pretty big, that’s pretty serious.”

As Haidt mentioned multiple times in his remarks, he is not an opponent of all forms of technology, or even personal communication technology, but rather is seeking to mitigate its harmful effects.

“I love tech, I love modernity, we’re all dependent on it, I love my iPhone,” Haidt said. Just as he finished that sentence, an audience member’s cellphone started ringing loudly — drawing a huge laugh from the audience.

“I did not plant that, that was a truly spontaneous demonstration of what I’m talking about,” Haidt said.

Haidt was introduced by MIT President Sally A. Kornbluth, who called him “a leading voice for reforming society’s relationship with technology.” She praised Haidt’s work, noting that he wants to “encourage us to imagine a more positive role for technology in humanity’s future.”

The Karl Taylor Compton Lecture Series was introduced in 1957. It is named for MIT’s ninth president, who led the Institute from 1930 to 1948 and also served as chair of the MIT Corporation from 1948 to 1954.

Compton, as Kornbluth observed, helped MIT evolve from being more strictly an engineering school into “a great global university” with “a new focus on fundamental scientific research.” During World War II, she added, Compton “helped invent the longstanding partnership between the federal government and America’s research universities.”

Haidt received his undergraduate degree from Yale University and his PhD from the University of Pennsylvania. He taught at the University of Virginia for 16 years before joining New York University, and he has written several widely discussed books about contemporary civic life. Haidt observed that the problems stemming from device distraction and compulsion appear to have hit so-called Gen Z — those born from roughly the mid-1990s to the early 2010s — especially hard, though he emphasized that people in that cohort are essentially victims of circumstance.

“I am not blaming Gen Z,” Haidt said. “I am saying we raised our kids in a way — we allowed the technology companies to take over childhood. We allowed a few giant companies to own our children’s attention, to show them millions of short videos, to destroy their ability to pay attention, to stop them from reading books, and this is the result.”

For a portion of his remarks, Haidt also examined the consequences of social media for politics, showing data that chart the global decline of democracy since the 2010s, as the world has become awash in misinformation and contentious online interactions.

“That, I think, is what digital technology has done to us,” Haidt said. “It was supposed to connect us, but instead it has broken things, divided us, and made it very, very hard to ever have common facts, common truths, common stories again.”

Toward the end of his remarks, Haidt speculated that the effects of using AI will be corrosive as well, both intellectually and psychologically.

“AI is not exactly going to make us better at interacting with human beings,” Haidt said.

With all this in mind, what is to be done to limit the intellectual and social damage from tech devices and social media? For one thing, Haidt suggested, we should be less impressed by high-tech innovations and social media.

“We need to disenthrall ourselves from technology,” Haidt said, paraphrasing a line written by President Abraham Lincoln. He added: “I suggest that we have a generally negative view … of social media and of AI.” This kind of “more emotionally negative or ambivalent view” will make it easier for us to reverse the way technology seems to control us.

As a practical matter, Haidt suggested, that means taking steps to limit our exposure to technology. His own public-advocacy group, The Anxious Generation Movement, suggests a set of four reforms: no smartphones for kids before they are high-school age; no social media before age 16; making schools phone-free, from bell to bell; and giving kids more independence, free play, and responsibility in the world.

Certainly there is movement toward some of these concepts. Some school districts in the U.S. are banning or limiting phone use; Australia has instituted a ban on social media for anyone under 16, and a handful of other countries have announced similar plans.

“There’s a gigantic techlash happening right now,” Haidt suggested. For all the sudden changes technology has introduced within the last 15 years, it is still possible, for now, for people to find a way out of our tech-induced predicament.

“The good news is, there is human agency,” Haidt said.

Seeds of something different

Fri, 03/06/2026 - 12:00am

In Berlin in the early 1870s, tourists began visiting a neighborhood called Barackia. It did not have museums, palaces, or any other typical attractions. Barackia was a working-class neighborhood where people grew their own food, lived in small dwellings, and established communal arrangements outside the normal reach of government. For a while, anyway: In 1872, authorities moved in and cleared out Barackia.

Still, the concept of small urban farming caught on, and by 1900, about 50,000 Berlin households were growing food, often in so-called arbor colonies. The practice has never really been abandoned: Today, by law, Germany provides residents the right to garden, still a very popular activity in urban areas.

“In a little space, you can grow a lot of produce,” says MIT Professor Kate Brown, author of a new history of urban gardening. “Once you set things up, it need not take too much of your time. You can have another job and still grow food. You go to Berlin, and many German cities, and you’re surrounded by these allotment gardens.”

But as the residents of Barackia found out, there is a politics that comes with growing your own food on common land. Other interests may want to claim or at least control the land themselves. Or they may want to tap into the labor being applied to gardening. One way or another, when many people start gardening for themselves, core questions about the organization of society seem to sprout up, too.

Brown examines urban gardening and its politics in her book, “Tiny Gardens Everywhere: The Past, Present, and Future of the Self-Provisioning City,” published by W.W. Norton. Brown is the Thomas M. Siebel Distinguished Professor in History of Science within MIT’s Program in Science, Technology, and Society. In a book with global scope, ranging from Estonia to Amsterdam and Washington, Brown contends that urban gardening has many positive spillover effects, from health and environmental benefits to community-building — apart from periods of pushback when others are trying to eliminate it.

“Community after community, people work together to create food provisioning practices,” Brown says. “And after people come together for food and gardening, then they start to solve other problems they have.”

Whose land?

“Tiny Gardens Everywhere” was several years in the making, featuring extensive archival research, with firsthand material interspersed too. Brown’s story begins in England, which had a very long tradition of people farming on common land, often in ingenious, productive ways. “Every bit of space was used,” Brown says.

Then in the late 18th century, the advent of “enclosures” for wealthy landowners privatized much land and changed social life for many. Poorer residents, even when given allotments, found them too small for self-sustaining farming.

“Private property is largely an English invention of the late 18th century,” Brown says. “Before that, and in many parts of the world to this day, people live with a communal sense of the ownership of the land.”

In Brown’s interpretation, the enclosure movement did not just claim more land for Britain’s upper class. In an industrializing society, it forced peasants into the factory labor force, whether in cities or in rural mills.

“Really what they were doing when they were enclosing land was trying to control labor, as much as controlling land,” Brown says. “Because of their reliance on the commons, peasants were self-sufficient. Who wants to go work in a factory when you could be out having fun in the forest? Expelling people was a way to force them to become homeless, the landless proletariat, with nothing to sell but their labor, for 10 or 18 hours a day.”

As Brown chronicles in detail, conflicts between communal agriculture and propertied classes have often arisen since then, in varying forms — and sometimes in places that now seem surprising, because urban gardening has been more extensive than we realize.

A core section of “Tiny Gardens Everywhere” focuses on Washington, in the middle of the 20th century. During the Great Migration, which started a few decades earlier, African Americans moved north en masse, resettling in cities. They brought extensive knowledge with them about agricultural practices. In the part of Washington east of the Anacostia River, Black neighborhoods relied heavily on local gardening.

“They set up workers’ cooperatives and food cooperatives,” Brown observes. Despite often living in difficult circumstances, she adds, “I think it’s very interesting that people found really smart ways to adapt. If the neighborhood had no garbage collection, they’ll compost. No sewers, they’ll compost.”

Over time, though, authorities started claiming more land, designating homes to be torn down, and restricting the ability of residents to garden. And as Brown chronicles in the book, local officials have used restrictions on urban gardening as a form of social control, with one outcome being a homogenized social and physical landscape characterized by grass lawns for the affluent.

How much food?

Even if urban gardening has been fairly common in the past, it is natural to ask: How much food can it really provide? As Brown sees it, there is no single answer to that question. During World War II, for instance, victory gardens provided about 40 percent of all produce grown in the U.S. More recently, in 1996, 91 percent of the potatoes Russians ate came from urban allotment gardens occupying just 1.5 percent of the country’s arable land.

As Brown also points out in the book, we may not be growing as much produce on giant farms as we think. Only 2 percent of agricultural land in the U.S. is used to produce fruit and vegetables, for instance. The U.S., as a variety of analysts and writers have observed, has corn- and soy-heavy agricultural systems at its largest scales, principally yielding corn-based products. That means, Brown says, “They’re really inefficiently [working] to produce ethanol, corn syrup, chips, and cookies.”

In sum, she adds, “Yes, I do think it’s possible to take an urban space and grow a good part of the fruits and vegetables that people need there.”

It is possible, Brown believes, for things to change on this front. For instance, Florida, Illinois, and Maine, three fairly different states in terms of politics, all have laws providing the right to garden. Oklahoma has a similar bill in the works.

“I think this approach to looking at our right to grow food, to self-provision, to step outside of markets for our most essential needs, is something that represents a unifying set of desires in our hyperpolarized political landscape,” Brown says.

Other scholars have praised “Tiny Gardens Everywhere.” Sunil Amrith, a professor of history at Yale University, has said that Brown uses “enviable skill, craft, and insight” to show “that the past of small-scale urban provisioning contains the seeds of a more resilient future for us all.”

For her part, Brown hopes the book will not only appeal to readers, but spur them to become more active about the issue, as gardeners, local policy advocates, or both.

“One of the drumbeats of this book is that people do — and maybe we all should — win the right to garden,” Brown says. 

Studying the genetic basis of disease to explore fundamental biological questions

Fri, 03/06/2026 - 12:00am

When Associate Professor Eliezer Calo PhD ’11 was applying for faculty positions, he was drawn to MIT not only because it’s his alma mater, but also because the Department of Biology places high value on exploring fundamental questions in biology.

In his own lab, Calo studies how craniofacial malformations arise. One motivation is to seek new treatments for those conditions, but another is to learn more about fundamental biological processes such as protein synthesis and embryonic development.

“We use genes that are mutated in disease to uncover fundamental biology,” Calo says. “Mutations that happen in disease are an experiment of nature, telling us that those are the important genes, and then we follow them up not only to understand the disease, but to fundamentally understand what the genes are doing.”

Calo’s work has led to new insights into how ribosomes form and how they control protein synthesis, as well as how the nucleolus, the birthplace of ribosomes in eukaryotic cells, has evolved over hundreds of millions of years.

In addition to earning his PhD at MIT, Calo is also an alumnus of MIT’s Summer Research Program (MSRP), which helps to prepare undergraduate students to pursue graduate education. Since starting his lab at MIT, Calo has made a point to serve as a research mentor for the program every summer.

“I feel that it’s important to pay back to the program that helped me realize what I wanted to do,” he says.

A nontraditional path

Growing up in a mountainous region of Puerto Rico, Calo was the first person from his family to finish high school. While attending the University of Puerto Rico at Rio Piedras, the largest university in Puerto Rico, he explored a few different majors before settling on chemistry.

One of Calo’s chemistry professors invited him to work in her lab, where he did a research project studying the pharmacokinetics of cell receptors found on the surface of astrocytes, a type of brain cell.

“It was a good mix of biology and chemistry,” he says. “I think that that was the catalyst to my pursuit of a career in the sciences.”

He learned about MSRP from Mandana Sassanfar, a senior lecturer in biology at MIT and director of outreach for several MIT departments, at an event hosted by the University of Puerto Rico for students interested in careers in science. He was accepted into the program, and during the summer after his junior year, he worked in the lab of Stephen Bell, an MIT professor of biology. That experience, he says, was transformative.

“Without that experience, I would have probably chosen another career,” Calo says. In Puerto Rico, “science was fun, but it was a struggle. We had to make everything from scratch, and then you spend more time making reagents than doing the experiments. When I came to MIT, I was always doing experiments.”

During that time, he realized he liked working in biology labs more than chemistry labs, so when he applied to graduate school, he decided to move into biology. He applied to five schools, including MIT. “Once MIT sent me the acceptance, I just had to say yes. There was no saying no.”

At MIT, Calo thought he might study biochemistry, but he ended up focusing on cancer biology instead, working with Jacqueline Lees, an MIT biology professor, to study the role of the tumor suppressor protein Rb.

After finishing his PhD, Calo felt burnt out and wasn’t sure if he wanted to continue along the academic track. His thesis committee advisors encouraged him to do a postdoc just to try it out, and he ended up going to Stanford University, where he fell in love with California and switched to a new research focus. Working with Joanna Wysocka, a professor of developmental biology at Stanford, he began investigating how development is affected by the regulation of proteins that make up cellular ribosomes — a topic his lab still studies today.

Returning to MIT

When searching for faculty jobs, Calo focused mainly on schools in California, but also sent an application to MIT. As he was deciding between offers from MIT and the University of California at Berkeley, a phone call from Angelika Amon, the late MIT professor of biology, convinced him to take the cross-country leap back to MIT.

“She had me on the phone for more than one hour telling me why I should come to MIT,” he recalls. “And that was so heartwarming that I could not say no.”

Since starting his lab in 2017, Calo has been studying how defects in the production of ribosomes give rise to diseases, in particular craniofacial malformations such as cleft palate.

Ribosomes, the organelles where protein synthesis occurs, consist of two subunits made of about 80 proteins. A longstanding question in biology has been why mutations that affect ribosome formation appear to primarily affect the development of the face, but not the rest of the body.

In a 2018 study, Calo discovered that this is because mutations that affect ribosomes can have secondary effects that influence craniofacial development. In embryonic cells that form the face, a mutation in a gene called TCOF1 activates p53 at a higher level than in other embryonic cells. High levels of p53 cause some of those cells to undergo programmed cell death, leading to Treacher Collins syndrome, a disorder that produces underdeveloped bones in the jaw and cheek.

His lab has shown that p53 overactivation is also responsible for craniofacial disorders caused by mutations in RNA splicing factors.

Calo’s work on ribosome formation also led him to explore another cell organelle known as the nucleolus, whose role is to help build ribosomes. In 2023, he found that TCOF1, the same gene that can lead to craniofacial malformations when mutated, is critical for forming the three compartments that make up the nucleolus.

That finding, he says, could help to explain a major evolutionary shift that occurred around 300 million years ago, when the nucleolus transitioned from two to three compartments. This “tripartite” nucleolus is found in all reptiles, birds, and mammals.

“That was quite surprising,” Calo says. “Studying disease-related genes allowed us to understand a very fundamental biological process of how the nucleolus evolved, which has been a question in the field that nobody could figure out the answer for.”

X-raying rocks reveals their carbon-storing capacity

Fri, 03/06/2026 - 12:00am

To avoid the worst effects of climate change, many billions of metric tons of industrially generated carbon dioxide will have to be captured and stored away by the end of this century. One place to store such an enormous amount of greenhouse gas is in the Earth itself. If carbon dioxide were pumped into the cracks and crevices of certain underground rocks, the fluid would react with the rocks and solidify carbon into minerals. In this way, carbon dioxide could potentially be locked in the rocks in stable form for millions of years without escaping back into the atmosphere.

Some pilot projects are already underway to demonstrate such “carbon mineralization.” These efforts have shown promising results in terms of successfully mineralizing a large fraction of injected CO2. However, it’s less clear how the rocks will evolve in response. As carbonate minerals build up, could they clog up cracks and crevices, and ultimately limit the amount of CO2 that can be stored there?

In a new study appearing today in the journal AGU Advances, MIT geophysicists explored this question by injecting fluid into rocks and using X-ray imaging to reveal how the rocks’ pores and cracks changed as the fluid mineralized over time.

Their experiments showed that as fluid was pumped into a rock, the rock’s permeability (the ability of fluid to flow through the rock) dropped sharply. Meanwhile, the rock’s porosity (its total amount of empty space, in the form of pores, cracks, and crevices) remained largely unchanged.

The researchers found that the minerals were precipitating out of the fluid in the narrower tunnels connecting larger pores, preventing the fluid from flowing into larger pore spaces. Even so, the fluid did keep flowing through the rock, albeit at a lower rate, and minerals continued to form in some cracks and crevices.

“This study gives you information about what the rock does during this complex mineralization process, which could give you ideas of how to engineer it in your favor,” says study co-author Matěj Peč, an associate professor of geophysics at MIT.

“If you were injecting CO2 into the Earth and saw a massive drop in permeability, some operators might think they clogged up the well,” adds co-author Jonathan Simpson, a postdoc in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “But as this study shows, in some cases, it might not matter that much. As long as you maintain some flow rate, you could still form minerals and sequester carbon.”

The study’s co-authors include EAPS Research Scientist Hoagy O’Ghaffari as well as Sharath Mahavadi and Jean Elkhoury of the Schlumberger-Doll Research Center.

Drilling down

Basalt is a type of erupted volcanic rock that is found in places such as Hawaii and Iceland. When fresh, it’s highly porous, with many pores, cracks, and fractures running through the rock. The material is also rich in iron, calcium, and magnesium. When these elements come in contact with fluid that is rich in carbon dioxide, they can dissolve and mix with CO2, and eventually form a new carbon-based mineral such as calcite or dolomite.

A project based in Iceland and run by the company CarbFix is currently injecting CO2-rich water into the region’s underground basalt to see how much of the gas can be converted and stored as minerals in the rock. The company’s trials have shown that more than 95 percent of the CO2 injected into the ground turns into minerals within two years. The project is proving that the chemistry works: CO2 can be stored as stone.

But the MIT team wondered how this mineralization process would change the basalt itself and its capacity to store carbon over time.

“Most studies investigating carbon mineralization have focused on optimizing the geochemistry, but we wanted to know how mineralization would affect real reservoir rocks,” Peč says.

Rocky X-rays

The team set out to study how the permeability and porosity of basalt changes as carbonate-rich fluid is pumped into and mineralized throughout the rock.

“Porosity refers to the total amount of open space in the rock, which could be in the form of vesicles, or fractures that connect vesicles, or even areas between sand grains,” Simpson explains. “Because there is so much variability in porosity patterns, there is no one-to-one relationship between porosity and permeability. You could have a lot of pores that are not necessarily connected. So, even if 20 percent of the rock is porous, if they’re not connected, then permeability would be zero.”

“The details of that are important to understand for all these problems of injecting fluids into the subsurface,” Peč emphasizes.

For their experiments, the team used samples of basalt that Peč and others collected during a trip to Iceland in 2023. They placed small samples of basalt in a custom-built holder that they connected to two tubes, through which they flowed two different fluids, each containing a solution that, when mixed, quickly forms carbonate minerals. The team chose this combination of fluids in order to speed up the mineralization process.

In the actual process of injecting CO2 into the ground, CO2 is mixed with water. When it is pumped through rock, the fluid first goes through a “dissolution” phase, in which it draws elements such as iron, calcium, and magnesium out from the basalt and into the CO2-rich fluid. This dissolution process can take some time, before the mineralization process, in which CO2 mixes with the drawn-out elements, can proceed.

The researchers used two different fluids that quickly mineralize when combined, in order to skip over the dissolution phase and efficiently study the effects of the mineralization process. The team was able to see the mineralization process occurring within the rock, at an unprecedented level of detail, by performing experiments inside an X-ray CT scanner. The team set up their experiment in a CT scanner (similar to the ones used for medical imaging in hospitals) and took high-resolution, three-dimensional snapshots of the basalt at frequent intervals over several days to weeks as they flowed the fluids through.

Their imaging revealed how the pores, cracks, and crevices in the rock evolved, and filled in with minerals as the fluid flowed through over time. Over multiple experiments, they found that the rock’s permeability quickly dropped within a day, by an order of magnitude. The rock’s porosity, however, decreased at a much slower rate. At the end of the longest-duration experiments, only about 5 percent of the original pore space was filled with new minerals.

“Our findings tell us that the minerals are initially forming in really small microcracks that connect the bigger pore spaces, and clogging up those spaces,” Simpson says. “You don’t need much to clog up the tiny microfractures. But when you do clog them up, that really drops the permeability.”

Even after the initial drop in permeability, however, the team could continue to flow fluid through, and minerals continued to form in tight spaces within the rock. This suggests that even when it seems like an underground reservoir is full, it might still be able to store more carbon.

The researchers also monitored the rock with ultrasonic sensors during each experiment and found that the sensors could track even small changes in the rock’s porosity. The less porous, or more filled in, the rock was with minerals, the faster sound waves traveled through the material. These results suggest that seismic waves could be a reliable way to monitor the porosity of underground rocks, and ultimately their capacity to store carbon.

“Overall, we think that carbon mineralization seems like a promising avenue to permanently store large volumes of CO2,” Peč concludes. “There are plenty of reservoirs and they should be injectable over extended periods of time if our results can be extrapolated.”

This work was supported by MIT’s Advanced Carbon Mineralization Initiative funded by Beth Siegelman SM ’84 and Russ Siegelman ’84, with additional funding from the Chan-Zuckerberg Foundation.

A winning formula for student project teams at MIT

Thu, 03/05/2026 - 5:30pm

When Francis Wang ’21, MEng ’22 first joined the MIT Edgerton Center’s Solar Electric Vehicle Team (SEVT), his approach to engineering projects was “to focus my energy and attention on a tidy problem with neat boundaries that I could completely control.”

“But on Solar Car, I realized it takes a very different mindset to manage a substantial project with many moving pieces. It takes engineering leadership,” he recalls.

Wang was determined to strengthen his leadership skills. When he became Solar Car captain, he applied and was accepted into the Gordon Engineering Leadership (GEL) Program.

GEL’s courses and hands-on labs equip students with capabilities they need to lead and contribute to complex, real-world engineering challenges. The one- or two-year program for juniors and seniors complements MIT’s technical education, teaching teamwork, leadership, and communication skills in an engineering context. GEL students also benefit from personalized coaching, mentoring, industry networking, and career support throughout their professional lives.

“Before GEL, I saw the leadership parts of my role as a necessary evil to get to the actual interesting parts, which was the engineering,” says Wang. “The GEL Program gave me an understanding of how engineering leadership is crucial, because in the real world any project worth working on is larger than the scope of an individual engineer.”

In GEL he improved capabilities such as decision-making, taking initiative, and negotiating. He became a more effective SEVT team captain, able to navigate the challenges of taking an engineering project from concept to completion.

“It was often the case that the challenges I faced on Solar Car were not solely technical, involving aspects of communication, coordination, and negotiation. From GEL, I had the framework and the language to approach them,” says Wang.

Each year, 30-40 Edgerton students are accepted into the GEL Program. They come from a variety of teams and clubs including Arcturus, Assistive Technology Club, ChemE Club, Combat Robotics Club, Design Build Fly (DBF), Design for America, Electric Vehicle Team, Engineers Without Borders, First Nations Launch, MIT Electronics Research Society (MITERS), Motorsports, Robotics Team, Rocket Team, and Solar Electric Vehicle Team (SEVT).

“MIT’s best engineering students have GEL training and authentic project management experience with our competition teams,” says Professor J. Kim Vandiver, director of the Edgerton Center.

Edgerton project teams are entirely student-run organizations responsible for all levels of project and team management including fundraising, recruiting, designing, testing, risk mitigation, and project validation. The most successful teams have skilled leaders.

“Many of the excellent Edgerton project team students admitted to GEL are team or sub-team leaders who credit their GEL experience, particularly the experiential learning component, with improving their leadership skills,” says Leo McGonagle, executive director of GEL.

“It’s a win-win-win. GEL gets hard-working, motivated Edgerton Program students who are intent on self-development and improvement. Edgerton project teams often perform better with leaders who are GEL-trained. And the students gain leadership, teamwork, and communication abilities that they can use beyond their project team — in their capstones, course projects, internships, and jobs after MIT,” says McGonagle.

The connection between GEL and Edgerton becomes especially clear when students begin to take ownership of project milestones.

“When you become the leader of a technical project, no one gives you a roadmap to team success,” says senior Hailey Polson, former captain of First Nations Launch team. “Technical expertise is not enough to leverage the talent and skills of an entire team or the ability to coordinate a multifaceted project; that’s where the tools, skills, and leadership theory I learned in GEL helped me bridge the gap between knowing how to accomplish our goals and actually leading my team successfully.”

Faris Elnager ’25 served as testing lead on the Motorsports team, which designs, manufactures, and competes with a formula-style electric race car every year.

“Making tough decisions was something that I learned in GEL. On Motorsports, I had to make high-stakes decisions about testing time that affected how we performed at a competition,” he says.

He found that GEL’s weekly Engineering Leadership Labs were a way to test for himself specific leadership capabilities that he could use to improve his Motorsports team.

“One of the most useful skills from GEL was evaluating your stakeholders and learning how to balance their needs. I remember thinking, we’re doing this right now in the [GEL] lab, and then we’re going back to the [Edgerton] shop to do this for real!” says Elnager. “It’s like a positive feedback loop. GEL labs make you better on project teams, and project teams make you better in GEL.”

Now a startup co-founder, Elnager says that the communication skills that he learned through Motorsports and GEL have been critical to his company’s early success. “You can build the best tech in the world. If you can’t pitch it to people, you’re never going to raise any money. Being able to explain a technical project to anyone, whether they're an investor or someone in your industry, is something that’s incredibly valuable.”

Adrienne Lai ’25 served as both mechanical lead and then captain of the Solar Electric Vehicle Team. She recalls how her GEL training would kick in on race day.

“It’s quite tricky to be captain of a build team, because there’s no adult to tell you what to do. You have to figure it all out for yourself. When you’re competing, it can be very chaotic. You are trying to maximize a score by driving more miles, but that comes with a trade-off of spending energy or ending the day in a more rural area, or with less sun, so there are a lot of trade-offs to consider. Sometimes someone just has to make a decision. I was very comfortable doing that because I had learned how to take initiative, which is one of the GEL capabilities,” she says.

Now a course assistant in GEL, Lai helps design scenarios that enable GEL students to become better and more resilient leaders. She particularly enjoys playing the role of an uncooperative supplier.

“We close our store randomly. We don’t have what they need. We won’t tell them what we have,” she laughs. “Students get very frustrated. They think that we’re just being mean. But from a real-world perspective, that is all very true. It simulates unpredictability, which is important not just in a job, but in life.”

The value of the engineering leadership skills learned in GEL and honed on Edgerton project teams carries forward into industry, graduate studies, and entrepreneurial ventures.

“GEL preparation, coupled with authentic project management on a competition team, prepares MIT students for great careers in industry,” says Vandiver.

Henry Smith ’25 says he still relies on skills such as negotiation, communication, and understanding stakeholder needs that he used when he was a Motorsports mechanical lead.

“I was doing high-level management, planning, and organization on the team. Being in the GEL Program really increased my value for the team and helped me be prepared to enter the job field. When I graduated, I wasn’t worried about being ready or not. It was a definite yes,” says Smith.

As project teams continue to address ambitious engineering challenges, the synergy between Edgerton and the Gordon Engineering Leadership (GEL) Program ensures that students graduate prepared not only to be strong technical contributors, but also confident leaders ready to tackle complex engineering problems in the real world.

New insights into a hidden process that protects cells from harmful mutations

Thu, 03/05/2026 - 5:15pm

Some genetic mutations that are expected to completely stop a gene from working surprisingly cause only mild or even no symptoms. Researchers in previous studies have discovered one reason why: Cells can ramp up the activity of other genes that perform similar functions to make up for the loss of an important gene’s function. 

A new study published Feb. 12 in the journal Science by researchers in the lab of Jonathan Weissman, an MIT professor of biology and Whitehead Institute for Biomedical Research member, now reveals insights into how cells can coordinate this compensation response.

Cells are constantly reading instructions stored in DNA. These instructions, called genes, tell them how to make the many proteins that carry out complex processes needed to sustain life. But first, they need to make a temporary copy of these genetic instructions called messenger RNA, or mRNA.

As part of normal maintenance, cells routinely break down these temporary messages. This process helps control gene activity — or how much protein is made from a given gene — and ensures that old or unnecessary messages don’t accumulate. Cells also destroy faulty mRNAs that contain errors. These messages, if used, could produce damaged proteins that clump together and interfere with normal cellular processes.

In 2019, external studies suggested that this cleanup serves as more than just a quality-control check. Researchers showed that when faulty mRNAs are broken down, the breakdown can signal cells to activate the compensation response. Those studies also suggested that cells decide which backup genes to turn up based on how closely those genes resemble the mRNA being degraded.

But mRNA decay happens in the cytoplasm, outside the nucleus where DNA, and thus genes, are stored. So Mohamed El-Brolosy, a postdoc in the Weissman Lab and lead author of the study, and colleagues wondered how two processes in different compartments of the cell could be connected. Understanding this mechanism in greater depth could enable the development of therapeutics that trigger it in a targeted fashion.

The researchers started by investigating a specific gene that scientists know triggers a compensation response when its mRNA is destroyed by causing a closely related gene to become more active. To find out which molecules within the cell aid this process, the researchers systematically switched other genes off, one at a time.

That’s when they found a protein called ILF3. When the gene encoding this protein was turned off, cells could no longer ramp up the activity of the backup gene following mRNA decay.

Upon further investigation, the researchers identified small RNA fragments — left behind when faulty mRNAs are destroyed — underlying this response. These fragments contain a special sequence that acts like an “address.” The team proposed that this address guides ILF3 to related backup genes that share the same sequence as the faulty mRNA.

In fact, when they introduced mutations in this sequence, the cells’ compensation response dropped, suggesting that the system relies on precise sequence matching to target the correct backup genes.

“That was very exciting for us,” says Weissman, who is also an investigator at the Howard Hughes Medical Institute. “It showed us that this isn’t a generic stress response. It’s a regulated system.”

The researchers’ findings point toward new therapeutic possibilities, where boosting the activity of a related gene could mitigate symptoms of certain genetic diseases. More broadly, their work characterizes a mysterious layer of gene regulation.

Recreating the forms and sounds of historical musical instruments

Thu, 03/05/2026 - 5:00pm

What if there were a way to create accurate replicas of ancient and historical instruments that could be played and heard? 

In late 2024, senior MIT postdoc Benjamin Sabatini wrote to MIT Professor Eran Egozy to ask just that, proposing a collaborative research project between the Center for Materials Research in Archeology and Ethnology (CMRAE) and the MIT School of Humanities, Arts, and Social Sciences (SHASS) to CT scan, chemically and structurally characterize, and produce replicas of the ancient and historical musical instruments housed at the Museum of Fine Arts, Boston (MFA).

He was soon introduced to Mark Rau, a newly hired MIT professor in music technology and electrical engineering. Sharing similar interests, the two together contacted Jared Katz, the Pappalardo Curator of Musical Instruments at the MFA, to propose a cross-institutional project. Rau, an avid museum-goer, particularly of musical instrument collections, has always wanted to hear the instruments on display, commenting that “my biggest qualm is often there are no accompanying audio examples. I want to hear these instruments; I want to play these instruments.” 

Katz, fortuitously, specializes in ancient musical practices and has developed a technique for 3D scanning and printing playable replicas of ancient instruments for his research. He had long dreamed of having access to a CT scanner to better understand how ancient instruments were constructed. The MFA was also an ideal institution for the project, since, according to Katz, the MFA’s musical instrument collection began in 1917 and has since grown to just over 1,450 instruments from six continents, with the earliest dating to approximately 1550 BCE. 

Soon after, Rau and Sabatini applied for and received funding from the MIT Human Insight Collaborative (MITHIC), with Katz’s support. The team of five, including Nate Steele, program associate in the MFA’s Department of Musical Instruments, and MIT postdoc Jin Woo Lee, now meets regularly at the MFA to scan and acoustically measure the instruments.

Using a CT scanner from Lumafield, a company founded by MIT alumni, the team measures both internal and external dimensions. When combined with non-destructive vibration and acoustic testing and numerical simulations, these measurements are used to digitally replicate the instruments’ sound accurately. 

“For example, if we’re trying to recreate a violin, we can use an impact hammer — a very small hammer with a transducer in it — so we’re imparting a known force signal into the instrument, and then measure the resulting [surface] vibrations with a laser Doppler vibrometer,” says Rau.
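Rau’s description amounts to estimating a frequency response: comparing the spectrum of the measured vibration to the spectrum of the known hammer force. A minimal sketch of that idea, using synthetic stand-in signals rather than the team’s real measurements (the sample rate, decay constants, and 440 Hz resonance are all illustrative assumptions, not values from the project):

```python
import numpy as np

# Hypothetical sketch of frequency-response estimation from an impact-hammer
# force signal and a vibrometer recording. All signals here are synthetic.
fs = 48_000                        # sample rate in Hz (assumed)
t = np.arange(fs) / fs             # one second of samples

# Stand-ins: a short force impulse and a decaying "instrument body" response.
force = np.exp(-t * 2000)                                  # hammer tap
response = np.exp(-t * 50) * np.sin(2 * np.pi * 440 * t)   # measured vibration

# H(f) = cross-spectrum / input auto-spectrum (the classic H1 estimator).
F = np.fft.rfft(force)
X = np.fft.rfft(response)
H = (np.conj(F) * X) / (np.conj(F) * F + 1e-12)

freqs = np.fft.rfftfreq(len(t), d=1 / fs)
peak = freqs[np.argmax(np.abs(H))]
print(f"strongest resonance near {peak:.0f} Hz")
```

With real hammer and vibrometer recordings, one would average over repeated taps and window the spectra to reduce noise, but the core idea of dividing the output spectrum by the input spectrum is the same.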

The team then uses 3D-printed copies of the instruments to create plaster mold negatives, into which slip is cast to physically replicate instruments such as the Paracas whistle, a ceramic artifact from Peru dating from 600-175 BCE. The team demonstrated a playable replica at the MITHIC Annual Event in November. They also intend to build replicas of wooden instruments from old-growth wood in collaboration with local luthiers.

Sabatini, a member of CMRAE, sees the humanistic implications of the project and the importance of studying the instruments from a materials and archaeological perspective, that is, exploring and understanding the cultures involved in their production. “[From our] perspective, we want to understand the people who made these instruments through both the materials that they’re made of, but also the sound that they have,” he says.

With his team of Undergraduate Research Opportunities Program (UROP) students, including Irene Dong and Mouhammad Seck, Sabatini reproduced several ancient and historical clay instruments in the CMRAE archaeology lab, including the Paracas whistle, which was showcased at the MITHIC event.

So far, the team has scanned approximately 30 instruments from the MFA’s collection, with the goal of scanning at least 100 instruments over the duration of the project, documenting them, and supporting future study. The data from the scans are used to reconstruct the instruments, both physically and in software, matching their physical form and sound.

“They’re both visually beautiful and striking objects, but they are meant to be heard,” Katz says, adding that his “hope for this research is to provide us with a way to protect the original instrument while still allowing them to be heard and experienced in the way they were intended to be experienced.”

Katz also sees potential for outreach and community engagement through these playable replicas, a goal written into the project’s proposal. “[I]t shows how powerful it can be when art and science come together to create new understandings and to help us reactivate these instruments in exciting ways,” he says.

Students have also been drawn to the project, including Victoria Pham, a second-year undergraduate in materials science and engineering, who is working with Sabatini as a UROP student. Pham was “drawn to this project because I love history,” she says. “I love wandering through the halls of the MFA and immersing myself in the descriptions of paintings and artifacts. I find learning about ancient peoples to be fascinating, especially in how their legacy affects us today.”

Her work involves finite element modeling of a Veracruz poly-globular flute, dating to 500-900 CE, to investigate its acoustics non-destructively. She notes that “[m]y work is fulfilling because I was able to learn new software and problem-solve to improve my model, which was very satisfying.”

Pham thinks that “contributing to the new, budding field of music technology scratches an itch in my brain, and I hope that my work inspires others to get interested in archaeology, material science, or music technology.”

Alexander Mazurenko, a second-year undergraduate majoring in music and mathematics, has also been working on the project. He began last summer and continued during this year's Independent Activities Period in January.

Mazurenko notes that his involvement in this project has furthered his interdisciplinary education at MIT, commenting that “[t]he opportunity to participate in this UROP with Professor Rau was the perfect chance to begin to work in the intersection of my passions.” His and Pham’s work will be presented at upcoming conferences and is expected to produce academic papers under the guidance of Sabatini and Rau.

For one learner, online MIT courses are “like getting a Ferrari for the price of an electric scooter”

Thu, 03/05/2026 - 4:50pm

As a professional mechanical engineer, Badri Ratnam was inspired when MIT started offering massive open online courses (MOOCs) in engineering and science in 2012. He wondered if he was up to the challenge of solving problem sets and successfully completing exams from MIT.

Ratnam first began his journey with the course 8.MReVx/8.MReV (Mechanics ReView), and he hasn’t looked back since. As he grew in his career in mechanical design and computer-aided engineering, he also completed nearly 40 MITx courses in physics, mechanical engineering, and materials science. 

Part of MIT Open Learning, MITx offers free online courses across a wide variety of subjects to learners around the world. Learners may also opt for the certificate track for a low fee. 

Ratnam has worked for companies such as Freudenberg e-Power Systems, Siemens, GE, and Westport Fuel Systems. His continued learning through MITx courses, as well as courses offered by other universities, has expanded his expertise to include areas such as physics, mechanics of materials, transport phenomena, failure and root cause analysis, validation and verification testing, vibration signal processing, certification and compliance, statistical quality control, manufacturing, reliability, supplier selection, and more.

“There are many different learning styles,” says Ratnam. “Some people might need to be in a classroom, and others might be able to learn entirely on their own from a textbook. Personally, I benefit from some amount of structure, including having timelines and deadlines, as well as assignments and discussion forums. With MITx, there is also the excitement of the rigor that can be a boost of adrenaline — trying to see whether you can tackle some of the toughest material, presented by a top institution.”

Supplementing engineering education with extensive course offerings

Ratnam earned a bachelor’s degree in engineering from the University of Delhi. He says during his undergraduate program he tended to study the night before exams, and was “more focused on passing the subject than deep learning.”

He followed his undergrad studies with a master of science degree in mechanical engineering from the University of South Florida and an MS in computational and applied mathematics from Simon Fraser University in British Columbia. Even with all of his degrees, he felt that he needed to revisit the engineering subjects he had initially learned as an undergraduate student, pursuing online courses to review the fundamentals and gain greater understanding and mastery.

The MITx courses Ratnam has taken have covered many different areas within engineering, physics, mathematics, supply chains, and manufacturing. He has recently completed Vibrations and Waves, taught by Yen-Jie Lee, Alex Shvonski, and Michelle Tomasik.

“It’s an 18-week class with over 40 lessons, 13 assignments, and three exams, all designed very deliberately. I don’t think I could have ever learned this very difficult subject without this structure,” says Ratnam. “It’s also important to note that I paid less than $100 for this class. MITx does not follow the dictum that ‘you get what you pay for.’ It’s like getting a Ferrari for the price of an electric scooter.”

Ratnam has also recently finished Information Entropy: Energy and Exergy, taught by former MIT Open Learning dean for digital learning Krishna Rajagopal, Peter Dourmashkin, and Aidan MacDonagh, as well as Shvonski and Tomasik.

Although Ratnam says he can’t pick a favorite course — and is hard-pressed to even pick a few favorites of the many MITx courses he has taken — he says he has especially liked these recent courses and Elements of Structures, taught by Alexie M. Kolpak and Simona Socrate. In addition to the many MITx courses he has taken, he has also completed a few MIT Professional Education programs in smart manufacturing and design. 

“As I’ve taken more and more courses, I’ve learned to never fear learning new things and exploring new areas,” says Ratnam. “I used to think of more unfamiliar subjects and feel a little terrified, not knowing where to start, but I don’t feel that any more. I know that with some time and effort, I can pick up new skills and knowledge.”

Ratnam has found the discussion forums for MITx courses to be especially useful to the learning process.

“This is where the rigorous, engaging, yet automated, courses come to life,” says Ratnam. “Learners from all over the world help each other in the problem sets and discuss their conceptual doubts. And the forums are diligently monitored by MIT staff to ensure there are no open questions, and all errors are corrected.”

Increasing value in the workplace

Ratnam says that his MITx studies have deepened his understanding of a variety of engineering topics, giving him new insights to apply as an engineer.

“My learnings from MITx courses have really helped me gain the confidence of having a deep understanding on the theoretical side,” says Ratnam. “I’ve developed a wide base of knowledge and have become the go-to person whom people come to with questions.”

Ratnam has found MITx to be an excellent professional development resource. He notes that while many professionals have access to and complete courses offered at or through their workplaces, these usually aim to enable people to complete a very specific goal — such as performing a set task at work — within a short period of time. He says that with online courses, it’s a much different timeline and result.

“MITx classes have provided me with a much broader overview of engineering phenomena,” says Ratnam. “The benefit of the classes might not always come immediately. It can be a long gestation period for the information to all gel together. It’s much more of a profound and long-term benefit.”

Explore lifelong learning opportunities from the Institute, including online courses, resources, and professional programs, on MIT Learn.

New catalog more than doubles the number of gravitational-wave detections made by LIGO, Virgo, and KAGRA observatories

Thu, 03/05/2026 - 8:00am

When the densest objects in the universe collide and merge, the violence sets off ripples, in the form of gravitational waves, that reverberate across space and time, over hundreds of millions and even billions of years. By the time they pass through Earth, such cosmic ripples are barely discernible.

And yet, scientists are able to detect them, thanks to a global network of gravitational-wave observatories: the U.S.-based National Science Foundation Laser Interferometer Gravitational-Wave Observatory (NSF LIGO), the Virgo interferometer in Italy, and the Kamioka Gravitational Wave Detector (KAGRA) in Japan. Together, the observatories “listen” for faint wobbles in the gravitational field that could have come from far-off astrophysical smash-ups.

Now the LIGO-Virgo-KAGRA (LVK) Collaboration is publishing its latest compilation of gravitational-wave detections, presented in a forthcoming special issue of Astrophysical Journal Letters. From the findings, it appears that the universe is echoing all over with a kaleidoscope of cosmic collisions.

The LVK’s Gravitational-Wave Transient Catalog-4.0 (GWTC-4) comprises detections of gravitational waves from a portion of the observatories’ fourth and most recent observing run, which occurred between May 2023 and January 2024. During this nine-month period, the observatories detected 128 new gravitational-wave “candidates,” meaning that the signals are likely from extreme, far-off astrophysical sources. (The LVK has detected about 300 mergers so far in the fourth run, but not all of them yet appear in the catalog.)

This newest crop more than doubles the size of the gravitational-wave catalog, which previously contained 90 candidates compiled from all three previous observing runs.

“The beautiful science that we are able to do with this catalog is enabled by significant improvements in the sensitivity of the gravitational-wave detectors as well as more powerful analysis techniques,” says LVK member Nergis Mavalvala, who is dean of the MIT School of Science and the Curtis and Kathleen Marble Professor of Astrophysics.

“In the past decade, gravitational wave astronomy has progressed from the first detection to the observation of hundreds of black hole mergers,” says Stephen Fairhurst, a professor at Cardiff University and LIGO Scientific Collaboration spokesperson. “These observations enable us to better understand how black holes form from the collapse of massive stars, probe the cosmological evolution of the universe and provide increasingly rigorous confirmations of the theory of general relativity.”

“Pushing the edges”

Black holes are created when all the matter in a dying star collapses into a single point, making them among the densest objects in the universe. They often form in pairs, bound together by gravitational attraction. As they spiral toward each other, they emit enormous amounts of energy in the form of gravitational waves before merging into a single, more massive black hole.

A binary black hole was the source of the very first gravitational-wave detection, made by NSF’s LIGO observatories in 2015, and colliding black holes are the source of many of the gravitational waves detected since then. Such “bread-and-butter” binaries typically consist of two black holes of similar size (usually several tens of times more massive than the sun) that merge into one larger black hole.

Gravitational waves can also be produced by the collision of a black hole with a neutron star, which is an extremely dense remnant core of a massive star. While the collision of two black holes only produces gravitational waves, a smash-up involving a neutron star can also generate light, which provides more information about the event that scientists can probe. In its first three observing runs, the LVK observatories detected signals from a handful of collisions involving a black hole and neutron star, as well as two collisions between two neutron stars.

The newest detections published today reveal a greater variety of binaries that produce gravitational waves. Beyond the bread-and-butter black hole binaries, the updated catalog includes the heaviest black hole binary detected to date; a binary with asymmetric, lopsided masses; and a binary in which both black holes have exceptionally high spins. The catalog also holds two black hole-neutron star binaries.

“The message from this catalog is: We are expanding into new parts of what we call ‘parameter space’ and a whole new variety of black holes,” says co-author Daniel Williams, a research fellow at the University of Glasgow and a member of the LVK. “We are really pushing the edges, and are seeing things that are more massive, spinning faster, and are more astrophysically interesting and unusual.”

Unusual signals

The LIGO, Virgo, and KAGRA observatories detect gravitational waves using L-shaped, kilometer-scale instruments, called interferometers. Scientists send laser light down the length of each tunnel and precisely measure the time it takes each beam to return to its source. Any slight difference in their timing can mean that a gravitational wave passed through and minutely wobbled the laser’s light.
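A quick back-of-the-envelope calculation shows how delicate that timing measurement is. The strain value below is a representative order of magnitude for detected signals, not a figure from a specific detection:

```python
# Illustrative arithmetic only: how much does a typical gravitational wave
# change a LIGO arm, and what timing difference does that imply?
L_arm = 4_000.0          # LIGO arm length in meters
h = 1e-21                # representative gravitational-wave strain amplitude
c = 299_792_458.0        # speed of light, m/s

delta_L = h * L_arm              # change in arm length: ~4e-18 m
delta_t = 2 * delta_L / c        # round-trip timing difference, seconds

print(f"arm length change: {delta_L:.1e} m")
print(f"timing difference: {delta_t:.1e} s")
```

That arm-length change is far smaller than the width of a single proton, which is why the interferometers must be so exquisitely isolated from ordinary vibration.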

For the first segment of the LVK’s fourth observing run, gravitational-wave detections were made using only LIGO’s identical interferometers — one located in Hanford, Washington, and the other in Livingston, Louisiana. Recent upgrades to LIGO’s detectors enabled them to search for signals from binary neutron stars as far out as 360 megaparsecs, or about 1 billion light-years away, and for signals from binaries including black holes tens of times farther away.

“You can’t ever predict when a gravitational wave is going to come into your detector,” says co-author and LVK member Amanda Baylor, a graduate student at the University of Wisconsin at Milwaukee who was involved in the signal search process. “We could have five detections in one day, or one detection every 20 days. The universe is just so random.”

Among the more unusual signals that LIGO detected in the first phase of the O4 observing run was GW231123_135430, which is the heaviest black hole binary detected to date. Scientists estimate that the signal arose from the collision of two heavier-than-normal black holes, each roughly 130 times as massive as the sun. (Most of the detected merging black holes are around 30 solar masses.) The much heavier black holes of GW231123_135430 suggest that each may be a product of a prior collision of lighter “progenitor” black holes.

Another standout is GW231028_153006, a black hole binary with the highest inspiral spin, meaning that both black holes appear to be spinning very fast, at about 40 percent of the speed of light. Scientists suspect that these black holes, too, were products of previous mergers that spun them up as they formed from two smaller, inspiraling black holes.

The O4 run also detected GW231118_005626 — an unusually lopsided pair, with one black hole twice as massive as the other. 

“One of the striking things about our collection of black holes is their broad range of properties,” says co-author LVK member Jack Heinzel, an MIT graduate student who contributed to the catalog’s analysis. “Some of them are over 100 times the mass of our sun, others are as small as only a few times the mass of the sun. Some black holes are rapidly spinning, others have no measurable spin. We still don’t completely understand how black holes form in the universe, but our observations offer a crucial insight into these questions.”

Cosmic connections

From the newest gravitational-wave detections, scientists have begun to make connections about the properties of black holes as a population.

“For instance, this dataset has increased our belief that black holes that collided earlier in the history of the universe could more easily have had larger spins than the ones that collided later,” says LVK member Salvatore Vitale, associate professor of physics at MIT and member of the MIT LIGO Lab.

This idea raises interesting questions about what sort of conditions could have spun up black holes in the early universe.

The new detections have also allowed scientists to test Albert Einstein’s general theory of relativity, which describes gravity as a geometric property of space and time.

“Black holes are one of the most iconic and mind-bending predictions of general relativity,” says co-author and LVK member Aaron Zimmerman, associate professor of physics at the University of Texas at Austin, adding that when black holes collide, they “shake up space and time more intensely than almost any other process we can imagine observing. When testing our physical theories, it’s good to look at the most extreme situations we can, since this is where our theories are most likely to break down, and where we have the best chance of discovery.”

Scientists put Einstein’s theory to the test using GW230814_230901, which is one of the “loudest” gravitational-wave signals observed to date. The surprisingly clear signal gave scientists a chance to probe it in detail, to see if any aspects of the signal might deviate from what Einstein’s theory predicts. This signal pushed the limits of their tests of general relativity, passing most with flying colors but illustrating how environmental noise can challenge others in such an extreme scenario.

“So far, the theory is passing all our tests,” Zimmerman says. “But we’re also learning that we have to make even more accurate predictions to keep up with all the data the universe is giving us.”

The updated catalog is also helping scientists to nail down a key mystery in cosmology: How fast is the universe expanding today? Scientists have tried to answer this by measuring a rate known as the Hubble constant. Various methods, using different astrophysical sources, have given conflicting answers.

Gravitational waves offer an alternative way to measure the Hubble constant, since scientists are able to work out, in relatively straightforward fashion, how far these waves traveled from their source.

“Merging black holes have a really unique property: We can tell how far away they are from Earth just from analyzing their signals,” says co-author and LVK member Rachel Gray, a lecturer at the University of Glasgow who was involved in the cosmological interpretations of the catalog’s data. “So, every merging black hole gives us a measurement of the Hubble constant, and by combining all of the gravitational wave sources together, we can vastly improve how accurate this measurement is.”

By analyzing all the gravitational-wave detections in the LVK’s entire catalog, scientists have come up with a new, independent estimate of the Hubble constant, which suggests the universe is expanding at a rate of 76 kilometers per second per megaparsec (a distance of about 3.26 million light-years).
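The Hubble constant relates a source’s recession velocity to its distance through v = H₀ × d, so the arithmetic behind such an estimate is straightforward. In this sketch, the 100-megaparsec source distance is purely illustrative:

```python
# Sketch of the Hubble-constant relation described in the article.
H0 = 76.0                 # km/s per megaparsec (the LVK catalog estimate)
MPC_IN_LY = 3.26e6        # one megaparsec in light-years

distance_mpc = 100.0                  # illustrative source distance
velocity = H0 * distance_mpc          # recession velocity, km/s

print(f"recession velocity: {velocity:.0f} km/s")
print(f"100 Mpc is about {distance_mpc * MPC_IN_LY:.2e} light-years")
```

Measuring distance from the gravitational waveform and velocity from the redshift of the source, and combining many such events, is what lets the collaboration tighten this estimate over time.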

“It’s still early days for this method, and we expect to significantly improve our precision as we detect more gravitational wave sources,” Gray says.

“Each new gravitational-wave detection allows us to unlock another piece of the universe’s puzzle in ways we couldn’t just a decade ago,” says Lucy Thomas, who led part of the catalog’s analysis, and is a postdoc in the Caltech LIGO Lab. “It’s incredibly exciting to think about what astrophysical mysteries and surprises we can uncover with future observing runs."

Nitrous oxide, a product of fertilizer use, may harm some soil bacteria

Wed, 03/04/2026 - 9:00am

Plant growth is supported by millions of tiny soil microbes competing and cooperating with each other as they perform important roles at the plant root, including improving access to nutrients and protecting against pathogens. As a byproduct of their metabolism, soil microbes can also produce nitrous oxide, or N2O, a potent greenhouse gas that has mostly been studied for its impact on the climate. While some N2O occurs naturally, its production can spike due to fertilizer application and other factors.

While it has long been believed that nitrous oxide doesn’t meaningfully interact with living organisms, a new paper by two MIT researchers shows that it may in fact shape microbial communities, making some bacterial strains more likely to grow than others.

Based on the prevalence of the biological processes disrupted by nitrous oxide, the researchers estimate about 30 percent of all bacteria with sequenced genomes are susceptible to nitrous oxide toxicity, suggesting the substance could play an important and underappreciated role in the intricate microbial ecosystems that influence plant growth.

The researchers have published their findings today in mBio, a journal of the American Society for Microbiology. If their lab findings carry over to agricultural settings, it could influence the way farmers go about everyday tasks that expose crops to spikes in nitrous oxide, such as watering and fertilization.

“This work suggests N2O production in agricultural settings is worth paying attention to for plant health,” says senior author Darcy McRose, MIT’s Thomas D. and Virginia W. Cabot Career Development Professor, who wrote the paper with lead author and PhD student Philip Wasson. “It hasn’t been on people’s radar, but it is particularly harmful for certain microbes. This could be another knock against N2O in addition to its climate impact. With more research, you might be able to understand how the timing of N2O production influences these microbial relationships, and that timing could be managed to improve crop health.”

A toxic gas

Nitrous oxide was shown to be toxic decades ago, when researchers realized it can deactivate vitamin B12 in the human body. Since then, it has mostly drawn attention as a long-lived greenhouse gas that can eat away at the ozone layer. But when it comes to agricultural settings, most people have assumed it doesn’t interact with organisms growing in the soil around the plant root, a region called the rhizosphere.

“In general, there’s an assumption that N2O is not harmful at all despite this history of published studies showing that it can be toxic in specific contexts,” says McRose, who joined the faculty of the Department of Civil and Environmental Engineering in 2022. “People have not extended that understanding to microbial communities in the rhizosphere.”

While some studies have shown nitrous oxide sensitivity in a handful of microorganisms, less is known about how it impacts the distribution of microbial communities at the plant root. McRose and Wasson sought to fill that research gap.

They started by looking at a ubiquitous process that cells use to grow called methionine biosynthesis. Methionine biosynthesis can be carried out by enzymes that are dependent on B12 — and by other enzymes that are not. Many bacteria have both types.

Using a well-studied microbe named Pseudomonas aeruginosa, the researchers genetically removed the enzyme that isn’t dependent on B12 and found the microbe became sensitive to nitrous oxide, with its growth harmed even by nitrous oxide it produced itself.

Next, the researchers looked at a synthetic microbial community derived from the plant Arabidopsis thaliana, finding that many root-associated microbes were also sensitive to nitrous oxide. Combining these sensitive microbes with nitrous oxide-producing bacteria hampered the sensitive microbes’ growth.

“This suggests that N2O-producing bacteria can affect the survival of their immediate neighbors,” Wasson explains. Together, the experiments confirmed the researchers’ suspicion that the production of nitrous oxide can hamper the growth of soil bacteria dependent on vitamin B12 to make methionine.

“These results suggest nitrous oxide producers shape microbial communities,” McRose says. “In the lab the result is very clear, and the work goes beyond just looking at a single organism. The co-culture experiments aren’t the same as a study in the field, but it’s a strong demonstration.”

From the lab to the farm

In farms, soil commonly experiences spikes of nitrous oxide for days or weeks from the addition of nitrogen fertilizer, rainfall, thawing, and other events. The researchers caution that their lab experiments are only the first step toward understanding how nitrous oxide affects microbial populations in agricultural settings.

Wasson calls the paper a proof of concept and plans to study agricultural soil next.

“In agricultural environments, N2O has been historically high,” Wasson says. “We want to see if we can detect a signature for this N2O exposure through genome sequencing studies, where the only microbes sticking around are not sensitive to N2O. This is the obvious next step.”

McRose says the findings could lead to a new way for researchers and farmers to think about nitrous oxide.

“What’s important and exciting about this case is it predicts that microbes with one version of an enzyme are going to be sensitive to N2O and those with a different version of the enzyme are not going to be sensitive,” McRose says. “This suggests that in the environment, exposure to N2O is going to select for certain types of organisms based on their genomic content, which is a highly testable hypothesis.”

The work was supported, in part, by the MIT Research Support Committee and an MIT Health and Life Sciences Collaborative (HEALS) Graduate Fellowship.

How some skills become second nature

Wed, 03/04/2026 - 12:00am

Expertise isn’t easy to pass down. Take riding a bike: A seasoned cyclist might talk a beginner through the basics of how to sit and when to push off. But other skills, like how hard to pedal to keep balanced, are more intuitive and harder to articulate. This implicit know-how is known as tacit knowledge, and very often, it can only be learned with experience and time.

But a team of MIT engineers wondered: Could an expert’s unconscious know-how be accessed, and even taught, to quickly bring a novice up to an expert’s level?

The answer appears to be “yes,” at least for a particular type of visual-learning task.

In a study published today in the Journal of Neural Engineering, the engineers identified tacit knowledge in volunteers who were tasked with classifying images of various shapes and patterns. As the volunteers were shown images to organize, the team recorded their eye movements and brain activity to measure their visual focus and cognitive attention, respectively.

The measurements showed that, over time, the volunteers shifted their focus and attention to a part of each image that made it easier to classify. However, when asked directly, the volunteers were not aware that they had made such a shift. The researchers concluded that this unconscious shift in attention and focus was a form of tacit knowledge that the volunteers possessed, even if they could not articulate it. What’s more, when the volunteers were made aware of this tacit knowledge, their accuracy in classifying images improved significantly.

The study is the first to directly show that visual attention can reveal unconscious, tacit knowledge during image classification tasks. It also finds for the first time that bringing this concealed knowledge to the surface can enhance experts’ performance.

While the results are specific to the study’s experiment, the researchers say they suggest that some forms of hidden know-how can be made explicit and applied to boost one’s learning experience. They suspect that tacit knowledge could be accessed for disciplines that require keen observation skills, including certain physical trades and crafts, sports, and image analysis, such as medical X-ray diagnoses.

“We as humans have a lot of knowledge, some that is explicit that we can translate into books, encyclopedias, manuals, equations. The tacit knowledge is what we cannot verbalize, that’s hidden in our unconscious,” says study author Alex Armengol-Urpi, a research scientist in MIT’s Department of Mechanical Engineering. “If we can make that knowledge explicit, we can then allow for it to be transferred more easily, which can help in education and learning in general.”

The study’s co-authors include Andrés F. Salazar-Gomez, research scientist at the MIT Media Lab; Pawan Sinha, professor of vision and computational neuroscience in MIT’s Department of Brain and Cognitive Sciences; and Sanjay Sarma, the Fred Fort Flowers (1941) and Daniel Fort Flowers (1941) Professor in Mechanical Engineering.

Hidden gaze

The concept of tacit knowledge is credited to the scientist and philosopher Michael Polanyi, who in the mid-20th century was the first to investigate the notion that “we know more than we can tell.” His insights revealed that humans can hold a form of knowledge that is internalized, almost second nature, and often difficult to express or translate to others.

Since Polanyi’s work, many studies have highlighted how tacit knowledge may play a part in perfecting certain skills, spanning everything from diagnosing medical images to discerning the sex of cats from images of their faces.

For Armengol-Urpi, these studies raised a question: Could a person’s tacit knowledge be revealed through unconscious signals, such as patterns in their eye movements? His PhD work focused on visual attention, and he had developed methods to study how humans focus their attention, by using cameras to follow the direction of their gaze, and electroencephalography (EEG) monitors to record their brain activity. In his research, he learned of a previous study that used similar methods to investigate how radiologists diagnose nodules in X-ray images. That study showed that the doctors unconsciously focused on areas of an image that helped them to correctly detect the nodules.

“That paper didn’t focus on tacit knowledge, but it suggested that there are some hidden clues in our gaze that could be explored further,” Armengol-Urpi says.

The shape of knowledge

For their new study, the team looked at whether they could identify signs of tacit knowledge from measurements of visual focus and attention. In their experiment, they asked 30 volunteers to look sequentially at over 120 images. The volunteers could look at each image for several seconds and were then asked to classify it as belonging to either group A or group B before being shown the next image.

Each image contained two simple shapes on either side of the image — a square, a triangle, a circle, and any combination of the three, along with different colors and patterns for each shape. The researchers designed the images such that they should be classified into one of two groups, based on an intricate combination of shape, color, and pattern. Importantly, only one side of each image was relevant for the classification.

The volunteers, however, were given no guidelines on how to classify the images. Therefore, for about the first half of the experiment, they were considered “novices,” and more or less guessed at their classifications. Over time, and many more images, their accuracy improved to a level that the researchers considered “expert.” Throughout the experiment, the team used cameras to follow each participant’s eye movements, as a measure of visual focus.

They also outfitted volunteers with EEG sensors to record their brain waves, which they used as a measure of cognitive attention. They designed each image to show two shapes, each of which flickered at different, imperceptible frequencies. They found they could identify where a volunteer’s attention landed, based on which shape’s flicker their brain waves synced up with.
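This frequency-tagging approach, often called steady-state visual evoked potential (SSVEP) analysis, can be sketched in a few lines: the EEG spectrum is compared at each tagged flicker frequency, and the attended shape is the one whose frequency dominates. The sketch below is illustrative, not the study's actual pipeline; the sampling rate, flicker frequencies, and single-channel synthetic signal are all assumptions.

```python
import numpy as np

def attended_frequency(eeg, fs, tags):
    """Return the tagged flicker frequency (Hz) that dominates the EEG spectrum.

    eeg  : 1-D array, a hypothetical single-channel EEG recording
    fs   : sampling rate in Hz (assumed value, not from the study)
    tags : candidate flicker frequencies in Hz
    """
    spectrum = np.abs(np.fft.rfft(eeg))
    freqs = np.fft.rfftfreq(len(eeg), d=1.0 / fs)
    # Spectral magnitude at the bin nearest each tagged frequency
    power = [spectrum[np.argmin(np.abs(freqs - f))] for f in tags]
    return tags[int(np.argmax(power))]

# Synthetic example: a strong 12 Hz component mimics attention locked
# to the shape flickering at 12 Hz (frequencies are illustrative).
np.random.seed(0)
fs = 250
t = np.arange(0, 4, 1.0 / fs)
eeg = 2.0 * np.sin(2 * np.pi * 12 * t) + 0.5 * np.sin(2 * np.pi * 15 * t)
eeg += 0.3 * np.random.randn(len(t))
print(attended_frequency(eeg, fs, [12, 15]))  # → 12
```

In a real experiment the same comparison would be made over short windows of multichannel EEG, but the core idea is this spectral contrast between the two tagged frequencies.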

For each volunteer, the team created maps of where their gaze and attention were focused, both during their novice and expert phases. Overall, these maps showed that in the beginning, the volunteers focused on all parts of an image as they tried to make sense of how to classify it. Toward the end, as they got a grasp of the exercise and improved their accuracy, their attention shifted to just one side of each image. This side happened to be the side that the researchers designed to be most relevant, while the other side was just random noise.
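A gaze map of the kind described above can be built by binning eye-tracker fixation coordinates into a 2-D grid and normalizing, so each cell holds the fraction of gaze landing there. This is a minimal sketch under assumed inputs (pixel coordinates, image size, and bin counts are all illustrative, not the study's parameters).

```python
import numpy as np

def gaze_heatmap(fixations, width, height, bins=(20, 20)):
    """Aggregate (x, y) fixation points into a normalized attention map.

    fixations : (N, 2) array of gaze coordinates in pixels, a hypothetical
                eye-tracker output; units and layout are assumptions.
    """
    xs, ys = np.asarray(fixations, dtype=float).T
    hist, _, _ = np.histogram2d(xs, ys, bins=bins,
                                range=[[0, width], [0, height]])
    return hist / hist.sum()  # each cell = fraction of all fixations

# Toy data: an "expert" whose gaze clusters on the left half of an
# 800x600 image, echoing the shift toward the informative side.
rng = np.random.default_rng(0)
fix = np.column_stack([rng.uniform(0, 400, 200), rng.uniform(0, 600, 200)])
hm = gaze_heatmap(fix, 800, 600)
print(hm[:10].sum())  # share of gaze on the left half (close to 1.0)
```

Comparing such maps between the novice and expert phases is what reveals the shift in where fixations concentrate.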

The maps showed that the volunteers picked up some knowledge of how to accurately classify the images. But when they were given a survey and asked to articulate how they learned the task, they always maintained that they focused on each entire image. It seemed their actual shift in focus was an unconscious, tacit skill.

“They were unconsciously focusing their attention on the part of the image that was actually informative,” Armengol-Urpi says. “So the tacit knowledge they had was hidden inside them.”

Going a step further, the team then showed each participant the maps of their gaze and attention, and how the maps changed from their novice to expert phases. When they were then shown additional images, the volunteers seemed to use this once-tacit knowledge, and further improved their classification accuracy.

“We are currently extending this approach to other domains where tacit knowledge plays a central role,” says Armengol-Urpi, who is exploring tacit knowledge in skilled crafts and sports such as glassblowing and table tennis, as well as in diagnosing medical imaging. “We believe the underlying principle — capturing and reinforcing implicit expertise through physiological signals — can generalize to a wide range of perceptual and skill-based domains.”

This research was supported, in part, by Takeda Pharmaceutical Company.
