MIT Latest News

Molecules that fight infection also act on the brain, inducing anxiety or sociability
Immune molecules called cytokines play important roles in the body’s defense against infection, helping to control inflammation and coordinating the responses of other immune cells. A growing body of evidence suggests that some of these molecules also influence the brain, leading to behavioral changes during illness.
Two new studies from MIT and Harvard Medical School, focused on a cytokine called IL-17, now add to that evidence. The researchers found that IL-17 acts on two distinct brain regions — the amygdala and the somatosensory cortex — to exert two divergent effects. In the amygdala, IL-17 can elicit feelings of anxiety, while in the cortex it promotes sociable behavior.
These findings suggest that the immune and nervous systems are tightly interconnected, says Gloria Choi, an associate professor of brain and cognitive sciences, a member of MIT’s Picower Institute for Learning and Memory, and one of the senior authors of the studies.
“If you’re sick, there’s so many more things that are happening to your internal states, your mood, and your behavioral states, and that’s not simply you being fatigued physically. It has something to do with the brain,” she says.
Jun Huh, an associate professor of immunology at Harvard Medical School, is also a senior author of both studies, which appear today in Cell. One of the papers was led by Picower Institute Research Scientist Byeongjun Lee and former Picower Institute research scientist Jeong-Tae Kwon, and the other was led by Harvard Medical School postdoc Yunjin Lee and Picower Institute postdoc Tomoe Ishikawa.
Behavioral effects
Choi and Huh became interested in IL-17 several years ago, when they found it was involved in a phenomenon known as the fever effect. Large-scale studies of autistic children have found that for many of them, their behavioral symptoms temporarily diminish when they have a fever.
In a 2019 study in mice, Choi and Huh showed that in some cases of infection, IL-17 is released and suppresses a small region of the brain’s cortex known as S1DZ. Overactivation of neurons in this region can lead to autism-like behavioral symptoms in mice, including repetitive behaviors and reduced sociability.
“This molecule became a link that connects immune system activation, manifested as a fever, to changes in brain function and changes in the animals’ behavior,” Choi says.
IL-17 comes in six different forms, and there are five different receptors that can bind to it. In their two new papers, the researchers set out to map which of these receptors are expressed in different parts of the brain. This mapping revealed that a pair of receptors known as IL-17RA and IL-17RB is found in the cortex, including in the S1DZ region that the researchers had previously identified. The receptors are located in a population of neurons that receive proprioceptive input and are involved in controlling behavior.
When a type of IL-17 known as IL-17E binds to these receptors, the neurons become less excitable, which leads to the behavioral effects seen in the 2019 study.
“IL-17E, which we’ve shown to be necessary for behavioral mitigation, actually does act almost exactly like a neuromodulator in that it will immediately reduce these neurons’ excitability,” Choi says. “So, there is an immune molecule that’s acting as a neuromodulator in the brain, and its main function is to regulate excitability of neurons.”
Choi hypothesizes that IL-17 may have originally evolved as a neuromodulator, and later on was appropriated by the immune system to play a role in promoting inflammation. That idea is consistent with previous work showing that in the worm C. elegans, IL-17 has no role in the immune system but instead acts on neurons. Among its effects in worms, IL-17 promotes aggregation, a form of social behavior. Additionally, in mammals, IL-17E is actually made by neurons in the cortex, including S1DZ.
“There’s a possibility that a couple of forms of IL-17 perhaps evolved first and foremost to act as a neuromodulator in the brain, and maybe later were hijacked by the immune system also to act as immune modulators,” Choi says.
Provoking anxiety
In the other Cell paper, the researchers explored another brain location where they found IL-17 receptors — the amygdala. This almond-shaped structure plays an important role in processing emotions, including fear and anxiety.
That study revealed that in a region known as the basolateral amygdala (BLA), the IL-17RA and IL-17RE receptors, which work as a pair, are expressed in a discrete population of neurons. When these receptors bind to IL-17A and IL-17C, the neurons become more excitable, leading to an increase in anxiety.
The researchers also found that, counterintuitively, if animals are treated with antibodies that block IL-17 receptors, it actually increases the amount of IL-17C circulating in the body. This finding may help to explain unexpected outcomes observed in a clinical trial of a drug targeting the IL-17RA receptor for psoriasis treatment, particularly regarding its potential adverse effects on mental health.
“We hypothesize that there’s a possibility that the IL-17 ligand that is upregulated in this patient cohort might act on the brain to induce suicide ideation, while in animals there is an anxiogenic phenotype,” Choi says.
During infections, this anxiety may be a beneficial response, keeping the sick individual away from others to whom the infection could spread, Choi hypothesizes.
“Other than its main function of fighting pathogens, one of the ways that the immune system works is to control the host behavior, to protect the host itself and also protect the community the host belongs to,” she says. “One of the ways the immune system is doing that is to use cytokines, secreted factors, to go to the brain as communication tools.”
The researchers found that the same BLA neurons that have receptors for IL-17 also have receptors for IL-10, a cytokine that suppresses inflammation. This molecule counteracts the excitability generated by IL-17, giving the body a way to shut off anxiety once it’s no longer useful.
Distinctive behaviors
Together, the two studies suggest that the immune system, and even a single family of cytokines, can exert a variety of effects in the brain.
“We have now different combinations of IL-17 receptors being expressed in different populations of neurons, in two different brain regions, that regulate very distinct behaviors. One is actually somewhat positive and enhances social behaviors, and another is somewhat negative and induces anxiogenic phenotypes,” Choi says.
Her lab is now working on additional mapping of IL-17 receptor locations, as well as the IL-17 molecules that bind to them, focusing on the S1DZ region. Eventually, a better understanding of these neuro-immune interactions may help researchers develop new treatments for neurological conditions such as autism or depression.
“The fact that these molecules are made by the immune system gives us a novel approach to influence brain function as means of therapeutics,” Choi says. “Instead of thinking about directly going for the brain, can we think about doing something to the immune system?”
The research was funded, in part, by Jeongho Kim and the Brain Impact Foundation Neuro-Immune Fund, the Simons Foundation Autism Research Initiative, the Simons Center for the Social Brain, the Marcus Foundation, the N of One: Autism Research Foundation, the Burroughs Wellcome Fund, the Picower Institute Innovation Fund, the MIT John W. Jarve Seed Fund for Science Innovation, Young Soo Perry and Karen Ha, and the National Institutes of Health.
Breakerspace image contest showcases creativity, perseverance
The MIT Department of Materials Science and Engineering Breakerspace transformed into an art gallery on March 10, with six easels arranged in an arc to showcase arresting images — black-and-white scanning electron microscope (SEM) images of crumpled biological structures alongside the brilliant hues of digital optical microscopy.
The images were the winning entries from the inaugural Breakerspace Microscope Image Contest, which opened in fall 2024. The contest invited all MIT undergraduates to train on the Breakerspace’s microscopic instruments, explore material samples, and capture images that were artistic, instructive, or technically challenging.
“The goal of the contest is to inspire curiosity and creativity, encouraging students to explore the imaging tools in the Breakerspace,” says Professor Jeffrey Grossman of the Department of Materials Science and Engineering (DMSE). “We want students to see the beauty and complexity of materials at the microscopic level, to think critically about the images they capture, and to communicate what they mean to others.”
Grossman was a driving force behind the Breakerspace, a laboratory and lounge designed to encourage MIT undergraduates to explore the world of materials.
The contest drew about 50 entries across four categories:
- Most Instructive, for images illustrating key concepts with documentation
- Most Challenging, requiring significant sample preparation
- Best Optical Microscope Image of a sample, rendered in color
- Best Electron Microscope Image, magnified hundreds or even thousands of times
Winners in the four categories received $500, and two runners-up received $100.
“By making this a competition with prizes, we hope to motivate more students to explore microscopy and develop a stronger connection to the materials science community at MIT,” Grossman says.
A window onto research
Amelia How, a DMSE sophomore and winner of the Most Instructive category, used an SEM to show how hydrogen atoms seep into titanium — a phenomenon called hydrogen embrittlement, which can weaken metals and lead to material failure in applications such as aerospace, energy, or construction. The image stemmed from How’s work in Associate Professor Cem Tasan’s research lab, through MIT’s Undergraduate Research Opportunities Program (UROP). She trained on the SEM for the contest after seeing an email announcement.
“It helped me realize how to explain what I was actually doing,” How says, “because the work that I’m doing is something that’s going into a paper, but most people won’t end up reading that.”
Mishael Quraishi, a DMSE senior and winner of Best SEM Image, captured the flower Alstroemeria and its pollen-bearing structure, the anther. She entered the contest mainly to explore microscopy — but sharing that experience was just as rewarding.
“I really love how electron images look,” Quraishi says. “But as I was taking the images, I was also able to show people what pollen looked like at a really small scale — it’s kind of unrecognizable. That was the most fun part: sharing the image and then telling people about the technique.”
Quraishi, president of the Society of Undergraduate Materials Scientists, also organized the event, part of Materials Week, a student-run initiative that highlights the department’s people, research, and impact.
Persistence in practice
The winner of the Most Challenging category, DMSE sophomore Nelushi Vithanachchi, gained not just microscopy experience, but also perseverance. The category called for significant effort in sample preparation — and Vithanachchi spent hours troubleshooting.
Her sample — a carving of MIT’s Great Dome in silicon carbide — was made using a focused ion beam, a tool that sculpts materials by bombarding them with ions, or charged atoms. The process requires precision, as even minor shifts can ruin a sample.
In her first attempt, while milling the dome’s façade, the sample shifted and broke. A second try with a different design also failed. She credits her UROP advisor, Aaditya Bhat from Associate Professor James LeBeau’s research group, for pushing her to keep going.
“It was four in the morning, and after failing for the third time, I said, ‘I’m not doing this,’” Vithanachchi recalls. “Then Aaditya said, ‘No, we’ve got to finish what we started.’” After a fourth attempt, using the lessons learned from the previous failures, they were finally able to create a structure that resembled the MIT dome.
Anna Beck, a DMSE sophomore and runner-up for Best Electron Microscope Image, had a much different experience. “It was very relaxed for me. I just sat down and took images,” she says. Her entry was an SEM image of high-density polyethylene (HDPE) fibers from an event wristband. HDPE is a durable material used in packaging, plumbing, and consumer goods.
Through the process, Beck gained insight into composition and microscopy techniques — and she’s excited to apply what she’s learned in the next competition in fall 2025. “In hindsight, I look at mine now and I wish I turned the brightness up a little more.”
Although 35 percent of the entries came from DMSE students, a majority — 65 percent — came from students in other majors or from first-year students.
With the first contest showcasing both creativity and technical skill, organizers hope even more students will take on the challenge, bringing fresh perspectives and discoveries to the microscopic world. The contest will run again in fall 2025.
“The inaugural contest brought in an incredible range of submissions. It was exciting to see students engage with microscopy in new ways and share their discoveries,” Grossman says. “The Breakerspace was designed for all undergraduates, regardless of major or experience level — whether they’re conducting research, exploring new materials, or simply curious about what something is made of. We’re excited to expand participation and encourage even more entries in the next competition.”
Lincoln Laboratory honored for technology transfer of hurricane-tracking satellites
The Federal Laboratory Consortium (FLC) has awarded MIT Lincoln Laboratory a 2025 FLC Excellence in Technology Transfer Award. The award recognizes the laboratory's exceptional efforts in commercializing microwave sounders hosted on small satellites called CubeSats. The laboratory first developed the technology for NASA, demonstrating that such satellites could work in tandem to collect hurricane data more frequently than previously possible and significantly improve hurricane forecasts. The technology is now licensed to the company Tomorrow.io, which will launch a large constellation of the sounder-equipped satellites to enhance hurricane prediction and expand global weather coverage.
"This FLC award recognizes a technology with significant impact, one that could enhance hourly weather forecasting for aviation, logistics, agriculture, and emergency management, and highlights the laboratory's important role in bringing federally funded innovation to the commercial sector," says Asha Rajagopal, Lincoln Laboratory's chief technology transfer officer.
A nationwide network of more than 300 government laboratories, agencies, and research centers, the FLC helps facilitate the transfer of technologies out of federal labs and into the marketplace to benefit the U.S. economy, society, and national security.
Lincoln Laboratory originally proposed and demonstrated the technology for NASA's TROPICS (Time-Resolved Observations of Precipitation structure and storm Intensity with a Constellation of SmallSats) mission. For TROPICS, the laboratory put its microwave sounders on low-cost, commercially available CubeSats for the first time.
Of all the technology used for sensing hurricanes, microwave sounders provide the greatest improvement to forecasting models. From space, these instruments detect a range of microwave frequencies that penetrate clouds, allowing them to measure 3D temperature, humidity, and precipitation in a storm. State-of-the-art instruments are typically large (the size of a washing machine) and hosted aboard $2 billion polar-orbiting satellites, which collectively may revisit a storm every six hours. If sounders could be miniaturized, laboratory researchers imagined, then they could be put on small satellites and launched in large numbers, working together to revisit storms more often.
The TROPICS sounder is the size of a coffee cup. The laboratory team worked for several years to develop and demonstrate the technology that resulted in a miniaturized instrument, while maintaining performance on par with traditional sounders for the frequencies that provide the most useful tropical cyclone observations. By 2023, NASA launched a constellation of four TROPICS satellites, which have since collected rapidly refreshed data of many tropical storms.
Now, Tomorrow.io plans to increase that constellation to a global network of 18 satellites. The resulting high-rate observations — under an hour — are expected to improve weather forecasts, hurricane tracking, and early-warning systems.
"This partnership with Tomorrow.io expands the impact of the TROPICS mission. Tomorrow.io’s increased constellation size, software pipeline, and resilient business model enable it to support a number of commercial and government organizations. This transfer to industry has resulted in a self-sustaining national capability, one that is expected to help the economy and the government for years to come," says Tom Roy, who managed the transfer of the technology to Tomorrow.io.
The technology transfer spanned 18 months. Under a cooperative research and development agreement (CRADA), the laboratory team adapted the TROPICS payload to an updated satellite design and delivered to Tomorrow.io the first three units, two of which were launched in September 2024. The team also provided in-depth training to Tomorrow.io and seven industry partners who will build, test, launch, and operate the future full commercial constellation. The remaining satellites are expected to launch before the end of this year.
"With these microwave sounders, we can set a new standard in atmospheric data collection and prediction. This technology allows us to capture atmospheric data with exceptional accuracy, especially over oceans and remote areas where traditional observations are scarce," said Rei Goffer, co-founder of Tomorrow.io, in a press release announcing the September launches.
Tomorrow.io will use the sounder data as input into their weather forecasts, data products, and decision support tools available to their customers, who range from major airlines to governments. Tomorrow.io's nonprofit partner, TomorrowNow, also plans to use the data as input to its climate model for improving food security in Africa.
This technology is especially relevant as hurricanes and severe weather events continue to cause significant destruction. In 2024, the United States experienced a near-record 27 disaster events that each exceeded $1 billion in damage, resulting in a total cost of approximately $182.7 billion, and that caused the deaths of at least 568 people. Globally, these storm systems cause thousands of deaths and billions of dollars in damage each year.
“It has been great to see the Lincoln Laboratory, Tomorrow.io, and industry partner teams work together so effectively to rapidly incorporate the TROPICS technology and bring the new Tomorrow.io microwave sounder constellation online,” says Bill Blackwell, principal investigator of the NASA TROPICS mission and the CRADA with Tomorrow.io. “I expect that the improved revisit rate provided by the Tomorrow.io constellation will drive further improvements in hurricane forecasting performance over and above what has already been demonstrated by TROPICS.”
The team behind the transfer includes Tom Roy, Bill Blackwell, Steven Gillmer, Rebecca Keenan, Nick Zorn, and Mike DiLiberto of Lincoln Laboratory and Kai Lemay, Scott Williams, Emma Watson, and Jan Wicha of Tomorrow.io. Lincoln Laboratory will be honored among other winners of 2025 FLC Awards at the FLC National Meeting to be held virtually on May 13.
Carsten Rasmussen, LEGO Group COO, discusses the production network that enables the builders of tomorrow
LEGOs are no strangers to many members of the MIT community. Faculty, staff, and students alike have developed a love of building and mechanics while playing with the familiar plastic bricks. In just a few hours, a heap of bricks can become a house, a ship, an airplane, or a cat. The simplicity lends itself to creativity and ingenuity, and it has inspired many MIT faculty members to bring LEGOs into the classroom, including class 2.S00 (Introduction to Manufacturing), where students use LEGO bricks to learn about manufacturing processes and systems.
It was perhaps no surprise, then, that the lecture hall in the MIT Schwarzman College of Computing was packed with students, faculty, staff, and guests to hear Carsten Rasmussen, chief operating officer of the LEGO Group, speak as part of the Manufacturing@MIT Distinguished Speaker Series on March 20.
In his engaging and inspiring talk, Rasmussen asked one of the most important questions in manufacturing: How do you balance innovation with sustainability while keeping a complex global supply chain running smoothly? He emphasized that success in modern manufacturing isn’t just about cutting costs — it’s about creating value across the entire network, and integrating every aspect of the business.
Successful manufacturing is all about balance
The way the toy industry views success is evolving, Rasmussen said. In the past, focusing on “cost, quality, safety, delivery, and service” may have been enough, but today’s landscape is far more demanding. “Now, it’s about availability, customers’ happiness, and innovation,” he said.
Rasmussen, who has been with the LEGO Group since 2001, started as a buyer before moving to various leadership roles within the organization. Today, he oversees the LEGO Group’s operations strategy, including manufacturing and supply chain planning, quality, engineering, and sales and operations planning.
“The way we can inspire the builders of tomorrow is basically, whatever we develop, we are able to produce, and we are able to sell,” he said.
The LEGO Group’s operations are intricate. Focusing on areas such as capacity and infrastructure, network utilization, analysis and design, and sustainability keeps the company true to its mission, “to inspire and develop the builders of tomorrow.” Within the organization, departments operate with a focus on how their decisions will impact the rest of the company. To do this, they need to communicate effectively.
Intuition and experience play a big role in effective decision-making
In a time where data analytics is a huge part of decision-making in manufacturing and supply-chain management, Rasmussen highlighted the importance of blending data with intuition and experience.
“Many of the decisions you have to make are very, very complex,” he explained. “A lot of the data you’re going to provide me is based on history. And what happened in history is not what you’re facing right now. So, you need to really be able to take great data and blend that with your intuition and your experience to make a decision.”
This shift reflects a broader trend in industries where leaders are beginning to see the benefits of looking beyond purely data-driven decision-making. With global supply chains disrupted by unforeseen events like the Covid-19 pandemic, there’s growing acknowledgement that historical data may not be the most effective way to predict the future. Rasmussen said that the audience should practice blending their own intuition and experience with data by asking themselves: “Does it make sense? Does it feel right?”
Prioritizing sustainability
Rasmussen also highlighted the LEGO Group’s ambitious sustainability goals, signaling that innovation cannot come at the expense of environmental responsibility. “There is no excuse for us to not leave a better planet for the next generation, for the next hundred years,” he said.
With an ambition to make its products from more renewable or recycled materials by 2032 and to eliminate single-use packaging, the company aims to lead a broader shift toward more environmentally friendly manufacturing, including an effort to turn waste into bricks.
Innovation doesn’t exist in a vacuum
Throughout his talk, Rasmussen underscored the importance of innovation. The only way to stay on top is to be constantly thinking of new ideas, he said.
“Are you daring to put new products into the market?” he asked, adding that it’s not enough to come up with a novel product or approach. How its implementation will work within the system is essential, too. “Our challenge that you need to help me with,” he said to the audience, “is how can we bring in innovation, because we can’t stand still either. We also need to be fit for the future … that is actually one of our bigger challenges.”
He reminded the audience that innovation is not a linear path. It involves risk, some failure, and continuous evolution. “Resilience is absolutely key,” he said.
Q&A
After his presentation, Rasmussen sat down with Professor John Hart for a brief Q&A, followed by audience questions. Among the questions that Hart asked Rasmussen was how he would respond to a designer who presented a model of an MIT-themed LEGO set, assuring Rasmussen it would break sales records. “Oh, I’ve heard that so many times,” Rasmussen laughed.
Hart asked what it would take to turn an idea into reality. “How long does it take from bricks to having it on my doorstep?” he asked.
“Typically, a new product takes between 12 to 18 months from idea to when we put it out on the market,” said Rasmussen, explaining that the process requires a good deal of integration and that there is a lot of planning to make sure that new ideas can be implemented across the organization.
Then the microphone was opened up to the crowd. The first audience questions came from Emerson Linville-Engler, the youngest audience member at just 5 years old, who wanted to know which LEGO set was the most difficult to make (the Technic round connector pieces) and which sets were Rasmussen’s favorites (complex builds, like buildings or Technic models).
Other questions showcased how much LEGO inspired the audience. One member asked Rasmussen if it ever got old being told that he worked for a company that inspires the inner child. “No. It motivates me every single day when you meet them,” he said.
Throughout the Q&A, the audience was also able to ask more about the manufacturing process, from ideas to execution, as well as whether Rasmussen was threatened by imitators (he welcomes healthy competition, but not direct copycats), and whether the LEGO Group plans on bringing back some old favorites (they are discussing whether to bring back old sets, but there are no set plans to do so at this time).
For the aspiring manufacturing leaders and innovators in the room, the lesson of Rasmussen’s talk was clear: Success isn’t just about making the right decision, it’s about understanding the entire system, having the courage to innovate, and being resilient enough to navigate unexpected challenges.
The event was hosted by the Manufacturing@MIT Working Group as part of the Manufacturing@MIT Distinguished Speaker Series. Past speakers include the TSMC founder Morris Chang, Office of Science and Technology Policy Director Arati Prabhakar, Under Secretary of Defense for Research and Engineering Heidi Shyu, and Pennsylvania Governor Tom Wolf.
New method assesses and improves the reliability of radiologists’ diagnostic reports
Due to the inherent ambiguity in medical images like X-rays, radiologists often use words like “may” or “likely” when describing the presence of a certain pathology, such as pneumonia.
But do the words radiologists use to express their confidence level accurately reflect how often a particular pathology occurs in patients? A new study shows that when radiologists express confidence about a certain pathology using a phrase like “very likely,” they tend to be overconfident, and vice versa when they express less confidence using a word like “possibly.”
Using clinical data, a multidisciplinary team of MIT researchers in collaboration with researchers and clinicians at hospitals affiliated with Harvard Medical School created a framework to quantify how reliable radiologists are when they express certainty using natural language terms.
They used this approach to provide clear suggestions that help radiologists choose certainty phrases that would improve the reliability of their clinical reporting. They also showed that the same technique can effectively measure and improve the calibration of large language models by better aligning the words models use to express confidence with the accuracy of their predictions.
By helping radiologists more accurately describe the likelihood of certain pathologies in medical images, this new framework could improve the reliability of critical clinical information.
“The words radiologists use are important. They affect how doctors intervene, in terms of their decision making for the patient. If these practitioners can be more reliable in their reporting, patients will be the ultimate beneficiaries,” says Peiqi Wang, an MIT graduate student and lead author of a paper on this research.
He is joined on the paper by senior author Polina Golland, the Sunlin and Priscilla Chou Professor of Electrical Engineering and Computer Science (EECS), a principal investigator in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), and the leader of the Medical Vision Group; as well as Barbara D. Lam, a clinical fellow at the Beth Israel Deaconess Medical Center; Yingcheng Liu, an MIT graduate student; Ameneh Asgari-Targhi, a research fellow at Massachusetts General Brigham (MGB); Rameswar Panda, a research staff member at the MIT-IBM Watson AI Lab; William M. Wells, a professor of radiology at MGB and a research scientist in CSAIL; and Tina Kapur, an assistant professor of radiology at MGB. The research will be presented at the International Conference on Learning Representations.
Decoding uncertainty in words
A radiologist writing a report about a chest X-ray might say the image shows a “possible” pneumonia, which is an infection that inflames the air sacs in the lungs. In that case, a doctor could order a follow-up CT scan to confirm the diagnosis.
However, if the radiologist writes that the X-ray shows a “likely” pneumonia, the doctor might begin treatment immediately, such as by prescribing antibiotics, while still ordering additional tests to assess severity.
Trying to measure the calibration, or reliability, of ambiguous natural language terms like “possibly” and “likely” presents many challenges, Wang says.
Existing calibration methods typically rely on the confidence score provided by an AI model, which represents the model’s estimated likelihood that its prediction is correct.
For instance, a weather app might predict an 83 percent chance of rain tomorrow. That model is well-calibrated if, across all instances where it predicts an 83 percent chance of rain, it rains approximately 83 percent of the time.
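To make that standard notion of calibration concrete, here is a minimal sketch — not from the study, and using a simulated forecaster and simulated outcomes — of how predicted probabilities can be binned and compared against observed outcome frequencies:

```python
import numpy as np

def calibration_table(pred_probs, outcomes, n_bins=10):
    """Bin predictions by confidence and compare the mean predicted
    probability in each bin with the observed frequency of positive
    outcomes; a well-calibrated forecaster has the two roughly equal."""
    pred_probs = np.asarray(pred_probs, dtype=float)
    outcomes = np.asarray(outcomes, dtype=float)
    edges = np.linspace(0.0, 1.0, n_bins + 1)
    rows = []
    for lo, hi in zip(edges[:-1], edges[1:]):
        mask = (pred_probs >= lo) & (pred_probs < hi)
        if mask.any():
            rows.append((lo, hi, pred_probs[mask].mean(),
                         outcomes[mask].mean(), int(mask.sum())))
    return rows

# Simulated forecaster: if it says "83 percent chance of rain," it should
# rain on roughly 83 percent of those days.
rng = np.random.default_rng(0)
probs = rng.uniform(size=5000)
rain = rng.uniform(size=5000) < probs   # outcomes drawn to be perfectly calibrated
for lo, hi, mean_p, freq, n in calibration_table(probs, rain):
    print(f"bin [{lo:.1f}, {hi:.1f}): predicted {mean_p:.2f}, observed {freq:.2f}, n={n}")
```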
“But humans use natural language, and if we map these phrases to a single number, it is not an accurate description of the real world. If a person says an event is ‘likely,’ they aren’t necessarily thinking of the exact probability, such as 75 percent,” Wang says.
Rather than trying to map certainty phrases to a single percentage, the researchers’ approach treats them as probability distributions. A distribution describes the range of possible values and their likelihoods — think of the classic bell curve in statistics.
“This captures more nuances of what each word means,” Wang adds.
Assessing and improving calibration
The researchers leveraged prior work that surveyed radiologists to obtain probability distributions that correspond to each diagnostic certainty phrase, ranging from “very likely” to “consistent with.”
For instance, since most radiologists take the phrase “consistent with” to mean a pathology is present in a medical image, its probability distribution climbs sharply to a high peak, with most values clustered around the 90 to 100 percent range.
In contrast, the phrase “may represent” conveys greater uncertainty, leading to a broader, bell-shaped distribution centered around 50 percent.
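A rough sketch of what treating phrases as distributions might look like, using made-up Beta-distribution parameters as stand-ins; the study’s actual distributions come from radiologist surveys, and the shapes below merely mimic the qualitative descriptions above:

```python
from scipy.stats import beta

# Hypothetical Beta distributions standing in for the survey-derived
# phrase distributions (parameters chosen only for illustration).
phrase_distributions = {
    "consistent with": beta(18, 2),  # sharply peaked near 0.9-1.0
    "likely":          beta(8, 3),   # peaked around 0.7
    "may represent":   beta(4, 4),   # broad, centered near 0.5
}

for phrase, dist in phrase_distributions.items():
    lo, hi = dist.ppf([0.10, 0.90])  # middle 80 percent of the distribution
    print(f"{phrase!r}: mean {dist.mean():.2f}, "
          f"10th-90th percentile {lo:.2f}-{hi:.2f}")
```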
Typical methods evaluate calibration by comparing how well a model’s predicted probability scores align with the actual number of positive results.
The researchers’ approach follows the same general framework but extends it to account for the fact that certainty phrases represent probability distributions rather than probabilities.
To improve calibration, the researchers formulated and solved an optimization problem that adjusts how often certain phrases are used, to better align confidence with reality.
They derived a calibration map that suggests certainty terms a radiologist should use to make the reports more accurate for a specific pathology.
“Perhaps, for this dataset, if every time the radiologist said pneumonia was ‘present,’ they changed the phrase to ‘likely present’ instead, then they would become better calibrated,” Wang explains.
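A greatly simplified sketch of the idea behind such a calibration map: the paper formulates a proper optimization over full distributions, whereas this illustrative rule simply suggests the phrase whose assumed typical probability sits closest to the observed outcome rate (all numbers hypothetical):

```python
# Illustrative nearest-match rule for a calibration map (all values made up);
# the study instead solves an optimization problem over phrase distributions.
phrase_typical_prob = {"possibly": 0.30, "may represent": 0.50,
                       "likely": 0.75, "consistent with": 0.90}

# Hypothetical audit: how often findings described with each phrase
# turned out to be present on follow-up.
observed_rate = {"present": 0.78, "possibly": 0.45}

def suggested_phrase(rate):
    """Return the phrase whose assumed typical probability is closest to rate."""
    return min(phrase_typical_prob, key=lambda p: abs(phrase_typical_prob[p] - rate))

for phrase, rate in observed_rate.items():
    print(f"'{phrase}' findings were present {rate:.0%} of the time -> "
          f"calibration map suggests '{suggested_phrase(rate)}'")
```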
When the researchers used their framework to evaluate clinical reports, they found that radiologists were generally underconfident when diagnosing common conditions like atelectasis, but overconfident with more ambiguous conditions like infection.
In addition, the researchers evaluated the reliability of language models using their method, providing a more nuanced representation of confidence than classical methods that rely on confidence scores.
“A lot of times, these models use phrases like ‘certainly.’ But because they are so confident in their answers, it does not encourage people to verify the correctness of the statements themselves,” Wang adds.
In the future, the researchers plan to continue collaborating with clinicians in the hopes of improving diagnoses and treatment. They are working to expand their study to include data from abdominal CT scans.
In addition, they are interested in studying how receptive radiologists are to calibration-improving suggestions and whether they can mentally adjust their use of certainty phrases effectively.
“Expression of diagnostic certainty is a crucial aspect of the radiology report, as it influences significant management decisions. This study takes a novel approach to analyzing and calibrating how radiologists express diagnostic certainty in chest X-ray reports, offering feedback on term usage and associated outcomes,” says Atul B. Shinagare, associate professor of radiology at Harvard Medical School, who was not involved with this work. “This approach has the potential to improve radiologists’ accuracy and communication, which will help improve patient care.”
The work was funded, in part, by a Takeda Fellowship, the MIT-IBM Watson AI Lab, the MIT CSAIL Wistrom Program, and the MIT Jameel Clinic.
Tabletop factory-in-a-box makes hands-on manufacturing education more accessible
For over a decade, through a collaboration managed by MIT.nano, MIT and Tecnológico de Monterrey (Tec), one of the largest universities in Latin America, have worked together to develop innovative academic and research initiatives, with a particular focus on nanoscience and nanotechnology and, more recently, an emphasis on design and smart manufacturing. Now, the collaboration has also expanded to include undergraduate education. Seven Tec undergrads are developing methods to manufacture low-cost, desktop fiber-extrusion devices, or FrEDs, alongside peers at MIT in an “in-the-lab” teaching and learning factory, the FrED Factory.
“The FrED Factory serves as a factory-like education platform for manufacturing scale-up, enabling students and researchers to engage firsthand in the transition from prototype development to small-scale production,” says Brian Anthony, MIT.nano associate director and principal research scientist in the MIT Department of Mechanical Engineering (MechE).
Through on-campus learning, participants observe, analyze, and actively contribute to this process, gaining critical insights into the complexities of scaling manufacturing operations. The products of the FrED Factory are FrED kits — tabletop manufacturing kits that themselves produce fiber and that are used to teach smart manufacturing principles. “We’re thrilled to have students from Monterrey Tec here at MIT, bringing new ideas and perspectives, and helping to develop these new ways to teach manufacturing at both MIT and Tec,” says Anthony.
The FrED Factory was originally built by a group of MIT graduate students in 2022 as their thesis project in the Master of Engineering in Advanced Manufacturing and Design program. They adapted and scaled the original design of the device, built by Anthony’s student David Kim, into something that could be manufactured in multiple units at a substantially lower cost. The resulting computer-aided design files were shared with Tec de Monterrey for use by faculty and students. Since launching the FrED curriculum at Tec in 2022, MIT has co-hosted two courses led by Tec faculty: “Mechatronics Design: (Re) Design of FrED,” and “Automation of Manufacturing Systems: FrED Factory Challenge.”
New this academic year, undergraduate Tec students are participating in FrED Factory research immersions. The students engage in collaborative FrED projects at MIT and then return to Tec to implement their knowledge — particularly to help replicate and implement what they have learned, with the launch of a new FrED Factory at Tec de Monterrey this spring. The end goal is to fully integrate this project into Tec’s mechatronics engineering curriculum, in which students learn about automation and robotics firsthand through the devices.
Russel Bradley, a PhD student in MechE supervised by Anthony, is the project lead of FrED Factory and has been working closely with the undergraduate Tec students.
“The process of designing and manufacturing FrEDs is an educational experience in itself,” says Bradley. “Unlike a real factory, which likely wouldn’t welcome students to experiment with the machines, the FrED factory provides an environment where you can fail and learn.”
The Tec undergrads are divided into groups working on specific projects, including Development of an Education 4.0 Framework for FrED, Immersive Technology (AR) for Manufacturing Operations, Gamifying Advanced Manufacturing Education in FrED Factory, and Immersive Cognitive Factory Twins.
Sergio Siller Lobo is a Tec student who is working on the development of the education framework for FrED. He and other students are revising the code to make the interface more student-friendly and best enable the students to learn while working with the devices. They are focused particularly on helping students to engage with the topics of control systems, computer vision, and internet of things (IoT) in both a digital course that they are developing, and in directly working with the devices. The digital course can be presented by an instructor or done autonomously by students.
“Students can be learning the theory with the digital courses, as well as having access to hands-on, practical experience with the device,” says Siller Lobo. “You can have the best of both ways of learning, both the practical and the theoretical.”
Arik Gómez Horita, an undergrad from Tec who has also been working on the education framework, says that the technology currently available for teaching students about control systems, computer vision, and IoT is often very limited in either capability or quantity.
“A key aspect of the value of the FrEDs is that we are integrating all these concepts and a module for education into a single device,” says Gómez Horita. “Bringing FrED into a classroom is a game-changer. Our main goal is trying to put FrED into the hands of the teacher, to use it for all its teaching capabilities.”
Once the students return to Tec de Monterrey with the educational modules they’ve developed, there will be workshops with the FrEDs and opportunities for Tec students to use their own creativity and iterate on the devices.
“The FrED is really a lab in a box, and one of the best things that FrEDs do is create data,” says Siller Lobo. “Finding new ways to get data from FrED gives it more value.”
Tec students Ángel Alarcón and André Mendoza are preparing to have MIT students test the FrED factory, running a simulation with the two main roles of engineer and operator. The operator role assembles the FrEDs within the workstations that simulate a factory. The engineer role analyzes the data created on the factory side by the operator and tries to find ways to improve production.
“This is a very immersive way to teach manufacturing systems,” says Alarcón. “Many students studying manufacturing, undergraduate and even graduate, finish their education never having even gone to an actual factory. The FrED Factory gives students the valuable opportunity to get to know what a factory is like and experience an industry environment without having to go off campus.”
The data gained from the workstations — including cycle time and defects in an operation — will be used to teach different topics about manufacturing. Ultimately, the FrED Factory at Tec will be used to compare the benefits and drawbacks of automation versus manual labor.
Bradley says that the Tec students bring a strong mechatronics background that adds a lot of important insights to the project, and beyond the lab, it’s also a valuable multicultural exchange.
“It’s not just about what the students are learning from us,” says Bradley, “but it’s really a collaborative process in which we’re all complementing each other.”
Taking the “training wheels” off clean energy
Renewable power sources have seen unprecedented levels of investment in recent years. But with political uncertainty clouding the future of subsidies for green energy, these technologies must begin to compete with fossil fuels on equal footing, said participants at the 2025 MIT Energy Conference.
“What these technologies need less is training wheels, and more of a level playing field,” said Brian Deese, an MIT Institute Innovation Fellow, during a conference-opening keynote panel.
The theme of the two-day conference, which is organized each year by MIT students, was “Breakthrough to deployment: Driving climate innovation to market.” Speakers largely expressed optimism about advancements in green technology, balanced by occasional notes of alarm about a rapidly changing regulatory and political environment.
Deese defined what he called “the good, the bad, and the ugly” of the current energy landscape. The good: Clean energy investment in the United States hit an all-time high of $272 billion in 2024. The bad: Announcements of future investments have tailed off. And the ugly: Macro conditions are making it more difficult for utilities and private enterprise to build out the clean energy infrastructure needed to meet growing energy demands.
“We need to build massive amounts of energy capacity in the United States,” Deese said. “And the three things that are the most allergic to building are high uncertainty, high interest rates, and high tariff rates. So that’s kind of ugly. But the question … is how, and in what ways, that underlying commercial momentum can drive through this period of uncertainty.”
A shifting clean energy landscape
During a panel on artificial intelligence and growth in electricity demand, speakers said that the technology may serve as a catalyst for green energy breakthroughs, in addition to putting strain on existing infrastructure. “Google is committed to building digital infrastructure responsibly, and part of that means catalyzing the development of clean energy infrastructure that is not only meeting the AI need, but also benefiting the grid as a whole,” said Lucia Tian, head of clean energy and decarbonization technologies at Google.
Across the two days, speakers emphasized that the cost-per-unit and scalability of clean energy technologies will ultimately determine their fate. But they also acknowledged the impact of public policy, as well as the need for government investment to tackle large-scale issues like grid modernization.
Vanessa Chan, a former U.S. Department of Energy (DoE) official and current vice dean of innovation and entrepreneurship at the University of Pennsylvania School of Engineering and Applied Sciences, warned of the “knock-on” effects of the move to slash National Institutes of Health (NIH) funding for indirect research costs, for example. “In reality, what you’re doing is undercutting every single academic institution that does research across the nation,” she said.
During a panel titled “No clean energy transition without transmission,” Maria Robinson, former director of the DoE’s Grid Deployment Office, said that ratepayers alone will likely not be able to fund the grid upgrades needed to meet growing power demand. “The amount of investment we’re going to need over the next couple of years is going to be significant,” she said. “That’s where the federal government is going to have to play a role.”
David Cohen-Tanugi, a clean energy venture builder at MIT, noted that extreme weather events have changed the climate change conversation in recent years. “There was a narrative 10 years ago that said … if we start talking about resilience and adaptation to climate change, we’re kind of throwing in the towel or giving up,” he said. “I’ve noticed a very big shift in the investor narrative, the startup narrative, and more generally, the public consciousness. There’s a realization that the effects of climate change are already upon us.”
“Everything on the table”
The conference featured panels and keynote addresses on a range of emerging clean energy technologies, including hydrogen power, geothermal energy, and nuclear fusion, as well as a session on carbon capture.
Alex Creely, a chief engineer at Commonwealth Fusion Systems, explained that fusion (the combining of small atoms into larger atoms, which is the same process that fuels stars) is safer and potentially more economical than traditional nuclear power. Fusion facilities, he said, can be powered down instantaneously, and companies like his are developing new, less-expensive magnet technology to contain the extreme heat produced by fusion reactors.
By the early 2030s, Creely said, his company hopes to be operating 400-megawatt power plants that use only 50 kilograms of fuel per year. “If you can get fusion working, it turns energy into a manufacturing product, not a natural resource,” he said.
Quinn Woodard Jr., senior director of power generation and surface facilities at geothermal energy supplier Fervo Energy, said his company is making geothermal energy more economical through standardization, innovation, and economies of scale. Traditionally, he said, drilling is the largest cost in producing geothermal power. Fervo has “completely flipped the cost structure” with advances in drilling, Woodard said, and now the company is focused on bringing down its power plant costs.
“We have to continuously be focused on cost, and achieving that is paramount for the success of the geothermal industry,” he said.
One common theme across the conference: a number of approaches are making rapid advancements, but experts aren’t sure when — or, in some cases, if — each specific technology will reach a tipping point where it is capable of transforming energy markets.
“I don’t want to get caught in a place where we often descend in this climate solution situation, where it’s either-or,” said Peter Ellis, global director of nature climate solutions at The Nature Conservancy. “We’re talking about the greatest challenge civilization has ever faced. We need everything on the table.”
The road ahead
Several speakers stressed the need for academia, industry, and government to collaborate in pursuit of climate and energy goals. Amy Luers, senior global director of sustainability for Microsoft, compared the challenge to the Apollo spaceflight program, and she said that academic institutions need to focus more on how to scale and spur investments in green energy.
“The challenge is that academic institutions are not currently set up to be able to learn the how, in driving both bottom-up and top-down shifts over time,” Luers said. “If the world is going to succeed in our road to net zero, the mindset of academia needs to shift. And fortunately, it’s starting to.”
During a panel called “From lab to grid: Scaling first-of-a-kind energy technologies,” Hannan Happi, CEO of renewable energy company Exowatt, stressed that electricity is ultimately a commodity. “Electrons are all the same,” he said. “The only thing [customers] care about with regards to electrons is that they are available when they need them, and that they’re very cheap.”
Melissa Zhang, principal at Azimuth Capital Management, noted that energy infrastructure development cycles typically take at least five to 10 years — longer than a U.S. political cycle. However, she warned that green energy technologies are unlikely to receive significant support at the federal level in the near future. “If you’re in something that’s a little too dependent on subsidies … there is reason to be concerned over this administration,” she said.
World Energy CEO Gene Gebolys, the moderator of the lab-to-grid panel, listed off a number of companies founded at MIT. “They all have one thing in common,” he said. “They all went from somebody’s idea, to a lab, to proof-of-concept, to scale. It’s not like any of this stuff ever ends. It’s an ongoing process.”
Surprise discovery could lead to improved catalysts for industrial reactions
The process of catalysis — in which a material speeds up a chemical reaction — is crucial to the production of many of the chemicals used in our everyday lives. But even though these catalytic processes are widespread, researchers often lack a clear understanding of exactly how they work.
A new analysis by researchers at MIT has shown that an important industrial synthesis process, the production of vinyl acetate, requires a catalyst to take two different forms, which cycle back and forth from one to the other as the chemical process unfolds.
Previously, it had been thought that only one of the two forms was needed. The new findings are published today in the journal Science, in a paper by MIT graduate students Deiaa Harraz and Kunal Lodaya, Bryan Tang PhD ’23, and MIT professor of chemistry and chemical engineering Yogesh Surendranath.
There are two broad classes of catalysts: homogeneous catalysts, which consist of dissolved molecules, and heterogeneous catalysts, which are solid materials whose surface provides the site for the chemical reaction. “For the longest time,” Surendranath says, “there’s been a general view that you either have catalysis happening on these surfaces, or you have them happening on these soluble molecules.” But the new research shows that in the case of vinyl acetate — an important material that goes into many polymer products such as the rubber in the soles of your shoes — there is an interplay between both classes of catalysis.
“What we discovered,” Surendranath explains, “is that you actually have these solid metal materials converting into molecules, and then converting back into materials, in a cyclic dance.”
He adds: “This work calls into question this paradigm where there’s either one flavor of catalysis or another. Really, there could be an interplay between both of them in certain cases, and that could be really advantageous for having a process that’s selective and efficient.”
The synthesis of vinyl acetate has been a large-scale industrial reaction since the 1960s, and it has been well-researched and refined over the years to improve efficiency. This has happened largely through a trial-and-error approach, without a precise understanding of the underlying mechanisms, the researchers say.
While chemists are often more familiar with homogeneous catalysis mechanisms, and chemical engineers are often more familiar with surface catalysis mechanisms, fewer researchers study both. This is perhaps part of the reason that the full complexity of this reaction was not previously captured. But Harraz says he and his colleagues are working at the interface between disciplines. “We’ve been able to appreciate both sides of this reaction and find that both types of catalysis are critical,” he says.
The reaction that produces vinyl acetate requires something to activate the oxygen molecules that are one of the constituents of the reaction, and something else to activate the other ingredients, acetic acid and ethylene. The researchers found that the form of the catalyst that worked best for one part of the process was not the best for the other. It turns out that the molecular form of the catalyst does the key chemistry with the ethylene and the acetic acid, while it’s the surface that ends up doing the activation of the oxygen.
They found that the underlying process involved in interconverting the two forms of the catalyst is actually corrosion, similar to the process of rusting. “It turns out that in rusting, you actually go through a soluble molecular species somewhere in the sequence,” Surendranath says.
The team borrowed techniques traditionally used in corrosion research to study the process. They used electrochemical tools to study the reaction, even though the overall reaction does not require a supply of electricity. By making potential measurements, the researchers determined that the corrosion of the palladium catalyst material to soluble palladium ions is driven by an electrochemical reaction with the oxygen, converting it to water. Corrosion is “one of the oldest topics in electrochemistry,” says Lodaya, “but applying the science of corrosion to understand catalysis is much newer, and was essential to our findings.”
By correlating measurements of catalyst corrosion with other measurements of the chemical reaction taking place, the researchers proposed that it was the corrosion rate that was limiting the overall reaction. “That’s the choke point that’s controlling the rate of the overall process,” Surendranath says.
The interplay between the two types of catalysis works efficiently and selectively “because it actually uses the synergy of a material surface doing what it’s good at and a molecule doing what it’s good at,” Surendranath says. The finding suggests that, when designing new catalysts, rather than focusing on either solid materials or soluble molecules alone, researchers should think about how the interplay of both may open up new approaches.
“Now, with an improved understanding of what makes this catalyst so effective, you can try to design specific materials or specific interfaces that promote the desired chemistry,” Harraz says. Since this process has been worked on for so long, these findings may not necessarily lead to improvements in this specific process of making vinyl acetate, but they do provide a better understanding of why the materials work as they do, and could lead to improvements in other catalytic processes.
Understanding that “catalysts can transit between molecule and material and back, and the role that electrochemistry plays in those transformations, is a concept that we are really excited to expand on,” Lodaya says.
Harraz adds: “With this new understanding that both types of catalysis could play a role, what other catalytic processes are out there that actually involve both? Maybe those have a lot of room for improvement that could benefit from this understanding.”
This work is “illuminating, something that will be worth teaching at the undergraduate level,” says Christophe Coperet, a professor of inorganic chemistry at ETH Zurich, who was not associated with the research. “The work highlights new ways of thinking. ... [It] is notable in the sense that it not only reconciles homogeneous and heterogeneous catalysis, but it describes these complex processes as half reactions, where electron transfers can cycle between distinct entities.”
The research was supported, in part, by the National Science Foundation as a Phase I Center for Chemical Innovation; the Center for Interfacial Ionics; and the Gordon and Betty Moore Foundation.
Engineers develop a way to mass manufacture nanoparticles that deliver cancer drugs directly to tumors
Polymer-coated nanoparticles loaded with therapeutic drugs show significant promise for cancer treatment, including ovarian cancer. These particles can be targeted directly to tumors, where they release their payload while avoiding many of the side effects of traditional chemotherapy.
Over the past decade, MIT Institute Professor Paula Hammond and her students have created a variety of these particles using a technique known as layer-by-layer assembly. They’ve shown that the particles can effectively combat cancer in mouse studies.
To help move these nanoparticles closer to human use, the researchers have now come up with a manufacturing technique that allows them to generate larger quantities of the particles, in a fraction of the time.
“There’s a lot of promise with the nanoparticle systems we’ve been developing, and we’ve been really excited more recently with the successes that we’ve been seeing in animal models for our treatments for ovarian cancer in particular,” says Hammond, who is also MIT’s vice provost for faculty and a member of the Koch Institute for Integrative Cancer Research. “Ultimately, we need to be able to bring this to a scale where a company is able to manufacture these on a large level.”
Hammond and Darrell Irvine, a professor of immunology and microbiology at the Scripps Research Institute, are the senior authors of the new study, which appears today in Advanced Functional Materials. Ivan Pires PhD ’24, now a postdoc at Brigham and Women’s Hospital and a visiting scientist at the Koch Institute, and Ezra Gordon ’24 are the lead authors of the paper. Heikyung Suh, an MIT research technician, is also an author.
A streamlined process
More than a decade ago, Hammond’s lab developed a novel technique for building nanoparticles with highly controlled architectures. This approach allows layers with different properties to be laid down on the surface of a nanoparticle by alternately exposing the surface to positively and negatively charged polymers.
Each layer can be embedded with drug molecules or other therapeutics. The layers can also carry targeting molecules that help the particles find and enter cancer cells.
Using the strategy that Hammond’s lab originally developed, one layer is applied at a time, and after each application, the particles go through a centrifugation step to remove any excess polymer. This process is time-intensive and would be difficult to scale up to large production volumes, the researchers say.
More recently, a graduate student in Hammond’s lab developed an alternative approach to purifying the particles, known as tangential flow filtration. While this streamlined the process, it was still limited by manufacturing complexity and the maximum scale of production.
“Although the use of tangential flow filtration is helpful, it’s still a very small-batch process, and a clinical investigation requires that we would have many doses available for a significant number of patients,” Hammond says.
To create a larger-scale manufacturing method, the researchers used a microfluidic mixing device that allows them to sequentially add new polymer layers as the particles flow through a microchannel within the device. For each layer, the researchers can calculate exactly how much polymer is needed, which eliminates the need to purify the particles after each addition.
“That is really important because separations are the most costly and time-consuming steps in these kinds of systems,” Hammond says.
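To give a sense of the arithmetic behind dosing each layer exactly, here is a minimal sketch assuming spherical particles and a known saturating polymer coverage per unit surface area. All names and numbers are illustrative assumptions, not the team's actual process parameters.

```python
import math

# Minimal sketch: estimate the polymer needed to coat one layer, assuming
# spherical particles and a known adsorbed polymer mass per unit area.
# All numbers are illustrative assumptions, not the study's parameters.
particle_diameter_nm = 100.0      # assumed core diameter
particle_count = 3e13             # assumed number of particles in the batch
adsorbed_mass_mg_per_m2 = 1.0     # assumed saturating polymer coverage

radius_m = (particle_diameter_nm / 2) * 1e-9
area_per_particle_m2 = 4 * math.pi * radius_m**2
total_area_m2 = area_per_particle_m2 * particle_count

polymer_needed_mg = total_area_m2 * adsorbed_mass_mg_per_m2
print(f"Polymer to add for this layer: {polymer_needed_mg:.1f} mg")
```

Dosing each layer this way, rather than adding excess polymer and washing it away, is what removes the purification step between layers.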
This strategy eliminates the need for manual polymer mixing, streamlines production, and integrates good manufacturing practice (GMP)-compliant processes. The FDA’s GMP requirements ensure that products meet safety standards and can be manufactured in a consistent fashion, which would be highly challenging and costly using the previous step-wise batch process. The microfluidic device that the researchers used in this study is already used for GMP manufacturing of other types of nanoparticles, including mRNA vaccines.
“With the new approach, there’s much less chance of any sort of operator mistake or mishaps,” Pires says. “This is a process that can be readily implemented in GMP, and that’s really the key step here. We can create an innovation within the layer-by-layer nanoparticles and quickly produce it in a manner that we could go into clinical trials with.”
Scaled-up production
Using this approach, the researchers can generate 15 milligrams of nanoparticles (enough for about 50 doses) in just a few minutes, while the original technique would take close to an hour to create the same amount. This could enable the production of more than enough particles for clinical trials and patient use, the researchers say.
“To scale up with this system, you just keep running the chip, and it is much easier to produce more of your material,” Pires says.
To demonstrate their new production technique, the researchers created nanoparticles coated with a cytokine called interleukin-12 (IL-12). Hammond’s lab has previously shown that IL-12 delivered by layer-by-layer nanoparticles can activate key immune cells and slow ovarian tumor growth in mice.
In this study, the researchers found that IL-12-loaded particles manufactured using the new technique performed similarly to the original layer-by-layer nanoparticles. Not only do these nanoparticles bind to cancer tissue, but they also show a unique ability to avoid entering the cancer cells. This allows the nanoparticles to serve as markers on the cancer cells that activate the immune system locally in the tumor. In mouse models of ovarian cancer, this treatment can delay tumor growth and even produce cures.
The researchers have filed for a patent on the technology and are now working with MIT’s Deshpande Center for Technological Innovation in hopes of potentially forming a company to commercialize the technology. While they are initially focusing on cancers of the abdominal cavity, such as ovarian cancer, the work could also be applied to other types of cancer, including glioblastoma, the researchers say.
The research was funded by the U.S. National Institutes of Health, the Marble Center for Nanomedicine, the Deshpande Center for Technological Innovation, and the Koch Institute Support (core) Grant from the National Cancer Institute.
Vana is letting users own a piece of the AI models trained on their data
In February 2024, Reddit struck a $60 million deal with Google to let the search giant use data on the platform to train its artificial intelligence models. Notably absent from the discussions were Reddit users, whose data were being sold.
The deal reflected the reality of the modern internet: Big tech companies own virtually all our online data and get to decide what to do with that data. Unsurprisingly, many platforms monetize their data, and the fastest-growing way to accomplish that today is to sell it to AI companies, who are themselves massive tech companies using the data to train ever more powerful models.
The decentralized platform Vana, which started as a class project at MIT, is on a mission to give power back to the users. The company has created a fully user-owned network that allows individuals to upload their data and govern how they are used. AI developers can pitch users on ideas for new models, and if the users agree to contribute their data for training, they get proportional ownership in the models.
The idea is to give everyone a stake in the AI systems that will increasingly shape our society while also unlocking new pools of data to advance the technology.
“This data is needed to create better AI systems,” says Vana co-founder Anna Kazlauskas ’19. “We’ve created a decentralized system to get better data — which sits inside big tech companies today — while still letting users retain ultimate ownership.”
From economics to the blockchain
A lot of high school students have pictures of pop stars or athletes on their bedroom walls. Kazlauskas had a picture of former U.S. Treasury Secretary Janet Yellen.
Kazlauskas came to MIT sure she’d become an economist, but she ended up being one of five students to join the MIT Bitcoin club in 2015, and that experience led her into the world of blockchains and cryptocurrency.
From her dorm room in MacGregor House, she began mining the cryptocurrency Ethereum. She even occasionally scoured campus dumpsters in search of discarded computer chips.
“It got me interested in everything around computer science and networking,” Kazlauskas says. “That involved, from a blockchain perspective, distributed systems and how they can shift economic power to individuals, as well as artificial intelligence and econometrics.”
Kazlauskas met Art Abal, who was then attending Harvard University, in the former Media Lab class Emergent Ventures, and the pair decided to work on new ways to obtain data to train AI systems.
“Our question was: How could you have a large number of people contributing to these AI systems using more of a distributed network?” Kazlauskas recalls.
Kazlauskas and Abal were trying to address the status quo, where most models are trained by scraping public data on the internet. Big tech companies often also buy large datasets from other companies.
The founders’ approach evolved over the years and was informed by Kazlauskas’ experience working at the financial blockchain company Celo after graduation. But Kazlauskas credits her time at MIT with helping her think about these problems, and the instructor for Emergent Ventures, Ramesh Raskar, still helps Vana think about AI research questions today.
“It was great to have an open-ended opportunity to just build, hack, and explore,” Kazlauskas says. “I think that ethos at MIT is really important. It’s just about building things, seeing what works, and continuing to iterate.”
Today Vana takes advantage of a little-known law that allows users of most big tech platforms to export their data directly. Users can upload that information into encrypted digital wallets in Vana and disburse it to train models as they see fit.
AI engineers can suggest ideas for new open-source models, and people can pool their data to help train the model. In the blockchain world, these data pools are called data DAOs, short for decentralized autonomous organizations. Data can also be used to create personalized AI models and agents.
In Vana, data are used in a way that preserves user privacy because the system doesn’t expose identifiable information. Once the model is created, users maintain ownership, so that every time it’s used, they’re rewarded proportionally based on how much their data helped train it.
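Vana's actual reward mechanism isn't detailed here, but the idea of proportional rewards can be sketched as a simple pro-rata split of the value generated each time the model is used. The contributor weights and usage fee below are hypothetical.

```python
# Hypothetical sketch of pro-rata rewards: each contributor's share of a usage
# fee is proportional to a weight representing how much their data helped
# train the model. The weights and fee here are illustrative assumptions.
contribution_weights = {"alice": 120.0, "bob": 30.0, "carol": 50.0}
usage_fee = 10.0  # value generated by one use of the model (assumed units)

total_weight = sum(contribution_weights.values())
rewards = {user: usage_fee * w / total_weight
           for user, w in contribution_weights.items()}
print(rewards)  # {'alice': 6.0, 'bob': 1.5, 'carol': 2.5}
```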
“From a developer’s perspective, now you can build these hyper-personalized health applications that take into account exactly what you ate, how you slept, how you exercise,” Kazlauskas says. “Those applications aren’t possible today because of those walled gardens of the big tech companies.”
Crowdsourced, user-owned AI
Last year, a machine-learning engineer proposed using Vana user data to train an AI model that could generate Reddit posts. More than 140,000 Vana users contributed their Reddit data, which contained posts, comments, messages, and more. Users decided on the terms in which the model could be used, and they maintained ownership of the model after it was created.
Vana has enabled similar initiatives with user-contributed data from the social media platform X; sleep data from sources like Oura rings; and more. There are also collaborations that combine data pools to create broader AI applications.
“Let’s say users have Spotify data, Reddit data, and fashion data,” Kazlauskas explains. “Usually, Spotify isn’t going to collaborate with those types of companies, and there’s actually regulation against that. But users can do it if they grant access, so these cross-platform datasets can be used to create really powerful models.”
Vana has over 1 million users and over 20 live data DAOs. More than 300 additional data pools have been proposed by users on Vana’s system, and Kazlauskas says many will go into production this year.
“I think there’s a lot of promise in generalized AI models, personalized medicine, and new consumer applications, because it’s tough to combine all that data or get access to it in the first place,” Kazlauskas says.
The data pools are allowing groups of users to accomplish something even the most powerful tech companies struggle with today.
“Today, big tech companies have built these data moats, so the best datasets aren’t available to anyone,” Kazlauskas says. “It’s a collective action problem, where my data on its own isn’t that valuable, but a data pool with tens of thousands or millions of people is really valuable. Vana allows those pools to be built. It’s a win-win: Users get to benefit from the rise of AI because they own the models. And you don’t end up in a scenario where a single company controls an all-powerful AI model. You get better technology, but everyone benefits.”
MIT welcomes 2025 Heising-Simons Foundation 51 Pegasi b Fellow Jess Speedie
The MIT School of Science welcomes Jess Speedie, one of eight recipients of the 2025 51 Pegasi b Fellowship. The announcement was made March 27 by the Heising-Simons Foundation.
The 51 Pegasi b Fellowship, named after the first exoplanet discovered orbiting a sun-like star, was established in 2017 to provide postdocs with the opportunity to conduct theoretical, observational, and experimental research in planetary astronomy.
Speedie, who expects to complete her PhD in astronomy at the University of Victoria, Canada, this summer, will be hosted by the Department of Earth, Atmospheric and Planetary Sciences (EAPS). She will be mentored by Kerr-McGee Career Development Professor Richard Teague as she uses a combination of observational data and simulations to study the birth of planets and the processes of planetary formation.
“The planetary environment is where all the good stuff collects … it has the greatest potential for the most interesting things in the universe to happen, such as the origin of life,” she says. “Planets, for me, are where the stories happen.”
Speedie’s work has focused on understanding “cosmic nurseries” and the detection and characterization of the youngest planets in the galaxy. A lot of this work has made use of the Atacama Large Millimeter/submillimeter Array (ALMA), located in northern Chile. Made up of a collection of 66 parabolic dishes, ALMA studies the universe with radio wavelengths, and Speedie has developed a novel approach to find signals in the data of gravitational instability in protoplanetary disks, a method of planetary formation.
“One of the big, big questions right now in the community focused on planet formation is, where are the planets? It is that simple. We think they’re developing in these disks, but we’ve detected so few of them,” she says.
While working as a fellow, Speedie is aiming to develop an algorithm that carefully aligns and stacks a decade of ALMA observational data to correct for a blurring effect that happens when combining images captured at different times. Doing so should produce the sharpest, most sensitive images of early planetary systems to date.
She is also interested in studying infant planets, especially ones that may be forming within disks around protoplanets rather than stars. Modeling how this orbiting material behaves could give astronomers a way to measure the mass of young planets.
“What’s exciting is the potential for discovery. I have this sense that the universe as a whole is infinitely more creative than human minds — the kinds of things that happen out there, you can’t make that up. It’s better than science fiction,” she says.
The other 51 Pegasi b Fellows and their host institutions this year are Nick Choksi (Caltech), Yan Liang (Yale University), Sagnick Mukherjee (Arizona State University), Matthew Nixon (Arizona State University), Julia Santos (Harvard University), Nour Skaf (University of Hawaii), and Jerry Xuan (University of California at Los Angeles).
The fellowship provides up to $450,000 of support over three years for independent research, a generous salary and discretionary fund, mentorship at host institutions, an annual summit to develop professional networks and foster collaboration, and an option to apply for another grant to support a future position in the United States.
A flexible robot can help emergency responders search through rubble
When major disasters hit and structures collapse, people can become trapped under rubble. Extricating victims from these hazardous environments can be dangerous and physically exhausting. To help rescue teams navigate these structures, MIT Lincoln Laboratory, in collaboration with researchers at the University of Notre Dame, developed the Soft Pathfinding Robotic Observation Unit (SPROUT). SPROUT is a vine robot — a soft robot that can grow and maneuver around obstacles and through small spaces. First responders can deploy SPROUT under collapsed structures to explore, map, and find optimum ingress routes through debris.
"The urban search-and-rescue environment can be brutal and unforgiving, where even the most hardened technology struggles to operate. The fundamental way a vine robot works mitigates a lot of the challenges that other platforms face," says Chad Council, a member of the SPROUT team, which is led by Nathaniel Hanson. The program is conducted out of the laboratory's Human Resilience Technology Group.
First responders regularly integrate technology, such as cameras and sensors, into their workflows to understand complex operating environments. However, many of these technologies have limitations. For example, cameras specially built for search-and-rescue operations can only probe along a straight path inside a collapsed structure. If a team wants to search farther into a pile, they need to cut an access hole to reach the next area of the space. Robots are good for exploring on top of rubble piles, but they are ill-suited for searching in tight, unstable structures and costly to repair if damaged. The challenge that SPROUT addresses is how to get under collapsed structures using a low-cost, easy-to-operate robot that can carry cameras and sensors and traverse winding paths.
SPROUT is composed of an inflatable tube made of airtight fabric that unfurls from a fixed base. The tube inflates with air, and a motor controls its deployment. As the tube extends into rubble, it can flex around corners and squeeze through narrow passages. A camera and other sensors mounted to the tip of the tube image and map the environment the robot is navigating. An operator steers SPROUT with joysticks, watching a screen that displays the robot's camera feed. Currently, SPROUT can deploy up to 10 feet, and the team is working on expanding it to 25 feet.
When building SPROUT, the team overcame a number of challenges related to the robot's flexibility. Because the robot is made of a deformable material that bends at many points, determining and controlling the robot's shape as it unfurls through the environment is difficult — think of trying to control an expanding wiggly sprinkler toy. Pinpointing how to apply air pressure within the robot so that steering is as simple as pointing the joystick forward to make the robot move forward was essential for system adoption by emergency responders. In addition, the team had to design the tube to minimize friction while the robot grows and engineer the controls for steering.
While a teleoperated system is a good starting point for assessing the hazards of void spaces, the team is also finding new ways to apply robot technologies to the domain, such as using data captured by the robot to build maps of the subsurface voids. "Collapse events are rare but devastating events. In robotics, we would typically want ground truth measurements to validate our approaches, but those simply don't exist for collapsed structures," Hanson says. To solve this problem, Hanson and his team made a simulator that allows them to create realistic depictions of collapsed structures and develop algorithms that map void spaces.
SPROUT was developed in collaboration with Margaret Coad, a professor at the University of Notre Dame and an MIT graduate. When looking for collaborators, Hanson — a graduate of Notre Dame — was already aware of Coad's work on vine robots for industrial inspection. Coad's expertise, together with the laboratory's experience in engineering, strong partnership with urban search-and-rescue teams, and ability to develop fundamental technologies and prepare them for transition to industry, "made this a really natural pairing to join forces and work on research for a traditionally underserved community," Hanson says. "As one of the primary inventors of vine robots, Professor Coad brings invaluable expertise on the fabrication and modeling of these robots."
Lincoln Laboratory tested SPROUT with first responders at the Massachusetts Task Force 1 training site in Beverly, Massachusetts. The tests allowed the researchers to improve the durability and portability of the robot and learn how to grow and steer the robot more efficiently. The team is planning a larger field study this spring.
"Urban search-and-rescue teams and first responders serve critical roles in their communities but typically have little-to-no research and development budgets," Hanson says. "This program has enabled us to push the technology readiness level of vine robots to a point where responders can engage with a hands-on demonstration of the system."
Sensing in constrained spaces is not a problem unique to disaster response communities, Hanson adds. The team envisions the technology being used in the maintenance of military systems or critical infrastructure with difficult-to-access locations.
The initial program focused on mapping void spaces, but future work aims to localize hazards and assess the viability and safety of operations through rubble. "The mechanical performance of the robots has an immediate effect, but the real goal is to rethink the way sensors are used to enhance situational awareness for rescue teams," says Hanson. "Ultimately, we want SPROUT to provide a complete operating picture to teams before anyone enters a rubble pile."
Cem Tasan to lead the Materials Research Laboratory
C. Cem Tasan has been appointed director of MIT’s Materials Research Laboratory (MRL), effective March 15. The POSCO Associate Professor of Metallurgy in the Department of Materials Science and Engineering (DMSE), Tasan succeeds Lionel “Kim” Kimerling, who has held the post of interim director since Carl Thompson stepped down in August 2023.
“MRL is a strategic asset for MIT, and Cem has a clear vision to build upon the lab’s engagement with materials researchers across the breadth of the Institute as well as with external collaborators and sponsors,” wrote Vice President for Research Ian Waitz, in a letter announcing the appointment.
The MRL is a leading interdisciplinary center dedicated to materials science and engineering. As a hub for innovation, the MRL unites researchers across disciplines, fosters industry and government partnerships, and drives advancements that shape the future of technology. Through groundbreaking research, the MRL supports MIT’s mission to advance science and technology for the benefit of society, enabling discoveries that have a lasting impact across industries and everyday life.
“MRL has a position at the core of materials research activities across departments at MIT,” Tasan says. “It can only grow from where it is, right in the heart of the Institute’s innovative hub.”
As director, Tasan will lead MRL’s research mission, with a view to strengthening internal collaboration and building upon the interdisciplinary laboratory’s long history of industry engagement. He will also take on responsibility for the management of Building 13, the Vannevar Bush Building, which houses key research facilities and labs.
“MRL is in very good hands with Cem Tasan’s leadership,” says Kimerling, the outgoing interim director. “His vision for a united MIT materials community whose success is stimulated by the convergence of basic science and engineering solutions provides the nutrition for MIT’s creative relevance to society. His collegial nature, motivating energy, and patient approach will make it happen.”
Tasan is a metallurgist with expertise in the fracture of metals and the design of damage-resistant alloys. Among other advances, his lab has demonstrated a multiscale means of designing high-strength/high-ductility titanium alloys, and explained the stress intensification mechanism by which human hair damages hard steel razors, pointing the way to stronger and longer-lasting blades.
“We need better materials that operate in more and more extreme conditions, for almost all of our critical industries and applications,” says Tasan. “Materials research in MRL identifies interdisciplinary pathways to address this important challenge.”
He studied in Turkey and the Netherlands, earning his PhD at Eindhoven University of Technology before spending several years leading a research group at the Max Planck Institute for Sustainable Materials in Germany. He joined the MIT faculty in 2016 and earned tenure in 2022.
“Cem has led one of the major collaborative research teams at MRL, and he expects to continue developing a strong community among the MIT materials research faculty,” wrote Waitz in his letter on March 14.
The MRL was established in 2017 through the merger of the MIT Materials Processing Center (MPC) and the Center for Materials Science and Engineering. This unification aimed to strengthen MIT’s leadership in materials research by fostering interdisciplinary collaboration and advancing breakthroughs in areas such as energy conversion, quantum materials, and materials sustainability.
From 2008 to 2017, Thompson, the Stavros Salapatas Professor of Materials Science and Engineering, served as director of the MPC. During his tenure, he played a crucial role in expanding materials research and building partnerships with industry, government agencies, and academic institutions. With the formation of the MRL in 2017, Thompson was appointed its inaugural director, guiding the new laboratory to prominence as a hub for cutting-edge materials science. He stepped down from this role in August 2023.
At that time, Kimerling stepped in to serve as interim director of MRL. He brought special knowledge of the lab’s history, having served as director of the MPC from 1993 to 2008, transforming it into a key industry-academic interface. Under his leadership, the MPC became a crucial gateway for industry partners to collaborate with MIT faculty across materials-related disciplines, bridging fundamental research with industrial applications. His vision helped drive technological innovation and economic development by aligning academic expertise with industry needs. As interim director of MRL these past 18 months, Kimerling has ensured continuity in leadership.
“I’m delighted that Cem will be the next MRL director,” says Thompson. “He’s a great fit. He has been affiliated with MPC, and then MRL, since the beginning of his faculty career at MIT. He’s also played a key role in leading a renaissance in physical metallurgy at MIT and has many close ties to industry.”
Researchers teach LLMs to solve complex planning challenges
Imagine a coffee company trying to optimize its supply chain. The company sources beans from three suppliers, roasts them at two facilities into either dark or light coffee, and then ships the roasted coffee to three retail locations. The suppliers have different fixed capacities, and roasting costs and shipping costs vary from place to place.
The company seeks to minimize costs while meeting a 23 percent increase in demand.
Wouldn’t it be easier for the company to just ask ChatGPT to come up with an optimal plan? In fact, for all their incredible capabilities, large language models (LLMs) often perform poorly when tasked with directly solving such complicated planning problems on their own.
Rather than trying to change the model to make an LLM a better planner, MIT researchers took a different approach. They introduced a framework that guides an LLM to break down the problem like a human would, and then automatically solve it using a powerful software tool.
A user only needs to describe the problem in natural language — no task-specific examples are needed to train or prompt the LLM. The model encodes a user’s text prompt into a format that can be unraveled by an optimization solver designed to efficiently crack extremely tough planning challenges.
During the formulation process, the LLM checks its work at multiple intermediate steps to make sure the plan is described correctly to the solver. If it spots an error, rather than giving up, the LLM tries to fix the broken part of the formulation.
When the researchers tested their framework on nine complex challenges, such as minimizing the distance warehouse robots must travel to complete tasks, it achieved an 85 percent success rate, whereas the best baseline only achieved a 39 percent success rate.
The versatile framework could be applied to a range of multistep planning tasks, such as scheduling airline crews or managing machine time in a factory.
“Our research introduces a framework that essentially acts as a smart assistant for planning problems. It can figure out the best plan that meets all the needs you have, even if the rules are complicated or unusual,” says Yilun Hao, a graduate student in the MIT Laboratory for Information and Decision Systems (LIDS) and lead author of a paper on this research.
She is joined on the paper by Yang Zhang, a research scientist at the MIT-IBM Watson AI Lab; and senior author Chuchu Fan, an associate professor of aeronautics and astronautics and LIDS principal investigator. The research will be presented at the International Conference on Learning Representations.
Optimization 101
The Fan group develops algorithms that automatically solve what are known as combinatorial optimization problems. These vast problems have many interrelated decision variables, each with multiple options that rapidly add up to billions of potential choices.
Humans solve such problems by narrowing them down to a few options and then determining which one leads to the best overall plan. The researchers’ algorithmic solvers apply the same principles to optimization problems that are far too complex for a human to crack.
But the solvers they develop tend to have steep learning curves and are typically only used by experts.
“We thought that LLMs could allow nonexperts to use these solving algorithms. In our lab, we take a domain expert’s problem and formalize it into a problem our solver can solve. Could we teach an LLM to do the same thing?” Fan says.
Using the framework the researchers developed, called LLM-Based Formalized Programming (LLMFP), a person provides a natural language description of the problem, background information on the task, and a query that describes their goal.
Then LLMFP prompts an LLM to reason about the problem and determine the decision variables and key constraints that will shape the optimal solution.
LLMFP asks the LLM to detail the requirements of each variable before encoding the information into a mathematical formulation of an optimization problem. It writes code that encodes the problem and calls the attached optimization solver, which arrives at an ideal solution.
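To make concrete what handing a formulation to a solver looks like, here is a minimal sketch, in Python with SciPy, of a simplified slice of the coffee supply-chain example (roasteries shipping to retailers only). This is not the researchers' LLMFP code, and all costs, demands, and capacities are made-up numbers; it only illustrates the kind of linear program an optimization solver consumes.

```python
# A toy linear program for a simplified coffee supply chain (assumed numbers).
import numpy as np
from scipy.optimize import linprog

# Decision variables: tons shipped from roastery r to retailer s, flattened as
# x = [r0->s0, r0->s1, r0->s2, r1->s0, r1->s1, r1->s2]
ship_cost = np.array([2.0, 3.0, 2.5, 4.0, 1.5, 3.5])    # cost per ton (assumed)
roast_cost = np.array([5.0, 5.0, 5.0, 4.0, 4.0, 4.0])   # roasting cost per ton (assumed)
c = ship_cost + roast_cost                               # objective: total cost

# Each retailer's demand must be met exactly (base demand scaled up 23 percent).
demand = np.array([30.0, 40.0, 50.0]) * 1.23
A_eq = np.array([
    [1, 0, 0, 1, 0, 0],   # tons arriving at retailer 0
    [0, 1, 0, 0, 1, 0],   # retailer 1
    [0, 0, 1, 0, 0, 1],   # retailer 2
])
b_eq = demand

# Each roastery has a fixed capacity (assumed numbers).
A_ub = np.array([
    [1, 1, 1, 0, 0, 0],   # total tons leaving roastery 0
    [0, 0, 0, 1, 1, 1],   # roastery 1
])
b_ub = np.array([90.0, 80.0])

res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=(0, None))
print(res.x.reshape(2, 3), res.fun)  # optimal shipping plan and its total cost
```

In LLMFP, the LLM itself would generate a formulation of this kind from the user's natural-language description, check it, and then hand it to the solver.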
“It is similar to how we teach undergrads about optimization problems at MIT. We don’t teach them just one domain. We teach them the methodology,” Fan adds.
As long as the inputs to the solver are correct, it will give the right answer. Any mistakes in the solution come from errors in the formulation process.
To ensure it has found a working plan, LLMFP analyzes the solution and modifies any incorrect steps in the problem formulation. Once the plan passes this self-assessment, the solution is described to the user in natural language.
Perfecting the plan
This self-assessment module also allows the LLM to add any implicit constraints it missed the first time around, Hao says.
For instance, if the framework is optimizing a supply chain to minimize costs for a coffee shop, a human knows the coffee shop can’t ship a negative amount of roasted beans, but an LLM might not realize that.
The self-assessment step would flag that error and prompt the model to fix it.
“Plus, an LLM can adapt to the preferences of the user. If the model realizes a particular user does not like to change the time or budget of their travel plans, it can suggest changing things that fit the user’s needs,” Fan says.
In a series of tests, their framework achieved an average success rate between 83 and 87 percent across nine diverse planning problems using several LLMs. While some baseline models were better at certain problems, LLMFP achieved an overall success rate about twice as high as the baseline techniques.
Unlike these other approaches, LLMFP does not require domain-specific examples for training. It can find the optimal solution to a planning problem right out of the box.
In addition, the user can adapt LLMFP for different optimization solvers by adjusting the prompts fed to the LLM.
“With LLMs, we have an opportunity to create an interface that allows people to use tools from other domains to solve problems in ways they might not have been thinking about before,” Fan says.
In the future, the researchers want to enable LLMFP to take images as input to supplement the descriptions of a planning problem. This would help the framework solve tasks that are particularly hard to fully describe with natural language.
This work was funded, in part, by the Office of Naval Research and the MIT-IBM Watson AI Lab.
Looking under the hood at the brain’s language system
As a young girl growing up in the former Soviet Union, Evelina Fedorenko PhD ’07 studied several languages, including English, as her mother hoped that it would give her the chance to eventually move abroad for better opportunities.
Her language studies not only helped her establish a new life in the United States as an adult, but also led to a lifelong interest in linguistics and how the brain processes language. Now an associate professor of brain and cognitive sciences at MIT, Fedorenko studies the brain’s language-processing regions: how they arise, whether they are shared with other mental functions, and how each region contributes to language comprehension and production.
Fedorenko’s early work helped to identify the precise locations of the brain’s language-processing regions, and she has been building on that work to generate insight into how different neuronal populations in those regions implement linguistic computations.
“It took a while to develop the approach and figure out how to quickly and reliably find these regions in individual brains, given this standard problem of the brain being a little different across people,” she says. “Then we just kept going, asking questions like: Does language overlap with other functions that are similar to it? How is the system organized internally? Do different parts of this network do different things? There are dozens and dozens of questions you can ask, and many directions that we have pushed on.”
Among some of the more recent directions, she is exploring how the brain’s language-processing regions develop early in life, through studies of very young children, people with unusual brain architecture, and computational models known as large language models.
From Russia to MIT
Fedorenko grew up in the Russian city of Volgograd, which was then part of the Soviet Union. When the Soviet Union broke up in 1991, her mother, a mechanical engineer, lost her job, and the family struggled to make ends meet.
“It was a really intense and painful time,” Fedorenko recalls. “But one thing that was always very stable for me is that I always had a lot of love, from my parents, my grandparents, and my aunt and uncle. That was really important and gave me the confidence that if I worked hard and had a goal, that I could achieve whatever I dreamed about.”
Fedorenko did work hard in school, studying English, French, German, Polish, and Spanish, and she also participated in math competitions. As a 15-year-old, she spent a year attending high school in Alabama, as part of a program that placed students from the former Soviet Union with American families. She had been thinking about applying to universities in Europe but changed her plans when she realized the American higher education system offered more academic flexibility.
After being admitted to Harvard University with a full scholarship, she returned to the United States in 1998 and earned her bachelor’s degree in psychology and linguistics, while also working multiple jobs to send money home to help her family.
While at Harvard, she also took classes at MIT and ended up deciding to apply to the Institute for graduate school. For her PhD research at MIT, she worked with Ted Gibson, a professor of brain and cognitive sciences, and later, Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience. She began by using functional magnetic resonance imaging (fMRI) to study brain regions that appeared to respond preferentially to music, but she soon switched to studying brain responses to language.
She found that working with Kanwisher, who studies the functional organization of the human brain but hadn’t worked much on language before, helped Fedorenko to build a research program free of potential biases baked into some of the early work on language processing in the brain.
“We really kind of started from scratch,” Fedorenko says, “combining the knowledge of language processing I have gained by working with Gibson and the rigorous neuroscience approaches that Kanwisher had developed when studying the visual system.”
After finishing her PhD in 2007, Fedorenko stayed at MIT for a few years as a postdoc funded by the National Institutes of Health, continuing her research with Kanwisher. During that time, she and Kanwisher developed techniques to identify language-processing regions in different people, and discovered new evidence that certain parts of the brain respond selectively to language. Fedorenko then spent five years as a research faculty member at Massachusetts General Hospital, before receiving an offer to join the faculty at MIT in 2019.
How the brain processes language
Since starting her lab at MIT’s McGovern Institute for Brain Research, Fedorenko and her trainees have made several discoveries that have helped to refine neuroscientists’ understanding of the brain’s language-processing regions, which are spread across the left frontal and temporal lobes of the brain.
In a series of studies, her lab showed that these regions are highly selective for language and are not engaged by activities such as listening to music, reading computer code, or interpreting facial expressions, all of which have been argued to share similarities with language processing.
“We’ve separated the language-processing machinery from various other systems, including the system for general fluid thinking, and the systems for social perception and reasoning, which support the processing of communicative signals, like facial expressions and gestures, and reasoning about others’ beliefs and desires,” Fedorenko says. “So that was a significant finding, that this system really is its own thing.”
More recently, Fedorenko has turned her attention to figuring out, in more detail, the functions of different parts of the language processing network. In one recent study, she identified distinct neuronal populations within these regions that appear to have different temporal windows for processing linguistic content, ranging from just one word up to six words.
She is also studying how language-processing circuits arise in the brain, with ongoing studies in which she and a postdoc in her lab are using fMRI to scan the brains of young children, observing how their language regions behave even before the children have fully learned to speak and understand language.
Large language models (similar to ChatGPT) can help with these types of developmental questions, as the researchers can better control the language inputs to the model and have continuous access to its abilities and representations at different stages of learning.
“You can train models in different ways, on different kinds of language, in different kinds of regimens. For example, training on simpler language first and then more complex language, or on language combined with some visual inputs. Then you can look at the performance of these language models on different tasks, and also examine changes in their internal representations across the training trajectory, to test which model best captures the trajectory of human language learning,” Fedorenko says.
To gain another window into how the brain develops language ability, Fedorenko launched the Interesting Brains Project several years ago. Through this project, she is studying people who experienced some type of brain damage early in life, such as a prenatal stroke, or brain deformation as a result of a congenital cyst. In some of these individuals, their conditions destroyed or significantly deformed the brain’s typical language-processing areas, but all of these individuals are cognitively indistinguishable from individuals with typical brains: They still learned to speak and understand language normally, and in some cases, they didn’t even realize that their brains were in some way atypical until they were adults.
“That study is all about plasticity and redundancy in the brain, trying to figure out what brains can cope with, and how,” Fedorenko says. “Are there many solutions to build a human mind, even when the neural infrastructure is so different-looking?”
Deep-dive dinners are the norm for tuna and swordfish, MIT oceanographers find
How far would you go for a good meal? For some of the ocean’s top predators, maintaining a decent diet requires some surprisingly long-distance dives.
MIT oceanographers have found that big fish like tuna and swordfish get a large fraction of their food from the ocean’s twilight zone — a cold and dark layer of the ocean about half a mile below the surface, where sunlight rarely penetrates. Tuna and swordfish have been known to take extreme plunges, but it was unclear whether these deep dives were for food, and to what extent the fishes’ diet depends on prey in the twilight zone.
In a study published recently in the ICES Journal of Marine Science, the MIT student-led team reports that the twilight zone is a major food destination for three predatory fish — bigeye tuna, yellowfin tuna, and swordfish. While the three species swim primarily in the shallow open ocean, the scientists found these fish are sourcing between 50 and 60 percent of their diet from the twilight zone.
The findings suggest that tuna and swordfish rely more heavily on the twilight zone than scientists had assumed. This implies that any change to the twilight zone’s food web, such as through increased fishing, could negatively impact fisheries of more shallow tuna and swordfish.
“There is increasing interest in commercial fishing in the ocean’s twilight zone,” says Ciara Willis, the study’s lead author, who was a PhD student in the MIT-Woods Hole Oceanographic Institution (WHOI) Joint Program when conducting the research and is now a postdoc at WHOI. “If we start heavily fishing that layer of the ocean, our study suggests that could have profound implications for tuna and swordfish, which are very reliant on the twilight zone and are highly valuable existing fisheries.”
The study’s co-authors include Kayla Gardener of MIT-WHOI, and WHOI researchers Martin Arostegui, Camrin Braun, Leah Houghton, Joel Llopiz, Annette Govindarajan, and Simon Thorrold, along with Walt Golet at the University of Maine.
Deep-ocean buffet
The ocean’s twilight zone is a vast and dim layer that lies between the sunlit surface waters and the ocean’s permanently dark, midnight zone. Also known as the midwater, or mesopelagic layer, the twilight zone stretches between 200 and 1,000 meters below the ocean’s surface and is home to a huge variety of organisms that have adapted to live in the darkness.
“This is a really understudied region of the ocean, and it’s filled with all these fantastic, weird animals,” Willis says.
In fact, it’s estimated that the biomass of fish in the twilight zone is somewhere close to 10 billion tons, much of which is concentrated in layers at certain depths. By comparison, the marine life that lives closer to the surface, Willis says, is “a thin soup,” which is slim pickings for large predators.
“It’s important for predators in the open ocean to find concentrated layers of food. And I think that’s what drives them to be interested in the ocean’s twilight zone,” Willis says. “We call it the ‘deep ocean buffet.’”
And much of this buffet is on the move. Many kinds of fish, squid, and other deep-sea organisms in the twilight zone will swim up to the surface each night to find food. This twilight community will descend back into darkness at dawn to avoid detection.
Scientists have observed that many large predatory fish will make regular dives into the twilight zone, presumably to feast on the deep-sea bounty. For instance, bigeye tuna spend much of their day making multiple short, quick plunges into the twilight zone, while yellowfin tuna dive down every few days to weeks. Swordfish, in contrast, appear to follow the daily twilight migration, feeding on the community as it rises and falls each day.
“We’ve known for a long time that these fish and many other predators feed on twilight zone prey,” Willis says. “But the extent to which they rely on this deep-sea food web for their forage has been unclear.”
Twilight signal
For years, scientists and fishers have found remnants of fish from the twilight zone in the stomach contents of larger, surface-based predators. This suggests that predator fish do indeed feed on twilight food, such as lanternfish, certain types of squid, and long, snake-like fish called barracudina. But, as Willis notes, stomach contents give just a “snapshot” of what a fish ate that day.
She and her colleagues wanted to know how big a role twilight food plays in the general diet of predator fish. For their new study, the team collaborated with fishermen in New Jersey and Florida, who fish for a living in the open ocean. They supplied the team with small tissue samples of their commercial catch, including samples of bigeye tuna, yellowfin tuna, and swordfish.
Willis and her advisor, Senior Scientist Simon Thorrold, brought the samples back to Thorrold’s lab at WHOI and analyzed the fish bits for essential amino acids — the key building blocks of proteins. Essential amino acids are made only by primary producers, members of the base of the food web such as phytoplankton, microbes, and fungi. Each of these producers makes essential amino acids with a slightly different carbon isotope configuration, which is then conserved as the producers are consumed up their respective food chains.
“One of the hypotheses we had was that we’d be able to distinguish the carbon isotopic signature of the shallow ocean, which would logically be more phytoplankton-based, versus the deep ocean, which is more microbially based,” Willis says.
The researchers figured that if a fish sample carried one carbon isotopic signature rather than the other, it would be a sign that the fish feeds more on food from the deep rather than from shallow waters.
“We can use this [carbon isotope signature] to infer a lot about what food webs they’ve been feeding in, over the last five to eight months,” Willis says.
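At its simplest, that inference is a two-end-member mixing calculation: the carbon isotope value measured in the predator's essential amino acids is treated as a weighted average of a shallow, phytoplankton-based signature and a deep, microbially based signature. The sketch below uses hypothetical values; the actual study draws on multiple amino acids and more detailed statistics.

```python
# Simplified two-source mixing model (hypothetical delta-13C values, in per mil).
# f is the fraction of the predator's carbon that came from the deep end member:
#   delta_predator = f * delta_deep + (1 - f) * delta_shallow
delta_shallow = -20.0   # assumed shallow, phytoplankton-based signature
delta_deep = -26.0      # assumed deep, microbially based signature
delta_predator = -23.3  # assumed measurement from a tuna tissue sample

f_deep = (delta_predator - delta_shallow) / (delta_deep - delta_shallow)
print(f"Estimated twilight-zone contribution: {f_deep:.0%}")  # ~55%
```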
The team looked at carbon isotopes in tissue samples from more than 120 fish, including bigeye tuna, yellowfin tuna, and swordfish. They found that individuals from all three species contained a substantial amount of carbon derived from sources in the twilight zone. The researchers estimate that, on average, food from the twilight zone makes up 50 to 60 percent of the diet of the three predator species, with some slight variations among species.
“We saw the bigeye tuna were far and away the most consistent in where they got their food from. They didn’t vary much from individual to individual,” Willis says. “Whereas the swordfish and yellowfin tuna were more variable. That means if you start having big-scale fishing in the twilight zone, the bigeye tuna might be the ones who are most at risk from food web effects.”
The researchers note there has been increased interest in commercially fishing the twilight zone. While many fish in that region are not edible for humans, they are starting to be harvested as fishmeal and fish oil products. In ongoing work, Willis and her colleagues are evaluating the potential impacts to tuna fisheries if the twilight zone becomes a target for large-scale fishing.
“If predatory fish like tunas have 50 percent reliance on twilight zone food webs, and we start heavily fishing that region, that could lead to uncertainty around the profitability of tuna fisheries,” Willis says. “So we need to be very cautious about impacts on the twilight zone and the larger ocean ecosystem.”
This work was part of the Woods Hole Oceanographic Institution’s Ocean Twilight Zone Project, funded as part of the Audacious Project housed at TED. Willis was additionally supported by the Natural Sciences and Engineering Research Council of Canada and the MIT Martin Family Society of Fellows for Sustainability.
On a quest for a better football helmet
Next time you’re watching football you might be looking at an important feat of engineering from an MIT alumnus.
For the last year, former MIT middle linebacker and mechanical engineer Kodiak Brush ’17 has been leading the development of football helmets for the California-based sports equipment manufacturer LIGHT Helmets. In December, Brush notched a major achievement in that work: LIGHT Helmets’ new Apache helmet line was ranked the highest-performing helmet ever in safety tests by Virginia Tech’s renowned helmet-testing lab.
The ranking bolsters LIGHT Helmets’ innovative effort to make football helmets lighter and safer.
“We’re trying to lower the overall amount of energy going into each impact by lowering the weight of the helmet,” Brush says. “It’s a balancing act trying to have a complete, polished product with all the bells and whistles while at the same time keeping the mass of the helmet as low as possible.”
No helmet ensures total safety, and the NFL carries out helmet tests of its own, but for Brush, who played football for most of his life, the latest results were a rewarding milestone.
“It’s really cool to work in the football helmet space after playing the sport for so long,” Brush says. “We did this with a fraction of the research and development budget of our competitors. It’s a great feeling to have worked on something that could help so many people.”
From the field to the lab
Brush spent his playing career at middle linebacker, a position often considered the quarterback of the defense. In that role, he got accustomed to helping teammates understand their assignments on the field and making sure everyone was in the right position. At MIT, he quickly realized his role would be different.
“In high school, I was constantly reminding teammates what their job was and helping linemen when they lined up in the wrong spot,” Brush says. “At MIT, I didn’t need to do that at all. Everyone knew exactly what their job was. It was really cool playing football with such an intelligent group.”
Throughout his football career, Brush says concussions hung over the sport. He was only formally diagnosed with one concussion, but he notes how difficult it can be to accurately diagnose concussions during games.
“We did baseline tests before the season so we could take tests after a suspected concussion to see if our cognitive ability was degraded,” Brush explains. “But as a player, you want to get back out there and keep helping your team, so players often try to downplay injuries. The doctors do their best.”
Brush worked as an accident reconstruction expert immediately after graduation before joining a product design firm. It was through that position that he first began working with LIGHT Helmets through a consulting project. He started full time with LIGHT last year.
Since then, Brush has managed research and development along with the production of new helmet lines, working closely with LIGHT’s technology partner, KOLLIDE.
“I’m currently the only engineer at LIGHT, so I wear a lot of different hats,” Brush says.
A safer helmet
Brush led the development of LIGHT’s Apache helmet. His approach harkened back to his favorite class at MIT, 2.009 (Product Engineering Process). In the process of building prototypes, students in that class are often tasked with taking apart other products to study how they’re made. For Apache, Brush started by disassembling competing helmets to try to understand how they work, where they’re limited, and where each ounce of weight comes from.
“That helped us make decisions around what we wanted to incorporate into our helmets and what we thought was unnecessary,” Brush says.
LIGHT’s Apache helmets use an impact-modified nylon shell and a 3D-printed thermoplastic polyurethane liner. The liner can compress by up to 80 percent of its thickness, compared with traditional foam, which Brush says may compress 20 to 30 percent at most. The liner is made up of 20 different cylindrical pods, each of which has variable stiffness depending on its location in the helmet.
Brush says the shell is more flexible than traditional helmets, which is part of a broader trend among companies focusing on concussion avoidance.
“The idea with the flexible shell is we’re now able to squish both the inside and outside of the helmet, which lets you extend the length of the impact and lower the severity of the hit,” Brush says.
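Two textbook relations sit behind both points, sketched below with illustrative numbers that are assumptions rather than LIGHT's test data: the kinetic energy a helmet contributes scales with its mass (KE = ½mv²), and for a fixed change in momentum, a longer impact lowers the average force (FΔt = Δp).

```python
# Illustrative back-of-the-envelope numbers (assumed, not LIGHT's test data).
impact_speed = 5.0                             # m/s at impact (assumed)
heavy_helmet_kg, light_helmet_kg = 2.0, 1.4    # assumed helmet masses

def kinetic_energy(mass_kg, v):                # KE = 1/2 * m * v^2
    return 0.5 * mass_kg * v**2

print(kinetic_energy(heavy_helmet_kg, impact_speed))  # 25.0 J from the helmet alone
print(kinetic_energy(light_helmet_kg, impact_speed))  # 17.5 J: less energy per impact

# Longer impact duration -> lower average force for the same momentum change.
momentum_change = 5.0 * impact_speed           # kg*m/s for an assumed 5 kg effective mass
for dt in (0.005, 0.010):                      # stiffer vs. more compliant shell (assumed, s)
    print(momentum_change / dt)                # 5000 N vs. 2500 N average force
```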
A winning formula
Brush says the company’s performance in Virginia Tech’s tests has garnered a lot of excitement in the industry. The Apache helmet is available for use across high school, college, and professional levels, and the company is currently developing a youth version.
“Last year, we sold about 5,000 helmets, but we’re anticipating tenfold growth this year,” Brush says. “Dealers see the opportunity to sell the number-one-rated helmet at the price of a lot of much lower-rated helmets.”
Other helmets from LIGHT are already being used at the highest levels, with players from 30 of the 32 NFL teams choosing a LIGHT Helmet when they suit up, the company says. That traction has changed Brush’s relationship with football.
For instance, he used to watch NFL games on Sundays only occasionally. But now that his helmets are on TV, he finds himself rooting for the players and teams wearing them.
Regardless of who he roots for, when football becomes safer, everyone wins.
Professor Emeritus Frederick Greene, influential chemist who focused on free radicals, dies at 97
Frederick “Fred” Davis Greene II, professor emeritus in the MIT Department of Chemistry who was accomplished in the field of physical organic chemistry and free radicals, passed away peacefully after a brief illness, surrounded by his family, on Saturday, March 22. He had been a member of the MIT community for over 70 years.
“Greene’s dedication to teaching, mentorship, and the field of physical organic chemistry is notable,” said Professor Troy Van Voorhis, head of the Department of Chemistry, upon learning of Greene’s passing. “He was also a constant source of joy to those who interacted with him, and his commitment to students and education was legendary. He will be sorely missed.”
Greene, a native of Glen Ridge, New Jersey, was born on July 7, 1927 to parents Phillips Foster Greene and Ruth Altman Greene. He spent his early years in China, where his father was a medical missionary with Yale-In-China. Greene and his family moved to the Philippines just ahead of the Japanese invasion prior to World War II, and then back to the French Concession of Shanghai, and to the United States in 1940. He joined the U.S. Navy in December 1944, and afterwards earned his bachelor’s degree from Amherst College in 1949 and a PhD from Harvard University in 1952. Following a year at the University of California at Los Angeles as a research associate, he was appointed a professor of chemistry at MIT by then-Department Head Arthur C. Cope in 1953. Greene retired in 1995.
Greene’s research focused on peroxide decompositions and free radical chemistry, and he reported the remarkable bimolecular reaction between certain diacyl peroxides and electron-rich olefins and aromatics. He was also interested in small-ring heterocycles, e.g., the three-membered ring 2,3-diaziridinones. His research also covered strained olefins, the Greene-Viavattene diene, and 9, 9', 10, 10'-tetradehydrodianthracene.
Greene was elected to the American Academy of Arts and Sciences in 1965 and received an honorary doctorate from Amherst College for his research in free radicals. He served as editor-in-chief of the Journal of Organic Chemistry of the American Chemical Society from 1962 to 1988. He was awarded a special fellowship from the National Science Foundation and spent a year at Cambridge University in Cambridge, England, and was a member of the Chemical Society of London.
Greene and Professor James Moore of the University of Philadelphia worked closely with Greene’s wife, Theodora “Theo” W. Greene, in the conversion of her PhD thesis, which was overseen by Professor Elias J. Corey of Harvard University, into her book “Greene’s Protective Groups in Organic Synthesis.” The book became an indispensable reference for any practicing synthetic organic or medicinal chemist and is now in its fifth edition. Theo, who predeceased Fred in July 2005, was a tremendous partner to Greene, both personally and professionally. A careful researcher in her own right, she served as associate editor of the Journal of Organic Chemistry for many years.
Fred Greene recently appeared in a series of videos with Professor Emeritus Dietmar Seyferth (who passed away in 2020) that was spearheaded by Professor Rick Danheiser. The videos cover a range of topics, including Seyferth and Greene’s memories of their fellow faculty members during the 1950s to mid-1970s, how they came to be hired, the construction of various lab spaces, developments in teaching and research, the evolution of the department’s graduate program, and much more.
Danheiser notes that it was a privilege to share responsibility for the undergraduate class 5.43 (Advanced Organic Chemistry) with Greene. “Fred Greene was a fantastic teacher and inspired several generations of MIT undergraduate and graduate students with his superb lectures,” Danheiser recalls. The course they shared was Danheiser’s first teaching assignment at MIT, and he states that Greene’s “counsel and mentoring was invaluable to me.”
The Department of Chemistry recognized Greene’s contributions to its academic program by naming the annual student teaching award the “Frederick D. Greene Teaching Award.” This award recognizes outstanding contributions in teaching in chemistry by undergraduates. Since 1993 the award has been given to 46 students.
Dabney White Dixon PhD ’76 was one of many students with whom Greene formed a lifelong friendship and mentorship. Dixon shares, “Fred Greene was an outstanding scientist — intelligent, ethical, and compassionate in every aspect of his life. He possessed an exceptional breadth of knowledge in organic chemistry, particularly in mechanistic organic chemistry, as evidenced by his long tenure as editor of the Journal of Organic Chemistry (1962 to 1988). Weekly, large numbers of manuscripts flowed through his office. He had an acute sense of fairness in evaluating submissions and was helpful to those submitting manuscripts. His ability to navigate conflicting scientific viewpoints was especially evident during the heated debates over non-classical carbonium ions in the 1970s.
“Perhaps Fred’s greatest contribution to science was his mentorship. At a time when women were rare in chemistry PhD programs, Fred’s mentorship was particularly meaningful. I was the first woman in my scientific genealogical lineage since the 1500s, and his guidance gave me the confidence to overcome challenges. He and Theo provided a supportive and joyful environment, helping me forge a career in academia where I have since mentored 13 PhD students — an even mix of men and women — a testament to the social progress in science that Fred helped foster.
“Fred’s meticulous attention to detail was legendary. He insisted that every new molecule be fully characterized spectroscopically before he would examine the data. Through this, his students learned the importance of thoroughness, accuracy, and organization. He was also an exceptional judge of character, entrusting students with as much responsibility as they could handle. His honesty was unwavering — he openly acknowledged mistakes, setting a powerful example for his students.
“Shortly before the pandemic, I had the privilege of meeting Fred with two of his scientific ‘granddaughters’ — Elizabeth Draganova, then a postdoc at Tufts (now an assistant professor at Emory), and Cyrianne Keutcha, then a graduate student at Harvard (now a postdoc at Yale). As we discussed our work, it was striking how much science had evolved — from IR and NMR of small-ring heterocycles to surface plasmon resonance and cryo-electron microscopy of large biochemical systems. Yet, Fred’s intellectual curiosity remained as sharp as ever. His commitment to excellence, attention to detail, and passion for uncovering chemical mechanisms lived on in his scientific descendants.
“He leaves a scientific legacy of chemists who internalized his lessons on integrity, kindness, and rigorous analysis, carrying them forward to their own students and research. His impact on the field of chemistry — and on the lives of those fortunate enough to have known him — will endure.”
Carl Renner PhD ’74 felt fortunate and privileged to be a doctoral student in the Greene group from 1969 to 1973, and to serve as a teaching assistant for Greene’s 5.43 course. Renner recalls, “He possessed a curious mind of remarkable clarity and discipline. He prepared his lectures meticulously and loved his students. He was extremely generous with his time and knowledge. I never heard him complain or say anything unkind. Everyone he encountered came away better for it.”
Gary Breton PhD ’91 credits the development of his interest in physical organic chemistry to his time spent in Greene’s class. Breton says, “During my time in the graduate chemistry program at MIT (1987-91) I had the privilege of learning from some of the world’s greatest minds in chemistry, including Dr. Fred Greene. At that time, all incoming graduate students in organic chemistry were assigned in small groups to a seminar-type course that met each week to work on the elucidation of reaction mechanisms, and I was assigned to Dr. Greene’s class. It was here that not only did Dr. Greene afford me a confidence in how to approach reaction mechanisms, but he also ignited my fascination with physical organic chemistry. I was only too happy to join his research group, and begin a love/hate relationship with reactive nitrogen-containing heterocycles that continues to this day in my own research lab as a chemistry professor.
“Anyone that knew Dr. Greene quickly recognized that he was highly intelligent and exceptionally knowledgeable about all things organic, but under his mentorship I also saw his creativity and cleverness. Beyond that, and even more importantly, I witnessed his kindness and generosity, and his subtle sense of humor. Dr. Greene’s enduring legacy is the large number of undergraduate students, graduate students, and postdocs whose lives he touched over his many years. He will be greatly missed.”
John Dolhun PhD ’73 recalls Greene’s love for learning, and that he “was one of the kindest persons that I have known.” Dolhun shares, “I met Fred Greene when I was a graduate student. His organic chemistry course was one of the most popular, and he was a top choice for many students’ thesis committees. When I returned to MIT in 2008 and reconnected with him, he was still endlessly curious — always learning, asking questions. A few years ago, he visited me and we had lunch. Back at the chemistry building, I reached for the elevator button and he said, ‘I always walk up the five flights of stairs.’ So, I walked up with him. Fred knew how to keep both mind and body in shape. He was truly a beacon of light in the department.”
Liz McGrath, retired chemistry staff member, warmly recalls the regular coffees and conversations she shared with Fred over two decades at the Institute. She shares, “Fred, who was already emeritus by the time of my arrival, imparted to me a deep interest in the history of MIT Chemistry’s events and colorful faculty. He had a phenomenal memory, which made his telling of the history so rich in its content. He was a true gentleman and sweet and kind to boot. ... I will remember him with much fondness.”
Greene is survived by his children, Alan, Carol, Elizabeth, and Phillips; nine grandchildren; and six great-grandchildren. A memorial service will be held on April 5 at 11 a.m. at the First Congregational Church in Winchester, Massachusetts.
Pattie Maes receives ACM SIGCHI Lifetime Research Award
Pattie Maes, the Germeshausen Professor of Media Arts and Sciences at MIT and head of the Fluid Interfaces research group within the MIT Media Lab, has been awarded the 2025 ACM SIGCHI Lifetime Research Award. She will accept the award at CHI 2025 in Yokohama, Japan this April.
The Lifetime Research Award is given to individuals whose research in human-computer interaction (HCI) is considered both fundamental and influential to the field. Recipients are selected based on their cumulative contributions, influence on the work of others, new research developments, and being an active participant in the Association for Computing Machinery’s Special Interest Group on Computer-Human Interaction (ACM SIGCHI) community.
Her nomination recognizes her advocacy for placing human agency at the center of HCI and artificial intelligence research. Rather than having AI replace human capabilities, Maes has advocated for ways in which those capabilities can be supported or enhanced by the integration of AI.
Pioneering the concept of software agents in the 1990s, Maes’ work has always been situated at the intersection of human-computer interaction and artificial intelligence and has helped lay the foundations for today’s online experience. Her article “Social information filtering: algorithms for automating 'word of mouth'” from CHI ’95, co-authored with graduate student Upendra Shardanand, is the second-most-cited paper from ACM SIGCHI.
Beyond her contributions in desktop-based interaction, she has an extensive body of work in the area of novel wearable devices that enhance the human experience, for example by supporting memory, learning, decision-making, or health. Through an interdisciplinary approach, Maes has explored accessible and ethical designs while stressing the need for a human-centered approach.
“As a senior faculty member, Pattie is an integral member of the Media Lab, MIT, and larger HCI communities,” says Media Lab Director Dava Newman. “Her contributions to several different fields, alongside her unwavering commitment to enhancing the human experience in her work, is exemplary of not only the Media Lab’s interdisciplinary spirit, but also our core mission: to create transformative technologies and systems that enable people to reimagine and redesign their lives. We all celebrate this well-deserved recognition for Pattie!”
Maes is the second MIT professor to receive this honor, joining her Media Lab colleague Hiroshi Ishii, the Jerome B. Wiesner Professor of Media Arts and Sciences at MIT and head of the Tangible Media research group.
“I am honored to be recognized by the ACM community, especially given that it can be difficult sometimes for researchers doing highly interdisciplinary research to be appreciated, even though some of the most impactful innovations often emerge from that style of research,” Maes comments.
New Alliance for Data, Evaluation and Policy Training will advance data-driven decision-making in public policy
On March 25, the Abdul Latif Jameel Poverty Action Lab (J-PAL) at MIT launched the global Alliance for Data, Evaluation, and Policy Training (ADEPT) with Community Jameel at an event in São Paulo, Brazil.
ADEPT is a network of universities, governments, and other members united by a shared vision: to empower the next generation of policymakers, decision-makers, and researchers with the tools to innovate, test, and scale the most effective social policies and programs. These programs have the potential to improve the lives of millions of people around the world.
Too often, policy decisions in governments and other organizations are driven by ideology or guesswork. This can result in ineffective and inefficient policies and programs that don’t always serve their intended populations. ADEPT will bring a scientific perspective to policymaking, focusing on topics like statistical analysis, data science, and rigorous impact evaluation.
Together with J-PAL, members will create innovative pathways for learners that include virtual and in-person courses, develop new academic programs on policy evaluation and data analysis, and cultivate a network of evidence-informed policy professionals to drive change globally.
At the launch event at Insper, a Brazilian higher education institution, MIT economists Esther Duflo, co-founder of J-PAL, and Sara Fisher Ellison, faculty director of ADEPT, spoke about the importance of building a community aligned in support of evidence-informed policymaking.
“Our aim is to create a vision-driven network of institutions around the world able to equip far more people in far more places with the skills and ambition for evidence-informed policymaking,” said Duflo. “We are excited to welcome Insper to the movement and create new opportunities for learners in Brazil.”
Members of the alliance will also have access to the MITx MicroMasters program in Data, Economics, and Design of Policy (DEDP), which offers online courses taught by MIT Department of Economics faculty through MIT’s Office of Open Learning. The program offers graduate-level courses that combine the tools of economics and policy design with a strong foundation in economic and mathematical principles.
Early members of the alliance include Insper, a leading research and training institution in Brazil; the National School of Statistics and Applied Economics of Abidjan, in collaboration with the government of Côte d’Ivoire; the Paris School of Economics; and Princeton University.
“This unprecedented initiative in Latin America reinforces Insper’s commitment to academic excellence and the internationalization of teaching, providing Brazilian students with access to a globally renowned program,” says Cristine Pinto, Insper’s director of research. “Promoting large-scale impact through research and data analysis is a core objective of Insper, and one shared by J-PAL through the expansion of ADEPT.”
Learners who obtain the DEDP MicroMasters credential through ADEPT can accelerate their pursuit of a master’s degree by applying to participating universities, including Insper and MIT, opening doors for learners who may not otherwise have access to leading economics programs.
By empowering learners with the tools and ambition to create meaningful change, ADEPT seeks to accelerate data-driven decision-making at every step of the policymaking process. Ultimately, the hope is that ADEPT’s impact will be felt not only by alliance members and their individual learners, but by millions of people reached by better policies and programs worldwide.