MIT Latest News
3 Questions: How AI is helping us monitor and support vulnerable ecosystems
A recent study from Oregon State University estimated that more than 3,500 animal species are at risk of extinction because of factors including habitat alteration, overexploitation of natural resources, and climate change.
To better understand these changes and protect vulnerable wildlife, conservationists like MIT PhD student and Computer Science and Artificial Intelligence Laboratory (CSAIL) researcher Justin Kay are developing computer vision algorithms that carefully monitor animal populations. A member of the lab of MIT Department of Electrical Engineering and Computer Science assistant professor and CSAIL principal investigator Sara Beery, Kay is currently working on tracking salmon in the Pacific Northwest, where the fish provide crucial nutrients to predators like birds and bears while keeping prey populations, such as insects, in check.
With all that wildlife data, though, researchers have lots of information to sort through and many AI models to choose from to analyze it all. Kay and his colleagues at CSAIL and the University of Massachusetts Amherst are developing AI methods that make this data-crunching process much more efficient, including a new approach called “consensus-driven active model selection” (or “CODA”) that helps conservationists choose which AI model to use. Their work was named a Highlight Paper at the International Conference on Computer Vision (ICCV) in October.
That research was supported, in part, by the National Science Foundation, Natural Sciences and Engineering Research Council of Canada, and Abdul Latif Jameel Water and Food Systems Lab (J-WAFS). Here, Kay discusses this project, among other conservation efforts.
Q: In your paper, you pose the question of which AI models will perform the best on a particular dataset. With as many as 1.9 million pre-trained models available in the Hugging Face Models repository alone, how does CODA help us address that challenge?
A: Until recently, using AI for data analysis has typically meant training your own model. This requires significant effort to collect and annotate a representative training dataset, as well as iteratively train and validate models. You also need a certain technical skill set to run and modify AI training code. The way people interact with AI is changing, though — in particular, there are now millions of publicly available pre-trained models that can perform a variety of predictive tasks very well. This potentially enables people to use AI to analyze their data without developing their own model, simply by downloading an existing model with the capabilities they need. But this poses a new challenge: Which model, of the millions available, should they use to analyze their data?
Typically, answering this model selection question also requires you to spend a lot of time collecting and annotating a large dataset, albeit for testing models rather than training them. This is especially true for real applications where user needs are specific, data distributions are imbalanced and constantly changing, and model performance may be inconsistent across samples. Our goal with CODA was to substantially reduce this effort. We do this by making the data annotation process “active.” Instead of requiring users to bulk-annotate a large test dataset all at once, in active model selection we make the process interactive, guiding users to annotate the most informative data points in their raw data. This is remarkably effective, often requiring users to annotate as few as 25 examples to identify the best model from their set of candidates.
We’re very excited about CODA offering a new perspective on how to best utilize human effort in the development and deployment of machine-learning (ML) systems. As AI models become more commonplace, our work emphasizes the value of focusing effort on robust evaluation pipelines, rather than solely on training.
Q: You applied the CODA method to classifying wildlife in images. Why did it perform so well, and what role can systems like this have in monitoring ecosystems in the future?
A: One key insight was that when considering a collection of candidate AI models, the consensus of all of their predictions is more informative than any individual model’s predictions. This can be seen as a sort of “wisdom of the crowd:” On average, pooling the votes of all models gives you a decent prior over what the labels of individual data points in your raw dataset should be. Our approach with CODA is based on estimating a “confusion matrix” for each AI model — given the true label for some data point is class X, what is the probability that an individual model predicts class X, Y, or Z? This creates informative dependencies between all of the candidate models, the categories you want to label, and the unlabeled points in your dataset.
Consider an example application where you are a wildlife ecologist who has just collected a dataset containing potentially hundreds of thousands of images from cameras deployed in the wild. You want to know what species are in these images, a time-consuming task that computer vision classifiers can help automate. You are trying to decide which species classification model to run on your data. If you have labeled 50 images of tigers so far, and some model has performed well on those 50 images, you can be pretty confident it will perform well on the remainder of the (currently unlabeled) images of tigers in your raw dataset as well. You also know that when that model predicts some image contains a tiger, it is likely to be correct, and therefore that any model that predicts a different label for that image is more likely to be wrong. You can use all these interdependencies to construct probabilistic estimates of each model’s confusion matrix, as well as a probability distribution over which model has the highest accuracy on the overall dataset. These design choices allow us to make more informed choices over which data points to label and ultimately are the reason why CODA performs model selection much more efficiently than past work.
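The consensus-and-confusion-matrix idea described above can be sketched in a few lines of code. Everything below is a toy illustration under stated assumptions: the candidate models, their accuracies, the simulated labels, and the simple "query where models disagree most" heuristic are all hypothetical stand-ins, not the paper's actual CODA algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (hypothetical): 3 candidate classifiers, 200 unlabeled points, 3 classes.
# true_labels are hidden from the selection procedure; they exist only to simulate
# each model's noisy predictions and the human annotator's answers.
n_models, n_points, n_classes = 3, 200, 3
true_labels = rng.integers(0, n_classes, n_points)
accuracies = [0.55, 0.70, 0.85]  # assumed ground truth: model 2 is actually best

def simulate_predictions(acc):
    preds = true_labels.copy()
    wrong = rng.random(n_points) > acc
    preds[wrong] = rng.integers(0, n_classes, wrong.sum())
    return preds

preds = np.stack([simulate_predictions(a) for a in accuracies])  # (models, points)

# Consensus prior: the majority vote across models is a decent guess at each label.
consensus = np.array([np.bincount(preds[:, i], minlength=n_classes).argmax()
                      for i in range(n_points)])

# Maintain a Dirichlet-smoothed confusion matrix per model, seeded by the consensus.
conf = np.ones((n_models, n_classes, n_classes))  # conf[m, true, pred]
for m in range(n_models):
    for i in range(n_points):
        conf[m, consensus[i], preds[m, i]] += 0.1  # weak pseudo-count from consensus

labeled = set()
for _ in range(25):  # annotate only 25 points, echoing the headline result
    # Query the unlabeled point where the models disagree most (simple heuristic).
    disagree = np.array([len(set(preds[:, i])) if i not in labeled else -1
                         for i in range(n_points)])
    i = int(disagree.argmax())
    labeled.add(i)
    y = true_labels[i]                  # ask the human oracle for the true label
    for m in range(n_models):
        conf[m, y, preds[m, i]] += 1.0  # strong update from a real annotation

# Estimated accuracy of each model: mean diagonal mass of its row-normalized
# confusion matrix (i.e., average per-class recall).
est_acc = np.array([np.trace(conf[m] / conf[m].sum(axis=1, keepdims=True)) / n_classes
                    for m in range(n_models)])
best = int(est_acc.argmax())
print("estimated best model:", best)
```

Even with this crude disagreement heuristic, the consensus prior plus a handful of labels is typically enough to separate a strong model from a weak one, which is the intuition the full method formalizes probabilistically.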
There are also a lot of exciting possibilities for building on top of our work. We think there may be even better ways of constructing informative priors for model selection based on domain expertise — for instance, if it is already known that one model performs exceptionally well on some subset of classes or poorly on others. There are also opportunities to extend the framework to support more complex machine-learning tasks and more sophisticated probabilistic models of performance. We hope our work can provide inspiration and a starting point for other researchers to keep pushing the state of the art.
Q: You work in the Beerylab, led by Sara Beery, where researchers are combining the pattern-recognition capabilities of machine-learning algorithms with computer vision technology to monitor wildlife. What are some other ways your team is tracking and analyzing the natural world, beyond CODA?
A: The lab is a really exciting place to work, and new projects are emerging all the time. We have ongoing projects monitoring coral reefs with drones, re-identifying individual elephants over time, and fusing multi-modal Earth observation data from satellites and in-situ cameras, just to name a few. Broadly, we look at emerging technologies for biodiversity monitoring and try to understand where the data analysis bottlenecks are, and develop new computer vision and machine-learning approaches that address those problems in a widely applicable way. It’s an exciting way of approaching problems that sort of targets the “meta-questions” underlying particular data challenges we face.
The computer vision algorithms I’ve worked on that count migrating salmon in underwater sonar video are examples of that work. We often deal with shifting data distributions, even as we try to construct the most diverse training datasets we can. We always encounter something new when we deploy a new camera, and this tends to degrade the performance of computer vision algorithms. This is one instance of a general problem in machine learning called domain adaptation, but when we tried to apply existing domain adaptation algorithms to our fisheries data we realized there were serious limitations in how existing algorithms were trained and evaluated. We were able to develop a new domain adaptation framework, published earlier this year in Transactions on Machine Learning Research, that addressed these limitations and led to advancements in fish counting, and even self-driving and spacecraft analysis.
One line of work that I’m particularly excited about is understanding how to better develop and analyze the performance of predictive ML algorithms in the context of what they are actually used for. Usually, the outputs from some computer vision algorithm — say, bounding boxes around animals in images — are not actually the thing that people care about, but rather a means to an end to answer a larger problem — say, what species live here, and how is that changing over time? We have been working on methods to analyze predictive performance in this context and reconsider the ways that we input human expertise into ML systems with this in mind. CODA was one example of this, where we showed that we could actually consider the ML models themselves as fixed and build a statistical framework to understand their performance very efficiently. We have been working recently on similar integrated analyses combining ML predictions with multi-stage prediction pipelines, as well as ecological statistical models.
The natural world is changing at unprecedented rates and scales, and being able to quickly move from scientific hypotheses or management questions to data-driven answers is more important than ever for protecting ecosystems and the communities that depend on them. Advancements in AI can play an important role, but we need to think critically about the ways that we design, train, and evaluate algorithms in the context of these very real challenges.
Turning on an immune pathway in tumors could lead to their destruction
By stimulating cancer cells to produce a molecule that activates a signaling pathway in nearby immune cells, MIT researchers have found a way to force tumors to trigger their own destruction.
Activating this signaling pathway, known as the cGAS-STING pathway, worked even better when combined with existing immunotherapy drugs known as checkpoint blockade inhibitors in a study of mice. That dual treatment successfully controlled tumor growth.
The researchers turned on the cGAS-STING pathway in immune cells using messenger RNA delivered to cancer cells. This approach may avoid the side effects of delivering large doses of a STING activator, and takes advantage of a natural process in the body. This could make it easier to develop a treatment for use in patients, the researchers say.
“Our approach harnesses the tumor’s own machinery to produce immune-stimulating molecules, creating a powerful antitumor response,” says Natalie Artzi, a principal research scientist at MIT’s Institute for Medical Engineering and Science, an associate professor of medicine at Harvard Medical School, a core faculty member at the Wyss Institute for Biologically Inspired Engineering at Harvard, and the senior author of the study.
“By increasing cGAS levels inside cancer cells, we can enhance delivery efficiency — compared to targeting the more scarce immune cells in the tumor microenvironment — and stimulate the natural production of cGAMP, which then activates immune cells locally,” she says. “This strategy not only strengthens antitumor immunity but also reduces the toxicity associated with direct STING agonist delivery, bringing us closer to safer and more effective cancer immunotherapies.”
Alexander Cryer, a visiting scholar at IMES, is the lead author of the paper, which appears this week in the Proceedings of the National Academy of Sciences.
Immune activation
STING (short for stimulator of interferon genes) is a protein that helps to trigger immune responses. When STING is activated, it turns on a pathway that initiates production of type I interferons, which are cytokines that stimulate immune cells.
Many research groups, including Artzi’s, have explored the possibility of artificially stimulating this pathway with molecules called STING agonists, which could help immune cells to recognize and attack tumor cells. This approach has worked well in animal models, but it has had limited success in clinical trials, in part because the required doses can cause harmful side effects.
While working on a project exploring new ways to deliver STING agonists, Cryer became intrigued when he learned from previous work that cancer cells can produce a STING activator known as cGAMP. The cells then secrete cGAMP, which can activate nearby immune cells.
“Part of my philosophy of science is that I really enjoy using endogenous processes that the body already has, and trying to utilize them in a slightly different context. Evolution has done all the hard work. We just need to figure out how to push it in a different direction,” Cryer says. “Once I saw that cancer cells produce this molecule, I thought: Maybe there’s a way to take this process and supercharge it.”
Within cells, the production of cGAMP is catalyzed by an enzyme called cGAS. To get tumor cells to activate STING in immune cells, the researchers devised a way to deliver messenger RNA that encodes cGAS. When this enzyme detects double-stranded DNA in the cell body, which can be a sign of either infection or cancer-induced damage, it begins producing cGAMP.
“It just so happens that cancer cells, because they’re dividing so fast and not particularly accurately, tend to have more double-stranded DNA fragments than healthy cells,” Cryer says.
The tumor cells then release cGAMP into the tumor microenvironment, where it can be taken up by neighboring immune cells and activate their STING pathway.
Targeting tumors
Using a mouse model of melanoma, the researchers evaluated their new strategy’s potential to kill cancer cells. They injected mRNA encoding cGAS, encapsulated in lipid nanoparticles, into tumors. One group of mice received this treatment alone, while another received a checkpoint blockade inhibitor, and a third received both treatments.
Given on their own, the cGAS mRNA and the checkpoint inhibitor each significantly slowed tumor growth. However, the best results were seen in the mice that received both treatments. In that group, tumors were completely eradicated in 30 percent of the mice, while none of the tumors were fully eliminated in the groups that received just one treatment.
An analysis of the immune response showed that the mRNA treatment stimulated production of interferon as well as many other immune signaling molecules. A variety of immune cells, including macrophages and dendritic cells, were activated. These cells help to stimulate T cells, which can then destroy cancer cells.
The researchers were able to elicit these responses with just a small dose of cancer-cell-produced cGAMP, which could help to overcome one of the potential obstacles to using cGAMP on its own as therapy: Large doses are required to stimulate an immune response, and these doses can lead to widespread inflammation, tissue damage, and autoimmune reactions. When injected on its own, cGAMP tends to spread through the body and is rapidly cleared from the tumor, while in this study, the mRNA nanoparticles and cGAMP remained at the tumor site.
“The side effects of this class of molecule can be pretty severe, and one of the potential advantages of our approach is that you’re able to potentially subvert some toxicity that you might see if you’re giving the free molecules,” Cryer says.
The researchers now hope to adapt the delivery system so that it could be given as a systemic injection, rather than injected directly into the tumor. They also plan to test the mRNA therapy in combination with DNA-damaging chemotherapy drugs or radiotherapy, which could make the treatment even more effective by creating additional double-stranded DNA to drive cGAMP synthesis.
A faster problem-solving tool that guarantees feasibility
Managing a power grid is like trying to solve an enormous puzzle.
Grid operators must ensure the proper amount of power is flowing to the right areas at the exact time when it is needed, and they must do this in a way that minimizes costs without overloading physical infrastructure. What’s more, they must solve this complicated problem repeatedly, as rapidly as possible, to meet constantly changing demand.
To help crack this consistent conundrum, MIT researchers developed a problem-solving tool that finds the optimal solution much faster than traditional approaches while ensuring the solution doesn’t violate any of the system’s constraints. In a power grid, constraints include limits such as generator and transmission line capacities.
This new tool incorporates a feasibility-seeking step into a powerful machine-learning model trained to solve the problem. The feasibility-seeking step uses the model’s prediction as a starting point, iteratively refining the solution until it finds the best achievable answer.
The MIT system can solve complex problems several times faster than traditional solvers, while providing strong guarantees of success. For some extremely complex problems, it could find better solutions than tried-and-true tools. The technique also outperformed pure machine-learning approaches, which are fast but can’t always find feasible solutions.
In addition to helping schedule power production in an electric grid, this new tool could be applied to many types of complicated problems, such as designing new products, managing investment portfolios, or planning production to meet consumer demand.
“Solving these especially thorny problems well requires us to combine tools from machine learning, optimization, and electrical engineering to develop methods that hit the right tradeoffs in terms of providing value to the domain, while also meeting its requirements. You have to look at the needs of the application and design methods in a way that actually fulfills those needs,” says Priya Donti, the Silverman Family Career Development Professor in the Department of Electrical Engineering and Computer Science (EECS) and a principal investigator at the Laboratory for Information and Decision Systems (LIDS).
Donti, senior author of an open-access paper on this new tool, called FSNet, is joined by lead author Hoang Nguyen, an EECS graduate student. The paper will be presented at the Conference on Neural Information Processing Systems.
Combining approaches
Ensuring optimal power flow in an electric grid is an extremely hard problem that is becoming more difficult for operators to solve quickly.
“As we try to integrate more renewables into the grid, operators must deal with the fact that the amount of power generation is going to vary moment to moment. At the same time, there are many more distributed devices to coordinate,” Donti explains.
Grid operators often rely on traditional solvers, which provide mathematical guarantees that the optimal solution doesn’t violate any problem constraints. But these tools can take hours or even days to arrive at that solution if the problem is especially convoluted.
On the other hand, deep-learning models can solve even very hard problems in a fraction of the time, but the solution might ignore some important constraints. For a power grid operator, this could result in issues like unsafe voltage levels or even grid outages.
“Machine-learning models struggle to satisfy all the constraints due to the many errors that occur during the training process,” Nguyen explains.
For FSNet, the researchers combined the best of both approaches into a two-step problem-solving framework.
Focusing on feasibility
In the first step, a neural network predicts a solution to the optimization problem. Very loosely inspired by neurons in the human brain, neural networks are deep learning models that excel at recognizing patterns in data.
Next, a traditional solver that has been incorporated into FSNet performs a feasibility-seeking step. This optimization algorithm iteratively refines the initial prediction while ensuring the solution does not violate any constraints.
Because the feasibility-seeking step is based on a mathematical model of the problem, it can guarantee the solution is deployable.
“This step is very important. In FSNet, we can have the rigorous guarantees that we need in practice,” Nguyen says.
The researchers designed FSNet to address both main types of constraints (equality and inequality) at the same time. This makes it easier to use than other approaches that may require customizing the neural network or solving for each type of constraint separately.
“Here, you can just plug and play with different optimization solvers,” Donti says.
By thinking differently about how the neural network solves complex optimization problems, the researchers were able to unlock a new technique that works better, she adds.
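The two-step structure described above can be illustrated on a toy constrained problem. This is a minimal sketch under loud assumptions: the `predict` function is a random stand-in for the trained neural network, and the feasibility-seeking step is implemented here as simple alternating projections onto the equality constraint (A x = b) and the inequality constraint (x ≥ 0), rather than FSNet's actual procedure.

```python
import numpy as np

# Toy problem (hypothetical, standing in for something like optimal power flow):
# find x >= 0 satisfying A @ x = b, starting from a fast but infeasible "prediction".
A = np.array([[1.0, 1.0, 1.0]])
b = np.array([2.0])

def predict(seed=0):
    # Step 1: stand-in for the neural network's fast initial prediction.
    # (FSNet uses a trained deep model; here we just draw a rough guess.)
    return np.random.default_rng(seed).normal(size=3)

def seek_feasibility(x, iters=500):
    # Step 2: alternating projections onto the equality constraint's affine
    # subspace and the inequality set x >= 0. Both sets are convex and their
    # intersection is nonempty, so the iterates converge to a feasible point.
    pinv = A.T @ np.linalg.inv(A @ A.T)  # projector component for A @ x = b
    for _ in range(iters):
        x = x - pinv @ (A @ x - b)  # restore A @ x = b exactly
        x = np.clip(x, 0.0, None)   # enforce x >= 0
    return x

x0 = predict()
x = seek_feasibility(x0)
print("equality residual:", abs(A @ x - b).max())
print("min entry:", x.min())
```

The design point the sketch captures is the division of labor: the learned model supplies a near-optimal starting guess almost instantly, and a mathematically grounded refinement loop supplies the feasibility guarantee that a neural network alone cannot.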
They compared FSNet to traditional solvers and pure machine-learning approaches on a range of challenging problems, including power grid optimization. Their system cut solving times by orders of magnitude compared to the baseline approaches, while respecting all problem constraints.
FSNet also found better solutions to some of the trickiest problems.
“While this was surprising to us, it does make sense. Our neural network can figure out by itself some additional structure in the data that the original optimization solver was not designed to exploit,” Donti explains.
In the future, the researchers want to make FSNet less memory-intensive, incorporate more efficient optimization algorithms, and scale it up to tackle more realistic problems.
“Finding solutions to challenging optimization problems that are feasible is paramount to finding ones that are close to optimal. Especially for physical systems like power grids, close to optimal means nothing without feasibility. This work provides an important step toward ensuring that deep-learning models can produce predictions that satisfy constraints, with explicit guarantees on constraint enforcement,” says Kyri Baker, an associate professor at the University of Colorado Boulder, who was not involved with this work.
“A persistent challenge for machine-learning-based optimization is feasibility. This work elegantly couples end-to-end learning with an unrolled feasibility-seeking procedure that minimizes equality and inequality violations. The results are very promising, and I look forward to seeing where this research will head,” adds Ferdinando Fioretto, an assistant professor at the University of Virginia, who was not involved with this work.
Study: Good management of aid projects reduces local violence
Good management of aid projects in developing countries reduces violence in those areas — but poorly managed projects increase the chances of local violence, according to a new study by an MIT economist.
The research, examining World Bank projects in Africa, illuminates a major question surrounding international aid. Observers have long wondered if aid projects, by bringing new resources into developing countries, lead to conflict over those goods as an unintended consequence. Previously, some scholars have identified an increase in violence attached to aid, while others have found a decrease.
The new study shows those prior results are not necessarily wrong, but not entirely right, either. Instead, aid oversight matters. World Bank programs earning the highest evaluation scores for their implementation reduce the likelihood of conflict by up to 12 percent, compared to the worst-managed programs.
“I find that the management quality of these projects has a really strong effect on whether that project leads to conflict or not,” says MIT economist Jacob Moscona, who conducted the research. “Well-managed aid projects can actually reduce conflict, and poorly managed projects increase conflict, relative to no project. So, the way aid programs are organized is very important.”
The findings also suggest aid projects can work well almost anywhere. At times, observers have suggested the political conditions in some countries prevent aid from being effective. But the new study finds otherwise.
“There are ways these programs can have their positive effects without the negative consequences,” Moscona says. “And it’s not the result of what politics looks like on the receiving end; it’s about the organization itself.”
Moscona’s paper detailing the study, “The Management of Aid and Conflict in Africa,” is published in the November issue of the American Economic Journal: Economic Policy. Moscona, the paper’s sole author, is the 3M Career Development Assistant Professor in MIT’s Department of Economics.
Decisions on the ground
To conduct the study, Moscona examined World Bank data from the 1997-2014 time period, using the information compiled by AidData, a nonprofit group that also studies World Bank programs. Importantly, the World Bank conducts extensive evaluations of its projects and includes the identities of project leaders as part of those reviews.
“There are a lot of decisions on the ground made by managers of aid, and aid organizations themselves, that can have a huge impact on whether or not aid leads to conflict, and how aid resources are used and whether they are misappropriated or captured and get into the wrong hands,” Moscona says.
For instance, diligent daily checks on food distribution programs can and have substantially reduced the amount of food that is stolen or “leaks” out of a program. Other projects have devised innovative ways of tagging small devices to ensure those objects are used by program participants, reducing appropriation by others.
Moscona combined the World Bank data with statistics from the Armed Conflict Location and Event Data Project (ACLED), a nonprofit that monitors political violence. That enabled him to evaluate how the quality of aid project implementation — and even the quality of the project leadership — influenced local outcomes.
For instance, by looking at the ratings of World Bank project leaders, Moscona found that shifting from a project leader at the 25th percentile to one at the 75th percentile, in terms of how frequently their projects are linked with conflict, increases the chances of local conflict by 15 percent.
“The magnitudes are pretty large, in terms of the probability that a conflict starts in the vicinity of a project,” Moscona observes.
Moscona’s research identified several other aspects of the interaction between aid and conflict that hold up over the region and time period. The establishment of aid programs does not seem to lead to long-term strategic activity by non-government forces, such as land acquisition or the establishment of rebel bases. The effects are also larger in areas that have had recent political violence. And armed conflict is greater when the resources at stake can be expropriated — such as food or medical devices.
“It matters most if you have more divertable resources, like food and medical devices that can be captured, as opposed to infrastructure projects,” Moscona says.
Reconciling the previous results
Moscona also found a clear trend in the data about the timing of violence in relation to aid. Government and other armed groups do not engage in much armed conflict when aid programs are being established; it is the appearance of desired goods themselves that sets off violent activity.
“You don’t see much conflict when the projects are getting off the ground,” Moscona says. “You really see the conflict start when the money is coming in or when the resources start to flow, which is consistent with the relevant mechanism being about aid resources and their misappropriation, rather than groups trying to delegitimize a project.”
All told, Moscona’s study finds a logical mechanism explaining the varying results other scholars have found with regard to aid and conflict. If aid programs are not equally well-administered, it stands to reason that their outcomes will not be identical, either.
“There wasn’t much work trying to make those two sets of results speak to each other,” says Moscona. “I see it less as overturning existing results than providing a way to reconcile different results and experiences.”
Moscona’s findings may also speak to the value of aid in general — and provide actionable ideas for institutions such as the World Bank. If better management makes such a difference, then the potential effectiveness of aid programs may increase.
“One goal is to change the conversation about aid,” Moscona says. The data, he suggests, shows that the public discourse about aid can be “less defeatist about the potential negative consequences of aid, and the idea that it’s out of the control of the people who administer it.”
New nanoparticles stimulate the immune system to attack ovarian tumors
Cancer immunotherapy, which uses drugs that stimulate the body’s immune cells to attack tumors, is a promising approach to treating many types of cancer. However, it doesn’t work well for some tumors, including ovarian cancer.
To elicit a better response, MIT researchers have designed new nanoparticles that can deliver an immune-stimulating molecule called IL-12 directly to ovarian tumors. When given along with immunotherapy drugs called checkpoint inhibitors, IL-12 helps the immune system launch an attack on cancer cells.
Studying a mouse model of ovarian cancer, the researchers showed that this combination treatment could eliminate metastatic tumors in more than 80 percent of the mice. When the mice were later injected with more cancer cells, to simulate tumor recurrence, their immune cells remembered the tumor proteins and cleared them again.
“What’s really exciting is that we’re able to deliver IL-12 directly in the tumor space. And because of the way that this nanomaterial is designed to allow IL-12 to be borne on the surfaces of the cancer cells, we have essentially tricked the cancer into stimulating immune cells to arm themselves against that cancer,” says Paula Hammond, an MIT Institute Professor, MIT’s vice provost for faculty, and a member of the Koch Institute for Integrative Cancer Research.
Hammond and Darrell Irvine, a professor of immunology and microbiology at the Scripps Research Institute, are the senior authors of the new study, which appears today in Nature Materials. Ivan Pires PhD ’24, now a postdoc at Brigham and Women’s Hospital, is the lead author of the paper.
“Hitting the gas”
Most tumors express and secrete proteins that suppress immune cells, creating a microenvironment in which the immune response is weakened. T cells are among the main immune cells that can kill tumor cells, but they get sidelined or blocked by the cancer cells and are unable to attack the tumor. Checkpoint inhibitors are an FDA-approved treatment designed to take those brakes off the immune system by blocking the immune-suppressing proteins so that T cells can mount an attack on tumor cells.
For some cancers, including some types of melanoma and lung cancer, removing the brakes is enough to provoke the immune system into attacking cancer cells. However, ovarian tumors have many ways to suppress the immune system, so checkpoint inhibitors alone usually aren’t enough to launch an immune response.
“The problem with ovarian cancer is no one is hitting the gas. So, even if you take off the brakes, nothing happens,” Pires says.
IL-12 offers one way to “hit the gas,” by supercharging T cells and other immune cells. However, the large doses of IL-12 required to get a strong response can produce side effects due to generalized inflammation, such as flu-like symptoms (fever, fatigue, GI issues, and headaches), as well as more severe complications such as liver toxicity and cytokine release syndrome, which can even be fatal.
In a 2022 study, Hammond’s lab developed nanoparticles that could deliver IL-12 directly to tumor cells, which allows larger doses to be given while avoiding the side effects seen when the drug is injected. However, these particles tended to release their payload all at once after reaching the tumor, which hindered their ability to generate a strong T cell response.
In the new study, the researchers modified the particles so that IL-12 would be released more gradually, over about a week. They achieved this by using a different chemical linker to attach IL-12 to the particles.
“With our current technology, we optimize that chemistry such that there’s a more controlled release rate, and that allowed us to have better efficacy,” Pires says.
The particles consist of tiny, fatty droplets known as liposomes, with IL-12 molecules tethered to the surface. For this study, the researchers used a linker called maleimide to attach IL-12 to the liposomes. This linker is more stable than the one they used in the previous generation of particles, which was susceptible to being cleaved by proteins in the body, leading to premature release.
To make sure that the particles get to the right place, the researchers coat them with a layer of a polymer called poly-L-glutamate (PLE), which helps them directly target ovarian tumor cells. Once they reach the tumors, the particles bind to the cancer cell surfaces, where they gradually release their payload and activate nearby T cells.
Disappearing tumors
In tests in mice, the researchers showed that the IL-12-carrying particles could effectively recruit and stimulate T cells that attack tumors. The cancer models used for these studies are metastatic, so tumors developed not only in the ovaries but throughout the peritoneal cavity, which includes the surface of the intestines, liver, pancreas, and other organs. Tumors could even be seen in the lung tissues.
First, the researchers tested the IL-12 nanoparticles on their own, and they showed that this treatment eliminated tumors in about 30 percent of the mice. They also found a significant increase in the number of T cells that accumulated in the tumor environment.
Then, the researchers gave the particles to mice along with checkpoint inhibitors. More than 80 percent of the mice that received this dual treatment were cured. This happened even when the researchers used models of ovarian cancer that are highly resistant to immunotherapy or to the chemotherapy drugs usually used for ovarian cancer.
Patients with ovarian cancer are usually treated with surgery followed by chemotherapy. While this may be initially effective, cancer cells that remain after surgery are often able to grow into new tumors. Establishing an immune memory of the tumor proteins could help to prevent that kind of recurrence.
In this study, when the researchers injected tumor cells into the cured mice five months after the initial treatment, the immune system was still able to recognize and kill the cells.
“We don’t see the cancer cells being able to develop again in that same mouse, meaning that we do have an immune memory developed in those animals,” Pires says.
The researchers are now working with MIT’s Deshpande Center for Technological Innovation to spin out a company that they hope could further develop the nanoparticle technology. In a study published earlier this year, Hammond’s lab reported a new manufacturing approach that should enable large-scale production of this type of nanoparticle.
The research was funded by the National Institutes of Health, the Marble Center for Nanomedicine, the Deshpande Center for Technological Innovation, the Ragon Institute of MGH, MIT, and Harvard, and the Koch Institute Support (core) Grant from the National Cancer Institute.
Using classic physical phenomena to solve new problems
Quenching, the rapid cooling of a hot surface by a liquid, is a remarkably effective heat transfer mechanism. In extreme environments, like nuclear power plants and aboard spaceships, a lot rides on the efficiency and speed of the process.
It’s why Marco Graffiedi, a fifth-year doctoral student at MIT’s Department of Nuclear Science and Engineering (NSE), is researching the phenomenon to help develop the next generation of spaceships and nuclear plants.
Growing up in small-town Italy
Graffiedi’s parents encouraged a sense of exploration, giving him responsibilities for family projects even at a young age. When they restored a countryside cabin in a small town near Palazzolo, in the hills between Florence and Bologna, the then-14-year-old Marco got a project of his own. He had to ensure the animals on the property had enough accessible water without overfilling the storage tank. Marco designed and built a passive hydraulic system that effectively solved the problem and is still functional today.
His proclivity for science continued in high school in Lugo, where Graffiedi enjoyed recreating classical physics phenomena through experiments. Incidentally, the high school is named after Gregorio Ricci-Curbastro, a mathematician who laid the foundation for the theory of relativity — history that is not lost on Graffiedi. After high school, Graffiedi attended the International Physics Olympiad in Bangkok, a formative event that cemented his love for physics.
A gradual shift toward engineering
A passion for physics and basic sciences notwithstanding, Graffiedi wondered if he’d be a better fit for engineering, where he could use the study of physics, chemistry, and math as tools to build something.
Following that path, he completed a bachelor’s and master’s in mechanical engineering — because an undergraduate degree in Italy takes only three years, pretty much everyone does a master’s, Graffiedi laughs — at the Università di Pisa and the Scuola Superiore Sant’Anna (School of Engineering). The Sant’Anna is a highly selective institution that most students attend to complement their university studies.
Graffiedi’s university studies gradually moved him toward the field of environmental engineering. He researched concentrated solar power, studying the associated thermal cycle and working to improve collection efficiency in order to lower its cost. While the project was not very successful, it reinforced Graffiedi’s impression of the necessity of alternative energies. Still firmly planted in energy studies, Graffiedi worked on fracture mechanics for his master’s thesis, in collaboration with (what was then) GE Oil and Gas, researching how to improve the effectiveness of centrifugal compressors. And a summer internship at Fermilab had Graffiedi working on the thermal characterization of superconductive coatings.
With his studies behind him, Graffiedi was still unsure about his professional path. Through the Edison Program at GE Oil and Gas, where he worked shortly after graduation, Graffiedi got to test drive many fields — from mechanical and thermal engineering to exploring gas turbines and combustion. He eventually became a test engineer, coordinating a team of engineers to test a new upgrade to the company’s gas turbines. “I set up the test bench, understanding how to instrument the machine, collect data, and run the test,” Graffiedi remembers. “There was a lot you need to think about, from a little turbine blade with sensors on it to the location of safety exits on the test bench.”
The move toward nuclear engineering
As fun as the test engineering job was, Graffiedi started to crave more technical knowledge and wanted to pivot to science. As part of his exploration, he came across nuclear energy and, understanding it to be the future, decided to lean on his engineering background to apply to MIT NSE.
He found a fit in Professor Matteo Bucci’s group and decided to explore boiling and quenching. The move from science to engineering, and back to science, was now complete.
NASA, the primary sponsor of the research, is interested in preventing boiling of cryogenic fuels, because boiling leads to loss of fuel and the resulting vapor will need to be vented to avoid overpressurizing a fuel tank.
Graffiedi’s primary focus is on quenching, which will play an important role in refueling in space — and in the cooling of nuclear cores. When a cryogen is used to cool down a hot surface, the liquid first forms a thin vapor film above the surface, a phenomenon known as the Leidenfrost effect; the film acts as an insulator and prevents further cooling. To facilitate rapid cooling, it’s important to accelerate the collapse of the vapor film. Graffiedi is exploring the mechanics of the quenching process on a microscopic level, studies that are important for land and space applications.
Boiling can be used for yet another modern application: to improve the efficiency of cooling systems for data centers. The growth of data centers and electric transportation systems demands effective heat transfer mechanisms to avoid overheating. Immersion cooling using dielectric fluids — fluids that do not conduct electricity — is one way to do so. These fluids remove heat from a surface through boiling. For effective boiling, the fluid must overcome the Leidenfrost effect and break the vapor film that forms. The fluid must also have a high critical heat flux (CHF), which is the maximum heat flux at which boiling can effectively transfer heat from a heated surface to a liquid. Because dielectric fluids have lower CHF than water, Graffiedi is exploring solutions to enhance these limits. In particular, he is investigating how high electric fields can be used to enhance CHF and even to use boiling as a way to cool electronic components in the absence of gravity. He published this research in Applied Thermal Engineering in June.
Beyond boiling
Graffiedi’s love of science and engineering shows in his commitment to teaching as well. He has been a teaching assistant for four classes at NSE, winning awards for his contributions. His many additional achievements include winning the Manson Benedict Award presented to an NSE graduate student for excellence in academic performance and professional promise in nuclear science and engineering, and a service award for his role as past president of the MIT Division of the American Nuclear Society.
Boston has a fervent Italian community, Graffiedi says, and he enjoys being a part of it. Fittingly, the MIT Italian club is called MITaly. When he’s not at work or otherwise engaged, Graffiedi loves Latin dancing, something he makes time for at least a couple of times a week. While he has his favorite Italian restaurants in the city, Graffiedi is grateful for another set of skills his parents gave him when he was just 11: making perfect pizza and pasta.
Q&A: How MITHIC is fostering a culture of collaboration at MIT
The MIT Human Insight Collaborative (MITHIC) is a presidential initiative with a mission of elevating human-centered research and teaching and connecting scholars in the humanities, arts, and social sciences with colleagues across the Institute.
Since its launch in 2024, MITHIC has funded 31 projects led by teaching and research staff representing 22 different units across MIT. The collaborative is holding its annual event on Nov. 17.
In this Q&A, Keeril Makan, associate dean in the MIT School of Humanities, Arts, and Social Sciences, and Maria Yang, interim dean of the MIT School of Engineering, discuss the value of MITHIC and the ways it’s accelerating new research and collaborations across the Institute. Makan is the Michael (1949) and Sonja Koerner Music Composition Professor and faculty lead for MITHIC. Yang is the William E. Leonhard (1940) Professor in the Department of Mechanical Engineering and co-chair of MITHIC’s SHASS+ Connectivity Fund.
Q: You each come from different areas of MIT. Looking at MITHIC from your respective roles, why is this initiative so important for the Institute?
Makan: The world is counting on MIT to develop solutions to some of its greatest challenges, such as artificial intelligence, poverty, and health care. These are all issues that arise from human activity, a thread that runs through much of the research we’re focused on in SHASS. Through MITHIC, we’re embedding human-centered thinking and connecting the Institute’s top scholars in the work needed to find innovative ways of addressing these problems.
Yang: MITHIC is very important to MIT, and I think of this from the point of view of an engineer, which is my background. Engineers often think about the technology first, which is absolutely important. But for that technology to have real impact, you have to think about the human insights that make that technology relevant and able to be deployed in the world. So really having a deep understanding of that is core to MITHIC and MIT’s engineering enterprise.
Q: How does MITHIC fit into MIT’s broader mission?
Makan: MITHIC highlights how the work we do in the School of Humanities, Arts, and Social Sciences is aligned with MIT’s mission, which is to address the world’s great problems. But MITHIC has also connected all of MIT in this endeavor. We have faculty from all five schools and the MIT Schwarzman College of Computing involved in evaluating MITHIC project proposals. Each of them represents a different point of view and engages with these projects that originate in SHASS but actually cut across many different fields. Seeing their perspectives on these projects has been inspiring.
Yang: I think of MIT’s main mission as using technology and many other things to make impact in the world, especially social impact. The kind of interdisciplinary work that MITHIC catalyzes really enables all of that work to happen in a new and profound way. The SHASS+ Connectivity Fund, which connects SHASS faculty and researchers with colleagues outside of SHASS, has resulted in collaborations that were not possible before. One example is a project being led by professors Mark Rau, who has a shared appointment between Music and Electrical Engineering and Computer Science, and Antoine Allanore in Materials Science and Engineering. The two of them are looking at how they can take ancient unplayable instruments and recreate them using new technologies for scanning and fabrication. They’re also working with the Museum of Fine Arts, so it’s a whole new type of collaboration that exemplifies MITHIC.
Q: What has been the community response to MITHIC in its first year?
Makan: It’s been very strong. We found a lot of pent-up demand, both from faculty in SHASS and faculty in the sciences and engineering. Either there were preexisting collaborations that they could take to the next level through MITHIC, or there was the opportunity to meet someone new and talk to someone about a problem and how they could collaborate. MITHIC also hosted a series of Meeting of the Minds events, which are a chance to have faculty and members of the community get to know one another on a certain topic. This community building has been exciting, and led to an overwhelming number of applications last year. There has also been significant student involvement, with several projects bringing on UROPs [Undergraduate Research Opportunities Program projects] and PhD students to help with their research. MITHIC gives a real morale boost and a lot of hope that there is a focus upon building collaborations at MIT and on not forgetting that the world needs humanists, artists, and social scientists.
Yang: One faculty member told me the SHASS+ Connectivity Fund has given them hope for the kind of research that we do because of the cross collaboration. There’s a lot of excitement and enthusiasm for this type of work.
Q: The SHASS+ Connectivity Fund is designed to support interdisciplinary collaborations at MIT. What’s an example of a SHASS+ project that’s worked particularly well?
Makan: One exciting collaboration is between professors Jörn Dunkel in Mathematics and In Song Kim in Political Science. In Song is someone who has done a lot of work on studying lobbying and its effect upon the legislative process. He met Jörn, I believe, at one of MIT’s daycare centers, so it’s a relationship that started in a very informal fashion. But they found they actually had ways of looking at math and quantitative analysis that could complement one another. Their work is creating a new subfield and taking the research in a direction that would not be possible without this funding.
Yang: One of the SHASS+ projects that I think is really interesting is between professors Marzyeh Ghassemi in Electrical Engineering and Computer Science and Esther Duflo in Economics. The two of them are looking at how they can use AI to help health diagnostics in low-resource global settings, where there isn’t a lot of equipment or technology to do basic health diagnostics. They can use handheld, low-cost equipment to do things like predict if someone is going to have a heart attack. And they are not only developing the diagnostic tool, but evaluating the fairness of the algorithm. The project is an excellent example of using a MITHIC grant to make impact in the world.
Q: What has been MITHIC’s impact in terms of elevating research and teaching within SHASS?
Makan: In addition to the SHASS+ Connectivity Fund, there are two other funds that help support both SHASS research and educational initiatives: the Humanities Cultivation Fund and the SHASS Education Innovation Fund. Both provide funding in excess of what we normally see within SHASS. This both recognizes the importance of our faculty’s work and gives them the means to take their ideas much further.
One of the projects that MITHIC is helping to support is the Compass Initiative. Compass was started by Lily Tsai, one of our professors in Political Science, along with other faculty in SHASS to create essentially an introductory class to the different methodologies within SHASS. So we have philosophers, music historians, etc., all teaching together, all addressing how we interact with one another, what it means to be a good citizen, what it means to be socially aware and civically engaged. This is a class that is very timely for MIT and for the world. And we were able to give it robust funding so they can take this and develop it even further.
MITHIC has also been able to take local initiatives in SHASS and elevate them. A group of anthropologists, historians, and urban planners has been working together on a project called the Living Climate Futures Lab. This is a group interested in working with frontline communities around climate change and sustainability. They work to build trust with local communities and start to work with them on thinking about how climate change affects them and what solutions might look like. This is a powerful and uniquely SHASS approach to climate change, and through MITHIC, we’re able to take this seed effort, robustly fund it, and help connect it to the larger climate project at MIT.
Q: What excites you most about the future of MITHIC at MIT?
Yang: We have a lot of MIT efforts that are trying to break people out of their disciplinary silos, and MITHIC really is a big push on that front. It’s a presidential initiative, so it’s high on the priority list of what people are thinking about. We’ve already done our first round, and the second round is going to be even more exciting, so it’s only going to gain in force. In SHASS+, we’re actually having two calls for proposals this academic year instead of just one. I feel like there’s still so much possibility to bring together interdisciplinary research across the Institute.
Makan: I’m excited about how MITHIC is changing the culture of MIT. MIT thinks of itself in terms of engineering, science, and technology, and this is an opportunity to think about those STEM fields within the context of human activity and humanistic thinking. Having this shift at MIT in how we approach solving problems bodes well for the world, and it places SHASS as this connective tissue at the Institute. It connects the schools and it can also connect the other initiatives, such as manufacturing and health and life sciences. There’s an opportunity for MITHIC to seed all these other initiatives with the work that goes on in SHASS.
