Feed aggregator

Enabling small language models to solve complex reasoning tasks

MIT Latest News - Fri, 12/12/2025 - 3:30pm

As language models (LMs) improve at tasks like image generation, trivia questions, and simple math, you might think that human-like reasoning is around the corner. In reality, they still trail us by a wide margin on complex tasks. Try playing Sudoku with one, for instance, where you fill in numbers one through nine in such a way that each appears only once across the columns, rows, and sections of a nine-by-nine grid. Your AI opponent will either fail to fill in boxes on its own or do so inefficiently, although it can verify if you’ve filled yours out correctly.

Whether an LM is trying to solve advanced puzzles, design molecules, or write math proofs, the system struggles to answer open-ended requests that have strict rules to follow. The model is better at telling users how to approach these challenges than attempting them itself. Moreover, hands-on problem-solving requires LMs to consider a wide range of options while following constraints. Small LMs can’t do this reliably on their own; large language models (LLMs) sometimes can, particularly if they’re optimized for reasoning tasks, but they take a while to respond, and they use a lot of computing power.

This predicament led researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) to develop a collaborative approach where an LLM does the planning, then divvies up the legwork of that strategy among smaller ones. Their method helps small LMs provide more accurate responses than leading LLMs like OpenAI’s GPT-4o, and approach the precision of top reasoning systems such as o1, while being more efficient than both. Their framework, called “Distributional Constraints by Inference Programming with Language Models” (or “DisCIPL”), has a large model steer smaller “follower” models toward precise responses when writing things like text blurbs, grocery lists with budgets, and travel itineraries.

The inner workings of DisCIPL are much like contracting a company for a particular job. You provide a “boss” model with a request, and it carefully considers how to go about doing that project. Then, the LLM relays these instructions and guidelines in a clear way to smaller models. It corrects follower LMs’ outputs where needed — for example, replacing one model’s phrasing that doesn’t fit in a poem with a better option from another.

The LLM communicates with its followers using a language they all understand — that is, a programming language for controlling LMs called “LLaMPPL.” Developed by MIT's Probabilistic Computing Project in 2023, this language allows users to encode specific rules that steer a model toward a desired result. For example, LLaMPPL can be used to produce error-free code by incorporating the rules of a particular programming language within its instructions. Directions like “write eight lines of poetry where each line has exactly eight words” are encoded in LLaMPPL, cueing smaller models to contribute to different parts of the answer.
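
To give a rough sense of how constraint-guided generation works, here is an illustrative sketch in plain Python. It is not LLaMPPL's actual interface or any code from DisCIPL: the "follower" below is a hard-coded word sampler standing in for a small language model, and the constraint is a simple length rule.

import random

# Toy vocabulary; in DisCIPL the proposals would come from a small follower LM.
VOCAB = ["the", "quiet", "river", "bends", "under", "old", "stone", "bridges", "tonight"]

def follower_propose(prefix, k=3):
    # Stand-in for a follower model: propose k candidate next words.
    return random.sample(VOCAB, k)

def still_viable(words, target_len=8):
    # Partial constraint check: a prefix is viable only if it has not exceeded the target length.
    return len(words) <= target_len

def generate_line(target_len=8):
    words = []
    while len(words) < target_len:
        candidates = follower_propose(words)
        viable = [w for w in candidates if still_viable(words + [w], target_len)]
        if not viable:
            words.pop()  # dead end: back off one word and try again
            continue
        words.append(random.choice(viable))
    return " ".join(words)

print(generate_line())  # always prints a line of exactly eight words

Real systems score and reweight many such partial sequences at once, but the core loop is the same idea: propose a continuation, check it against the encoded rules, and keep only the continuations that can still lead to a valid answer.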

MIT PhD student Gabriel Grand, who is the lead author on a paper presenting this work, says that DisCIPL allows LMs to guide each other toward the best responses, which improves their overall efficiency. “We’re working toward improving LMs’ inference efficiency, particularly on the many modern applications of these models that involve generating outputs subject to constraints,” adds Grand, who is also a CSAIL researcher. “Language models are consuming more energy as people use them more, which means we need models that can provide accurate answers while using minimal computing power.”

“It's really exciting to see new alternatives to standard language model inference,” says University of California at Berkeley Assistant Professor Alane Suhr, who wasn’t involved in the research. “This work invites new approaches to language modeling and LLMs that significantly reduce inference latency via parallelization, require significantly fewer parameters than current LLMs, and even improve task performance over standard serialized inference. The work also presents opportunities to explore transparency, interpretability, and controllability of model outputs, which is still a huge open problem in the deployment of these technologies.”

An underdog story

You may think that larger-scale LMs are “better” at complex prompts than smaller ones when it comes to accuracy and efficiency. DisCIPL suggests a surprising counterpoint for these tasks: If you can combine the strengths of smaller models instead, you may just see an efficiency bump with similar results.

The researchers note that, in theory, you can plug in dozens of LMs to work together in the DisCIPL framework, regardless of size. In writing and reasoning experiments, they went with GPT-4o as their “planner LM,” which is one of the models that helps ChatGPT generate responses. It brainstormed a plan for several “Llama-3.2-1B” models (smaller systems developed by Meta), in which those LMs filled in each word (or token) of the response.

This collective approach competed against three comparable ones: a follower-only baseline powered by Llama-3.2-1B, GPT-4o working on its own, and the industry-leading o1 reasoning system that helps ChatGPT figure out more complex questions, such as coding requests and math problems.

DisCIPL first demonstrated an ability to write sentences and paragraphs that follow explicit rules. The models were given very specific prompts — for example, writing a sentence that has exactly 18 words, where the fourth word must be “Glasgow,” the eighth must be “in,” and the 11th must be “and.” The system was remarkably adept at handling this request, crafting coherent outputs with accuracy similar to o1.
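
That kind of rule is easy to state as code. As a purely illustrative example (this checker is not part of DisCIPL, and the sample sentence is invented for the sketch), the constraint above could be verified like this:

def meets_constraint(sentence):
    # Exactly 18 words, with "Glasgow" fourth, "in" eighth, and "and" 11th (1-indexed).
    words = sentence.split()
    return (len(words) == 18
            and words[3].strip(".,") == "Glasgow"
            and words[7].strip(".,") == "in"
            and words[10].strip(".,") == "and")

example = ("Visitors often find Glasgow surprisingly warm, especially in "
           "late spring, and they return for its music every year.")
print(meets_constraint(example))  # True

In DisCIPL, the planner writes this kind of specification automatically, and the followers are steered to produce text that satisfies it.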

Faster, cheaper, better

This experiment also revealed that key components of DisCIPL were much cheaper than state-of-the-art systems. For instance, whereas existing reasoning models like OpenAI’s o1 perform reasoning in text, DisCIPL “reasons” by writing Python code, which is more compact. In practice, the researchers found that DisCIPL led to 40.1 percent shorter reasoning and 80.2 percent cost savings over o1.

DisCIPL’s efficiency gains stem partly from using small Llama models as followers, which are 1,000 to 10,000 times cheaper per token than comparable reasoning models. This means that DisCIPL is more “scalable” — the researchers were able to run dozens of Llama models in parallel for a fraction of the cost.

Those weren’t the only surprising findings, according to CSAIL researchers. Their system also performed well against o1 on real-world tasks, such as making ingredient lists, planning out a travel itinerary, and writing grant proposals with word limits. Meanwhile, GPT-4o struggled with these requests, and in the writing tests it often couldn’t place keywords in the correct parts of sentences. The follower-only baseline essentially finished in last place across the board, as it had difficulty following instructions.

“Over the last several years, we’ve seen some impressive results from approaches that use language models to ‘auto-formalize’ problems in math and robotics by representing them with code,” says senior author Jacob Andreas, who is an MIT electrical engineering and computer science associate professor and CSAIL principal investigator. “What I find most exciting about this paper is the fact that we can now use LMs to auto-formalize text generation itself, enabling the same kinds of efficiency gains and guarantees that we’ve seen in these other domains.” 

In the future, the researchers plan on expanding this framework into a fully recursive approach, where the same model can serve as both the leader and the followers. Grand adds that DisCIPL could be extended to mathematical reasoning tasks, where answers are harder to verify. They also intend to test the system on its ability to meet users’ fuzzy preferences, which can’t be outlined in code as explicitly as hard constraints. Thinking even bigger, the team hopes to use the largest models available, although they note that such experiments are computationally expensive.

Grand and Andreas wrote the paper alongside CSAIL principal investigator and MIT Professor Joshua Tenenbaum, as well as MIT Department of Brain and Cognitive Sciences Principal Research Scientist Vikash Mansinghka and Yale University Assistant Professor Alex Lew SM ’20 PhD ’25. CSAIL researchers presented the work at the Conference on Language Modeling in October and IVADO’s “Deploying Autonomous Agents: Lessons, Risks and Real-World Impact” workshop in November.

Their work was supported, in part, by the MIT Quest for Intelligence, Siegel Family Foundation, the MIT-IBM Watson AI Lab, a Sloan Research Fellowship, Intel, the Air Force Office of Scientific Research, the Defense Advanced Research Projects Agency, the Office of Naval Research, and the National Science Foundation.

New MIT program to train military leaders for the AI age

MIT Latest News - Fri, 12/12/2025 - 1:10pm

Artificial intelligence can enhance decision-making and enable action with reduced risk and greater precision, making it a critical tool for national security. A new program offered jointly by the MIT departments of Mechanical Engineering (Course 2, MechE) and Electrical Engineering and Computer Science (Course 6, EECS) will provide breadth and depth in technical studies for naval officers, as well as a path for non-naval officers studying at MIT, to grow in their understanding of applied AI for naval and military applications.

“The potential for artificial intelligence is just starting to be fully realized. It’s a tool that dramatically improves speed, efficiency, and decision-making with countless applications,” says Commander Christopher MacLean, MIT associate professor of the practice in mechanical engineering, naval construction, and engineering. “AI is a force multiplier that can be used for data processing, decision support, unmanned and autonomous systems, cyber defense, logistics and supply chains, energy management, and many other fields.”

The program, called “2N6: Applied Artificial Intelligence Program for Naval Officers,” comprises a two-year master of science degree in mechanical engineering with an accompanying AI certificate awarded by the MIT Schwarzman College of Computing.

“The officers entering this program will learn from the world’s experts, conduct cutting-edge, relevant research, and exit the program best prepared for their roles as leaders across the U.S. naval enterprise,” says MacLean.

The 2N6 curriculum is application focused, and the content is built to satisfy the U.S. Navy’s sub-specialty code for Applied Artificial Intelligence. Students will learn core AI concepts, as well as applications to special topics, such as decision-making for computational exercises; AI for manufacturing and design, with special emphasis on navy applications; and AI for marine autonomy of surface and underwater vehicles.

“The expanding influence of artificial intelligence is redefining our approach to problem-solving. AI holds the potential to address some of the most pressing issues in nearly every field,” says Dan Huttenlocher, dean of the MIT Schwarzman College of Computing and Henry Ellis Warren Professor of Electrical Engineering and Computer Science. “I’m honored that the college can contribute to and support such a vital program that will equip our nation’s naval officers with the technical expertise they need for mission-relevant challenges.”

MIT has been a leading center of ship research and design for over a century, with work at the Institute today representing significant advancements in fluid mechanics and hydrodynamics, acoustics, offshore mechanics, marine robotics and sensors, and ocean sensing and forecasting. The 2N program will celebrate its 125th year at MIT in 2026.

“In MechE, we are embracing the use of AI to explore new frontiers in research and education, with deep grounding in the fundamentals, design, and scaling of physical systems,” says John Hart, the Class of 1922 Professor and head of MechE. “With the 2N6 program, we’re proud to be at the helm of such an important charge in training the next generation of leaders for the Navy.”

“Breakthroughs in artificial intelligence are reshaping society and advancing human decision-making and creativity,” says Asu Ozdaglar, deputy dean of the MIT Schwarzman College of Computing, head of EECS, and MathWorks Professor. “We are delighted to partner with the Department of Mechanical Engineering in launching this important collaboration with the U.S. Navy. The program will explore not only the forefront of AI advances, but also its effective application in Navy operations.”

2N6 was created following a visit to campus from Admiral Samuel Paparo, commander of the U.S. Indo-Pacific Command, with MIT Provost Anantha Chandrakasan, who was then dean of engineering and chief innovation and strategy officer.

“[Admiral Paparo] was given an overview of some of the cutting-edge work and research that MIT has done and is doing in the field of AI, [and was introduced to the 2N program],” says MacLean. “The admiral made the connection, envisioning an applied AI program similar to 2N.”

2N6 will run as a pilot program for at least two years. The program’s first cohort will comprise only U.S. Navy officers, with plans to expand more broadly.

“We are thrilled to build on the long-standing relationship between MIT and the U.S. Navy with this new program,” says Themis Sapsis, William I. Koch Professor in mechanical engineering and the director of the Center for Ocean Engineering at MIT. “It is specifically designed to train naval officers on the fundamentals and applications of AI, but also to involve them in research that has a direct impact on the Navy. We believe that 2N6 can model a new paradigm for advanced AI education focused more broadly on supporting national security.”

A better DNA material for genetic medicine

MIT Latest News - Fri, 12/12/2025 - 12:00pm

To our immune system, a potentially lifesaving gene therapy can look a lot like a dangerous infection. That’s because most genetic medicine uses viruses or double-stranded DNA to deliver genetic information to target cells. DNA in its traditional double helix form can lead to toxic immune stimulation and be difficult to package into cellular delivery vehicles. As a result, the reach of genetic medicine is limited today.

Kano Therapeutics is taking a different approach to genetic therapies. The company is developing gene-editing technologies using circular single-stranded DNA (cssDNA), a biomolecule that is less toxic than double-stranded DNA and more stable than RNA, and could be delivered more efficiently to many parts of the body to treat genetic diseases, cancers, and more.

The company, which was founded by former MIT postdoc Floris Engelhardt, professor of biological engineering Mark Bathe, and John Vroom MBA ’22, is developing a platform for manufacturing cssDNA of customized lengths and sequences, which could deliver genetic material to fix or replace faulty genes.

“We can work with CRISPR and other gene-editing technologies,” Engelhardt says. “CRISPR finds a location in a genome, binds to it, and cuts at that location. That allows you to edit a gene or stop a gene from functioning. But what if you have a loss-of-function disease where you need to insert a new piece of genetic code? Our approach allows you to replace whole genes or add genetic information.”

Making DNA flexible

Around 2019, Bathe’s lab published research describing ways to engineer the sequence and length of cssDNA molecules, which have been used in labs for decades but have increasingly drawn interest for improving gene therapies. Several pharmaceutical companies immediately reached out.

“Single-stranded DNA is a little like messenger RNA, which can code for any protein in any cell, tumor, or organ,” Bathe says. “It fundamentally encodes for a protein, so it can be used across diseases, including rare diseases that may only affect a few people in the country.”

Engelhardt had also worked on cssDNA as a PhD student in Munich. She met Bathe at a conference.

“We were considering collaborating on research,” Engelhardt recalls. “Then Mark heard I was finishing my PhD and said, ‘Wait a minute. Instead of collaborating, I should hire you.’”

Within 48 hours of submitting her PhD thesis, Engelhardt received an email asking her to apply to Bathe’s lab as a postdoc. She was drawn to the position because she would be focusing on research that had the potential to help patients.

“MIT is very good at creating industry-focused postdocs,” Engelhardt says. “I was inspired by the idea of doing postdoc work with the goal of spinning out a company, as opposed to doing solely academic-focused research.”

Bathe and Engelhardt learned from members of the pharmaceutical industry how single-stranded DNA could help overcome limitations in gene and cell therapies. Although CRISPR-based treatments have recently been approved for a few genetic diseases, CRISPR’s effectiveness has been limited by its potential toxicity and inefficient delivery to specific sites in the body. Also, those treatments can only be administered once because CRISPR often gets labeled as foreign by our immune systems and rejected from the body.

Engelhardt began exploring MIT’s resources to help commercialize her research. She met Vroom through an online “founder speed dating” event at MIT. She also received support from the Venture Mentoring Service, took classes at MIT’s Sloan School of Management, and worked with MIT’s Industrial Liaison Program. Early on, Bathe suggested Engelhardt work with MIT’s Technology Licensing Office, something she says she tells every founder to do the moment they start thinking about commercializing their research.

In 2021, Kano won the $20,000 first place prize at the MIT Sloan Healthcare Innovation Prize (SHIP) to commercialize a new way to design and manufacture single-stranded DNA. Kano uses fermentation to produce its cssDNA less expensively than approaches based on chemical DNA synthesis.

“No one had the ability to access this type of genetic material, and so a lot of our work was around creating the highest-quality, economically scalable process to allow circular single-stranded DNA to be commercially viable,” Engelhardt says.

Engelhardt and Vroom began meeting with investors as soon as Engelhardt finished her postdoc work in 2021. The founders worked to raise money over the next year while Vroom finished his MBA.

Today, Kano’s circular ssDNA can be used to insert entire genes, up to 10,000 nucleotides long, into the body. Kano is planning to partner with pharmaceutical companies to make their gene therapies more targeted and potent. For instance, pharmaceutical partners could use Kano’s platform to join the CD19 and CD20 genes, which are expressed in certain tumor cells, and stipulate that only if both genes bind to a cell receptor do they enter that cell’s genome and make edits.

Overall, Engelhardt says working with circular single-stranded DNA makes Kano’s approach more flexible than platforms like CRISPR.

“We realized, working with pharmaceutical companies early on in my postdoc, that there was a lack of design understanding because of the lack of access to these molecules,” Engelhardt says. “When it comes to gene or cell therapies, people just think of the gene itself, not the flanking sequences or anything else that goes around the gene. Now that the DNA isn’t stuck in a double helix all the time, I can create small, three-dimensional structures — think loops or hairpins — that work, for example, as a binding protein that pulls it into the nucleus. That unlocks a completely new path for DNA because it makes it engineerable — not only on a structural level but also a sequence level.”

Partnering for impact

To facilitate more partnerships, Kano is signing agreements with partners that give it a smaller percentage of eventual drug royalties but allow it to work with many companies at the same time. In a recent collaboration with Merck KGaA, Kano combined its cssDNA platform with the company’s lipid nanoparticle solutions for delivering gene therapies. Kano is also in discussions with other large pharmaceutical companies to jointly bring cancer drugs into the clinic over the next two years.

“That’s exciting because we’ll be implementing our DNA into partners’ drug system, so when they file their new drug and dose their first patients, our DNA is going to be the therapeutic information carrier for efficacy,” Engelhardt says. “As a first-time founder, this is where you want to go. We talk about patient impact all the time, and this is how we’re going to get it.”

Kano is also developing the first databank mapping cssDNA designs to activity, to speed up the development of new treatments.

“Right now, there is no understanding of how to design DNA for these therapies,” Engelhardt says. “Everyone who wants to differentiate needs to come up with a new editing tool, a new delivery tool, and there’s no connecting company that can enable those areas of expertise. When partners come to us, we can say, ‘The gene sequence is all yours.’ But often it’s not just about the sequence. It’s also about the promoter or flanking sequence that allows you to insert your DNA into the genome, or that makes DNA package well into your delivery nanoparticle. At Kano, we’re building the best knowledgebase to use DNA material to treat diseases.”

EFF and 12 Organizations Urge UK Politicians to Drop Digital ID Scheme Ahead of Parliamentary Petition Debate

EFF: Updates - Fri, 12/12/2025 - 11:48am

The UK Parliament convened earlier this week to debate a petition signed by almost 2.9 million people calling for an end to the government’s plans to roll out a national digital ID. Ahead of that debate, EFF and 12 other civil society organizations wrote to politicians in the country urging MPs to reject the Labour government’s newly announced digital ID proposal.

The UK’s Prime Minister Keir Starmer pitched the scheme as a way to “cut the faff” in proving people’s identities by creating a virtual ID on personal devices with information like names, date of birth, nationality, photo, and residency status to verify their right to live and work in the country. 

But the case for digital identification has not been made. 

As we detail in our joint briefing, the proposal follows a troubling global trend: governments introducing expansive digital identity systems that are structurally incompatible with a rights-respecting democracy. The UK’s plan raises six interconnected concerns:

  1. Mission creep
  2. Infringements on privacy rights
  3. Serious security risks
  4. Reliance on inaccurate and unproven technologies
  5. Discrimination and exclusion
  6. The deepening of entrenched power imbalances between the state and the public.

Digital ID schemes don’t simply verify who you are—they redefine who can access services and what those services look like. They become a gatekeeper to essential societal infrastructure, enabling governments and state agencies to close doors as easily as they open them. And they disproportionately harm those already at society’s margins, including people seeking asylum and undocumented communities, who already face heightened surveillance and risk.

Even the strongest recommended safeguards cannot resolve the core problem: a mandatory digital ID scheme that shifts power dramatically away from individuals and toward the state. No one should be coerced—technically or socially—into a digital system in order to participate fully in public life. And at a time when almost 3 million people in the UK have called on politicians to reject this proposal, the government must listen to people and say no to digital ID.

Read our civil society briefing in full here.

Making clean energy investments more successful

MIT Latest News - Fri, 12/12/2025 - 11:20am

Governments and companies constantly face decisions about how to allocate finite amounts of money to clean energy technologies that can make a difference to the world’s climate, its economies, and society as a whole. The process is inherently uncertain, but research can help predict which technologies will be most successful. Grounding such decisions in data can make them better informed and more likely to produce the desired results.

The role of these predictive tools, and the areas where further research is needed, are addressed in a perspective article published Nov. 24 in Nature Energy, by professor Jessika Trancik of MIT’s Sociotechnical Systems Research Center and Institute of Data, Systems, and Society and 13 co-authors from institutions around the world.

She and her co-authors span engineering and social science and share “a common interest in understanding how to best use data and models to inform decisions that influence how technology evolves,” Trancik says. They are interested in “analyzing many evolving technologies — rather than focusing on developing only one particular technology — to understand which ones can deliver.” Their paper is aimed at companies and governments, as well as researchers. “Increasingly, companies have as much agency as governments over these technology portfolio decisions,” she says, “although government policy can still do a lot because it can provide a sort of signal across the market.”

The study looked at three stages of the process, starting with forecasting the actual technological changes that are likely to play important roles in coming years, then looking at how those changes could affect economic, social, and environmental conditions, and finally, how to bring these insights into the actual decision-making processes as they occur.

Forecasting is usually either data-driven or expert-driven, or a combination of the two. That provides an estimate of how technologies may be improving, as well as an estimate of the uncertainties in those predictions. Then, in the next step, a variety of models are applied that are “very wide ranging,” Trancik says, “different models that cover energy systems, transportation systems, electricity, and also integrated assessment models that look at the impact of technology on the environment and on the economy.”

And then, the third step is “finding structured ways to use the information from predictive models to interact with people that may be using that information to inform their decision-making process,” she says. “In all three of these steps, you need to recognize the vast uncertainty and tease out the predictive aspects. How you deal with uncertainty is really important.”

In the implementation of these decisions, “people may have different objectives, or they may have the same objective but different beliefs about how to get there. And so, part of the research is bringing in this quantitative analysis, these research results, into that process,” Trancik says. And a very important aspect of that third step, she adds, is “recognizing that it’s not just about presenting the model results and saying, ‘here you go, this is the right answer.’ Rather, you have to bring people into the process of designing the studies and interacting with the modeling results.”

She adds that “the role of research is to provide information to, in this case, the decision-making processes. It’s not the role of the researchers to push for one outcome or another, in terms of balancing the trade-offs,” such as between economic, environmental, and social equity concerns. It’s about providing information, not just for the decision-makers themselves, but also for the public who may influence those decisions. “I do think it’s relevant for the public to think about this, and to think about the agency that actually they could have over how technology is evolving.”

In the study, the team highlighted priorities for further research that needs to be done. Those priorities, Trancik says, include “streamlining and validating models, and also streamlining data collection,” because these days “we often have more data than we need, just tons of data,” and yet “there’s often a scarcity of data in certain key areas like technology performance and evolution. How technologies evolve is just so important in influencing our daily lives, yet it’s hard sometimes to access good representative data on what’s actually happening with this technology.” But she sees opportunities for concerted efforts to assemble large, comprehensive data on technology from publicly available sources.

Trancik points out that many models are developed to represent some real-world process, and “it’s very important to test how well that model does against reality,” for example by using the model to “predict” some event whose outcome is already known and then “seeing how far off you are.” That’s easier to do with a more streamlined model, she says.

“It’s tempting to develop a model that includes many, many parameters and lots of different detail. But often what you need to do is only include detail that’s relevant for the particular question you’re asking, and that allows you to make your model simpler.” Sometimes that means you can simplify the decision down to just solving an equation, and other times, “you need to simulate things, but you can still validate the model against real-world data that you have.”
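
As a concrete illustration of that kind of validation, here is a toy backtest on synthetic data. It is not the modeling used in the Nature Energy paper; it simply assumes costs follow an experience curve (a power law in cumulative production), fits that model to the first half of a simulated cost record, "predicts" the second half whose outcome is already known, and reports how far off the forecast is.

import numpy as np

rng = np.random.default_rng(0)

# Synthetic cost history following an experience curve with noise (true exponent 0.32).
cum_prod = np.logspace(0, 4, 40)  # cumulative production, arbitrary units
observed = 100.0 * cum_prod ** -0.32 * rng.lognormal(0.0, 0.05, size=cum_prod.size)

# Fit a power law to the first half of the record (a straight line in log-log space).
slope, intercept = np.polyfit(np.log(cum_prod[:20]), np.log(observed[:20]), 1)

# "Predict" the held-out second half and compare with what actually happened.
predicted = np.exp(intercept) * cum_prod[20:] ** slope
rel_error = np.abs(predicted - observed[20:]) / observed[20:]

print(f"fitted learning exponent: {-slope:.2f} (value used to simulate: 0.32)")
print(f"median out-of-sample error: {100 * np.median(rel_error):.1f}%")

Keeping the model this simple is what makes the backtest easy to run and to interpret, which is the point about streamlined, validatable models.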

“The scale of energy and climate problems mean there is much more to do,” says Gregory Nemet, faculty chair in business and regulation at the University of Wisconsin at Madison, who was a co-author of the paper. He adds, “while we can’t accurately forecast individual technologies on their own, a variety of methods have been developed that in conjunction can enable decision-makers to make public dollars go much further, and enhance the likelihood that future investments create strong public benefits.”

This work is perhaps particularly relevant now, Trancik says, in helping to address global challenges including climate change and meeting energy demand, which were in focus at the global climate conference COP 30 that just took place in Brazil. “I think with big societal challenges like climate change, always a key question is, ‘how do you make progress with limited time and limited financial resources?’” This research, she stresses, “is all about that. It’s about using data, using knowledge that’s out there, expertise that’s out there, drawing out the relevant parts of all of that, to allow people and society to be more deliberate and successful about how they’re making decisions about investing in technology.”

As with other areas such as epidemiology, where the power of analytical forecasting may be more widely appreciated, she says, “in other areas of technology as well, there’s a lot we can do to anticipate where things are going, how technology is evolving at the global or at the national scale … There are these macro-level trends that you can steer in certain directions, that we actually have more agency over as a society than we might recognize.”

The study included researchers in Massachusetts, Wisconsin, Colorado, Maryland, Maine, California, Austria, Norway, Mexico, Finland, Italy, the U.K., and the Netherlands. 

President Tharman Shanmugaratnam of Singapore visits MIT

MIT Latest News - Fri, 12/12/2025 - 10:00am

President Tharman Shanmugaratnam of the Republic of Singapore visited MIT on Tuesday, meeting campus leaders while receiving the Miriam Pozen Prize and delivering a lecture on fiscal policy at the MIT Sloan School of Management.

“We really have to re-orient fiscal policy and develop new fiscal compacts,” said Tharman in his remarks, referring to the budget policy challenges countries face at a time of expanding government debt.

His talk, “The Compacts We Need: Fiscal Choices and Risk-sharing for Sustained Prosperity,” was delivered before a capacity audience of students, faculty, administrators, and staff at MIT’s Samberg Center.

Tharman is a trained economist who for many years ran Singapore’s central bank and has become a notable presence in global policymaking circles. Presenting a crisp summary of global trends, he observed that debt levels in major economies are at or beyond levels once regarded as unsustainable.

“There is no realistic solution to putting government debts back on a sustainable path other than having to make major adjustments to taxes and spending,” he said. However, he emphasized that his remarks were distinctly not “a call for austerity.” Instead, as he outlined, well-considered public investment can reduce the need for additional spending and thus be fiscally sound over time.

For instance, he noted, sound policy approaches can reduce individuals’ health care needs by better providing the conditions in which people stay healthy. Lowering some of these individual burdens and investing in community-building policies can help society both fiscally and by enhancing social solidarity.

“The challenge is to make these adjustments while re-fashioning fiscal policy so that people can see the adjustments — they can see the value in government spending that their taxes are contributing to — and to make adjustments in a way that doesn’t reduce growth,” Tharman said. “You do need growth for solidarity.”

In this sense, he proposed, “We need new fiscal compacts, new retirement compacts, and new global compacts to address the risks that are posed in the minds of individuals, as well as the largest risks” in society. Countries are vulnerable to a variety of shocks, he noted, calling climate change the “defining challenge of our time.” And yet, he added, for all of this, sensible policymaking can encourage people, creating more support for public-minded governance.

“It is that sharing of hopes and aspirations that is at the heart of true solidarity, not the sharing of fears,” Tharman concluded.

Before the lecture, Tharman was greeted by MIT Provost Anantha Chandrakasan, who presented him with a small gift from the MIT Glass Lab, and MIT Sloan Dean Richard Locke. Locke then made welcoming remarks at the event, praising Tharman’s “remarkable leadership in international financial policy, among other things.” After the lecture, Tharman also met with a group of MIT students from Singapore.

The Miriam Pozen Prize is awarded every two years by the MIT Golub Center for Finance and Policy, part of MIT Sloan. The prize, which recognizes extraordinary contributions to financial policy, was created to draw attention to the important research on financial policy conducted at the Golub Center, whose mission is to support research and educational initiatives related to governments’ roles as financial institutions and as regulators of the global financial system. It is named for the mother of MIT Sloan Senior Lecturer Robert C. Pozen, who is also the former executive chairman of MFS Investment Management, and a former vice chairman of Fidelity Investments and president of Fidelity Management and Research Company.

In introductory remarks, Robert Pozen said he was “deeply honored” to present the prize, adding, “It’s very unusual to have someone who is both a brilliant economist and an effective political leader, and that combination is exactly what we’re trying to honor and recognize.”

The previous recipients of the award are Mario Draghi PhD ’77, the former prime minister of Italy and president of the European Central Bank; and the late Stanley Fischer PhD ’69, an influential MIT economist who later became governor of the Bank of Israel, and then vice-chairman of the U.S. Federal Reserve. Draghi received the honor in 2023, and Fischer in 2021.

Tharman was first elected to his current office in 2023. In Singapore, he previously served as, among other roles, deputy prime minister, minister for finance, minister for education, and chairman of the Monetary Authority of Singapore.

Tharman holds a BA in economics from the London School of Economics, an MA in economics from the University of Cambridge, and an MPA from the Harvard Kennedy School at Harvard University.

MIT and Singapore have developed a sustained and productive relationship in research and education over the last quarter-century. The Singapore-MIT Alliance for Research and Technology (SMART), formally launched in 2007, is MIT’s first research center located outside of the United States, featuring work in several interdisciplinary areas of innovation.

The MIT-Singapore program also provides MIT students with research, work, and educational opportunities in Singapore. Additionally, MIT Institute Professor Emeritus Thomas Magnanti, who was present at Tuesday’s event, was the founding president of the Singapore University of Technology and Design, in 2009.

Tuesday’s event also had introductory remarks from Deborah J. Lucas, Sloan Distinguished Professor of Finance at MIT Sloan and director of the MIT Golub Center for Finance and Policy; Peter Fischer, Golub Distinguished Senior Fellow at MIT Sloan and a former under secretary in the U.S. Treasury Department; and Robert C. Merton, School of Management Distinguished Professor of Finance at MIT Sloan.

In her comments, Lucas said that Tharman “personifies the qualities the award was created to honor,” while Fischer cited his emphasis on “the betterment of humankind.”

Merton praised Tharman’s “deep commitment for advancing financial policy in a way that serves both national and global arenas.” He added: “You have always believed that policy is not just about numbers, but about people. And that sound financial [policies] serve the many, not just the few.”

Building Trustworthy AI Agents

Schneier on Security - Fri, 12/12/2025 - 7:00am

The promise of personal AI assistants rests on a dangerous assumption: that we can trust systems we haven’t made trustworthy. We can’t. And today’s versions are failing us in predictable ways: pushing us to do things against our own best interests, gaslighting us with doubt about things we are or that we know, and being unable to distinguish between who we are and who we have been. They struggle with incomplete, inaccurate, and partial context: with no standard way to move toward accuracy, no mechanism to correct sources of error, and no accountability when wrong information leads to bad decisions...

The Paris Agreement at 10: What the world has achieved.

ClimateWire News - Fri, 12/12/2025 - 6:24am
The blockbuster climate deal made history a decade ago. But its record at taming climate change is spotty.

Noem says FEMA is moving faster than ever. Agency records say otherwise.

ClimateWire News - Fri, 12/12/2025 - 6:22am
President Donald Trump is approving disaster requests at a slower pace in his second term than his predecessor, former President Joe Biden.

Judge faults Trump admin for scrapping FEMA program

ClimateWire News - Fri, 12/12/2025 - 6:21am
The decision is a win for Democratic-led states that sued to save the program, which helps states gird for natural disasters.

Deadly floods in southern Asia mark worsening trend

ClimateWire News - Fri, 12/12/2025 - 6:21am
Some communities are taking their concerns about intensifying climate disasters to the courts.

Trump wants to keep Venezuela’s seized oil. It’s probably legal.

ClimateWire News - Fri, 12/12/2025 - 6:21am
The U.S. may be able to keep oil worth as much as $100 million after seizing an oil tanker headed to Cuba.

No big party in Paris as climate pact turns 10

ClimateWire News - Fri, 12/12/2025 - 6:18am
The birthday of the founding treaty of climate negotiations arrives just as the fight against climate change appears to lose momentum.

EU mulls 5-year respite from combustion ban for hybrids

ClimateWire News - Fri, 12/12/2025 - 6:17am
Governments and carmakers say shifting away from current technology by 2035 is too aggressive and risks killing a core industry.

German coalition targets accord by March on disputed heating law

ClimateWire News - Fri, 12/12/2025 - 6:17am
The heating law provoked an outcry when it was introduced by Germany’s previous government of Social Democrats and Greens.

Winter storm rips through Gaza, exposing failure to deliver enough aid

ClimateWire News - Fri, 12/12/2025 - 6:16am
Figures released by Israel's military suggest it hasn't met the ceasefire stipulation of allowing 600 trucks of aid into Gaza a day.

New method improves the reliability of statistical estimations

MIT Latest News - Fri, 12/12/2025 - 12:00am

Let’s say an environmental scientist is studying whether exposure to air pollution is associated with lower birth weights in a particular county.

They might train a machine-learning model to estimate the magnitude of this association, since machine-learning methods are especially good at learning complex relationships.

Standard machine-learning methods excel at making predictions and sometimes provide uncertainties, like confidence intervals, for these predictions. However, they generally don’t provide estimates or confidence intervals when determining whether two variables are related. Other methods have been developed specifically to address this association problem and provide confidence intervals. But, in spatial settings, MIT researchers found these confidence intervals can be completely off the mark.

When variables like air pollution levels or precipitation change across different locations, common methods for generating confidence intervals may claim a high level of confidence when, in fact, the estimation completely failed to capture the actual value. These faulty confidence intervals can mislead the user into trusting a model that failed.

After identifying this shortfall, the researchers developed a new method designed to generate valid confidence intervals for problems involving data that vary across space. In simulations and experiments with real data, their method was the only technique that consistently generated accurate confidence intervals.

This work could help researchers in fields like environmental science, economics, and epidemiology better understand when to trust the results of certain experiments.

“There are so many problems where people are interested in understanding phenomena over space, like weather or forest management. We’ve shown that, for this broad class of problems, there are more appropriate methods that can get us better performance, a better understanding of what is going on, and results that are more trustworthy,” says Tamara Broderick, an associate professor in MIT’s Department of Electrical Engineering and Computer Science (EECS), a member of the Laboratory for Information and Decision Systems (LIDS) and the Institute for Data, Systems, and Society, an affiliate of the Computer Science and Artificial Intelligence Laboratory (CSAIL), and senior author of this study.

Broderick is joined on the paper by co-lead authors David R. Burt, a postdoc, and Renato Berlinghieri, an EECS graduate student; and Stephen Bates, an assistant professor in EECS and a member of LIDS. The research was recently presented at the Conference on Neural Information Processing Systems.

Invalid assumptions

Spatial association involves studying how a variable and a certain outcome are related over a geographic area. For instance, one might want to study how tree cover in the United States relates to elevation.

To solve this type of problem, a scientist could gather observational data from many locations and use it to estimate the association at a different location where they do not have data.

The MIT researchers realized that, in this case, existing methods often generate confidence intervals that are completely wrong. A model might say it is 95 percent confident its estimation captures the true relationship between tree cover and elevation, when it didn’t capture that relationship at all.

After exploring this problem, the researchers determined that the assumptions these confidence interval methods rely on don’t hold up when data vary spatially.

Assumptions are like rules that must be followed to ensure results of a statistical analysis are valid. Common methods for generating confidence intervals operate under various assumptions.

First, they assume that the source data, which is the observational data one gathered to train the model, is independent and identically distributed. This assumption implies that the chance of including one location in the data has no bearing on whether another is included. But, for example, U.S. Environmental Protection Agency (EPA) air sensors are placed with other air sensor locations in mind.

Second, existing methods often assume that the model is perfectly correct, but this assumption is never true in practice. Finally, they assume the source data are similar to the target data where one wants to estimate.

But in spatial settings, the source data can be fundamentally different from the target data because the target data are in a different location than where the source data were gathered.

For instance, a scientist might use data from EPA pollution monitors to train a machine-learning model that can predict health outcomes in a rural area where there are no monitors. But the EPA pollution monitors are likely placed in urban areas, where there is more traffic and heavy industry, so the air quality data will be much different than the air quality data in the rural area.

In this case, estimates of association using the urban data suffer from bias because the target data are systematically different from the source data.
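
A small synthetic sketch can make this failure mode concrete. Nothing below comes from the paper: the smoothly varying "true" association, the monitors clustered near one point, and the pooled least-squares estimate are assumptions chosen purely for illustration. The point is that pooling spatially clustered source data can produce a confident-looking answer that is wrong for a distant target location.

import numpy as np

rng = np.random.default_rng(1)

def local_slope(location):
    # Assumed "true" association, varying smoothly over space.
    return 1.0 + 0.5 * np.sin(location / 3.0)

# Source data: monitor locations clustered near location 0 (an "urban" core).
src_loc = rng.normal(0.0, 1.0, 500)
src_cov = rng.normal(0.0, 1.0, 500)  # covariate at each monitor, e.g. pollution level
src_out = local_slope(src_loc) * src_cov + rng.normal(0.0, 0.1, 500)

# Naive approach: one pooled least-squares slope over all source locations.
pooled_slope = np.sum(src_cov * src_out) / np.sum(src_cov ** 2)

target_loc = 5.0  # a distant "rural" target site
print(f"pooled estimate from source data: {pooled_slope:.2f}")
print(f"true association at the target:   {local_slope(target_loc):.2f}")

In this toy setup the pooled estimate lands near the average association over the urban cluster (about 1.0), while the true value at the target location is roughly 1.5, so a narrow confidence interval built around the pooled estimate under the usual assumptions would miss it entirely.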

A smooth solution

The new method for generating confidence intervals explicitly accounts for this potential bias.

Instead of assuming the source and target data are similar, the researchers assume the data vary smoothly over space.

For instance, with fine particulate air pollution, one wouldn’t expect the pollution level on one city block to be starkly different than the pollution level on the next city block. Instead, pollution levels would smoothly taper off as one moves away from a pollution source.

“For these types of problems, this spatial smoothness assumption is more appropriate. It is a better match for what is actually going on in the data,” Broderick says.

When they compared their method to other common techniques, they found it was the only one that could consistently produce reliable confidence intervals for spatial analyses. In addition, their method remains reliable even when the observational data are distorted by random errors.

In the future, the researchers want to apply this analysis to different types of variables and explore other applications where it could provide more reliable results.

This research was funded, in part, by an MIT Social and Ethical Responsibilities of Computing (SERC) seed grant, the Office of Naval Research, Generali, Microsoft, and the National Science Foundation (NSF).

From Speakeasies to DEF CON—Celebrating With EFF Members: 2025 Year In Review

EFF: Updates - Thu, 12/11/2025 - 5:21pm

It’s been a great year to be on EFF’s membership team. There's no better feeling than hanging out with your fellow digital freedom supporters and being able to say, “Oh yeah, and we’re suing the government!” We’ve done that a lot this year—and that’s all thanks to people like you. 

As a token of appreciation for supporting EFF’s mission to protect privacy and free expression online for all people, we put a lot of care into meeting the members who make our work possible. Whether it’s hosting meetups, traveling to conferences, or finding new and fun ways to explain what we’re fighting for, connecting with you is always a highlight of the job.

EFF Speakeasy Meet Ups

One of my favorite perks we offer for EFF members is exclusive invites for Speakeasy meet ups. It’s a chance for us to meet the very passionate members who fuel our work! 

This year, we hosted Speakeasies across the country while making the rounds at conferences. We met supporters in Mesa, AZ during CactusCon; Pasadena, CA during SCALE; Portland, OR during BSidesPDX; New York, NY during HOPE and BSidesNYC; and Seattle, WA during our panel at the University of Washington. 

Of course, we also had to host a Speakeasy in our home court—and for the first time it took place in the South Bay Area in Mountain View, CA at Hacker Dojo! There, members of EFF’s D.C. Legislative team spoke about EFF’s legislative efforts and how they’ll shape digital rights for all. We even recorded that conversation for you to watch on YouTube or the Internet Archive.

And we can’t forget about our global community! Our annual online Speakeasy brought together members around the world for a conversation and Q&A with our friends at Women in Security and Privacy (WISP) about online behavioral tracking and the data broker industry. We heard and answered great questions about pushing back on online tracking and what legislative steps we can take to strengthen privacy. 

Summer Security Conferences

Say what you will about Vegas—nothing compares to the energy of seeing thousands of EFF supporters during the summer security conferences: BSidesLV, Black Hat USA, and DEF CON. This year over one thousand people signed up to support the digital freedom movement in just that one week.  

If you’ve ever seen us at a conference, you know the drill: a table full of EFF staff frantically handing out swag, answering questions, and excitedly saying hi to everyone that stops by and supports our work. This year it was especially fun to see how many people brought their Rayhunter devices.

And of course, it wouldn’t be a trip to Vegas without EFF’s annual DEF CON Poker Tournament. This year 48 supporters and friends played for money, glory, and the future of the web—all with EFF’s very own playing cards. For the first time ever, the jellybean trophy went to the same winner two years in a row! 

EFFecting Change Livestream Series

We ramped up our livestream series, EFFecting Change, this year with a total of six livestreams covering topics including the future of social media with guests from Mastodon, Bluesky, and Spill; EFF’s 35th Anniversary and what’s next in the fight for privacy and free speech online; and generative AI, including how to address the risks of the technology while protecting civil liberties and human rights online. 

We’ve got more in store for EFFecting Change in 2026, so be sure to stay up-to-date by signing up for updates.

EFF Awards Ceremony

EFF is at the forefront of protecting users from dystopian surveillance and unjust censorship online. But we’re not the only one doing this work, and we couldn’t do it without other organizations in the space. So, every year we like to award those who are courageously championing the digital rights movement. 

This year we gave out three awards: the EFF Award for Defending Digital Freedoms went to Software Freedom Law Center, India, the EFF Award for Protecting Americans’ Data went to Erie Meyer, and the EFF Award for Leading Immigration and Surveillance Litigation went to Just Futures Law. You can watch the EFF Awards here and see photos from the event too!


And It's All Thanks to You

That doesn’t even cover all of it! We also got to celebrate 35 years of EFF in July with limited-edition challenge coins and all-new member swag—plus a livestream covering EFF’s history and what’s next for us.

Grab EFF's 35th Anniversary t-shirt when you become a member today!

As the new year approaches, I always like to look back on the bright spots—especially the joy of hanging out with this incredible community. The world can feel hectic, but connecting with supporters like you is a reminder of how much good we can build when we work together. 

Many thanks to all of the EFF members who joined forces with us this year. If you’ve been meaning to join, but haven’t yet, year-end is a great time to do so.

This article is part of our Year in Review series. Read other articles about the fight for digital rights in 2025.

School of Science welcomed new faculty in 2024

MIT Latest News - Thu, 12/11/2025 - 4:55pm

The School of Science welcomed 11 new faculty members in 2024.

Shaoyun Bai researches symplectic topology, the study of even-dimensional spaces whose properties are reflected by two-dimensional surfaces inside them. He is interested in this area’s interaction with other fields, including algebraic geometry, algebraic topology, geometric topology, and dynamics. He has been developing new tool kits for counting problems from moduli spaces, which have been applied to classical questions, including the Arnold conjecture, periodic points of Hamiltonian maps, higher-rank Casson invariants, enumeration of embedded curves, and topology of symplectic fibrations.

Bai completed his undergraduate studies at Tsinghua University in 2017 and earned his PhD in mathematics from Princeton University in 2022, advised by John Pardon. Bai then held visiting positions at MSRI (now known as Simons Laufer Mathematical Sciences Institute) as a McDuff Postdoctoral Fellow and at the Simons Center for Geometry and Physics, and he was a Ritt Assistant Professor at Columbia University. He joined the MIT Department of Mathematics as an assistant professor in 2024.

Abigail Bodner investigates turbulence in the upper ocean using remote sensing measurements, in-situ ocean observations, numerical simulations, climate models, and machine learning. Her research explores how the small-scale physics of turbulence near the ocean surface impacts the large-scale climate.

Bodner earned a BS and MS from Tel Aviv University studying mathematics and geophysics, atmospheric and planetary sciences. She then went on to Brown University, earning an MS in applied mathematics before completing her PhD studies in 2021 in Earth, environmental, and planetary science. Prior to coming to MIT, Bodner was a Simons Society Junior Fellow at New York University. Bodner joined the Department of Earth, Atmospheric and Planetary Sciences (EAPS) faculty in 2024, with a shared appointment in the Department of Electrical Engineering and Computer Science.

Jacopo Borga is interested in probability theory and its connections to combinatorics, and in mathematical physics. He studies various random combinatorial structures — mathematical objects such as graphs or permutations — and their patterns and behavior at a large scale. This research includes random permutons, meanders, multidimensional constrained Brownian motions, Schramm-Loewner evolutions, and Liouville quantum gravity. 

Borga earned bachelor’s and master’s degrees in mathematics from the Università degli Studi di Padova and a master’s degree in mathematics from Université Sorbonne Paris Cité (USPC), then completed a PhD in mathematics at the Institut für Mathematik at the Universität Zürich. Borga was an assistant professor at Stanford University before joining MIT as an assistant professor of mathematics in 2024.

Linlin Fan aims to decipher the neural codes underlying learning and memory and to identify their physical basis. Her research focuses on the learning rules of brain circuits — what kinds of activity trigger the encoding and storing of information — how these learning rules are implemented, and how memories can be inferred from mapping neural functional connectivity patterns. To answer these questions, Fan’s group leverages high-precision, all-optical technologies to map and control the electrical activity of neurons within the brain.

Fan earned her PhD at Harvard University after undergraduate studies at Peking University in China. She joined the MIT Department of Brain and Cognitive Sciences as the Samuel A. Goldblith Career Development Professor of Applied Biology, and the Picower Institute for Learning and Memory as an investigator in January 2024. Previously, Fan worked as a postdoc at Stanford University.

Whitney Henry investigates ferroptosis, a type of cell death dependent on iron, to uncover how oxidative stress, metabolism, and immune signaling intersect to shape cell fate decisions. Her research has defined key lipid metabolic and iron homeostatic programs that regulate ferroptosis susceptibility. By uncovering the molecular factors influencing ferroptosis susceptibility, investigating its effects on the tumor microenvironment, and developing innovative methods to manipulate ferroptosis resistance in living organisms, Henry’s lab aims to gain a comprehensive understanding of the therapeutic potential of ferroptosis, especially to target highly metastatic, therapy-resistant cancer cells.

Henry received her bachelor's degree in biology with a minor in chemistry from Grambling State University and her PhD from Harvard University. Following her doctoral studies, she worked at the Whitehead Institute for Biomedical Research and was supported by fellowships from the Jane Coffin Childs Memorial Fund for Medical Research and the Ludwig Center at MIT. Henry joined the MIT faculty in 2024 as an assistant professor in the Department of Biology and a member of the Koch Institute for Integrative Cancer Research, and was recently named the Robert A. Swanson (1969) Career Development Professor of Life Sciences and an HHMI Freeman Hrabowski Scholar.

Gian Michele Innocenti is an experimental physicist who probes new regimes of quantum chromodynamics (QCD) through collisions of ultrarelativistic heavy ions at the Large Hadron Collider. He has developed advanced analysis techniques and data-acquisition strategies that enable novel measurements of open heavy-flavor and jet production in hadronic and ultraperipheral heavy-ion collisions, shedding light on the properties of high-temperature QCD matter and parton dynamics in Lorentz-contracted nuclei. He leads the MIT Pixel𝜑 program, which exploits CMOS MAPS technology to build a high-precision tracking detector for the ePIC experiment at the Electron–Ion Collider.

Innocenti received his PhD in particle and nuclear physics at the University of Turin in Italy in early 2014. He then joined the MIT heavy-ion group in the Laboratory for Nuclear Science in 2014 as a postdoc, followed by a staff research physicist position at CERN in 2018. Innocenti joined the MIT Department of Physics as an assistant professor in January 2024.

Mathematician Christoph Kehle’s research interests lie at the intersection of analysis, geometry, and partial differential equations. In particular, he focuses on the Einstein field equations of general relativity, the basis of our current understanding of gravitation, which describe how matter and energy shape spacetime. His work addresses the Strong Cosmic Censorship conjecture, singularities in black hole interiors, and the dynamics of extremal black holes.

Prior to joining MIT, Kehle was a junior fellow at ETH Zürich and a member at the Institute for Advanced Study in Princeton. He earned his bachelor’s and master’s degrees at Ludwig Maximilian University and Technical University of Munich, and his PhD in 2020 from the University of Cambridge. Kehle joined the Department of Mathematics as an assistant professor in July 2024.

Aleksandr Logunov is a mathematician specializing in harmonic analysis and geometric analysis. He has developed novel techniques for studying the zeros of solutions to partial differential equations and has resolved several long-standing problems, including Yau’s conjecture, Nadirashvili’s conjecture, and Landis’ conjectures.

Logunov earned his PhD in 2015 from St. Petersburg State University. He then spent two years as a postdoc at Tel Aviv University, followed by a year as a member of the Institute for Advanced Study in Princeton. In 2018, he joined Princeton University as an assistant professor. In 2020, he spent a semester at Tel Aviv University as an IAS Outstanding Fellow, and in 2021, he was appointed full professor at the University of Geneva. Logunov joined MIT as a full professor in the Department of Mathematics in January 2024.

Lyle Nelson is a sedimentary geologist studying the co-evolution of life and surface environments across pivotal transitions in Earth history, especially during significant ecological change — such as extinction events and the emergence of new clades — and during major shifts in ocean chemistry and climate. Studying sedimentary rocks that were tectonically uplifted and are now exposed in mountain belts around the world, Nelson’s group aims to answer questions such as how the reorganization of continents influenced the carbon cycle and climate, the causes and effects of ancient ice ages, and what factors drove the evolution of early life forms and the rapid diversification of animals during the Cambrian period.

Nelson earned a bachelor’s degree in earth and planetary sciences from Harvard University in 2015 and then worked as an exploration geologist before completing his PhD at Johns Hopkins University in 2022. Prior to coming to MIT, he was an assistant professor in the Department of Earth Sciences at Carleton University in Ontario, Canada. Nelson joined the EAPS faculty in 2024.

Protein evolution is the process by which proteins change over time through mechanisms such as mutation or natural selection. Biologist Sergey Ovchinnikov uses phylogenetic inference, protein structure prediction/determination, protein design, deep learning, energy-based models, and differentiable programming to tackle evolutionary questions at environmental, organismal, genomic, structural, and molecular scales, with the aim of developing a unified model of protein evolution.

Ovchinnikov received his BS in micro/molecular biology from Portland State University in 2010 and his PhD in molecular and cellular biology from the University of Washington in 2017. He was next a John Harvard Distinguished Science Fellow at Harvard University until 2023. Ovchinnikov joined MIT as an assistant professor of biology in January 2024.

Shu-Heng Shao explores the structural aspects of quantum field theories and lattice systems. Recently, his research has centered on generalized symmetries and anomalies, with a particular focus on novel types of symmetry without an inverse, referred to as non-invertible symmetries. These new symmetries have been identified in various quantum systems, including the Ising model, Yang-Mills theories, lattice gauge theories, and the Standard Model. They lead to new constraints on renormalization group flows, new conservation laws, and new organizing principles in classifying phases of quantum matter.

Shao obtained his BS in physics from National Taiwan University in 2010, and his PhD in physics from Harvard University in 2016. He was then a five-year long-term member at the Institute for Advanced Study in Princeton before he moved to the Yang Institute for Theoretical Physics at Stony Brook University as an assistant professor in 2021. In 2024, he joined the MIT faculty as an assistant professor of physics.

MIT researchers find new immunotherapeutic targets for glioblastoma

MIT Latest News - Thu, 12/11/2025 - 4:40pm

Glioblastoma is the most common form of brain cancer in adults, and its consequences are usually quick and fatal. After receiving standard-of-care treatment (surgery followed by radiation and chemotherapy), fewer than half of patients will survive longer than 15 months. Only 5 percent of patients survive longer than five years.

Researchers have explored immune checkpoint inhibitors as an avenue for boosting glioblastoma survival rates. This type of immunotherapy, which has proven effective against a range of tumor types, turns off a molecular switch that prevents T cells from attacking cancer cells. The patient’s own immune system is then able to clear the tumor. 

However, glioblastoma is unusually resistant to attack by T cells, rendering immune checkpoint inhibitors ineffective. The culprits are a different type of immune cell, macrophages, which are recruited to tumors, where they support tumor growth while suppressing the ability of T cells to infiltrate and attack tumors.

A team of researchers led by Forest White at the MIT Koch Institute for Integrative Cancer Research used sophisticated immune profiling tools to map out how macrophages evolve from a first-line defense against cancer and other pathogens into a shield that protects the glioblastoma tumor — as well as how the tumor cells themselves are transformed by the encounter.

“Looking at the co-evolution of both cell types is key,” says White, who is also the Ned C. (1949) and Janet C. (Bemis) Rice Professor in the Department of Biological Engineering. “It’s a little bit like what happens when a new family moves into a neighborhood: The family members’ lives change, but so do the social dynamics of the people around them. Whether you’re mixing people or cells, you won’t be able to predict how they will interact, even if you know both well.”

“By looking at what happens when macrophages move into the tumor, we can observe changes to both types of cells that we wouldn’t otherwise be able to see,” says Yufei Cui, a PhD candidate in the White Laboratory. “We were able to identify new targets for both glioblastoma and macrophages that could be used to develop therapies that, when delivered in combination with immune checkpoint inhibitors, more effectively treat glioblastoma.”

The study, appearing recently in Cancer Research, includes Stefani Spranger, associate professor of biology and member of the MIT Koch Institute, and Darrell Irvine, former member of the Koch Institute and now professor at the Scripps Research Institute.

As in other cancers, macrophages play a pivotal role in glioblastoma development and resistance to immune therapies. In laboratory models, inhibiting the activity of tumor-associated macrophages has been found to slow glioblastoma growth, but that success has not translated to studies of human patients. While the overall strategy of targeting glioblastoma-associated macrophages is promising, new targets — derived from models that more accurately reproduce the cell interactions in patient tumors — need to be identified.

One approach to discovering such targets is a specialty of the White lab: profiling cells’ immunopeptidomes — the repertoires of antigens presented on the surfaces of cancer cells, macrophages, and many other types of cells. Surface-presented antigens are a window into the internal state of the cell: The antigens derive from proteins produced as the cell carries out different functions and responds to its environment. By binding to surface antigens, T cells and other immune cells can monitor cells for dysfunction and respond to them.

The White lab has developed sophisticated methods for immunopeptidome profiling, combining methods such as liquid chromatography and mass spectrometry to isolate cell surface antigens — in this case, from glioblastoma and macrophage cells cultured in isolation and together — and quantifying changes in expression over time. The researchers identified over 800 peptides in macrophages that either increased or decreased in expression when cultured with glioblastoma cells. Peptides with the biggest gains in expression under co-cultivation derived from 33 source proteins, mostly related to cytokine signaling that promotes tumor aggression and suppresses immune response to tumors.

Antigen presentation on glioblastoma cells was also transformed by interactions with macrophages. These antigens were associated with Rho GTPase, a signaling protein that belongs to the Ras superfamily, a class of proteins that is mutated in 30 percent of all cancers. Changes in Rho GTPase expression predispose cells to developing hallmark traits of cancer, such as prolonged cell longevity, abnormal growth, and metastasis. Antigen profiles of co-cultured glioblastoma cells revealed over 40 Rho GTPase-associated antigens with increased expression compared to tumor cells cultured in isolation.

Researchers compared antigen expression changes in co-cultured macrophage and glioblastoma cells to immunopeptidome profiles of mouse models and human tumor samples, finding that patterns observed in cell culture translated to animal models and, potentially, to patients.

The researchers selected six antigens showing increased expression in either glioblastoma cells or macrophages to test as therapeutic targets, developing an mRNA-based immunostimulatory therapy for each antigen. When mice with glioblastoma were treated, their tumors grew significantly more slowly overall and, in a few cases, were completely eradicated.

In future work, the team plans to use their immunopeptidome profiling techniques to characterize co-cultured dendritic cells, which retrieve proteins from cancer cells and present them to T cells as antigens, as well as to explore antigen presentation of cells in live models of glioblastoma.

“This study demonstrates the promise of profiling cell surface antigens,” says Cui. “With quantitative accuracy and cell type resolution, our approach could be used to design improved immunotherapies against many cancer types and other diseases.”

This work was supported, in part, by the National Cancer Institute (NCI) and the MIT Center for Precision Cancer Medicine. 
