Feed aggregator
Temporary carbon dioxide removals to offset methane emissions
Nature Climate Change, Published online: 05 December 2025; doi:10.1038/s41558-025-02487-8
Methane emissions have a large short-term impact on temperature, which can potentially be offset by nature-based solutions that provide temporary carbon storage. This research demonstrates that such matching could minimize intertemporal welfare trade-offs and avoid various risks of permanent removal.
EU's New Digital Package Proposal Promises Red Tape Cuts but Guts GDPR Privacy Rights
The European Commission (EC) is considering a “Digital Omnibus” package that would substantially rewrite EU privacy law, particularly the landmark General Data Protection Regulation (GDPR). It’s not a done deal, and it shouldn’t be.
The GDPR is the most comprehensive model for privacy legislation around the world. While it is far from perfect and suffers from uneven enforcement, complexities and certain administrative burdens, the omnibus package is full of bad and confusing ideas that, on balance, will significantly weaken privacy protections for users in the name of cutting red tape.
It contains at least one good idea: improving consent rules so users can automatically set consent preferences that will apply across all sites. But much as we love limiting cookie fatigue, it’s not worth the price users will pay if the rest of the proposal is adopted. The EC needs to go back to the drawing board if it wants to achieve the goal of simplifying EU regulations without gutting user privacy.
Let’s break it down.
Changing What Constitutes Personal Data
The digital package is part of a larger Simplification Agenda to reduce compliance costs and administrative burdens for businesses, echoing the Draghi Report’s call to boost productivity and support innovation. Businesses have been complaining about GDPR red tape since its inception, and new rules are supposed to make compliance easier and turbocharge the development of AI in the EU. Simplification is framed as a precondition for firms to scale up in the EU, ironically targeting laws that were also argued to promote innovation in Europe. It might also stave off tariffs the U.S. has threatened to levy, thanks in part to heavy lobbying from Meta and tech lobbying groups.
The most striking proposal seeks to narrow the definition of personal data, the very basis of the GDPR. Today, information counts as personal data if someone can reasonably identify a person from it, whether directly or by combining it with other information.
The proposal jettisons this relatively simple test in favor of a variable one: whether data is “personal” depends on what a specific entity says it can reasonably do or is likely to do with it. This selectively restates part of a recent ruling by the EU Court of Justice but ignores the multiple other cases that have considered the issue.
This structural move toward entity-specific standards will create massive legal and practical confusion, as the same data could be treated as personal for some actors but not for others. It also creates a path for companies to avoid established GDPR obligations via operational restructuring to separate identifiers from other information—a change in paperwork rather than in actual identifiability. What’s more, it will be up to the Commission, a political executive body, to define what counts as unidentifiable pseudonymized data for certain entities.
Privileging AI
In the name of facilitating AI innovation, which often relies on large datasets in which sensitive data may residually appear, the digital package treats AI development as a “legitimate interest,” which gives AI companies a broad legal basis to process personal data, unless individuals actively object. The proposals gesture towards organisational and technical safeguards but leave companies broad discretion.
Another amendment would create a new exemption that allows even sensitive personal data to be used for AI systems under some circumstances. This is not a blanket permission: “organisational and technical measures” must be taken to avoid collecting or processing such data, and proportionate efforts must be taken to remove them from AI models or training sets where they appear. However, it is unclear what will count as an appropriate or proportionate measure.
Taken together with the new personal data test, these AI privileges mean that core data protection rights, which are meant to apply uniformly, are likely to vary in practice depending on a company’s technological and commercial goals.
And it means that AI systems may be allowed to process sensitive data even though non-AI systems that could pose equal or lower risks are not allowed to handle it.
A Broad Reform Beyond the GDPR
There are additional adjustments, many of them troubling, such as changes to rules on automated decision-making (making it easier for companies to claim it’s needed for a service or contract), reduced transparency requirements (less explanation about how users’ data are used), and revised data access rights (supposed to tackle abusive requests). The NGO noyb has published an extensive analysis of the proposals.
Moreover, the digital package reaches well beyond the GDPR, aiming to streamline Europe’s digital regulatory rulebook, including the e-Privacy Directive, cybersecurity rules, the AI Act and the Data Act. The Commission also launched “reality checks” of other core legislation, which suggests it is eyeing other mandates.
Browser Signals and Cookie Fatigue
There is one proposal in the Digital Omnibus that actually could simplify something important to users: requiring online interfaces to respect automated consent signals, allowing users to automatically reject consent across all websites instead of clicking through cookie popups on each. Cookie popups are often designed with “dark patterns” that make rejecting data sharing harder than accepting it. Automated signals can address cookie banner fatigue and make it easier for people to exercise their privacy rights.
While this proposal is a step forward, the devil is in the details: First, the exact format of the automated consent signal will be determined by technical standards organizations where Big Tech companies have historically lobbied for standards that work in their favor. The amendments should therefore define minimum protections that cannot be weakened later.
Second, the provision takes the important step of requiring web browsers to make it easy for users to send this automated consent signal, so they can opt out without installing a browser add-on.
However, mobile operating systems are excluded from this latter requirement, which is a significant oversight. People deserve the same privacy rights on websites and mobile apps.
Finally, exempting media service providers altogether creates a loophole that lets them keep using tedious or deceptive banners to get consent for data sharing. A media service’s harvesting of user information on its website to track its customers is distinct from news gathering, which should be protected.
A Muddled Legal Landscape
The Commission’s use of the "Omnibus" process is meant to streamline lawmaking by bundling multiple changes. An earlier proposal kept the GDPR intact, focusing on easing the record-keeping obligation for smaller businesses—a far less contentious measure. The new digital package instead moves forward with thinner evidence than a substantive structural reform would require, violating basic Better Regulation principles, such as coherence and proportionality.
The result is the opposite of “simple.” The proposed delay of the high-risk requirements under the AI Act to late 2027—part of the omnibus package—illustrates this: Businesses will face a muddled legal landscape as they must comply with rules that may soon be paused and later revived. This sounds like “complification” rather than simplification.
The Digital Package Is Not a Done Deal
Evaluating existing legislation is part of a sensible legislative cycle, and clarifying and simplifying complex processes and practices is not a bad idea. Unfortunately, the digital package misses the mark by making processes even more complex, at the expense of personal data protection.
Simplification doesn't require tossing out digital rights. The EC should keep that in mind as it launches its reality check of core legislation such as the Digital Services Act and Digital Markets Act, where tidying up can too easily drift into Verschlimmbesserung: the kind of well-meant fix that ends up resembling the infamous Ecce Homo restoration.
Alternate proteins from the same gene contribute differently to health and rare disease
Around 25 million Americans have rare genetic diseases, and many of them struggle with not only a lack of effective treatments, but also a lack of good information about their disease. Clinicians may not know what causes a patient’s symptoms, know how their disease will progress, or even have a clear diagnosis. Researchers have looked to the human genome for answers, and many disease-causing genetic mutations have been identified, but as many as 70 percent of patients still lack a clear genetic explanation.
In a paper published in Molecular Cell on Nov. 7, Whitehead Institute for Biomedical Research member Iain Cheeseman, graduate student Jimmy Ly, and colleagues propose that researchers and clinicians may be able to get more information from patients’ genomes by looking at them in a different way.
The common wisdom is that each gene codes for one protein. Someone studying whether a patient has a mutation or version of a gene that contributes to their disease will therefore look for mutations that affect the “known” protein product of that gene. However, Cheeseman and others are finding that the majority of genes code for more than one protein. That means that a mutation that might seem insignificant because it does not appear to affect the known protein could nonetheless alter a different protein made by the same gene. Now, Cheeseman and Ly have shown that mutations affecting one or multiple proteins from the same gene can contribute differently to disease.
In their paper, the researchers first share what they have learned about how cells make use of the ability to generate different versions of proteins from the same gene. Then, they examine how mutations that affect these proteins contribute to disease. Through a collaboration with co-author Mark Fleming, the pathologist-in-chief at Boston Children’s Hospital, they provide two case studies of patients with atypical presentations of a rare anemia linked to mutations that selectively affect only one of two proteins produced by the gene implicated in the disease.
“We hope this work demonstrates the importance of considering whether a gene of interest makes multiple versions of a protein, and what the role of each version is in health and disease,” Ly says. “This information could lead to better understanding of the biology of disease, better diagnostics, and perhaps one day to tailored therapies to treat these diseases.”
Cells have several ways to make different versions of a protein, but the variation that Cheeseman and Ly study happens during protein production from genetic code. Cellular machines build each protein according to the instructions within a genetic sequence that begins at a “start codon” and ends at a “stop codon.” However, some genetic sequences contain more than one start codon, many of them hiding in plain sight. If the cellular machinery skips the first start codon and detects a second one, it may build a shorter version of the protein. In other cases, the machinery may detect a section that closely resembles a start codon at a point earlier in the sequence than its typical starting place, and build a longer version of the protein.
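To make the mechanism concrete, here is a minimal, illustrative Python sketch (not the authors' analysis code) that scans a toy coding sequence for in-frame ATG start codons and prints the protein each one would produce; the sequence and the partial codon table are invented for the example.

```python
# Illustrative sketch only: find alternative in-frame start codons in a toy
# coding sequence and show the shorter protein each one would produce.

CODON_TABLE = {
    "ATG": "M", "AAA": "K", "GGC": "G", "TCT": "S", "GAT": "D",
    "GAA": "E", "TTT": "F", "CTG": "L", "TAA": "*", "TAG": "*", "TGA": "*",
}  # partial table, enough for the toy sequence below

def translate(seq: str) -> str:
    """Translate an in-frame DNA sequence until the first stop codon."""
    protein = []
    for i in range(0, len(seq) - 2, 3):
        aa = CODON_TABLE.get(seq[i:i + 3], "X")  # 'X' marks codons not in the toy table
        if aa == "*":
            break
        protein.append(aa)
    return "".join(protein)

# Hypothetical coding sequence containing two in-frame ATG start codons.
cds = "ATGAAAGGCTCTATGGATGAATTTCTGTAA"

# Every ATG in the same reading frame as the first one is a candidate
# alternative start; initiating there yields an N-terminally truncated protein.
starts = [i for i in range(0, len(cds) - 2, 3) if cds[i:i + 3] == "ATG"]
for s in starts:
    print(f"start at nucleotide {s:>2}: {translate(cds[s:])}")
```

Running this prints the full-length product from the first ATG and a shorter variant from the internal ATG. Initiation at near-cognate codons upstream of the annotated start, which produces the longer variants described above, could be flagged with the same kind of scan extended to codons such as CTG.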
These events may sound like mistakes: the cell’s machinery accidentally creating the wrong version of the correct protein. To the contrary, protein production from these alternate starting places is an important feature of cell biology that exists across species. When Ly traced when certain genes evolved to produce multiple proteins, he found that this is a common, robust process that has been preserved throughout evolutionary history for millions of years.
Ly shows that one function this serves is to send versions of a protein to different parts of the cell. Many proteins contain ZIP code-like sequences that tell the cell’s machinery where to deliver them so the proteins can do their jobs. Ly found many examples in which longer and shorter versions of the same protein contained different ZIP codes and ended up in different places within the cell.
In particular, Ly found many cases in which one version of a protein ended up in mitochondria, structures that provide energy to cells, while another version ended up elsewhere. Because of the mitochondria’s role in the essential process of energy production, mutations to mitochondrial genes are often implicated in disease.
Ly wondered what would happen when a disease-causing mutation eliminates one version of a protein but leaves the other intact, causing the protein to only reach one of its two intended destinations. He looked through a database containing genetic information from people with rare diseases to see if such cases existed, and found that they did. In fact, there may be tens of thousands of such cases. However, without access to the people, Ly had no way of knowing what the consequences of this were in terms of symptoms and severity of disease.
Meanwhile, Cheeseman, who is also a professor of biology at MIT, had begun working with Boston Children’s Hospital to foster collaborations between Whitehead Institute and the hospital’s researchers and clinicians to accelerate the pathway from research discovery to clinical application. Through these efforts, Cheeseman and Ly met Fleming.
One group of Fleming’s patients has a type of anemia called SIFD — sideroblastic anemia with B-cell immunodeficiency, periodic fevers, and developmental delay — that is caused by mutations to the TRNT1 gene. TRNT1 is one of the genes Ly had identified as producing a mitochondrial version of its protein and another version that ends up elsewhere: in the nucleus.
Fleming shared anonymized patient data with Ly, and Ly found two cases of interest in the genetic data. Most of the patients had mutations that impaired both versions of the protein, but one patient had a mutation that eliminated only the mitochondrial version of the protein, while another patient had a mutation that eliminated only the nuclear version.
When Ly shared his results, Fleming revealed that both of those patients had very atypical presentations of SIFD, supporting Ly’s hypothesis that mutations affecting different versions of a protein would have different consequences. The patient who only had the mitochondrial version was anemic, but developmentally normal. The patient missing the mitochondrial version of the protein did not have developmental delays or chronic anemia, but did have other immune symptoms, and was not correctly diagnosed until his 50s. There are likely other factors contributing to each patient’s exact presentation of the disease, but Ly’s work begins to unravel the mystery of their atypical symptoms.
Cheeseman and Ly want to make more clinicians aware of the prevalence of genes coding for more than one protein, so they know to check for mutations affecting any of the protein versions that could contribute to disease. For example, several TRNT1 mutations that only eliminate the shorter version of the protein are not flagged as disease-causing by current assessment tools. Cheeseman lab researchers, including Ly and graduate student Matteo Di Bernardo, are now developing a new assessment tool for clinicians, called SwissIsoform, that will identify relevant mutations that affect specific protein versions, including mutations that would otherwise be missed.
“Jimmy and Iain’s work will globally support genetic disease variant interpretation and help with connecting genetic differences to variation in disease symptoms,” Fleming says. “In fact, we have recently identified two other patients with mutations affecting only the mitochondrial versions of two other proteins, who similarly have milder symptoms than patients with mutations that affect both versions.”
Long term, the researchers hope that their discoveries could aid in understanding the molecular basis of disease and in developing new gene therapies: Once researchers understand what has gone wrong within a cell to cause disease, they are better equipped to devise a solution. More immediately, the researchers hope that their work will make a difference by providing better information to clinicians and people with rare diseases.
“As a basic researcher who doesn’t typically interact with patients, there’s something very satisfying about knowing that the work you are doing is helping specific people,” Cheeseman says. “As my lab transitions to this new focus, I’ve heard many stories from people trying to navigate a rare disease and just get answers, and that has been really motivating to us, as we work to provide new insights into the disease biology.”
MIT School of Engineering faculty and staff receive awards in summer 2025
Each year, faculty and researchers across the MIT School of Engineering are recognized with prestigious awards for their contributions to research, technology, society, and education. To celebrate these achievements, the school periodically highlights select honors received by members of its departments, institutes, labs, and centers. The following individuals were recognized in summer 2025:
Iwnetim Abate, the Chipman Career Development Professor and assistant professor in the Department of Materials Science and Engineering, was honored as one of MIT Technology Review’s 2025 Innovators Under 35. He was recognized for his research on sodium-ion batteries and ammonia production.
Daniel G. Anderson, the Joseph R. Mares (1924) Professor in the Department of Chemical Engineering and the Institute of Medical Engineering and Science (IMES), received the 2025 AIChE James E. Bailey Award. The award honors outstanding contributions in biological engineering and commemorates the pioneering work of James Bailey.
Regina Barzilay, the School of Engineering Distinguished Professor for AI and Health in the Department of Electrical Engineering and Computer Science (EECS), was named to Time’s AI100 2025 list, recognizing her groundbreaking work in AI and health.
Richard D. Braatz, the Edwin R. Gilliland Professor in the Department of Chemical Engineering, received the 2025 AIChE CAST Distinguished Service Award. The award recognizes exceptional service and leadership within the Computing and Systems Technology Division of AIChE.
Rodney Brooks, the Panasonic Professor of Robotics, Emeritus in the Department of Electrical Engineering and Computer Science, was elected to the National Academy of Sciences, one of the highest honors in scientific research.
Arup K. Chakraborty, the John M. Deutch (1961) Institute Professor in the Department of Chemical Engineering and IMES, received the 2025 AIChE Alpha Chi Sigma Award. This award honors outstanding accomplishments in chemical engineering research over the past decade.
Connor W. Coley, the Class of 1957 Career Development Professor and associate professor in the departments of Chemical Engineering and EECS, received the 2025 AIChE CoMSEF Young Investigator Award for Modeling and Simulation. The award recognizes outstanding research in computational molecular science and engineering. Coley was also one of 74 highly accomplished, early-career engineers selected to participate in the Grainger Foundation Frontiers of Engineering Symposium, a signature activity of the National Academy of Engineering.
Henry Corrigan-Gibbs, the Douglas Ross (1954) Career Development Professor of Software Technology and associate professor in the Department of EECS, received the Google ML and Systems Junior Faculty Award, presented to assistant professors who are leading the analysis, design and implementation of efficient, scalable, secure, and trustworthy computing systems.
Christina Delimitrou, the KDD Career Development Professor in Communications and Technology and associate professor in the Department of EECS, received the Google ML and Systems Junior Faculty Award. The award supports assistant professors advancing scalable and trustworthy computing systems for machine learning and cloud computing.
Priya Donti, the Silverman (1968) Family Career Development Professor and assistant professor in the Department of EECS, was named to Time’s AI100 2025 list, which honors innovators reshaping the world through artificial intelligence.
Joel Emer, a professor of the practice in the Department of EECS, received the Alan D. Berenbaum Distinguished Service Award from ACM SIGARCH. He was honored for decades of mentoring and leadership in the computer architecture community.
Roger Greenwood Mark, the Distinguished Professor of Health Sciences and Technology, Emeritus in IMES, received the IEEE Biomedical Engineering Award for leadership in ECG signal processing and global dissemination of curated biomedical and clinical databases, thereby accelerating biomedical research worldwide.
Ali Jadbabaie, the JR East Professor and head of the Department of Civil and Environmental Engineering, received the 2025 Multidisciplinary University Research Initiative (MURI) award for research projects in areas of critical importance to national defense.
Yoon Kim, associate professor in the Department of EECS, received the Google ML and Systems Junior Faculty Award, presented to assistant professors who are leading the analysis, design, and implementation of efficient, scalable, secure, and trustworthy computing systems.
Mathias Kolle, an associate professor in the Department of Mechanical Engineering, received the 2025 Multidisciplinary University Research Initiative (MURI) award for research projects in areas of critical importance to national defense.
Muriel Médard, the NEC Professor of Software Science and Engineering in the Department of EECS, was elected an International Fellow of the United Kingdom's Royal Academy of Engineering. The honor recognizes exceptional contributions to engineering and technology across sectors.
Pablo Parrilo, the Joseph F. and Nancy P. Keithley Professor in Electrical Engineering in the Department of EECS, received the 2025 INFORMS Computing Society Prize. The award honors outstanding contributions at the interface of computing and operations research. Parrilo was recognized for pioneering work on accelerating gradient descent through stepsize hedging, introducing concepts such as Silver Stepsizes and recursive gluing.
Nidhi Seethapathi, the Frederick A. (1971) and Carole J. Middleton Career Development Professor of Neuroscience and assistant professor in the Department of EECS, was named to MIT Technology Review’s “2025 Innovators Under 35” list. The honor celebrates early-career scientists and entrepreneurs driving real-world impact.
Justin Solomon, an associate professor in the Department of EECS, was named a 2025 Schmidt Science Polymath. The award supports novel, early-stage research across disciplines, including acoustics and climate simulation.
Martin Staadecker, a research assistant in the Sustainable Supply Chain Lab, received the MIT-GE Vernova Energy and Climate Alliance Technology and Policy Program Project Award. The award recognizes his work on Scope 3 emissions and sustainable supply chain practices.
Antonio Torralba, the Delta Electronics Professor and faculty head of AI+D in the Department of EECS, received the 2025 Multidisciplinary University Research Initiative (MURI) award for research projects in areas of critical importance to national defense.
Ryan Williams, a professor in the Department of EECS, received the Best Paper Award at STOC 2025 for his paper “Simulating Time With Square-Root Space,” recognized for its technical merit and originality. Williams was also selected as a Member of the Institute for Advanced Study for the 2025–26 academic year. The prestigious fellowship recognizes the significance of a scholar's work and offers an opportunity to advance their research and exchange ideas with scholars from around the world.
Gioele Zardini, the Rudge (1948) and Nancy Allen Career Development Professor in the Department of Civil and Environmental Engineering, received the 2025 DARPA Young Faculty Award. The award supports rising stars among early-career faculty, helping them develop research ideas aligned with national security needs.
Revisiting a revolution through poetry
There are several narratives surrounding the American Revolution, a well-traveled and -documented series of events leading to the drafting and signing of the Declaration of Independence and the war that followed.
MIT philosopher Brad Skow is taking a new approach to telling this story: a collection of 47 poems about the former American colonies’ journey from England’s imposition of the Stamp Act in 1765 to the war for America’s independence that began in 1775.
When asked why he chose poetry to retell the story, Skow, the Laurence S. Rockefeller Professor in the Department of Linguistics and Philosophy, said he “wanted to take just the great bits of these speeches and writings, while maintaining their intent and integrity.” Poetry, Skow argues, allows for that kind of nuance and specificity.
“American Independence in Verse,” published by Pentameter Press, traces a story of America’s origins through a collection of vignettes featuring some well-known characters, like politician and orator Patrick Henry, alongside some lesser-known but no less important ones, like royalist and former chief justice of North Carolina Martin Howard. Each is rendered in blank verse, a nursery-style rhyme, or free verse.
The book is divided into three segments: “Taxation Without Representation,” “Occupation and Massacre,” and “War and Independence.” Themes like freedom, government, and authority, rendered in a style of writing and oratory seldom seen today, lent themselves to being reimagined as poems. “The options available with poetic license offer opportunities for readers that might prove more difficult with prose,” Skow reports.
Skow based each of the poems on actual speeches, letters, pamphlets, and other printed materials produced by people on both sides of the debate about independence. “While reviewing a variety of primary sources for the book, I began to see the poetry in them,” he says.
In the poem “Everywhere, the spirit of equality prevails,” during an “Interlude” between the “Occupation and Massacre” and “War and Independence” sections of the book, British commissioner of customs Henry Hulton, writing to Robert Nicholson in Liverpool, England, describes the America he experienced during a trip with his wife:
The spirit of equality prevails.
Regarding social differences, they’ve no
Notion of rank, and will show more respect
To one another than to those above them.
They’ll ask a thousand strange impertinent
Questions, sit down when they should wait at a table,
React with puzzlement when you do not
Invite your valet to come share your meal.
Here, Skow, using Hulton’s words, illustrates the tension between agreed-upon social conventions — remnants of the Old World — and the society being built in the New World that animates a portion of the disconnect leading both toward war. “These writings are really powerful, and poetry offers a way to convey that power,” Skow says.
The journey to the printed page
Skow’s interest in exploring the American Revolution came, in part, from watching the Tony Award-winning musical “Hamilton.” The book ends where the musical begins. “It led me to want to learn more,” he says of the show and his experience watching it. “Its focus on the Revolution made the era more exciting for me.”
While conducting research for another poetry project, Skow read an interview with American diplomat, inventor, and publisher Benjamin Franklin, conducted in the House of Commons in 1766. “There were lots of amazing poetic moments in the interview,” he says. Skow began reading additional pamphlets, letters, and other writings, disconnecting his work as a philosopher from the research that would yield the book.
“I wanted to remove my philosopher hat with this project,” he says. “Poetry can encourage ambiguity and, unlike philosophy, can focus on emotional and non-rational connections between ideas.”
Although eager to approach the work as a poet and author, rather than a philosopher, Skow discovered that more primary sources than he expected were themselves often philosophical treatises. “Early in the resistance movement there were sophisticated arguments, often printed in newspapers, that it was unjust to tax the colonies without granting them representation in Parliament,” he notes.
A series of new perspectives and lessons
Skow made some discoveries that further enhanced his passion for the project. “Samuel Adams is an important figure who isn’t as well-known as he should be,” he says. “I wanted to raise his profile.”
Skow also notes that American separatists used strong-arm tactics to “encourage” support for independence, and that prevailing narratives regarding America and its eventual separation from England are more complex and layered than we might believe. “There were arguments underway about legitimate forms of government and which kind of government was right,” he says, “and many Americans wanted to retain the existing relationship with England.”
Skow says the American Revolution is a useful benchmark when considering subsequent political movements, a notion he hopes readers will take away from the book. “The book is meant to be fun and not just a collection of dry, abstract ideas,” he believes.
“There’s a simple version of the independence story we tell when we’re in a hurry; and there is the more complex truth, printed in long history books,” he continues. “I wanted to write something that was both short and included a variety of perspectives.”
Skow believes the book and its subjects are a testament to ideas he’d like to see return to political and practical discourse. “The ideals around which this country rallied for its independence are still good ideals, and the courage the participants exhibited is still worth admiring,” he says.
Trump caps EV assault with fuel economy repeal
The US power market is getting messy. Here’s why.
Senators rally behind NASA's Earth science program
China’s emissions plateau amid clean energy boom
Flood insurance heavyweights push Congress for NFIP renewal
Backers of dueling insurance ballot measures to withdraw them
Researchers lower estimate of climate-related plunge in global income
Delaying EU’s new carbon price will cost Denmark’s budget $583M
EU won’t sign weak climate deals at COP in the future, Poland warns
What’s the best way to expand the US electricity grid?
Growing energy demand means the U.S. will almost certainly have to expand its electricity grid in coming years. What’s the best way to do this? A new study by MIT researchers examines legislation introduced in Congress and identifies relative tradeoffs involving reliability, cost, and emissions, depending on the proposed approach.
The researchers evaluated two policy approaches to expanding the U.S. electricity grid: One would concentrate on regions with more renewable energy sources, and the other would create more interconnections across the country. For instance, some of the best untapped wind-power resources in the U.S. lie in the center of the country, so one type of grid expansion would situate relatively more grid infrastructure in those regions. Alternatively, the other scenario involves building more infrastructure everywhere in roughly equal measure, which the researchers call the “prescriptive” approach. How does each pencil out?
After extensive modeling, the researchers found that a grid expansion could make improvements on all fronts, with each approach offering different advantages. A more geographically unbalanced grid buildout would be 1.13 percent less expensive, and would reduce carbon emissions by 3.65 percent compared to the prescriptive approach. And yet, the prescriptive approach, with more national interconnection, would significantly reduce power outages due to extreme weather, among other things.
“There’s a tradeoff between the two things that are most on policymakers’ minds: cost and reliability,” says Christopher Knittel, an economist at the MIT Sloan School of Management, who helped direct the research. “This study makes it more clear that the more prescriptive approach ends up being better in the face of extreme weather and outages.”
The paper, “Implications of Policy-Driven Transmission Expansion on Costs, Emissions and Reliability in the United States,” is published today in Nature Energy.
The authors are Juan Ramon L. Senga, a postdoc in the MIT Center for Energy and Environmental Policy Research; Audun Botterud, a principal research scientist in the MIT Laboratory for Information and Decision Systems; John E. Parsons, the deputy director for research at MIT’s Center for Energy and Environmental Policy Research; Drew Story, the managing director at MIT’s Policy Lab; and Knittel, who is the George P. Shultz Professor at MIT Sloan and associate dean for climate and sustainability at MIT.
The new study is a product of the MIT Climate Policy Center, housed within MIT Sloan and committed to bipartisan research on energy issues. The center is also part of the Climate Project at MIT, founded in 2024 as a high-level Institute effort to develop practical climate solutions.
In this case, the project was developed from work the researchers did with federal lawmakers who have introduced legislation aimed at bolstering and expanding the U.S. electric grid. One of these bills, the BIG WIRES Act, co-sponsored by Sen. John Hickenlooper of Colorado and Rep. Scott Peters of California, would require each transmission region in the U.S. to be able to send at least 30 percent of its peak load to other regions by 2035.
That would represent a substantial change for a national transmission scenario where grids have largely been developed regionally, without an enormous amount of national oversight.
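To make the 30 percent target concrete, here is a rough back-of-the-envelope sketch in Python; the regional peak loads and existing transfer capabilities below are invented for illustration and are not figures from the bill or the study.

```python
# Toy illustration of a BIG WIRES-style target: each region must be able to
# transfer at least 30 percent of its peak load to other regions by 2035.
# All numbers below are made up for the example.

TARGET_SHARE = 0.30

regions = {
    # region: (peak load in GW, existing interregional transfer capability in GW)
    "Region A": (80.0, 12.0),
    "Region B": (45.0, 16.0),
    "Region C": (30.0, 5.0),
}

for name, (peak_gw, existing_gw) in regions.items():
    required_gw = TARGET_SHARE * peak_gw
    shortfall_gw = max(0.0, required_gw - existing_gw)
    print(f"{name}: needs {required_gw:.1f} GW of transfer capability, "
          f"has {existing_gw:.1f} GW, shortfall {shortfall_gw:.1f} GW")
```

Under these made-up numbers, a region with an 80 GW peak load would need 24 GW of interregional transfer capability, so a region starting from 12 GW would have to double it.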
“The U.S. grid is aging and it needs an upgrade,” Senga says. “Implementing these kinds of policies is an important step for us to get to that future where we improve the grid, lower costs, lower emissions, and improve reliability. Some progress is better than none, and in this case, it would be important.”
To conduct the study, the researchers looked at how policies like the BIG WIRES Act would affect energy distribution. The scholars used GenX, a model of energy generation developed at the MIT Energy Initiative, and examined the changes proposed by the legislation.
With a 30 percent level of interregional connectivity, the study estimates, the number of outages due to extreme cold would drop by 39 percent, for instance, a substantial increase in reliability. That would help avoid scenarios such as the one Texas experienced in 2021, when winter storms damaged distribution capacity.
“Reliability is what we find to be most salient to policymakers,” Senga says.
On the other hand, as the paper details, a future grid that is “optimized” with more transmission capacity near geographic spots of new energy generation would be less expensive.
“On the cost side, this kind of optimized system looks better,” Senga says.
A more geographically imbalanced grid would also have a greater impact on reducing emissions. Globally, the levelized cost of solar and wind dropped by 89 percent and 69 percent, respectively, from 2010 to 2022, meaning that incorporating less-expensive renewables into the grid would help with both cost and emissions.
“On the emissions side, a priori it’s not clear the optimized system would do better, but it does,” Knittel says. “That’s probably tied to cost, in the sense that it’s building more transmission links to where the good, cheap renewable resources are, because they’re cheap. Emissions fall when you let the optimizing action take place.”
To be sure, these two differing approaches to grid expansion are not the only paths forward. The study also examines a hybrid approach, which involves both national interconnectivity requirements and local buildouts based around new power sources on top of that. Still, the model does show that there may be some tradeoffs lawmakers will want to consider when developing and considering future grid legislation.
“You can find a balance between these factors, where you’re still going to have an increase in reliability while also getting the cost and emission reductions,” Senga observes.
For his part, Knittel emphasizes that working with legislation as the basis for academic studies, while not generally common, can be productive for everyone involved. Scholars get to apply their research tools and models to real-world scenarios, and policymakers get a sophisticated evaluation of how their proposals would work.
“Compared to the typical academic path to publication, this is different, but at the Climate Policy Center, we’re already doing this kind of research,” Knittel says.
A smarter way for large language models to think about hard problems
To make large language models (LLMs) more accurate when answering harder questions, researchers can let the model spend more time thinking about potential solutions.
But common approaches that give LLMs this capability set a fixed computational budget for every problem, regardless of how complex it is. This means the LLM might waste computational resources on simpler questions or be unable to tackle intricate problems that require more reasoning.
To address this, MIT researchers developed a smarter way to allocate computational effort as the LLM solves a problem. Their method enables the model to dynamically adjust its computational budget based on the difficulty of the question and the likelihood that each partial solution will lead to the correct answer.
The researchers found that their new approach enabled LLMs to use as little as half the computation of existing methods, while achieving comparable accuracy on a range of questions with varying difficulties. In addition, their method allows smaller, less resource-intensive LLMs to perform as well as or even better than larger models on complex problems.
By improving the reliability and efficiency of LLMs, especially when they tackle complex reasoning tasks, this technique could reduce the energy consumption of generative AI systems and enable the use of LLMs in more high-stakes and time-sensitive applications.
“The computational cost of inference has quickly become a major bottleneck for frontier model providers, and they are actively trying to find ways to improve computational efficiency per user queries. For instance, the recent GPT-5.1 release highlights the efficacy of the ‘adaptive reasoning’ approach our paper proposes. By endowing the models with the ability to know what they don’t know, we can enable them to spend more compute on the hardest problems and most promising solution paths, and use far fewer tokens on easy ones. That makes reasoning both more reliable and far more efficient,” says Navid Azizan, the Alfred H. and Jean M. Hayes Career Development Assistant Professor in the Department of Mechanical Engineering and the Institute for Data, Systems, and Society (IDSS), a principal investigator of the Laboratory for Information and Decision Systems (LIDS), and the senior author of a paper on this technique.
Azizan is joined on the paper by lead author Young-Jin Park, a LIDS/MechE graduate student; Kristjan Greenewald, a research scientist in the MIT-IBM Watson AI Lab; Kaveh Alim, an IDSS graduate student; and Hao Wang, a research scientist at the MIT-IBM Watson AI Lab and the Red Hat AI Innovation Team. The research is being presented this week at the Conference on Neural Information Processing Systems.
Computation for contemplation
A recent approach called inference-time scaling lets a large language model take more time to reason about difficult problems.
Using inference-time scaling, the LLM might generate multiple solution attempts at once or explore different reasoning paths, then choose the best ones to pursue from those candidates.
A separate model, known as a process reward model (PRM), scores each potential solution or reasoning path. The LLM uses these scores to identify the most promising ones.
Typical inference-time scaling approaches assign a fixed amount of computation for the LLM to break the problem down and reason about the steps.
Instead, the researchers’ method, known as instance-adaptive scaling, dynamically adjusts the number of potential solutions or reasoning steps based on how likely they are to succeed, as the model wrestles with the problem.
“This is how humans solve problems. We come up with some partial solutions and then decide, should I go further with any of these, or stop and revise, or even go back to my previous step and continue solving the problem from there?” Wang explains.
To do this, the framework uses the PRM to estimate the difficulty of the question, helping the LLM assess how much computational budget to utilize for generating and reasoning about potential solutions.
At every step in the model’s reasoning process, the PRM looks at the question and partial answers and evaluates how promising each one is for getting to the right solution. If the LLM is more confident, it can reduce the number of potential solutions or reasoning trajectories to pursue, saving computational resources.
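In outline, the idea can be sketched as follows; this is schematic Python, not the authors' implementation, and `llm_extend` and `prm_score` are hypothetical stand-ins for a real language model and a real process reward model, with the width-adjustment rule invented purely for illustration.

```python
import random

def llm_extend(partial: str) -> str:
    """Stand-in for an LLM proposing the next reasoning step (hypothetical)."""
    return partial + f" -> step{random.randint(0, 9)}"

def prm_score(question: str, partial: str) -> float:
    """Stand-in for a PRM: estimated probability that this partial solution
    leads to a correct final answer (hypothetical)."""
    return random.random()

def instance_adaptive_search(question: str, max_width: int = 8, depth: int = 4) -> str:
    """Keep many candidates while the PRM is unsure, fewer once it is confident."""
    beams = [question]
    for _ in range(depth):
        # Propose a couple of continuations for each surviving candidate.
        candidates = [llm_extend(b) for b in beams for _ in range(2)]
        scored = sorted(((prm_score(question, c), c) for c in candidates), reverse=True)
        best_prob = scored[0][0]
        # Adaptive budget: a high estimated success probability narrows the search,
        # while a low probability (a harder instance) keeps it wide.
        width = max(1, min(max_width, round(max_width * (1.0 - best_prob))))
        beams = [c for _, c in scored[:width]]
    return beams[0]

print(instance_adaptive_search("What is 17 * 24?"))
```

The point of the sketch is only the control flow: the budget at each step is not fixed in advance but follows the PRM's running estimate of success.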
But the researchers found that existing PRMs often overestimate the model’s probability of success.
Overcoming overconfidence
“If we were to just trust current PRMs, which often overestimate the chance of success, our system would reduce the computational budget too aggressively. So we first had to find a way to better calibrate PRMs to make inference-time scaling more efficient and reliable,” Park says.
The researchers introduced a calibration method that enables PRMs to generate a range of probability scores rather than a single value. In this way, the PRM creates more reliable uncertainty estimates that better reflect the true probability of success.
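As a rough sense of what such a calibration step can look like, here is a generic histogram-binning sketch with a crude normal-approximation band; it is not the calibration method from the paper, and the held-out data at the bottom are made up.

```python
import math
from collections import defaultdict

def calibrate_prm(raw_scores, outcomes, n_bins=10):
    """Map raw PRM scores to (lower, point, upper) success-probability estimates.

    raw_scores: raw PRM scores in [0, 1] on a held-out set
    outcomes:   1 if that partial solution ultimately led to a correct answer
    """
    bins = defaultdict(list)
    for s, y in zip(raw_scores, outcomes):
        bins[min(int(s * n_bins), n_bins - 1)].append(y)

    table = {}
    for b, ys in bins.items():
        n = len(ys)
        p = sum(ys) / n                                   # empirical success rate
        half = 1.96 * math.sqrt(p * (1 - p) / n + 1e-12)  # crude uncertainty band
        table[b] = (max(0.0, p - half), p, min(1.0, p + half))

    def calibrated(score):
        b = min(int(score * n_bins), n_bins - 1)
        return table.get(b, (0.0, score, 1.0))  # unseen bin: fall back to a vacuous range

    return calibrated

# Made-up held-out data for an overconfident PRM: a raw score near 0.9
# corresponds to only about 50 percent empirical success here.
raw = [0.90, 0.92, 0.88, 0.91, 0.90, 0.30, 0.25, 0.35]
hit = [1,    0,    1,    0,    1,    0,    0,    1]
cal = calibrate_prm(raw, hit)
print(cal(0.9))  # a (lower, point, upper) range rather than a single number
```

The essential behavior is that the output is a range whose width shrinks as more held-out evidence accumulates, rather than a single overconfident number.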
With a well-calibrated PRM, their instance-adaptive scaling framework can use the probability scores to effectively reduce computation while maintaining the accuracy of the model’s outputs.
When they compared their method to standard inference-time scaling approaches on a series of mathematical reasoning tasks, it utilized less computation to solve each problem while achieving similar accuracy.
“The beauty of our approach is that this adaptation happens on the fly, as the problem is being solved, rather than happening all at once at the beginning of the process,” says Greenewald.
In the future, the researchers are interested in applying this technique to other applications, such as code generation and AI agents. They are also planning to explore additional uses for their PRM calibration method, like for reinforcement learning and fine-tuning.
“Human employees learn on the job — some CEOs even started as interns — but today’s agents remain largely static pieces of probabilistic software. Work like this paper is an important step toward changing that: helping agents understand what they don’t know and building mechanisms for continual self-improvement. These capabilities are essential if we want agents that can operate safely, adapt to new situations, and deliver consistent results at scale,” says Akash Srivastava, director and chief architect of Core AI at IBM Software, who was not involved with this work.
This work was funded, in part, by the MIT-IBM Watson AI Lab, the MIT-Amazon Science Hub, the MIT-Google Program for Computing Innovation, and MathWorks.
Axon Tests Face Recognition on Body-Worn Cameras
Axon Enterprise Inc. is working with a Canadian police department to test the addition of face recognition technology (FRT) to its body-worn cameras (BWCs). This is an alarming development in government surveillance that should put communities everywhere on alert.
As many as 50 officers from the Edmonton Police Service (EPS) will begin using these FRT-enabled BWCs today as part of a proof-of-concept experiment. EPS is the first police agency in the world to use these Axon devices, according to a report from the Edmonton Journal.
This kind of technology could give officers instant identification of any person that crosses their path. During the current trial period, the Edmonton officers will not be notified in the field of an individual’s identity but will review identifications generated by the BWCs later on.
“This Proof of Concept will test the technology’s ability to work with our database to make officers aware of individuals with safety flags and cautions from previous interactions,” as well as “individuals who have outstanding warrants for serious crime,” Edmonton Police described in a press release, suggesting that individuals will be placed on a watchlist of sorts.
FRT brings a rash of problems. It relies on extensive surveillance and the collection of images of individuals, law-abiding or otherwise. Misidentifications can have horrendous consequences for individuals, including prolonged and difficult fights to prove their innocence and unfair incarceration for crimes they never committed. In a world where police are using real-time face recognition, law-abiding individuals or those participating in legal, protected activity that police may find objectionable — like protest — could be quickly identified.
With the increasing connections being made between disparate data sources about nearly every person, BWCs enabled with FRT can easily connect a person minding their own business, who happens to come within view of a police officer, with a whole slew of other personal information.
Axon had previously claimed it would pause the addition of face recognition to its tools due to concerns raised in 2019 by the company’s AI and Policing Technology Ethics Board. However, since then, the company has continued to research and consider the addition of FRT to its products.
This BWC-FRT integration signals possible other FRT integrations in the future. Axon is building an entire arsenal of cameras and surveillance devices for law enforcement, and the company grows the reach of its police surveillance apparatus, in part, by leveraging relationships with its thousands of customers, including those using its flagship product, the Taser. This so-called “ecosystem” of surveillance technology includes the Fusus system, a platform for connecting surveillance cameras to facilitate real-time viewing of video footage. It also involves expanding the use of surveillance tools like BWCs and the flying cameras of “drone as first responder” (DFR) programs.
Face recognition undermines individual privacy, and it is too dangerous when deployed by police. Communities everywhere must move to protect themselves and safeguard their civil liberties, insisting on transparency, clear policies, public accountability, and audit mechanisms. Ideally, communities should ban police use of the technology altogether. At a minimum, police must not add FRT to BWCs.
After Years of Controversy, the EU’s Chat Control Nears Its Final Hurdle: What to Know
After a years-long battle over the European Commission’s “Chat Control” plan, which would mandate mass scanning and other encryption-breaking measures, the Council of the EU, which represents the EU member states, has at last agreed on a position. The good news is that the most controversial part, the forced requirement to scan encrypted messages, is out. The bad news is there’s more to it than that.
Chat Control has gone through several iterations since it was first introduced, with the EU Parliament backing a position that protects fundamental rights, while the Council of the EU spent many months pursuing an intrusive law-enforcement-focused approach. Many proposals earlier this year required the scanning and detection of illicit content on all services, including private messaging apps such as WhatsApp and Signal. This requirement would fundamentally break end-to-end encryption.
Thanks to the tireless efforts of digital rights groups, including European Digital Rights (EDRi), we won a significant improvement: the Council agreed on its position, which removed the requirement that forces providers to scan messages on their services. It also comes with strong language to protect encryption, which is good news for users.
But here comes the rub: first, the Council’s position allows for “voluntary” detection, where tech platforms can scan personal messages that aren’t end-to-end encrypted. Unlike in the U.S., where there is no comprehensive federal privacy law, voluntary scanning is not technically legal in the EU, though it’s been possible through a derogation set to expire in 2026. It is unclear how this will play out over time, though we are concerned that this approach to voluntary scanning will lead to private mass-scanning of non-encrypted services and might limit the sorts of secure communication and storage services big providers offer. With limited transparency and oversight, it will be difficult to know how services approach this sort of detection.
With mandatory detection orders off the table, the Council has embraced another worrying system to protect children online: risk mitigation. Providers will have to take “all reasonable mitigation measures” to reduce risks on their services. This includes age verification and age assessment measures. We have written about the perils of age verification schemes and recent developments in the EU, where regulators are increasingly focusing on age verification to reduce online harms.
If secure messaging platforms like Signal or WhatsApp are required to implement age verification methods, it would fundamentally reshape what it means to use these services privately. Encrypted communication tools should be available to everyone, everywhere, of all ages, freely and without the requirement to prove their identity. As age verification has started to creep in as a mandatory risk mitigation measure under the EU’s Digital Services Act in certain situations, it could become a de facto requirement under the Chat Control proposal if the wording is left broad enough for regulators to treat it as a baseline.
Likewise, the Council’s position lists “voluntary activities” as a potential risk mitigation measure. Pull the thread on this and you’re left with a contradictory stance, because an activity is no longer voluntary if it forms part of a formal risk management obligation. While courts might interpret its mention in a risk assessment as an optional measure available to providers that do not use encrypted communication channels, this reading is far from certain, and the current language will, at a minimum, nudge non-encrypted services to perform voluntary scanning if they don’t want to invest in alternative risk mitigation options. It’s largely up to the provider to choose how to mitigate risks, but it’s up to enforcers to decide what is effective. Again, we're concerned about how this will play out in practice.
For the same reason, clear and unambiguous language is needed to prevent authorities from taking a hostile view of what is meant by “allowing encryption” if that means then expecting service providers to implement client-side scanning. We welcome the clear assurance in the text that encryption cannot be weakened or bypassed, including through any requirement to grant access to protected data, but even greater clarity would come from an explicit statement that client-side scanning cannot coexist with encryption.
As we approach the final “trilogue” negotiations of this regulation, we urge EU lawmakers to work on a final text that fully protects users’ right to private communication and avoids intrusive age-verification mandates and risk benchmark systems that lead to surveillance in practice.
MIT engineers design an aerial microrobot that can fly as fast as a bumblebee
In the future, tiny flying robots could be deployed to aid in the search for survivors trapped beneath the rubble after a devastating earthquake. Like real insects, these robots could flit through tight spaces larger robots can’t reach, while simultaneously dodging stationary obstacles and pieces of falling rubble.
So far, aerial microrobots have only been able to fly slowly along smooth trajectories, far from the swift, agile flight of real insects — until now.
MIT researchers have demonstrated aerial microrobots that can fly with speed and agility that is comparable to their biological counterparts. A collaborative team designed a new AI-based controller for the robotic bug that enabled it to follow gymnastic flight paths, such as executing continuous body flips.
With a two-part control scheme that combines high performance with computational efficiency, the robot’s speed and acceleration increased by about 450 percent and 250 percent, respectively, compared to the researchers’ best previous demonstrations.
The speedy robot was agile enough to complete 10 consecutive somersaults in 11 seconds, even when wind disturbances threatened to push it off course.
“We want to be able to use these robots in scenarios that more traditional quadcopter robots would have trouble flying into, but that insects could navigate. Now, with our bioinspired control framework, the flight performance of our robot is comparable to insects in terms of speed, acceleration, and the pitching angle. This is quite an exciting step toward that future goal,” says Kevin Chen, an associate professor in the Department of Electrical Engineering and Computer Science (EECS), head of the Soft and Micro Robotics Laboratory within the Research Laboratory of Electronics (RLE), and co-senior author of a paper on the robot.
Chen is joined on the paper by co-lead authors Yi-Hsuan Hsiao, an EECS MIT graduate student; Andrea Tagliabue PhD ’24; and Owen Matteson, a graduate student in the Department of Aeronautics and Astronautics (AeroAstro); as well as EECS graduate student Suhan Kim; Tong Zhao MEng ’23; and co-senior author Jonathan P. How, the Ford Professor of Engineering in the Department of Aeronautics and Astronautics and a principal investigator in the Laboratory for Information and Decision Systems (LIDS). The research appears today in Science Advances.
An AI controller
Chen’s group has been building robotic insects for more than five years.
They recently developed a more durable version of their tiny robot, a microcassette-sized device that weighs less than a paperclip. The new version utilizes larger, flapping wings that enable more agile movements. They are powered by a set of squishy artificial muscles that flap the wings at an extremely fast rate.
But the controller — the “brain” of the robot that determines its position and tells it where to fly — was hand-tuned by a human, limiting the robot’s performance.
For the robot to fly quickly and aggressively like a real insect, it needed a more robust controller that could account for uncertainty and perform complex optimizations quickly.
Such a controller would be too computationally intensive to be deployed in real time, especially with the complicated aerodynamics of the lightweight robot.
To overcome this challenge, Chen’s group joined forces with How’s team and, together, they crafted a two-step, AI-driven control scheme that provides the robustness necessary for complex, rapid maneuvers, and the computational efficiency needed for real-time deployment.
“The hardware advances pushed the controller so there was more we could do on the software side, but at the same time, as the controller developed, there was more they could do with the hardware. As Kevin’s team demonstrates new capabilities, we demonstrate that we can utilize them,” How says.
For the first step, the team built what is known as a model-predictive controller. This type of powerful controller uses a dynamic, mathematical model to predict the behavior of the robot and plan the optimal series of actions to safely follow a trajectory.
While computationally intensive, it can plan challenging maneuvers like aerial somersaults, rapid turns, and aggressive body tilting. This high-performance planner is also designed to consider constraints on the force and torque the robot could apply, which is essential for avoiding collisions.
For instance, to perform multiple flips in a row, the robot would need to decelerate in such a way that its initial conditions are exactly right for doing the flip again.
“If small errors creep in, and you try to repeat that flip 10 times with those small errors, the robot will just crash. We need to have robust flight control,” How says.
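To make the receding-horizon idea concrete, here is a minimal sketch of a model-predictive control loop in Python. It uses a toy double-integrator model rather than the robot’s flapping-wing dynamics, and it omits the force and torque constraints the real planner enforces; the timestep, horizon length, weights, and reference trajectory are illustrative assumptions rather than values from the paper.

```python
# Minimal receding-horizon (model-predictive) control sketch on a toy
# 1-D double integrator. This is a stand-in for the robot's real dynamics;
# the paper's planner also enforces force/torque limits, omitted here.
import numpy as np

dt = 0.02                                  # control timestep (s), illustrative
A = np.array([[1.0, dt], [0.0, 1.0]])      # state: [position, velocity]
B = np.array([[0.5 * dt**2], [dt]])        # input: acceleration command
H = 20                                     # prediction horizon (steps)

def mpc_step(x, ref, q=100.0, r=0.01):
    """Plan H inputs that track `ref` (next H positions); return only the first."""
    # Batch prediction over the horizon: stacked states = F @ x + G @ U
    F = np.vstack([np.linalg.matrix_power(A, k + 1) for k in range(H)])
    G = np.zeros((2 * H, H))
    for i in range(H):
        for j in range(i + 1):
            G[2 * i:2 * i + 2, j:j + 1] = np.linalg.matrix_power(A, i - j) @ B
    # Select predicted positions, then solve the least-squares tracking problem.
    C = np.zeros((H, 2 * H))
    C[np.arange(H), 2 * np.arange(H)] = 1.0
    M = C @ G
    rhs = ref - C @ (F @ x)
    U = np.linalg.solve(q * M.T @ M + r * np.eye(H), q * M.T @ rhs)
    return float(U[0])                     # receding horizon: apply first input only

# Track a slow sinusoidal position reference, replanning at every step.
x = np.array([0.0, 0.0])
for k in range(200):
    ref = np.sin(0.05 * (k + 1 + np.arange(H)))   # next H reference positions
    u = mpc_step(x, ref)
    x = A @ x + B[:, 0] * u                       # simulate one step forward
print("final tracking error (m):", abs(x[0] - np.sin(0.05 * 200)))
```

The pattern mirrors what the researchers describe: predict a short horizon of states, optimize the whole control sequence, apply only the first command, and then replan.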
For the second step, they use this expert planner to train a deep-learning “policy” that controls the robot in real time, through a process called imitation learning. A policy is the robot’s decision-making engine, telling the robot where and how to fly.
Essentially, the imitation-learning process compresses the powerful controller into a computationally efficient AI model that can run very fast.
The key was having a smart way to create just enough training data, which would teach the policy everything it needs to know for aggressive maneuvers.
“The robust training method is the secret sauce of this technique,” How explains.
The AI-driven policy takes the robot’s position as input and, in real time, outputs control commands such as thrust force and torques.
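As a rough sketch of that distillation, the Python example below clones a stand-in expert controller into a small neural network that can be evaluated in a single, cheap forward pass. The expert function, state representation, network size, and training data here are placeholders, and the team’s robust data-generation method, which How describes as the key ingredient, is not reproduced.

```python
# Behavior-cloning sketch: distill an expensive expert planner into a small
# policy network mapping state -> control command. The "expert" below is a
# hand-written stand-in, not the paper's model-predictive planner.
import numpy as np
import torch
import torch.nn as nn

rng = np.random.default_rng(0)

def expert_action(state):
    # Stand-in for the expensive planner: a simple stabilizing feedback law
    # driving position and velocity errors to zero.
    pos_err, vel_err = state
    return -8.0 * pos_err - 2.0 * vel_err

# Collect (state, expert command) pairs by querying the expert on sampled states.
X = rng.uniform(-1.0, 1.0, size=(5000, 2)).astype(np.float32)
Y = np.array([[expert_action(s)] for s in X], dtype=np.float32)
X_t, Y_t = torch.from_numpy(X), torch.from_numpy(Y)

# Small policy network: cheap enough to run at the control rate.
policy = nn.Sequential(nn.Linear(2, 32), nn.ReLU(),
                       nn.Linear(32, 32), nn.ReLU(),
                       nn.Linear(32, 1))
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
for epoch in range(2000):
    opt.zero_grad()
    loss = nn.functional.mse_loss(policy(X_t), Y_t)   # imitate the expert's commands
    loss.backward()
    opt.step()

# At run time the policy replaces the planner: state in, command out.
print(policy(torch.tensor([[0.3, -0.1]])).item())
```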
Insect-like performance
In their experiments, this two-step approach enabled the insect-scale robot to fly 447 percent faster while exhibiting a 255 percent increase in acceleration. The robot completed 10 somersaults in 11 seconds and never strayed more than 4 or 5 centimeters from its planned trajectory.
“This work demonstrates that soft and microrobots, traditionally limited in speed, can now leverage advanced control algorithms to achieve agility approaching that of natural insects and larger robots, opening up new opportunities for multimodal locomotion,” says Hsiao.
The researchers were also able to demonstrate saccade movement, which occurs when insects pitch very aggressively, fly rapidly to a certain position, and then pitch the other way to stop. This rapid acceleration and deceleration help insects localize themselves and see clearly.
“This bio-mimicking flight behavior could help us in the future when we start putting cameras and sensors on board the robot,” Chen says.
Adding sensors and cameras so the microrobots can fly outdoors, without being attached to a complex motion capture system, will be a major area of future work.
The researchers also want to study how onboard sensors could help the robots avoid colliding with one another or coordinate navigation.
“For the micro-robotics community, I hope this paper signals a paradigm shift by showing that we can develop a new control architecture that is high-performing and efficient at the same time,” says Chen.
“This work is especially impressive because these robots still perform precise flips and fast turns despite the large uncertainties that come from relatively large fabrication tolerances in small-scale manufacturing, wind gusts of more than 1 meter per second, and even its power tether wrapping around the robot as it performs repeated flips,” says Sarah Bergbreiter, a professor of mechanical engineering at Carnegie Mellon University, who was not involved with this work.
“Although the controller currently runs on an external computer rather than onboard the robot, the authors demonstrate that similar, but less precise, control policies may be feasible even with the more limited computation available on an insect-scale robot. This is exciting because it points toward future insect-scale robots with agility approaching that of their biological counterparts,” she adds.
This research is funded, in part, by the National Science Foundation (NSF), the Office of Naval Research, Air Force Office of Scientific Research, MathWorks, and the Zakhartchenko Fellowship.
Staying stable
With every step we take, our brains are already thinking about the next one. If a bump in the terrain or a minor misstep has thrown us off balance, our stride may need to be altered to prevent a fall. Our two-legged posture makes maintaining stability a particularly complex problem, one our brains solve in part by continually monitoring our bodies and adjusting where we place our feet.
Now, scientists at MIT have determined that animals with very different bodies likely use a shared strategy to balance themselves when they walk.
Nidhi Seethapathi, the Frederick A. and Carole J. Middleton Career Development Assistant Professor in Brain and Cognitive Sciences and Electrical Engineering and Computer Science at MIT, and K. Lisa Yang ICoN Center Fellow Antoine De Comite found that humans, mice, and fruit flies all use an error-correction process to guide foot placement and maintain stability while walking. Their findings, published Oct. 21 in the journal PNAS, could inform future studies exploring how the brain achieves stability during locomotion — bridging the gap between animal models and human balance.
Corrective action
To keep us upright when we walk or run, the brain must integrate information and continually adjust our steps according to the terrain, our desired speed, and our body’s current velocity and position in space.
“We rely on a combination of vestibular, proprioceptive, and visual information to build an estimate of our body’s state, determining if we are about to fall. Once we know the body’s state, we can decide which corrective actions to take,” explains Seethapathi, who is also an associate investigator at the McGovern Institute for Brain Research.
While humans are known to adjust where they place their feet to correct for errors, it is not known whether animals whose bodies are more stable do this, too.
To find out, Seethapathi and De Comite, a postdoc in Seethapathi’s and Guoping Feng’s labs at the McGovern Institute, turned to locomotion data from mice, fruit flies, and humans shared by other labs, enabling a cross-species analysis that would otherwise be challenging. Importantly, Seethapathi notes, all the animals they studied were walking in everyday natural environments, such as around a room, rather than on a treadmill or over unusual terrain.
Even in these ordinary circumstances, missteps and minor imbalances are common, and the team’s analysis showed that these errors predicted where all of the animals placed their feet in subsequent steps, regardless of whether they had two, four, or six legs.
One foot in front of another
By tracking the animals’ bodies and the step-by-step placement of their feet, Seethapathi and De Comite were able to find a measure of error that informs each animal’s next step. “By taking this comparative approach, we’ve forced ourselves to come up with a definition of error that generalizes across species,” Seethapathi says. “An animal moves with an expected body state for a particular speed. If it deviates from that ideal state, that deviation — at any given moment — is the error.”
“It was surprising to find similarities across these three species, which, at first sight, look very different,” says De Comite. “The methods themselves are surprising because we now have a pipeline to analyze foot placement and locomotion stability in any legged species, which could lead to similar analyses in even more species in the future.”
The team’s data suggest that in all of the species in the study, placement of the feet is guided both by an error-correction process and the speed at which an animal is traveling. Steps tend to lengthen and feet spend less time on the ground as animals pick up their pace, while the width of each step seems to change largely to compensate for body-state errors.
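On synthetic data, the analysis idea might look something like the Python sketch below: define the body-state error as the deviation from the state expected at a given speed, then estimate how strongly that error predicts the next step’s width, and how strongly speed predicts step length. The numbers, noise levels, and linear model are illustrative assumptions, not the study’s actual pipeline.

```python
# Illustrative foot-placement analysis on synthetic walking data.
# "Error" = deviation of the body state from its speed-expected value;
# the question is how well that error predicts the next foot placement.
import numpy as np

rng = np.random.default_rng(1)
n_steps = 1000

speed = rng.uniform(0.8, 1.6, n_steps)             # per-step walking speed (m/s)
expected_lateral_vel = 0.0                         # ideal sideways velocity ~ 0
lateral_vel = expected_lateral_vel + rng.normal(0, 0.05, n_steps)
body_error = lateral_vel - expected_lateral_vel    # body-state error per step

# Synthetic "ground truth" mimicking the reported pattern: step width
# compensates for body-state error, step length scales with speed.
step_width = 0.10 + 0.8 * body_error + rng.normal(0, 0.01, n_steps)
step_length = 0.30 + 0.4 * speed + rng.normal(0, 0.02, n_steps)

def fit_gain(predictor, placement):
    """Least-squares gain from a predictor variable to foot placement."""
    Xd = np.column_stack([predictor, np.ones_like(predictor)])
    coef, *_ = np.linalg.lstsq(Xd, placement, rcond=None)
    return coef[0]

print("step-width gain on body-state error:", fit_gain(body_error, step_width))
print("step-length gain on speed:", fit_gain(speed - speed.mean(), step_length))
```

Here the sanity check is simply recovering the gains used to generate the data; in the study, analogous relationships estimated from real locomotion data are what indicate whether foot placement corrects for body-state errors.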
Now, Seethapathi says, future studies can explore how the dual control systems might be generated and integrated in the brain to keep moving bodies stable.
Studying how brains help animals move stably may also guide the development of more-targeted strategies to help people improve their balance and, ultimately, prevent falls.
“In elderly individuals and individuals with sensorimotor disorders, minimizing fall risk is one of the major functional targets of rehabilitation,” says Seethapathi. “A fundamental understanding of the error-correction process that helps us remain stable will provide insight into why this process falls short in populations with neural deficits.”
