Feed aggregator
Storms, fires hit Balkan countries following extreme heat
Far-right climate delayers to lead Parliament talks on EU’s 2040 target
Global finance watchdog confronts climate discord after officials clash
Implantable device could save diabetes patients from dangerously low blood sugar
For people with Type 1 diabetes, developing hypoglycemia, or low blood sugar, is an ever-present threat. When glucose levels become extremely low, the situation is life-threatening, and the standard treatment is an injection of the hormone glucagon.
As an emergency backup, for cases where patients may not realize that their blood sugar is dropping to dangerous levels, MIT engineers have designed an implantable reservoir that can remain under the skin and be triggered to release glucagon when blood sugar levels get too low.
This approach could also help in cases where hypoglycemia occurs during sleep, or for diabetic children who are unable to administer injections on their own.
“This is a small, emergency-event device that can be placed under the skin, where it is ready to act if the patient’s blood sugar drops too low,” says Daniel Anderson, a professor in MIT’s Department of Chemical Engineering, a member of MIT’s Koch Institute for Integrative Cancer Research and Institute for Medical Engineering and Science (IMES), and the senior author of the study. “Our goal was to build a device that is always ready to protect patients from low blood sugar. We think this can also help relieve the fear of hypoglycemia that many patients, and their parents, suffer from.”
The researchers showed that this device could also be used to deliver emergency doses of epinephrine, a drug that is used to treat cardiac arrest and severe allergic reactions, including anaphylactic shock.
Siddharth Krishnan, a former MIT research scientist who is now an assistant professor of electrical engineering at Stanford University, is the lead author of the study, which appears today in Nature Biomedical Engineering.
Emergency response
Most patients with Type 1 diabetes use daily insulin injections to help their body absorb sugar and prevent their blood sugar levels from getting too high. However, if their blood sugar levels get too low, they develop hypoglycemia, which can lead to confusion and seizures, and may be fatal if it goes untreated.
To combat hypoglycemia, some patients carry preloaded syringes of glucagon, a hormone that stimulates the liver to release glucose into the bloodstream. However, it isn’t always easy for people, especially children, to know when they are becoming hypoglycemic.
“Some patients can sense when they’re getting low blood sugar, and go eat something or give themselves glucagon,” Anderson says. “But some are unaware that they’re hypoglycemic, and they can just slip into confusion and coma. This is also a problem when patients sleep, as they are reliant on glucose sensor alarms to wake them when sugar drops dangerously low.”
To make it easier to counteract hypoglycemia, the MIT team set out to design an emergency device that could be triggered either by the person using it, or automatically by a sensor.
The device, which is about the size of a quarter, contains a small drug reservoir made of a 3D-printed polymer. The reservoir is sealed with a special material known as a shape-memory alloy, which can be programmed to change its shape when heated. In this case, the researchers used a nickel-titanium alloy that is programmed to curl from a flat slab into a U-shape when heated to 40 degrees Celsius.
Like many other protein or peptide drugs, glucagon tends to break down quickly, so the liquid form can’t be stored long-term in the body. Instead, the MIT team created a powdered version of the drug, which remains stable for much longer and stays in the reservoir until released.
Each device can carry either one or four doses of glucagon, and it also includes an antenna tuned to respond to a specific frequency in the radiofrequency range. That allows it to be remotely triggered to turn on a small electrical current, which is used to heat the shape-memory alloy. When the temperature reaches the 40-degree threshold, the slab bends into a U shape, releasing the contents of the reservoir.
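To make the trigger sequence concrete, here is a minimal Python sketch of the actuation logic described above; every name in it (the radio, heater, thermometer, and reservoir objects) is a hypothetical stand-in, not part of the team’s actual firmware.

```python
# Hypothetical sketch of the actuation sequence described in the article.
# None of these objects come from the paper; they illustrate the logic only.

RELEASE_TEMP_C = 40.0  # threshold at which the nitinol slab curls into a U shape

def actuate_release(radio, heater, thermometer, reservoir):
    """Wait for a wireless trigger, then heat the shape-memory alloy
    until it deforms and uncovers the drug reservoir."""
    radio.wait_for_trigger()        # antenna tuned to a specific RF frequency
    heater.on()                     # small electrical current heats the alloy
    while thermometer.read_celsius() < RELEASE_TEMP_C:
        pass                        # keep heating until the 40-degree threshold
    heater.off()
    reservoir.mark_dose_released()  # slab has curled; powdered glucagon exits
```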
Because the device can receive wireless signals, it could also be designed so that drug release is triggered by a glucose monitor when the wearer’s blood sugar drops below a certain level.
“One of the key features of this type of digital drug delivery system is that you can have it talk to sensors,” Krishnan says. “In this case, the continuous glucose-monitoring technology that a lot of patients use is something that would be easy for these types of devices to interface with.”
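A closed loop of the kind Krishnan describes might look like the following sketch; the polling interval, the 54 mg/dL cutoff (a common clinical marker of severe hypoglycemia, not a value from the study), and all object names are illustrative assumptions.

```python
import time

HYPO_THRESHOLD_MG_DL = 54  # illustrative cutoff; not taken from the study

def monitor_loop(cgm, implant_radio, poll_seconds=60):
    """Poll a continuous glucose monitor and remotely trigger the
    implant's drug release if blood sugar drops too low."""
    while True:
        glucose = cgm.read_mg_dl()
        if glucose < HYPO_THRESHOLD_MG_DL:
            implant_radio.send_trigger()  # RF signal the implant's antenna is tuned to
            break                         # one emergency dose per event
        time.sleep(poll_seconds)
```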
Reversing hypoglycemia
After implanting the device in diabetic mice, the researchers used it to trigger glucagon release as the animals’ blood sugar levels were dropping. Within less than 10 minutes of activating the drug release, blood sugar levels began to level off, allowing them to remain within the normal range and avert hypoglycemia.
The researchers also tested the device with a powdered version of epinephrine. They found that within 10 minutes of drug release, epinephrine levels in the bloodstream became elevated and heart rate increased.
In this study, the researchers kept the devices implanted for up to four weeks, but they now plan to see if they can extend that time up to at least a year.
“The idea is you would have enough doses that can provide this therapeutic rescue event over a significant period of time. We don’t know exactly what that is — maybe a year, maybe a few years, and we’re currently working on establishing what the optimal lifetime is. But then after that, it would need to be replaced,” Krishnan says.
Typically, when a medical device is implanted in the body, scar tissue develops around the device, which can interfere with its function. However, in this study, the researchers showed that even after fibrotic tissue formed around the implant, they were able to successfully trigger the drug release.
The researchers are now planning for additional animal studies and hope to begin testing the device in clinical trials within the next three years.
“It’s really exciting to see our team accomplish this, which I hope will someday help diabetic patients and could more broadly provide a new paradigm for delivering any emergency medicine,” says Robert Langer, the David H. Koch Institute Professor at MIT and an author of the paper.
Other authors of the paper include Laura O’Keeffe, Arnab Rudra, Derin Gumustop, Nima Khatib, Claudia Liu, Jiawei Yang, Athena Wang, Matthew Bochenek, Yen-Chun Lu, Suman Bose, and Kaelan Reed.
The research was funded by the Leona M. and Harry B. Helmsley Charitable Trust, the National Institutes of Health, a JDRF postdoctoral fellowship, and the National Institute of Biomedical Imaging and Bioengineering.
Processing our technological angst through humor
The first time Steve Jobs held a public demo of the Apple Macintosh, in early 1984, scripted jokes were part of the rollout. First, Jobs pulled the machine out of a bag. Then, using the MacinTalk speech synthesizer, the Macintosh made a quip about rival IBM’s mainframes: “Never trust a computer you can’t lift.”
There’s a reason Jobs was doing that. For the first few decades after computing became part of cultural life, starting in the 1950s, computers seemed unfriendly, grim, and liable to work against human interests. Take the 1968 film “2001: A Space Odyssey,” in which the onboard computer, HAL, turns against the expedition’s astronauts. It’s a famous cultural touchstone. Jobs, in selling the idea of a personal computer, was using humor to ease concerns about the machines.
“Against the sense of computing as cold and numbers-driven, the fact that this computer was using voice technology to deliver jokes made it seem less forbidding, less evil,” says MIT scholar Benjamin Mangrum.
In fact, this dynamic turns up throughout modern culture, in movies, television, fiction, and the theater. We often deal with our doubts and fears about computing through humor, whether reconciling ourselves to machines or critiquing them. Now, Mangrum analyzes this phenomenon in a new book, “The Comedy of Computation: Or, How I Learned to Stop Worrying and Love Obsolescence,” published this month by Stanford University Press.
“Comedy has been a form for making this technology seem ordinary,” says Mangrum, an associate professor in MIT’s literature program. “Where in other circumstances computing might seem inhuman or impersonal, comedy allows us to incorporate it into our lives in a way that makes it make sense.”
Reversals of fortune
Mangrum’s interest in the subject was sparked partly by William Marchant’s 1955 play, “The Desk Set” — a romantic comedy later turned into a film starring Katharine Hepburn and Spencer Tracy — which queries, among other things, how office workers will co-exist alongside computers.
Perhaps against expectations, romantic comedies have turned out to be one of the most prominent contemporary forms of culture that grapple with technology and its effects on us. Mangrum, in the book, explains why: Their plot structure often involves reversals, which sometimes are extended to technology, too. Computing might seem forbidding, but it might also pull people together.
“One of the common tropes about romantic comedies is that there are characters or factors in the drama that obstruct the happy union of two people,” Mangrum observes. “And often across the arc of the drama, the obstruction or obstructive character is transformed into a partner, or collaborator, and assimilated within the happy couple’s union. That provides a template for how some cultural producers want to present the experience of computing. It begins as an obstruction and ends as a partner.”
That plot structure, Mangrum notes, dates to antiquity and was common in Shakespeare’s day. Still, as he writes in the book, there is “no timeless reality called Comedy,” as the vehicles and forms of it change over time. Beyond that, specific jokes about computing can quickly become outmoded. Steve Jobs made fun of mainframes, and the 1998 Nora Ephron comedy “You’ve Got Mail” got laughs out of dial-up modems, but those jokes might leave most people puzzled today.
“Comedy is not a fixed resource,” Mangrum says. “It’s an ever-changing toolbox.”
Continuing this evolution into the 21st century, Mangrum observes that a lot of computational comedy centers on an entire category of commentary he calls “the Great Tech-Industrial Joke.” This focuses on the gap between noble-sounding declared aspirations of technology and the sometimes-dismal outcomes it creates.
Social media, for instance, promised new worlds of connectivity and social exploration, and has benefits people enjoy — but it has also generated polarization, misinformation, and toxicity. Technology’s social effects are complex. Whole television shows, such as “Silicon Valley,” have dug into this terrain.
“The tech industry announces that some of its products have revolutionary or utopian aims, but the achievements of many of them fall far short of that,” Mangrum says. “It’s a funny setup for a joke. People have been claiming we’re saving the world, when actually we’re just processing emails faster. But it’s a mode of criticism aimed at big tech, since its products are more complicated.”
A complicated, messy picture
“The Comedy of Computation” digs into several other facets of modern culture and technology. The notion of personal authenticity, as Mangrum observes, is a fairly recent and modern construct in society — and it’s another sphere of life that collides with computing, since social media is full of charges of inauthenticity.
“That ethics of authenticity connects to comedy, as we make jokes about people not being authentic,” Mangrum says.
“The Comedy of Computation” has received praise from other scholars. Mark Goble, a professor of English at the University of California at Berkeley, has called it “essential for understanding the technological world in its complexity, absurdity, and vibrancy.”
For his part, Mangrum emphasizes that his book is an exploration of the full complexity of technology, culture, and society.
“There’s this really complicated, messy picture,” Mangrum says. “And comedy sometimes finds a way of experiencing and finding pleasure in that messiness, and other times it neatly wraps it up in a lesson that can make things neater than they actually are.”
Mangrum adds that the book focuses on “the combination of the threat and pleasure that’s involved across the history of the computer, in the ways it’s been assimilated and shaped society, with real advances and benefits, along with real threats, for instance to employment. I’m interested in the duality, the simultaneous and seemingly conflicting features of that experience.”
Amplified warming accelerates deoxygenation in the Arctic Ocean
Nature Climate Change, Published online: 09 July 2025; doi:10.1038/s41558-025-02376-0
Rapid warming of the global ocean and amplified Arctic warming will alter the ocean biogeochemistry. Here the authors show that Atlantic water inflow, and the subsequent subduction and circulation, is reducing dissolved oxygen in the Arctic due to reduced solubility with increased temperatures.
EFF to US Court of Appeals: Protect Taxpayer Privacy
EFF has filed an amicus brief in Trabajadores v. Bessent, a case concerning the Internal Revenue Service (IRS) sharing protected personal tax information with the Department of Homeland Security for the purposes of immigration enforcement. Our expertise in privacy and data sharing makes us the ideal organization to step in and inform the judge: government actions like this have real-world consequences. The IRS’s sharing, and especially bulk sharing, of data is improper and makes taxpayers vulnerable to inevitable mistakes. As a practical matter, sharing data that the IRS had previously claimed was protected undermines the trust that important civil institutions require in order to be effective.
You can read the entire brief here.
The brief makes two particular arguments. The first is that, even if the Tax Reform Act, the statute under which the IRS found the authority to share the data, is considered ambiguous, it should be interpreted in light of the legislative intent and historical background, which disfavor disclosure. The brief reads,
Given the historical context, and decades of subsequent agency promises to protect taxpayer confidentiality and taxpayer reliance on those promises, the Administration’s abrupt decision to re-interpret §6103 to allow sharing with ICE whenever a potential “criminal proceeding” can be posited, is a textbook example of an arbitrary and capricious action even if the statute can be read to be ambiguous.
The other argument we make to the court is that data scientists agree: when you try to match records between two databases in which individuals are only partially identifiable, mistakes happen. We argue:
Those errors result from such mundane issues as outdated information, data entry errors, and taxpayers or tax preparer submission of incorrect names or addresses. If public reports are correct, and officials intend to share information regarding 700,000 or even 7 million taxpayers, the errors will multiply, leading to the mistaken targeting, detention, deportation, and potentially even physical harm to regular taxpayers.
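To see why joining databases on partially identifying fields invites error, consider a toy record-linkage sketch; all names and values here are invented for illustration.

```python
# Toy illustration of record-linkage error: two different people collide
# when databases are joined on fields too coarse to identify one person.
irs_records = [{"name": "J. Garcia", "zip": "60640", "tin": "xxx-xx-1234"}]
dhs_lookup = {"name": "J. Garcia", "zip": "60640"}  # a different J. Garcia

def naive_match(lookup, records):
    """Join on name + ZIP only; outdated or misspelled entries make this worse."""
    return [r for r in records
            if r["name"] == lookup["name"] and r["zip"] == lookup["zip"]]

print(naive_match(dhs_lookup, irs_records))  # false positive: wrong person flagged
```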
Information silos in the government exist for a reason. This one was designed to protect individual privacy and prevent the executive abuse that can come with unfettered access to properly collected information. The concern motivating Congress to pass the Tax Reform Act was the same one behind the Privacy Act of 1974 and the 1978 Right to Financial Privacy Act. These laws were part of a wave of reforms Congress considered necessary to address the misuse of tax data to spy on and harass political opponents, dissidents, civil rights activists, and anti-war protestors in the 1960s and early 1970s. Congress saw the need to ensure that data collected for one purpose is used only for that purpose, with very narrow exceptions, or else it is prone to abuse. Yet the IRS is currently sharing information to allow ICE to enforce immigration law.
Taxation in the United States operates through a very simple agreement: the government requires taxes from people working inside the United States in order to function. To get people to pay their taxes, including undocumented immigrants living and working in the United States, the IRS has previously promised that the data it collects will not be used against a person for punitive reasons. This encourages people to pay taxes and alleviates concerns that might otherwise lead them to avoid interacting with the government. But the IRS’s reversal has greatly harmed that trust and has the potential for far-reaching negative ramifications, including decreasing future tax revenue.
Consolidating government information so that the agencies responsible for healthcare, taxes, or financial support are linked to agencies that police, surveil, and fine people is a recipe for disaster. For that reason, EFF is proud to submit this amicus brief in Trabajadores v. Bessent in support of taxpayer privacy.
Related Cases: American Federation of Government Employees v. U.S. Office of Personnel Management
Flood predictions could worsen when Trump’s cuts take hold
Chicago teachers won climate action in their contract. But funding issues loom.
Trump orders crackdown on ‘green’ subsidies
Climate activists say Alito boosted their lawsuit against EPA
Dems demand investigations into deadly Texas floods
California AG beats his own lawyers in suit related to climate case
California Democrats retreat on climate
Floods like the one that hit Texas are US's top storm-related killer
Days of monsoon rains, flooding kill at least 72 in Pakistan
Greece imposes work breaks as heat wave grips country
Challenges of institutional adaptation
Nature Climate Change, Published online: 08 July 2025; doi:10.1038/s41558-025-02388-w
Adaptation efforts require responsive and adaptive institutions. Some progress has been made, but more systematic institutional adaptation is needed given the growing climate hazards.
Study could lead to LLMs that are better at complex reasoning
For all their impressive capabilities, large language models (LLMs) often fall short when given challenging new tasks that require complex reasoning skills.
While an accounting firm’s LLM might excel at summarizing financial reports, that same model could fail unexpectedly if tasked with predicting market trends or identifying fraudulent transactions.
To make LLMs more adaptable, MIT researchers investigated how a certain training technique can be strategically deployed to boost a model’s performance on unfamiliar, difficult problems.
They show that test-time training, a method that involves temporarily updating some of a model’s inner workings during deployment, can lead to a sixfold improvement in accuracy. The researchers developed a framework for implementing a test-time training strategy that uses examples of the new task to maximize these gains.
Their work could improve a model’s flexibility, enabling an off-the-shelf LLM to adapt to complex tasks that require planning or abstraction. This could lead to LLMs that would be more accurate in many applications that require logical deduction, from medical diagnostics to supply chain management.
“Genuine learning — what we did here with test-time training — is something these models can’t do on their own after they are shipped. They can’t gain new skills or get better at a task. But we have shown that if you push the model a little bit to do actual learning, you see that huge improvements in performance can happen,” says Ekin Akyürek PhD ’25, lead author of the study.
Akyürek is joined on the paper by graduate students Mehul Damani, Linlu Qiu, Han Guo, and Jyothish Pari; undergraduate Adam Zweiger; and senior authors Yoon Kim, an assistant professor of Electrical Engineering and Computer Science (EECS) and a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL); and Jacob Andreas, an associate professor in EECS and a member of CSAIL. The research will be presented at the International Conference on Machine Learning.
Tackling hard domains
LLM users often try to improve the performance of their model on a new task using a technique called in-context learning. They feed the model a few examples of the new task as text prompts, which guide the model’s outputs.
But in-context learning doesn’t always work for problems that require logic and reasoning.
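Concretely, in-context learning amounts to packing solved examples into the prompt itself. A minimal sketch, with an invented problem/solution format:

```python
def build_few_shot_prompt(examples, query):
    """Format (problem, solution) pairs as an in-context-learning prompt;
    the "Problem:/Solution:" template is invented for illustration."""
    parts = [f"Problem: {p}\nSolution: {s}" for p, s in examples]
    parts.append(f"Problem: {query}\nSolution:")
    return "\n\n".join(parts)
```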
The MIT researchers investigated how test-time training can be used in conjunction with in-context learning to boost performance on these challenging tasks. Test-time training involves updating some model parameters — the internal variables it uses to make predictions — using a small amount of new data specific to the task at hand.
The researchers explored how test-time training interacts with in-context learning. They studied design choices that maximize the performance improvements one can coax out of a general-purpose LLM.
“We find that test-time training is a much stronger form of learning. While simply providing examples can modestly boost accuracy, actually updating the model with those examples can lead to significantly better performance, particularly in challenging domains,” Damani says.
In-context learning requires a small set of task examples, including problems and their solutions. The researchers use these examples to create a task-specific dataset needed for test-time training.
To expand the size of this dataset, they create new inputs by slightly changing the problems and solutions in the examples, such as by horizontally flipping some input data. They find that training the model on the outputs of this new dataset leads to the best performance.
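As a rough illustration of that augmentation step, assuming grid-style inputs where a horizontal flip preserves the task’s input-output mapping (the paper’s exact transformations may differ), the expansion could look like this:

```python
def flip_horizontal(grid):
    """Mirror a 2D grid (a list of rows) left-to-right."""
    return [list(reversed(row)) for row in grid]

def augment_examples(examples):
    """Expand a small set of (input_grid, output_grid) pairs by applying
    the same transformation to both sides, preserving the task's mapping."""
    augmented = list(examples)
    for inp, out in examples:
        augmented.append((flip_horizontal(inp), flip_horizontal(out)))
    return augmented
```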
In addition, the researchers only update a small number of model parameters using a technique called low-rank adaptation (LoRA), which improves the efficiency of the test-time training process.
“This is important because our method needs to be efficient if it is going to be deployed in the real world. We find that you can get huge improvements in accuracy with a very small amount of parameter training,” Akyürek says.
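A sketch of how such a setup could be assembled from an off-the-shelf stack (Hugging Face transformers plus the peft library); the model, hyperparameters, and target modules below are illustrative choices, not the paper’s settings.

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "gpt2"  # stand-in model for illustration
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Wrap the model so only small low-rank adapter matrices are trainable.
lora = LoraConfig(r=8, lora_alpha=16, target_modules=["c_attn"], task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

optimizer = torch.optim.AdamW(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)

def test_time_train(task_texts, steps=20):
    """Briefly fine-tune the adapter on task-specific examples at deployment time."""
    model.train()
    for _ in range(steps):
        for text in task_texts:
            batch = tokenizer(text, return_tensors="pt")
            loss = model(**batch, labels=batch["input_ids"]).loss
            loss.backward()
            optimizer.step()
            optimizer.zero_grad()
```

Only the adapter matrices receive gradients here, which is what keeps the per-task update small and fast.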
Developing new skills
Streamlining the process is key, since test-time training is employed on a per-instance basis, meaning a user would need to do this for each individual task. The updates to the model are only temporary, and the model reverts to its original form after making a prediction.
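Because the update must be discarded after the prediction, a per-query wrapper might snapshot the trainable weights and restore them afterward; a minimal sketch continuing the assumptions of the previous example:

```python
import torch

# Uses model, tokenizer, and test_time_train from the sketch above.
def answer_with_ttt(task_texts, query):
    """Adapt to the task, answer the query, then revert the temporary update."""
    snapshot = {name: p.detach().clone()
                for name, p in model.named_parameters() if p.requires_grad}
    test_time_train(task_texts)        # temporary, task-specific update
    model.eval()
    inputs = tokenizer(query, return_tensors="pt")
    output_ids = model.generate(**inputs, max_new_tokens=64)
    with torch.no_grad():              # restore the pre-adaptation weights
        for name, p in model.named_parameters():
            if name in snapshot:
                p.copy_(snapshot[name])
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)
```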
A model that usually takes less than a minute to answer a query might take five or 10 minutes to provide an answer with test-time training, Akyürek adds.
“We wouldn’t want to do this for all user queries, but it is useful if you have a very hard task that you want the model to solve well. There also might be tasks that are too challenging for an LLM to solve without this method,” he says.
The researchers tested their approach on two benchmark datasets of extremely complex problems, such as IQ puzzles. It boosted accuracy as much as sixfold over techniques that use only in-context learning.
Tasks involving structured patterns or completely unfamiliar types of data showed the largest performance improvements.
“For simpler tasks, in-context learning might be OK. But updating the parameters themselves might develop a new skill in the model,” Damani says.
In the future, the researchers want to use these insights toward the development of models that continually learn.
The long-term goal is an LLM that, given a query, can automatically determine if it needs to use test-time training to update parameters or if it can solve the task using in-context learning, and then implement the best test-time training strategy without the need for human intervention.
This work is supported, in part, by the MIT-IBM Watson AI Lab and the National Science Foundation.
Avoid urban development policy that fuels climate risk
Nature Climate Change, Published online: 08 July 2025; doi:10.1038/s41558-025-02365-3
Urban development policies, designed to improve city resilience, could unintentionally increase the exposure to climate risk. This Comment discusses the impact of misaligned incentives, miscalculated benefits and costs, and overlooked behavioural responses on policy outcomes, as well as future directions.