Feed aggregator
Implantable islet cells could control diabetes without insulin injections
Most diabetes patients must carefully monitor their blood sugar levels and inject insulin multiple times per day to keep those levels from climbing too high.
As a possible alternative to those injections, MIT researchers are developing an implantable device that contains insulin-producing cells. The device encapsulates the cells, protecting them from immune rejection, and it also carries an on-board oxygen generator to keep the cells healthy.
This device, the researchers hope, could offer a way to achieve long-term control of type 1 diabetes. In a new study, they showed that these encapsulated pancreatic islet cells could survive in the body for at least 90 days. In mice that received the implants, the cells remained functional and produced enough insulin to control the animals’ blood sugar levels.
“Islet cell therapy can be a transformative treatment for patients. However, current methods also require immune suppression, which for some people can be really debilitating,” says Daniel Anderson, a professor in MIT’s Department of Chemical Engineering and a member of MIT’s Koch Institute for Integrative Cancer Research and Institute for Medical Engineering and Science. “Our goal is to find a way to give patients the benefit of cell therapy without the need for immune suppression.”
Anderson is the senior author of the study, which appears today in the journal Device. Former MIT research scientist Siddharth Krishnan, who is now an assistant professor of electrical engineering at Stanford University, and former MIT postdoc Matthew Bochenek are the lead authors of the paper. Robert Langer, the David H. Koch Institute Professor at MIT, is also a co-author.
Insulin on demand
Islet cell transplantation has already been used successfully to treat diabetes in patients. Those islet cells typically come from human cadavers or, more recently, can be generated from stem cells. In either case, patients must take immunosuppressive drugs to prevent their immune system from rejecting the transplanted cells.
Another way to prevent immune rejection is to encapsulate cells in a protective device. However, this raises new challenges, as the coating that surrounds the cells can prevent them from receiving enough oxygen.
In a 2023 study, Anderson and his colleagues reported an islet-encapsulation device that also carries an on-board oxygen generator. This generator consists of a proton-exchange membrane that can split water vapor (found abundantly in the body) into hydrogen and oxygen. The hydrogen diffuses harmlessly away, while oxygen goes into a storage chamber that feeds the islet cells through a thin, oxygen-permeable membrane.
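For reference, the chemistry at work is the standard electrolysis of water (a textbook reaction, not notation from the study itself):

$$2\,\mathrm{H_2O} \;\longrightarrow\; 2\,\mathrm{H_2} + \mathrm{O_2}$$

In a proton-exchange-membrane electrolyzer, oxygen is evolved on one side of the membrane while protons cross it and recombine into hydrogen gas on the other, which matches the device's design of venting hydrogen and storing oxygen.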
Cells encapsulated within this device, they found, could produce insulin for up to a month after being implanted in mice.
“A month is a good timeframe in that it shows basic proof-of-concept. But from a translational standpoint, it’s important to show that you can go quite a bit longer than that,” Krishnan says.
In the new study, the researchers increased the lifespan of the devices by making them more waterproof and more resilient to cracking. They also improved the device electronics to deliver more power to the oxygen generator. The implant is powered wirelessly by an external antenna placed on the skin, which transfers energy to the device. By optimizing the circuitry, the researchers were able to increase the amount of power reaching the oxygen-generating system.
The additional power allowed the device to produce more oxygen, helping the encapsulated cells to survive and function more effectively. As a result, the cells were able to generate much more insulin over time.
Protein factories
In studies in rats and mice, the researchers showed that the new device could function for at least 90 days after being implanted under the skin. During this time, donor islet cells were able to produce enough insulin to keep the animals’ blood sugar levels within a healthy range.
The researchers saw similar results with islet cells derived from induced pluripotent stem cells, which could one day provide an indefinite supply that could be used for any patient who needs them. These islets didn’t fully reverse diabetes, but they did achieve some control of blood sugar levels.
“We’re hoping that in the future, if we can give the cells a little bit longer to fully mature, that they’ll secrete even more insulin to better regulate diabetes in the animals,” Bochenek says.
The researchers now plan to study whether they can get the devices to last even longer in the body, up to two years or more.
“Long-term survival of the islets is an important goal,” Anderson says. “The cells, if they’re in the right environment, seem to be able to survive for a long time. We are excited by the duration we’ve already achieved, and we will be working to extend their function as long as possible.”
The researchers are also exploring the possibility of using this approach to deliver cells that could produce other useful proteins, such as antibodies, enzymes, or clotting factors.
“We think that these technologies could provide a long-term way to treat human disease by making drugs in the body instead of outside of the body,” Anderson says. “There are many protein therapies where patients must receive repeated, lengthy infusions. We think it may be possible to create a device that could continuously create protein therapeutics on demand and as needed by the patient.”
The research was funded, in part, by Breakthrough T1D, the Leona M. and Harry B. Helmsley Charitable Trust, the National Institutes of Health, and a Koch Institute Support (core) Grant from the National Cancer Institute.
Study reveals why some cancer therapies don’t work for all patients
Drugs that block enzymes called tyrosine kinases are among the most effective targeted therapies for cancer. However, they typically work for only 40 to 80 percent of the patients who would be expected to respond to them.
In a new study, MIT researchers have figured out why those drugs don’t work in all cases: Many of these tumors have turned on a backup survival pathway that helps them keep growing when the targeted pathway is knocked out.
“This seems to be hardwired into the cells and seems to be providing activation of a critical survival pathway in cancer cells,” says Forest White, the Ned C. and Janet C. Rice Professor of Biological Engineering at MIT. “This pathway allows the cells to be resistant to a wide variety of therapies, including chemotherapies.”
Additionally, the researchers found that they could kill those drug-resistant cancer cells by treating them with both a tyrosine kinase inhibitor and a drug that targets the backup pathway. Clinical trials are now underway to test one such combination in lung cancer patients.
White is the senior author of the study, which appears this week in the Proceedings of the National Academy of Sciences. Cameron Flower PhD ’24, who is now a postdoc at Dana-Farber Cancer Institute and Boston Children’s Hospital, is the paper’s lead author.
Tumor survival
Tyrosine kinases are involved in many signaling pathways that allow cells to receive input from the external environment and convert it into a response such as growing or dividing. There are about 90 types of these kinases in human cells, and many of them are overactive in cancer cells.
“These kinases are very important for regulating cell growth and mitosis, and pushing the cell from a nondividing state to a dividing state depends on the activity of a lot of different tyrosine kinases,” Flower says. “We see a lot of mutations and overexpression of these kinases in cancer cells.”
These cancer-associated kinases include EGFR and BCR-ABL. Many cancer drugs targeting these kinases, including imatinib (Gleevec), have been approved to treat leukemia and other cancers. However, these drugs are not effective for all of the patients whose tumors overexpress tyrosine kinases — a phenomenon that has puzzled cancer researchers.
That lower-than-expected success rate motivated the MIT team to look into these drugs and try to figure out why some tumors do not respond to them.
For this study, the researchers examined six different cancer cell lines, which originally came from lung cancer patients. They chose two cell lines with EGFR mutations, two with mutations in a tyrosine kinase called MET, and two with mutations in a tyrosine kinase called ALK. Each pair included one line that responded well to the tyrosine kinase inhibitor targeting the overactive pathway and one line that did not.
Using a technique called phosphoproteomics, the researchers were able to analyze the signaling pathways that were active in each of the cells, before and after treatment. Phosphoproteomics is used to identify proteins that have had a phosphate group added to them by a kinase. This process, known as phosphorylation, can activate or deactivate the target protein.
The researchers’ analysis revealed that the drugs were working as intended in all of the cancer cells. Even in resistant cells, the drugs did knock out signaling by their target kinase. However, in the cells that were resistant, an alternative network was already turned on, which helped the cells survive in spite of the treatment.
“Even before the therapy begins, the cells are in a state that intrinsically is resistant to the drug,” Flower says.
This survival network consists of signaling pathways that are regulated by another class of kinases known as SRC family kinases. Activation of this network appears to help cancer cells proliferate and possibly migrate to new locations in the body. Beyond lung cancer, researchers from White’s lab have also found SRC family kinases activated in melanoma cells, where they also play a role in drug resistance, and in glioblastoma, a type of brain cancer.
“As inhibitors for SRC kinases are also drugs, the work suggests that combining inhibitors of driver oncogenes with SRC inhibitors could increase the number of patients who would benefit. This strategy merits testing in new clinical trials,” says Benjamin Neel, a professor of medicine at NYU Grossman School of Medicine, who was not involved in the study.
These findings might also explain why some patients who initially respond to tyrosine kinase inhibitors eventually have their tumors recur: the cells may end up activating this same survival pathway, but not until sometime after the initial treatment.
Combating resistance
The researchers also found that treating the resistant cells with both a tyrosine kinase inhibitor and a drug that inhibits SRC family kinases led to much greater cell death rates. By coincidence, a clinical trial testing the combination of a tyrosine kinase inhibitor called osimertinib and an SRC inhibitor is now underway in patients with lung cancer. The MIT team now hopes to work with the same drug company to run a similar trial in pancreatic cancer patients.
The researchers also showed that they could use phosphoproteomics to analyze patient biopsy samples to see which cells already have the SRC pathways turned on.
“We are really excited to watch these clinical trials and to see how well patients do on these combinations. And I really think there’s a future for using tyrosine phosphoproteomics to guide this clinical decision-making,” White says.
This therapy might also be useful for patients whose tumors are originally susceptible to tyrosine kinase inhibitors but then later become resistant by turning on SRC pathways.
“Among the sensitive cells, some of them are able to upregulate this survival pathway and survive, which might be the residual disease that’s still there after treatment,” White says. “One of the interesting avenues here is, could we improve therapy for almost everybody, regardless of whether their tumors have intrinsic or adaptive resistance?”
The research was funded by the National Institutes of Health and the MIT Center for Precision Cancer Medicine.
“Near-misses” in particle accelerators can illuminate new physics, study finds
Particle accelerators reveal the heart of nuclear matter by smashing together atoms at close to the speed of light. The high-energy collisions produce a shower of subatomic fragments that scientists can then study to reconstruct the core building blocks of matter.
An MIT-led team has now used the world’s most powerful particle accelerator to discover new properties of matter, through particles’ “near-misses.” The approach has turned the particle accelerator into a new kind of microscope — and led to the discovery of new behavior in the forces that hold matter together.
In a study appearing this week in the journal Physical Review Letters, the team reports results from the Large Hadron Collider (LHC), a massive underground, ring-shaped accelerator in Geneva, Switzerland. Rather than focus on the accelerator’s particle collisions, the MIT team searched for instances when particles barely grazed past each other.
When particles travel at close to the speed of light, the electromagnetic fields surrounding them flatten into pancake-like halos. When two such particles pass close but don’t collide, these pancaked energy fields produce extremely high-energy photons. Occasionally, a photon from one particle can ping off the other particle, like an intense, quantum-sized pinprick of light.
The MIT team was able to pick out such near-miss pinpricks, or what scientists call “photonuclear interactions,” from the LHC’s particle-collision data. They found that when some photons pinged off a particle, they kicked out a type of subatomic particle, known as a D0 meson, that the scientists could measure for the first time.
D0 mesons are subatomic particles that contain a charm quark, a rare type of quark not normally found in ordinary nuclear matter. Quarks are the fundamental building blocks of all matter, and they are bound together by gluons, massless particles that serve as the invisible glue of the “strong force” holding matter together. Charm quarks can be created only in high-energy interactions. As such, they provide an especially clean, unambiguous probe of the quarks and gluons inside a nucleus.
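As background, charm photoproduction is conventionally described by photon-gluon fusion, with the charm quark then hadronizing into a D0 meson (a standard leading-order picture, not notation taken from the paper):

$$\gamma + g \;\to\; c + \bar{c}, \qquad c \;\to\; D^0 + X$$

Because the photon must find a gluon to fuse with, the rate and kinematics of D0 production directly track the gluon density inside the nucleus.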
Through their measurements of D0 mesons, the researchers could estimate how tightly gluons are packed and, essentially, how strong the strong force is within a particle’s nucleus.
“Our result gives an indication that when nuclear matter is squeezed together, then gluons start behaving in a funny way,” says lead author Gian Michele Innocenti, an assistant professor of physics at MIT. “We need to know how these gluons behave in these extreme conditions because gluons keep the universe together. And at this point, photonuclear interactions are the best way we have to study gluon behavior.”
The study’s co-authors include members of the CMS Collaboration, a global consortium of physicists who operate and maintain the Compact Muon Solenoid (CMS) experiment, one of the largest detectors at the LHC and the one used to collect the study’s data.
Bringing a “background” into focus
With each run, the Large Hadron Collider fires off needle-thin beams of particles in opposite directions around a 27-kilometer-long underground ring. When the beams cross paths, particles can collide. If the collisions happen to take place in a region of the ring where the CMS detector is set up, the detector can record the collisions, and scientists can then analyze the aftermath to reconstruct the fragments that make up the original particles.
Since the LHC began operations in 2008, the focus has been overwhelmingly on the detection and analysis of “head-on” collisions. Physicists have known that by accelerating particle beams, they would also produce photonuclear interactions: near-miss events in which a particle collides not with another particle, but with that particle’s cloud of photons. But such light-nucleus interactions were dismissed as mere noise.
“These photonuclear events were considered a background that people wanted to cancel,” Innocenti says. “But now people want to use them as a signal, because a collision between a photon and a nucleus can essentially be like a super-high-accuracy microscope for nuclear matter.”
When a photon pings off a particle, the abundance, direction, and energy of the produced D0 meson relate directly to the energy and density of the gluons in the nucleus. If scientists could detect and measure this photon interaction, it would be like using an extremely small and powerful flashlight to illuminate nuclear structures. But until now, it was assumed that photonuclear interactions would be impossible to pick out amid the various physics processes that occur in such collisions.
“People didn’t think it was possible to remove the huge mess of all these other collisions, to zoom in on single photons hitting single nuclei and producing a D0 meson,” Innocenti says. “We had to devise a system to recognize those very rare photonuclear interactions while particle-collision data was being taken.”
Illuminating charm
For their new study, Innocenti and his colleagues first simulated what a photonuclear interaction would look like amid a shower of other particle collisions. In particular, they simulated a scenario in which a photon pings off a nucleus and produces a D0 meson. Although these events are rare, D0 mesons are among the most abundant particles that contain a charm quark. The team reasoned that if they could detect signs of a charm quark in D0 mesons that are produced in a photonuclear interaction, it could give valuable information about the gluons that hold the nucleus together.
With their simulations, the researchers then developed an algorithm to detect photonuclear interactions. They implemented the algorithm at the CMS detector to search for signals in real time during the LHC’s particle-colliding runs.
“We had to collect tens of billions of collisions in order to extract a few hundred of these rare instances where a photon hits a nucleus and produces one of these exotic D0 meson particles,” Innocenti explains.
From this enormous dataset, the team identified a clean sample of these rare events by exploiting CMS’s advanced detector capabilities to select near-miss events and reconstruct the properties of the D0 mesons.
Through this process, the team detected instances of D0 meson production and then worked back to calculate properties of the particles’ charm quarks and the gluons that would have held them together in the original nucleus.
“We are constraining what happens to gluons when they are squeezed inside very large ions traveling very fast,” Innocenti says. “So far, our data confirms what people expect in terms of high-density nuclear matter. In reality, this is the first time we’ve shown this kind of measurement is feasible.”
The team is working to improve the measurement’s accuracy in order to provide a clearer picture of how quarks and gluons are arranged inside a nucleus.
“Gluons are a very strong force that keeps the universe together,” Innocenti says. “The description of the strong force is at the basis of everything we see in nature. Now we have a way to either fully confirm, or show deviations from, that description.”
This work was supported, in part, by the U.S. Department of Energy, including support from a DOE Early Career Research Program award, and it builds on the contributions of a large MIT team of graduate students, undergraduate researchers, scientists, and postdocs.
As the US Midterms Approach, AI Is Going to Emerge as a Key Issue Concerning Voters
In December, President Trump signed an executive order that neutered states’ ability to regulate AI by directing his administration to both sue and withhold funds from states that try to do so. This action pointedly supported industry lobbyists keen to avoid any constraints on, or consequences from, their deployment of AI, while undermining the efforts of consumers, advocates, and industry associations concerned about AI’s harms who have spent years pushing for state regulation.
Trump’s actions have clarified the ideological alignments around AI within America’s electoral factions. They set down lines on a new playing field for the midterm elections, prompting members of his party, the opposition, and all of us to consider where we stand in the debate over how and where to let AI transform our lives...
Iran conflict cuts deep in places that can least afford it
Enviros decry state efforts to block climate lawsuits
Maryland energy bill would trade short-term gains for long-term pain
India unveils long-delayed climate targets as Iran war roils energy markets
Far from Hormuz, a second Middle East strait enters the crosshairs
Report: Energy recovery from Iran war could take years
FEMA official: No plans to cut agency staff despite earlier reports
Alberta and Canada reach deal on oil and gas methane emissions
JPMorgan exec calls out ‘vague’ carbon market contracts
The ferocity of the downpour that flooded Hawaii surprised meteorologists
Mexico bets on supercomputer to combat extreme weather events
AI system learns to keep warehouse robot traffic running smoothly
Inside a giant autonomous warehouse, hundreds of robots dart down aisles as they collect and distribute items to fulfill a steady stream of customer orders. In this busy environment, even small traffic jams or minor collisions can snowball into massive slowdowns.
To avoid such an avalanche of inefficiencies, researchers from MIT and the tech firm Symbotic developed a new method that automatically keeps a fleet of robots moving smoothly. Their method learns which robots should go first at each moment, based on how congestion is forming, and adapts to prioritize robots that are about to get stuck. In this way, the system can reroute robots in advance to avoid bottlenecks.
The hybrid system utilizes deep reinforcement learning, a powerful artificial intelligence method for solving complex problems, to figure out which robots should be prioritized. Then, a fast and reliable planning algorithm feeds instructions to the robots, enabling them to respond rapidly in constantly changing conditions.
In simulations inspired by actual e-commerce warehouse layouts, this new approach achieved about a 25 percent gain in throughput over other methods. Importantly, the system can quickly adapt to new environments with different quantities of robots or varied warehouse layouts.
“There are a lot of decision-making problems in manufacturing and logistics where companies rely on algorithms designed by human experts. But we have shown that, with the power of deep reinforcement learning, we can achieve super-human performance. This is a very promising approach, because in these giant warehouses even a 2 or 3 percent increase in throughput can have a huge impact,” says Han Zheng, a graduate student in the Laboratory for Information and Decision Systems (LIDS) at MIT and lead author of a paper on this new approach.
Zheng is joined on the paper by Yining Ma, a LIDS postdoc; Brandon Araki and Jingkai Chen of Symbotic; and senior author Cathy Wu, the Class of 1954 Career Development Associate Professor in Civil and Environmental Engineering (CEE) and the Institute for Data, Systems, and Society (IDSS) at MIT, and a member of LIDS. The research appears today in the Journal of Artificial Intelligence Research.
Rerouting robots
Simultaneously coordinating hundreds of robots in an e-commerce warehouse is no easy task.
The problem is especially complicated because the warehouse is a dynamic environment, and robots continually receive new tasks after reaching their goals. They need to be rapidly redirected as they leave and enter the warehouse floor.
Companies often leverage algorithms written by human experts to determine where and when robots should move to maximize the number of packages they can handle.
But if there is congestion or a collision, a firm may have no choice but to shut down the entire warehouse for hours to manually sort the problem out.
“In this setting, we don’t have an exact prediction of the future. We only know what the future might hold, in terms of the packages that come in or the distribution of future orders. The planning system needs to be adaptive to these changes as the warehouse operations go on,” Zheng says.
The MIT researchers achieved this adaptability using machine learning. They began by designing a neural network model that takes observations of the warehouse environment and decides how to prioritize the robots. They trained this model using deep reinforcement learning, a trial-and-error method in which the model learns to control robots in simulations that mimic actual warehouses. The model is rewarded for making decisions that increase overall throughput while avoiding conflicts.
Over time, the neural network learns to coordinate many robots efficiently.
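To make that loop concrete, here is a minimal policy-gradient sketch of the setup described above: a small network scores each robot from its congestion features, the sampled score decides which robot gets planning priority, and delivered packages serve as the reward. The simulator interface (the observe and step methods) and all dimensions are hypothetical placeholders, not the authors' actual code.

```python
import torch
import torch.nn as nn

class PriorityNet(nn.Module):
    """Scores each robot; a higher score means higher planning priority."""
    def __init__(self, obs_dim: int):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(obs_dim, 128), nn.ReLU(),
            nn.Linear(128, 1),
        )

    def forward(self, obs):                  # obs: (num_robots, obs_dim)
        return self.mlp(obs).squeeze(-1)     # one logit per robot

def run_episode(sim, net, optimizer, steps=200):
    """One REINFORCE update; sim.observe()/sim.step() are placeholders."""
    log_probs, rewards = [], []
    for _ in range(steps):
        obs = torch.as_tensor(sim.observe(), dtype=torch.float32)
        dist = torch.distributions.Categorical(logits=net(obs))
        robot = dist.sample()                # robot granted top priority now
        log_probs.append(dist.log_prob(robot))
        # A classical planner (e.g., prioritized A*) turns this priority into
        # routes; the simulator returns packages delivered this step.
        rewards.append(sim.step(priority=robot.item()))
    episode_return = torch.tensor(rewards, dtype=torch.float32).sum()
    # Push up the probability of priority choices seen in high-throughput runs.
    loss = -torch.stack(log_probs).sum() * episode_return
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return episode_return.item()
```

The split mirrors the hybrid design in the article: learning handles only the hard judgment call of who goes first, while a conventional planner converts that priority into concrete, collision-free paths.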
“By interacting with simulations inspired by real warehouse layouts, our system receives feedback that we use to make its decision-making more intelligent. The trained neural network can then adapt to warehouses with different layouts,” Zheng explains.
The model is designed to capture the long-term constraints and obstacles in each robot’s path, while also considering dynamic interactions between robots as they move through the warehouse.
By predicting current and future robot interactions, the model plans to avoid congestion before it happens.
After the neural network decides which robots should receive priority, the system employs a tried-and-true planning algorithm to tell each robot how to move from one point to another. This efficient algorithm helps the robots react quickly in the changing warehouse environment.
This combination of methods is key.
“This hybrid approach builds on my group’s work on how to achieve the best of both worlds between machine learning and classical optimization methods. Pure machine-learning methods still struggle to solve complex optimization problems, and yet it is extremely time- and labor-intensive for human experts to design effective methods. But together, using expert-designed methods the right way can tremendously simplify the machine learning task,” says Wu.
Overcoming complexity
Once the researchers trained the neural network, they tested the system in simulated warehouses that were different from those it had seen during training. Because existing industrial simulations were too inefficient for this complex problem, the researchers designed their own environments to mimic what happens in actual warehouses.
On average, their hybrid learning-based approach achieved 25 percent greater throughput than both traditional algorithms and a random search method, measured in packages delivered per robot. Their approach could also generate feasible robot path plans that overcame congestion caused by traditional methods.
“Especially when the density of robots in the warehouse goes up, the complexity scales exponentially, and these traditional methods quickly start to break down. In these environments, our method is much more efficient,” Zheng says.
While their system is still far away from real-world deployment, these demonstrations highlight the feasibility and benefits of using a machine learning-guided approach in warehouse automation.
In the future, the researchers want to include task assignments in the problem formulation, since determining which robot will complete each task impacts congestion. They also plan to scale up their system to larger warehouses with thousands of robots.
This research was funded by Symbotic.
Championing fusion’s promising underdog
Like many people who end up going into physics, Sophia Henneberg had a hard time, when she was young, choosing between that discipline and mathematics. Both subjects came easily to her, and she — unlike many of her peers — thought they were fun. Henneberg grew up in a small town in central Germany, and it was not until one week before applying to college that she decided on physics, reasoning that it would still give her the chance to do plenty of math, while also affording opportunities to connect with a broad range of applications.
Midway through her undergraduate studies at Goethe University in Frankfurt, she started taking courses in plasma physics and almost instantly knew that she had found her niche. “Most of the visible material in the universe is in the form of hot, ionized gas called plasma, so studying that is really fundamental,” she says. “And there’s this amazing application, fusion, which has the potential to become an unlimited energy source.”
Early on, Henneberg resolved to try to make that potential a reality, and she’s been pursuing that goal at MIT since becoming the Norman Rasmussen Career Development Assistant Professor in the Department of Nuclear Science and Engineering in fall of 2025. Her research focus is on stellarators, a kind of fusion machine that has been overshadowed for many decades by another fusion device called the tokamak. Both of these machines rely on magnetic confinement, using powerful magnetic fields to compress a plasma into a tiny volume, causing some of the atoms within this dense cluster to fuse together and unleash energy in the process. In the tokamak, the plasma assumes the shape of a donut. In a stellarator, the plasma is also contained within a rounded loop, only this one resembles a twisted donut.
As a PhD candidate at the University of York (in the United Kingdom), Henneberg studied the instabilities that can arise in tokamaks, where plasma temperatures often exceed 100 million degrees Celsius and currents induced within the plasma can drive flows reaching roughly 100 kilometers per second. In such an ultra-extreme setting, more than six times hotter than the core of the sun, sudden surges of energy, akin to small-scale solar flares, can breach the magnetic cage enclosing the plasma, disrupting the fusion process and possibly damaging the reactor itself. Henneberg started hearing about stellarators in her classes and, after a bit of research, she came to realize that “they could be much more stable if you design them in the right way.”
Striking a favorable balance
In 2016, she began a postdoctoral fellowship at the Max Planck Institute (MPI) for Plasma Physics in Greifswald, Germany, joining the Stellarator Theory Group. Greifswald may well have been the best place for her to carry out stellarator research, given that the world’s biggest and most advanced reactor of this type, Wendelstein 7-X (W7-X), was based there, and experiments were just starting in the year she arrived.
Her main assignment at MPI was to work on stellarator optimization, figuring out the best way to design the reactor to meet the engineering and physics goals — a task not unlike that of tuning a car to achieve maximum fuel efficiency or, for a racecar, maximum speed. Henneberg’s interest in optimization continues to this day, remaining central to her research agenda at MIT.
“If you want to design a stellarator, there are two principal components you can look at,” she says. The first relates to the shape of the boundary, or cage, into which the plasma will ultimately be confined. This shape is constrained by magnetic fields that are generated, in turn, by a series of superconducting coils that might range in number anywhere from around 4 to 50. In stellarators, the coils tend to be bent rather than circular. That gives rise to twists in the magnetic fields, but it also makes the coils more complicated and likely more expensive. Henneberg has come up with ways to simplify the optimization process — one of which involves designing the plasma boundary and the shape of the coils in the same step rather than looking at them separately.
“We’ve now reached the point where stellarator performances can exceed those of tokamaks, because we’re able to optimize them very well, but you have to put the effort in,” she says. “You can’t get good performance out of just any twisty donut.”
The best of both worlds
In a 2024 paper, Henneberg and her former Greifswald colleague, Gabriel Plunk, introduced the notion of a stellarator-tokamak hybrid reactor. The goal, they wrote, is both “simple and compelling: to combine the strengths of the two concepts into a single device” that outperforms either of the existing modes.
One of Henneberg’s major preoccupations at present is exploring ways of converting a tokamak into a stellarator, a conversion that basically entails adding just a few coils, of the bent variety, that can be turned on or off. “This can be an easy way for people in the tokamak community to think more about the possible benefits of the stellarator,” she says. While no one has yet built a hybrid, at least one university has secured funding to do so.
Interest in stellarators has been steadily mounting in recent years, a fact that delights Henneberg. When she started working in this area almost a decade ago, the field of stellarator optimization was tiny and there were very few people she could converse with. There’s much more research going on today, which means that more ideas are coming out, along with some exciting results. Commercial interest is growing as well, and Henneberg has been in contact with several stellarator startup companies, including Type One Energy and Thea Energy in the United States and Proxima Fusion and Gauss Fusion in Germany.
“It seems to me that most new startups these days are focusing on stellarators,” Henneberg says. “With so many companies now entering the field, it can seem like the technical issues involved in fusion are already solved, but there are still many interesting open questions. I’m working on improved designs that advance both the physics and the economic feasibility.”
That’s where her students come in. She believes that one part of her role as an MIT professor is to train the next generation of stellarator experts — people who will help, for instance, to design effective coils that are easy to make, as well as to improve reactor performance overall.
During her first term, she co-taught the renowned Fusion Design (22.63) course alongside MIT Professor Dennis Whyte. This course has had a remarkable influence on the fusion community, leading to nine published papers with over 1,000 citations and inspiring the creation of several companies. In the fall 2025 version of this course, students were charged with comparing designs for stellarators with machines that relied on a different way of confining the plasma called magnetic mirrors.
After just a few months at MIT, Henneberg has been impressed with her students, calling them “highly motivated and a lot of fun to work with.” She’s confident that her research group will soon be making progress.
She is also happy to be affiliated with MIT’s Plasma Science and Fusion Center, which is internationally recognized as a leading university laboratory in this field. “It’s great to have so many experts [primarily in tokamaks] in one place that I can work with and learn from,” Henneberg says. “Because of my interest in hybrid reactors, my research will really benefit from all the expertise here on the tokamak side.”
Augmenting citizen science with computer vision for fish monitoring
Each spring, river herring populations migrate from Massachusetts coastal waters to begin their annual journey up rivers and streams to freshwater spawning habitat. River herring have faced severe population declines over the past several decades, and their migration is extensively monitored across the region, primarily through traditional visual counting and volunteer-based programs.
Monitoring fish movement and understanding population dynamics are essential for informing conservation efforts and supporting fisheries management. With the annual herring run getting underway this month, researchers and resource managers once again take on the challenge of counting and estimating the migrating fish population as accurately as possible.
A team of researchers from the Woodwell Climate Research Center, MIT Sea Grant, the MIT Computer Science and Artificial Intelligence Lab (CSAIL), MIT Lincoln Laboratory, and Intuit explored a new monitoring method using underwater video and computer vision to supplement citizen science efforts. The researchers — Zhongqi Chen and Linda Deegan from the Woodwell Climate Research Center, Robert Vincent and Kevin Bennett from MIT Sea Grant, Sara Beery and Timm Haucke from MIT CSAIL, Austin Powell from Intuit, and Lydia Zuehsow from MIT Lincoln Laboratory — published a paper describing this work in the journal Remote Sensing in Ecology and Conservation this February.
The open-access paper, “From snapshots to continuous estimates: Augmenting citizen science with computer vision for fish monitoring,” outlines how recent advancements in computer vision and deep learning, from object detection and tracking to species classification, offer promising real-world solutions for automating fish counting with improved efficiency and data quality.
Traditional monitoring methods are constrained by time, environmental conditions, and labor intensity. Volunteer visual counts are limited to brief daytime sampling windows, missing nighttime movement and short migration pulses, when hundreds of fish pass by within the span of a few minutes. While technologies like passive acoustic monitoring and imaging sonar have advanced continuous fish monitoring under certain conditions, the most promising and low-cost option — manual review of underwater video — is still labor-intensive and time-consuming. With the growing demand for automated video processing solutions, this study presents a scalable, cost-effective, and efficient deep learning-based system for reliable automated fish monitoring.
The team built an end-to-end pipeline, from in-field underwater cameras to video labeling and model training, to achieve automated, computer vision-powered fish counting. Videos were collected from three rivers in Massachusetts: the Coonamessett River in Falmouth, the Ipswich River in Ipswich, and the Santuit River in Mashpee.
To prepare the training dataset, the team selected video clips with variations in lighting, water clarity, fish species and density, time of day, and season to ensure that the computer vision model would work reliably across diverse real-world scenarios. They used an open-source web platform to manually label the videos frame-by-frame with bounding boxes to track fish movement. In total, they labeled 1,435 video clips and annotated 59,850 frames.
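As a simplified illustration of the kind of detect-track-count pipeline such a trained model enables, the sketch below runs a detector over each video frame and counts each tracked fish once as its centroid crosses a counting line. The detector and tracker are passed in as placeholder callables; none of this is the paper's actual code.

```python
import cv2

def count_crossings(video_path, detect, track, line_y=240):
    """Count tracked fish whose centroids cross line_y moving up the frame.

    detect(frame) -> list of bounding boxes (x, y, w, h)   [placeholder]
    track(boxes)  -> dict mapping track_id -> (cx, cy)     [placeholder]
    """
    cap = cv2.VideoCapture(video_path)
    last_y = {}        # previous centroid y-coordinate per track ID
    counted = set()    # track IDs already counted
    count = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        for fid, (cx, cy) in track(detect(frame)).items():
            # Image y decreases toward the top of the frame, so "moving up"
            # means the centroid's y value drops from above to below line_y.
            if fid in last_y and last_y[fid] > line_y >= cy and fid not in counted:
                counted.add(fid)
                count += 1
            last_y[fid] = cy
    cap.release()
    return count
```

In practice the counting line and the meaning of "upstream" depend on camera placement, and a full system along the lines described above would also need species classification and robustness to the lighting and water-clarity variation the team deliberately included in its training data.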
The researchers compared and validated the computer vision counts with human video reviews, stream-side visual counts, and data from passive integrated transponder (PIT) tagging. They concluded that models trained on diverse multi-site and multi-year data performed best and produced season-long, high-resolution counts consistent with traditionally established estimates. Going one step further, the system provided insights into migration behavior, timing, and movement patterns linked to environmental factors. Using video from the 2024 Coonamessett River migration, the system counted 42,510 river herring and revealed that upstream migration peaked at dawn, while downstream migration was largely nocturnal, with fish using darker, quieter periods to avoid predators.
With this real-world application, the researchers aim to advance computer vision in fisheries management and provide a framework and best practices for integrating the technology into conservation efforts for a wide range of aquatic species. “MIT Sea Grant has been funding work on this topic for some time now, and this excellent work by Zhongqi Chen and colleagues will advance fisheries monitoring capabilities and improve fish population assessments for fisheries managers and conservation groups,” Vincent says. “It will also provide education and training for students, the public, and citizen science groups in support of the ecologically and culturally important river herring populations along our coasts.”
Still, continued traditional monitoring is essential for maintaining consistency in long-term datasets until fisheries management agencies fully implement automated counting systems. Even then, computer vision and citizen science should be seen as complementary. Volunteers will be necessary for camera maintenance and for contributing directly to the computer vision workflow, from video annotation to model verification. The researchers envision that integrating citizen observations and computer vision-generated data will help create a more comprehensive and holistic approach to environmental monitoring.
This work was funded by MIT Sea Grant, with additional support provided by the Northeast Climate Adaptation Science Center, an MIT Abdul Latif Jameel Water and Food Systems seed grant, the AI and Biodiversity Change Global Center (supported by the National Science Foundation and the Natural Sciences and Engineering Research Council of Canada), and the MIT Undergraduate Research Opportunities Program.
EFF Sues for Answers About Medicare's AI Experiment
SAN FRANCISCO – The Electronic Frontier Foundation (EFF) today filed a Freedom of Information Act (FOIA) lawsuit against the Centers for Medicare & Medicaid Services (CMS) seeking records about a multi-state program that is using AI to evaluate requests for medical care.
"Tasking an algorithm with making determinations about treatment can create unwarranted—and even discriminatory—delays or denials of necessary medical care," said Kit Walsh, EFF’s Director of AI and Access-to-Knowledge Legal Projects. "Given these serious risks, the public requires transparency that it hasn't gotten. We're suing to get badly needed answers about how Medicare's AI experiment works."
Announced by CMS Administrator Dr. Mehmet Oz last year, the pilot program known as WISeR (Wasteful and Inappropriate Service Reduction) uses AI to assess prior authorization requests from Medicare beneficiaries. Previously rare in original Medicare, prior authorization requires medical providers to obtain advance approval from a patient’s health insurer before delivering certain treatments or services as a condition of coverage.
Unfortunately, there is little information about how the AI algorithms used in WISeR work, including what training data they rely on. It remains unclear whether WISeR has any safeguards against systemic flaws such as algorithmic bias, privacy violations, and wrongful denials of care.
Healthcare experts, care providers, and lawmakers have all raised alarms that WISeR's reliance on AI may cause serious harm to patients unless it has the necessary safeguards. Despite this widespread criticism, WISeR was rolled out in six states in January, potentially affecting as many as 6.4 million Medicare beneficiaries, according to one estimate.
By design, WISeR incentivizes contracted companies to deny prior approval against the best interests of patients. Vendors are compensated based, in part, on the volume of healthcare services they deny, and are entitled to as much as 20 percent of the associated savings. Just weeks after WISeR's launch, hospitals and other health care providers started reporting delays in care approval, communication gaps, and administrative strain.
Earlier this year, EFF submitted a FOIA request to CMS asking for records related to WISeR. Among other records, the request sought agreements with software vendors participating in WISeR; records related to any tests for accuracy, bias, or hallucinations in vendors' technology; and records related to any audits, monitoring, or evaluation of WISeR and participating vendors. To date, CMS has not provided any of these records to EFF. EFF's FOIA lawsuit asks for their immediate processing and release.
"The public has a right to know more about the algorithms driving decisions around their healthcare," said Tori Noble, Staff Attorney at EFF. "Without greater transparency, patients, providers, and policymakers will continue to be left in the dark.”
EFF thanks Stanford Law School's Juelsgaard Intellectual Property & Innovation Clinic for their help in preparing this lawsuit.
For the complaint: https://www.eff.org/document/complaint-eff-v-cms-medicare-wiser-foia
Why solid-state batteries keep short-circuiting
Batteries that use a solid material as their charge-carrying electrolyte could potentially be a safer and far more energy-dense alternative to lithium-ion batteries. However, these solid-state batteries have been plagued by the formation of metal filaments called dendrites that crack the electrolyte and cause the cells to short-circuit.
The problem has so far prevented such batteries from becoming a major player in energy storage. But now, research from MIT could finally help engineers find a way to get past this hurdle.
For decades, many researchers have treated dendrites as largely the result of mechanical stress — like cracks that form on the sidewalk when a tree root grows underneath. But MIT engineers have discovered the exact opposite: Faster dendrite growth was associated with lower stress levels in a commonly used battery electrolyte material. Using a new technique that allowed them to directly measure the stress around growing dendrites, the researchers found cracks formed at stress levels as low as 25 percent of what would be expected under mechanical stress alone.
The experiments, published in Nature today, instead revealed another culprit: chemical reactions caused by high electrical currents that weaken the electrolyte and make it more susceptible to dendrite growth. Researchers had previously proposed that such reactions cause dendrite growth, but the new study provides the first experimental data on the interplay between chemical and mechanical stress in dendrite formation.
“Direct measurement techniques allowed us to see how tough the material is as we cycle the cell,” says Cole Fincher, the paper’s first author and an MIT PhD student in materials science and engineering. “What we saw was that if you just test the ceramic electrolyte on the benchtop, it’s about as tough as your tooth. But during charging, it gets a lot weaker — closer to the brittleness of a lollipop.”
The findings reveal why developing stronger electrolytes alone hasn’t solved the decades-old dendrite problem. They also point to the importance of developing more chemically stable materials to finally fulfill the promise of high-density solid-state batteries.
“There’s a large community of researchers that are constantly trying to discover and design better solid electrolytes to enable the solid-state battery,” says senior author Yet-Ming Chiang, MIT’s Kyocera Professor of Materials Science and Engineering. “This study provides guidance in those efforts. We discovered a new mechanism by which these dendrites grow, allowing us to explore ways to design around it to make solid-state batteries successful.”
Joining Fincher and Chiang on the paper are MIT PhD student Colin Gilgenbach; Thermo Fisher Scientific scientists Christian Roach and Rachel Osmundsen; MIT.nano researcher Aubrey Penn; MIT Toyota Professor in Materials Processing W. Craig Carter; MIT Kyocera Professor of Materials Science and Engineering James LeBeau; University of Michigan Professor Michael Thouless; and Brown University Professor Brian W. Sheldon.
Measuring stress
Dendrites have presented a major roadblock to battery development since the 1970s. One reason lithium-ion batteries have become ubiquitous while other approaches have stalled is that their commonly used graphite anodes are less susceptible to dendrite formation. That’s a shame, because solid-state batteries that use lithium metal as an anode and a solid electrolyte could theoretically store far more energy in the same-sized package with less weight. They could thus enable longer-lasting phones and laptops, or electric cars with double the range of today’s options.
“There’s no more energy-dense form of lithium than lithium metal,” Chiang says. “But the dendrite problem has limited progress with solid-state batteries.”
Lithium metal is soft like taffy. Fincher, who has been studying the dendrite problem in the labs of Chiang and Carter, says one puzzle is how such a soft material can penetrate into the hard electrolyte materials being explored for use in solid-state batteries.
“The ceramics that have been used in these applications are stiff, like a coffee mug, so it’s been hoped that solid-state batteries would stop this relatively soft dendrite from growing,” Fincher explains.
Believing that mechanical stress causes dendrites, scientists have worked to develop stronger electrolytes that can withstand more mechanical stress. Some researchers have proposed that chemical reactions play a role in dendrite formation, but how those reactions worked with mechanical stress was not known.
For their Nature study, the researchers set out to directly observe mechanical and chemical changes in a commonly used solid-state electrolyte material as dendrites grew. Solid-state batteries are typically organized like a sandwich, which makes it hard to look inside the middle electrolyte layer. For their first experiment, the researchers developed a special solid-state battery cell in which the ceramic layers can be observed from the side, allowing the researchers to watch dendrite growth occurring in the electrolyte.
The researchers also used a measurement technique called birefringence microscopy to precisely measure the stress around the dendrite, which Fincher developed as part of his PhD thesis.
“It works the same way as polarized sunglasses when you look at something like a windshield,” Fincher explains of the technique. “When light comes through, residual stresses in the glass enable light of some orientations to pass faster than others, and that can give rise to observable rainbow patterns. These patterns can be used to measure stress.”
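The physics behind such measurements is standard photoelasticity: the stress-optic law relates the induced birefringence to the in-plane principal stresses (a general relation, not a formula quoted from the paper):

$$\Delta n = C\,(\sigma_1 - \sigma_2)$$

Here $\Delta n$ is the difference in refractive index between the two polarization directions, $\sigma_1$ and $\sigma_2$ are the principal stresses, and $C$ is the material's stress-optic coefficient, so measuring the optical retardation yields the local stress difference.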
The technique gave the researchers a way to both visualize and quantify stress around actively growing dendrites for the first time, leading to the unexpected findings.
“Normally you would expect that the faster a dendrite grows, the more stress it creates,” Chiang says. “Instead, we observed exactly the opposite. The faster it grew, the lower the stress around it, meaning the solid electrolyte is breaking under a lower stress, and therefore it’s been embrittled.”
In fact, the dendrites grew at stress levels far weaker than expected. Fincher describes the weaker electrolyte as electrochemically corroded.
“Imagine you test a piece of glass one day, and the next day it’s only a quarter as strong,” Chiang says. “It was very surprising.”
Led by LeBeau, the researchers then cooled the electrolyte to extremely low temperatures and applied a powerful imaging technique called cryogenic scanning transmission electron microscopy that allowed them to study the area around the dendrite on nearly atomic scales. The imaging revealed that the passage of ionic current through the material had caused chemical reactions that made it more brittle.
“The electric current drives the flow of lithium ions through the solid electrolyte,” Chiang explains. “That causes a highly concentrated flow of lithium ions at the dendrite tip. We believe that leads to a chemical reduction of the material compound, which leads to its decomposition into new phases. You start with a crystalline phase of the electrolyte, then there’s a volume contraction after the decomposition that is consistent with the embrittlement we see.”
Toward better batteries
The experiment was done on one of the most stable electrolytes used in solid-state batteries, making the researchers confident the findings will carry over to other electrolyte materials.
“This tells us we have to look for electrolyte materials that are even more stable, especially when in contact with lithium metal, which chemically speaking is very reducing,” Chiang says. “This will help direct the search for new materials.”
For instance, Chiang says now that they understand more about the chemical changes causing embrittlement, researchers could explore materials that actually get tougher as cracks grow.
The researchers say it will take more work to figure out what electrochemical reactions are taking place to make the electrolyte so much weaker. But they say their approach for directly observing stresses could also help improve materials for use in devices like fuel cells and electrolyzers.
The work was supported by the Center for Mechano-Chemical Understanding of Solid Ionic Conductors, an Energy Frontier Research Center funded by the Department of Energy; the National Science Foundation; and Fincher’s National Defense Science and Engineering Graduate Fellowship from the Department of Defense. It was carried out using MIT.nano facilities.
