Feed aggregator
MIT engineers develop a magnetic transistor for more energy-efficient electronics
Transistors, the building blocks of modern electronics, are typically made of silicon. Because it’s a semiconductor, this material can control the flow of electricity in a circuit. But silicon has fundamental physical limits that restrict how compact and energy-efficient a transistor can be.
MIT researchers have now replaced silicon with a magnetic semiconductor, creating a magnetic transistor that could enable smaller, faster, and more energy-efficient circuits. The material’s magnetism strongly influences its electronic behavior, leading to more efficient control of the flow of electricity.
The team used a novel magnetic material and an optimization process that reduces the material’s defects, which boosts the transistor’s performance.
The material’s unique magnetic properties also allow for transistors with built-in memory, which would simplify circuit design and unlock new applications for high-performance electronics.
“People have known about magnets for thousands of years, but there are very limited ways to incorporate magnetism into electronics. We have shown a new way to efficiently utilize magnetism that opens up a lot of possibilities for future applications and research,” says Chung-Tao Chou, an MIT graduate student in the departments of Electrical Engineering and Computer Science (EECS) and Physics, and co-lead author of a paper on this advance.
Chou is joined on the paper by co-lead author Eugene Park, a graduate student in the Department of Materials Science and Engineering (DMSE); Julian Klein, a DMSE research scientist; Josep Ingla-Aynes, a postdoc in the MIT Plasma Science and Fusion Center; Jagadeesh S. Moodera, a senior research scientist in the Department of Physics; and senior authors Frances Ross, TDK Professor in DMSE, and Luqiao Liu, an associate professor in EECS and a member of the Research Laboratory of Electronics; as well as others at the University of Chemistry and Technology in Prague. The paper appears today in Physical Review Letters.
Overcoming the limits
In an electronic device, silicon semiconductor transistors act like tiny light switches that turn a circuit on and off, or amplify weak signals in a communication system. They do this using a small input voltage.
But a fundamental physical limit of silicon semiconductors prevents a transistor from operating below a certain voltage, which hinders its energy efficiency.
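The limit alluded to here is presumably the familiar thermionic constraint on how sharply a conventional transistor can switch, usually quoted as the room-temperature floor on subthreshold swing:

SS_min = (k_B T / q) · ln(10) ≈ 60 mV per decade of current at T ≈ 300 K,

meaning the gate voltage must swing by at least roughly 60 millivolts for every tenfold change in current, no matter how the silicon device is engineered.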
To make more efficient electronics, researchers have spent decades working toward magnetic transistors that utilize electron spin to control the flow of electricity. Electron spin is a fundamental property that enables electrons to behave like tiny magnets.
So far, scientists have mostly been limited to using certain magnetic materials. These lack the favorable electronic properties of semiconductors, constraining device performance.
“In this work, we combine magnetism and semiconductor physics to realize useful spintronic devices,” Liu says.
The researchers replace the silicon in the surface layer of a transistor with chromium sulfur bromide, a two-dimensional material that acts as a magnetic semiconductor.
Due to the material’s structure, researchers can switch between two magnetic states very cleanly. This makes it ideal for use in a transistor that smoothly switches between “on” and “off.”
“One of the biggest challenges we faced was finding the right material. We tried many other materials that didn’t work,” Chou says.
They discovered that changing these magnetic states modifies the material’s electronic properties, enabling low-energy operation. And unlike many other 2D materials, chromium sulfur bromide remains stable in air.
To make a transistor, the researchers pattern electrodes onto a silicon substrate, then carefully align and transfer the 2D material on top. They use tape to pick up a tiny piece of material, only a few tens of nanometers thick, and place it onto the substrate.
“A lot of researchers will use solvents or glue to do the transfer, but transistors require a very clean surface. We eliminate all those risks by simplifying this step,” Chou says.
Leveraging magnetism
This lack of contamination enables their device to outperform existing magnetic transistors. Most others can only create a weak magnetic effect, changing the flow of current by a few percent or less. Their new transistor can switch or amplify the electric current by a factor of 10.
They use an external magnetic field to change the magnetic state of the material, switching the transistor using significantly less energy than would usually be required.
The material also allows them to control the magnetic states with electric current. This is important because engineers cannot apply magnetic fields to individual transistors in an electronic device. They need to control each one electrically.
The material’s magnetic properties could also enable transistors with built-in memory, simplifying the design of logic or memory circuits.
A typical memory device has a magnetic cell to store information and a transistor to read it out. Their method can combine both into one magnetic transistor.
“Now, not only are transistors turning on and off, they are also remembering information. And because we can switch the transistor with greater magnitude, the signal is much stronger so we can read out the information faster, and in a much more reliable way,” Liu says.
Building on this demonstration, the researchers plan to further study the use of electrical current to control the device. They are also working to make their method scalable so they can fabricate arrays of transistors.
This research was supported, in part, by the Semiconductor Research Corporation, the U.S. Defense Advanced Research Projects Agency (DARPA), the U.S. National Science Foundation (NSF), the U.S. Department of Energy, the U.S. Army Research Office, and the Czech Ministry of Education, Youth, and Sports. The work was partially carried out at the MIT.nano facilities.
Traffic Violation! License Plate Reader Mission Creep Is Already Here
A new report from 404 Media sheds light on how automated license plate readers (ALPRs) could be used beyond the press releases and glossy marketing materials put out by law enforcement agencies and ALPR vendors. In December 2025, Georgia State Patrol ticketed a motorcyclist for holding a cell phone in his hand. According to the report, the ticket read, “CAPTURED ON FLOCK CAMERA 31 MM 1 HOLDING PHONE IN LEFT HAND.”
If you’re thinking that this sounds outside of the scope of what ALPRs are supposed to do, you’re right. In November 2025, Flock Safety, the maker of the ALPR in question, wrote a post about how they definitely are in compliance with the Fourth Amendment to the U.S. Constitution. In this post, which highlighted what ALPRs are and what they are not, the company writes: “What it is not: Flock ALPR does not perform facial recognition, does not store biometrics, cannot be queried to find people, and is not used to enforce traffic violations.” (emphasis added)
Well, apparently Flock’s customers never got the memo, and the technology’s design, it seems, does not explicitly prevent the behavior the company officially and publicly disavows.
Or at least this used to be the case: Flock now lists six different companies providing traffic enforcement technology on its “Partner program” site. Public records also show that speed enforcement cameras have been connected to Flock's ALPR network.
EFF and other privacy advocates have long warned about mission creep when it comes to surveillance infrastructure. Police often swear that a piece of technology will be used only in a particular set of circumstances or to fight the most serious crimes, only to turn around and use it against petty offenses or to watch protests.
We continue to urge cities, states, and even companies to end their relationships with Flock Safety, because the mass surveillance it enables is incompatible with protecting civil liberties, including guarding against mission creep.
Supreme Court Agrees With EFF: ISPs Don't Have To Be Copyright Enforcers
If your ISP can be held liable for huge amounts of money for not terminating your access to the internet because of accusations that you—or someone in your household or college network—has committed copyright infringement, that is dangerous. We live in a world where high-speed internet access is a necessity for participation in everyday life. That’s why liability for ISPs for their customers’ actions should not be expanded.
Last fall, EFF filed an amicus brief urging the U.S. Supreme Court to reject an expansive theory of secondary copyright liability that threatened to impose massive damages on internet service providers and other technology companies simply for offering widely used services. Yesterday, the Court agreed.
In Cox v. Sony, the Court reversed a Fourth Circuit decision that had upheld a billion-dollar verdict against internet provider Cox Communications. Writing for the majority, Justice Thomas explained that contributory liability is limited to two situations: when a defendant actively induces infringement, or when it provides a product or service that it knows is tailored for infringement.
This framework closely tracks the approach EFF urged in our amicus brief. As we explained, courts should look to patent law for guidance in defining the boundaries of secondary copyright liability. Patent law recognizes liability where a defendant actively induces infringement, or distributes a product knowing that it lacks substantial non-infringing uses. The Court’s opinion adopts that same basic structure.
EFF also emphasized the broader public interest at stake in preserving these limits. Expansive theories of secondary liability do not just affect large internet providers. They can chill innovation, threaten smaller technology companies, and undermine the development of general-purpose tools that millions of people rely on for lawful speech, creativity, education, and access to information. When liability turns on generalized knowledge that some users may infringe, service providers face pressure to over-police user activity or withdraw useful services altogether.
The Court also made clear that mere knowledge that some customers use a service to infringe is not enough. Copyright holders must show that the provider intended its service to be used for infringement. That intent can be established only through active inducement or by showing that the service is specifically designed for unlawful uses—not simply because the service provider failed to take affirmative steps to prevent infringement.
Applying this standard, the Court held that Cox could not be liable. There was no evidence that Cox encouraged or promoted infringement. The record instead showed that Cox implemented warning systems, suspended service, and in some cases terminated accounts in an effort to discourage unlawful activity.
Nor was Cox’s internet access service tailored to infringement. The Court emphasized that general-purpose internet connectivity is capable of substantial lawful uses. Treating the provision of such services as contributory infringement would improperly expand secondary liability beyond the limits recognized in prior Supreme Court decisions.
The Court also rejected the Fourth Circuit’s broader rule that supplying a service with knowledge it may be used to infringe is itself sufficient for liability. That theory conflicts with decades of precedent warning against imposing copyright liability based solely on knowledge or a failure to take additional preventive steps.
EFF is pleased with yesterday’s opinion. We will continue to advocate for the public’s ability to build, use, and innovate with new technologies.
Link to our amicus brief:
https://www.eff.org/document/us-s-ct-cox-v-sony-eff-et-al-amicus-brief
Link to the opinion:
https://www.supremecourt.gov/opinions/25pdf/24-171_bq7d.pdf
Implantable islet cells could control diabetes without insulin injections
Most patients with type 1 diabetes must carefully monitor their blood sugar levels and inject insulin multiple times per day to help keep their blood sugar from getting too high.
As a possible alternative to those injections, MIT researchers are developing an implantable device that contains insulin-producing cells. The device encapsulates the cells, protecting them from immune rejection, and it also carries an on-board oxygen generator to keep the cells healthy.
This device, the researchers hope, could offer a way to achieve long-term control of type 1 diabetes. In a new study, they showed that these encapsulated pancreatic islet cells could survive in the body for at least 90 days. In mice that received the implants, the cells remained functional and produced enough insulin to control the animals’ blood sugar levels.
“Islet cell therapy can be a transformative treatment for patients. However, current methods also require immune suppression, which for some people can be really debilitating,” says Daniel Anderson, a professor in MIT’s Department of Chemical Engineering and a member of MIT’s Koch Institute for Integrative Cancer Research and Institute for Medical Engineering and Science. “Our goal is to find a way to give patients the benefit of cell therapy without the need for immune suppression.”
Anderson is the senior author of the study, which appears today in the journal Device. Former MIT research scientist Siddharth Krishnan, who is now an assistant professor of electrical engineering at Stanford University, and former MIT postdoc Matthew Bochenek are the lead authors of the paper. Robert Langer, the David H. Koch Institute Professor at MIT, is also a co-author.
Insulin on demand
Islet cell transplantation has already been used successfully to treat diabetes in patients. Those islet cells typically come from human cadavers or, more recently, can be generated from stem cells. In either case, patients must take immunosuppressive drugs to prevent their immune system from rejecting the transplanted cells.
Another way to prevent immune rejection is to encapsulate cells in a protective device. However, this raises new challenges, as the coating that surrounds the cells can prevent them from receiving enough oxygen.
In a 2023 study, Anderson and his colleagues reported an islet-encapsulation device that also carries an on-board oxygen generator. This generator consists of a proton-exchange membrane that can split water vapor (found abundantly in the body) into hydrogen and oxygen. The hydrogen diffuses harmlessly away, while oxygen goes into a storage chamber that feeds the islet cells through a thin, oxygen-permeable membrane.
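For context, the standard proton-exchange-membrane water-splitting half-reactions (the article does not spell out the device’s exact electrode chemistry) are:

Anode: 2 H2O → O2 + 4 H+ + 4 e−
Cathode: 4 H+ + 4 e− → 2 H2

with protons carried across the membrane and electrons supplied by the device’s electronics.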
Cells encapsulated within this device, they found, could produce insulin for up to a month after being implanted in mice.
“A month is a good timeframe in that it shows basic proof-of-concept. But from a translational standpoint, it’s important to show that you can go quite a bit longer than that,” Krishnan says.
In the new study, the researchers increased the lifespan of the devices by making them more waterproof and more resilient to cracking. They also improved the device electronics to deliver more power to the oxygen generator. The implant is powered wirelessly by an external antenna placed on the skin, which transfers energy to the device. By optimizing the circuitry, the researchers were able to increase the amount of power reaching the oxygen-generating system.
The additional power allowed the device to produce more oxygen, helping the encapsulated cells to survive and function more effectively. As a result, the cells were able to generate much more insulin over time.
Protein factories
In studies in rats and mice, the researchers showed that the new device could function for at least 90 days after being implanted under the skin. During this time, donor islet cells were able to produce enough insulin to keep the animals’ blood sugar levels within a healthy range.
The researchers saw similar results with islet cells derived from induced pluripotent stem cells, which could one day provide an indefinite supply that could be used for any patient who needs them. These islets didn’t fully reverse diabetes, but they did achieve some control of blood sugar levels.
“We’re hoping that in the future, if we can give the cells a little bit longer to fully mature, that they’ll secrete even more insulin to better regulate diabetes in the animals,” Bochenek says.
The researchers now plan to study whether they can get the devices to last for even longer in the body — up to two years, or longer.
“Long-term survival of the islets is an important goal,” Anderson says. “The cells, if they’re in the right environment, seem to be able to survive for a long time. We are excited by the duration we’ve already achieved, and we will be working to extend their function as long as possible.”
The researchers are also exploring the possibility of using this approach to deliver cells that could produce other useful proteins, such as antibodies, enzymes, or clotting factors.
“We think that these technologies could provide a long-term way to treat human disease by making drugs in the body instead of outside of the body,” Anderson says. “There are many protein therapies where patients must receive repeated, lengthy infusions. We think it may be possible to create a device that could continuously create protein therapeutics on demand and as needed by the patient.”
The research was funded, in part, by Breakthrough T1D, the Leona M. and Harry B. Helmsley Charitable Trust, the National Institutes of Health, and a Koch Institute Support (core) Grant from the National Cancer Institute.
Study reveals why some cancer therapies don’t work for all patients
Drugs that block enzymes called tyrosine kinases are among the most effective targeted therapies for cancer. However, they typically work for only 40 to 80 percent of the patients who would be expected to respond to them.
In a new study, MIT researchers have figured out why those drugs don’t work in all cases: Many of these tumors have turned on a backup survival pathway that helps them keep growing when the targeted pathway is knocked out.
“This seems to be hardwired into the cells and seems to be providing activation of a critical survival pathway in cancer cells,” says Forest White, the Ned C. and Janet C. Rice Professor of Biological Engineering at MIT. “This pathway allows the cells to be resistant to a wide variety of therapies, including chemotherapies.”
Additionally, the researchers found that they could kill those drug-resistant cancer cells by treating with both a tyrosine kinase inhibitor and a drug that targets the backup pathway. Clinical trials are now underway to test one such combination in lung cancer patients.
White is the senior author of the study, which appears this week in the Proceedings of the National Academy of Sciences. Cameron Flower PhD ’24, who is now a postdoc at Dana-Farber Cancer Institute and Boston Children’s Hospital, is the paper’s lead author.
Tumor survival
Tyrosine kinases are involved in many signaling pathways that allow cells to receive input from the external environment and convert it into a response such as growing or dividing. There are about 90 types of these kinases in human cells, and many of them are overactive in cancer cells.
“These kinases are very important for regulating cell growth and mitosis, and pushing the cell from a nondividing state to a dividing state depends on the activity of a lot of different tyrosine kinases,” Flower says. “We see a lot of mutations and overexpression of these kinases in cancer cells.”
These cancer-associated kinases include EGFR and BCR-ABL. Many cancer drugs targeting these kinases, including imatinib (Gleevec), have been approved to treat leukemia and other cancers. However, these drugs are not effective for all of the patients whose tumors overexpress tyrosine kinases — a phenomenon that has puzzled cancer researchers.
That lower-than-expected success rate motivated the MIT team to look into these drugs and try to figure out why some tumors do not respond to them.
For this study, the researchers examined six different cancer cell lines, which originally came from lung cancer patients. They chose two cell lines with EGFR mutations, two with mutations in a tyrosine kinase called MET, and two with mutations in a tyrosine kinase called ALK. Each pair included one line that responded well to the tyrosine kinase inhibitor targeting the overactive pathway and one line that did not.
Using a technique called phosphoproteomics, the researchers were able to analyze the signaling pathways that were active in each of the cells, before and after treatment. Phosphoproteomics is used to identify proteins that have had a phosphate group added to them by a kinase. This process, known as phosphorylation, can activate or deactivate the target protein.
The researchers’ analysis revealed that the drugs were working as intended in all of the cancer cells. Even in resistant cells, the drugs did knock out signaling by their target kinase. However, in the cells that were resistant, an alternative network was already turned on, which helped the cells survive in spite of the treatment.
“Even before the therapy begins, the cells are in a state that intrinsically is resistant to the drug,” Flower says.
This survival network consists of signaling pathways regulated by another class of kinases, known as SRC family kinases. Activation of this network appears to help cancer cells proliferate and possibly migrate to new locations in the body. Beyond lung cancer, researchers from White’s lab have also found SRC family kinases activated in melanoma cells, where they likewise play a role in drug resistance, and in glioblastoma, a type of brain cancer.
“As inhibitors for SRC kinases are also drugs, the work suggests that combining inhibitors of driver oncogenes with SRC inhibitors could increase the number of patients who would benefit. This strategy merits testing in new clinical trials,” says Benjamin Neel, a professor of medicine at NYU Grossman School of Medicine, who was not involved in the study.
These findings might also explain why some patients who initially respond to tyrosine kinase inhibitors end up having their tumors recur later; the cells may end up activating this same survival pathway, but not until sometime after the initial treatment.
Combating resistance
The researchers also found that treating the resistant cells with both a tyrosine kinase inhibitor and a drug that inhibits SRC family kinases led to much greater cell death rates. By coincidence, a clinical trial testing the combination of a tyrosine kinase inhibitor called osimertinib and an SRC inhibitor is now underway in patients with lung cancer. The MIT team now hopes to work with the same drug company to run a similar trial in pancreatic cancer patients.
The researchers also showed that they could use phosphoproteomics to analyze patient biopsy samples to see which cells already have the SRC pathways turned on.
“We are really excited to watch these clinical trials and to see how well patients do on these combinations. And I really think there’s a future for using tyrosine phosphoproteomics to guide this clinical decision-making,” White says.
This therapy might also be useful for patients whose tumors are originally susceptible to tyrosine kinase inhibitors but then later become resistant by turning on SRC pathways.
“Among the sensitive cells, some of them are able to upregulate this survival pathway and survive, which might be the residual disease that’s still there after treatment,” White says. “One of the interesting avenues here is, could we improve therapy for almost everybody, regardless of whether their tumors have intrinsic or adaptive resistance?”
The research was funded by the National Institutes of Health and the MIT Center for Precision Cancer Medicine.
“Near-misses” in particle accelerators can illuminate new physics, study finds
Particle accelerators reveal the heart of nuclear matter by smashing together atoms at close to the speed of light. The high-energy collisions produce a shower of subatomic fragments that scientists can then study to reconstruct the core building blocks of matter.
An MIT-led team has now used the world’s most powerful particle accelerator to discover new properties of matter, through particles’ “near-misses.” The approach has turned the particle accelerator into a new kind of microscope — and led to the discovery of new behavior in the forces that hold matter together.
In a study appearing this week in the journal Physical Review Letters, the team reports results from the Large Hadron Collider (LHC) — a massive underground, ring-shaped accelerator in Geneva, Switzerland. Rather than focus on the accelerator’s particle collisions, the MIT team searched for instances when particles barely glanced by each other.
When particles travel at close to the speed of light, they are surrounded by an electromagnetic halo that flattens when particles pass close but don’t collide. The pancaked energy fields produce extremely high-energy photons. Occasionally, a photon from one particle can ping off another particle, like an intense, quantum-sized pinprick of light.
The MIT team was able to pick out such near-miss pinpricks, or what scientists call “photonuclear interactions,” from the LHC’s particle-collision data. They found that when some photons pinged off a particle, they kicked out a type of subatomic particle, known as a D0 meson, that the scientists could measure for the first time.
D0 mesons are subatomic particles that contain a charm quark, a rare type of quark not normally found in ordinary nuclear matter. Quarks are the fundamental building blocks of all matter, and they are bound together by gluons, massless particles that act as the invisible glue, or “strong force,” holding matter together. The rare charm quarks can only be created in high-energy interactions. As such, they provide an especially clean, unambiguous probe of quarks and gluons inside a nucleus.
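In rough terms, the underlying process is the standard photon–gluon fusion picture, which the article does not write out explicitly:

γ + g → c + c̄

after which a charm quark picks up a light antiquark as it hadronizes, forming a D0 meson (quark content c ū).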
Through their measurements of D0 mesons, the researchers could estimate how tightly gluons are packed, and, essentially, how strong the strong force is within a particle’s nucleus.
“Our result gives an indication that when nuclear matter is squeezed together, then gluons start behaving in a funny way,” says lead author Gian Michele Innocenti, an assistant professor of physics at MIT. “We need to know how these gluons behave in these extreme conditions because gluons keep the universe together. And at this point, photonuclear interactions are the best way we have to study gluon behavior.”
The study’s co-authors include members of the CMS Collaboration — a global consortium of physicists who operate and maintain the Compact Muon Solenoid (CMS) experiment, one of the largest detectors at the LHC and the one used to collect the study’s data.
Bringing a “background” into focus
With each run, the Large Hadron Collider fires off needle-thin beams of particles in opposite directions around a 27-kilometer-long underground ring. When the beams cross paths, particles can collide. If the collisions happen to take place in a region of the ring where the CMS detector is set up, the detector can record the collisions, and scientists can then analyze the aftermath to reconstruct the fragments that make up the original particles.
Since the LHC began operations in 2008, the focus has been overwhelmingly on the detection and analysis of “head-on” collisions. Physicists have known that by accelerating particle beams, they would also produce photonuclear interactions — near-miss events where a particle might collide not with another particle, but with its cloud of photons. But such light-nucleus interactions were thought to be simply noise.
“These photonuclear events were considered a background that people wanted to cancel,” Innocenti says. “But now people want to use it as a signal because a collision between a photon and a nucleus can essentially be like a super-high-accuracy microscope for nuclear matter.”
When a photon pings off a particle, the abundance, direction, and energy of the produced D0 meson relates directly to the energy and density of the gluons in the nucleus. If scientists can detect and measure this photon interaction, it would be like using an extremely small and powerful flashlight to illuminate the nuclear structures. But until now, it was assumed that photonuclear interactions would be impossible to pick out amid the various physics processes that can occur in such collisions.
“People didn’t think it was possible to remove the huge mess of all these other collisions, to zoom in on single photons hitting single nuclei producing a D0 meson,” Innocenti says. “We had to devise a system to recognize those very rare photonuclear interactions while data was being taken of particle collisions.”
Illuminating charm
For their new study, Innocenti and his colleagues first simulated what a photonuclear interaction would look like amid a shower of other particle collisions. In particular, they simulated a scenario in which a photon pings off a nucleus and produces a D0 meson. Although these events are rare, D0 mesons are among the most abundant particles that contain a charm quark. The team reasoned that if they could detect signs of a charm quark in D0 mesons that are produced in a photonuclear interaction, it could give valuable information about the gluons that hold the nucleus together.
With their simulations, the researchers then developed an algorithm to detect photonuclear interactions. They implemented the algorithm at the CMS detector to search for signals in real-time during the LHC’s particle-colliding runs.
“We had to collect tens of billions of collisions in order to extract a few hundred of these rare instances where a photon hits a nucleus and produces one of these exotic D0 meson particles,” Innocenti explains.
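For a rough sense of scale, a few hundred selected events out of tens of billions of recorded collisions works out to roughly one usable photonuclear event per hundred million collisions, on the order of 10⁻⁸.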
From this enormous dataset, the team identified a clean sample of these rare events by exploiting CMS’s advanced detector capabilities to select near-miss events and reconstruct the properties of the D0 mesons.
Through this process, the team detected instances of D0 meson production and then worked back to calculate properties of the particles’ charm quarks and the gluons that would have held them together in the original nucleus.
“We are constraining what happens to gluons when they are squeezed inside ions that are very large and traveling very fast,” Innocenti says. “So far, our data confirms what people expect in terms of high-density nuclear matter. In reality, this is the first time we’ve shown this kind of measurement is feasible.”
The team is working to improve the measurement’s accuracy in order to provide a clearer picture of how quarks and gluons are arranged inside a nucleus.
“Gluons are a very strong force that keeps the universe together,” Innocenti says. “The description of the strong force is at the basis of everything we see in nature. Now we have a way to either fully confirm, or show deviations from, that description.”
This work was supported, in part, by the U.S. Department of Energy, including support from a DOE Early Career Research Program award, and it builds on the contributions of a large MIT team of graduate students, undergraduate researchers, scientists, and postdocs.
As the US Midterms Approach, AI Is Going to Emerge as a Key Issue Concerning Voters
In December, President Trump signed an executive order that neutered states’ ability to regulate AI by ordering his administration to both sue and withhold funds from states that try to do so. This action pointedly supported industry lobbyists keen to avoid any constraints on, or consequences for, their deployment of AI, while undermining the efforts of consumers, advocates, and industry associations concerned about AI’s harms who have spent years pushing for state regulation.
Trump’s actions have clarified the ideological alignments around AI within America’s electoral factions. They set down lines on a new playing field for the midterm elections, prompting members of his party, the opposition, and all of us to consider where we stand in the debate over how and where to let AI transform our lives...
Iran conflict cuts deep in places that can least afford it
Enviros decry state efforts to block climate lawsuits
Maryland energy bill would trade short-term gains for long-term pain
India unveils long-delayed climate targets as Iran war roils energy markets
Far from Hormuz, a second Middle East strait enters the crosshairs
Report: Energy recovery from Iran war could take years
FEMA official: No plans to cut agency staff despite earlier reports
Alberta and Canada reach deal on oil and gas methane emissions
JPMorgan exec calls out ‘vague’ carbon market contracts
The ferocity of the downpour that flooded Hawaii surprised meteorologists
Mexico bets on supercomputer to combat extreme weather events
AI system learns to keep warehouse robot traffic running smoothly
Inside a giant autonomous warehouse, hundreds of robots dart down aisles as they collect and distribute items to fulfill a steady stream of customer orders. In this busy environment, even small traffic jams or minor collisions can snowball into massive slowdowns.
To avoid such an avalanche of inefficiencies, researchers from MIT and the tech firm Symbotic developed a new method that automatically keeps a fleet of robots moving smoothly. Their method learns which robots should go first at each moment, based on how congestion is forming, and adapts to prioritize robots that are about to get stuck. In this way, the system can reroute robots in advance to avoid bottlenecks.
The hybrid system utilizes deep reinforcement learning, a powerful artificial intelligence method for solving complex problems, to figure out which robots should be prioritized. Then, a fast and reliable planning algorithm feeds instructions to the robots, enabling them to respond rapidly in constantly changing conditions.
In simulations inspired by actual e-commerce warehouse layouts, this new approach achieved about a 25 percent gain in throughput over other methods. Importantly, the system can quickly adapt to new environments with different numbers of robots or varied warehouse layouts.
“There are a lot of decision-making problems in manufacturing and logistics where companies rely on algorithms designed by human experts. But we have shown that, with the power of deep reinforcement learning, we can achieve super-human performance. This is a very promising approach, because in these giant warehouses even a 2 or 3 percent increase in throughput can have a huge impact,” says Han Zheng, a graduate student in the Laboratory for Information and Decision Systems (LIDS) at MIT and lead author of a paper on this new approach.
Zheng is joined on the paper by Yining Ma, a LIDS postdoc; Brandon Araki and Jingkai Chen of Symbotic; and senior author Cathy Wu, the Class of 1954 Career Development Associate Professor in Civil and Environmental Engineering (CEE) and the Institute for Data, Systems, and Society (IDSS) at MIT, and a member of LIDS. The research appears today in the Journal of Artificial Intelligence Research.
Rerouting robots
Simultaneously coordinating hundreds of robots in an e-commerce warehouse is no easy task.
The problem is especially complicated because the warehouse is a dynamic environment, and robots continually receive new tasks after reaching their goals. They need to be rapidly redirected as they leave and enter the warehouse floor.
Companies often leverage algorithms written by human experts to determine where and when robots should move to maximize the number of packages they can handle.
But if there is congestion or a collision, a firm may have no choice but to shut down the entire warehouse for hours to manually sort the problem out.
“In this setting, we don’t have an exact prediction of the future. We only know what the future might hold, in terms of the packages that come in or the distribution of future orders. The planning system needs to be adaptive to these changes as the warehouse operations go on,” Zheng says.
The MIT researchers achieved this adaptability using machine learning. They began by designing a neural network model to take observations of the warehouse environment and decide how to prioritize the robots. They trained this model using deep reinforcement learning, a trial-and-error method in which the model learns to control robots in simulations that mimic actual warehouses. The model is rewarded for making decisions that increase overall throughput while avoiding conflicts.
Over time, the neural network learns to coordinate many robots efficiently.
“By interacting with simulations inspired by real warehouse layouts, our system receives feedback that we use to make its decision-making more intelligent. The trained neural network can then adapt to warehouses with different layouts,” Zheng explains.
It is designed to capture the long-term constraints and obstacles in each robot’s path, while also considering dynamic interactions between robots as they move through the warehouse.
By predicting current and future robot interactions, the model plans to avoid congestion before it happens.
After the neural network decides which robots should receive priority, the system employs a tried-and-true planning algorithm to tell each robot how to move from one point to another. This efficient algorithm helps the robots react quickly in the changing warehouse environment.
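As a rough illustration only (none of this is the team’s code or Symbotic’s system), here is a minimal Python sketch of a prioritize-then-plan loop of the kind described above: a hand-written scoring function stands in for the trained neural network, and a simple greedy planner moves robots one grid cell at a time in priority order, reserving cells so that no two robots collide. The names and details (Robot, priority_score, plan_step) are illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Robot:
    name: str
    pos: tuple   # current (row, col)
    goal: tuple  # target (row, col)

def priority_score(robot, occupied):
    """Stand-in for the learned policy: favor robots that are hemmed in by
    neighbors (likely to get stuck) and close to their goals."""
    dist = abs(robot.pos[0] - robot.goal[0]) + abs(robot.pos[1] - robot.goal[1])
    crowded = sum(
        (robot.pos[0] + dr, robot.pos[1] + dc) in occupied
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
    )
    return crowded - 0.1 * dist

def plan_step(robots):
    """Classical planning stage: move each robot one cell toward its goal in
    priority order, reserving cells so no two robots end up in the same place."""
    occupied = {r.pos for r in robots}
    ordered = sorted(robots, key=lambda r: priority_score(r, occupied), reverse=True)
    reserved = set()  # cells already claimed for the next time step
    for r in ordered:
        dr = (r.goal[0] > r.pos[0]) - (r.goal[0] < r.pos[0])
        dc = (r.goal[1] > r.pos[1]) - (r.goal[1] < r.pos[1])
        step = (r.pos[0] + dr, r.pos[1]) if dr else (r.pos[0], r.pos[1] + dc)
        if step != r.pos and step not in reserved and step not in occupied:
            occupied.discard(r.pos)
            occupied.add(step)
            r.pos = step
        reserved.add(r.pos)

# Toy run: three robots on a small grid, a few planning steps.
robots = [Robot("A", (0, 0), (0, 3)), Robot("B", (1, 3), (1, 0)), Robot("C", (2, 1), (0, 1))]
for _ in range(6):
    plan_step(robots)
print([(r.name, r.pos) for r in robots])
```

In the actual system described in the article, a learned policy replaces the hand-written score and a full path-planning algorithm replaces the one-step greedy move, but the division of labor is the same: learning sets the priorities, and classical planning executes them.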
This combination of methods is key.
“This hybrid approach builds on my group’s work on how to achieve the best of both worlds between machine learning and classical optimization methods. Pure machine-learning methods still struggle to solve complex optimization problems, and yet it is extremely time- and labor-intensive for human experts to design effective methods. But together, using expert-designed methods the right way can tremendously simplify the machine learning task,” says Wu.
Overcoming complexity
Once the researchers trained the neural network, they tested the system in simulated warehouses that were different from those it had seen during training. Since industrial simulations were too inefficient for this complex problem, the researchers designed their own environments to mimic what happens in actual warehouses.
On average, their hybrid learning-based approach achieved 25 percent greater throughput than both traditional algorithms and a random search method, in terms of the number of packages delivered per robot. Their approach could also generate feasible robot path plans that overcame the congestion caused by traditional methods.
“Especially when the density of robots in the warehouse goes up, the complexity scales exponentially, and these traditional methods quickly start to break down. In these environments, our method is much more efficient,” Zheng says.
While their system is still far from real-world deployment, these demonstrations highlight the feasibility and benefits of using a machine-learning-guided approach in warehouse automation.
In the future, the researchers want to include task assignments in the problem formulation, since determining which robot will complete each task impacts congestion. They also plan to scale up their system to larger warehouses with thousands of robots.
This research was funded by Symbotic.
Championing fusion’s promising underdog
Like many people who end up going into physics, Sophia Henneberg had a hard time, when she was young, choosing between that discipline and mathematics. Both subjects came easily to her, and she — unlike many of her peers — thought they were fun. Henneberg grew up in a small town in central Germany, and it was not until one week before applying to college that she decided on physics, reasoning that it would still give her the chance to do plenty of math, while also affording opportunities to connect with a broad range of applications.
Midway through her undergraduate studies at Goethe University in Frankfurt, she started taking courses in plasma physics and almost instantly knew that she had found her niche. “Most of the visible material in the universe is in the form of hot, ionized gas called plasma, so studying that is really fundamental,” she says. “And there’s this amazing application, fusion, which has the potential to become an unlimited energy source.”
Early on, Henneberg resolved to try to make that potential a reality, and she’s been pursuing that goal at MIT since becoming the Norman Rasmussen Career Development Assistant Professor in the Department of Nuclear Science and Engineering in fall of 2025. Her research focus is on stellarators — a kind of fusion machine that has been overshadowed for many decades by another fusion device called the tokamak. Both of these machines rely on magnetic confinement — using powerful magnetic fields to compress a plasma into a tiny volume, causing some of the atoms within this dense cluster to fuse together, unleashing energy in the process. In the tokamak, the plasma assumes the shape of a donut. In a stellarator, the plasma is also contained within a rounded loop, only this one resembles a twisted donut.
As a PhD candidate at the University of York (in the United Kingdom), Henneberg studied the instabilities that can arise in tokamaks, where plasma temperatures often exceed 100 million degrees Celsius and currents induced within the plasma can attain speeds of roughly 100 kilometers per second. In such an ultra-extreme setting — more than six times hotter than the core of the sun — sudden surges of energy, leading to something akin to small-scale solar flares, can breach the magnetic cage enclosing the plasma, thereby disrupting the fusion process and possibly damaging the reactor itself. Henneberg started hearing about stellarators in her classes and, after a bit of research, she came to realize that “they could be much more stable if you design them in the right way.”
Striking a favorable balance
In 2016, she began a postdoctoral fellowship at the Max Planck Institute (MPI) for Plasma Physics in Greifswald, Germany, joining the Stellarator Theory Group. Greifswald may well have been the best place for her to carry out stellarator research, given that the world’s biggest and most advanced reactor of this type, Wendelstein 7-X (W7-X), was based there, and experiments were just starting in the year she arrived.
Her main assignment at MPI was to work on stellarator optimization, figuring out the best way to design the reactor to meet the engineering and physics goals — a task not unlike that of tuning a car to achieve maximum fuel efficiency or, for a racecar, maximum speed. Henneberg’s interest in optimization continues to this day, remaining central to her research agenda at MIT.
“If you want to design a stellarator, there are two principal components you can look at,” she says. The first relates to the shape of the boundary, or cage, into which the plasma will ultimately be confined. This shape is constrained by magnetic fields that are generated, in turn, by a series of superconducting coils that might range in number anywhere from around 4 to 50. In stellarators, the coils tend to be bent rather than circular. That gives rise to twists in the magnetic fields, but it also makes the coils more complicated and likely more expensive. Henneberg has come up with ways to simplify the optimization process — one of which involves designing the plasma boundary and the shape of the coils in the same step rather than looking at them separately.
“We’ve now reached the point where stellarator performances can exceed those of tokamaks, because we’re able to optimize them very well, but you have to put the effort in,” she says. “You can’t get good performance out of just any twisty donut.”
The best of both worlds
In a 2024 paper, Henneberg and her former Greifswald colleague, Gabriel Plunk, introduced the notion of a stellarator-tokamak hybrid reactor. The goal, they wrote, is both “simple and compelling: to combine the strengths of the two concepts into a single device” that outperforms either of the existing modes.
One of Henneberg’s major preoccupations at present is exploring ways of converting a tokamak into a stellarator, which basically entails adding just a few coils — of the bent variety — that can be turned on or off. “This can be an easy way for people in the tokamak community to think more about the possible benefits of the stellarator,” she says. While no one has yet built a hybrid, at least one university has secured funding to do so.
Interest in stellarators has been steadily mounting in recent years, a fact that delights Henneberg. When she started working in this area almost a decade ago, the field of stellarator optimization was tiny and there were very few people she could converse with. There’s much more research going on today, which means that more ideas are coming out, along with some exciting results. Commercial interest is growing as well, and Henneberg has been in contact with several stellarator startup companies, including Type One Energy and Thea Energy in the United States and Proxima Fusion and Gauss Fusion in Germany.
“It seems to me that most new startups these days are focusing on stellarators,” Henneberg says. “With so many companies now entering the field, it can seem like the technical issues involved in fusion are already solved, but there are still many interesting open questions. I’m working on improved designs that advance both the physics and the economic feasibility.”
That’s where her students come in. She believes that one part of her role as an MIT professor is to train the next generation of stellarator experts — people who will help, for instance, to design effective coils that are easy to make, as well as to improve reactor performance overall.
During her first term, she co-taught the renowned Fusion Design (22.63) course alongside MIT Professor Dennis Whyte. This course has had a remarkable influence on the fusion community, leading to nine published papers with over 1,000 citations and inspiring the creation of several companies. In the fall 2025 version of this course, students were charged with comparing designs for stellarators with machines that relied on a different way of confining the plasma called magnetic mirrors.
After just a few months at MIT, Henneberg has been impressed with her students, calling them “highly motivated and a lot of fun to work with.” She’s confident that her research group will soon be making progress.
She is also happy to be affiliated with MIT’s Plasma Science and Fusion Center, which is internationally recognized as a leading university laboratory in this field. “It’s great to have so many experts [primarily in tokamaks] in one place that I can work with and learn from,” Henneberg says. “Because of my interest in hybrid reactors, my research will really benefit from all the expertise here on the tokamak side.”
