Feed aggregator
MIT engineers develop a magnetic transistor for more energy-efficient electronics
Transistors, the building blocks of modern electronics, are typically made of silicon. Because it’s a semiconductor, this material can control the flow of electricity in a circuit. But silicon has fundamental physical limits that restrict how compact and energy-efficient a transistor can be.
MIT researchers have now replaced silicon with a magnetic semiconductor, creating a magnetic transistor that could enable smaller, faster, and more energy-efficient circuits. The material’s magnetism strongly influences its electronic behavior, leading to more efficient control of the flow of electricity.
The team used a novel magnetic material and an optimization process that reduces the material’s defects, which boosts the transistor’s performance.
The material’s unique magnetic properties also allow for transistors with built-in memory, which would simplify circuit design and unlock new applications for high-performance electronics.
“People have known about magnets for thousands of years, but there are very limited ways to incorporate magnetism into electronics. We have shown a new way to efficiently utilize magnetism that opens up a lot of possibilities for future applications and research,” says Chung-Tao Chou, an MIT graduate student in the departments of Electrical Engineering and Computer Science (EECS) and Physics, and co-lead author of a paper on this advance.
Chou is joined on the paper by co-lead author Eugene Park, a graduate student in the Department of Materials Science and Engineering (DMSE); Julian Klein, a DMSE research scientist; Josep Ingla-Aynes, a postdoc in the MIT Plasma Science and Fusion Center; Jagadeesh S. Moodera, a senior research scientist in the Department of Physics; and senior authors Frances Ross, TDK Professor in DMSE, and Luqiao Liu, an associate professor in EECS and a member of the Research Laboratory of Electronics; as well as others at the University of Chemistry and Technology in Prague. The paper appears today in Physical Review Letters.
Overcoming the limits
In an electronic device, silicon semiconductor transistors act like tiny light switches that turn a circuit on and off, or amplify weak signals in a communication system. They do this using a small input voltage.
But a fundamental physical limit of silicon semiconductors prevents a transistor from operating below a certain voltage, which hinders its energy efficiency.
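The voltage floor the article alludes to is commonly identified with the thermionic ("Boltzmann") limit on subthreshold swing: in a conventional transistor, turning the current up tenfold requires a gate-voltage change of at least (kT/q)·ln(10), about 60 mV per decade at room temperature. As a rough illustration (the article does not name the limit explicitly, and the function below is ours, not from the study):

```python
import math

def subthreshold_swing_limit(T=300.0):
    """Thermionic limit on subthreshold swing, SS = (kT/q) * ln(10):
    the minimum gate-voltage change (in mV) needed to change the drain
    current by a factor of 10 in a Boltzmann-limited transistor."""
    k = 1.380649e-23      # Boltzmann constant, J/K
    q = 1.602176634e-19   # elementary charge, C
    return (k * T / q) * math.log(10) * 1000  # mV per decade of current

print(f"{subthreshold_swing_limit():.1f} mV/decade at 300 K")  # → 59.5 mV/decade at 300 K
```

The limit scales with temperature, which is why it cannot be engineered away in silicon; devices that switch by another mechanism, such as a magnetic state change, are not bound by it.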
To make more efficient electronics, researchers have spent decades working toward magnetic transistors that utilize electron spin to control the flow of electricity. Electron spin is a fundamental property that enables electrons to behave like tiny magnets.
So far, scientists have mostly been limited to using certain magnetic materials. These lack the favorable electronic properties of semiconductors, constraining device performance.
“In this work, we combine magnetism and semiconductor physics to realize useful spintronic devices,” Liu says.
The researchers replace the silicon in the surface layer of a transistor with chromium sulfur bromide, a two-dimensional material that acts as a magnetic semiconductor.
Due to the material’s structure, researchers can switch between two magnetic states very cleanly. This makes it ideal for use in a transistor that smoothly switches between “on” and “off.”
“One of the biggest challenges we faced was finding the right material. We tried many other materials that didn’t work,” Chou says.
They discovered that changing these magnetic states modifies the material’s electronic properties, enabling low-energy operation. And unlike many other 2D materials, chromium sulfur bromide remains stable in air.
To make a transistor, the researchers pattern electrodes onto a silicon substrate, then carefully align and transfer the 2D material on top. They use tape to pick up a tiny piece of material, only a few tens of nanometers thick, and place it onto the substrate.
“A lot of researchers will use solvents or glue to do the transfer, but transistors require a very clean surface. We eliminate all those risks by simplifying this step,” Chou says.
Leveraging magnetism
This lack of contamination enables their device to outperform existing magnetic transistors. Most others can only create a weak magnetic effect, changing the flow of current by a few percent or less. Their new transistor can switch or amplify the electric current by a factor of 10.
They use an external magnetic field to change the magnetic state of the material, switching the transistor using significantly less energy than would usually be required.
The material also allows them to control the magnetic states with electric current. This is important because engineers cannot apply magnetic fields to individual transistors in an electronic device. They need to control each one electrically.
The material’s magnetic properties could also enable transistors with built-in memory, simplifying the design of logic or memory circuits.
A typical memory device has a magnetic cell to store information and a transistor to read it out. Their method can combine both into one magnetic transistor.
“Now, not only are transistors turning on and off, they are also remembering information. And because we can switch the transistor with greater magnitude, the signal is much stronger so we can read out the information faster, and in a much more reliable way,” Liu says.
Building on this demonstration, the researchers plan to further study the use of electrical current to control the device. They are also working to make their method scalable so they can fabricate arrays of transistors.
This research was supported, in part, by the Semiconductor Research Corporation, the U.S. Defense Advanced Research Projects Agency (DARPA), the U.S. National Science Foundation (NSF), the U.S. Department of Energy, the U.S. Army Research Office, and the Czech Ministry of Education, Youth, and Sports. The work was partially carried out at the MIT.nano facilities.
Meta’s AI Glasses and Privacy
Surprising no one, Meta’s new AI glasses are a privacy disaster.
I’m not sure what can be done here. This is a technology that will exist, whether we like it or not.
Meanwhile, there is a new Android app that detects when there are smart glasses nearby.
EPA tied its climate rollback to low oil prices. Then came the Iran war.
‘Alarm bells’: Early Western heat wave foreshadows future danger
Maryland Dems eye climate funds to offset utility bills
Corpus Christi water shortage deepens, threatening oil refining hub
Feds pump $540M into California’s crumbling canals
New York budget talks heat up, with climate law a key sticking point
Plaintiffs push back on Hochul’s climate arguments
New York sues solar company over alleged fraud
European Commission to revise carbon market reserve before ETS review
As winters warm, falling through ice becomes more common — and deadly
Triple-digit heat wave alters MLB spring training start times
Wind-triggered Antarctic sea-ice decline preconditioned by thinning Winter Water
Nature Climate Change, Published online: 18 March 2026; doi:10.1038/s41558-026-02601-4
Antarctic sea ice declined sharply between 2015 and 2017, and this study uses ocean observations and atmospheric data to determine contributing factors. The authors show that thinning of Winter Water in the previous decade, followed by strong winds, brought warm deep water into contact with sea ice.
John Ochsendorf named associate dean for research for the School of Architecture and Planning
Professor John Ochsendorf, a member of the MIT faculty since 2002, is taking on a new role in support of the research efforts of faculty and students in the MIT School of Architecture and Planning (SA+P). At the start of this year, Ochsendorf was appointed to lead an initiative strengthening research strategy, support, and funding across the school.
“John is a bridge-builder by instinct and practice, and we look forward to the bridges he will build between our school and industry, our school and MIT, and between research and pedagogy in our school,” says SA+P Dean Hashim Sarkis. The appointment comes as sponsored research across SA+P continues to grow, expanding opportunities for graduate research assistantships and interdisciplinary collaboration across MIT.
Ochsendorf is the Class of 1942 Professor with dual appointments in the departments of Architecture and Civil and Environmental Engineering in the MIT School of Engineering. At the center of his work is a deep commitment to students and education through research and making. For example, in close collaboration with students and alumni, he has contributed to projects ranging from the Sean Collier Memorial on campus to a recent Martin Puryear sculpture at Storm King Art Center. Since 2022, Ochsendorf has served as the founding director of the MIT Morningside Academy for Design, where he helped establish new models for design research, interdisciplinary collaboration, and student engagement across the Institute.
Ochsendorf describes the new role as both a “challenge and an opportunity” to support the considerable and increasingly broad portfolio of research across SA+P.
“We want to understand the current landscape of our research funding and identify the challenges and inefficiencies impacting faculty,” he notes. “The ultimate goal is to grow our research capacity for a world that needs the best ideas from MIT.”
The effort is consistent with SA+P’s history of pioneering research and pedagogic exploration. The Department of Architecture was among the first in the United States to establish doctoral programs within a school of architecture, including PhDs in history, theory, and criticism and in building technology. The Department of Urban Studies and Planning is home to the largest urban planning faculty in the country and maintains a variety of research labs, while the Program in Media Arts and Sciences and the Media Lab have a broad and deep research culture. Each of the school’s departments enjoys the advantage of operating within MIT’s culture of innovation and interdisciplinary study. As new faculty hires have been increasingly research-driven, the time for developing and supporting robust research portfolios is now.
Ochsendorf and his students’ research have bridged the spectrum from humanistic research supported by organizations such as the National Endowment for the Humanities and the Graham Foundation for Advanced Studies in the Fine Arts to more scientific research supported by the National Science Foundation. In his new role, he will build on that experience to work with faculty and Institute partners to strengthen grant development, clarify research priorities, and expand research capacity across SA+P.
“I’ve always loved being at MIT because of the team spirit here,” says Ochsendorf. “We’re a place where we try to support each other, and it’s because of this environment that I am excited about this new role.”
Sustaining diplomacy amid competition in US-China relations
The United States and China “are the two largest emitters of carbon in the world,” said Nicholas Burns, former U.S. ambassador to the People’s Republic of China, at a recent MIT seminar. “We need to work with each other for the good of both of our countries.”
During the MITEI Presents: Advancing the Energy Transition presentation, Burns gave insight into the evolving state of U.S.-China relations, its implications for the global order, and its impact on global efforts to advance the energy transition and address climate change.
“We are the two largest global economies,” said Burns, who is now the Goodman Professor of the Practice of Diplomacy and International Relations at Harvard University’s Kennedy School of Government. “These are the only two countries that affect everybody else in the international system because of our weight.”
The relationship between the United States and China can be summarized in three words, according to Burns: competitive, tough, and adversarial — a description that rings true on both sides. He listed four primary areas for this competition: military, technology, trade and economics, and values.
Burns described the especially complicated area of trade and economics. “We both want to be number one. Neither of us — to be honest — is willing to be number two,” said Burns. Outside of North America, China is the United States’ largest trade partner. Outright trade wars — like those in April and October 2025 — create friction. “At one point, you’ll remember, 145 percent tariffs by the United States, and 125 percent by China on the United States. That just grinds a relationship. Those level of tariffs, had they been sustained, would have meant zero trade between the two countries.”
The energy field can be significantly impacted by this area of competition, Burns added. China is dominant in the production and processing of rare earth elements, many of which are critical to products like lithium batteries, solar panels, and electric vehicles. In 2024 and 2025, the United States was not the only country to place tariffs on these products; India, Turkey, South Africa, Mexico, Canada, the EU, and others followed suit. “I think the Trump administration is right, as President Biden was, to try to diversify sources on rare earths,” Burns said.
Burns also noted with interest the dichotomy in the Chinese energy sector between their lead on clean energy technology and their continual use of coal, standing out as an inconsistency in China’s efforts. Burns believes that climate change could be a key area of cooperation between China and the United States, emphasizing the importance of the United States’ participation, both technologically and diplomatically.
Burns also described the significant technological competition between the United States and China — an area of central importance. Throughout his presentation, Burns was quick to praise the emphasis that China puts on education and academic achievement, particularly in STEM fields. Pulling from a recent article in The Economist, he compared the 36 percent of Chinese first-year university students majoring in STEM fields to the 5 percent of American first-year students in STEM. “Think about the volume of graduates and the disparity between our country and China,” he said. “Then think about the percentage of those graduates who go into science and technology.”
Currently, areas like artificial intelligence, quantum computing, and biotechnology are taking center stage in technological innovation. “The Chinese are very skilled in terms of industrial processes and doctrine of adapting quickly,” said Burns. He explained that holding a competitive edge lies not only in who is first on the market, but who adopts the technology first, and who is able to unite that technological progress with policy.
“This is the most important relationship that we have in the world,” said Burns. He believes that the true test is whether the United States and China can manage competition so that interests are protected, while avoiding the use of the massive destructive power both countries possess. “We’ve got to normalize the communication and engagement to prevent the worst from happening,” said Burns.
“We’re at a stage of human history where we’re all linked together, and the fate of everybody in this room and all of our countries is linked together by these huge transnational challenges,” said Burns. “We’ve got to learn to compete and yet live in peace with each other in the process.”
This speaker series highlights energy experts and leaders at the forefront of the scientific, technological, and policy solutions needed to transform our energy systems. Visit MITEI’s Events page for more information on this and additional events.
MIT-IBM Watson AI Lab seed to signal: Amplifying early-career faculty impact
The early years of a faculty member’s career are a formative and exciting time, when establishing a firm footing helps determine the trajectory of their research. That includes building a research team, which demands innovative ideas and direction, creative collaborators, and reliable resources.
For a group of MIT faculty working with and on artificial intelligence, early engagement with the MIT-IBM Watson AI Lab through projects has played an important role helping to promote ambitious lines of inquiry and shaping prolific research groups.
Building momentum
“The MIT-IBM Watson AI Lab has been hugely important for my success, especially when I was starting out,” says Jacob Andreas — associate professor in the Department of Electrical Engineering and Computer Science (EECS), a member of the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL), and a researcher with the MIT-IBM Watson AI Lab — who studies natural language processing (NLP). Shortly after joining MIT, Andreas jump-started his first major project through the MIT-IBM Watson AI Lab, working on language representation and structured data augmentation methods for low-resource languages. “It really was the thing that let me launch my lab and start recruiting students.”
Andreas notes that this occurred during a “pivotal moment” when the field of NLP was undergoing significant shifts to understand language models — a task that required significantly more compute, which was available through the MIT-IBM Watson AI Lab. “I feel like the kind of the work that we did under that [first] project, and in collaboration with all of our people on the IBM side, was pretty helpful in figuring out just how to navigate that transition.” Further, the Andreas group was able to pursue multi-year projects on pre-training, reinforcement learning, and calibration for trustworthy responses, thanks to the computing resources and expertise within the MIT-IBM community.
For several other faculty members, timely participation with the MIT-IBM Watson AI Lab proved to be highly advantageous as well. “Having both intellectual support and also being able to leverage some of the computational resources that are within MIT-IBM, that’s been completely transformative and incredibly important for my research program,” says Yoon Kim — associate professor in EECS, CSAIL, and a researcher with the MIT-IBM Watson AI Lab — who has also seen his research field alter trajectory. Before joining MIT, Kim met his future collaborators during an MIT-IBM postdoctoral position, where he pursued neuro-symbolic model development; now, Kim’s team develops methods to improve large language model (LLM) capabilities and efficiency.
One factor he points to that led to his group’s success is a seamless research process with intellectual partners. This has allowed his MIT-IBM team to apply for a project, experiment at scale, identify bottlenecks, validate techniques, and adapt as necessary to develop cutting-edge methods for potential inclusion in real-world applications. “This is an impetus for new ideas, and that’s, I think, what’s unique about this relationship,” says Kim.
Merging expertise
The nature of the MIT-IBM Watson AI Lab is that it not only brings together researchers in the AI realm to accelerate research, but also blends work across disciplines. Lab researcher and MIT associate professor in EECS and CSAIL Justin Solomon describes his research group as growing up with the lab, and the collaboration as being “crucial … from its beginning until now.” Solomon’s research team focuses on theoretically oriented, geometric problems as they pertain to computer graphics, vision, and machine learning.
Solomon credits the MIT-IBM collaboration with expanding his skill set as well as applications of his group’s work — a sentiment that’s also shared by lab researchers Chuchu Fan, an associate professor of aeronautics and astronautics and a member of the Laboratory for Information and Decision Systems, and Faez Ahmed, associate professor of mechanical engineering. “They [IBM] are able to translate some of these really messy problems from engineering into the sort of mathematical assets that our team can work on, and close the loop,” says Solomon. This, for Solomon, includes fusing distinct AI models that were trained on different datasets for separate tasks. “I think these are all really exciting spaces,” he says.
“I think these early-career projects [with the MIT-IBM Watson AI Lab] largely shaped my own research agenda,” says Fan, whose research intersects robotics, control theory, and safety-critical systems. Like Kim, Solomon, and Andreas, Fan and Ahmed began projects through the collaboration the first year they were able to at MIT. Constraints and optimization govern the problems that Fan and Ahmed address, and so require deep domain knowledge outside of AI.
Working with the MIT-IBM Watson AI Lab enabled Fan’s group to combine formal methods with natural language processing, which she says, allowed the team to go from developing autoregressive task and motion planning for robots to creating LLM-based agents for travel planning, decision-making, and verification. “That work was the first exploration of using an LLM to translate any free-form natural language into some specification that robot can understand, can execute. That’s something that I’m very proud of, and very difficult at the time,” says Fan. Further, through joint investigation, her team has been able to improve LLM reasoning — work that “would be impossible without the IBM support,” she says.
Through the lab, Faez Ahmed’s group has developed machine-learning methods that accelerate discovery and design in complex mechanical systems. Their Linkages work, for instance, employs “generative optimization” to solve engineering problems in a way that is both data-driven and precise; more recently, they have been applying multimodal data and LLMs to computer-aided design. Ahmed notes that AI is frequently applied to problems that are already solvable but could benefit from increased speed or efficiency; now, however, challenges like mechanical linkages that were deemed “almost unsolvable” are within reach. “I do think that is definitely the hallmark [of our MIT-IBM team],” says Ahmed, praising the achievements of his MIT-IBM group, which is co-led by Akash Srivastava and Dan Gutfreund of IBM.
What began as initial collaborations for each MIT faculty member has evolved into a lasting intellectual relationship, where both parties are “excited about the science,” and “student-driven,” Ahmed adds. Taken together, the experiences of Jacob Andreas, Yoon Kim, Justin Solomon, Chuchu Fan, and Faez Ahmed speak to the impact that a durable, hands-on, academia-industry relationship can have on establishing research groups and ambitious scientific exploration.
Three anesthesia drugs all have the same effect in the brain, MIT researchers find
When patients undergo general anesthesia, doctors can choose among several drugs. Although each of these drugs acts on neurons in different ways, they all lead to the same result: a disruption of the brain’s balance between stability and excitability, according to a new MIT study.
This disruption causes neural activity to become increasingly unstable, until the brain loses consciousness, the researchers found. The discovery of this common mechanism could make it easier to develop new technologies for monitoring patients while they are undergoing anesthesia.
“What’s exciting about that is the possibility of a universal anesthesia-delivery system that can measure this one signal and tell how unconscious you are, regardless of which drugs they’re using in the operating room,” says Earl Miller, the Picower Professor of Neuroscience and a member of MIT’s Picower Institute for Learning and Memory.
Miller, Edward Hood Taplin Professor of Medical Engineering and Computational Neuroscience Emery Brown, and their colleagues are now working on an automated control system for delivery of anesthesia drugs, which would measure the brain’s stability using EEG and then automatically adjust the drug dose. This could help doctors ensure that patients stay unconscious throughout surgery without becoming too deeply unconscious, which can have negative side effects following the procedure.
Miller and Ila Fiete, a professor of brain and cognitive sciences, the director of the K. Lisa Yang Integrative Computational Neuroscience Center (ICoN), and a member of MIT’s McGovern Institute for Brain Research, are the senior authors of the new study, which appears today in Cell Reports. MIT graduate student Adam Eisen is the paper’s lead author.
Destabilizing the brain
Exactly how anesthesia drugs cause the brain to lose consciousness has been a longstanding question in neuroscience. In 2024, a study from Miller’s and Fiete’s labs suggested that for propofol, the answer is that anesthesia works by disrupting the balance between stability and excitability in the brain.
When someone is awake, their brain is able to maintain this delicate balance, responding to sensory information or other input and then returning to a stable baseline.
“The nervous system has to operate on a knife’s edge in this narrow range of excitability,” Miller says. “It has to be excitable enough so different parts can influence one another, but if it gets too excited it goes off into chaotic activity.”
In that 2024 study, the researchers found that propofol knocks the brain out of this state, known as “dynamic stability.” As doses of the drug increased, the brain took longer and longer to return to its baseline state after responding to new input. This effect became increasingly pronounced until consciousness was lost.
For that study, the researchers devised a computational model that analyzes neural activity recorded from the brain. This technique allowed them to determine how the brain responds to perturbations such as an auditory tone or other sensory input, and how long it takes to return to its baseline stability.
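The paper’s actual model is more sophisticated, but the core idea of quantifying dynamic stability can be sketched simply: fit linear dynamics to recorded activity and measure how quickly perturbations decay, for example via the spectral radius of the fitted dynamics matrix. The function name and the AR(1) formulation below are our illustrative assumptions, not the study’s method:

```python
import numpy as np

def stability_from_activity(X):
    """Fit a linear model x[t+1] ~ A @ x[t] to multichannel activity X
    (time x channels) by least squares, and return the spectral radius
    of A. Values near 1 mean perturbations decay slowly (the system
    takes longer to return to baseline); values well below 1 mean
    fast recovery, i.e., a more dynamically stable system."""
    past, future = X[:-1], X[1:]
    A, *_ = np.linalg.lstsq(past, future, rcond=None)
    return max(abs(np.linalg.eigvals(A)))

# Toy demo: synthetic activity generated by known decaying dynamics.
rng = np.random.default_rng(0)
A_true = 0.8 * np.eye(3)                  # perturbations decay by 0.8x per step
X = np.zeros((500, 3))
for t in range(499):
    X[t + 1] = A_true @ X[t] + 0.1 * rng.standard_normal(3)

print(round(stability_from_activity(X), 2))  # close to 0.8
```

In this picture, the article’s finding corresponds to the estimated spectral radius creeping toward 1 as the anesthetic dose increases, until stability is lost.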
In their new study, the researchers used the same technique to measure how the brain responds not only to propofol but also to two other anesthesia drugs: ketamine and dexmedetomidine. Animals were given one of the three drugs while their brain activity was analyzed, including their responses to auditory tones.
This study showed that the same destabilization induced by propofol also appears during administration of the other two drugs. This “universal signature” appears even though the three drugs have different molecular mechanisms: propofol binds to GABA receptors, inhibiting neurons that have those receptors; dexmedetomidine blocks the release of norepinephrine; and ketamine blocks NMDA receptors, suppressing neurons with those receptors.
Each of these pathways, the researchers hypothesize, affects the brain’s balance of stability and excitability in a different way, and each leads to an overall destabilization of this balance.
“All three of these drugs appear to do the exact same thing,” Miller says. “In fact, you could look at the destabilization measure we use and you can’t tell which drug is being applied.”
The researchers now plan to further investigate how each of these drugs may give rise to the same patterns of brain destabilization.
“The molecular mechanisms of ketamine and dexmedetomidine are a bit more involved than propofol mechanisms,” Eisen says. “A future direction is to do a meaningful model of what the biophysical effects of those are and see how that could lead to destabilization.”
Monitoring anesthesia
Now that the researchers have shown that three different anesthesia drugs produce similar destabilization patterns in the brain, they believe that measuring those patterns could offer a valuable way to monitor patients during anesthesia. While anesthesia is overall a very safe procedure, it does carry some risks, especially for very young children and for people over 65.
For adults suffering from dementia, anesthesia can make the condition worse, and it can also exacerbate neuropsychiatric disorders such as depression. These risks are higher if patients go into a deeper state of unconsciousness known as burst suppression.
To help reduce those risks, Miller and Brown, who is also an anesthesiologist at MGH, are developing a prototype device that can measure patients’ EEG readings while under anesthesia and adjust their dose accordingly. Currently, doctors monitor patients’ heart rate, blood pressure, and other vital signs during surgery, but these don’t give as accurate a reading of how deeply the patient is unconscious.
“If you can limit people’s exposure to anesthesia, if you give just enough and no more, you can reduce risks across the board,” Miller says.
Working with researchers at Brown University, the MIT team is now planning to run a small clinical trial of their monitoring device with patients undergoing surgery.
The research was funded by the U.S. Office of Naval Research, the National Institute of Mental Health, the Simons Center for the Social Brain, the Freedom Together Foundation, the Picower Institute, the National Science Foundation Computer and Information Science and Engineering Directorate, the Simons Collaboration on the Global Brain, the McGovern Institute, and the National Institutes of Health.
