MIT engineers develop a magnetic transistor for more energy-efficient electronics
Transistors, the building blocks of modern electronics, are typically made of silicon. Because it’s a semiconductor, this material can control the flow of electricity in a circuit. But silicon has fundamental physical limits that restrict how compact and energy-efficient a transistor can be.
MIT researchers have now replaced silicon with a magnetic semiconductor, creating a magnetic transistor that could enable smaller, faster, and more energy-efficient circuits. The material’s magnetism strongly influences its electronic behavior, leading to more efficient control of the flow of electricity.
The team used a novel magnetic material and an optimization process that reduces the material’s defects, which boosts the transistor’s performance.
The material’s unique magnetic properties also allow for transistors with built-in memory, which would simplify circuit design and unlock new applications for high-performance electronics.
“People have known about magnets for thousands of years, but there are very limited ways to incorporate magnetism into electronics. We have shown a new way to efficiently utilize magnetism that opens up a lot of possibilities for future applications and research,” says Chung-Tao Chou, an MIT graduate student in the departments of Electrical Engineering and Computer Science (EECS) and Physics, and co-lead author of a paper on this advance.
Chou is joined on the paper by co-lead author Eugene Park, a graduate student in the Department of Materials Science and Engineering (DMSE); Julian Klein, a DMSE research scientist; Josep Ingla-Aynes, a postdoc in the MIT Plasma Science and Fusion Center; Jagadeesh S. Moodera, a senior research scientist in the Department of Physics; and senior authors Frances Ross, TDK Professor in DMSE; and Luqiao Liu, an associate professor in EECS, and a member of the Research Laboratory of Electronics; as well as others at the University of Chemistry and Technology in Prague. The paper appears today in Physical Review Letters.
Overcoming the limits
In an electronic device, silicon semiconductor transistors act like tiny light switches that turn a circuit on and off, or amplify weak signals in a communication system. They do this using a small input voltage.
But a fundamental physical limit of silicon semiconductors prevents a transistor from operating below a certain voltage, which hinders its energy efficiency.
To make more efficient electronics, researchers have spent decades working toward magnetic transistors that utilize electron spin to control the flow of electricity. Electron spin is a fundamental property that enables electrons to behave like tiny magnets.
So far, scientists have mostly been limited to using certain magnetic materials. These lack the favorable electronic properties of semiconductors, constraining device performance.
“In this work, we combine magnetism and semiconductor physics to realize useful spintronic devices,” Liu says.
The researchers replace the silicon in the surface layer of a transistor with chromium sulfur bromide, a two-dimensional material that acts as a magnetic semiconductor.
Due to the material’s structure, researchers can switch between two magnetic states very cleanly. This makes it ideal for use in a transistor that smoothly switches between “on” and “off.”
“One of the biggest challenges we faced was finding the right material. We tried many other materials that didn’t work,” Chou says.
They discovered that changing these magnetic states modifies the material’s electronic properties, enabling low-energy operation. And unlike many other 2D materials, chromium sulfur bromide remains stable in air.
To make a transistor, the researchers pattern electrodes onto a silicon substrate, then carefully align and transfer the 2D material on top. They use tape to pick up a tiny piece of material, only a few tens of nanometers thick, and place it onto the substrate.
“A lot of researchers will use solvents or glue to do the transfer, but transistors require a very clean surface. We eliminate all those risks by simplifying this step,” Chou says.
Leveraging magnetism
This lack of contamination enables their device to outperform existing magnetic transistors. Most others can only create a weak magnetic effect, changing the flow of current by a few percent or less. Their new transistor can switch or amplify the electric current by a factor of 10.
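To make the contrast concrete, here is a minimal sketch with illustrative numbers of our own choosing (the article reports only the few-percent and factor-of-10 figures):

```python
# Illustrative readout-contrast comparison. The currents are hypothetical;
# the article reports only "a few percent" versus "a factor of 10".

def contrast(i_on: float, i_off: float) -> float:
    """Relative change in current between the two magnetic states."""
    return (i_on - i_off) / i_off

# Typical existing magnetic device: "on" only a few percent above "off".
weak = contrast(i_on=1.03e-6, i_off=1.00e-6)

# The new device: current switches by roughly a factor of 10.
strong = contrast(i_on=1.0e-5, i_off=1.0e-6)

print(f"few-percent device:  {weak:.1%} contrast")    # 3.0%
print(f"factor-of-10 device: {strong:.0%} contrast")  # 900%
```

The larger the contrast between the two states, the easier they are to distinguish quickly and reliably, which is what enables the faster, more dependable readout the researchers describe.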
They use an external magnetic field to change the magnetic state of the material, switching the transistor using significantly less energy than would usually be required.
The material also allows them to control the magnetic states with electric current. This is important because engineers cannot apply magnetic fields to individual transistors in an electronic device. They need to control each one electrically.
The material’s magnetic properties could also enable transistors with built-in memory, simplifying the design of logic or memory circuits.
A typical memory device has a magnetic cell to store information and a transistor to read it out. Their method can combine both into one magnetic transistor.
“Now, not only are transistors turning on and off, they are also remembering information. And because we can switch the transistor with greater magnitude, the signal is much stronger so we can read out the information faster, and in a much more reliable way,” Liu says.
Building on this demonstration, the researchers plan to further study the use of electrical current to control the device. They are also working to make their method scalable so they can fabricate arrays of transistors.
This research was supported, in part, by the Semiconductor Research Corporation, the U.S. Defense Advanced Research Projects Agency (DARPA), the U.S. National Science Foundation (NSF), the U.S. Department of Energy, the U.S. Army Research Office, and the Czech Ministry of Education, Youth, and Sports. The work was partially carried out at the MIT.nano facilities.
Internet Voting is Too Insecure for Use in Elections
No matter how many times we say it, the idea comes back again and again. Hopefully, this letter will hold back the tide for at least a while longer.
Executive summary: Scientists have understood for many years that internet voting is insecure and that there is no known or foreseeable technology that can make it secure. Still, vendors of internet voting keep claiming that, somehow, their new system is different, or the insecurity doesn’t matter. Bradley Tusk and his Mobile Voting Foundation keep touting internet voting to journalists and election administrators; this whole effort is misleading and dangerous...
New Jersey governor leans on climate funds for ‘affordability’ push
EPA thwarts Musk’s use of diesel turbines for AI
Budget plan would stymie Trump’s FEMA cuts
Former Biden officials go to bat for kids’ climate case
Red states back EPA freeze of $20B in climate grants
Four-bill ‘minibus’: EV chargers, energy aid, disaster mitigation
Climate activist predicts Trump’s attacks on green energy will hurt GOP
Italy unveils Arctic strategy as polar race heats up
Mozambique floods impacting over 600,000 people, official says
Researchers find Antarctic penguin breeding starts sooner
Broadening climate migration research across impacts, adaptation and mitigation
Nature Climate Change, Published online: 21 January 2026; doi:10.1038/s41558-025-02545-1
Current climate migration literature focuses on quantifying the link between climate drivers and migration, yet overlooks its broader and more complex interactions with mitigation, adaptation and climate impacts. This Perspective highlights key gaps and offers concrete solutions.
Electrifying boilers to decarbonize industry
More than 200 years ago, the steam boiler helped spark the Industrial Revolution. Since then, steam has been the lifeblood of industrial activity around the world. Today the production of steam — created by burning gas, oil, or coal to boil water — accounts for a significant percentage of global energy use in manufacturing, powering the creation of paper, chemicals, pharmaceuticals, food, and more.
Now, the startup AtmosZero, founded by Addison Stark SM ’10, PhD ’14; Todd Bandhauer; and Ashwin Salvi, is taking a new approach to electrify the centuries-old steam boiler. The company has developed a modular heat pump capable of delivering industrial steam at temperatures up to 150 degrees Celsius to serve as a drop-in replacement for combustion boilers.
The company says its first 1-megawatt steam system is far cheaper to operate than commercially available electric solutions thanks to ultra-efficient compressor technology, which uses 50 percent less electricity than electric resistive boilers. The founders are hoping that’s enough to make decarbonized steam boilers drive the next industrial revolution.
“Steam is the most important working fluid ever,” says Stark, who serves as AtmosZero’s CEO. “Today everything is built around the ubiquitous availability of steam. Cost-effectively electrifying that requires innovation that can scale. In other words, it requires a mass-produced product — not one-off projects.”
Tapping into steam
Stark joined the Technology and Policy Program when he came to MIT in 2007. He ultimately completed a dual master’s degree by adding mechanical engineering to his studies.
“I was interested in the energy transition and in accelerating solutions to enable that,” Stark says. “The transition isn’t happening in a vacuum. You need to align economics, policy, and technology to drive that change.”
Stark stayed at MIT to earn his PhD in mechanical engineering, studying thermochemical biofuels.
After MIT, Stark began working on early-stage energy technologies with the Department of Energy’s Advanced Research Projects Agency-Energy (ARPA-E), with a focus on manufacturing efficiency, the energy-water nexus, and electrification.
“Part of that work involved applying my training at MIT to things that hadn’t really been innovated on for 50 years,” Stark says. “I was looking at the heat exchanger. It’s so fundamental. I thought, ‘How might we reimagine it in the context of modern advances in manufacturing technology?’”
The problem is as difficult as it is consequential, touching nearly every corner of the global industrial economy. More than 2.2 gigatons of CO2 emissions are generated each year to turn water into steam — accounting for more than 5 percent of global energy-related emissions.
In 2020, Stark co-authored an article in the journal Joule with Gregory Thiel SM ’12, PhD ’15 titled, “To decarbonize industry, we must decarbonize heat.” The article examined opportunities for industrial heat decarbonization, and it got Stark excited about the potential impact of a standardized, scalable electric heat pump.
Most electric boiler options today bring huge increases in operating costs. Many also make use of factory waste heat, which requires pricey retrofits. Stark never imagined he’d become an entrepreneur, but he soon realized no one was going to act on his findings for him.
“The only path to seeing this invention brought out into the world was to found and run the company,” Stark says. “It’s something I didn’t anticipate or necessarily want, but here I am.”
Stark partnered with former ARPA-E awardee Todd Bandhauer, who had been inventing new refrigerant compressor technology in his lab at Colorado State University, and former ARPA-E colleague Ashwin Salvi. The team officially founded AtmosZero in 2022.
“The compressor is the engine of the heat pump and defines the efficiency, cost, and performance,” Stark says. “The fundamental challenge of delivering heat is that the higher your heat pump is raising the air temperature, the lower your maximum efficiency. It runs into thermodynamic limitations. By designing for optimum efficiency in the operational windows that matter for the refrigerants we’re using, and for the precision manufacturing of our compressors, we’re able to maximize the individual stages of compression to maximize operational efficiency.”
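The thermodynamic limit Stark mentions is the ideal (Carnot) coefficient of performance, which shrinks as the temperature lift grows. The sketch below is a rough illustration, assuming ambient air at 20 degrees Celsius and steam delivery at 150 degrees Celsius; the temperatures and the realized efficiency are our assumptions, not AtmosZero specifications.

```python
# Rough sketch of the thermodynamic ceiling on heat-pump efficiency.
# Assumes heat is lifted from 20 C ambient air to 150 C steam; these
# temperatures are illustrative, not AtmosZero specifications.

def carnot_cop_heating(t_hot_c: float, t_cold_c: float) -> float:
    """Ideal coefficient of performance for delivering heat at t_hot_c
    while drawing heat from a source at t_cold_c."""
    t_hot_k = t_hot_c + 273.15
    t_cold_k = t_cold_c + 273.15
    return t_hot_k / (t_hot_k - t_cold_k)

print(f"Carnot COP, 20 C -> 150 C: {carnot_cop_heating(150.0, 20.0):.2f}")

# A resistive boiler delivers at most one unit of heat per unit of
# electricity (COP = 1). The "50 percent less electricity" claim above
# corresponds to a realized COP of about 2.
realized_cop = 2.0
print(f"electricity saved vs. resistive: {1 - 1 / realized_cop:.0%}")
```

Real cycles fall well short of the Carnot ceiling, which is why compressor efficiency, the quantity Stark emphasizes, dominates the economics.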
The system can work with waste heat from air or water, but it doesn’t need waste heat to work. Many other electric boilers rely on waste heat, but Stark thinks that adds too much complexity to installation and operations.
Instead, in AtmosZero’s novel heat pump cycle, heat from ambient-temperature air warms a liquid heat-transfer material, which evaporates a refrigerant. The refrigerant then flows through the system’s series of compressors and heat exchangers, reaching temperatures high enough to boil water; heat is recovered from the refrigerant once it returns to lower temperatures. The system can be ramped up and down to fit seamlessly into existing industrial processes.
“We can work just like a combustion boiler,” Stark says. “At the end of the day, customers don’t want to change how their manufacturing facilities operate in order to electrify. You can’t change or increase complexity on-site.”
That approach means the boiler can be deployed in a range of industrial contexts without unique project costs or other changes.
“What we really offer is flexibility and something that can drop in with ease and minimize total capital costs,” Stark says.
From 1 to 1,000
AtmosZero already has a pilot 650-kilowatt system operating at a customer facility near its headquarters in Loveland, Colorado. The company is currently focused on demonstrating year-round durability and reliability of the system as it works to build out its backlog of orders and prepares to scale.
Stark says once the system is brought to a customer’s facility, it can be installed in an afternoon and deployed in a matter of days, with zero downtime.
AtmosZero is aiming to deliver a handful of units to customers over the next year or two, with plans to deploy hundreds of units a year after that. The company is currently targeting manufacturing plants using under 10 megawatts of thermal energy at peak demand, which represents most U.S. manufacturing facilities.
Stark is proud to be part of a growing group of MIT-affiliated decarbonization startups, some of which are targeting specific verticals, like Boston Metal for steel and Sublime Systems for cement. But he says beyond the most common materials, the industry gets very fragmented, with one of the only common threads being the use of steam.
“If we look across industrial segments, we see the ubiquity of steam,” Stark says. “It’s a tremendously ripe opportunity to have impact at scale. Steam cannot be removed from industry. So much of every industrial process that we’ve designed over the last 160 years has been around the availability of steam. So, we need to focus on ways to deliver low-emissions steam rather than removing it from the equation.”
Why it’s critical to move beyond overly aggregated machine-learning metrics
MIT researchers have identified significant examples of machine-learning models failing when applied to data other than what they were trained on, underscoring the need to test a model whenever it is deployed in a new setting.
“We demonstrate that even when you train models on large amounts of data, and choose the best average model, in a new setting this ‘best model’ could be the worst model for 6-75 percent of the new data,” says Marzyeh Ghassemi, an associate professor in MIT’s Department of Electrical Engineering and Computer Science (EECS), a member of the Institute for Medical Engineering and Science, and principal investigator at the Laboratory for Information and Decision Systems.
In a paper presented at the Neural Information Processing Systems (NeurIPS 2025) conference in December, the researchers point out that models trained to effectively diagnose illness in chest X-rays at one hospital, for example, may be considered effective, on average, at a different hospital. Their performance assessment, however, revealed that some of the best-performing models at the first hospital were the worst-performing on up to 75 percent of patients at the second hospital: high average performance across all of the second hospital’s patients hides this failure.
Their findings demonstrate that spurious correlations, often assumed to be mitigated simply by improving model performance on observed data, still occur and remain a risk to a model’s trustworthiness in new settings. A simple example of a spurious correlation: a machine-learning system that has not “seen” many cows pictured at the beach classifies a photo of a beach-going cow as an orca simply because of its background. In many instances — including areas examined by the researchers such as chest X-rays, cancer histopathology images, and hate speech detection — such spurious correlations are much harder to detect.
In the case of a medical diagnosis model trained on chest X-rays, for example, the model may have learned to correlate a specific and irrelevant marking on one hospital’s X-rays with a certain pathology. At another hospital where the marking is not used, that pathology could be missed.
Previous research by Ghassemi’s group has shown that models can spuriously correlate such factors as age, gender, and race with medical findings. If, for instance, a model has been trained on more older people’s chest X-rays that have pneumonia and hasn’t “seen” as many X-rays belonging to younger people, it might predict that only older patients have pneumonia.
“We want models to learn how to look at the anatomical features of the patient and then make a decision based on that,” says Olawale Salaudeen, an MIT postdoc and the lead author of the paper, “but really anything that’s in the data that’s correlated with a decision can be used by the model. And those correlations might not actually be robust with changes in the environment, making the model predictions unreliable sources of decision-making.”
Spurious correlations contribute to the risks of biased decision-making. In the NeurIPS conference paper, the researchers showed that, for example, chest X-ray models that improved overall diagnosis performance actually performed worse on patients with pleural conditions or enlarged cardiomediastinum, meaning enlargement of the heart or central chest cavity.
Other authors of the paper included PhD students Haoran Zhang and Kumail Alhamoud, EECS Assistant Professor Sara Beery, and Ghassemi.
While previous work has generally accepted that models ordered best-to-worst by performance will preserve that order when applied in new settings (a property called accuracy-on-the-line), the researchers were able to demonstrate examples in which the best-performing models in one setting were the worst-performing in another.
Salaudeen devised an algorithm called OODSelect to find examples where accuracy-on-the-line breaks down. Essentially, he trained thousands of models on in-distribution data, meaning data from the first setting, and calculated their accuracy. Then he applied the models to data from the second setting. The second-setting examples that the most accurate first-setting models got wrong identified the problem subsets, or subpopulations. Salaudeen also emphasizes the dangers of aggregate statistics for evaluation, which can obscure more granular and consequential information about model performance.
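In outline, the selection step can be sketched in a few lines. The code below is a simplified reconstruction from the description above, not the authors’ released implementation; the top-model fraction, the 50 percent error threshold, and the toy data are our assumptions:

```python
import numpy as np

def ood_select(id_accuracy, ood_correct, top_fraction=0.1):
    """Flag second-setting examples that the best in-distribution models
    predominantly get wrong (simplified sketch of OODSelect).

    id_accuracy : (n_models,) accuracy of each model on first-setting data.
    ood_correct : (n_models, n_examples) 0/1 correctness per model per
                  second-setting example.
    """
    n_top = max(1, int(top_fraction * len(id_accuracy)))
    top_models = np.argsort(id_accuracy)[-n_top:]            # best ID models
    per_example_rate = ood_correct[top_models].mean(axis=0)  # how often right
    return np.where(per_example_rate < 0.5)[0]               # mostly wrong

# Toy data: 200 models, 500 examples; the last 50 examples are built so
# that the highest-accuracy models systematically miss them.
rng = np.random.default_rng(0)
id_acc = rng.uniform(0.7, 0.95, size=200)
ood = (rng.uniform(size=(200, 500)) < 0.85).astype(int)
ood[np.ix_(id_acc > 0.85, np.arange(450, 500))] = 0

print(len(ood_select(id_acc, ood)), "candidate problem examples")  # ~50
```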
In the course of their work, the researchers separated out the “most misclassified examples” so as not to conflate spurious correlations within a dataset with situations that are simply difficult to classify.
Alongside the NeurIPS paper, the researchers released their code and some of the identified subsets for future work.
Once a hospital, or any organization employing machine learning, identifies subsets on which a model is performing poorly, that information can be used to improve the model for its particular task and setting. The researchers recommend that future work adopt OODSelect in order to highlight targets for evaluation and design approaches to improving performance more consistently.
“We hope the released code and OODSelect subsets become a steppingstone,” the researchers write, “toward benchmarks and models that confront the adverse effects of spurious correlations.”
Statutory Damages: The Fuel of Copyright-based Censorship
We're taking part in Copyright Week, a series of actions and discussions supporting key principles that should guide copyright policy. Every day this week, various groups are taking on different elements of copyright law and policy, and addressing what's at stake, and what we need to do to make sure that copyright promotes creativity and innovation.
Imagine every post online came with a bounty of up to $150,000 paid to anyone who finds it violates opaque government rules—all out of the pocket of the platform. Smaller sites could be snuffed out, and big platforms would avoid crippling liability by aggressively blocking, taking down, and penalizing speech that even possibly violates these rules. In turn, users would self-censor, and opportunists would turn accusations into a profitable business.
This dystopia isn’t a fantasy; it’s close to how U.S. copyright’s broken statutory damages regime actually works.
Copyright includes “statutory damages,” which means letting a jury decide how big of a penalty the defendant will have to pay—anywhere from $200 to $150,000 per work—without the jury necessarily seeing any evidence of actual financial losses or illicit profits. In fact, the law gives judges and juries almost no guidelines on how to set damages. This is a huge problem for online speech.
One way or another, everyone builds on the speech of others when expressing themselves online: quoting posts, reposting memes, sharing images from the news. For some users, re-use is central to their online expression: parodists, journalists, researchers, and artists use others’ words, sounds, and images as part of making something new every day. Both these users and the online platforms they rely on risk unpredictable, potentially devastating penalties if a copyright holder objects to some re-use and a court disagrees with the user’s well-intentioned efforts.
On Copyright Week, we like to talk about ways to improve copyright law. One of the most important would be to fix U.S. copyright’s broken statutory damages regime. In other areas of civil law, the courts have limited jury-awarded punitive damages so that they can’t be far higher than the amount of harm caused. Extremely large jury awards for fraud, for example, have been found to offend the Constitution’s Due Process Clause. But somehow, that’s not the case in copyright—some courts have ruled that Congress can set damages that are potentially hundreds of times greater than actual harm.
Massive, unpredictable damages awards for copyright infringement, such as a $222,000 penalty for sharing 24 music tracks online, are the fuel that drives overzealous or downright abusive takedowns of creative material from online platforms. Capricious and error-prone copyright enforcement bots, like YouTube’s Content ID, were created in part to avoid the threat of massive statutory damages against the platform. Those same damages create an ever-present bias in favor of major rightsholders and against innocent users in the platforms’ enforcement decisions. And they stop platforms from addressing the serious problems of careless and downright abusive copyright takedowns.
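The arithmetic behind that unpredictability is simple. Using only the figures cited above, a short calculation shows the range of exposure in the 24-track case:

```python
# Exposure range in the 24-track case, using only the figures cited
# above: $200 to $150,000 in statutory damages per work.
works = 24
statutory_min, statutory_max = 200, 150_000

print(f"minimum exposure: ${works * statutory_min:,}")   # $4,800
print(f"maximum exposure: ${works * statutory_max:,}")   # $3,600,000
print(f"actual award: $222,000 (${222_000 // works:,} per work)")
```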
By turning litigation into a game of financial Russian roulette, statutory damages also discourage artistic and technological experimentation at the boundaries of fair use. None but the largest corporations can risk ruinous damages if a well-intentioned fair use crosses the fuzzy line into infringement.
“But wait,” you might say, “don’t legal protections like fair use and the safe harbors of the Digital Millennium Copyright Act protect users and platforms?” They do—but the threat of statutory damages makes that protection brittle. Fair use allows for many important re-uses of copyrighted works without permission. But fair use is heavily dependent on circumstances and can sometimes be difficult to predict when copyright is applied to new uses. Even well-intentioned and well-resourced users avoid experimenting at the boundaries of fair use when the cost of a court disagreeing is so high and unpredictable.
Many reforms are possible. Congress could limit statutory damages to a multiple of actual harm. That would bring U.S. copyright in line with other countries, and with other civil laws like patent and antitrust. Congress could also make statutory damages unavailable in cases where the defendant has a good-faith claim of fair use, which would encourage creative experimentation. Fixing statutory damages would make many of the other problems in copyright law more easily solvable, and create a fairer system for creators and users alike.
To flexibly organize thought, the brain makes use of space
Our thoughts are specified by our knowledge and plans, yet our cognition can also be fast and flexible in handling new information. How does the well-controlled and yet highly nimble nature of cognition emerge from the brain’s anatomy of billions of neurons and circuits?
A study by researchers in The Picower Institute for Learning and Memory at MIT provides new evidence from tests in animals that the answer might be found within a theory called “spatial computing.”
First proposed in 2023 by Picower Professor Earl K. Miller and colleagues Mikael Lundqvist and Pawel Herman, spatial computing theory explains how neurons in the prefrontal cortex can be organized on the fly into a functional group capable of carrying out the information processing required by a cognitive task. Moreover, it allows for neurons to participate in multiple such groups, as years of experiments have shown that many prefrontal neurons can indeed participate in multiple tasks at once.
The basic idea of the theory is that the brain recruits and organizes ad hoc “task forces” of neurons by using “alpha” and “beta” frequency brain waves (about 10-30 Hz) to apply control signals to physical patches of the prefrontal cortex. Rather than having to rewire themselves into new physical circuits every time a new task must be done, the neurons in the patch instead process information by following the patterns of excitation and inhibition imposed by the waves.
Think of the alpha and beta frequency waves as stencils that shape when and where in the prefrontal cortex groups of neurons can take in or express information from the senses, Miller says. In that way, the waves represent the rules of the task and can organize how the neurons electrically “spike” to process the information content needed for the task.
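As a cartoon of the stencil idea, one can gate simulated sensory spiking by local wave power: patches where alpha/beta power is high pass little sensory information, while low-power patches pass it freely. The toy model below is our illustration of the theory’s core claim, not the lab’s analysis code:

```python
import numpy as np

# Toy "stencil" model: local alpha/beta power gates how much sensory
# drive shows up as spiking in each cortical patch. Purely illustrative.
rng = np.random.default_rng(1)

n_patches = 8
wave_power = rng.uniform(0.0, 1.0, n_patches)     # control signal (rules)
sensory_drive = rng.uniform(0.0, 1.0, n_patches)  # information content

# Higher wave power -> stronger inhibition -> less sensory spiking.
spike_rate = sensory_drive * (1.0 - wave_power)

for p in range(n_patches):
    state = "suppressed" if wave_power[p] > 0.5 else "open"
    print(f"patch {p}: alpha/beta power {wave_power[p]:.2f} ({state}), "
          f"sensory spiking {spike_rate[p]:.2f}")
```

This inverse relationship between wave power and sensory spiking is the pattern the third and fourth predictions below look for.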
“Cognition is all about large-scale neural self-organization,” says Miller, senior author of the paper in Current Biology and a faculty member in MIT’s Department of Brain and Cognitive Sciences. “Spatial computing explains how the brain does that.”
Testing five predictions
A theory is just an idea. In the study, lead author Zhen Chen and other current and former members of Miller’s lab put spatial computing to the test by examining whether five predictions it makes about neural activity and brain wave patterns were actually evident in measurements made in the prefrontal cortex of animals as they engaged in two working memory tasks and one categorization task. Across the tasks there were distinct pieces of sensory information to process (e.g., “A blue square appeared on the screen followed by a green triangle”) and rules to follow (e.g., “When new shapes appear on the screen, do they match the shapes I saw before and appear in the same order?”).
The first two predictions were that alpha and beta waves should represent task controls and rules, while the spiking activity of neurons should represent the sensory inputs. When the researchers analyzed the brain wave and spiking readings gathered by the four electrode arrays implanted in the cortex, they found that indeed these predictions were true. Neural spikes, but not the alpha/beta waves, carried sensory information. While both spikes and the alpha/beta waves carried task information, it was strongest in the waves, and it peaked at times relevant to when rules were needed to carry out the tasks.
Notably, in the categorization task, the researchers purposely varied the level of abstraction to make categorization more or less cognitively difficult. The researchers saw that the greater the difficulty, the stronger the alpha/beta wave power was, further showing that it carries task rules.
The next two predictions were that alpha/beta would be spatially organized, and that when and where it was strong, the sensory information represented by spiking would be suppressed, but where and when it was weak, spiking would increase. These predictions also held true in the data. Under the electrodes, Chen, Miller, and the team could see distinct spatial patterns of higher or lower wave power, and where power was high, the sensory information in spiking was low, and vice versa.
Finally, if spatial computing is valid, the researchers predicted, then trial by trial, alpha/beta power and timing should accurately correlate with the animals’ performance. Sure enough, there were significant differences in the signals on trials where the animals performed the tasks correctly versus when they made mistakes. In particular, the measurements distinguished mistakes involving task rules from mistakes involving sensory information. For instance, alpha/beta discrepancies pertained to the order in which stimuli appeared (first square, then triangle) rather than the identity of the individual stimuli (square or triangle).
Compatible with findings in humans
By conducting this study with animals, the researchers were able to make direct measurements of individual neural spikes as well as brain waves, and in the paper, they note that other studies in humans report some similar findings. For instance, studies using noninvasive EEG and MEG brain wave readings show that humans use alpha oscillations to inhibit activity in task-irrelevant areas under top-down control, and that alpha oscillations appear to govern task-related activity in the prefrontal cortex.
While Miller says he finds the results of the new study, and their intersection with human studies, encouraging, he acknowledges that more evidence is still needed. For instance, his lab has shown that brain waves are typically not standing waves that oscillate in place (like a jump rope), but instead travel across areas of the brain. Spatial computing should account for that, he says.
In addition to Chen and Miller, the paper’s other authors are Scott Brincat, Mikael Lundqvist, Roman Loonis, and Melissa Warden.
The U.S. Office of Naval Research, The Freedom Together Foundation, and The Picower Institute for Learning and Memory funded the study.
A new way to “paint with light” to create radiant, color-changing items
Gemstones like precious opal are beautiful to look at and deceptively complex. As you look at such gems from different angles, you’ll see a variety of tints glisten, causing you to question what color the rock actually is. The gems are iridescent thanks to something called structural color — microscopic structures that reflect light to produce radiant hues.
Structural color can be found across different organisms in nature, such as on the tails of peacocks and the wings of certain butterflies. Scientists and artists have been working to replicate this quality, but outside of the lab it remains very hard to recreate, which creates a barrier to on-demand, customizable fabrication. Instead, companies and individual designers alike have resorted to adding existing color-changing objects, like feathers and gems, to personal items, clothes, and artwork.
Now MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) researchers have replicated nature’s brilliance with a new optical system called “MorphoChrome.” MorphoChrome allows users to design and program iridescence onto everyday objects (like a glove, for example), augmenting them with the structurally colored, multi-color glimmer reminiscent of many gemstones. Users select particular colors from a color wheel in the team’s software program and use a handheld device to “paint” with multi-color light onto holographic film. They then apply that painted sheet to 3D-printed objects or flexible substrates such as fashion items, sporting goods, and other personal accessories, using the team’s unique epoxy resin transfer process.
“We wanted to tap into the innate intelligence of nature,” says MIT Department of Electrical Engineering and Computer Science (EECS) PhD student and CSAIL researcher Paris Myers SM ’25, who is a lead author on a recent paper presenting MorphoChrome. “In the past, you couldn’t easily synthesize structural color yourself, but using pigments or dyes gave you full creative expression. With our system, you have full creative agency over this new material space, predictably programming iridescent designs in real-time.”
MorphoChrome showed it could add a luminous touch to things like a necklace charm of a butterfly. What started as a static, black accessory became a shiny pendant with green, orange, and blue glimmers, thanks to the system’s programmable color process. MorphoChrome also turned golfing gloves into beginner-friendly training equipment that shine green when you hold a golf club at the correct angle, and even helped one user adorn their fingernails with a gemstone-like look.
These multi-color displays are the result of a handheld fabrication process where MorphoChrome acts as a “brush” to paint with red-green-blue (RGB) laser light, while a holographic photopolymer film (used for things like passports and debit cards) is the canvas. Users first connect the system’s handheld device to a computer via a USB-C port, then open the software program. They can then click “send color” to rapidly transmit different hues from their laptop or home computer to the MorphoChrome hardware tool.
This handheld device transforms the colors on a screen into a controllable, multi-color RGB laser light output that instantly exposes the film, a sort of canvas where users can explore different combinations of hues. About the size of a glue bottle, MorphoChrome’s optical machine houses red, green, and blue lasers, which are activated at various intensities depending on the color chosen. These lights are reflected off mirrors toward an optical prism, where the colors mix and are promptly released as a single combined beam of light.
After designing the film, one can fabricate diverse structurally colored objects by first coating a chosen object with a thin layer of epoxy resin. Next, the holographic film (from Liti Holographics) — composed of a photopolymer layer and a protective plastic backing — is bonded to the object through a 20-second ultraviolet cure, essentially using a handheld UV light to transfer the colored design onto the surface. Finally, users peel off the film’s protective plastic sheet, revealing a color-changing, structurally colored object that looks like a jewel.
Do try this at home
MorphoChrome is surprisingly user-friendly, consisting of a straightforward fabrication blueprint and an easy-to-use device that encourages do-it-yourself designers and other makers to explore iridescent designs at home. Instead of spending time searching for hard-to-find artistic materials or chemically synthesizing structural color in the lab, users can focus on expressing various ideas and experimenting with programming different radiant color mixes.
The array of possible colors stems from intriguing fusions. Magenta, for instance, is created when the system’s red and blue lasers mix. Selecting cyan on the MorphoChrome software’s color wheel will mix the green and blue lights.
Users should note that the time it takes to fully expose the film varies by color, based on the researchers’ multi-color findings and the intrinsic properties of holographic photopolymer film. MorphoChrome saturates green in 2.5 seconds, whereas red takes about 3 seconds and blue needs roughly 6 seconds. The discrepancy arises because each laser color is a particular wavelength of light, and the film requires a different exposure dose at each wavelength (blue needing more than green or red).
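Those saturation times imply a simple per-channel exposure schedule. The sketch below scales each laser’s exposure linearly with the target channel intensity, using the article’s times as full-saturation baselines; the linear scaling rule is our assumption, not the team’s software:

```python
# Per-channel exposure scheduling sketch. The full-saturation times come
# from the article (green 2.5 s, red 3 s, blue 6 s); the linear scaling
# rule is our assumption, not MorphoChrome's actual control logic.

FULL_SATURATION_S = {"red": 3.0, "green": 2.5, "blue": 6.0}

def exposure_schedule(r: float, g: float, b: float) -> dict:
    """Map a target color (channels in 0..1) to per-laser exposure times."""
    target = {"red": r, "green": g, "blue": b}
    return {ch: level * FULL_SATURATION_S[ch] for ch, level in target.items()}

# Magenta mixes the red and blue lasers; cyan mixes green and blue.
print(exposure_schedule(1.0, 0.0, 1.0))  # {'red': 3.0, 'green': 0.0, 'blue': 6.0}
print(exposure_schedule(0.0, 1.0, 1.0))  # {'red': 0.0, 'green': 2.5, 'blue': 6.0}
```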
Look at this hologram
MorphoChrome builds upon previous work on stretchable structural color by co-author Benjamin Miller PhD ’24, Professor Mathias Kolle, and Kolle’s Laboratory for Biologically Inspired Photonic Engineering group at MIT's Department of Mechanical Engineering. The CSAIL researchers, who work in the Human-Computer Interaction Engineering Group, say that MorphoChrome also advances their ongoing work on merging computation with unique materials to create dynamic, programmable color interfaces.
Going forward, their goal is to push the capabilities of holographic structural color as a reprogrammable design and manufacturing space, empowering individuals and industries alike to customize iridescent and diffuse multi-color interfaces. “The polymer sheet we went with here is holographic, which has potential beyond what we’re showing here,” says co-author Yunyi Zhu ’20, MEng ’21, who is an MIT EECS PhD student and CSAIL researcher. “We’re working on adapting our process for creating entire 3D light fields in one film.”
Customizing full light-field holographic messages onto objects would allow users to encode information and 3D images. One could imagine, for example, that a passport could have a sticker that beams out a 3D green check mark. This hologram would signal its authenticity when viewed through a particular device or at a certain angle.
The team is also inspired by how animals use structural color as an adaptive communication channel and camouflage technique. Going forward, they are curious how programmable structural color could be integrated into different types of environments, perhaps as camouflage for soft robotic structures to blend into an environment. For instance, they imagine a robot studying jungle terrain may need to match the appearance of nearby bushes to collect data, with a human reprogramming the machine’s color from afar.
In the meantime, MorphoChrome recreates the majestic structural color found in various ecosystems, connecting a natural phenomenon with our creative processes. The MIT researchers will look to improve the system’s color gamut and maximize how luminous mixed colors are. They’re also considering another material for the device’s casing, since the current 3D-printed housing leaks some light.
“Being able to easily create and manipulate structural color is a great new tool, and opens up new avenues for discovery and expression,” says Liti Holographics CEO Paul Christie SM ’97, who wasn’t involved in the research. “Simplifying the process to be more easily accessible allows for new applications to be developed in a wider range of areas, from art and jewelry to functional fabric.”
Myers, Zhu, and Miller wrote the paper with senior author Stefanie Mueller, who is an MIT associate professor of electrical engineering and computer science and CSAIL principal investigator. Their research was supported by the National Science Foundation, and presented as a demo paper and poster at the 2025 ACM Symposium on Computational Fabrication in November.
💾 The Worst Data Breaches of 2025—And What You Can Do | EFFector 38.1
So many data breaches happen throughout the year that it can be pretty easy to gloss over not just if, but how many different breaches compromised your data. We're diving into these data breaches and more with our latest EFFector newsletter.
Since 1990, EFFector has been your guide to understanding the intersection of technology, civil liberties, and the law. This latest issue tracks U.S. Immigration and Customs Enforcement's (ICE) surveillance spending spree, explains how hackers are countering ICE's surveillance, and invites you to our free livestream covering online age verification mandates.
Prefer to listen in? In our audio companion, EFF Security and Privacy Activist Thorin Klosowski explains what you can do to protect yourself from data breaches and how companies can better protect their users. Find the conversation on YouTube or the Internet Archive.
EFFECTOR 38.1 - 💾 THE WORST DATA BREACHES OF 2025—and what you can do
Want to stay in the fight for privacy and free speech online? Sign up for EFF's EFFector newsletter for updates, ways to take action, and new merch drops. You can also fuel the fight to protect people from these data breaches and unlawful surveillance when you support EFF today!
EFF Joins Internet Advocates Calling on the Iranian Government to Restore Full Internet Connectivity
Earlier this month, Iran’s internet connectivity faced one of its most severe disruptions in recent years, with a near-total disconnection from the global internet and major restrictions on mobile access.
EFF joined architects, operators, and stewards of the global internet infrastructure in calling upon authorities in Iran to immediately restore full and unfiltered internet access. We further call upon the international technical community to remain vigilant in monitoring connectivity and to support efforts that ensure the internet remains open, interoperable, and accessible to all.
This is not the first time people in Iran have been forced to experience this; the government has suppressed internet access in the country for many years. In the past three years in particular, the people of Iran have suffered repeated internet and social media blackouts following an activist movement that blossomed after the death of Mahsa Amini, a woman murdered in police custody for refusing to wear a hijab. The movement gained global attention, and in response the Iranian government rushed to control both the public narrative and organizing efforts by banning social media and sometimes cutting off internet access altogether.
EFF has long maintained that governments and occupying powers must not disrupt internet or telecommunication access. Cutting off telecommunications and internet access is a violation of basic human rights and a direct attack on people's ability to access information and communicate with one another.
Our joint statement continues:
“We assert the following principles:
- Connectivity is a Fundamental Enabler of Human Rights: In the 21st century, the right to assemble, the right to speak, and the right to access information are inextricably linked to internet access.
- Protecting the Global Internet Commons: National-scale shutdowns fragment the global network, undermining the stability and trust required for the internet to function as a global commons.
- Transparency: The technical community condemns the use of BGP manipulation and infrastructure filtering to obscure events on the ground.”
Read the letter in full here.
