MIT Latest News

MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.

Three MIT students named 2026 Schwarzman Scholars

Wed, 01/15/2025 - 2:45pm

Three MIT students — Yutao Gong, Brandon Man, and Andrii Zahorodnii — have been awarded 2026 Schwarzman Scholarships and will join the program’s 10th cohort to pursue a master’s degree in global affairs at Tsinghua University in Beijing, China.

The MIT students were selected from a pool of over 5,000 applicants. This year’s class of 150 scholars represents 38 countries and 105 universities from around the world.

The Schwarzman Scholars program aims to develop leadership skills and deepen understanding of China’s changing role in the world. The fully funded one-year master’s program at Tsinghua University emphasizes leadership, global affairs, and China. Scholars also gain exposure to China through mentoring, internships, and experiential learning.

MIT’s Schwarzman Scholar applicants receive guidance and mentorship from the distinguished fellowships team in Career Advising and Professional Development and the Presidential Committee on Distinguished Fellowships.

Yutao Gong will graduate this spring from the Leaders for Global Operations program at the MIT Sloan School of Management, earning a dual MBA and an MS degree in civil and environmental engineering with a focus on manufacturing and operations. Gong, who hails from Shanghai, China, has academic, work, and social engagement experience in China, the United States, Jordan, and Denmark. She was previously a consultant at Boston Consulting Group working on manufacturing, agriculture, sustainability, and renewable energy-related projects, and spent two years in Chicago and one year in Greater China as a global ambassador. Gong graduated magna cum laude from Duke University with a double major in environmental science and statistics, where she organized the Duke China-U.S. Summit.

Brandon Man, from Canada and Hong Kong, is a master’s student in the Department of Mechanical Engineering at MIT, where he studies generative artificial intelligence (genAI) for engineering design. Previously, he graduated from Cornell University magna cum laude with honors in computer science. With a wealth of experience in robotics — from assistive robots to next-generation spacesuits for NASA to Tencent’s robot dog, Max — he is now a co-founder of Sequestor, a genAI-powered data aggregation platform that enables carbon credit investors to perform faster due diligence. His goal is to bridge the best practices of the Eastern and Western tech worlds.

Andrii Zahorodnii, from Ukraine, will graduate this spring with a bachelor of science and a master of engineering degree in computer science and cognitive sciences. An engineer as well as a neuroscientist, he has conducted research at MIT with Professor Guangyu Robert Yang’s MetaConscious Group and the Fiete Lab. Zahorodnii is passionate about using AI to uncover insights into human cognition, leading to more-informed, empathetic, and effective global decision-making and policy. Besides driving the exchange of ideas as a TEDxMIT organizer, he strives to empower and inspire future leaders internationally and in Ukraine through the Ukraine Leadership and Technology Academy he founded.

This fast and agile robotic insect could someday aid in mechanical pollination

Wed, 01/15/2025 - 2:00pm

With a more efficient method for artificial pollination, farmers in the future could grow fruits and vegetables inside multilevel warehouses, boosting yields while mitigating some of agriculture’s harmful impacts on the environment.

To help make this idea a reality, MIT researchers are developing robotic insects that could someday swarm out of mechanical hives to rapidly perform precise pollination. However, even the best bug-sized robots are no match for natural pollinators like bees when it comes to endurance, speed, and maneuverability.

Now, inspired by the anatomy of these natural pollinators, the researchers have overhauled their design to produce tiny, aerial robots that are far more agile and durable than prior versions.

The new bots can hover for about 1,000 seconds, which is more than 100 times longer than previously demonstrated. The robotic insect, which weighs less than a paperclip, can fly significantly faster than similar bots while completing acrobatic maneuvers like double aerial flips.

The revamped robot is designed to boost flight precision and agility while minimizing the mechanical stress on its artificial wing flexures, which enables faster maneuvers, increased endurance, and a longer lifespan.

The new design also has enough free space that the robot could carry tiny batteries or sensors, which could enable it to fly on its own outside the lab.

“The amount of flight we demonstrated in this paper is probably longer than the entire amount of flight our field has been able to accumulate with these robotic insects. With the improved lifespan and precision of this robot, we are getting closer to some very exciting applications, like assisted pollination,” says Kevin Chen, an associate professor in the Department of Electrical Engineering and Computer Science (EECS), head of the Soft and Micro Robotics Laboratory within the Research Laboratory of Electronics (RLE), and the senior author of an open-access paper on the new design.

Chen is joined on the paper by co-lead authors Suhan Kim and Yi-Hsuan Hsiao, who are EECS graduate students; as well as EECS graduate student Zhijian Ren and summer visiting student Jiashu Huang. The research appears today in Science Robotics.

Boosting performance

Prior versions of the robotic insect were composed of four identical units, each with two wings, combined into a rectangular device about the size of a microcassette.

“But there is no insect that has eight wings. In our old design, the performance of each individual unit was always better than the assembled robot,” Chen says.

This performance drop was partly caused by the arrangement of the wings, which would blow air into each other when flapping, reducing the lift forces they could generate.

The new design chops the robot in half. Each of the four identical units now has one flapping wing pointing away from the robot’s center, stabilizing the wings and boosting their lift forces. With half as many wings, this design also frees up space so the robot could carry electronics.

In addition, the researchers created more complex transmissions that connect the wings to the actuators, or artificial muscles, that flap them. These durable transmissions, which required the design of longer wing hinges, reduce the mechanical strain that limited the endurance of past versions.

“Compared to the old robot, we can now generate control torque three times larger than before, which is why we can do very sophisticated and very accurate path-finding flights,” Chen says.

Yet even with these design innovations, there is still a gap between the best robotic insects and the real thing. For instance, a bee has only two wings, yet it can perform rapid and highly controlled motions.

“The wings of bees are finely controlled by a very sophisticated set of muscles. That level of fine-tuning is something that truly intrigues us, but we have not yet been able to replicate,” he says.

Less strain, more force

The motion of the robot’s wings is driven by artificial muscles. These tiny, soft actuators are made from layers of elastomer sandwiched between two very thin carbon nanotube electrodes and then rolled into a squishy cylinder. The actuators rapidly compress and elongate, generating mechanical force that flaps the wings.

In previous designs, when the actuator’s movements reached the extremely high frequencies needed for flight, the devices often started buckling, which reduced the power and efficiency of the robot. The new transmissions inhibit this bending-buckling motion, which reduces the strain on the artificial muscles and enables them to apply more force to flap the wings.

Another new design element is a long wing hinge that reduces the torsional stress experienced during the flapping-wing motion. Fabricating the hinge, which is about 2 centimeters long but just 200 microns in diameter, was among the team’s greatest challenges.

“If you have even a tiny alignment issue during the fabrication process, the wing hinge will be slanted instead of rectangular, which affects the wing kinematics,” Chen says.

After many attempts, the researchers perfected a multistep laser-cutting process that enabled them to precisely fabricate each wing hinge.

With all four units in place, the new robotic insect can hover for more than 1,000 seconds, which equates to almost 17 minutes, without showing any degradation of flight precision.

“When my student Nemo was performing that flight, he said it was the slowest 1,000 seconds he had spent in his entire life. The experiment was extremely nerve-racking,” Chen says.

The new robot also reached an average speed of 35 centimeters per second, the fastest flight the researchers have reported, while performing body rolls and double flips. It can even precisely track a trajectory that spells M-I-T.

“At the end of the day, we’ve shown flight that is 100 times longer than anyone else in the field has been able to do, so this is an extremely exciting result,” he says.

From here, Chen and his students want to see how far they can push this new design, with the goal of achieving flight for longer than 10,000 seconds.

They also want to improve the precision of the robots so they could land and take off from the center of a flower. In the long run, the researchers hope to install tiny batteries and sensors onto the aerial robots so they could fly and navigate outside the lab.

“This new robot platform is a major result from our group and leads to many exciting directions. For example, incorporating sensors, batteries, and computing capabilities on this robot will be a central focus in the next three to five years,” Chen says.

This research is funded, in part, by the U.S. National Science Foundation and a Mathworks Fellowship.

How one brain circuit encodes memories of both places and events

Wed, 01/15/2025 - 11:00am

Nearly 50 years ago, neuroscientists discovered cells within the brain’s hippocampus that store memories of specific locations. These cells also play an important role in storing memories of events, known as episodic memories. While the mechanism by which place cells encode spatial memory has been well characterized, how they encode episodic memories has remained a puzzle.

A new model developed by MIT researchers explains how those place cells can be recruited to form episodic memories, even when there’s no spatial component. According to this model, place cells, along with grid cells found in the entorhinal cortex, act as a scaffold that can be used to anchor memories as a linked series.

“This model is a first-draft model of the entorhinal-hippocampal episodic memory circuit. It’s a foundation to build on to understand the nature of episodic memory. That’s the thing I’m really excited about,” says Ila Fiete, a professor of brain and cognitive sciences at MIT, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the new study.

The model accurately replicates several features of biological memory systems, including the large storage capacity, gradual degradation of older memories, and the ability of people who compete in memory competitions to store enormous amounts of information in “memory palaces.”

MIT Research Scientist Sarthak Chandra and Sugandha Sharma PhD ’24 are the lead authors of the study, which appears today in Nature. Rishidev Chaudhuri, an assistant professor at the University of California at Davis, is also an author of the paper.

An index of memories

To encode spatial memory, place cells in the hippocampus work closely with grid cells — a special type of neuron that fires at many different locations, arranged geometrically in a regular pattern of repeating triangles. Together, a population of grid cells forms a lattice of triangles representing a physical space.

In addition to helping us recall places where we’ve been, these hippocampal-entorhinal circuits also help us navigate new locations. Studies of human patients have shown that these circuits are also critical for forming episodic memories, which might have a spatial component but mainly consist of events, such as how you celebrated your last birthday or what you had for lunch yesterday.

“The same hippocampal and entorhinal circuits are used not just for spatial memory, but also for general episodic memory,” Fiete says. “The question you can ask is what is the connection between spatial and episodic memory that makes them live in the same circuit?”

Two hypotheses have been proposed to account for this overlap in function. One is that the circuit is specialized to store spatial memories because those types of memories — remembering where food was located or where predators were seen — are important to survival. Under this hypothesis, this circuit encodes episodic memories as a byproduct of spatial memory.

An alternative hypothesis suggests that the circuit is specialized to store episodic memories, but also encodes spatial memory because location is one aspect of many episodic memories.

In this work, Fiete and her colleagues proposed a third option: that the peculiar tiling structure of grid cells and their interactions with the hippocampus are equally important for both types of memory — episodic and spatial. To develop their new model, they built on computational models that her lab has been developing over the past decade, which mimic how grid cells encode spatial information.

“We reached the point where I felt like we understood on some level the mechanisms of the grid cell circuit, so it felt like the time to try to understand the interactions between the grid cells and the larger circuit that includes the hippocampus,” Fiete says.

In the new model, the researchers hypothesized that grid cells interacting with hippocampal cells can act as a scaffold for storing either spatial or episodic memory. Each activation pattern within the grid defines a “well,” and these wells are spaced out at regular intervals. The wells don’t store the content of a specific memory, but each one acts as a pointer to a specific memory, which is stored in the synapses between the hippocampus and the sensory cortex.

When the memory is triggered later from fragmentary pieces, grid and hippocampal cell interactions drive the circuit state into the nearest well, and the state at the bottom of the well connects to the appropriate part of the sensory cortex to fill in the details of the memory. The sensory cortex is much larger than the hippocampus and can store vast amounts of memory.

“Conceptually, we can think about the hippocampus as a pointer network. It’s like an index that can be pattern-completed from a partial input, and that index then points toward sensory cortex, where those inputs were experienced in the first place,” Fiete says. “The scaffold doesn’t contain the content, it only contains this index of abstract scaffold states.”

Furthermore, events that occur in sequence can be linked together: Each well in the grid cell-hippocampal network efficiently stores the information that is needed to activate the next well, allowing memories to be recalled in the right order.
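A minimal sketch can make this scaffold-and-pointer picture concrete. The toy code below is an illustration of the concept, not the authors' model: the "wells" are random binary vectors that hold no content, a separate store (standing in for the hippocampus-to-sensory-cortex synapses) holds the event details, and each well links to the next so a sequence replays in order.

```python
import random

random.seed(1)
D = 256  # dimensionality of scaffold states

# "Wells": fixed scaffold states that index memories but hold no content.
wells = [[random.choice((-1, 1)) for _ in range(D)] for _ in range(5)]

# Content lives outside the scaffold, keyed by well index.
content = {i: f"event {i}" for i in range(5)}

# Sequence links: each well stores which well to activate next,
# so an episode can be replayed in order.
next_well = {i: i + 1 for i in range(4)}

def nearest_well(cue):
    """Pattern-complete a noisy cue to the closest scaffold state."""
    return max(range(len(wells)),
               key=lambda i: sum(c * w for c, w in zip(cue, wells[i])))

# Build a fragmentary cue: well 2 with a third of its bits flipped.
cue = list(wells[2])
for j in random.sample(range(D), D // 3):
    cue[j] *= -1

i = nearest_well(cue)
print(content[i])             # the partial cue snaps back to "event 2"
print(content[next_well[i]])  # the sequence link recalls the next event
```

The key property this captures is that the scaffold performs pattern completion and ordering, while the actual memory content is stored elsewhere and merely indexed.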

Modeling memory cliffs and palaces

The researchers’ new model replicates several memory-related phenomena much more accurately than existing models that are based on Hopfield networks — a type of neural network that can store and recall patterns.

While Hopfield networks offer insight into how memories can be formed by strengthening connections between neurons, they don’t perfectly model how biological memory works. In Hopfield models, every memory is recalled in perfect detail until capacity is reached. At that point, no new memories can form, and worse, attempting to add more memories erases all prior ones. This “memory cliff” doesn’t accurately mimic what happens in the biological brain, which tends to gradually forget the details of older memories while new ones are continually added.
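The memory cliff is easy to reproduce in a toy Hopfield network. The sketch below is a generic textbook illustration, not the researchers' model: random ±1 patterns are stored with the Hebbian rule, recall is essentially perfect below the classical capacity of roughly 0.138N patterns, and collapses for every memory once the network is overloaded.

```python
import random

random.seed(0)
N = 120  # neurons

def train(patterns):
    """Hebbian outer-product rule with zeroed diagonal."""
    W = [[0.0] * N for _ in range(N)]
    for p in patterns:
        for i in range(N):
            for j in range(N):
                if i != j:
                    W[i][j] += p[i] * p[j] / N
    return W

def recall(W, probe, steps=10):
    """Synchronous sign-threshold dynamics from a probe state."""
    s = list(probe)
    for _ in range(steps):
        s = [1 if sum(W[i][j] * s[j] for j in range(N)) >= 0 else -1
             for i in range(N)]
    return s

def overlap(a, b):
    return abs(sum(x * y for x, y in zip(a, b))) / N

def rand_pattern():
    return [random.choice((-1, 1)) for _ in range(N)]

# Well under the ~0.138*N capacity limit: stored patterns are stable.
few = [rand_pattern() for _ in range(4)]
print(overlap(recall(train(few), few[0]), few[0]))  # ~1.0: perfect recall

# Far past capacity: adding more memories wrecks recall of all of them,
# the "memory cliff" that biological forgetting does not show.
many = [rand_pattern() for _ in range(60)]
print(overlap(recall(train(many), many[0]), many[0]))  # collapses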

The new MIT model captures findings from decades of recordings of grid and hippocampal cells in rodents made as the animals explore and forage in various environments. It also helps to explain the underlying mechanisms for a memorization strategy known as a memory palace. One of the tasks in memory competitions is to memorize the shuffled sequence of cards in one or several card decks. Competitors usually do this by assigning each card to a particular spot in a memory palace — a memory of a childhood home or other environment they know well. When they need to recall the cards, they mentally stroll through the house, visualizing each card in its spot as they go along. Counterintuitively, adding the memory burden of associating cards with locations makes recall stronger and more reliable.

The MIT team’s computational model was able to perform such tasks very well, suggesting that memory palaces take advantage of the memory circuit’s own strategy of associating inputs with a scaffold in the hippocampus, but one level down: Long-acquired memories reconstructed in the larger sensory cortex can now be pressed into service as a scaffold for new memories. This allows for the storage and recall of many more items in a sequence than would otherwise be possible.

The researchers now plan to build on their model to explore how episodic memories could become converted to cortical “semantic” memory, or the memory of facts dissociated from the specific context in which they were acquired (for example, Paris is the capital of France), how episodes are defined, and how brain-like memory models could be integrated into modern machine learning.

The research was funded by the U.S. Office of Naval Research, the National Science Foundation under the Robust Intelligence program, the ARO-MURI award, the Simons Foundation, and the K. Lisa Yang ICoN Center.

Fast control methods enable record-setting fidelity in superconducting qubit

Tue, 01/14/2025 - 4:35pm

Quantum computing promises to solve complex problems exponentially faster than a classical computer, by using the principles of quantum mechanics to encode and manipulate information in quantum bits (qubits).

Qubits are the building blocks of a quantum computer. One challenge to scaling, however, is that qubits are highly sensitive to background noise and control imperfections, which introduce errors into the quantum operations and ultimately limit the complexity and duration of a quantum algorithm. To improve the situation, MIT researchers and researchers worldwide have continually focused on improving qubit performance. 

In new work, using a superconducting qubit called fluxonium, MIT researchers in the Department of Physics, the Research Laboratory of Electronics (RLE), and the Department of Electrical Engineering and Computer Science (EECS) developed two new control techniques to achieve a world-record single-qubit fidelity of 99.998 percent. This result complements then-MIT researcher Leon Ding’s demonstration last year of a 99.92 percent two-qubit gate fidelity

The paper’s senior authors are David Rower PhD ’24, a recent physics postdoc in MIT’s Engineering Quantum Systems (EQuS) group and now a research scientist at the Google Quantum AI laboratory; Leon Ding PhD ’23 from EQuS, now leading the Calibration team at Atlantic Quantum; and William D. Oliver, the Henry Ellis Warren Professor of EECS and professor of physics, leader of EQuS, director of the Center for Quantum Engineering, and RLE associate director. The paper recently appeared in the journal PRX Quantum.

Decoherence and counter-rotating errors

A major challenge with quantum computation is decoherence, a process by which qubits lose their quantum information. For platforms such as superconducting qubits, decoherence stands in the way of realizing higher-fidelity quantum gates.

Quantum computers need to achieve high gate fidelities in order to implement sustained computation through protocols like quantum error correction. The higher the gate fidelity, the easier it is to realize practical quantum computing.

MIT researchers are developing techniques to make quantum gates, the basic operations of a quantum computer, as fast as possible in order to reduce the impact of decoherence. However, as gates get faster, another type of error, arising from counter-rotating dynamics, can be introduced because of the way qubits are controlled using electromagnetic waves. 

Single-qubit gates are usually implemented with a resonant pulse, which induces Rabi oscillations between the qubit states. When the pulses are too fast, however, “Rabi gates” are not so consistent, due to unwanted errors from counter-rotating effects. The faster the gate, the more the counter-rotating error is manifest. For low-frequency qubits such as fluxonium, counter-rotating errors limit the fidelity of fast gates.

“Getting rid of these errors was a fun challenge for us,” says Rower. “Initially, Leon had the idea to utilize circularly polarized microwave drives, analogous to circularly polarized light, but realized by controlling the relative phase of charge and flux drives of a superconducting qubit. Such a circularly polarized drive would ideally be immune to counter-rotating errors.”

While Ding’s idea worked immediately, the fidelities achieved with circularly polarized drives were not as high as expected from coherence measurements.

“Eventually, we stumbled on a beautifully simple idea,” says Rower. “If we applied pulses at exactly the right times, we should be able to make counter-rotating errors consistent from pulse-to-pulse. This would make the counter-rotating errors correctable. Even better, they would be automatically accounted for with our usual Rabi gate calibrations!”

They called this idea “commensurate pulses,” since the pulses needed to be applied at times commensurate with intervals determined by the qubit frequency through its inverse, the time period. Commensurate pulses are defined simply by timing constraints and can be applied to a single linear qubit drive. In contrast, circularly polarized microwaves require two drives and some extra calibration.

“I had much fun developing the commensurate technique,” says Rower. “It was simple, we understood why it worked so well, and it should be portable to any qubit suffering from counter-rotating errors!”

“This project makes it clear that counter-rotating errors can be dealt with easily. This is a wonderful thing for low-frequency qubits such as fluxonium, which are looking more and more promising for quantum computing.”

Fluxonium’s promise

Fluxonium is a type of superconducting qubit made up of a capacitor and Josephson junction; unlike transmon qubits, however, fluxonium also includes a large “superinductor,” which by design helps protect the qubit from environmental noise. This results in performing logical operations, or gates, with greater accuracy.

Despite having higher coherence, however, fluxonium has a lower qubit frequency that is generally associated with proportionally longer gates.

“Here, we’ve demonstrated a gate that is among the fastest and highest-fidelity across all superconducting qubits,” says Ding. “Our experiments really show that fluxonium is a qubit that supports both interesting physical explorations and also absolutely delivers in terms of engineering performance.”

With further research, they hope to reveal new limitations and yield even faster and higher-fidelity gates.

“Counter-rotating dynamics have been understudied in the context of superconducting quantum computing because of how well the rotating-wave approximation holds in common scenarios,” says Ding. “Our paper shows how to precisely calibrate fast, low-frequency gates where the rotating-wave approximation does not hold.”

Physics and engineering team up

“This is a wonderful example of the type of work we like to do in EQuS, because it leverages fundamental concepts in both physics and electrical engineering to achieve a better outcome,” says Oliver. “It builds on our earlier work with non-adiabatic qubit control, applies it to a new qubit — fluxonium — and makes a beautiful connection with counter-rotating dynamics.”

The science and engineering teams enabled the high fidelity in two ways. First, the team demonstrated “commensurate” (synchronous) non-adiabatic control, which goes beyond the standard “rotating wave approximation” of standard Rabi approaches. This leverages ideas that won the 2023 Nobel Prize in Physics for ultrafast “attosecond” pulses of light.

Secondly, they demonstrated it using an analog to circularly polarized light. Rather than a physical electromagnetic field with a rotating polarization vector in real x-y space, they realized a synthetic version of circularly polarized light using the qubit’s x-y space, which in this case corresponds to its magnetic flux and electric charge.

The combination of a new take on an existing qubit design (fluxonium) and the application of advanced control methods applied to an understanding of the underlying physics enabled this result.

Platform-independent and requiring no additional calibration overhead, this work establishes straightforward strategies for mitigating counter-rotating effects from strong drives in circuit quantum electrodynamics and other platforms, which the researchers expect to be helpful in the effort to realize high-fidelity control for fault-tolerant quantum computing.

Adds Oliver, “With the recent announcement of Google’s Willow quantum chip that demonstrated quantum error correction beyond threshold for the first time, this is a timely result, as we have pushed performance even higher. Higher-performant qubits will lead to lower overhead requirements for implementing error correction.”  

Other researchers on the paper are RLE’s Helin ZhangMax Hays, Patrick M. Harrington, Ilan T. RosenSimon GustavssonKyle SerniakJeffrey A. Grover, and Junyoung An, who is also with EECS; and MIT Lincoln Laboratory’s Jeffrey M. Gertler, Thomas M. Hazard, Bethany M. Niedzielski, and Mollie E. Schwartz.

This research was funded, in part, by the U.S. Army Research Office, the U.S. Department of Energy Office of Science, National Quantum Information Science Research Centers, Co-design Center for Quantum Advantage, U.S. Air Force, the U.S. Office of the Director of National Intelligence, and the U.S. National Science Foundation.  

Global Languages program empowers student ambassadors

Tue, 01/14/2025 - 4:20pm

Angelina Wu has been taking Japanese classes at MIT since arriving as a first-year student.

“I have had such a wonderful experience learning the language, getting to know my classmates, and interacting with the Japanese community at MIT,” says Wu, now a senior majoring in computer science and engineering.

“It’s been an integral part of my MIT experience, supplementing my other technical skills and also giving me opportunities to meet many people outside my major that I likely wouldn’t have had otherwise. As a result, I feel like I get to understand a much broader, more complete version of MIT.”

Now, Wu is sharing her experience and giving back as a Global Languages Student Ambassador. At a recent Global Languages preregistration fair, Wu spoke with other students interested in pursuing Japanese studies.

“I could not be happier to help promote such an experience to curious students and the greater MIT community,” Wu says.

Global Language Student Ambassadors is a group of students who lead outreach efforts to help increase visibility for the program.

In addition to disseminating information and promotional materials to the MIT undergraduate community, student ambassadors are asked to organize and host informal gatherings for Global Languages students around themes related to language and cultural exploration to build community and provide opportunities for learning and fun outside of the classroom.

Global Languages director Per Urlaub isn’t surprised that the Student Ambassadors program is popular with both students and the MIT community.

“The Global Languages program brings people together,” he says. “Providing a caring learning environment and creating a sense of belonging are central to our mission.”

What’s also central to the Global Languages’ mission is centering students’ work and creating spaces in which language learning can help create connections across academic areas. Students who study languages may improve their understanding of the cultural facets that underlie communication across cultures and open new worlds.

“An engaging community that fosters a deep sense of belonging doesn’t just happen automatically,” Urlaub notes. “A stronger community elevates our students’ proficiency gains, and also makes language learning more meaningful and fun.”

Each student ambassador serves for a single academic year in their area of language focus. They work closely with MIT’s academic administrators to plan, communicate, and stage events.

“I love exploring the richness of the Arabic language, especially how it connects to my culture and heritage,” says Heba Hussein, a student ambassador studying Arabic and majoring in electrical science and engineering. “I believe that having a strong grasp of languages and cultural awareness will help me work effectively in diverse teams.”

Student ambassadors, alongside other language learners, discover how other languages, cultures, and countries can guide their communications with others while shaping how they understand the world.

“My Spanish courses at MIT have been a highlight of my college experience thus far — the opportunity to connect on a deeper level with other cultures and force myself out of my comfort zone in conversations is important to me,” says Katie Kempff, another student ambassador who is majoring in climate system science and engineering and Spanish.

“As a heritage speaker, learning Chinese has been a way for me to connect with my culture and my roots,” adds Zixuan Liu, a double major in biological engineering and biology, and a Chinese student ambassador, who says that as a heritage speaker, learning Chinese has been a way for her to connect with her culture and her roots.

“I would highly recommend diving into languages and culture at MIT, where the support and the community really enhances the experience,” Liu says.

New computational chemistry techniques accelerate the prediction of molecules and materials

Tue, 01/14/2025 - 3:40pm

Back in the old days — the really old days — the task of designing materials was laborious. Investigators, over the course of 1,000-plus years, tried to make gold by combining things like lead, mercury, and sulfur, mixed in what they hoped would be just the right proportions. Even famous scientists like Tycho Brahe, Robert Boyle, and Isaac Newton tried their hands at the fruitless endeavor we call alchemy.

Materials science has, of course, come a long way. For the past 150 years, researchers have had the benefit of the periodic table of elements to draw upon, which tells them that different elements have different properties, and one can’t magically transform into another. Moreover, in the past decade or so, machine learning tools have considerably boosted our capacity to determine the structure and physical properties of various molecules and substances. New research by a group led by Ju Li — the Tokyo Electric Power Company Professor of Nuclear Engineering at MIT and professor of materials science and engineering — offers the promise of a major leap in capabilities that can facilitate materials design. The results of their investigation are reported in a December 2024 issue of Nature Computational Science.

At present, most of the machine-learning models that are used to characterize molecular systems are based on density functional theory (DFT), which offers a quantum mechanical approach to determining the total energy of a molecule or crystal by looking at the electron density distribution — which is, basically, the average number of electrons located in a unit volume around each given point in space near the molecule. (Walter Kohn, who co-invented this theory 60 years ago, received a Nobel Prize in Chemistry for it in 1998.) While the method has been very successful, it has some drawbacks, according to Li: “First, the accuracy is not uniformly great. And, second, it only tells you one thing: the lowest total energy of the molecular system.”

“Couples therapy” to the rescue

His team is now relying on a different computational chemistry technique, also derived from quantum mechanics, known as coupled-cluster theory, or CCSD(T). “This is the gold standard of quantum chemistry,” Li comments. The results of CCSD(T) calculations are much more accurate than what you get from DFT calculations, and they can be as trustworthy as those currently obtainable from experiments. The problem is that carrying out these calculations on a computer is very slow, he says, “and the scaling is bad: If you double the number of electrons in the system, the computations become 100 times more expensive.” For that reason, CCSD(T) calculations have normally been limited to molecules with a small number of atoms — on the order of about 10. Anything much beyond that would simply take too long.
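As a rough illustration of that scaling argument, a cost that grows as roughly the seventh power of system size jumps by 2^7 = 128 when the electron count doubles. The sketch below uses the commonly cited asymptotic complexities for the two methods; the reference system size is arbitrary and prefactors are ignored.

```python
# Rough cost-scaling sketch for DFT vs. CCSD(T), using commonly cited
# asymptotic complexities (DFT ~ N^3, CCSD(T) ~ N^7). Prefactors are
# ignored; only the growth trend matters.

def relative_cost(n_electrons: int, exponent: int) -> float:
    """Cost relative to a 10-electron reference system."""
    return (n_electrons / 10) ** exponent

# Doubling the electron count multiplies a CCSD(T)-like cost by 2^7 = 128,
# roughly the "100 times more expensive" figure quoted above.
print(relative_cost(20, 7))  # 128.0
print(relative_cost(20, 3))  # 8.0 (a DFT-like N^3 cost grows far more slowly)
```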

That’s where machine learning comes in. CCSD(T) calculations are first performed on conventional computers, and the results are then used to train a neural network with a novel architecture specially devised by Li and his colleagues. After training, the neural network can perform these same calculations much faster by taking advantage of approximation techniques. What’s more, their neural network model can extract much more information about a molecule than just its energy. “In previous work, people have used multiple different models to assess different properties,” says Hao Tang, an MIT PhD student in materials science and engineering. “Here we use just one model to evaluate all of these properties, which is why we call it a ‘multi-task’ approach.”

The “Multi-task Electronic Hamiltonian network,” or MEHnet, sheds light on a number of electronic properties, such as the dipole and quadrupole moments, electronic polarizability, and the optical excitation gap — the amount of energy needed to take an electron from the ground state to the lowest excited state. “The excitation gap affects the optical properties of materials,” Tang explains, “because it determines the frequency of light that can be absorbed by a molecule.” Another advantage of their CCSD-trained model is that it can reveal properties of not only ground states, but also excited states. The model can also predict the infrared absorption spectrum of a molecule related to its vibrational properties, where the vibrations of atoms within a molecule are coupled to each other, leading to various collective behaviors.

The strength of their approach owes a lot to the network architecture. Drawing on the work of MIT Assistant Professor Tess Smidt, the team is utilizing a so-called E(3)-equivariant graph neural network, says Tang, “in which the nodes represent atoms and the edges that connect the nodes represent the bonds between atoms. We also use customized algorithms that incorporate physics principles — related to how people calculate molecular properties in quantum mechanics — directly into our model.”
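A minimal sketch of the molecule-as-graph layout described above, with atoms as nodes and bonds as edges. The `MolecularGraph` class is a hypothetical toy that only illustrates the data structure, not the E(3)-equivariant model itself.

```python
# Toy molecule-as-graph layout: atoms are nodes, bonds are edges. This only
# illustrates the data structure; the actual MEHnet is an E(3)-equivariant
# neural network operating on such graphs.
from dataclasses import dataclass

@dataclass
class MolecularGraph:
    atoms: list[str]              # node labels, e.g. element symbols
    bonds: list[tuple[int, int]]  # edges as (i, j) atom-index pairs

    def neighbors(self, i: int) -> list[int]:
        """Indices of atoms bonded to atom i."""
        return [b for a, b in self.bonds if a == i] + \
               [a for a, b in self.bonds if b == i]

# Water: oxygen (index 0) bonded to two hydrogens
water = MolecularGraph(atoms=["O", "H", "H"], bonds=[(0, 1), (0, 2)])
print(water.neighbors(0))  # [1, 2]
```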

Testing, 1, 2, 3

When tested on its analysis of known hydrocarbon molecules, the model of Li et al. outperformed DFT counterparts and closely matched experimental results taken from the published literature.

Qiang Zhu — a materials discovery specialist at the University of North Carolina at Charlotte (who was not part of this study) — is impressed by what’s been accomplished so far. “Their method enables effective training with a small dataset, while achieving superior accuracy and computational efficiency compared to existing models,” he says. “This is exciting work that illustrates the powerful synergy between computational chemistry and deep learning, offering fresh ideas for developing more accurate and scalable electronic structure methods.”

The MIT-based group applied their model first to small, nonmetallic elements — hydrogen, carbon, nitrogen, oxygen, and fluorine, from which organic compounds can be made — and has since moved on to examining heavier elements: silicon, phosphorus, sulfur, chlorine, and even platinum. After being trained on small molecules, the model can be generalized to bigger and bigger molecules. “Previously, most calculations were limited to analyzing hundreds of atoms with DFT and just tens of atoms with CCSD(T) calculations,” Li says. “Now we’re talking about handling thousands of atoms and, eventually, perhaps tens of thousands.”

For now, the researchers are still evaluating known molecules, but the model can be used to characterize molecules that haven’t been seen before, as well as to predict the properties of hypothetical materials that consist of different kinds of molecules. “The idea is to use our theoretical tools to pick out promising candidates, which satisfy a particular set of criteria, before suggesting them to an experimentalist to check out,” Tang says.

It’s all about the apps

Looking ahead, Zhu is optimistic about the possible applications. “This approach holds the potential for high-throughput molecular screening,” he says. “That’s a task where achieving chemical accuracy can be essential for identifying novel molecules and materials with desirable properties.”

Once they demonstrate the ability to analyze large molecules with perhaps tens of thousands of atoms, Li says, “we should be able to invent new polymers or materials” that might be used in drug design or in semiconductor devices. The examination of heavier transition metal elements could lead to the advent of new materials for batteries — presently an area of acute need.

The future, as Li sees it, is wide open. “It’s no longer about just one area,” he says. “Our ambition, ultimately, is to cover the whole periodic table with CCSD(T)-level accuracy, but at lower computational cost than DFT. This should enable us to solve a wide range of problems in chemistry, biology, and materials science. It’s hard to know, at present, just how wide that range might be.”

This work was supported by the Honda Research Institute. Hao Tang acknowledges support from the Mathworks Engineering Fellowship. The calculations in this work were performed, in part, on the Matlantis high-speed universal atomistic simulator, the Texas Advanced Computing Center, the MIT SuperCloud, and the National Energy Research Scientific Computing Center.

For healthy hearing, timing matters

Tue, 01/14/2025 - 3:15pm

When sound waves reach the inner ear, neurons there pick up the vibrations and alert the brain. Encoded in their signals is a wealth of information that enables us to follow conversations, recognize familiar voices, appreciate music, and quickly locate a ringing phone or crying baby.

Neurons send signals by emitting spikes — brief changes in voltage that propagate along nerve fibers, also known as action potentials. Remarkably, auditory neurons can fire hundreds of spikes per second, and time their spikes with exquisite precision to match the oscillations of incoming sound waves.

With powerful new models of human hearing, scientists at MIT’s McGovern Institute for Brain Research have determined that this precise timing is vital for some of the most important ways we make sense of auditory information, including recognizing voices and localizing sounds.

The open-access findings, reported Dec. 4 in the journal Nature Communications, show how machine learning can help neuroscientists understand how the brain uses auditory information in the real world. MIT professor and McGovern investigator Josh McDermott, who led the research, explains that his team’s models better equip researchers to study the consequences of different types of hearing impairment and devise more effective interventions.

Science of sound

The nervous system’s auditory signals are timed so precisely that researchers have long suspected timing is important to our perception of sound. Sound waves oscillate at rates that determine their pitch: Low-pitched sounds travel in slow waves, whereas high-pitched sound waves oscillate more frequently. The auditory nerve that relays information from sound-detecting hair cells in the ear to the brain generates electrical spikes that correspond to the frequency of these oscillations. “The action potentials in an auditory nerve get fired at very particular points in time relative to the peaks in the stimulus waveform,” explains McDermott, who is also associate head of the MIT Department of Brain and Cognitive Sciences.

This relationship, known as phase-locking, requires neurons to time their spikes with sub-millisecond precision. But scientists haven’t really known how informative these temporal patterns are to the brain. Beyond being scientifically intriguing, McDermott says, the question has important clinical implications: “If you want to design a prosthesis that provides electrical signals to the brain to reproduce the function of the ear, it’s arguably pretty important to know what kinds of information in the normal ear actually matter,” he says.
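The phase-locking idea can be sketched in a few lines, assuming for illustration that a neuron fires exactly once per cycle at a fixed phase of a pure tone (real auditory neurons fire far less regularly); the function name and parameters are invented for this toy.

```python
# Toy phase-locking model: assume one spike per cycle, fired at a fixed phase
# of a pure tone. For a 200 Hz tone, that means one spike every 5 ms.

def phase_locked_spike_times(freq_hz: float, duration_s: float,
                             phase: float = 0.0) -> list[float]:
    """Spike times (seconds) locked to a fixed phase of each cycle."""
    period = 1.0 / freq_hz
    n_cycles = int(duration_s * freq_hz)
    return [(k + phase) * period for k in range(n_cycles)]

spikes = phase_locked_spike_times(200.0, 0.02)  # 20 ms of a 200 Hz tone
print(len(spikes))  # 4 spikes, one per 5 ms cycle
```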

This has been difficult to study experimentally; animal models can’t offer much insight into how the human brain extracts structure in language or music, and the auditory nerve is inaccessible for study in humans. So McDermott and graduate student Mark Saddler PhD ’24 turned to artificial neural networks.

Artificial hearing

Neuroscientists have long used computational models to explore how sensory information might be decoded by the brain, but until recent advances in computing power and machine learning methods, these models were limited to simulating simple tasks. “One of the problems with these prior models is that they’re often way too good,” says Saddler, who is now at the Technical University of Denmark. For example, a computational model tasked with identifying the higher pitch in a pair of simple tones is likely to perform better than people who are asked to do the same thing. “This is not the kind of task that we do every day in hearing,” Saddler points out. “The brain is not optimized to solve this very artificial task.” This mismatch limited the insights that could be drawn from this prior generation of models.

To better understand the brain, Saddler and McDermott wanted to challenge a hearing model to do things that people use their hearing for in the real world, like recognizing words and voices. That meant developing an artificial neural network to simulate the parts of the brain that receive input from the ear. The network was given input from some 32,000 simulated sound-detecting sensory neurons and then optimized for various real-world tasks.

The researchers showed that their model replicated human hearing well — better than any previous model of auditory behavior, McDermott says. In one test, the artificial neural network was asked to recognize words and voices within dozens of types of background noise, from the hum of an airplane cabin to enthusiastic applause. Under every condition, the model performed very similarly to humans.

When the team degraded the timing of the spikes in the simulated ear, however, their model could no longer match humans’ ability to recognize voices or identify the locations of sounds. For example, while McDermott’s team had previously shown that people use pitch to help them identify people’s voices, the model revealed that this ability is lost without precisely timed signals. “You need quite precise spike timing in order to both account for human behavior and to perform well on the task,” Saddler says. That suggests that the brain uses precisely timed auditory signals because they aid these practical aspects of hearing.

The team’s findings demonstrate how artificial neural networks can help neuroscientists understand how the information extracted by the ear influences our perception of the world, both when hearing is intact and when it is impaired. “The ability to link patterns of firing in the auditory nerve with behavior opens a lot of doors,” McDermott says.

“Now that we have these models that link neural responses in the ear to auditory behavior, we can ask, ‘If we simulate different types of hearing loss, what effect is that going to have on our auditory abilities?’” McDermott says. “That will help us better diagnose hearing loss, and we think there are also extensions of that to help us design better hearing aids or cochlear implants.” For example, he says, “The cochlear implant is limited in various ways — it can do some things and not others. What’s the best way to set up that cochlear implant to enable you to mediate behaviors? You can, in principle, use the models to tell you that.”

Physicists measure quantum geometry for the first time

Mon, 01/13/2025 - 3:55pm

MIT physicists and colleagues have for the first time measured the geometry, or shape, of electrons in solids at the quantum level. Scientists have long known how to measure the energies and velocities of electrons in crystalline materials, but until now, those systems’ quantum geometry could only be inferred theoretically, or sometimes not at all.

The work, reported in the Nov. 25 issue of Nature Physics, “opens new avenues for understanding and manipulating the quantum properties of materials,” says Riccardo Comin, MIT’s Class of 1947 Career Development Associate Professor of Physics and leader of the work.

“We’ve essentially developed a blueprint for obtaining some completely new information that couldn’t be obtained before,” says Comin, who is also affiliated with MIT’s Materials Research Laboratory and the Research Laboratory of Electronics.

The work could be applied to “any kind of quantum material, not just the one we worked with,” says Mingu Kang PhD ’23, first author of the Nature Physics paper who conducted the work as an MIT graduate student and who is now a Kavli Postdoctoral Fellow at Cornell University’s Laboratory of Atomic and Solid State Physics. 

Kang was also invited to write an accompanying research briefing on the work, including its implications, for the Nov. 25 issue of Nature Physics.

A weird world

In the weird world of quantum physics, an electron can be described as both a point in space and a wave-like shape. At the heart of the current work is a fundamental object known as a wave function that describes the latter. “You can think of it like a surface in a three-dimensional space,” says Comin.

There are different types of wave functions, ranging from the simple to the complex. Think of a ball. That is analogous to a simple, or trivial, wave function. Now picture a Mobius strip, the kind of structure explored by M.C. Escher in his art. That’s analogous to a complex, or nontrivial, wave function. And the quantum world is filled with materials composed of the latter.

But until now, the quantum geometry of wave functions could only be inferred theoretically, or sometimes not at all. And the property is becoming more and more important as physicists find more and more quantum materials with potential applications in everything from quantum computers to advanced electronic and magnetic devices.

The MIT team solved the problem using a technique called angle-resolved photoemission spectroscopy, or ARPES. Comin, Kang, and some of the same colleagues had used the technique in other research. For example, in 2022 they reported discovering the “secret sauce” behind exotic properties of a new quantum material known as a kagome metal. That work, too, appeared in Nature Physics. In the current work, the team adapted ARPES to measure the quantum geometry of a kagome metal.

Close collaborations

Kang stresses that the new ability to measure the quantum geometry of materials “comes from the close cooperation between theorists and experimentalists.”

The Covid-19 pandemic, too, had an impact. Kang, who is from South Korea, was based in that country during the pandemic. “That facilitated a collaboration with theorists in South Korea,” says Kang, an experimentalist.

The pandemic also led to an unusual opportunity for Comin. He traveled to Italy to help run the ARPES experiments at the Italian Light Source Elettra, a national laboratory. The lab was closed during the pandemic, but was starting to reopen when Comin arrived. He found himself alone, however, when Kang tested positive for Covid and couldn’t join him. So he unexpectedly ran the experiments himself, with the support of local scientists. “As a professor, I lead projects, but students and postdocs actually carry out the work. So this is basically the last study where I actually contributed to the experiments themselves,” he says with a smile.

In addition to Kang and Comin, additional authors of the Nature Physics paper are Sunje Kim of Seoul National University (Kim is a co-first author with Kang); Paul M. Neves, a graduate student in the MIT Department of Physics; Linda Ye of Stanford University; Junseo Jung of Seoul National University; Denny Puntel of the University of Trieste; Federico Mazzola of Consiglio Nazionale delle Ricerche and Ca’ Foscari University of Venice; Shiang Fang of Google DeepMind; Chris Jozwiak, Aaron Bostwick, and Eli Rotenberg of Lawrence Berkeley National Laboratory; Jun Fuji and Ivana Vobornik of Consiglio Nazionale delle Ricerche; Jae-Hoon Park of Max Planck POSTECH/Korea Research Initiative and Pohang University of Science and Technology; Joseph G. Checkelsky, associate professor of physics at MIT; and Bohm-Jung Yang of Seoul National University, who co-led the research project with Comin.

This work was funded by the U.S. Air Force Office of Scientific Research, the U.S. National Science Foundation, the Gordon and Betty Moore Foundation, the National Research Foundation of Korea, the Samsung Science and Technology Foundation, the U.S. Army Research Office, the U.S. Department of Energy Office of Science, the Heising-Simons Physics Research Fellow Program, the Tsinghua Education Foundation, the NFFA-MUR Italy Progetti Internazionali facility, the Samsung Foundation of Culture, and the Kavli Institute at Cornell.

Q&A: The climate impact of generative AI

Mon, 01/13/2025 - 3:45pm

Vijay Gadepally, a senior staff member at MIT Lincoln Laboratory, leads a number of projects at the Lincoln Laboratory Supercomputing Center (LLSC) to make computing platforms, and the artificial intelligence systems that run on them, more efficient. Here, Gadepally discusses the increasing use of generative AI in everyday tools, its hidden environmental impact, and some of the ways that Lincoln Laboratory and the greater AI community can reduce emissions for a greener future.

Q: What trends are you seeing in terms of how generative AI is being used in computing?

A: Generative AI uses machine learning (ML) to create new content, like images and text, based on data that is inputted into the ML system. At the LLSC we design and build some of the largest academic computing platforms in the world, and over the past few years we've seen an explosion in the number of projects that need access to high-performance computing for generative AI. We're also seeing how generative AI is changing all sorts of fields and domains — for example, ChatGPT is already influencing the classroom and the workplace faster than regulations can seem to keep up.

We can imagine all sorts of uses for generative AI within the next decade or so, like powering highly capable virtual assistants, developing new drugs and materials, and even improving our understanding of basic science. We can't predict everything that generative AI will be used for, but I can certainly say that with more and more complex algorithms, their compute, energy, and climate impact will continue to grow very quickly.

Q: What strategies is the LLSC using to mitigate this climate impact?

A: We're always looking for ways to make computing more efficient, as doing so helps our data center make the most of its resources and allows our scientific colleagues to push their fields forward in as efficient a manner as possible.

As one example, we've been reducing the amount of power our hardware consumes by making simple changes, similar to dimming or turning off lights when you leave a room. In one experiment, we reduced the energy consumption of a group of graphics processing units by 20 percent to 30 percent, with minimal impact on their performance, by enforcing a power cap. This technique also lowered the hardware operating temperatures, making the GPUs easier to cool and longer lasting.

Another strategy is changing our behavior to be more climate-aware. At home, some of us might choose to use renewable energy sources or intelligent scheduling. We are using similar techniques at the LLSC — such as training AI models when temperatures are cooler, or when local grid energy demand is low.

We also realized that a lot of the energy spent on computing is often wasted, like how a water leak increases your bill but without any benefits to your home. We developed some new techniques that allow us to monitor computing workloads as they are running and then terminate those that are unlikely to yield good results. Surprisingly, in a number of cases we found that the majority of computations could be terminated early without compromising the end result.

Q: What's an example of a project you've done that reduces the energy output of a generative AI program?

A: We recently built a climate-aware computer vision tool. Computer vision is a domain that's focused on applying AI to images; so, differentiating between cats and dogs in an image, correctly labeling objects within an image, or looking for components of interest within an image.

In our tool, we included real-time carbon telemetry, which produces information about how much carbon is being emitted by our local grid as a model is running. Depending on this information, our system will automatically switch to a more energy-efficient version of the model, which typically has fewer parameters, in times of high carbon intensity, or a much higher-fidelity version of the model in times of low carbon intensity.
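The switching logic can be sketched as a simple threshold rule; the model names and the carbon-intensity cutoff below are hypothetical stand-ins for the real-time telemetry the tool uses.

```python
# Threshold rule for carbon-aware model selection. The cutoff (grams of CO2
# per kWh) and the model names are hypothetical placeholders.

def choose_model(grid_carbon_g_per_kwh: float, threshold: float = 400.0) -> str:
    """Use an efficient model when the grid is carbon-intensive,
    a higher-fidelity one when it is clean."""
    if grid_carbon_g_per_kwh >= threshold:
        return "small-efficient-model"
    return "large-high-fidelity-model"

print(choose_model(550.0))  # small-efficient-model
print(choose_model(120.0))  # large-high-fidelity-model
```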

By doing this, we saw a nearly 80 percent reduction in carbon emissions over a one- to two-day period. We recently extended this idea to other generative AI tasks such as text summarization and found the same results. Interestingly, the performance sometimes improved after using our technique!

Q: What can we do as consumers of generative AI to help mitigate its climate impact?

A: As consumers, we can ask our AI providers to offer greater transparency. For example, on Google Flights, I can see a variety of options that indicate a specific flight's carbon footprint. We should be getting similar kinds of measurements from generative AI tools so that we can make a conscious decision on which product or platform to use based on our priorities.

We can also make an effort to be more educated on generative AI emissions in general. Many of us are familiar with vehicle emissions, and it can help to talk about generative AI emissions in comparative terms. People may be surprised to know, for example, that one image-generation task is roughly equivalent to driving four miles in a gas car, or that it takes the same amount of energy to charge an electric car as it does to generate about 1,500 text summarizations.
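Turning that driving analogy into arithmetic, a tiny sketch (the four-miles-per-image figure is the article's comparison, not a measured constant):

```python
# The article's driving analogy as arithmetic: one image-generation task is
# likened to about four miles of driving in a gas car.

MILES_PER_IMAGE = 4

def images_to_equivalent_miles(n_images: int) -> int:
    return n_images * MILES_PER_IMAGE

print(images_to_equivalent_miles(25))  # 100
```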

There are many cases where customers would be happy to make a trade-off if they knew the trade-off's impact.

Q: What do you see for the future?

A: Mitigating the climate impact of generative AI is one of those problems that people all over the world are working on, and with a similar goal. We're doing a lot of work here at Lincoln Laboratory, but it's only scratching the surface. In the long term, data centers, AI developers, and energy grids will need to work together to provide "energy audits" to uncover other unique ways that we can improve computing efficiencies. We need more partnerships and more collaboration in order to forge ahead.

If you're interested in learning more, or collaborating with Lincoln Laboratory on these efforts, please contact Vijay Gadepally.

X-ray flashes from a nearby supermassive black hole accelerate mysteriously

Mon, 01/13/2025 - 10:15am

One supermassive black hole has kept astronomers glued to their scopes for the last several years. First came a surprise disappearance, and now, a precarious spinning act.

The black hole in question is 1ES 1927+654, which is about as massive as a million suns and sits in a galaxy that is 270 million light-years away. In 2018, astronomers at MIT and elsewhere observed that the black hole’s corona — a cloud of whirling, white-hot plasma — suddenly disappeared, before reassembling months later. The brief though dramatic shut-off was a first in black hole astronomy.

Members of the MIT team have now caught the same black hole exhibiting more unprecedented behavior.

The astronomers have detected flashes of X-rays coming from the black hole at a steadily increasing clip. Over a period of two years, the flashes, at millihertz frequencies, increased from every 18 minutes to every seven minutes. This dramatic speed-up in X-rays has not been seen from a black hole until now.

The researchers explored a number of scenarios for what might explain the flashes. They believe the most likely culprit is a spinning white dwarf — an extremely compact core of a dead star that is orbiting around the black hole and getting precariously closer to its event horizon, the boundary beyond which nothing can escape the black hole’s gravitational pull. If this is the case, the white dwarf must be pulling off an impressive balancing act, as it could be coming right up to the black hole’s edge without actually falling in.

“This would be the closest thing we know of around any black hole,” says Megan Masterson, a graduate student in physics at MIT, who co-led the discovery. “This tells us that objects like white dwarfs may be able to live very close to an event horizon for a relatively extended period of time.”

The researchers present their findings today at the 245th meeting of the American Astronomical Society.

If a white dwarf is at the root of the black hole’s mysterious flashing, it would also give off gravitational waves, in a range that would be detectable by next-generation observatories such as the European Space Agency's Laser Interferometer Space Antenna (LISA).

“These new detectors are designed to detect oscillations on the scale of minutes, so this black hole system is in that sweet spot,” says co-author Erin Kara, associate professor of physics at MIT.

The study’s other co-authors include MIT Kavli members Christos Panagiotou, Joheen Chakraborty, Kevin Burdge, Riccardo Arcodia, Ronald Remillard, and Jingyi Wang, along with collaborators from multiple other institutions.

Nothing normal

Kara and Masterson were part of the team that observed 1ES 1927+654 in 2018, as the black hole’s corona went dark, then slowly rebuilt itself over time. For a while, the newly reformed corona — a cloud of highly energetic plasma and X-rays — was the brightest X-ray-emitting object in the sky.

“It was still extremely bright, though it wasn’t doing anything new for a couple years and was kind of gurgling along. But we felt we had to keep monitoring it because it was so beautiful,” Kara says. “Then we noticed something that has never really been seen before.”

In 2022, the team looked through observations of the black hole taken by the European Space Agency’s XMM-Newton, a space-based observatory that detects and measures X-ray emissions from black holes, neutron stars, galactic clusters, and other extreme cosmic sources. They noticed that X-rays from the black hole appeared to pulse with increasing frequency. Such “quasi-periodic oscillations” have only been observed in a handful of other supermassive black holes, where X-ray flashes appear with regular frequency.

In the case of 1ES 1927+654, the flickering seemed to steadily ramp up, from every 18 minutes to every seven minutes over the span of two years.
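Converting those flash periods into the millihertz frequencies mentioned earlier is a one-line calculation, since frequency is the reciprocal of period:

```python
# Period-to-frequency conversion for the flash rates above:
# f (mHz) = 1000 / period (s).

def period_minutes_to_mhz(minutes: float) -> float:
    return 1000.0 / (minutes * 60.0)

print(round(period_minutes_to_mhz(18), 2))  # 0.93 mHz
print(round(period_minutes_to_mhz(7), 2))   # 2.38 mHz
```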

“We’ve never seen this dramatic variability in the rate at which it’s flashing,” Masterson says. “This looked absolutely nothing like a normal supermassive black hole.”

The fact that the flashing was detected in the X-ray band points to the strong possibility that the source is somewhere very close to the black hole. The innermost regions of a black hole are extremely high-energy environments, where X-rays are produced by fast-moving, hot plasma. X-rays are less likely to be seen at farther distances, where gas can circle more slowly in an accretion disk. The cooler environment of the disk can emit optical and ultraviolet light, but rarely gives off X-rays.

“Seeing something in the X-rays is already telling you you’re pretty close to the black hole,” Kara says. “When you see variability on the timescale of minutes, that’s close to the event horizon, and the first thing your mind goes to is circular motion, and whether something could be orbiting around the black hole.”

X-ray kick-up

Whatever was producing the X-ray flashes was doing so at an extremely close distance from the black hole, which the researchers estimate to be within a few million miles of the event horizon.

Masterson and Kara explored models for various astrophysical phenomena that could explain the X-ray patterns that they observed, including a possibility relating to the black hole’s corona.

“One idea is that this corona is oscillating, maybe blobbing back and forth, and if it starts to shrink, those oscillations get faster as the scales get smaller,” Masterson says. “But we’re in the very early stages of understanding coronal oscillations.”

Another promising scenario, and one that scientists have a better grasp on in terms of the physics involved, has to do with a daredevil of a white dwarf. According to their modeling, the researchers estimate the white dwarf could have been about one-tenth the mass of the sun. In contrast, the supermassive black hole itself is on the order of 1 million solar masses.

When any object gets this close to a supermassive black hole, gravitational waves are expected to be emitted, dragging the object closer to the black hole. As it circles closer, the white dwarf moves at a faster rate, which can explain the increasing frequency of X-ray oscillations that the team observed.

The white dwarf is practically at the precipice of no return and is estimated to be just a few million miles from the event horizon. However, the researchers predict that the star will not fall in. While the black hole’s gravity may pull the white dwarf inward, the star is also shedding part of its outer layer into the black hole. This shedding acts as a small kick-back, such that the white dwarf — an incredibly compact object itself — can resist crossing the black hole’s boundary.

“Because white dwarfs are small and compact, they’re very difficult to shred apart, so they can be very close to a black hole,” Kara says. “If this scenario is correct, this white dwarf is right at the turnaround point, and we may see it get further away.”

The team plans to continue observing the system, with existing and future telescopes, to better understand the extreme physics at work in a black hole’s innermost environments. They are particularly excited to study the system once the space-based gravitational-wave detector LISA launches — currently planned for the mid-2030s — as the gravitational waves that the system should give off will be in a sweet spot that LISA can clearly detect.

“The one thing I’ve learned with this source is to never stop looking at it because it will probably teach us something new,” Masterson says. “The next step is just to keep our eyes open.”

Study shows how households can cut energy costs

Mon, 01/13/2025 - 5:00am

Many people around the globe are living in energy poverty, meaning they spend at least 8 percent of their annual household income on energy. Addressing this problem is not simple, but an experiment by MIT researchers shows that giving people better data about their energy use, plus some coaching on the subject, can lead them to substantially reduce their consumption and costs.

The experiment, based in Amsterdam, resulted in households cutting their energy expenses in half, on aggregate — a savings big enough to move three-quarters of them out of energy poverty.

“Our energy coaching project as a whole showed a 75 percent success rate at alleviating energy poverty,” says Joseph Llewellyn, a researcher with MIT’s Senseable City Lab and co-author of a newly published paper detailing the experiment’s results.

“Energy poverty afflicts families all over the world. With empirical evidence on which policies work, governments could focus their efforts more effectively,” says Fábio Duarte, associate director of MIT’s Senseable City Lab, and another co-author of the paper.

The paper, “Assessing the impact of energy coaching with smart technology interventions to alleviate energy poverty,” appears today in Nature Scientific Reports.

The authors are Llewellyn, who is also a researcher at the Amsterdam Institute for Advanced Metropolitan Solutions (AMS) and the KTH Royal Institute of Technology in Stockholm; Titus Venverloo, a research fellow at the MIT Senseable City Lab and AMS; Fábio Duarte, who is also a principal researcher at MIT’s Senseable City Lab; Carlo Ratti, director of the Senseable City Lab; Cecilia Katzeff; Fredrik Johansson; and Daniel Pargman of the KTH Royal Institute of Technology.

The researchers developed the study after engaging with city officials in Amsterdam. In the Netherlands, about 550,000 households, or 7 percent of the population, are considered to be in energy poverty; in the European Union, that figure is about 50 million. In the U.S., separate research has shown that about three in 10 households report trouble paying energy bills.

To conduct the experiment, the researchers ran two versions of an energy coaching intervention. In one version, 67 households received one report on their energy usage, along with coaching about how to increase energy efficiency. In the other version, 50 households received those things as well as a smart device giving them real-time updates on their energy consumption. (All households also received some modest energy-savings improvements at the outset, such as additional insulation.)

Across the two groups, homes typically reduced monthly consumption of electricity by 33 percent and gas by 42 percent. They lowered their bills by 53 percent, on aggregate, and the percentage of income they spent on energy dropped from 10.1 percent to 5.3 percent.
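The arithmetic behind these figures is easy to check against the study's 8 percent energy-poverty threshold. In the sketch below the household income is a hypothetical placeholder; the threshold and the 10.1 percent and 5.3 percent shares are the figures reported above:

```python
ENERGY_POVERTY_THRESHOLD = 0.08  # spending >= 8% of income on energy

def energy_share(annual_income, annual_energy_cost):
    """Fraction of household income spent on energy."""
    return annual_energy_cost / annual_income

income = 30_000.0        # hypothetical annual household income (euros)
before = income * 0.101  # 10.1% of income went to energy before coaching
after = income * 0.053   # 5.3% after coaching

assert energy_share(income, before) >= ENERGY_POVERTY_THRESHOLD  # in energy poverty
assert energy_share(income, after) < ENERGY_POVERTY_THRESHOLD    # no longer
print(f"this household's bill fell by {(before - after) / before:.0%}")
```

Note that for a single household with fixed income, a drop from 10.1 percent to 5.3 percent is a roughly 48 percent bill reduction; the 53 percent figure in the article is an aggregate across households with varying incomes and bills, so the two numbers need not match exactly.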

What were these households doing differently? Some of the biggest behavioral changes included things such as only heating rooms that were in use and unplugging devices not being used. Both of those changes save energy, but their benefits were not always understood by residents before they received energy coaching.

“The range of energy literacy was quite wide from one home to the next,” Llewellyn says. “And when I went somewhere as an energy coach, it was never to moralize about energy use. I never said, ‘Oh, you’re using way too much.’ It was always working on it with the households, depending on what people need for their homes.”

Intriguingly, the homes receiving the small devices that displayed real-time energy data only tended to use them for three or four weeks following a coaching visit. After that, people seemed to lose interest in very frequent monitoring of their energy use. And yet, a few weeks of consulting the devices tended to be long enough to get people to change their habits in a lasting way.

“Our research shows that smart devices need to be accompanied by a close understanding of what drives families to change their behaviors,” Venverloo says.

As the researchers acknowledge, working with consumers to reduce their energy consumption is just one way to help people escape energy poverty. Other “structural” factors that can help include lower energy prices and more energy-efficient buildings.

On the latter note, the current paper has given rise to a new experiment Llewellyn is developing with Amsterdam officials, to examine the benefits of retrofitting residential buildings to lower energy costs. In that case, local policymakers are trying to work out how to fund the retrofitting in such a way that landlords do not simply pass those costs on to tenants.

“We don’t want a household to save money on their energy bills if it also means the rent increases, because then we’ve just displaced expenses from one item to another,” Llewellyn says.

Households can also invest in products like better insulation themselves, for windows or heating components, although for low-income households, finding the money to pay for such things may not be trivial. That is especially the case, Llewellyn suggests, because energy costs can seem “invisible,” and a lower priority, compared with feeding and clothing a family.

“It’s a big upfront cost for a household that does not have 100 Euros to spend,” Llewellyn says. Compared to paying for other necessities, he notes, “Energy is often the thing that tends to fall last on their list. Energy is always going to be this invisible thing that hides behind the walls, and it’s not easy to change that.” 

Designing tiny filters to solve big problems

Sun, 01/12/2025 - 12:00am

For many industrial processes, the typical way to separate gases, liquids, or ions is with heat, using slight differences in boiling points to purify mixtures. These thermal processes account for roughly 10 percent of the energy use in the United States.

MIT chemical engineer Zachary Smith wants to reduce costs and carbon footprints by replacing these energy-intensive processes with highly efficient filters that can separate gases, liquids, and ions at room temperature.

In his lab at MIT, Smith is designing membranes with tiny pores that can filter tiny molecules based on their size. These membranes could be useful for purifying biogas, capturing carbon dioxide from power plant emissions, or generating hydrogen fuel.

“We’re taking materials that have unique capabilities for separating molecules and ions with precision, and applying them to applications where the current processes are not efficient, and where there’s an enormous carbon footprint,” says Smith, an associate professor of chemical engineering.

Smith and several former students have founded a company called Osmoses that is working toward developing these materials for large-scale use in gas purification. Removing the need for high temperatures in these widespread industrial processes could have a significant impact on energy consumption, potentially reducing it by as much as 90 percent.

“I would love to see a world where we could eliminate thermal separations, and where heat is no longer a problem in creating the things that we need and producing the energy that we need,” Smith says.

Hooked on research

As a high school student, Smith was drawn to engineering but didn’t have many engineering role models. Both of his parents were physicians, and they always encouraged him to work hard in school.

“I grew up without knowing many engineers, and certainly no chemical engineers. But I knew that I really liked seeing how the world worked. I was always fascinated by chemistry and seeing how mathematics helped to explain this area of science,” recalls Smith, who grew up near Harrisburg, Pennsylvania. “Chemical engineering seemed to have all those things built into it, but I really had no idea what it was.”

At Penn State University, Smith worked with a professor named Henry “Hank” Foley on a research project designing carbon-based materials to create a “molecular sieve” for gas separation. Through a time-consuming and iterative layering process, he created a sieve that could purify oxygen and nitrogen from air.

“I kept adding more and more coatings of a special material that I could subsequently carbonize, and eventually I started to get selectivity. In the end, I had made a membrane that could sieve molecules that only differed by 0.18 angstrom in size,” he says. “I got hooked on research at that point, and that’s what led me to do more things in the area of membranes.”

After graduating from college in 2008, Smith pursued graduate studies in chemical engineering at the University of Texas at Austin. There, he continued developing membranes for gas separation, this time using a different class of materials — polymers. By controlling polymer structure, he was able to create films with pores that filter out specific molecules, such as carbon dioxide or other gases.

“Polymers are a type of material that you can actually form into big devices that can integrate into world-class chemical plants. So, it was exciting to see that there was a scalable class of materials that could have a real impact on addressing questions related to CO2 and other energy-efficient separations,” Smith says.

After finishing his PhD, he decided he wanted to learn more chemistry, which led him to a postdoctoral fellowship at the University of California at Berkeley.

“I wanted to learn how to make my own molecules and materials. I wanted to run my own reactions and do it in a more systematic way,” he says.

At Berkeley, he learned how to make compounds called metal-organic frameworks (MOFs) — cage-like molecules that have potential applications in gas separation and many other fields. He also realized that while he enjoyed chemistry, he was definitely a chemical engineer at heart.

“I learned a ton when I was there, but I also learned a lot about myself,” he says. “As much as I love chemistry, work with chemists, and advise chemists in my own group, I’m definitely a chemical engineer, really focused on the process and application.”

Solving global problems

While interviewing for faculty jobs, Smith found himself drawn to MIT because of the mindset of the people he met.

“I began to realize not only how talented the faculty and the students were, but the way they thought was very different than other places I had been,” he says. “It wasn’t just about doing something that would move their field a little bit forward. They were actually creating new fields. There was something inspirational about the type of people that ended up at MIT who wanted to solve global problems.”

In his lab at MIT, Smith is now tackling some of those global problems, including water purification, critical element recovery, renewable energy, battery development, and carbon sequestration.

In a close collaboration with Yan Xia, a professor at Stanford University, Smith recently developed gas separation membranes that incorporate a novel type of polymer known as “ladder polymers,” which are currently being scaled for deployment at his startup. Historically, using polymers for gas separation has been limited by a tradeoff between permeability and selectivity — that is, membranes that permit a faster flow of gases through the membrane tend to be less selective, allowing impurities to get through.

Using ladder polymers, which consist of double strands connected by rung-like bonds, the researchers were able to create gas separation membranes that are both highly permeable and very selective. The boost in permeability — a 100- to 1,000-fold improvement over earlier materials — could enable membranes to replace some of the high-energy techniques now used to separate gases, Smith says.

“This allows you to envision large-scale industrial problems solved with miniaturized devices,” he says. “If you can really shrink down the system, then the solutions we’re developing in the lab could easily be applied to big industries like the chemicals industry.”

These developments and others have been part of a number of advancements made by collaborators, students, postdocs, and researchers who are part of Smith’s team.

“I have a great research team of talented and hard-working students and postdocs, and I get to teach on topics that have been instrumental in my own professional career,” Smith says. “MIT has been a playground to explore and learn new things. I am excited for what my team will discover next, and grateful for an opportunity to help solve many important global problems.”

Study suggests how the brain, with sleep, learns meaningful maps of spaces

Fri, 01/10/2025 - 4:50pm

On the first day of your vacation in a new city, your explorations expose you to innumerable individual places. While the memories of these spots (like a beautiful garden on a quiet side street) feel immediately indelible, it might be days before you have enough intuition about the neighborhood to direct a newer tourist to that same site and then maybe to the café you discovered nearby. A new study of mice by MIT neuroscientists at The Picower Institute for Learning and Memory provides new evidence for how the brain forms cohesive cognitive maps of whole spaces and highlights the critical importance of sleep for the process.

Scientists have known for decades that the brain devotes neurons in a region called the hippocampus to remembering specific locations. So-called “place cells” reliably activate when an animal is at the location the neuron is tuned to remember. But more useful than having markers of specific spaces is having a mental model of how they all relate in a continuous overall geography. Though such “cognitive maps” were formally theorized in 1948, neuroscientists have remained unsure of how the brain constructs them. The new study in the December edition of Cell Reports finds that the capability may depend upon subtle but meaningful changes over days in the activity of cells that are only weakly attuned to individual locations, but that increase the robustness and refinement of the hippocampus’s encoding of the whole space. With sleep, the study’s analyses indicate, these “weakly spatial” cells increasingly enrich neural network activity in the hippocampus to link together these places into a cognitive map.

“On Day 1, the brain doesn’t represent the space very well,” says lead author Wei Guo, a research scientist in the lab of senior author Matthew Wilson, the Sherman Fairchild Professor in The Picower Institute and MIT’s departments of Biology and Brain and Cognitive Sciences. “Neurons represent individual locations, but together they don’t form a map. But on Day 5 they form a map. If you want a map, you need all these neurons to work together in a coordinated ensemble.”

Mice mapping mazes

To conduct the study, Guo and Wilson, along with labmates Jie “Jack” Zhang and Jonathan Newman, introduced mice to simple mazes of varying shapes and let them explore them freely for about 30 minutes a day for several days. Importantly, the mice were not directed to learn anything specific through the offer of any rewards. They just wandered. Previous studies have shown that mice naturally demonstrate “latent learning” of spaces from this kind of unrewarded experience after several days.

To understand how latent learning takes hold, Guo and his colleagues visually monitored hundreds of neurons in the CA1 area of the hippocampus by engineering cells to flash when a buildup of calcium ions made them electrically active. They not only recorded the neurons’ flashes when the mice were actively exploring, but also while they were sleeping. Wilson’s lab has shown that animals “replay” their previous journeys during sleep, essentially refining their memories by dreaming about their experiences.

Analysis of the recordings showed that the activity of the place cells developed immediately and remained strong and unchanged over several days of exploration. But this activity alone wouldn’t explain how latent learning or a cognitive map evolves over several days. So unlike in many other studies where scientists focus solely on the strong and clear activity of place cells, Guo extended his analysis to the more subtle and mysterious activity of cells that were not so strongly spatially tuned. 

Using an emerging technique called “manifold learning,” he was able to discern that many of the “weakly spatial” cells gradually correlated their activity not with locations, but with activity patterns among other neurons in the network. As this was happening, Guo’s analyses showed, the network encoded a cognitive map of the maze that increasingly resembled the literal, physical space.

“Although not responding to specific locations like strongly spatial cells, weakly spatial cells specialize in responding to ‘mental locations,’ i.e., specific ensemble firing patterns of other cells,” the study authors wrote. “If a weakly spatial cell’s mental field encompasses two subsets of strongly spatial cells that encode distinct locations, this weakly spatial cell can serve as a bridge between these locations.”

In other words, the activity of the weakly spatial cells likely stitches together the individual locations represented by the place cells into a mental map.

The need for sleep

Studies by Wilson’s lab and many others have shown that memories are consolidated, refined, and processed by neural activity, such as replay, that occurs during sleep and rest. Guo and Wilson’s team therefore sought to test whether sleep was necessary for the contribution of weakly spatial cells to latent learning of cognitive maps.

To do this they let some mice explore a new maze twice during the same day with a three-hour siesta in between. Some of the mice were allowed to sleep but some were not. The ones that did showed a significant refinement of their mental map, but the ones that weren’t allowed to sleep showed no such improvement. Not only did the network encoding of the map improve, but measures of the tuning of individual cells showed that sleep helped cells become better attuned both to places and to patterns of network activity, the so-called “mental places” or “fields.”

Mental map meaning

The “cognitive maps” the mice encoded over several days were not literal, precise maps of the mazes, Guo notes. Instead they were more like schematics. Their value is that they provide the brain with a topology that can be explored mentally, without having to be in the physical space. For instance, once you’ve formed your cognitive map of the neighborhood around your hotel, you can plan the next morning’s excursion (e.g., you could imagine grabbing a croissant at the bakery you observed a few blocks west and then picture eating it on one of those benches you noticed in the park along the river).

Indeed, Wilson hypothesized that the weakly spatial cells’ activity may be overlaying salient non-spatial information that brings additional meaning to the maps (i.e., the idea of a bakery is not spatial, even if it’s closely linked to a specific location). The study, however, included no landmarks within the mazes and did not test any specific behaviors among the mice. But now that the study has identified that weakly spatial cells contribute meaningfully to mapping, Wilson said future studies can investigate what kind of information they may be incorporating into the animals’ sense of their environments. We seem to intuitively regard the spaces we inhabit as more than just sets of discrete locations.

“In this study we focused on animals behaving naturally and demonstrated that during freely exploratory behavior and subsequent sleep, in the absence of reinforcement, substantial neural plastic changes at the ensemble level still occur,” the authors concluded. “This form of implicit and unsupervised learning constitutes a crucial facet of human learning and intelligence, warranting further in-depth investigations.”

The Freedom Together Foundation, The Picower Institute, and the National Institutes of Health funded the study.

Q&A: Examining American attitudes on global climate policies

Fri, 01/10/2025 - 12:15pm

Does the United States have a “moral responsibility” for providing aid to poor nations — which have a significantly smaller carbon footprint and face catastrophic climate events at a much higher rate than wealthy countries?

A study published Dec. 11 in Climatic Change explores U.S. public opinion on global climate policies in light of the nation’s historic role as a leading contributor of carbon emissions. The randomized, experimental survey specifically investigates American attitudes toward such a moral responsibility.

The work was led by MIT Professor Evan Lieberman, the Total Chair on Contemporary African Politics and director of the MIT Center for International Studies, and Volha Charnysh, the Ford Career Development Associate Professor of Political Science, and was co-authored with MIT political science PhD student Jared Kalow and University of Pennsylvania postdoc Erin Walk PhD ’24. Here, Lieberman describes the team's research and insights, and offers recommendations that could result in more effective climate advocacy.

Q: What are the key findings — and any surprises — of your recent work on climate attitudes among the U.S. population?

A: A big question at the COP29 Climate talks in Baku, Azerbaijan was: Who will pay the trillions of dollars needed to help lower-income countries adapt to climate change? During past meetings, global leaders have come to an increasing consensus that the wealthiest countries should pay, but there has been little follow-through on commitments. In countries like the United States, popular opinion about such policies can weigh heavily on politicians' minds, as citizens focus on their own challenges at home.

Prime Minister Gaston Browne of Antigua and Barbuda is one of many who view such transfers as a matter of moral responsibility, explaining that many rich countries see climate finance as “a random act of charity ... not recognizing that they have a moral obligation to provide funding, especially the historical emitters and even those who currently have large emissions.”

In our study, we set out to measure American attitudes towards climate-related foreign aid, and explicitly to test the impact of this particular moral responsibility narrative. We did this on an experimental basis, so subjects were randomly assigned to receive different messages.

One message emphasized what we call a “climate justice” frame, and it argued that Americans should contribute to helping poor countries because of the United States’ disproportionate role in the emissions of greenhouse gasses that have led to global warming. That message had a positive impact on the extent to which citizens supported the use of foreign aid for climate adaptation in poor countries. However, when we looked at who was actually moved by the message, we found that the effect was larger and statistically significant only among Democrats, but not among Republicans.

We were surprised that a message emphasizing solidarity, the idea that “we are all in this together,” had no overall effect on the attitudes of either Democrats or Republicans.

Q: What are your recommendations toward addressing the attitudes on global climate policies within the U.S.?

A: First, given limited budgets and attention for communications campaigns, our research certainly suggests that emphasizing a bit of blaming and shaming is more powerful than more diffuse messages of shared responsibility.

But our research also emphasized how critically important it is to find new ways to communicate with Republicans about climate change and about foreign aid. Republicans were overwhelmingly less supportive of climate aid and yet even from that low baseline, a message that moved Democrats had a much more mixed reception among Republicans. Researchers and those working on the front lines of climate communications need to do more to better understand Republican perspectives. Younger Republicans, for example, might be more movable on key climate policies.

Q: With an incoming Trump administration, what are some of the specific hurdles and/or opportunities we face in garnering U.S. public support for international climate negotiations?

A: Not only did Trump demonstrate his disdain for international action on climate change by withdrawing from the Paris agreement during his first term in office, but he has indicated his intention to double down on such strategies in his second term. And the idea that he would support assistance for the world’s poorest countries harmed by climate change? This seems unlikely. Because we find Republican public opinion so firmly in line with these perspectives, frankly, it is hard to be optimistic.

Those Americans concerned with the effects of climate change may need to look to state-level, non-government, corporate, and more global organizations to support climate justice efforts.

Q: Are there any other takeaways you’d like to share?

A: Those working in the climate change area may need to rethink how we talk and message about the challenges the world faces. Right now, almost anything that sounds like “climate change” is likely to be rejected by Republican leaders and large segments of American society. Our approach of experimenting with different types of messages is a relatively low-cost strategy for identifying more promising strategies, targeted at Americans and at citizens in other wealthy countries.

But our study, in line with other work, also demonstrates that partisanship — identifying as a Republican or Democrat — is by far the strongest predictor of attitudes toward climate aid. While climate justice messaging can move attitudes slightly, the effects are still modest relative to the contributions of party identification itself. Just as Republican party elites were once persuaded to take leadership in the global fight against HIV and AIDS, a similar challenge lies ahead for climate aid.

Minimizing the carbon footprint of bridges and other structures

Fri, 01/10/2025 - 12:00am

Awed as a young child by the majesty of the Golden Gate Bridge in San Francisco, civil engineer and MIT Morningside Academy for Design (MAD) Fellow Zane Schemmer has retained his fascination with bridges: what they look like, why they work, and how they’re designed and built.

He weighed the choice between architecture and engineering when heading off to college, but, motivated by the why and how of structural engineering, selected the latter. Now he incorporates design as an iterative process in the writing of algorithms that perfectly balance the forces involved in discrete portions of a structure to create an overall design that optimizes function, minimizes carbon footprint, and still produces a manufacturable result.

While this may sound like an obvious goal in structural design, it’s not. It’s new. It’s a more holistic way of looking at the design process that can optimize even down to the materials, angles, and number of elements in the nodes or joints that connect the larger components of a building, bridge, tower, etc.

According to Schemmer, there hasn’t been much progress on optimizing structural design to minimize embodied carbon, and the work that exists often results in designs that are “too complex to be built in real life,” he says. The embodied carbon of a structure is the total carbon dioxide emissions of its life cycle: from the extraction or manufacture of its materials to their transport and use and through the demolition of the structure and disposal of the materials. Schemmer, who works with Josephine V. Carstensen, the Gilbert W. Winslow Career Development Associate Professor of Civil and Environmental Engineering at MIT, is focusing on the portion of that cycle that runs through construction.

In September, at the IASS 2024 symposium, “Redefining the Art of Structural Design,” in Zurich, Schemmer and Carstensen presented their work on Discrete Topology Optimization algorithms that are able to minimize the embodied carbon in a bridge or other structure by up to 20 percent. This comes through materials selection that considers not only a material’s appearance and its ability to get the job done, but also the ease of procurement, its proximity to the building site, and the carbon embodied in its manufacture and transport.

“The real novelty of our algorithm is its ability to consider multiple materials in a highly constrained solution space to produce manufacturable designs with a user-specified force flow,” Schemmer says. “Real-life problems are complex and often have many constraints associated with them. In traditional formulations, it can be difficult to have a long list of complicated constraints. Our goal is to incorporate these constraints to make it easier to take our designs out of the computer and create them in real life.”

Take, for instance, a steel tower, which could be a “super lightweight, efficient design solution,” Schemmer explains. Because steel is so strong, you don’t need as much of it compared to concrete or timber to build a big building. But steel is also very carbon-intensive to produce and transport. Shipping it across the country or especially from a different continent can sharply increase its embodied carbon price tag. Schemmer’s topology optimization will replace some of the steel with timber elements or decrease the amount of steel in other elements to create a hybrid structure that will function effectively and minimize the carbon footprint. “This is why using the same steel in two different parts of the world can lead to two different optimized designs,” he explains.
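The tradeoff Schemmer describes, production carbon versus transport carbon, can be sketched as a simple per-element comparison. All of the numbers below are illustrative placeholders rather than values from the paper, and a real topology optimizer couples such choices across the entire structure instead of deciding one member at a time:

```python
def embodied_carbon_kg(mass_kg, production_kgco2_per_kg,
                       transport_kgco2_per_kg_km, distance_km):
    """Embodied carbon of one element: production plus transport emissions."""
    return mass_kg * (production_kgco2_per_kg
                      + transport_kgco2_per_kg_km * distance_km)

# Hypothetical options for one member carrying the same load:
# steel is stronger, so it needs less mass, but is dirtier to produce.
options = {
    "local steel":    embodied_carbon_kg(100, 2.0, 1e-4, 50),
    "imported steel": embodied_carbon_kg(100, 2.0, 1e-4, 8000),
    "local timber":   embodied_carbon_kg(300, 0.4, 1e-4, 50),
}
best = min(options, key=options.get)
print(best, options)
```

With these placeholder factors, the same steel scores very differently depending on how far it travels, which is the effect Schemmer points to when he says identical steel in two parts of the world can yield two different optimized designs.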

Schemmer, who grew up in the mountains of Utah, earned a BS and MS in civil and environmental engineering from the University of California at Berkeley, where his graduate work focused on seismic design. He describes that education as providing a “very traditional, super-strong engineering background that tackled some of the toughest engineering problems,” along with knowledge of structural engineering’s traditions and current methods.

But at MIT, he says, a lot of the work he sees “looks at removing the constraints of current societal conventions of doing things, and asks how could we do things if it was in a more ideal form; what are we looking at then? Which I think is really cool,” he says. “But I think sometimes too, there’s a jump between the most-perfect version of something and where we are now, that there needs to be a bridge between those two. And I feel like my education helps me see that bridge.”

The bridge he’s referring to is the topology optimization algorithms that make good designs better in terms of decreased global warming potential.

“That’s where the optimization algorithm comes in,” Schemmer says. “In contrast to a standard structure designed in the past, the algorithm can take the same design space and come up with a much more efficient material usage that still meets all the structural requirements, be up to code, and have everything we want from a safety standpoint.”

That’s also where the MAD Design Fellowship comes in. The program provides yearlong fellowships with full financial support to graduate students from all across the Institute who network with each other, with the MAD faculty, and with outside speakers who use design in new ways in a surprising variety of fields. This helps the fellows gain a better understanding of how to use iterative design in their own work.

“Usually people think of their own work like, ‘Oh, I had this background. I’ve been looking at this one way for a very long time.’ And when you look at it from an outside perspective, I think it opens your mind to be like, ‘Oh my God. I never would have thought about doing this that way. Maybe I should try that.’ And then we can move to new ideas, new inspiration for better work,” Schemmer says.

He chose civil and structural engineering over architecture some seven years ago, but says that “100 years ago, I don’t think architecture and structural engineering were two separate professions. I think there was an understanding of how things looked and how things worked, and it was merged together. Maybe from an efficiency standpoint, it’s better to have things done separately. But I think there’s something to be said for having knowledge about how the whole system works, potentially more intermingling between the free-form architectural design and the mathematical design of a civil engineer. Merging it back together, I think, has a lot of benefits.”

Which brings us back to the Golden Gate Bridge, Schemmer’s longtime favorite. You can still hear that excited 3-year-old in his voice when he talks about it.

“It’s so iconic,” he says. “It’s connecting these two spits of land that just rise straight up out of the ocean. There’s this fog that comes in and out a lot of days. It’s a really magical place, from the size of the cable strands and everything. It’s just, ‘Wow.’ People built this over 100 years ago, before the existence of a lot of the computational tools that we have now. So, all the math, everything in the design, was all done by hand and from the mind. Nothing was computerized, which I think is crazy to think about.”

As Schemmer continues work on his doctoral degree at MIT, the MAD fellowship will expose him to many more awe-inspiring ideas from other fields, ideas he can combine with his engineering knowledge to design better ways of building bridges and other structures.

The regions racing to become the “Silicon Valley” of an aging world

Thu, 01/09/2025 - 4:40pm

In 2018, when Inc. Magazine named Boston one of the country’s top places to start a business, it highlighted one significant reason: Boston is an innovation hub for products and services catering toward the aging population. The “longevity economy” represents a massive chunk of economic opportunity: As of 2020, the over-50 market contributed $45 trillion to global GDP, or 34 percent of the total, according to AARP and Economist Impact.

What makes Boston such a good place to do business in aging? One important factor, according to the Inc. story, was MIT — specifically, MIT’s AgeLab, a research organization devoted to creating a high quality of life for the world’s growing aging population.

Inspired by that claim, AgeLab Director Joseph Coughlin, AgeLab science writer and researcher Luke Yoquinto, and The Boston Globe organized a yearlong series of articles to explore what makes Boston such a fertile ground for businesses in the longevity economy — and what might make its soil even richer. The series, titled “The Longevity Hub,” had a big goal in mind: describing what would be necessary to transform Boston into the “Silicon Valley of aging.”

The articles from the Globe series stand as a primer on key issues related to the wants, needs, and economic capabilities of older people, not just in Boston but for any community with an aging population. Importantly, creating a business and research environment conducive to innovation on behalf of older users and customers would create the opportunity to serve national and global aging markets far larger than just Boston or New England.

But that project with the Globe raised a new question for the MIT AgeLab: What communities, Boston aside, were ahead of the curve in their support of aging innovation? More likely than Boston standing as the world’s lone longevity hub, there were doubtless many international communities that could be identified using similar terms. But where were they? And what makes them successful?

Now The MIT Press has published “Longevity Hubs: Regional Innovation for Global Aging,” an edited volume that collects the original articles from The Boston Globe series, as well as a set of new essays. In addition to AgeLab researchers Coughlin, Yoquinto, and Lisa D’Ambrosio, this work includes essays by members of the MIT community including Li-Huei Tsai, director of the Picower Institute for Learning and Memory; the author team of Rafi Segal (associate professor of architecture and urbanism) and Marisa Morán Jahn (senior researcher at MIT Future Urban Collectives); as well as Elise Selinger, MIT’s director of residential renewal and renovation.

These new essays, contributed by an international set of authors, highlight sites around the world that have developed a reputation for innovation in the longevity economy.

The innovative activity described throughout the book may exemplify a phenomenon called clustering: when businesses within a given sector emerge or congregate close to one another geographically. On its face, industrial or innovation clustering is something that ought not to happen, since, when businesses get physically close to one another, rent and congestion costs increase — incentivizing their dispersal. For clustering to occur, then, additional mechanisms must be at play, outweighing these natural costs. One possible explanation, many researchers have theorized, is that clusters tend to occur where useful, tacit knowledge flows among organizations.

In the case of longevity hubs, the editors hypothesize that two sorts of tacit knowledge are being shared. First is the simple awareness that the older market is worth serving. Second is insight into how best to meet its needs — a trickier proposition than many would-be elder-market conquerors realize. An earlier book by Coughlin, “The Longevity Economy” (PublicAffairs, 2017), discusses a long history of failed attempts by companies to design products and services for older adults. Speaking to the longevity economy is not easy, but these international longevity hubs represent successful, ongoing efforts to address the needs of older consumers.   

The book’s opening chapters on the Greater Boston longevity hub encompass a swathe of sectors including biotech, health care, housing, transportation, and financial services. “Although life insurance is perhaps the clearest example of a financial services industry whose interests align with consumer longevity, it is far from the only one,” writes Brooks Tingle, president and CEO of John Hancock, in his entry. “Financial companies — especially those in Boston’s increasingly longevity-aware business community — should dare to think big and join the effort to build a better old age.”

The book’s other contributions range far beyond Boston. They highlight, for example, Louisville, Kentucky, which is “the country’s largest hot spot for businesses specializing in aging care,” writes contributor and Humana CEO Bruce Broussard, in a chapter describing the city’s mix of massive health-care companies and smaller, nimbler startups. In Newcastle, in the U.K., a thriving biomedical industry laid the groundwork for a burst of innovation around the idea of aging as an economic opportunity, with initial funding from the public sector and academic research giving way to business development in the city. In Brazil’s São Paulo, meanwhile, in the absence of public funding from the national government, a grassroots network of academics, companies, and other institutions called Envelhecimento 2.0 is the main driver of aging innovation in the country.

“We are seeing a Cambrian explosion of efforts to provide a high quality of life for the world’s booming aging population,” says Coughlin. “And that explosion includes not just startups and companies, but also different regional economic approaches to taking the longevity dividend of living longer, and transforming it into an opportunity for everyone to live longer, better.”

By 2034, for the first time in history, older adults will outnumber children in the United States. That demographic shift represents an enormous societal challenge, and a grand economic opportunity. Greater Boston stands as a premier global longevity hub, but, as Coughlin and Yoquinto’s volume illustrates, there are potential competitors — and collaborators — popping up left and right. If and when innovation clusters befitting the title of “the Silicon Valley of longevity” do arise, it remains to be seen where they will appear first.

Professor William Thilly, whose research illuminated the effects of mutagens on human cells, dies at 79

Thu, 01/09/2025 - 2:00pm

William Thilly ’67, ScD ’71, a professor in MIT’s Department of Biological Engineering, died Dec. 24 at his home in Winchester, Massachusetts. He was 79.

Thilly, a pioneer in the study of human genetic mutations, had been a member of the MIT faculty since 1972. Throughout his career, he developed novel ways to measure how environmental mutagens affect human cells, creating assays that are now widely used in toxicology and pharmaceutical development.

He also served as a director of MIT’s Center for Environmental Health Sciences and in the 1980s established MIT’s first Superfund research program — an example of his dedication to ensuring that MIT’s research would have a real-world impact, colleagues say.

“He really was a giant in the field,” says Bevin Engelward, a professor of biological engineering at MIT. “He took his scientific understanding and said, ‘Let’s use this as a tool to go after this real-world problem.’ One of the things that Bill really pushed people on was challenging them to ask the question, ‘Does this research matter? Is this going to make a difference in the real world?’”

In a letter to the MIT community today, MIT President Sally Kornbluth noted that Thilly’s students and postdocs recalled him as “a wise but tough mentor.”

“Many of the students and postdocs Bill trained have become industry leaders in the fields of drug evaluation and toxicology. And he changed the lives of many more MIT students through his generous support of scholarships for undergraduates from diverse educational backgrounds,” Kornbluth wrote.

Tackling real-world problems

Thilly was born on Staten Island, New York, and his family later moved to a farm in Rush Township, located in central Pennsylvania. He earned his bachelor’s degree in biology in 1967 and an ScD in nutritional biochemistry in 1971, both from MIT. In 1972, he joined the MIT faculty as an assistant professor of genetic toxicology.

His research group began with the aim of discovering the origins of disease-causing mutations in humans. In the 1970s, his lab developed an assay that allows for quantitative measurement of mutations in human cells. This test, known as the TK6 assay, allows researchers to identify compounds that are likely to cause mutations, and it is now used by pharmaceutical companies to test whether new drug compounds are safe for human use.

Unlike many previous assays, which could identify only one type of mutation at a time, Thilly’s TK6 assay could catch any mutation that would disrupt the function of a gene.

From 1980 to 2001, Thilly served as the director of MIT’s Center for Environmental Health Sciences. During that time, he assembled a cross-disciplinary team, including experts from several MIT departments, that examined the health effects of burning fossil fuels.

“Working in a coordinated manner, the team established more efficient ways to burn fuel, and, importantly, they were able to assess which combustion methods would have the least impact on human and environmental health,” says John Essigmann, the William R. and Betsy P. Leitch Professor of Chemistry, Toxicology, and Biological Engineering at MIT.

Thilly was also instrumental in developing MIT’s first Superfund program. In the 1980s, he mobilized a group of MIT researchers from different disciplines to investigate the effects of the toxic waste at a Superfund site in Woburn, Massachusetts, and help devise remediation plans.

Bringing together scientists and engineers from different fields, who were at the time very siloed within their own departments, was a feat of creativity and leadership, Thilly’s colleagues say, and an example of his dedication to tackling real-world problems.

Later, Thilly utilized a protocol known as denaturing gel electrophoresis to visualize environmentally caused mutations by their ability to alter the melting temperature of the DNA duplex. He used this tool to study human tissue derived from people who had experienced exposure to agents such as tobacco smoke, allowing him to create a rough draft of the mutational spectrum that such agents produce in human cells. This work led him to propose that the mutations in many cancers are likely caused by inaccurate copying of DNA by specialized polymerases known as non-replicative polymerases.

One of Thilly’s most significant discoveries was the fact that cells that are deficient in a DNA repair process called mismatch repair were resistant to certain DNA-damaging agents. Later work by Nobel laureate Paul Modrich ’68 showed how cells lacking mismatch repair become resistant to anticancer drugs.

In 2001, Thilly joined MIT’s newly formed Department of Biological Engineering. During the 2000s, Thilly’s wife, MIT Research Scientist Elena Gostjeva, discovered an unusual, bell-shaped structure in the nuclei of plant cells, known as metakaryotic nuclei. Thilly and Gostjeva later found these nuclei in mammalian stem cells. In recent years, they were exploring the possibility that these cells give rise to tumors, and investigating potential compounds that could be used to combat that type of tumor growth.

A wrestling mentality

Thilly was a dedicated teacher and received the Everett Moore Baker Award for Excellence in Undergraduate Teaching in 1974. In 1991, a series of courses he helped to create, called Chemicals in the Environment, was honored with the Irwin Sizer Award for the Most Significant Improvement to MIT Education. Many of the students and postdocs that he trained have become industry leaders in drug evaluation and toxicant identification. This past semester, Thilly and Gostjeva co-taught two undergraduate courses in the biology of metakaryotic stem cells.

A champion wrestler in his youth, Thilly told colleagues that he considered teaching “a contact sport.” “He had this wrestling mentality. He wanted a challenge,” Engelward says. “Whatever the issue was scientifically that he felt needed to be hashed out, he wanted to battle it out.”

In addition to wrestling, Thilly was also a captain of the MIT Rugby Football Club in the 1970s, and one of the founders of the New England Rugby Football Union.

Thilly loved to talk about science and often held court in the hallway outside his office on the seventh floor of Building 16, regaling colleagues and students who happened to come by.

“Bill was the kind of guy who would pull you aside and then start going on and on about some aspect of his work and why it was so important. And he was very passionate about it,” Essigmann recalls. “He was also an amazing scholar of the early literature of not only genetic toxicology, but molecular biology. His scholarship was extremely good, and he’d be the go-to person if you had a question about something.”

Thilly also considered it his duty to question students about their work and to make sure that they were thinking about whether their research would have real-world applications.

“He really was tough, but I think he really did see it as his responsibility. I think he felt like he needed to always be pushing people to do better when it comes to the real world,” Engelward says. “That’s a huge legacy. He affected probably hundreds of students, because he would go to the graduate student seminar series and he was always asking questions, always pushing people.”

Thilly was a strong proponent of recruiting more underserved students to MIT and made many trips to historically Black universities and colleges to recruit applicants. He also donated more than $1 million to scholarship funds for underserved students, according to colleagues.

While an undergraduate at MIT, Thilly also made a significant mark in the world of breakfast cereals. During the summer of 1965, he worked as an intern at Kellogg’s, where he was given the opportunity to create his own cereal, according to the breakfast food blog Extra Crispy. His experiments with dried apples and leftover O’s led to the invention of the cereal that eventually became Apple Jacks.

In addition to his wife, Thilly is survived by five children: William, Grethe, Walter, and Audrey Thilly, and Fedor Gostjeva; a brother, Walter; a sister, Joan Harmon; and two grandchildren.

Teaching AI to communicate sounds like humans do

Thu, 01/09/2025 - 12:00am

Whether you’re describing the sound of your faulty car engine or meowing like your neighbor’s cat, imitating sounds with your voice can be a helpful way to relay a concept when words don’t do the trick.

Vocal imitation is the sonic equivalent of doodling a quick picture to communicate something you saw — except that instead of using a pencil to illustrate an image, you use your vocal tract to express a sound. This might seem difficult, but it’s something we all do intuitively: To experience it for yourself, try using your voice to mirror the sound of an ambulance siren, a crow, or a bell being struck.

Inspired by the cognitive science of how we communicate, MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) researchers have developed an AI system that can produce human-like vocal imitations with no training, and without ever having “heard” a human vocal impression before.

To achieve this, the researchers engineered their system to produce and interpret sounds much like we do. They started by building a model of the human vocal tract that simulates how vibrations from the voice box are shaped by the throat, tongue, and lips. Then, they used a cognitively inspired AI algorithm to control this vocal tract model and make it produce imitations, taking into consideration the context-specific ways that humans choose to communicate sound.

The model can effectively take many sounds from the world and generate a human-like imitation of them — including noises like leaves rustling, a snake’s hiss, and an approaching ambulance siren. Their model can also be run in reverse to guess real-world sounds from human vocal imitations, similar to how some computer vision systems can retrieve high-quality images based on sketches. For instance, the model can correctly distinguish the sound of a human imitating a cat’s “meow” versus its “hiss.”

In the future, this model could potentially lead to more intuitive “imitation-based” interfaces for sound designers, more human-like AI characters in virtual reality, and even methods to help students learn new languages.

The co-lead authors — MIT CSAIL PhD students Kartik Chandra SM ’23 and Karima Ma, and undergraduate researcher Matthew Caren — note that computer graphics researchers have long recognized that realism is rarely the ultimate goal of visual expression. For example, an abstract painting or a child’s crayon doodle can be just as expressive as a photograph.

“Over the past few decades, advances in sketching algorithms have led to new tools for artists, advances in AI and computer vision, and even a deeper understanding of human cognition,” notes Chandra. “In the same way that a sketch is an abstract, non-photorealistic representation of an image, our method captures the abstract, non-phono-realistic ways humans express the sounds they hear. This teaches us about the process of auditory abstraction.”

The art of imitation, in three parts

The team developed three increasingly nuanced versions of the model to compare to human vocal imitations. First, they created a baseline model that simply aimed to generate imitations that were as similar to real-world sounds as possible — but this model didn’t match human behavior very well.

The researchers then designed a second “communicative” model. According to Caren, this model considers what’s distinctive about a sound to a listener. For instance, you’d likely imitate the sound of a motorboat by mimicking the rumble of its engine, since that’s its most distinctive auditory feature, even if it’s not the loudest aspect of the sound (compared to, say, the water splashing). This second model created imitations that were better than the baseline, but the team wanted to improve it even more.

To take their method a step further, the researchers added a final layer of reasoning to the model. “Vocal imitations can sound different based on the amount of effort you put into them. It costs time and energy to produce sounds that are perfectly accurate,” says Chandra. The researchers’ full model accounts for this by trying to avoid utterances that are very rapid, loud, or high- or low-pitched, which people are less likely to use in a conversation. The result: more human-like imitations that closely match many of the decisions that humans make when imitating the same sounds.
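
The accuracy-versus-effort trade-off can be illustrated with a few lines of code. This is a hedged toy, not the researchers’ model (which controls a simulated vocal tract, not a feature vector); the features, weights, and candidate values below are all invented for illustration.

```python
# Toy sketch of effort-aware imitation (illustrative, not the CSAIL model):
# score candidate imitations by similarity to a target sound minus a penalty
# for effortful extremes of speed, loudness, and pitch. All features are
# normalized to [0, 1] and all numbers are made up.

def effort(rate, loudness, pitch):
    """Penalty that grows as an utterance gets very fast, very loud, or
    far from a comfortable mid pitch."""
    return rate ** 2 + loudness ** 2 + (pitch - 0.5) ** 2

def score(candidate, target, effort_weight=0.5):
    # Similarity term: negative squared distance to the target's features
    match = -sum((candidate[k] - target[k]) ** 2 for k in target)
    return match - effort_weight * effort(candidate["rate"],
                                          candidate["loudness"],
                                          candidate["pitch"])

target = {"rate": 0.9, "loudness": 0.9, "pitch": 0.8}  # e.g., a shrill siren
candidates = [
    {"rate": 0.9, "loudness": 0.9, "pitch": 0.8},   # exact but exhausting
    {"rate": 0.6, "loudness": 0.6, "pitch": 0.65},  # approximate but easy
]
best = max(candidates, key=lambda c: score(c, target))
print(best)
```

Even though the first candidate matches the target perfectly, the effort penalty makes the easier approximation win, mirroring the paper’s observation that people trade accuracy for comfort when imitating sounds.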

After building this model, the team conducted a behavioral experiment to see whether the AI- or human-generated vocal imitations were perceived as better by human judges. Notably, participants in the experiment favored the AI model 25 percent of the time in general, and as much as 75 percent for an imitation of a motorboat and 50 percent for an imitation of a gunshot.

Toward more expressive sound technology

Passionate about technology for music and art, Caren envisions that this model could help artists better communicate sounds to computational systems and assist filmmakers and other content creators with generating AI sounds that are more nuanced to a specific context. It could also enable a musician to rapidly search a sound database by imitating a noise that is difficult to describe in, say, a text prompt.

In the meantime, Caren, Chandra, and Ma are looking at the implications of their model in other domains, including the development of language, how infants learn to talk, and even imitation behaviors in birds like parrots and songbirds.

The team still has work to do with the current iteration of their model: It struggles with some consonants, like “z,” which led to inaccurate impressions of some sounds, like bees buzzing. They also can’t yet replicate how humans imitate speech, music, or sounds that are imitated differently across different languages, like a heartbeat.

Stanford University linguistics professor Robert Hawkins says that language is full of onomatopoeia and words that mimic but don’t fully replicate the things they describe, like the “meow” sound that very inexactly approximates the sound that cats make. “The processes that get us from the sound of a real cat to a word like ‘meow’ reveal a lot about the intricate interplay between physiology, social reasoning, and communication in the evolution of language,” says Hawkins, who wasn’t involved in the CSAIL research. “This model presents an exciting step toward formalizing and testing theories of those processes, demonstrating that both physical constraints from the human vocal tract and social pressures from communication are needed to explain the distribution of vocal imitations.”

Caren, Chandra, and Ma wrote the paper with two other CSAIL affiliates: Jonathan Ragan-Kelley, MIT Department of Electrical Engineering and Computer Science associate professor, and Joshua Tenenbaum, MIT Brain and Cognitive Sciences professor and Center for Brains, Minds, and Machines member. Their work was supported, in part, by the Hertz Foundation and the National Science Foundation. It was presented at SIGGRAPH Asia in early December.

Monitoring space traffic

Wed, 01/08/2025 - 4:00pm

If there’s a through line in Sydney Dolan’s pursuits, it’s a fervent belief in being a good steward — both in space and on Earth.

As a doctoral student in the MIT Department of Aeronautics and Astronautics (AeroAstro), Dolan is developing a model that aims to mitigate satellite collisions. They see space as a public good, a resource for everyone. “There’s a real concern that you could be potentially desecrating a whole orbit if enough collisions were to happen,” they say. “We have to be very thoughtful about trying to maintain people’s access, to be able to use space for all the different applications that it has today.”

Here on the Blue Planet, Dolan is passionate about building community and ensuring that students in the department have what they need to succeed. To that end, they have been deeply invested in mentoring other students; leading and participating in affinity groups for women and the LGBTQ+ community; and creating communications resources to help students navigate grad school.

Launching into new territories

Dolan’s interest in aerospace began as a high school student in Centerville, Virginia. A close friend asked them to go to a model rocket club meeting because she didn’t want to go alone. “I ended up going with her and really liking it, and it ended up becoming more of my thing than her kind of thing!” they say with a laugh. Building rockets and launching them in rural Virginia gave Dolan formative, hands-on experience in aerospace engineering and convinced them to pursue the field in college.

They attended Purdue University, lured by the beautiful aerospace building and the school’s stature as a leading producer of astronauts. While they’re grateful for the education they received at Purdue, the dearth of other women in the department was glaring.

That gender imbalance motivated Dolan to launch Purdue Women in Aerospace, to facilitate connections and work on changing the department’s culture. The group worked to make study spaces more welcoming to women and planned the inaugural Amelia Earhart Summit to celebrate women’s contributions to the field. Several hundred students, alumni, and others gathered for a full day of inspiring speakers, academic and industry panels, and networking opportunities.

During their junior year, Dolan was accepted into the Matthew Isakowitz Fellowship Program, which places students with a commercial space company and pairs them with a career mentor. They interned at Nanoracks over the summer, developing a small cubesat payload that went on the International Space Station. Through the internship they met an MIT AeroAstro PhD alumna, Natalya Bailey ’14. Since Dolan was leaning toward going to graduate school, Bailey provided valuable advice about where to consider applying and what goes into an application package — as well as a plug for MIT.

Although they applied to other schools, MIT stood out. “At the time, I really wasn’t sure if I wanted to be more in systems engineering or if I wanted to specialize more in guidance, navigation, controls, and autonomy. And I really like that the program at MIT has strength in both of those areas,” Dolan explains, adding that few schools have both specialties. That way, they would always have the option to switch from one to the other if their interests changed. 

Being a good space actor

That option would come in handy. For their master’s degree, they conducted two research projects in systems engineering. In their first year, they joined the Engineering Systems Laboratory, comparing lunar and Martian mission architectures to identify which technologies could be successfully deployed both on the moon and Mars to, as Dolan says, “get our bang for the buck.” Next, they worked on the Media Lab’s TESSERAE project, which aims to create tiles that can autonomously self-assemble to form science labs, zero-gravity habitats, and other applications in space. Dolan worked on the controls for the tiles and the feasibility of using computer vision for them.

Ultimately, Dolan decided to switch their focus to autonomy for their PhD, with a focus on satellite traffic applications. They joined the DINaMo Research Group, working with Hamsa Balakrishnan, associate dean of the School of Engineering and the William Leonhard (1940) Professor of Aeronautics and Astronautics.

Managing space traffic has become increasingly complex. As the cost of getting to space has decreased and new launch providers like SpaceX have spun up, the number of satellites has grown over the last few decades — as well as the risk of collisions. Traveling at approximately 17,000 miles per hour, satellites can cause catastrophic damage and create debris that, in turn, poses an additional hazard. The European Space Agency has estimated that there are roughly 11,500 satellites in orbit (2,500 of which are not active) and over 35,000 pieces of debris larger than 10 centimeters. Last February, there was a near-collision — missing by only 33 feet — between a NASA satellite and a non-operational Russian spy satellite.

Despite these risks, there’s no centralized governing body monitoring satellite maneuvers, and many operators are reluctant to share their satellite’s exact location, although they will provide limited information, Dolan says. Their doctoral thesis aims to address these issues through a model that enables satellites to independently make decisions on maneuvers to avoid collisions, using information they glean from nearby satellites. Dolan’s approach is interdisciplinary, using reinforcement learning, game theory, and optimal control over an abstract graph representation of the space environment.
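
The graph abstraction at the heart of that approach can be sketched simply. This is an illustration, not Dolan’s actual model: satellites become nodes, and an edge links any pair close enough that they should coordinate maneuvers. The positions and distance threshold below are invented.

```python
# Illustrative sketch (not Dolan's model): build a proximity graph where
# satellites are nodes and edges connect pairs within a coordination range.
import math

def proximity_graph(positions, threshold_km):
    """Adjacency list linking satellites within threshold_km of each other."""
    ids = list(positions)
    graph = {i: [] for i in ids}
    for idx, i in enumerate(ids):
        for j in ids[idx + 1:]:
            if math.dist(positions[i], positions[j]) <= threshold_km:
                graph[i].append(j)
                graph[j].append(i)
    return graph

# Toy Earth-centered positions in km (made up for illustration)
sats = {
    "A": (7000.0, 0.0, 0.0),
    "B": (7005.0, 2.0, 0.0),  # a few km from A: should coordinate
    "C": (0.0, 7100.0, 0.0),  # far from both
}
graph = proximity_graph(sats, threshold_km=50.0)
print(graph)
```

A decentralized scheme like the one described would then let each satellite reason only over its own neighborhood in such a graph, rather than requiring a central authority with every operator’s exact ephemeris.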

Dolan sees the model as a potential tool that could provide decentralized oversight and inform policy: “I’m largely just all in favor of being a good space actor, thinking of space as a protected resource, just like the national parks. And here’s a mathematical tool we can use to really validate that this sort of information would be helpful.”

Finding a natural fit

Now wrapping up their fifth year, Dolan has been deeply involved in the MIT AeroAstro community since arriving in 2019. They have served as a peer mediator in the dREFS program (Department Resources for Easing Friction and Stress); mentored other women students; and served as co-president of the Graduate Women in Aerospace Engineering group. As a communication fellow in the AeroAstro Communications Lab, Dolan has created and offered workshops, coaching, and other resources to help students with journal articles, fellowship applications, posters, resumes, and other forms of science communications. “I just believe so firmly that all people should have the same resources to succeed in grad school,” Dolan says. “MIT does a really great job providing a lot of resources, but sometimes it can be daunting to figure out what they are and who to ask.”

In 2020, they helped found an LGBTQ+ affinity group called QuASAR (Queer Advocacy Space in AeroAstro). Unlike most MIT clubs, QuASAR is open to everyone in the department — undergraduate and graduate students, faculty, and staff. Members gather several times a year for social events, and QuASAR has hosted academic and industry panels to better reflect the variety of identities in the aerospace field.

In their spare time Dolan loves ultrarunning — that is, running distances greater than a marathon. To date, they’ve run 50-kilometer and 50-mile races, and recently, a whopping 120 miles in a backyard ultramarathon (“basically, run ’til you drop,” Dolan says). It’s a great antidote to stress, and, curiously, they’ve noticed there are a lot of PhD students in ultrarunning. “I was talking with my advisor about it one time and she’s like, ‘Sydney, you’re crazy, why on Earth would you do anything like that?’ She said this respectfully! And I’m like, ‘Yeah, why would I ever want to do a task that has an ambiguous end date and that requires a lot of work and discipline?’” Dolan says, grinning.

Their hard work and discipline will pay off as they prepare to complete their MIT journey. After wrapping up their degree program, Dolan hopes to land a faculty position at a college or university. Being a professor feels like a natural fit, they say, combining their fascination with aerospace engineering with their passion for teaching and mentoring. As to where they will end up, Dolan waxes philosophical: “I’m throwing a lot of darts at the wall, and we’ll see … it’s with the universe now.”

Images that transform through heat

Wed, 01/08/2025 - 2:40pm

Researchers in MIT Professor Stefanie Mueller’s group have spent much of the last decade developing a variety of computing techniques aimed at reimagining how products and systems are designed. Much in the way that platforms like Instagram allow users to modify 2-D photographs with filters, Mueller imagines a world where we can do the same thing for a wide array of physical objects.

In a new open-access paper, her team at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) has demonstrated a novel printing technique along these lines — which they call “Thermochromorph” — that produces images that can change colors when heated up.

Led by first author and MIT electrical engineering and computer science doctoral student Ticha Melody Sethapakdi SM ’22, the researchers say they could imagine their method being applied in ways both artistic and functional, like a coffee cup that warns if the liquid is too hot, or packaging for medicines or perishable foods that indicates whether the product has been stored at a safe temperature.

So-called “thermochromic” materials that visually change with temperature are not new — you can see examples with consumer beverages like Coke and Coors Light that reveal “ready to drink” labeling when refrigerated. But such instances in product marketing have traditionally been limited to a single color. By using inks with complementary characteristics — with one set that goes from clear to colored, and another from colored to clear — Sethapakdi says that she and her colleagues are “finally taking advantage of full-color process printing, which opens up a lot of possibilities for designing with thermochromic materials.”
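The interplay of the two ink sets can be modeled with a toy sketch (an illustration on my part, not anything from the paper): one set is colored when cold and turns clear above its activation temperature, while the complementary set is clear when cold and develops color when hot, so the visible image swaps as the temperature crosses the set point. The activation temperature below is an assumed value; real thermochromic pigments are sold across a range of set points.

```python
def visible_image(temp_c, activation_c=31.0):
    """Toy model of complementary thermochromic ink sets.

    temp_c: current temperature in degrees Celsius.
    activation_c: assumed activation temperature (hypothetical value);
    commercial pigments come in many different set points.
    """
    if temp_c < activation_c:
        # The colored-to-clear set is still colored: the "cold" image shows.
        return "cold image"
    # The clear-to-colored set has developed: the "hot" image shows.
    return "hot image"

print(visible_image(20.0))  # → cold image
print(visible_image(45.0))  # → hot image
```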

The researchers worked with several visual artists to teach them to use Thermochromorph, and then solicited feedback and brainstorming about new narrative concepts and techniques unlocked by the tool, like color-changing postcards that could tell sequential stories in more compact, dynamic ways. One participant even plans to use Thermochromorph to make an educational science kit aimed at teaching students about sea creatures that change color.

The team developed their method to be applied specifically to “relief printing,” an early form of printmaking that involves carving a design into a block of material, applying ink or pigment to it, and then transferring the image onto paper or another surface.

Sethapakdi says that, compared to techniques like screen printing, relief printing is “more lightweight” and can be done with less setup and fewer materials, enabling a faster, lower-stakes iteration process. Artists including Pablo Picasso and Salvador Dalí have used related approaches in their work, such as woodcut and linocut printing.

“Our key contribution is applying these new materials to a traditional artistic process, and exploring how artists might be able to use it as part of their practice,” says Sethapakdi, lead author on a related paper that was recently presented at SIGGRAPH Asia in Tokyo.

The color-changing component also need not come from an active external heating or cooling source like, say, a fridge or a hot plate; using thermochromic inks with lower activation temperatures can allow for more subtle thermal changes brought about by human touch. Sethapakdi says she could even imagine applying this new process to create interactive surfaces or dynamic analog “interfaces” that visually change in response to touch.

Thermochromorph combines digital and analog processes in the form of, on the one hand, CMYK imaging and laser cutting, and, on the other, manual printmaking and thermochromic inks. Fabrication involves four core steps:
 

  1. Block preparation: Solid hardwood blocks are used for Thermochromorph. The blocks are laser cut and engraved with the desired design, and then rinsed with water to remove any leftover particles.
  2. Inking the block: First, a thin layer of ink is spread evenly onto a plate using a rubber brayer. Then, the ink is transferred from the brayer to the woodblock.
  3. Registration: A registration jig is used to position the woodblock to ensure the different ink layers are aligned correctly. The printing surface, such as paper, is then placed on top of the block and secured.
  4. Printing the images: A printing press is used to apply even pressure across the printing surface and transfer the ink from the block to the surface. The hot image is printed first, followed by the cold image. (If necessary, additional ink can be applied to specific areas of the block to touch up the print.)
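The digital side of this kind of workflow can be sketched in code. The following is an assumption on my part about how such a pipeline might work, not the team's published implementation: each source image channel is halftoned into a binary ink mask via ordered dithering, which could then drive the laser engraving of one block per ink set, with the "hot" block cut from the complementary image.

```python
import numpy as np

# 4x4 ordered-dither (Bayer) threshold matrix, normalized to [0, 1).
BAYER_4 = np.array([[ 0,  8,  2, 10],
                    [12,  4, 14,  6],
                    [ 3, 11,  1,  9],
                    [15,  7, 13,  5]]) / 16.0

def halftone(gray):
    """Ordered-dither a grayscale image (values in [0, 1]) into a
    binary ink mask: True where ink should be deposited."""
    h, w = gray.shape
    # Tile the threshold matrix to cover the full image, then crop.
    thresh = np.tile(BAYER_4, (h // 4 + 1, w // 4 + 1))[:h, :w]
    # Darker pixels (lower values) fall below more thresholds,
    # so they receive ink more often.
    return gray < thresh

# A simple horizontal gradient stands in for one channel of the
# "cold" image; the "hot" block would be cut from the inverse so the
# two ink sets swap visibility with temperature.
cold_channel = np.linspace(0.0, 1.0, 64).reshape(1, -1).repeat(64, axis=0)
cold_mask = halftone(cold_channel)
hot_mask = halftone(1.0 - cold_channel)
print(cold_mask.mean(), hot_mask.mean())  # approximate ink coverage fractions
```

In a full-color version, the same masking would be applied per CMYK channel, which is why registration (step 3 above) matters: the separate ink layers must land in alignment for the halftone dots to reconstruct the intended colors.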

The three prints the team used to demonstrate their technique were a set of frames from a Batman comic, a label depicting a fish and its underlying skeleton, and an image of a male subject both in profile and viewed from the front. (For the latter, as the temperature changes, the viewpoint gradually shifts, giving the effect of motion.)

It’s worth noting that Thermochromorph does have some limitations related to image resolution and print quality. Image resolution is constrained by the smallest dot size that the team’s laser cutter can engrave. Techniques like screen printing could offset this, but at the cost of additional time and materials. In terms of print quality, the pigments are not entirely invisible in their “clear” states, which means the clarity of the transitions depends on how thickly the ink layers were applied during printmaking. While this issue is intrinsic to the properties of the pigments, Sethapakdi says that in future iterations the team plans to explore image-processing techniques that modify the overlay of halftone patterns for the hot and cold images, which may help reduce these visual artifacts.

Sethapakdi and Mueller co-authored the new paper alongside Juliana Covarrubias ’24, MIT graduate student in media arts and sciences Paris Myers, University of California at Berkeley PhD student Tianyu Yu, and Adobe Research Scientist Mackenzie Leake.
