MIT Latest News
MIT engineers develop a magnetic transistor for more energy-efficient electronics
Transistors, the building blocks of modern electronics, are typically made of silicon. Because it’s a semiconductor, this material can control the flow of electricity in a circuit. But silicon has fundamental physical limits that restrict how compact and energy-efficient a transistor can be.
MIT researchers have now replaced silicon with a magnetic semiconductor, creating a magnetic transistor that could enable smaller, faster, and more energy-efficient circuits. The material’s magnetism strongly influences its electronic behavior, leading to more efficient control of the flow of electricity.
The team used a novel magnetic material and an optimization process that reduces the material’s defects, which boosts the transistor’s performance.
The material’s unique magnetic properties also allow for transistors with built-in memory, which would simplify circuit design and unlock new applications for high-performance electronics.
“People have known about magnets for thousands of years, but there are very limited ways to incorporate magnetism into electronics. We have shown a new way to efficiently utilize magnetism that opens up a lot of possibilities for future applications and research,” says Chung-Tao Chou, an MIT graduate student in the departments of Electrical Engineering and Computer Science (EECS) and Physics, and co-lead author of a paper on this advance.
Chou is joined on the paper by co-lead author Eugene Park, a graduate student in the Department of Materials Science and Engineering (DMSE); Julian Klein, a DMSE research scientist; Josep Ingla-Aynes, a postdoc in the MIT Plasma Science and Fusion Center; Jagadeesh S. Moodera, a senior research scientist in the Department of Physics; and senior authors Frances Ross, the TDK Professor in DMSE, and Luqiao Liu, an associate professor in EECS and a member of the Research Laboratory of Electronics; as well as others at the University of Chemistry and Technology in Prague. The paper appears today in Physical Review Letters.
Overcoming the limits
In an electronic device, silicon semiconductor transistors act like tiny light switches that turn a circuit on and off, or amplify weak signals in a communication system. They do this using a small input voltage.
But a fundamental physical limit of silicon semiconductors prevents a transistor from operating below a certain voltage, which hinders its energy efficiency.
To make more efficient electronics, researchers have spent decades working toward magnetic transistors that utilize electron spin to control the flow of electricity. Electron spin is a fundamental property that enables electrons to behave like tiny magnets.
So far, scientists have mostly been limited to using certain magnetic materials. These lack the favorable electronic properties of semiconductors, constraining device performance.
“In this work, we combine magnetism and semiconductor physics to realize useful spintronic devices,” Liu says.
The researchers replace the silicon in the surface layer of a transistor with chromium sulfur bromide, a two-dimensional material that acts as a magnetic semiconductor.
Due to the material’s structure, researchers can switch between two magnetic states very cleanly. This makes it ideal for use in a transistor that smoothly switches between “on” and “off.”
“One of the biggest challenges we faced was finding the right material. We tried many other materials that didn’t work,” Chou says.
They discovered that changing these magnetic states modifies the material’s electronic properties, enabling low-energy operation. And unlike many other 2D materials, chromium sulfur bromide remains stable in air.
To make a transistor, the researchers pattern electrodes onto a silicon substrate, then carefully align and transfer the 2D material on top. They use tape to pick up a tiny piece of material, only a few tens of nanometers thick, and place it onto the substrate.
“A lot of researchers will use solvents or glue to do the transfer, but transistors require a very clean surface. We eliminate all those risks by simplifying this step,” Chou says.
Leveraging magnetism
This lack of contamination enables their device to outperform existing magnetic transistors. Most others can only create a weak magnetic effect, changing the flow of current by a few percent or less. Their new transistor can switch or amplify the electric current by a factor of 10.
They use an external magnetic field to change the magnetic state of the material, switching the transistor using significantly less energy than would usually be required.
The material also allows them to control the magnetic states with electric current. This is important because engineers cannot apply magnetic fields to individual transistors in an electronic device. They need to control each one electrically.
The material’s magnetic properties could also enable transistors with built-in memory, simplifying the design of logic or memory circuits.
A typical memory device has a magnetic cell to store information and a transistor to read it out. Their method can combine both into one magnetic transistor.
“Now, not only are transistors turning on and off, they are also remembering information. And because we can switch the transistor with greater magnitude, the signal is much stronger so we can read out the information faster, and in a much more reliable way,” Liu says.
Building on this demonstration, the researchers plan to further study the use of electrical current to control the device. They are also working to make their method scalable so they can fabricate arrays of transistors.
This research was supported, in part, by the Semiconductor Research Corporation, the U.S. Defense Advanced Research Projects Agency (DARPA), the U.S. National Science Foundation (NSF), the U.S. Department of Energy, the U.S. Army Research Office, and the Czech Ministry of Education, Youth, and Sports. The work was partially carried out at the MIT.nano facilities.
MIT practicum connects students with Ukrainian city leaders on economic development
MIT graduate students are working with leaders from the Ukrainian city of Vinnytsia to explore strategies for economic development, infrastructure, and innovation during wartime conditions.
As part of the MIT Department of Urban Studies and Planning (DUSP) spring course 11.S941 (Innovating in Ukraine), DUSP hosted a delegation of five Ukrainian leaders from Vinnytsia, a city of about 400,000 people located approximately 280 kilometers from Kyiv in central Ukraine. The course, taught by professor of the practice Elisabeth Reynolds, is a practicum in which students work with a “client” for the semester on specific projects or issues the city would like to address and provide a final report or deliverable.
The city of Vinnytsia, which had two representatives on the trip, has focused on building out its “innovation ecosystem” across key parts of its economy. Amid the ongoing war with Russia, Ukraine has accelerated its longtime expertise in information technology in both civilian and military contexts. Examples include the digitalization of government services, such that many services are accessible by cellphone through the e-governance app Diia, as well as the development of a rapidly evolving drone industry.
The 13 graduate students, drawn from the School of Architecture and Planning and the MIT Sloan School of Management, as well as Harvard University’s Kennedy School and Graduate School of Design, have worked with members of the city government and Vinnytsia National Technical University on a range of projects focused on the city’s future growth. The projects include developing an agro-food cluster to facilitate Ukraine’s integration into the European Union; transportation and logistics to support economic growth in the city and enhance its role as a regional hub; improving the city’s and country’s electronic waste management; and developing the city’s creative and entrepreneurial talent to retain and attract workers.
While in Cambridge for the week, the visitors and students toured a number of places and organizations that engage in innovation. A trip to Boston City Hall to meet with Kairos Shen, Boston’s chief city planner and a former professor of the practice at the MIT Center for Real Estate, highlighted the ways in which the built environment can facilitate activities and interactions to foster a more innovative city. Tours of the Cambridge Innovation Center in Kendall Square, Greentown Labs in Somerville, and MassChallenge in Boston provided examples of the myriad ways the region supports entrepreneurs through shared workspace, incubators, and network development.
“We are very interested in partnering with some of these organizations,” said Dmitry Sofyna, CEO and co-founder of WINSTARS.AI, an R&D center in Ukraine focused on AI applications. “We want to transform Ukraine from a major player in engineering and scientific outsourcing into a hub for creating large-scale tech companies in defense, medicine, and energy.” Vinnytsia is currently building Crystal Technology Park, one of the largest technology parks in Ukraine.
Usually during a practicum, students travel to the host location to spend a week during Independent Activities Period (IAP) or spring break learning about the city or region. In the case of the collaboration with Vinnytsia — an outgrowth of the MIT-Ukraine initiative and the Ukraine Community Recovery Academy, with which DUSP has been working for two years — the students are unable to travel to Ukraine due to the war. With the help of a generous alumnus, DUSP instead brought the Ukrainian delegation to Cambridge so that there could be in-person exchange between the students and the Vinnytsia partners.
“It’s been an amazing trip,” said Yanna Chaikovska, director of Vinnytsia’s Institute for Urban Development. “We are planning for the future because that is what we must do. Ukraine has faced many challenges in the past and always worked in small and big ways to move forward. MIT is helping us do this.”
Nick Durham, a joint DUSP/MIT Sloan master’s student, added: “I am continually inspired by the resilience of the Ukrainian people and how they are finding creative ways to build a better future. In many ways, Ukrainian innovation is serving as a model for reimagining industries and complex economic systems.”
The collaboration reflects a broader effort within DUSP to engage with cities facing complex economic and geopolitical challenges through applied, practice-based research. Hashim Sarkis, dean of the School of Architecture and Planning, spoke of this effort during a panel discussion with the Ukrainian visitors, noting that “with so much conflict in the world today, SA+P must create new ways to help cities rebuild, whether in Ukraine or elsewhere.”
Big strides in cancer detection and treatment from the tiniest technologies
The vision of nanotechnology’s tremendous potential to transform cancer detection and treatment has guided faculty at the Marble Center for Cancer Nanomedicine through its first 10 years.
On April 9, the center gathered researchers, entrepreneurs, clinicians, industry collaborators, and members of the public at the Broad Institute of MIT and Harvard and the Koch Institute for Integrative Cancer Research galleries to celebrate a milestone anniversary and reflect on its journey.
“Our purpose has always been clear: to empower discovery and community in nanomedicine at MIT,” said Sangeeta Bhatia, faculty director at the Marble Center for Cancer Nanomedicine and the John J. and Dorothy Wilson Professor of Health Sciences and Technology and Electrical Engineering and Computer Science at MIT.
“A decade in, we are seeing that vision materialize not just in publications, but in our community, our startups, and ultimately, in patients whose lives are being changed,” Bhatia told an audience of about 150 gathered in person for the celebration.
The event featured an overview of the Marble Center by Bhatia and a perspective on nanomedicine by Robert S. Langer, the David H. Koch (1962) Institute Professor and faculty member at the Marble Center.
A panel on translational nanomedicine followed the talks. It was moderated by Susan Hockfield, president emerita and professor of neuroscience at MIT, and included Noor Jailkhani, a former MIT postdoc in the laboratory of the late MIT biology professor Richard Hynes and co-founder, CEO, and president of Matrisome Bio; Peter DeMuth ’13, chief scientific officer at Elicio Therapeutics; Vadim Dudkin, founding chief technology officer at Soufflé Therapeutics; and Viktor Adalsteinsson ’15, co-founder of Amplifyer Bio and director of the Gerstner Center for Cancer Diagnostics at the Broad Institute.
A decade of impact in nanomedicine
Established in 2016 through a generous gift from Kathy and Curt Marble ’63, the Marble Center brings together leading Koch Institute faculty members and their teams to focus on grand challenges in cancer detection, treatment, and monitoring through miniaturization and convergence — the blending of the life and physical sciences with engineering, a core concept fueling multidisciplinary research at the Koch Institute.
At the center’s founding, Bhatia and Langer were joined by five additional faculty members: Daniel G. Anderson, professor of chemical engineering and member of the Institute for Medical Engineering and Science; Angela M. Belcher, the James Mason Crafts Professor in the departments of Biological Engineering and Materials Science and Engineering; Michael Birnbaum, professor of biological engineering; Paula T. Hammond, Institute Professor and dean of the School of Engineering; and Darrell J. Irvine, who is now professor and vice chair of the Department of Immunology and Microbiology at the Scripps Research Institute in La Jolla, California.
“Over the past decade, the center and its member laboratories have trained close to 500 researchers. Among them, 109 have become faculty members at 79 clinical and research universities. We also have worked in close collaboration with clinical and industry partners to produce the results you are seeing today,” said Tarek Fadel, associate director of the Marble Center and director of strategic alliances at the Koch Institute.
“Twenty-three startup companies have emerged from Marble Center laboratories during that time, with companies such as Cision Vision, Soufflé Therapeutics, Orna Therapeutics, Matrisome Bio, Amplifyer Bio, and Gensaic, among several others, holding great promise for the early detection of disease and drug delivery,” Fadel added.
The Marble Center has launched several topical programs aimed at trainee development and industry engagement. At monthly seminars, trainees at the Marble Center lead an open forum on emerging issues in their fields. The Convergence Scholars Program, which was originally launched in 2017 to further the development of postdocs beyond the laboratory bench, is now a competitive award program offered to postdocs at the Koch Institute. Through an industry affiliate program, the center has worked closely with several key players in the field of nanoscience. Industry collaborators mentor trainees and participate as judges in an annual poster symposium.
More recently, MIT-wide grants have catalyzed new collaborations: In 2023, the Global Oncology in Nanomedicine grant supported a project on leveraging AI-based approaches to speed the development of RNA vaccines and other RNA therapies. The project was led by Giovanni Traverso, the Karl Van Tassel (1925) Career Development Professor and a professor of mechanical engineering.
From lab to clinic: Lessons in nanomedicine translation
Panelists at the anniversary event shared candid reflections on the often messy but exhilarating process of turning their ideas into commercial technologies.
DeMuth described how Elicio Therapeutics, whose core technologies originated from his graduate research in Irvine’s group, harnesses the natural power of the lymph nodes to generate enhanced immune responses against tumors. The amphiphile platform uses the body’s natural albumin transport system to “shuttle” medicines into the lymph nodes, boosting immune cell activation. Elicio is now advancing their platform through a Phase 2 trial in pancreatic ductal adenocarcinoma and colorectal cancer.
Jailkhani co-founded Matrisome Bio with Bhatia and Hynes. Matrisome Bio is pioneering a new class of therapies, small protein binders called nanobodies that deliver potent payloads directly to the extracellular matrix of tumors and metastases while sparing normal tissues. Matrisome Bio is currently testing radioligand modalities with their targeting platform for the treatment of cancer.
Adalsteinsson co-founded Amplifyer Bio with Bhatia and J. Christopher Love, the Raymond A. (1921) and Helen E. St. Laurent Professor of Chemical Engineering and associate director of the Koch Institute, with the goal of developing priming agents for liquid biopsy. Priming agents injected before a blood draw transiently slow the clearance of cell-free DNA from the bloodstream, thus allowing up to 100-fold more tumor DNA to be recovered for liquid biopsy applications. While injecting agents for medical diagnostics has been done for decades in the context of imaging scans, Amplifyer Bio’s approach would be the first of its kind in the field of liquid biopsy.
Dudkin described Soufflé Therapeutics’ vision of enabling targeted delivery, via receptor-mediated uptake, to any type of cell in the human body. By combining proprietary technologies for identifying cell-specific receptors, optimizing ligands, and engineering potent siRNA, the company aims to deliver siRNA-based medicines precisely across the cell membrane to their targets.
Panelists stressed that successful translation requires complex choices. While platform technologies can theoretically address many cancer problems, startups must focus on specific indications and clinical modalities to succeed in resource-limited, commercial settings. While the academic lab offers freedom to explore multiple applications, commercialization demands strategic narrowing of scope.
Reproducibility during scale-up emerged as another critical consideration: Founders building platform companies must demonstrate not only that their technology works, but that their underlying discovery is reproducible and robust enough to support a business. All panelists agreed that thinking about manufacturability early in research, rather than as an afterthought, significantly improves a startup’s path to the clinic. Highlighting tension between selecting cutting-edge approaches and managing their inherent regulatory risks, they recommended minimizing risk by leveraging established processes and chemistries that have already been validated in approved drugs.
Finally, panelists highlighted the importance of institutional collaborations, particularly with centers like the Marble Center for Cancer Nanomedicine. These partnerships offer access to collaborative, mission-driven researchers who can push technological boundaries, while startups maintain focus on narrow clinical applications. Panelists emphasized that faculty collaborators, such as at the Marble Center, often provide “big sky thinking” that explores new directions and applications that complement the company’s core mission.
The next chapter in nanomedicine at MIT
As the Marble Center enters its second decade, the community is focused on expanding collaborations, leveraging advances in computation and other intersecting disciplines, and exploring new disease indications.
“The next 10 years will be defined by our ability to leverage insights gained at the nanoscale to push the boundaries of precision medicine. The Marble Center is in a unique position to do just that, as we evolve this incredible community at MIT to be a global hub for nanomedicine research,” said Bhatia.
Bhatia also announced that in June, the Marble Center will launch a new grant, Integrated Nanoscale Sensing, Imaging, and Health Technologies (INSIHT), aimed at advancing new imaging and sensing technologies for precision medicine.
Similarly, panelists expressed optimism about nanomedicine’s transformative potential, centered on precision medicine. The field, they argued, will focus on minimizing side effects while opening previously unavailable therapeutic windows — enabling treatments that are fundamentally more targeted and effective. This precision could render many currently untreatable diseases manageable, or even curable, while also enabling in some cases the repurposing of drugs that failed in earlier clinical contexts.
“Ten years ago, Sangeeta, Tyler Jacks, and the Marble Center community had a vision,” said Matthew Vander Heiden, director of the Koch Institute and Lester Wolfe (1919) Professor of Molecular Biology.
“Today, that vision is creating a place where bold ideas turn into transformative advances that can help cancer patients and non-cancer patients as well. It is exciting to see this momentum in nanomedicine at MIT and what will happen in the coming decade.”
How the war in the Middle East is impacting global energy systems
One day after the announcement of a ceasefire between the United States and Iran, the head of the International Energy Agency (IEA) outlined the implications of the war in the Middle East for the global energy system and the world’s economy, offering his expertise to an MIT audience.
“This is the largest energy crisis we’ve ever had in the world,” Fatih Birol, the executive director of the IEA, said at the MIT Energy Initiative’s (MITEI) Earth Day Colloquium on April 8. Birol put the current disruption of the world’s energy markets into historical perspective, shared what he believes will be the long-term impacts of this war — even in the best-case scenario where the ceasefire paves a path toward peace — and emphasized the need to create a more sustainable, resilient system moving forward.
In 1973, and again in 1979, there were oil crises that led the world economy into recession, with many countries — especially those with developing economies — spiraling into debt. More recently, Russia’s invasion of Ukraine led to a natural gas crisis. “The current crisis, the amounts of oil and gas we’ve lost, is bigger than all those three put together,” Birol stated. Citing data received two hours before the seminar, Birol said that 80 energy facilities in the Middle East had been damaged, more than one-third of them severely.
The IEA has played a significant role in the global response to the war. “Our job is to have a real-world impact,” said Birol. Earlier in the conflict, after making clear to policymakers and members of the press the scale of the problem at hand, the IEA turned to its member countries — which are required to have significant oil stock reserves — to bring their reserves to the market. “Since the disruption was so big, we brought all the countries together, which is not easy,” Birol said. “We released 400 million barrels of oil, which is the highest we have ever done. This calmed markets and put downward pressure on prices.” The IEA also released a suite of recommendations for conserving oil quickly, many of which countries around the world are already implementing, said Birol.
The implications of this crisis are far-reaching, and will vary in severity depending on how long the war lasts and how quickly normal operations resume afterwards — which could take some time, considering the extent of the damage to the Middle East’s energy infrastructure, Birol said.
Birol explained the more immediate impacts of the war on the gas industry. Although the natural gas industry has presented itself as a reliable, affordable, and flexible energy source, Birol highlighted that the two major gas crises in the last four years have brought that assertion into question.
“Is [natural gas] still reliable? Is it still flexible? Is it still affordable? After these two big crises, the natural gas industry needs to work hard to regain its brand,” he said.
Birol also outlined three potential outcomes that this shift may bring to the energy sector. First, there is historical precedent: many nuclear power plants were built in response to the oil crises of the 1970s. “Around 45 percent of nuclear power plants operating today were built as a response to those crises,” said Birol. He believes there will be another large push for nuclear power, including small nuclear reactors.
Second, renewables may be the biggest beneficiaries of this situation, he said. “In Europe, after Russia’s invasion of Ukraine, the renewable annual installations increased by a factor of three,” he said.
Third, especially in Asia, we will likely see an increase in the market penetration of electric vehicles, Birol said. This is especially important to note because Asia is the center of current oil demand growth, but the adoption of more electric vehicles could have an impact on that, he suggested. Previous crises have also led to car manufacturers improving the fuel efficiency of their cars.
“The energy security premium will be a factor of the energy trade in the future, in addition to the cost of energy,” said Birol, speaking to the longer-term effects on the global energy market. “Countries will be more careful now with whom they are trading.”
Addressing the current crisis also necessitates changes to our energy system going forward, according to Birol. He explained that the entire global economy is being held hostage by the 50 kilometers of the Strait of Hormuz, which is a critical path not only for oil and gas shipments, but for materials used to make fertilizer, which are needed to feed the world’s population, and materials such as helium, which are needed to manufacture products like cell phones.
“I’m afraid that after this is finished, some of the countries will come back faster because they have stronger financial muscles, better engineering capabilities, and better technologies, whereas other countries will suffer,” he said. “It will be, in my view, not easy for the global economy. I believe who will be suffering under this economic damage will be mainly developing countries.”
The burden on developing countries will not only come in the form of energy prices, but also lasting impacts on fertilizer consumption, food security, and food prices, which Birol emphasized is a global problem. “What should be the response to have a more secure, but also more sustainable, future for everybody?” he asked.
Birol suggested the best possible outcome to the current global energy and economic disruption would be if the ceasefire leads to a peaceful settlement of the war. Still, this “best possible outcome” includes significant risk for much of the world.
If there is a peace settlement, Birol said he expects oil and gas production in the region to restart. He noted that there are about 200 fully laden oil tankers and 15 loaded liquefied natural gas ships that could leave the Gulf fairly quickly if the Strait of Hormuz fully reopens.
“But I don’t think that in a very short period of time we will go back where we were before the war,” Birol said. “And this may keep the prices at elevated levels. This is surely not good news, especially in the emerging world. I would be surprised if we don’t see significant inflationary pressures in Asian developing countries, in Africa, and in Latin America,” Birol said. “In addition to that, the petrochemical industry, fertilizers, we will discover how important those commodities are for the supply chains we have … I expect a bit of volatility in the markets.”
This speaker series highlights energy experts and leaders at the forefront of the scientific, technological, and policy solutions needed to transform our energy systems. Visit the MIT Energy Initiative’s events page for more information on this and additional events. The series will return this fall.
Two from MIT named 2026 Knight-Hennessy Scholars
MIT master’s student Sunshine Jiang ’25 and alumnus Rupert Li ’24 are recipients of this year’s Knight-Hennessy Scholarship. Now in its ninth year, the highly competitive scholarship provides up to three years of financial support for graduate studies at Stanford University.
Sunshine Jiang ’25
Sunshine Jiang, from Hangzhou, China, graduated from MIT in 2025 with a bachelor’s degree, double majoring in physics and in electrical engineering and computer science, along with minors in mathematics and economics. She will receive her master of engineering degree this month and will start her PhD in computer science at Stanford School of Engineering this fall.
Jiang researches embodied artificial intelligence and robotics, developing data-efficient, adaptive systems for general-purpose robots that broaden accessibility. She has presented her research at major conferences, including the Conference on Robot Learning, the International Conference on Robotics and Automation, and the International Conference on Learning Representations.
Jiang led the development of AI-powered systems that provide access to traditional Chinese art in rural classrooms, founded cross-country programs that expand girls’ access to STEM education, and created a Covid-19 documentary amplifying community voices that was featured in China Daily.
Rupert Li ’24
Rupert Li, from Portland, Oregon, is currently pursuing a PhD in mathematics at Stanford School of Humanities and Sciences. He graduated from MIT in 2024 with a bachelor’s degree, double majoring in mathematics and computer science, economics, and data science. Along with his bachelor’s degree, he also received a master’s degree in data science. Li then traveled to the United Kingdom as a Marshall Scholar, where he earned a master’s degree in mathematics from the University of Cambridge.
Li’s research interests lie in probability, discrete geometry, and combinatorics. He enjoys serving as a mentor for MIT PRIMES-USA, a high school math research program, and previously served as an advisor for the Duluth REU, an undergraduate math research program. In addition to the Knight-Hennessy Scholarship and the Marshall Scholarship, he has been awarded the Hertz Fellowship, P.D. Soros Fellowship, and the Goldwater Scholarship, and he received honorable mention for the Frank and Brennie Morgan Prize.
Building “hardcore” advanced machines
MIT class 2.72/2.270 (Elements of Mechanical Design) offers undergraduate and graduate students advanced study of modeling, design, and integration, along with best practices for use of machine elements like bearings, bolts, belts, flexures, and gears.
“[Students] learn how to use basically everything from the MechE undergraduate curriculum to build hardcore advanced machines,” says Martin Culpepper, the Ralph E. and Eloise F. Cross Professor in Manufacturing and professor of mechanical engineering (MechE) at MIT.
The course employs modeling and analysis exercises based on rigorous application of physics, mathematics, and core mechanical engineering principles, which are then reinforced through lab experiences and a mechanical system design project.
Culpepper, known to students and colleagues as Marty, says one of his main goals in the course is to “make students into stronger engineers.” His methods involve a mix of teaching and coaching techniques that push students to explore the bounds of what’s possible.
“Marty likes to say that ‘as long as something doesn’t break the laws of physics, it’s possible. You just have to figure out how to engineer it,’” says Yasin Hamed, a teaching assistant for the course.
For the system design projects, students build a lathe that can meet repeatability, accuracy, and functional requirements, and that can also “pass ‘Marty’s death test,’” says MechE graduate student Sarah Stoops. “What that means practically,” explains fellow graduate student Amber Velez, “is, at the end of class, Marty takes all our lathes and drops them and hits them with a hammer, and if they explode, you don’t pass the class.”
This final test may seem harsh, but it is an important part of the process and helps build two additional, critical skills: resilience and perseverance.
“The students are very resilient. They learn to persevere and take some time to try and figure things out, and through that process … you learn so much,” says Hannah Gazdus, a teaching assistant for the course.
Before the so-called “death test,” students tackle two other challenges: precision and material removal. “All of our lathes are required to cut to within 50 microns of precision,” explains Velez. In the material removal rate competition, teams compete to see who can turn down a piece of stock by one inch the fastest. Velez’s team completed the latter task in approximately 27 seconds.
“The core classes are important — things like mechanics, materials, dynamics, controls — but many of them have a degree of abstraction that separates the content within those courses from the mechanical elements that you use in designing an actual machine,” says Hamed. “I feel like this class serves very well to bridge that [and] inspire that confidence as working engineers.”
From technical solution to systems change: Tackling the problem of plastic waste
When Akorfa Dagadu arrived at MIT, she had a solution in mind: a mobile app to improve recycling and environmental engagement in her home country of Ghana. The project, called Ishara, aimed to make it easier for people to participate in local recycling systems while creating economic opportunities.
“I grew up in what people often call the trash capital of Accra,” she recalls. “I thought I knew what would fix it. So [my Ishara co-founders and I] built a solution — an app — behind some desk in a library … We did what I thought was market research, but looking back, we were basically asking people what they thought about our idea instead of asking how things actually worked … Implementation humbled us very quickly.”
On the ground, Dagadu encountered a reality very different from the one she had anticipated.
“Informal networks of waste pickers and aggregators were already doing the work,” she explains. They’d developed a system that was already working, but it was “invisible, undervalued, and excluded from larger recycling conversations.”
From technical solutions to systems change
Soon after arriving at MIT, Dagadu discovered the PKG Center for Social Impact as a place that could help her pivot, taking a step back from her technical solution to understand the systemic context of the problem she was trying to solve.
As a first-year student, Dagadu received a PKG Fellowship, which provides funding and mentorship for students to pursue community-engaged research and development. This early support positioned Dagadu to apply to PKG’s IDEAS Social Innovation Incubator to further refine her social enterprise, Ishara. Dagadu was one of few first-year students selected for IDEAS among an applicant pool dominated by MBA and other graduate students.
“At MIT, there are a lot of opportunities focused on entrepreneurship. But not as many that emphasize how you can do something for the environment or your community,” says Dagadu. IDEAS trains technical founders in systems change for social impact and community-engaged innovation.
Dagadu obtained another PKG Fellowship to iterate on Ishara the following summer, and was accepted to the IDEAS incubator a second time. Eventually, she refined her app from a technical solution the community didn’t need to one that connects existing recycling networks to the broader value chain, in ways that are transparent and fair, using a blockchain-enabled buyback center.
“The biggest thing PKG has given me is a way of thinking,” Dagadu explains. “The systems thinking mindset really stays with you. You start to see everything as connected. Technical solutions are not just technical; they have social and economic implications. I find myself applying that in all my classes. Whether I am designing a reactor system or working through a materials problem, I am always asking how this fits into the larger system and who it affects.”
Community-engaged chemical engineering
Dagadu says that “PKG has shaped both how I do research and how I think about it.” She grew to understand the importance of research grounded in local partnerships, and points to her collaboration with Chanja Datti, a recycling company in Nigeria, as a prime example.
“That collaboration has directly informed my research,” says Dagadu. “What started as a PKG-supported exploration has now grown into a full undergraduate-led research project at MIT, supported by D-Lab, focused on one of the hardest questions in recycling: what to do with multilayer plastic waste.”
“This is where my chemical engineering and materials background comes in,” explains Dagadu, who studies how random heteropolymers can stabilize enzymes for plastic degradation through the Alexander-Katz Lab. “Thinking about polymer structure, processing, and what is actually feasible,” is critical to her work on the ground. “But it is also shaped by everything PKG emphasizes. You cannot separate the material from the system it lives in.”
Dagadu also appreciates the personal community she’s developed through her journey at MIT, especially as her venture evolved and her co-founders stepped away. “I went from being part of a strong team of three to building Ishara largely on my own,” she recalls. “That’s when I understood what people mean by entrepreneurship being lonely. The doubt, the weight of decisions — it became very real, very quickly.”
She drew on relationships developed through PKG and the Kuo Sharper Center for Prosperity and Entrepreneurship, where Dagadu is a student fellow, to ground her and remind her of her personal mission. “It’s not just about having a team,” she realized. “It’s about having a community that can hold you through the moments when things fall apart.”
The PKG Center’s assistant dean, Alison Hynd, who supported Dagadu through multiple PKG Fellowships, sees Dagadu’s ability to create community as a tremendous asset: “As a first-year student, she came through the door with an intellectual vision and drive to do this work, but at MIT, she’s found her voice to pull other people into it.”
Same question, different scale
Next year, Dagadu will broaden her community still more, as a Schwarzman Scholar at Tsinghua University in Beijing. While the context of her studies will change, her motivation remains the same as when she entered MIT.
“I want to keep asking the same question that’s shaped so much of my work so far,” she says, “not just how we design better materials, but how we design systems where those materials can actually work. That means zooming out and exploring the policy and economics of material flow.”
Through Ishara, Dagadu’s social enterprise, she’s seen how systems intersect and function on the ground in the case of recycling in Ghana. “Now, I want to understand forces at a much larger scale,” she says, “and I can’t think of a better place to explore this question than in China, the manufacturing hub of the world.”
3Q: Why science is curiosity on a mission
This week, MIT launches a new initiative — titled Science Is Curiosity on a Mission — to make the case for the long-horizon, curiosity-driven science that has powered generations of American innovation. Through stories of scientists pursuing open-ended questions, the project highlights how fundamental discovery research sparks advances in medicine, technology, national security, and economic growth.
MIT News spoke with Alfred Ironside, the Institute’s vice president for communications, about what inspired the effort, what’s at stake for the U.S. research enterprise, and why curiosity remains one of America’s greatest strengths.
Q: What is “Science Is Curiosity on a Mission,” and why launch it now?
A: Science has been under threat for some time now, and public investment in discovery science has been flagging. We want to remind people in Washington and across the country what curiosity-driven science is all about, and why it matters so much in our individual lives and in the life of the country.
Science begins with curiosity — someone asking a question and refusing to let it go. History’s most important discoveries did not begin with a commercial objective or a guaranteed outcome. They began because someone wanted to understand how the world works. Think Ben Franklin and his kite: This drive to discover goes back to the beginnings of the United States.
That’s the story we want to tell, but in today’s terms. We’re spotlighting researchers whose years-long pursuit of core questions has seeded breakthroughs that have changed lives for the better.
We’re launching this storytelling initiative now because public investment is declining, and in all the debates about funding, what’s gotten lost is an appreciation for the incredible gifts of curiosity-driven discovery science.
Over generations, the United States became the world’s scientific leader by investing in research of this kind, especially at universities, where long-term scientific undertakings have time and space to thrive. In turn, those investments have created an extraordinary pipeline of innovation, the envy of the world.
When public investment in basic science falters, the long-term losses start right away — and cascade. Labs close. Young scientists leave the field. Entire avenues of discovery go unexplored. Those losses are not always immediately visible, but eventually we feel them through what’s missing: treatments that never arrive, industries that never emerge, talent that migrates elsewhere.
Other countries understand this. They’re watching us stumble — and they’re growing their research investments aggressively. America’s scientific leadership has been built over decades — and maintaining it requires similar commitment.
It’s important to note that while this initiative to tell the story of discovery science was sparked at MIT, it is not about MIT. We want to spotlight university-based scientists across the country whose work is critical in advancing discovery, educating talent, and fueling innovation that benefits all of us.
Q: Why emphasize the idea of “curiosity”?
A: We start with curiosity for two reasons. First, it’s a human experience we’ve all had, so everyone can relate to it. Everyone knows the feeling of just wanting to know why something happens or how something works. Second, it’s the essential fuel that drives discovery science.
There’s sometimes a tendency to talk about science in terms of outputs: breakthroughs, startups, commercial applications. Those things matter enormously, but they usually come much later. The beginning is more human. It’s someone wondering why something behaves the way it does, or whether a seemingly impossible problem might have an answer.
Some of the most transformative breakthroughs arose from questions that once appeared disconnected from practical use. MRI technology grew from research on atomic nuclei. The foundations of immunotherapy came from scientists trying to understand how the immune system works. GPS depends on what was once viewed as purely theoretical physics.
Curiosity fuels scientific discovery by pushing people to keep pursuing deep questions because they simply need to know: How does the brain work? How does cancer start? What is the universe made of?
That’s why the second half of the phrase matters: “on a mission.” University researchers are not indulging in idle speculation. They are pursuing knowledge to expand our understanding — and that new knowledge can be the key to startling new solutions.
Universities are uniquely important environments for this work. They bring together people from different disciplines and backgrounds who challenge assumptions and generate new questions. That concentration of talent and openness is extraordinarily productive.
After World War II, the American research university system became one of the most successful engines of discovery in human history. Public investment in university research has helped produce new medicines, computing technologies, communications networks, energy systems, and entire industries that shape modern life.
This effort aims to reconnect all of us with that story.
Q: What’s at stake if the U.S. fails to sustain support for basic research?
A: What’s at stake is not just scientific leadership, but the future pace of American innovation and opportunity.
The innovation pipeline operates across long time horizons. The discoveries powering today’s companies and medical treatments often crystallized 10, 20, or 30 years ago. The breakthroughs that will define the 2040s and 2050s are being explored in laboratories right now.
Basic research is the foundation of that pipeline, and private-sector innovation depends on it. Private investment plays a critical role, but it naturally gravitates toward projects with clearer commercial returns. Public funding supports the earliest, highest-risk stages of inquiry, where outcomes are uncertain but the potential benefit to society is enormous.
If that pipeline dries up, the consequences are stark. Fewer discoveries lead to fewer technologies, startups, and industries. We also risk losing scientific talent to countries that are watching our shifting national priorities — and making larger and more sustained investments in advancing science.
At the same time, there is enormous reason for optimism. The American scientific enterprise remains one of the great achievements of the modern era. It has delivered extraordinary gains in health, prosperity, and quality of life. Millions of people are alive today because of advances rooted in publicly supported research.
This system was built through sustained national commitment across generations. The question now is whether the country will continue investing in curiosity, discovery, and the people pursuing the new knowledge that will allow us to solve the intractable problems of tomorrow.
When curiosity is given room to run, the results can be life-changing for us all.
“I have yet to meet a professor that cares more for their students”
Since joining the faculty of MIT’s Department of Political Science in 2012, F. Daniel Hidalgo, known to many as “Danny,” has built a reputation as both a meticulous quantitative scholar and one of the department’s most generous and steadfast mentors.
A member of the 2025–27 Committed to Caring cohort, Hidalgo is recognized for a style of mentorship that combines intellectual intensity with humility, approachability, and a willingness to show up for students. A quantitative political scientist whose research focuses on elections, democratic accountability, and political behavior in Brazil and Latin America, his scholarship uses statistical and experimental methods to study how institutions shape political outcomes. According to his students, the rigor he brings to his research is matched by an equally strong commitment to the people he mentors.
Hidalgo’s reputation shines through repeatedly in student nominations. One student, reflecting on years of mentorship, aptly summed this up by saying, “I have yet to meet a professor that cares more for their students.”
Showing the mess, not just the map
Most MIT political science PhD students encounter Hidalgo in their first year, when he teaches the department’s quantitative methods sequence. For many, the course is a turning point — an introduction to causal inference and the logic of experimentation that reshapes how they think about political science itself.
While the material is demanding, students describe a classroom that feels captivating, rather than intimidating. Even during the height of Covid-19-era Zoom courses, one student reflected on the ways in which Hidalgo “made the class engaging and interesting,” injecting energy into even the most complex statistical concepts. “It is no surprise that for many of us, the final papers we wrote for this class laid the foundation … for our subsequent research trajectories,” the student added.
Hidalgo’s approach to mentorship begins with demystifying research by exposing the process behind final products. If he had to articulate a guiding principle, he says, it would be this: “Show students the mess, not just the map.” Graduate students too often see only the polished journal article, not the abandoned drafts, failed models, or questions that had to be rebuilt from scratch. Hidalgo makes a point of bringing students into that disorganization early, normalizing uncertainty as part of scholarship.
That transparency reshapes both how students conceive of research, and how they intentionally practice it. As one student explained, Hidalgo’s mentorship creates “a space where we can share even our messiest ideas,” knowing they will be met with thoughtful feedback rather than judgment. His classroom and office are often described as rare environments where rigor and creativity coexist without fear.
A boundless capacity for mentorship
It is no secret within the department that Hidalgo advises a large number of students, providing one-on-one mentorship in addition to leading a growing research group. Despite this, students consistently describe weekly meetings where he gives their work his full attention. He reads drafts carefully and responds with detailed, constructive feedback, whether on a fellowship application, a conference paper, or a dissertation chapter.
Hidalgo’s mentorship is not confined to his formal advisees. Students who are not on his committee can still rely on him for advice on quantitative methods, knowing that he will make time for them. Over time, this has earned him a department-wide reputation as approachable, steady, and kind.
His advisees’ research spans the discipline: business politics in China, applied machine learning, nationalism in Europe, and electoral politics in Latin America. As one student put it, mentees are “united not by a single topic, but by [Hidalgo’s] generous and inclusive mentorship.” Although his own scholarship centers on Brazil and Latin America, students say he tackles every project with genuine curiosity and intellectual investment, connecting them to literature they might never have encountered and sharpening their arguments’ credibility.
At an institution where quantitative research is often the default, Hidalgo encourages methodological grounding that goes beyond the dataset. He pushes students to immerse themselves in the contexts they study: spend time in the field, talk to people, and absorb local political realities. Immersion, he argues, does not replace rigorous analysis — it sharpens it.
Building community in a solitary profession
Dissertation work can be isolating. In response, Hidalgo has launched a biweekly research group for his mentees. The group, now more than 10 students strong, meets throughout the semester to workshop ideas at any stage of development.
Students describe it as a rare low-stakes space where early drafts are welcome and half-formed ideas encouraged. Discussions are intellectually demanding, but never hostile. The diversity of projects — across regions, methods, and topics — broadens everyone’s perspective.
Hidalgo’s care for his students also emerges in small but meaningful ways. He brings snacks to meetings, organizes informal gatherings, and creates opportunities for connection beyond formal advising. During the isolation of the Covid-19 pandemic, he engaged students through reading groups and small gatherings. When visiting scholars arrive, he folds them in. When global or personal events weigh heavily, he checks in.
One student recalled the morning after a deeply contentious U.S. presidential election. Rather than proceed as usual, Hidalgo canceled class and invited students to gather in his office. There were pastries and a space to talk — “a small, deeply touching gesture” that made an anxious day more bearable.
Standing by students in moments of uncertainty
Several nominations speak not only to academic mentorship, but to Hidalgo’s response during moments of personal and professional difficulty.
One advisee described hitting a breaking point in their fourth year: stalled research ideas, a failed fieldwork trip, deteriorating mental health, and a departmental warning about insufficient progress. Rather than stepping back, Hidalgo leaned in — helping generate new project ideas, structuring attainable plans, and encouraging another attempt at fieldwork, which ultimately proved successful.
Another student, pursuing an unconventional joint program bridging political science and statistics, described feeling academically isolated. Recognizing that need, Hidalgo helped create a reading group aligned with the student’s interests and encouraged collaboration across departments. As the student recalled, he “[put] the maximum trust in me to make decisions while always giving me the strong feeling that he [had] my back.”
When students choose paths outside academia, Hidalgo is equally supportive — encouraging them to align their research and professional development with their goals, without diminishing the value of their work.
His mentorship leaves a lasting imprint not only on students’ research, but on how they understand what it means to support others in turn. Across these experiences, a consistent theme emerges: Hidalgo challenges students to meet high standards while ensuring they never navigate those expectations alone.
Elazer Edelman receives the 2026-2027 Killian Award
Elazer R. Edelman ’78, SM ’79, PhD ’84, an engineer and cardiologist who helped develop cardiovascular stents that have been used by more than 100 million people, has been named the recipient of the 2026-2027 James R. Killian Jr. Faculty Achievement Award.
The award committee recognized Edelman, the Edward J. Poitras Professor in Medical Engineering at MIT’s Institute for Medical Engineering and Science, for his work at the interface of engineering, science, and medicine. In addition to his work on stents, he has made significant contributions to tissue engineering and to deciphering the fundamental biological processes underlying cardiovascular disease.
A member of the MIT faculty for more than 30 years, Edelman is renowned as a teacher and mentor. He is also a professor of medicine at Harvard Medical School and a critical care cardiologist at Brigham and Women’s Hospital, and he served as director of MIT’s Institute for Medical Engineering and Science from 2018 to 2024.
“He is a clinician of the highest order who has touched the lives of many, a teacher of greatest passion who has mentored hundreds and taught thousands, and an engineer whose work has reached around the globe,” states the award citation, which was presented at today’s faculty meeting by Xuanhe Zhao, chair of the Killian Award Selection Committee and a professor of mechanical engineering at MIT.
The Killian Award was established in 1971 to recognize outstanding professional contributions by MIT faculty members. It is the highest honor that the faculty can give to one of its members.
“It’s deeply meaningful that your colleagues think enough of you to want to recognize your life’s work. This is an incredibly awe-inspiring group, and for them to feel that way is a truly special honor,” Edelman told MIT News after learning that he had been selected for the award.
Edelman, who grew up in Brookline, Massachusetts, got his first MIT experience as a high school student, taking classes as part of the Institute’s High School Studies Program. That experience led him to apply to MIT, where he earned two bachelor’s degrees, in applied biology and electrical engineering and computer science, followed by a master’s in bioelectrical engineering and a PhD in medical engineering and medical physics. He also earned an MD from Harvard Medical School through the Harvard-MIT Program in Health Sciences and Technology.
As a graduate student, Edelman was one of the first students to join the lab of Robert Langer, the David H. Koch Institute Professor at MIT. Working with Langer, he developed mathematical approaches to guide the design of controlled drug-delivery systems.
“Bob opened my eyes to what it really means to use MIT science to make the world a better place,” Edelman says.
Early in his career, Edelman brought a scientist’s eye to one of medicine’s most urgent clinical challenges: how to address diseased blood vessels without provoking further injury. His studies of the cellular and molecular mechanisms of atherosclerosis and vascular healing — work that continues to this day — coupled with fundamental insights from engineering and physics, helped enable the optimization of bare-metal stents and the development of drug-eluting stents.
Roughly 90 percent of the more than 100 million stents implanted worldwide now release drugs through principles his work helped define and advance, saving countless lives and improving quality of life for patients around the globe.
Edelman’s work reflects a continuing cycle of discovery: Basic insights in biology shaped transformative medical technologies, and the challenges posed by those technologies, in turn, continue to push biology, science, technology, and engineering together toward new discoveries and clinical advances.
“His landmark work on the cellular mechanisms underlying atherosclerosis and on the biology of cell-material interfaces established the scientific foundations that transformed bare-metal cardiovascular stents from a promising mechanical concept into a biologically informed and clinically transformative therapy with enduring legacy — paving the way for a cascade of innovations that changed the landscape of medicine,” the award committee wrote.
More recently, Edelman’s lab has designed novel heart valves and other innovative approaches to mechanical organ support.
During his tenure as the director of IMES, he led an MIT-wide effort to provide personal protective equipment to health care workers and emergency responders in the early stages of the Covid-19 pandemic.
“One of the things I’m most proud of is working with many people at MIT in the Covid response. At the height of Covid, we were supplying 23 percent of all PPE throughout New England,” he says. “Every single person who could possibly contribute contributed.”
As director of MIT’s Center for Clinical Translational Research and faculty lead for the Hood Pediatric Innovation Hub, he is now working to help clinical research thrive at MIT and to address the inequities in technology access for society’s most vulnerable population — children.
Throughout his career, Edelman has devoted himself to mentoring students and trainees.
“I’m really proud of what our students have accomplished, not only scientifically, but on a personal level, and not only with me, but everything they’ve done afterwards. The greatness of a place like MIT is that you enable people to grow beyond their potential. That’s really the extraordinary thing about our community,” he says.
In recognition of his scientific achievements, Edelman has been elected a fellow of the American College of Cardiology, the American Heart Association, the Association of University Cardiologists, the American Society of Clinical Investigation, the American Institute of Medical and Biological Engineering, the American Academy of Arts and Sciences, the National Academy of Inventors, the Institute of Medicine/National Academy of Medicine, and the National Academy of Engineering.
“The Selection Committee is delighted to have this opportunity to honor Professor Elazer Edelman for his exceptional contributions to medical engineering and science, to MIT, and to the world,” the award citation concludes.
MIT chemists discover and isolate a new boron-oxygen molecule
Oxygen is a cornerstone of chemistry, largely because it is so good at building the organic molecules that make up our world. Some oxygen-based compounds, called peroxides, are famous for being highly reactive — they act like oxygen delivery trucks, transferring atoms to other molecules. This process is essential for everything from creating new medicines to industrial manufacturing.
In an open-access study published April 24 in Nature Chemistry, researchers from the labs of MIT professors Christopher C. Cummins and Robert J. Gilliard, Jr. have revealed a brand-new type of peroxide containing boron. This molecule, called a dioxaborirane, represents a major advance in a field where such structures were long proposed but considered too unstable to actually isolate.
Room-temperature breakthrough
Dioxaborirane forms when a specially engineered boron molecule reacts with oxygen gas. What makes this discovery remarkable is that the reaction happens almost instantly at room temperature. Usually, creating strained oxygen-containing rings like this requires extreme, “punishing” conditions — like freezing temperatures or high pressure — to keep the molecule from falling apart.
Using advanced tools such as crystallography and computational modeling, the team proved the existence of a highly strained, three-membered ring made of one boron and two oxygen atoms.
A molecule with two personalities
The most exciting part of the discovery is how the molecule behaves. Depending on its electrical charge, it acts in two very different ways:
- The builder: It can donate oxygen atoms to help construct new chemical compounds.
- The trapper: It can react with carbon dioxide, potentially offering a new way to capture and transform greenhouse gases.
“By showing that these compounds can be generated under mild conditions, our work opens the door to entirely new types of chemistry,” says Chonghe Zhang, the first author of the paper and an MIT chemistry graduate student co-advised by Cummins and Gilliard. “In the long term, these findings could provide us with powerful new tools for oxidation reactions in synthesis and materials science.”
Additional co-authors on the paper are Noah D. McMillion and Chun-Lin Deng of MIT and Junyi Wang of Baylor University. The work was funded, in part, by the U.S. National Science Foundation.
Researchers “reprogram” materials by quickly rearranging their atoms
It’s been 37 years since scientists first demonstrated the ability to move single atoms, suggesting the possibility of designing materials atom by atom to customize their properties. Today there are several techniques that allow researchers to move individual atoms in order to give materials exotic quantum properties and improve our understanding of quantum behavior.
But existing techniques can only move atoms across the surface of materials in two dimensions. Most also require painstakingly slow processes and high-vacuum, ultracold lab conditions.
Now a team of researchers at MIT, the Department of Energy’s Oak Ridge National Laboratory, and other institutions has created a way to precisely move tens of thousands of individual atoms within a material in minutes at room temperature. The approach uses a set of algorithms to carefully position an electron beam at specific locations of a material, then scan the beam to drive atomic motions.
“The results demonstrate the ability to deterministically move atoms repeatedly within a material’s 3D atomic lattice,” says MIT Research Scientist Julian Klein, who conceived of and directed the project. “We can reprogram materials to create defects at will, realizing entirely artificial states of matter not found in nature with a wide range of potential applications, including sensing, optical, and magnetic technologies. There are so many opportunities enabled by these techniques.”
“It’s like a photocopier that can create columns of identical atomic defects,” says Frances Ross, MIT’s TDK Professor in Materials Science and Engineering. “It’s especially useful because you can move a few atoms to form defects, and do it again and again to build atomic arrangements in three dimensions that have tunable functions in a system that is more robust because the defects exist beneath the surface.”
In a Nature paper appearing today, the researchers described their approach and how they used it to create more than 40,000 quantum defects in a crystalline semiconductor material.
The researchers say the approach offers a new way to study quantum behavior in materials. It could also one day lead to improvements in systems that leverage quantum defects, like quantum computers, dense magnetic memory, atomic-scale logic devices, and more.
Joining Klein and Ross on the paper are Kevin Roccapriore and Andrew Lupini, researchers at Oak Ridge National Laboratory; Mads Weile, a former MIT visiting student; Sergii Grytsiuk, a former Radboud University researcher; Malte Rösner, a professor at Bielefeld University in Germany; Zdenek Sofer, a professor at the University of Chemistry and Technology Prague in the Czech Republic; Dimitar Pashov, a research associate at King’s College London; and Mark van Schilfgaarde and Swagata Acharya, researchers at the National Renewable Energy Laboratory.
Designing matter
In a now-famous 1989 demonstration, IBM researchers used a scanning tunneling microscope to arrange 35 atoms on the surface of a chilled crystal to spell out “IBM.” It was the first time atoms had been precisely positioned, and an important milestone. The approach enabled scientists to engineer specific defects, such as atom-sized vacancies and surface atoms in crystalline materials, leading to major advances in quantum science. But placing those 35 atoms had taken researchers many hours, if not days.
In parallel with those developments, researchers also developed two additional approaches for manipulating atoms in a vacuum, using optical tweezers to trap neutral atoms and oscillating electric fields to trap ions.
While those approaches have enabled remarkable progress, they remain limited to either surfaces or highly controlled experimental systems. Another factor limiting the design of materials for applications such as quantum computers is the inability of atomic manipulation techniques to move atoms in three dimensions: The patterns are created on the surface of a material, where they are exposed to the environment and cannot survive outside tightly controlled laboratory settings.
Engineering usable materials with custom quantum properties would require researchers to rearrange many more atoms, preferably on the interior of materials. The MIT researchers demonstrated that capability in their Nature study.
“We were trying to improve the number of atoms we could move in a reasonable length of time,” Ross explains. “You want to place the atoms close to each other so they can interact, and you want to have a lot of them arranged as you’d like — thousands or millions of atoms in specific locations you’ve chosen. That’s been challenging with existing techniques.”
The researchers used high-performance microscopes at the Department of Energy’s Oak Ridge National Laboratory for their work. Their new technique uses a sophisticated set of algorithms to direct an electron beam at a target atom with a precision of a few picometers (a picometer is one-trillionth of a meter). The beam traces a tight loop to zero in on its target, then follows a carefully designed oscillating path through the material, spending about a second at each location.
“We developed algorithms that allow us to quickly obtain information on where the beam is in the material,” Klein explains. “The trick is to use very few electrons in the process of getting that information, so the whole process is fast and does not unintentionally damage your crystal. It took many years to develop these algorithms and determine the minimum required information needed to infer where the atoms are located with the highest precision.”
The motion of the beam as it delivers electrons, an oscillating path devised by the researchers, pushes entire columns of atoms to new locations the way you might swipe a screen on your phone.
In their experiments, the researchers used this approach to direct the movement of columns of chromium atoms in a stable semiconductor material, chromium sulfide bromide, using a crystal about 13 nanometers thick. The beam created atom-sized vacancies in the material, each paired with its displaced atom, and the researchers calculated that these defects would give the crystal exotic quantum properties.
To show how well their approach scaled, the researchers created over 40,000 defects in about 40 minutes, creating vacancies and interstitials across different distances and in different patterns, calculating that different atomic arrangements should give rise to different quantum mechanical properties.
“Each of these defects has certain ways to interact with its neighbors,” Ross says. “If you place them in a pattern, you could essentially simulate the interactions between the electrons within a molecule, so the whole electronic structure of that molecule can, in a sense, be mapped onto a pattern that you can write into a solid material.”
Probing quantum systems
The success of the approach was likely aided by the way chromium binds within the semiconductor, which has a unique electronic structure. The researchers are further investigating other crystals in which this might work, though they suspect it will be applicable to a diverse range of materials.
In the materials where it works, the approach has several advantages over existing techniques.
“Moving atoms within solids enables the creation of quantum properties in materials that are stable in the air outside of vacuum conditions,” Klein explains. “And this approach is also scalable to many atomic manipulations, so moving thousands or millions of atoms to create artificial structures would represent completely new physics. We’d like to study those systems.”
The researchers say their technique lays the foundation for a new class of programmable matter, which could aid the development of a range of stable quantum devices.
“This is a way of accessing physical phenomena that involve a lot of atoms placed in a certain specified arrangement, and can’t be done by self-assembly,” Ross says. “You can create individually tuned atomic arrangements, and you can have so many of them, each arranged exactly how you like over areas that are tens and hundreds of nanometers. That leads to collective physics we are excited to explore.”
The work was supported, in part, by the Department of Energy and the National Science Foundation.
A new approach to cancer vaccination yields more powerful T cells
MIT engineers have developed a new way to amplify the T-cell response to mRNA vaccines — an advance that could lead to much more powerful cancer vaccines and stronger protection against infectious diseases.
Most vaccines generate both antibodies and T cells that can target the vaccine antigen by activating antigen-presenting cells, such as dendritic cells. In this study, the researchers boosted the T-cell response with a new type of vaccine adjuvant (a material that can help stimulate the immune system). The new adjuvant consists of mRNA molecules encoding genes that turn on immune signaling pathways and promote a supercharged T-cell response.
In studies in mice, this mRNA-encoded adjuvant enabled the immune system to completely eradicate most tumors, either on its own or delivered along with a tumor antigen. The adjuvant also boosted the T-cell response to vaccines against influenza and Covid-19.
“When these adjuvant mRNAs are included in the vaccines, the number of antigen-targeted T cells is substantially increased. These T cells play an important role in the immune response, assisting in the clearance of virally infected cells or, in the case of cancer, killing cancerous cells,” says Daniel Anderson, a professor in MIT’s Department of Chemical Engineering and a member of MIT’s Koch Institute for Integrative Cancer Research and Institute for Medical Engineering and Science.
Anderson and Christopher Garris, an assistant professor at Harvard Medical School and Massachusetts General Hospital, are the senior authors of the study, which appears today in Nature Biotechnology. The paper’s lead authors are Akash Gupta, a former Koch Institute research scientist who is now an assistant professor at the University of Houston; Kaelan Reed, an MIT graduate student; and Riddha Das, a research fellow at Harvard Medical School and MGH. Robert Langer, the David H. Koch Institute Professor at MIT, and Ralph Weissleder, a professor of radiology and systems biology at MGH and Harvard Medical School, are also authors.
More powerful vaccines
Vaccines that stimulate the body’s immune system to attack tumors have shown promise in clinical trials, and a handful have been FDA-approved for certain cancers. In some patients, these vaccines stimulate a strong response, but in others, a weak response fails to kill the cancerous cells.
The MIT-MGH team wanted to find a way to make those immune responses more powerful. One way to do that is to deliver immune-stimulating molecules called cytokines along with a vaccine. However, cytokines can overstimulate the immune system, leading to potentially severe side effects.
As an alternative approach, the researchers decided to deliver mRNA strands encoding two genes, IRF8 and NIK, which are involved in antigen presentation and can switch immune cells into a more active state.
NIK is an enzyme that activates a signaling pathway involved in immunity and inflammation, while IRF8 is a transcription factor that helps program dendritic cells, particularly a subset called cDC1, which are especially effective at activating T cells. These antigen-presenting cells can digest foreign antigens and present them to T cells, stimulating the T cells to mount an immune response against the antigen.
“We see that the dendritic cells start shifting toward a more cDC1 phenotype, which is the most important dendritic cell phenotype and can generate a stronger T-cell response,” Gupta says.
The researchers packaged the mRNA in lipid nanoparticles similar to those used to deliver mRNA Covid vaccines, but with a different chemical composition that promotes their delivery to the spleen after being injected intravenously.
Inside the spleen, the particles encounter antigen-presenting cells, including dendritic cells. Within 24 hours, these cells begin expressing IRF8 and NIK, and both of these pathways help drive dendritic cells to mature and become activated so that they can prime an anti-tumor response.
Over a few days to a week, the T-cell population expands. These T cells, along with other immune cells such as natural killer (NK) cells, can then recognize and attack tumors.
“Most cancer immunotherapies rely on external signals to activate immune cells. We take a different approach — reprogramming immune cells from within by targeting their internal signaling machinery, enabling a more potent and durable anti-tumor response,” Das says.
Stronger T cells
The researchers tested the immune-remodeling mRNAs in several mouse models of cancer, including an aggressive bladder cancer, colon carcinoma, melanoma, and metastatic lung cancer. In nearly all of these mice, the injected mRNA stimulated a strong T-cell response that significantly slowed tumor growth and in many cases completely eradicated the tumors. This happened even when the mice were not given a vaccine against a specific cancer antigen. When they were, the response was even stronger.
“We showed that you can get an anti-cancer response with these adjuvants without including the antigen, just by activating the immune system. However, cancer-specific antigens with the adjuvants in a vaccine further improved the responses,” Anderson says.
The mRNA adjuvant also enhanced the immune response to immunotherapy drugs called checkpoint blockade inhibitors. These drugs, which work by lifting a brake that tumor cells put on T cells, are FDA-approved to treat several kinds of cancer. These drugs don’t work for all patients, but combining them with the mRNA vaccine adjuvant could offer a way to make them more effective, the researchers say.
“The microenvironment of solid tumors is often hostile to T cells and represents a major barrier to effective immunotherapy. We find that immune remodeling with these adjuvants creates a T cell–permissive environment and promotes tumor rejection,” Garris says.
The researchers also explored whether their new adjuvant could boost the immune response to vaccination against viral infection. When they delivered the mRNA particles along with Covid or flu vaccines, they found that the vaccine generated a 10-to-15-fold stronger T-cell response in the mice.
The researchers now plan to test this approach in additional animal models, in hopes of developing it for use in both cancer and infectious diseases.
“While there are differences between the mouse systems that we’ve worked in and humans, we are optimistic that these adjuvants will work in humans and could improve a range of different vaccines,” Anderson says.
The research was funded by Sanofi, the National Institutes of Health, the Marble Center for Cancer Nanomedicine, and the Koch Institute Support (core) Grant from the National Cancer Institute.
3 Questions: Shedding light on why power grids go dark
On April 28, 2025, the power grid serving continental Spain and Portugal went down, causing gridlock in cities, cutting communications networks, and stranding people on trains, in airports, and in elevators all across the Iberian peninsula and briefly in a small area in southwest France close to the Spanish border. The unprecedented, massive blackout lasted as long as 12 hours in some areas, including in the capital city, Madrid. Not surprisingly, placing blame for the outage was rapid. Quick reactions pointed to cyberattack, sabotage, and natural phenomena such as solar flares.
But such theories were quickly laid to rest, and a panel of experts was formed to determine exactly what caused the blackout. A year after the outage — and after much analysis by many experts — there still isn’t a simple answer: In short, no one technology was to blame. While solar and wind generation was high, experts agree that the renewables weren’t at fault.
In this Q&A, Pablo Duenas-Martinez, a research scientist at the MIT Energy Initiative and an assistant professor at Universidad Pontificia Comillas in Madrid, provides an update.
Q: How does a proper, well-functioning power grid behave, and what does the system operator do to help?
A: There are two components to the flows on a power grid. One is “active power” — the part that lights up our light bulbs and runs our engines. With active power, the demand on the grid must always equal supply. The other component is “reactive power,” the part we can’t see but controls the voltage at which the power is delivered so it suits our devices. If voltage is too low, lights will flicker. If voltage is too high, devices may not only fail to work, but be damaged beyond repair.
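The relationship between the two components can be made concrete with a little complex arithmetic: if S = V·I* is the complex power, its real part is the active power P and its imaginary part is the reactive power Q. Here is a minimal Python sketch using made-up illustrative numbers; none of these values come from the article:

```python
import cmath
import math

# Illustrative values only (not from the article): a 230 V RMS supply
# driving a load whose current lags the voltage by 30 degrees.
V = 230.0                 # RMS voltage, volts
I = 10.0                  # RMS current, amps
phi = math.radians(30)    # phase angle between voltage and current

# Complex power S = V * I * e^{j*phi}: the real part is active power
# (the part that lights bulbs and runs engines), the imaginary part is
# reactive power (the part that sets the delivered voltage).
S = V * I * cmath.exp(1j * phi)
P = S.real                # active power, watts
Q = S.imag                # reactive power, volt-amperes reactive (var)

print(f"active power   P = {P:.1f} W")
print(f"reactive power Q = {Q:.1f} var")
```

A generator that injects reactive power supplies positive Q to the grid, raising local voltage; absorbing reactive power lowers it, which is the lever the TSO pulls throughout the account that follows.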
The operator of the transmission system — the TSO — must control both components, and that can be tricky. Active power supply and demand are largely coordinated through markets. But controlling reactive power is harder. The main way the TSO can control it is to call on operators of conventional power generators — that is, generators burning natural gas or coal, or nuclear plants. Those systems can be adjusted to either absorb or inject reactive power as needed to control voltage on the power grid — indeed, they are typically required by law to provide “reactive power control.”
In contrast, solar and wind generators always absorb reactive power. The large solar and wind sources can provide reactive power control when it’s needed, but doing so is costly for them — and in Spain, unlike in most countries, it’s not mandated by law, so they typically don’t do it. Meanwhile, there are many small solar systems — imagine lots of rooftop solar installations and small solar farms. Those small systems are directly connected to the distribution system. As a result, they’re not controlled by the TSO; the TSO may not even know whether they’ve shut down or are still running and absorbing reactive power.
Sometimes, fluctuations in voltage called “oscillations” can happen on a power grid: for example, when a transmission line or a generator is connected or disconnected. Oscillations can increase and decrease the voltage rapidly, and if voltage gets too high, generators and user devices can start “tripping” — that is, automatically disconnecting to prevent being damaged. Operators have standard protocols to follow to bring oscillations under control.
Q: So what happened on April 28 of last year?
A: The Spanish grid is loosely connected to the French grid and in practice is merged with the grid serving Portugal. Within Spain, we have many large solar and wind farms and lots of small installations of solar systems, many located in the southwestern area of the country. On April 28 — as on most spring days, when demand is low — about two-thirds of the power on the grid came from renewable sources. The rest came from a mix of nuclear and natural gas plants.
The day before the blackout, the TSO confirmed that there were no conventional generators scheduled to run. So, to ensure safe operation the next day, the TSO took steps that included dispatching 12 conventional generators, 10 of them to provide reactive power control. One of the units in the south called him back and said, “I won’t be available. I cannot switch on tomorrow.” The TSO thought he had things under control and continued operations with only nine units available to provide reactive power control.
During the morning on April 28, several small oscillations on the power grid were detected coming from Europe, plus one from Spain. To stabilize the weakened grid, the TSO connected additional transmission lines and took other technical actions.
At 12:19 p.m., a major oscillation was detected on the grid, again coming from Europe. In response, the TSO — again following standard protocol — reduced exports to Portugal, switched the flows to France from alternating current to direct current, and connected five more transmission lines within Spain. While those steps stabilized the voltage, the TSO recognized that there was now limited capacity on the system to control voltage. So, he called on a different conventional generator to begin running. But that unit couldn’t be available for an hour.
Suddenly, as a consequence of the previous actions, the voltage increased dramatically, and generating units began to trip. Within half a second, many of the small solar generators — especially prone to damage from high voltages — automatically shut down. Twenty milliseconds later, a big solar plant in southwestern Spain tripped. Because the solar plants were no longer absorbing reactive power, voltage on the system went up even more, and more systems shut down. The grid went into what some have called a death spiral, resulting in a total blackout across the Iberian peninsula and some areas of southern France.
Q: What have we learned from this Iberian blackout, and have changes been implemented to ensure that the same won’t happen again — or happen elsewhere?
A: A resilient power system must prevent, mitigate, respond, and recover. In this case, the first three components clearly failed. Preventive mechanisms were insufficient; they initially mitigated the oscillatory events, but left the system in a weakened state, and the response triggered the death spiral that led to the final blackout.
The good news is that the recovery was quick. The northern and southern sections of the peninsula had power back within a few hours. I live in the suburbs of Madrid, and I had power back just six hours later. My parents live downtown, so that was far more challenging — a big city with a large, complex load. Even so, they had power back in 12 hours — and 12 hours is quick for such a major, widespread blackout.
In the end, experts and analysts have agreed that the blackout was caused by a series of events that were all happening in the same place, at the same time. And the experience did provide a number of valuable learnings:
Lesson 1
The experience clearly demonstrated the importance of having a sufficient number of conventional power plants prepared to provide reactive power control, or to turn on right away when called on. There’s a recommendation calling for a set ratio between conventional generators and renewables on a power grid. Conventional facilities such as nuclear, hydroelectric, and fossil fuel plants rely on heavy metal wheels to generate electricity. Those massive rotating wheels have high inertia, so they’ll keep running and can help stabilize frequency and voltage even when solar and wind plants shut down. Before the blackout, Spain had a sufficient number of “rotating units” to meet the recommended ratio. However, in southern Spain, there was just one such unit — well below the recommended number, given the huge number of small solar units plus several large solar units in the area.
The message here is that you can't just look at the country as a whole. You have to look at regions. Voltage is a local problem that can propagate at the system level. Before the blackout, southern Spain typically had at most three conventional power plants. Now the region usually has six or seven at the ready to help with reactive power control.
Lesson 2
The rules or protocols for controlling reactive power and dealing with oscillations were not well designed. By law, rotating generators must automatically — and without being paid — provide a defined amount of reactive power control. But making the needed operational change costs money, and a plant can do less than the required amount without incurring any kind of penalty. As a result, the TSO doesn’t know in advance how much reactive power control a given plant will actually provide. That loophole in the law has now been reviewed by the regulator.
The main rules have been updated, and now also require large solar and wind power plants — those above 5 megawatts — to provide reactive power control. More importantly, voltage control will be auctioned and remunerated, incentivizing rotating conventional generators and bringing in a new money stream for solar and wind power plants. Those power plants that do not upgrade their installation for voltage control might be disconnected by the TSO if local voltage issues arise.
Lesson 3
Another learning concerns the many small solar power generators and the protections that cause them to trip. The TSO doesn’t know in advance when this may happen because the small solar sources are directly connected to the distribution system, and therefore are under the umbrella of the distribution system operator. So, the learning here is that there should be more communication and coordination between the operator of the transmission system — the TSO — and the operator of the distribution system.
Lesson 4
In most countries, laws dictate a range of voltage that is approved. In Spain, the upper limit is high — in fact, it’s very near a voltage at which equipment may be damaged. And the Spanish grid tends to hover close to that upper limit, even during normal operation, and that can be a big problem: If there are strong oscillations — as there were leading up to the blackout — voltage can reach that upper limit, and protections on devices will automatically trip. The panel of experts has strongly recommended to lower this upper limit in Spain and align it with the rules in neighboring countries, including Portugal and France. The TSO is still studying the recommended change.
Lesson 5
During normal operation, the TSO controls voltage by activating rotating generators that can provide reactive power control. But as we saw in conditions leading up to the blackout, the TSO doesn’t always have rotating generators available.
Theoretically, TSOs have two more ways to control voltage. They can connect a device called a shunt reactor, which absorbs reactive power — a means of dealing with voltage rise. And they can regulate voltage directly using a “STATCOM,” a special device that provides rapid, dynamic voltage control.
However, neither the shunt reactors nor the STATCOM could help prevent the blackout. The shunt reactors available at that time were operated manually, and collapse of the grid happened so quickly that the TSO didn’t have time to connect them. And at that time, there was a single STATCOM device on the Spanish system. Planning was under way to install three more devices — and that installation is being rapidly completed.
From newspaper articles and off-the-record conversations, I’ve learned that the system has — due to similar external circumstances — been close to blackout again during the past year. But in part due to the learnings and to changes that have been implemented as a result, it didn’t happen again.
A new unit of measurement to honor an influential MIT alumnus
The hallowed history of student pranks (often known as hacks) at MIT includes the annual Baker House Piano Drop and the MIT weather balloon at the Harvard-Yale football game in 1982. One hack that has shown remarkable staying power in local lore is the 1958 measurement of the Massachusetts Ave. Bridge in “smoots,” a now-accepted unit of measurement named for the 5-foot, 7-inch Oliver R. Smoot Jr. ’62. Then a first-year pledge at the Lambda Chi Alpha fraternity, Smoot famously laid down hundreds of times across the span one storied night as his peers painted markers across the bridge, totaling 364.4 smoots (plus 1 ear). Nearly 70 years later, the smoot markings remain.
On April 4, an MIT team set out on a similar journey across the Charles River to pull off a new hack, this time measuring the Longfellow Bridge in “kleins.” This new measurement is named after Smoot’s classmate Martin Klein ’62. One klein (4 feet, 9.5 inches) is equal to 0.85820896 smoots. The expedition was undertaken in honor of both Smoot and the 85th birthday of Klein.
Known as the father of commercial side-scan sonar, Martin Klein serves on the MIT Sea Grant Advisory Board and the MIT Museum Collections Committee. He is a life fellow of both the Marine Technology Society and the Explorers Club, an international organization dedicated to the advancement of field exploration and scientific inquiry. His sonar technology has been used worldwide to help locate countless famous shipwrecks, including the Titanic, the World War I ocean liner RMS Lusitania, and the treasure-laden Nuestra Señora de Atocha.
Appropriately, the MIT team used a “side-scan” method to survey the Longfellow Bridge. Reclined on a custom-engineered wooden cart topped with a mission-specific chaise lounge pillow, Klein himself acted as the official observation device — by looking to the sides — as the team pulled him along the bridge. Some of the noted anomalies and discoveries included a Duck Boat passing underneath, a mermaid tail, a kayak paddle, a sleeping goose, and a tenacious survey team.
The initiative was spearheaded by Makenna Reilly, a second-year undergraduate in mechanical engineering, and Andrew Bennett ’85, PhD ’97, MIT Sea Grant education administrator and senior lecturer in the Department of Mechanical Engineering (MechE). Over a dozen surveyors joined the expedition, including alumni, faculty, and staff from MechE, MIT Sea Grant, MIT Edgerton Center, MIT Museum Hart Nautical Collections, Harvard Extension School, and Woods Hole Oceanographic Institution. MIT students also joined the effort, including senior Teagan Sullivan, junior Adrienne Lai, and graduate students Ansel Garcia-Langley, Erin Menezes, Manuel Valencia, and Gerardo Berlanga Molina.
The Longfellow Bridge was determined to be 442 kleins (plus 2 legs) and was celebrated as the “Shortfellow Bridge” in a ceremony following the event.
One klein = 57.5 inches = 146.05 centimeters = 1.4605 meters = 0.0009075126 miles = 1.597222 yards = 4.791667 feet = 0.0007886069 nautical miles = 0.007260087 furlongs = 0.7986111 fathoms = 172.5 barleycorns = 292,100,000 beard seconds = 647.4421 lignes = 14.375 horse hands = 4.819655 shaku = 0.85820896 smoots.
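Every entry in that chain follows from two definitions the article already gives: one klein is 57.5 inches (4 feet, 9.5 inches) and one smoot is 67 inches (5 feet, 7 inches). A quick Python sanity check of a few of the listed conversions:

```python
# Derive the base units from the heights given in the article.
KLEIN_IN = 4 * 12 + 9.5   # one klein in inches (4 ft 9.5 in) = 57.5
SMOOT_IN = 5 * 12 + 7     # one smoot in inches (5 ft 7 in)  = 67

# Each pair is (value computed from the definitions, value listed in the article).
checks = {
    "centimeters": (KLEIN_IN * 2.54, 146.05),
    "feet":        (KLEIN_IN / 12, 4.791667),
    "horse hands": (KLEIN_IN / 4, 14.375),      # one hand = 4 inches
    "fathoms":     (KLEIN_IN / 72, 0.7986111),  # one fathom = 6 feet
    "smoots":      (KLEIN_IN / SMOOT_IN, 0.85820896),
}
for unit, (computed, listed) in checks.items():
    assert abs(computed - listed) < 1e-6, unit
print("all klein conversions check out")
```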
Additional participants in the event include:
- Elisabeth (Libby) Meier, assistant curator for the Hart Nautical Collections at the MIT Museum;
- Dana Yoerger, PhD ’82, senior scientist in applied ocean physics and engineering at WHOI;
- Professor George Buckley, assistant director of sustainability at Harvard University Extension School and diver of the year of the Boston Sea Rovers;
- Paul K. Matthias, senior program manager of the Ocean Observatories Initiative at WHOI;
- Jim Bales, associate director of the Edgerton Center at MIT;
- John Freidah of MechE; and
- Joice Himawan ’83.
A new way to spot signs of dark matter
Dark matter is thought to make up most of the matter in the universe, but the only way it interacts with its surroundings is through gravity. If two colliding black holes spiral through a dense region of dark matter and merge, gravitational waves rippling across space and time could carry an imprint of that dark matter.
Now, physicists may be able to spot such imprints of dark matter in gravitational waves that are detected on Earth.
Researchers at MIT and in Europe have developed a method that makes predictions for what a gravitational wave should look like if it were produced by black holes that moved through dark matter, rather than empty space. They applied the technique to publicly available gravitational-wave data previously recorded by LIGO-Virgo-KAGRA (LVK), the global network of observatories that detect gravitational waves from black hole mergers and other far-off astrophysical sources.
The researchers looked through the gravitational-wave signals recorded over the LVK’s first three observing runs. From 28 of the clearest signals, the team found that 27 originated from black holes that merged in a vacuum, as physicists expected. But the pattern of one signal, GW190728, showed possible signs of a dark matter imprint.
The scientists emphasize that they have not detected dark matter. Rather, the new method offers a new way to screen gravitational-wave data for hints of dark matter, which physicists can then follow up and confirm with other techniques.
“We know that dark matter is around us. It just has to be dense enough for us to see its effects,” says Josu Aurrekoetxea, a postdoc in the MIT Department of Physics. “Black holes provide a mechanism to enhance this density, which we can now search for by analyzing the gravitational waves emitted when they merge.”
Aurrekoetxea and his colleagues report their results in a study appearing today in Physical Review Letters. The study’s co-authors are LVK member Soumen Roy of Université Catholique de Louvain (UCLouvain) in Belgium, Rodrigo Vicente of the University of Amsterdam, Katy Clough of Queen Mary University of London, and Pedro Ferreira of Oxford University.
A dark pull
Dark matter is an invisible, hypothetical form of matter that, unlike normal everyday matter, has no interactions with the electromagnetic force. Dark matter can pass through light, magnetic fields, and any other form of energy along the electromagnetic spectrum without leaving a trace. The only evidence that dark matter exists is through its apparent interaction with one other force: gravity.
By observing how light bends around distant galaxies, astronomers have surmised that there must be extra mass, beyond the galaxies’ visible matter, to explain this bending, or “lensing.” That extra mass, physicists suspect, is dark matter, which could account for over 85 percent of the matter in the universe. But exactly what dark matter is remains a matter of huge debate, with candidate particles that range widely in mass and properties.
One class of proposed dark matter consists of “light scalar” particles, whose masses are many orders of magnitude lighter than an electron. Theorists predict that such dark matter should behave not just as particles, but also as coordinated waves when moving near black holes.
When waves of dark matter come in contact with a rapidly spinning black hole, physicists predict that the black hole's rotational energy can be transferred to the dark matter, amplifying it. This phenomenon, known as superradiance, would whip up the waves to extremely high densities of dark matter, akin to churning cream into butter.
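As standard background (the condition itself is not spelled out in the article), superradiant amplification occurs when the wave rotates more slowly than the black hole’s horizon:

```latex
% Standard superradiance condition: a bosonic wave mode with angular
% frequency \omega and azimuthal number m is amplified by a spinning
% black hole when
\omega < m\,\Omega_H ,
% where \Omega_H is the angular frequency of the black hole's horizon.
% The wave extracts rotational energy from the hole, which is what
% drives the density enhancement described in the article.
```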
At high enough densities, light scalar dark matter, which is invisible by all other accounts, should leave an imprint on the gravitational waves that reverberate from the colliding black holes.
But exactly what would that imprint look like? And could such an imprint be detectable in gravitational waves that arrive on Earth, from black holes that merged many millions of light years away?
For answers to those questions, Aurrekoetxea and his colleagues developed a model to predict the gravitational waveform, or the pattern of gravitational waves that two black holes would produce, if they collided in an environment of dark matter, versus in a vacuum (empty space, with no dark matter).
An imprint’s prediction
For their new study, the team performed detailed numerical simulations to predict the gravitational wave that would be produced given various properties of two colliding black holes — a system known as a “black hole binary.” They considered black hole binaries across a range of scenarios and properties, for example, varying the size and mass of each black hole, the environment of dark matter that the black holes might pass through, and the density of the dark matter that the black holes would spin up.
They designed the model to predict what a gravitational wave from a black hole binary would look like if it carried an imprint of dark matter, and furthermore, what that wave would look like if it traveled a given distance across space and time, to eventually arrive at a detector on Earth.
With their model, they looked to see whether any gravitational-wave signals that have been detected on Earth match their predicted patterns of dark matter imprints. To do so, they applied the model to publicly available data recorded by LVK over the observatories’ first three observing runs. The observatories have picked up hundreds of gravitational-wave signals during this period. For their purposes, the researchers focused on the clearest signals, comprising gravitational waves from 28 separate events.
For each event, the team compared the pattern of the actual gravitational wave against their model of what the signal would look like if it were generated by the same event in an environment of dark matter. They also compared the gravitational wave to the more expected scenario in which the signal was produced in a vacuum.
Of the 28 clearest signals that they analyzed, 27 were solidly within the predictions for having been produced in a vacuum. However, the pattern of one event, GW190728, showed a “preference,” or an agreement with the team’s dark matter model. In other words, the signal may carry an imprint of dark matter.
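In LVK-style analyses, a “preference” of this kind is typically quantified by a Bayes factor between the two hypotheses. The toy sketch below is not the authors’ pipeline: it fixes all waveform parameters and uses a synthetic chirp with an invented dephasing term, just to show the comparison logic.

```python
import numpy as np

# Toy model comparison (not the authors' pipeline): score a noisy
# synthetic signal against a "vacuum" waveform and a "dark matter"
# waveform that differs by an invented phase drift, then take the
# log-likelihood difference as a crude log Bayes factor. Real analyses
# marginalize over all waveform parameters rather than fixing them.
rng = np.random.default_rng(0)
sigma = 0.5  # assumed white Gaussian noise level

t = np.linspace(0.0, 1.0, 500)
chirp_phase = 2 * np.pi * (10 + 5 * t) * t  # toy inspiral chirp
drift = t**2                                # stand-in for dark-matter dephasing

# Synthetic "detected" signal: the dark-matter waveform plus noise.
data = np.sin(chirp_phase + drift) + sigma * rng.standard_normal(t.size)

def log_likelihood(model):
    """Gaussian log-likelihood of the residuals under the noise model."""
    residual = data - model
    return -0.5 * np.sum((residual / sigma) ** 2)

log_bayes = (log_likelihood(np.sin(chirp_phase + drift))
             - log_likelihood(np.sin(chirp_phase)))
print(f"log Bayes factor (dark matter vs. vacuum): {log_bayes:.1f}")
```

A clearly positive value favors the dark-matter model; for GW190728 the reported preference was suggestive but, as the authors stress, not statistically strong enough to claim a detection.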
GW190728 is a gravitational wave that is named after the date that it was detected — on July 28, 2019. Scientists previously determined that the gravitational wave originated from a black hole binary with a total mass of about 20 times the mass of the sun. With their model, the team showed that such a system could have merged through a dense cloud of dark matter and produced a similar gravitational wave to GW190728.
“The statistical significance of this is not high enough to claim a detection of dark matter, and further checks should be performed by independent groups,” Aurrekoetxea says. “What we think is important to highlight is that without waveform models like ours, we could be detecting black hole mergers in dark matter environments, but systematically classifying them as having occurred in vacuum.”
“We now have the potential to discover dark matter around black holes as the LVK detectors keep collecting data in the coming years,” says co-author Soumen Roy, who led the data analysis part of the work. “It is an exciting time to search for new physics using gravitational waves.”
“Using black holes to look for dark matter would be fantastic,” adds co-author Rodrigo Vicente, who developed the analytical model of the signal. “We would be able to probe dark matter at scales much smaller than ever before.”
This work was supported, in part, by the U.S. National Science Foundation and MIT’s Center for Theoretical Physics — a Leinweber Institute.
Powerful shrinking technique could enable devices that compute with light
Using a new technique that can create vacancies at any site across a material and then shrink it to about 1/2,000 of its original volume, MIT researchers have designed nanotechnology devices that could be used for optical computing and other applications involving the manipulation of visible light.
The new fabrication technique, known as “implosion carving,” allows researchers to imprint features throughout a hydrogel using photopatterning. If patterned with a resolution of about 800 nanometers, these features can then be shrunk to less than 100 nanometers.
Because that resolution is smaller than the wavelength of light, the devices can bend light in specific ways that allow them to perform optical computations.
“In order to enable nanophotonic applications in visible light, we need to make nanostructures with feature sizes with a resolution less than 100 nanometers. Only in that way can we precisely create the structure that can manipulate visible light,” says Quansan Yang, a former MIT postdoc, now an assistant professor at the University of Washington, and one of the lead authors of the new study.
In their paper, the researchers demonstrated a photonic device that can perform a simple digit-classification task, but future versions could be used for high-speed imaging and information processing, they say.
Gaojie Yang, a former MIT postdoc, is the co-lead author of the paper, which appears today in Nature Photonics. The paper’s senior authors are Peter So, director of the MIT Laser Biomedical Research Center (LBCR) and an MIT professor of biological engineering and mechanical engineering, and Edward Boyden, the Y. Eva Tan Professor in Neurotechnology at MIT and a professor of biological engineering, media arts and sciences, and brain and cognitive sciences. Boyden is also a Howard Hughes Medical Institute investigator and a member of MIT’s McGovern Institute for Brain Research, the Yang Tan Collective, and Koch Institute for Integrative Cancer Research.
Nanoscale feature sizes
Photonic devices, which transmit and manipulate light, hold potential for use as optical computer chips that could offer an energy-efficient alternative to semiconductor chips. However, existing techniques for creating 3D photonic devices haven’t yet achieved the 100-nanometer resolution that is needed to channel visible light, which has wavelengths between 380 and 750 nanometers.
Using an additive manufacturing technique called two-photon lithography, researchers can use light to create 3D nanoscale features, but only at resolutions larger than 100 nanometers. Another technique, known as electron-beam lithography, can etch finer features onto a silicon chip, but it doesn’t generate 3D structures.
To make 3D devices with the necessary feature size, the researchers extended the concept of “implosion fabrication,” which Boyden’s lab developed in 2018, to create a new variant called “implosion carving.” In implosion carving, a laser creates vacancies — tiny voids where the hydrogel material has been removed — at precisely targeted locations. These vacancies exhibit different optical properties than the surrounding hydrogel. The hydrogel is then shrunk to bring the patterned features down to the nanoscale.
The carving process begins with immersing the hydrogel in a photosensitizing dye. Then, the researchers use a laser to excite the photosensitizer at specific places in the gel, which in turn generates reactive oxygen species that cut the bonds holding the hydrogel together. This creates a vacancy in that spot.
Once the desired vacancy pattern has been carved into the hydrogel, the researchers shrink it using a two-step process. First, they soak it in a solution containing ions, which causes it to shrink about tenfold in each dimension. To shrink it a little more, and to remove the watery solution, the hydrogel then undergoes a process called supercritical drying, which can remove liquid from a gel without damaging it.
At the end of the process, the hydrogel has been shrunk more than tenfold in each dimension, leading to a 2,000-fold reduction in volume.
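The two shrink steps compound geometrically, which is worth a quick arithmetic check; the supercritical-drying factor below is an assumed value chosen only to illustrate how the reported numbers fit together.

```python
# Arithmetic check of the shrinkage figures reported above. The ion soak
# gives roughly a tenfold linear shrink; supercritical drying adds a
# further ~26 percent linear shrink (an assumed figure for illustration).
ionic_shrink = 10.0    # linear shrink factor from the ion soak
drying_shrink = 1.26   # additional linear factor (assumed)

linear_factor = ionic_shrink * drying_shrink
volume_factor = linear_factor ** 3  # volume scales as the cube of length

print(f"linear: {linear_factor:.1f}x, volume: {volume_factor:.0f}x")
```

A linear factor of about 12.6 cubes to roughly 2,000, consistent with both the 2,000-fold volume reduction and the 800-nanometer-to-sub-100-nanometer feature shrink described above.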
Computing with light
To demonstrate the versatility of this technique, the researchers used it to create several 3D shapes, including a helix and a structure inspired by a butterfly wing. Some of these structures are too thin, and have too high an aspect ratio, to be stably created using conventional two-photon lithography.
The researchers also created a device that could perform a simple calculation known as digit classification, a task that is traditionally used to test the performance of neural networks. During this task, the device was presented with a digit, such as 1 or 5, and had to light up a specific location to indicate which number was detected.
To achieve this, the researchers patterned vacancies throughout the device so that it would act like a neural network. The pattern of vacancies would diffract input light as it passed through many layers of patterned hydrogel, so that the output light was determined by the shape of the digit that was entered into the system.
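A diffractive network of this kind can be sketched numerically: light propagates between layers, and each patterned layer applies a phase profile. Everything below — the grid size, layer spacing, and random phase values — is a placeholder, not the actual device design.

```python
import numpy as np

# Illustrative sketch of a diffractive optical network (not the actual
# device design): a light field passes through several phase-patterned
# layers separated by free-space propagation, computed here with the
# angular-spectrum method. In the real device the pattern is optimized
# so output light concentrates at a detector spot for each digit class.
rng = np.random.default_rng(1)
N = 64               # simulation grid (pixels per side)
wavelength = 0.5e-6  # 500 nm visible light
dx = 100e-9          # 100 nm feature pitch
dz = 5e-6            # spacing between layers

# Angular-spectrum propagator for one free-space step.
fx = np.fft.fftfreq(N, dx)
FX, FY = np.meshgrid(fx, fx)
arg = (1 / wavelength) ** 2 - FX**2 - FY**2
kz = 2 * np.pi * np.sqrt(np.abs(arg))
# Propagating components acquire a phase; evanescent ones decay away.
propagator = np.where(arg >= 0, np.exp(1j * kz * dz), np.exp(-kz * dz))

def forward(field, phase_masks):
    """Pass a field through alternating phase layers and free space."""
    for phase in phase_masks:
        field = field * np.exp(1j * phase)                     # patterned layer
        field = np.fft.ifft2(np.fft.fft2(field) * propagator)  # free space
    return field

# Input: a crude digit-like amplitude pattern; layers: random stand-in
# phases (a trained network would use optimized patterns instead).
digit = np.zeros((N, N))
digit[20:44, 30:34] = 1.0  # a rough "1"
masks = [rng.uniform(0, 2 * np.pi, (N, N)) for _ in range(3)]

out_intensity = np.abs(forward(digit, masks)) ** 2
print(f"output power: {out_intensity.sum():.1f} (input was {digit.sum():.1f})")
```

With random phases the output is just speckle; the design problem the researchers describe is choosing the vacancy pattern (the phase masks) so the speckle collapses onto the correct detector location for each input digit.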
“This is a purely optical system that effectively performs optical computing,” So says.
“One of the very attractive features of this technology is that you can manipulate the property of the material at every tiny location,” says Dushan Wadduwage, an assistant professor at Old Dominion University and former MIT postdoc, who is also an author of the paper. “You have millions of different locations that you need to decide the property of, and that turns into a really interesting design problem where we can use deep-learning algorithms to find designs over these millions of parameters and come up with parts that go into optical systems in new ways.”
The researchers now plan to use the same principles to build optical devices that could classify cells based on their state as they flow through a microfluidic device. This could help identify rare cells such as circulating tumor cells in a blood sample, they say.
This approach could also enable the creation of high-throughput imaging techniques for applications such as analyzing tissue samples from biopsies or surgical specimens. And, if adapted to work with other materials such as hydrophobic polymers, it could also be used to create channels within 3D nanofluidic devices.
Other authors of the paper include Gaojie Yang, Takahiro Nambara, Hiroyuki Kusaka, Yuichiro Kunai, Alex Matlock, Corban Swain, Brett Pryor, Yannick Salamin, Daniel Oran, Hasindu Kariyawasam, Ramith Hettiarachchi, and Marin Soljacic.
The research was funded, in part, by the MIT-Fujikura Partnership Fund, the U.S. Army Research Office through the Institute for Soldier Nanotechnologies at MIT, Lisa Yang and Y. Eva Tan, John Doerr, the Open Philanthropy Project, the Howard Hughes Medical Institute, and the U.S. National Institutes of Health.
Improving the reliability of circuits for quantum computers
Quantum computers could someday solve pressing problems that are too convoluted for classical computers, such as modeling complex molecular interactions to streamline drug discovery and materials development.
But to build a superconducting quantum computer that is large and resilient enough for real-world applications, scientists must precisely engineer thousands of quantum circuits so they perform operations with the lowest possible error rate.
To help scientists design more predictable circuits, researchers from MIT and Lincoln Laboratory developed a technique to measure a property that can unexpectedly cause a superconducting quantum circuit to deviate from its expected behavior. Their analysis revealed the source of these distortions, known as second-order harmonic corrections, which can lead to underperforming circuit architectures.
The MIT researchers fabricated a device to detect second-order harmonic corrections, identify their origin, and precisely measure their strength. This technique could help scientists deliberately design quantum circuits that can counteract the effects of these deviations.
This is especially important in larger and more complicated quantum circuits, where the negative impact of second-order harmonic corrections can be amplified.
“As we make our quantum computers bigger and we want to have more precise control over the parameters of these devices, identifying and measuring these effects is going to be important for us to have a precise understanding of how these systems are constructed. It is always important to keep diving down into the circuit to see if there is an effect you didn’t expect, which impacts how your device is performing,” says Max Hays, a research scientist in the Engineering Quantum Systems (EQuS) group of the Research Laboratory of Electronics (RLE) and co-lead author of a paper on this research.
Hays is joined on the paper by co-lead author Junghyun Kim, an electrical engineering and computer science (EECS) graduate student in the EQuS group; senior author William D. Oliver, the Henry Ellis Warren (1894) Professor of EECS and professor of physics, leader of the EQuS group, director of the Center for Quantum Engineering, and associate director of RLE; as well as others at MIT and Lincoln Laboratory. The research appears today in Nature Physics.
A pair-wise problem
In a quantum computer that utilizes superconducting circuits, which is one of many potential computing platforms, Josephson junctions are critical elements that enable the transfer and manipulation of information. These devices utilize two superconducting wires that are brought very close together, with a nanometer-scale barrier between them. As in a traditional circuit, the electric charge in Josephson junctions is carried by electrons.
But in a superconducting circuit, charge-carrying electrons pair up, forming what are called Cooper pairs. These Cooper pairs can “quantum tunnel” through the barrier between the two wires, transporting current from one wire to the other.
Usually, only one Cooper pair can tunnel at a time, a key property that makes quantum computation possible.
“If you try to force more Cooper pairs through, it just doesn’t work. This non-linear effect is extremely important for all our circuits. If we didn’t have that effect, then we wouldn’t be able to control or manipulate any quantum information that we store in these circuits,” Hays explains.
But sometimes, Cooper pairs can unexpectedly squeeze through the barrier two at a time, an effect that is known as a second-order harmonic correction. This effect limits the performance of a quantum circuit that has been configured to only allow single-pair tunneling.
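A sketch of what that distortion looks like in the junction’s current-phase relation; the 5 percent second-harmonic strength below is an assumed value for illustration, not a number from the paper.

```python
import numpy as np

# Single Cooper-pair tunneling gives the standard Josephson relation
# I = Ic * sin(phi); two-pair tunneling adds a small sin(2*phi) term,
# the second-order harmonic correction. The relative strength here is
# an assumed placeholder, not a measured value.
Ic = 1.0      # critical current (normalized)
ratio = 0.05  # assumed second-harmonic strength relative to Ic

phi = np.linspace(0.0, 2 * np.pi, 1000)
ideal = Ic * np.sin(phi)
distorted = ideal + ratio * Ic * np.sin(2 * phi)

max_dev = np.max(np.abs(distorted - ideal)) / Ic
print(f"peak deviation from the ideal relation: {max_dev * 100:.1f}% of Ic")
```

A circuit designed around the pure sin(φ) relation inherits this percent-level error, which is why measuring the harmonic’s actual strength matters for predictable designs.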
“If two Cooper pairs tunnel at the same time, then the assumption we used to build our circuit doesn’t apply anymore. We need to fix the circuit so it can handle that,” Kim says.
But before they can fix the circuit, scientists need to know the source and strength of these distortions.
To obtain this information, the MIT researchers fabricated a quantum circuit so it would be very sensitive to these effects. Essentially, the device is designed to suppress the quantum tunneling process of single Cooper pairs, while allowing the two-pair tunneling process to continue.
In this way, they can detect the presence of second-order harmonic corrections and precisely measure their strength.
Straight to the source
They can also use this circuit to pinpoint the source of these harmonics, which helps researchers identify the best way to correct for them.
There are two potential sources of second-order harmonics — one source is intrinsic to the dynamics of the Josephson junction and the other is caused by the wires connecting the junction to other circuit elements.
While prior research had indicated the second-order harmonics could be due to the dynamics of the junction, the MIT researchers found that additional inductance — the tendency to oppose changes in the flow of electric current — from wires in the circuit was the actual source in their devices.
“This is important because, if we know where the second-order harmonic correction is coming from, we can predict how strong it is likely to be, and use that information to engineer more predictable circuits that will hopefully perform better,” Hays says.
In the future, the researchers want to design experiments that more accurately predict how a device will perform when second-order harmonic corrections occur. They also want to study other sources of second-order harmonic corrections and whether those sources could have negative impacts on a circuit under different fabrication conditions.
This work is funded, in part, by the U.S. Department of Energy, the U.S. Co-design Center for Quantum Advantage, the U.S. Air Force, the Korea Foundation for Advanced Studies, and the Intelligence Community Postdoctoral Research Fellowship Program at MIT.
For most US drivers, EVs offer emissions benefits and cost savings
Electric vehicles generate lower greenhouse gas emissions than comparable gas-powered vehicles, and cost no more to own, for drivers and vehicle fleet owners in most parts of the United States, according to a new study by MIT researchers. That holds despite regional variability in climate, electricity sources, and congestion, and despite wide variation in individual driving patterns.
The team’s approach captures many key factors that contribute to regional and individual differences in the life-cycle emissions and ownership cost of electric vehicles, including meteorological data, the distance and duration of trips, and fuel prices.
To paint a fuller picture of emissions and costs than was previously available, the researchers sourced data from thousands of U.S. zip codes and drilled down to the level of individual drivers within those locations. Their study considers time-averaged fuel prices so as not to be overly influenced by price fluctuations at any one point in time. They finalized their analysis in late 2024 and early 2025.
Their results indicate that a person’s driving behaviors can matter as much as regional factors like the local electricity mix when it comes to the emissions savings of an electric vehicle, compared to a similar gas-powered vehicle. In most locations, a battery-electric vehicle reduces emissions by between 40 and 60 percent, with larger impacts in urban areas.
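Where do reductions of that size come from? A back-of-envelope comparison with assumed round numbers (not the study’s inputs) shows the basic mechanics:

```python
# Back-of-envelope illustration of the per-mile comparison. All numbers
# are assumed round figures (not values from the study), and this covers
# driving energy only, not vehicle or battery manufacturing.
gas_mpg = 30                 # combustion-car fuel economy, miles per gallon
gas_gco2_per_gallon = 11000  # approximate well-to-wheel CO2 per gallon
ev_kwh_per_mile = 0.30       # EV energy use
grid_gco2_per_kwh = 500      # a middling regional electricity mix

gas_g_per_mile = gas_gco2_per_gallon / gas_mpg       # ~367 g CO2/mile
ev_g_per_mile = ev_kwh_per_mile * grid_gco2_per_kwh  # ~150 g CO2/mile

reduction = 1 - ev_g_per_mile / gas_g_per_mile
print(f"per-mile emissions reduction: {reduction:.0%}")
```

Swapping in a cleaner grid, a heavier vehicle, or more stop-and-go driving shifts the result, which is the regional and behavioral variation the study quantifies.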
They also found that colder climates do not reduce overall emission benefits as much as some media reports assume.
The researchers used this detailed analysis to update carboncounter.com, a public tool they previously developed that enables individuals to compare the life-cycle emissions and total ownership costs of nearly any car on the market. The new version of carboncounter.com is being released today.
“There are a lot of statements being thrown around, like that electric vehicles don’t reduce emissions very much in cool climates, and we wanted to analyze these factors systematically and evaluate these statements against one another simultaneously. Rather than simply asking, ‘Are EVs better?’, this paper helps answer ‘better for whom, and under what conditions?’” says Marco Miotti PhD ’20, a senior researcher at ETH Zurich who completed this research while a graduate student in the Institute for Data, Systems, and Society (IDSS) at MIT.
He is joined on the paper by senior author Jessika Trancik, a professor in IDSS. The research appears today in Environmental Research Letters.
A holistic approach
Many prior studies that compare emissions and costs of electric vehicles (EVs) to combustion-engine vehicles cover a few factors, like the amount of renewable energy in the grid and how gas prices impact affordability, Miotti says.
“To our knowledge, there have been few efforts so far that bring all these factors together. But if someone wants to buy a car and have a better understanding of the factors that affect emissions and costs, this holistic approach is important,” he adds.
The researchers focused on two types of EVs: battery-electric vehicles, which only operate on electricity, and plug-in hybrid electric vehicles, which also have a combustion engine that works in tandem with the battery to optimize fuel savings.
The team expanded and improved a set of previously developed vehicle cost and emissions models to incorporate a wider variety of factors and data types.
For instance, they refined an existing model that estimates energy use and gas mileage so it could capture more nuances of local climate variability.
“But the real effort was not just in extending these different models, but in bringing together all these different data and making them work with the models in a consistent manner,” Miotti says.
The team sourced data on a wide variety of factors for each U.S. zip code, such as typical drive cycles, the amount of traffic, local gas and electricity prices, makeup of the regional electricity mix, meteorological profiles, and more. They used statistical approaches to amalgamate different types of data.
For example, the team used a probabilistic matching technique to combine data on how often people drive, which was drawn from nationwide travel surveys, with more detailed GPS data that includes factors like drivers’ acceleration patterns and the distance they usually drive on each day of the week.
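The matching idea can be sketched in miniature; the kernel, scale, and mileage figures below are illustrative stand-ins, not the study’s actual method or data.

```python
import numpy as np

# Toy sketch of probabilistic record matching (illustrative only, not
# the study's actual method): each survey respondent is paired with a
# GPS trace drawn with probability proportional to how closely the
# trace's observed daily mileage matches the reported mileage.
rng = np.random.default_rng(2)

survey_miles = np.array([12.0, 35.0, 60.0])     # reported daily miles
gps_miles = np.array([10.0, 14.0, 33.0, 58.0])  # daily miles from GPS traces

def match_probabilities(reported, observed, scale=5.0):
    """Gaussian similarity kernel, normalized into match probabilities."""
    weights = np.exp(-0.5 * ((observed - reported) / scale) ** 2)
    return weights / weights.sum()

for miles in survey_miles:
    p = match_probabilities(miles, gps_miles)
    pick = rng.choice(len(gps_miles), p=p)
    print(f"survey {miles:4.0f} mi/day -> GPS trace {pick} "
          f"({gps_miles[pick]:.0f} mi/day)")
```

Sampling rather than always taking the single best match preserves the spread of plausible pairings, which matters when the matched traces feed downstream emissions and cost estimates.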
The researchers designed their analysis to focus on the spatial picture of emissions and costs, based on U.S. zip codes, while simultaneously considering the impact of the size and features of each specific vehicle model.
“At the end of the day, it’s the vehicle and fleet owners who make decisions about vehicle purchases. So, we wanted to make sure to consider their wide-ranging individual perspectives rather than simply performing a region-by-region comparison,” says Trancik.
Lower emissions, comparable costs
In the end, their modeling framework revealed that all the factors they analyzed matter about equally in determining the emissions-reduction potential of EVs compared to internal combustion vehicles.
EVs reduce emissions the most in areas with a cleaner electricity mix, denser traffic, higher annual travel distances, and a mild climate, in decreasing order of importance. In each area, emission reductions increase for drivers who drive more often, drive larger vehicles, and are more frequently stuck in traffic.
In a colder area like North Dakota, fuel economy of battery-electric vehicles might be reduced by as much as 50 percent on a particularly frigid night, but the effect on annual emission benefits is minimal.
“We even did a sensitivity study to see if the range is reduced in very cold climates, and we found that, even in the most unfavorable conditions, EVs still reduce emissions by a substantial amount,” Miotti says.
On the cost side, the models show that, in most places across the U.S., EVs are competitive with comparable combustion-engine vehicles in terms of lifetime ownership cost, even without clean vehicle tax credits. And in areas where electricity is relatively affordable, battery-electric vehicles tend to cost less than their plug-in hybrid or combustion-engine counterparts.
In the future, the researchers want to expand this analysis to include a temporal dimension, so the framework also considers how changes in vehicle, fuel, and electricity prices affect emissions and costs over time.
“While we found that the electricity mix is a big driver of the spatial variation in emissions savings of EVs, the electricity grid is decarbonizing everywhere. As that happens, emissions savings across space will become more homogeneous for EVs, but the differences across one driver to another will remain,” Miotti says.
They could also use the framework to explore regions outside the United States or incorporate data on hybrid-electric vehicles that cannot be plugged in.
This work was funded, in part, by the MIT Martin Family Society of Fellows for Sustainability.
Solving hard problems in soft electronics
A crepe cake.
That’s how Camille Cunin describes the polymer-metal “sandwiches” that became a highlight of her doctoral thesis at MIT’s Department of Materials Science and Engineering (DMSE). Over close to five years, these composites were a key component of her research on bioelectronics — devices designed to interface with the human body.
Cunin completed her PhD in February — she’ll attend commencement in May — but traces her interest in bioelectronics to a formative summer internship at Massachusetts General Hospital (MGH) in Boston in 2019. There, she saw a patient with Parkinson’s disease struggle to swallow a tethered “capsule” intended to function as an exploratory gut probe. The device failed, and the gap between lab-based design and real life became all too apparent.
The incident validated the career path Cunin had already begun to pursue: to make usable products that have a positive impact on people’s lives. It’s a purpose that hasn’t gone unnoticed. “Some might be happy with a sketch of a concept and no actual demonstration, but Camille has a remarkable ability in that she wants to do materials science that can translate to real-world applications,” says her advisor, Aristide Gumyusenge.
Building blocks
The daughter of a psychologist and an engineer, Cunin grew up in Paris, encouraged by her parents to be curious about the world around her. LEGO blocks featured prominently in her childhood. When her father found some old lights in a box in the attic, 9-year-old Camille strung them to decorate her LEGO castle by creating a circuit, complete with a fuse.
Strong grades earned her a spot in France’s elite post-secondary preparatory classes for admission to the country’s prestigious grandes écoles. The intensive and competitive prep classes, however, left Cunin with a sour aftertaste — “for a while I hated science, because the environment was too competitive for me,” she says — and a bit rudderless in engineering school.
It was the research internship thousands of miles from home, at MGH — part of her master’s in engineering at École Centrale de Marseille in France — that rebooted her love of science. The open-ended nature of research appealed to her curiosity and helped her regain confidence in solving problems. She was delighted to be accepted at MIT DMSE for her doctoral studies. “In Boston, I thrived in collaborative environments, and it felt like anything was possible,” she says.
Stretching possibilities
Before starting at MIT, Cunin had a wealth of interdisciplinary experience, from internships and her graduate studies. Unsure about how to slot it all together, she was looking for an advisor at a time when Gumyusenge, Henry L. Doherty Career Development Professor in Ocean Utilization and assistant professor of materials science and engineering, was himself just establishing his lab at DMSE.
When Gumyusenge shared plans to work on projects to turn biological signals into electronic data, Cunin was excited to build on her prior research in biomedical devices. “Here was a chance to fine-tune the materials and to optimize the performance of bioelectronic devices. I really felt I could leverage my strengths in Aristide’s lab,” she remembers.
Gumyusenge proved a great fit, supporting Cunin’s broad research ambitions while helping her shape and integrate them into a coherent doctoral project. She tackled everything from developing and characterizing new materials to fabricating transistors and learning surgery to test the devices in animal models. The final dissertation focused on organic transistors, which boost body signals for easier detection in soft electronics.
Biological signals, like those from nerves in the body, are weak, and transistors amplify them so they can be measured. The challenge with developing bioelectronic devices is that traditional components are hard and rigid, while the human body is not. Devices must perform as needed and be soft and flexible to avoid irritating human tissue.
Another complication: Biological processes involve charged ions moving through fluids, while electronics rely on electrons moving through materials. Before transistors can amplify signals, they first have to convert biological signals into electronic ones for circuits to pick up.
Cunin’s transistor design needed to solve two major challenges: first, to facilitate the movement of electrons and ions in the “channel,” the hub of all signal activity, in soft, hydrated environments; and second, to be pliable enough to conform to the human body.
It was no easy task.
Elegant simplicity
Gumyusenge’s lab typically uses chemistry to modify material behavior, but Cunin took a different tack. Since polymers are soft, and metals are good conductors, she looked to the classic French pastry mille-feuille, which inspired the layered design: thin metal sheets sandwiched between layers of porous elastomer. The metal stretches with the elastomer and forms microcracks. Charges get trapped in the cracks but can still flow through the stack, while the elastomer’s strong adhesion keeps the layers together.
Her approach won Cunin high marks from her advisor. “Camille was working on a complex problem, but she found a way to simplify it with a straightforward approach,” Gumyusenge says.
Of course, even an elegant solution needs test drives. “The more crystalline the polymers are, the better the charges percolate and travel in the material,” Cunin points out, referring to how ordered the semiconducting polymers in the transistor channel are. But if they’re packed too tightly, ions don’t move freely, and the transistor channel can’t switch properly. The arrangement of the spaghetti-like polymer chains controls this balance, so Cunin studied the composites’ structure to optimize both ionic and electronic performance.
Professor Polina Anikeeva, who co-advised Cunin with Gumyusenge and calls her “unstoppable,” says her innovation in the lab was remarkable — but not surprising.
“She didn’t have to be pushed into trying something new,” says Anikeeva, head of DMSE. “I would have higher and higher expectations, and she would consistently meet those higher and higher expectations.”
That drive continues in industry. Cunin now works at the Cambridge-based neurotechnology startup Axoft — just minutes from her former lab at MIT — researching soft electrodes that can be implanted in the brain. The electrodes detect electrical signals that can shed light on the brain’s many functions. “By understanding the brain better, we can eventually develop therapies and treatments that improve patient outcomes,” Cunin says.
Creative outlets
During her time at MIT, Cunin also made time for activities outside the lab, driven by the same curiosity that fueled her research. Committed to sharing her love of materials science and engineering, she was a leading member of the Polymer Graduate Student Association and organized several editions of MIT Polymer Day, a one-day symposium connecting students, faculty, and industry to showcase cutting-edge polymer research.
She also pursued creative outlets. After learning to use 3D graphics software Blender, Cunin illustrated some of the journal covers featuring her work.
She is also a diehard salsa fan and teaches the dance style a couple of times a week. Salsa’s social and collaborative forms appeal to Cunin, who enjoys sharing her passion, experimenting with choreography, and helping fellow dancers improve. “Salsa is fast — I love the mental challenge it brings. I also like that it exposes you to different aspects of the community; it pushes you out of your bubble,” she says.
Gumyusenge appreciates that Cunin made time for other pursuits throughout the grueling demands of a doctoral degree. “She’d work 14 hours a day in the lab, but also go do some hiking and take a break. I love that — it’s something that other PhD students seem to forget sometimes,” he says.
That balance reflects her determination and resolve. “Camille has never been shy about facing challenging research problems,” he says. “She had a research vision and was dedicated to learning the lessons she needed to get it all done. I learned to not get in her way because when Camille told you she would learn how to do something, she would.”
