MIT Latest News
 
    By attracting the world’s sharpest talent, MIT helps keep the US a step ahead
Just as the United States has prospered through its ability to draw talent from every corner of the globe, so too has MIT thrived as a magnet for the world’s keenest and most curious minds — many of whom remain here to invent solutions, create companies, and teach future leaders, contributing to America’s success.
President Ronald Reagan remarked in 1989 that the United States leads the world “because, unique among nations, we draw our people — our strength — from every country and every corner of the world. And by doing so we continuously renew and enrich our nation.” Those words still ring true 36 years later — and the sentiment resonates especially at MIT.
"To find people with the drive, skill, and daring to see, discover, and invent things no one else can, we open ourselves to talent from every corner of the United States and from around the globe,” says MIT President Sally Kornbluth. “MIT is an American university, proudly so — but we would be gravely diminished without the students and scholars who join us from other nations."
MIT’s steadfast commitment to attracting the best and brightest talent from around the world has contributed not just to its own success, but also to that of the nation as a whole. MIT’s stature as an international hub of education and innovation adds value to the U.S. economy and competitiveness in myriad ways — from foreign-born faculty delivering breakthroughs here and founding American companies that create American jobs, to international students contributing over $264 million to the U.S. economy during the 2023-24 school year.
Highlighting the extent and value of its global character, the Office of the Vice Provost for International Activities recently expanded a new video series, “The World at MIT.” In it, 20 faculty members born outside the United States tell how they dreamed of coming to MIT while growing up abroad and eventually joined the MIT faculty, where they’ve helped establish and maintain global leadership in science while teaching the next generation of innovators. A common thread running through their stories is the importance of the campus’s distinct nature as a community that is both profoundly American and deeply connected to the people, institutions, and concerns of regions and nations around the globe.
Joining the MIT faculty in 1980, MIT President Emeritus L. Rafael Reif knew almost instantly that he would stay.
“I was impressed by the richness of the variety of groups of people and cultures here,” says Reif, who moved to the United States from Venezuela and eventually served as MIT’s president from 2012 to 2022. “There is no richer place than MIT, because every point of view is here. That is what makes the place so special.”
The benefits of welcoming international students and researchers to campus extend well beyond MIT. More than 17,000 MIT alumni born elsewhere now call the United States home, for example, and many have founded U.S.-based companies that have generated billions of dollars in economic activity.
Contributing to America’s prestige internationally, one-third of MIT’s 104 Nobel laureates — including seven of the eight Nobel winners over the last decade — were born abroad. Drawn to MIT, they went on to make their breakthroughs in the United States. Among them is Lester Wolfe Professor of Chemistry Moungi Bawendi, who won the Nobel Prize in Chemistry in 2023 for his work in the chemical production of high-quality quantum dots.
“MIT is a great environment. It’s very collegial, very collaborative. As a result, we also have amazing students,” says Bawendi, who lived in France and Tunisia as a child before moving to the U.S. “I couldn’t have done my first three years here, which eventually got me a Nobel Prize, without having really bold, smart, adventurous graduate students.”
The give-and-take among MIT faculty and students also inspires electrical engineering and computer science professor Akintunde Ibitayo (Tayo) Akinwande, who grew up in Nigeria.
“Anytime I teach a class, I always learn something from my students’ probing questions,” Akinwande says. “It gives me new insights sometimes, and that’s always the kind of environment I like — where I’m learning something new all the time.”
MIT’s global vibe inspires its students not only to explore worlds of ideas in campus labs and classrooms, but also to venture out into the world itself. Forty-three percent of undergraduates pursued international experiences during the last academic year — taking courses at foreign universities, conducting research, or interning at multinational companies. MIT students and faculty alike are regularly engaged in research outside the United States, addressing some of the world’s toughest challenges and devising solutions that can be deployed back home as well as abroad. In so doing, they embody MIT’s motto of “mens et manus” (“mind and hand”), reflecting the ideals of MIT’s founders, who promoted education for practical application.
As someone who loves exploring “lofty questions” along with the practical design of things, Nergis Mavalvala found a perfect fit at MIT and calls her position as the Marble Professor of Astrophysics and dean of the School of Science “the best job in the world.”
“Everybody here wants to make the world a better place and are using their intellectual gifts and their education to do so,” says Mavalvala, who emigrated from Pakistan. “And I think that’s an amazing community to be part of.”
Daniela Rus agrees. Now the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science and director of MIT’s Computer Science and Artificial Intelligence Laboratory, Rus was drawn to the practical application of mathematics while still a student in her native Romania.
“And so, now here I am at MIT, essentially bringing together the world of science and math with the world of making things,” Rus says. “I’ve been here for two decades, and it’s been an extraordinary journey.”
The daughter of an Albert Einstein aficionado, Yukiko Yamashita grew up in Japan thinking of science not as a job, but as a calling. MIT, where she is a professor of biology, is a place where people “are really open to unconventional ideas” and “intellectual freedom” thrives.
“There is something sacred about doing science. That’s how I grew up,” Yamashita says. “There are some distinct MIT characteristics. In a good way, people can’t let go. Every day, I am creating more mystery than I answer.”
For more about the paths that brought Yamashita and others to MIT and stories of how their disparate personal histories enrich the campus and wider community, visit the “World at MIT” videos website.
“Our global community’s multiplicity of ideas, experiences, and perspectives contributes enormously to MIT’s innovative and entrepreneurial spirit and, by extension, to the innovation and competitiveness of the U.S.,” says Vice Provost for International Activities Duane Boning, whose department developed the video series. “The bottom line is that both MIT and the U.S. grow stronger when we harness the talents of the world’s best and brightest.”
Improving the workplace of the future
Whitney Zhang ’21 believes in the importance of valuing workers regardless of where they fit into an organizational chart.
Zhang is a PhD student in MIT’s Department of Economics studying labor economics. She explores how the technological and managerial decisions companies make affect workers across the pay spectrum.
“I’ve been interested in economics, economic impacts, and related social issues for a long time,” says Zhang, who majored in mathematical economics as an undergraduate. “I wanted to apply my math skills to see how we could improve policies and their effects.”
Zhang is interested in how to improve conditions for workers. She believes it’s important to build relationships with policymakers, focusing on an evidence-driven approach to policy, while always remembering to center those the policies may affect. “We have to remember the people whose lives are impacted by business operations and legislation,” she says.
She’s also aware of the complex intermixture of politics, social status, and financial obligations organizations and their employees have to navigate.
“Though I’m studying workers, it’s important to consider the entire complex ecosystem when solving for these kinds of challenges, including firm incentives and global economic conditions,” she says.
The intersection of tech and labor policy
Zhang began investigating employee productivity, artificial intelligence, and related economic and labor market phenomena early in her time as a doctoral student, collaborating frequently with fellow PhD students in the department.
A collaboration with economics doctoral student Shakked Noy yielded a 2023 study investigating ChatGPT as a tool to improve productivity. Their research found that it substantially increased workers’ productivity on writing tasks, with the largest gains for workers who initially performed worst on those tasks.
“This was one of the earliest pieces of evidence on the productivity effects of generative AI, and contributed to providing concrete data on how impactful these types of tools might be in the workplace and on the labor market,” Zhang says.
In other ongoing research — “Determinants of Irregular Worker Schedules” — Zhang is using data from a payroll provider to examine scheduling unpredictability, investigating why companies employ unpredictable schedules and how these schedules affect low-wage employees’ quality of life.
The scheduling project, conducted with MIT economics PhD student Nathan Lazarus, is motivated, in part, by existing sociological evidence that low-wage workers’ unpredictable schedules are associated with worse sleep and well-being. “We’ve seen a relationship between higher turnover and inconsistent, inadequate schedules, which suggests workers dis-prefer these kinds of schedules,” Zhang says.
At an academic roundtable, Zhang presented her results to Starbucks employees involved in scheduling and staffing. The attendees wanted to learn more about how different scheduling practices impacted workers and their productivity. “These are the kinds of questions that could reveal useful information for small businesses, large corporations, and others,” she says.
By conducting this research, Zhang hopes to better understand whether or not scheduling regulations can improve affected employees’ quality of life, while also considering potential unintended consequences. “Why are these schedules set the way they’re set?” she asks. “Do businesses with these kinds of schedules require increased regulation?”
Another project, conducted with MIT economics doctoral student Arjun Ramani, examines the linkages between offshoring, remote work, and related outcomes. “Do the technological and managerial practices that have made remote work possible further facilitate offshoring?” she asks. “Do organizations see significant gains in efficiency? What are the impacts on U.S. and offshore workers?”
Her work is being funded through the National Science Foundation Graduate Research Fellowship Program and the Washington Center for Equitable Growth.
Putting people at the center
Zhang has seen firsthand the range of people that economics and higher education can bring together. She followed a dual-enrollment track in high school, completing college-level courses alongside students from a wide variety of backgrounds. “I enjoyed centering people in my work,” she says. “Taking classes with a diverse group of students, including veterans and mothers returning to school to complete their studies, made me more curious about socioeconomic issues and the policies relevant to them.”
She later enrolled at MIT, where she participated in the Undergraduate Research Opportunities Program (UROP). She also completed an internship at the World Bank, worked as a summer analyst at the Federal Reserve Bank of New York, and assisted MIT economists David Autor, Jon Gruber, and Nina Roussille. Autor is her primary advisor on her doctoral research and a mentor she cites as a significant influence.
“[Autor’s] course, 14.03 (Microeconomics and Public Policy), cemented connections between theory and practice,” she says. “I thought the class was revelatory in showing the kinds of questions economics can answer.”
Doctoral study has revealed interesting pathways of investigation for Zhang, as have her relationships with her student peers and other faculty. She has, for example, leveraged faculty connections to gain access to hourly wage data in support of her scheduling and employee impacts work. “Generally, economists have had administrative data on earnings, but not on hours,” she notes.
Zhang’s focus on improving others’ lives extends to her work outside the classroom. She’s a mentor for the Boston Chinatown Neighborhood Center College Access Program and a member of MIT’s Graduate Christian Fellowship group. When she’s not enjoying spicy soups or paddling on the Charles, she takes advantage of opportunities to decompress with her art at W20 Arts Studios.
“I wanted to create time for myself outside of research and the classroom,” she says.
Zhang cites the benefits of MIT’s focus on cross-collaboration and encouraging students to explore other disciplines. As an undergraduate, Zhang minored in computer science, which taught her coding skills critical to her data work. Exposure to engineering also led her to become more interested in questions around how technology and workers interact.
Working with other scholars in the department has improved how Zhang conducts inquiries. “I’ve become the kind of well-rounded student and professional who can identify and quantify impacts, which is invaluable for future projects,” she says. Exposure to different academic and research areas, Zhang argues, helps increase access to ideas and information.
NASA selects Adam Fuhrmann ’11 for astronaut training
U.S. Air Force Maj. Adam Fuhrmann ’11 was one of 10 individuals chosen from a field of 8,000 applicants for the 2025 U.S. astronaut candidate class, NASA announced in a live ceremony on Sept. 22.
This is NASA’s 24th class of astronaut candidates since the first Mercury 7 astronauts were chosen in 1959. Upon completion of his training, Fuhrmann will be the 45th MIT graduate to become a flight-eligible astronaut.
“As test pilots we don't do anything on our own, we work with amazing teams of engineers and maintenance professionals to plan, simulate, and execute complex and sometimes risky missions in aircraft to collect data and accomplish a mission, all while assessing risk and making smart calls as a team to do that as safely as possible,” Fuhrmann said at NASA’s announcement ceremony in Houston, Texas. “I'm happy to try to bring some of that experience to do the same thing with the NASA team and learn from everyone at Johnson Space Center how to apply those lessons to human spaceflight.”
His class now begins two years of training at the Johnson Space Center in Houston that includes instruction and skills development for complex operations aboard the International Space Station, Artemis missions to the moon, and beyond. Candidates train in robotics, land and water survival, geology, foreign language, space medicine, and physiology, while also conducting simulated spacewalks and flying high-performance jets.
From MIT to astronaut training
Fuhrmann, 35, is from Leesburg, Virginia, and has accumulated more than 2,100 flight hours in 27 aircraft, including the F-16 and F-35. He has served as a U.S. Air Force fighter pilot and experimental test pilot for nearly 14 years and deployed in support of operations Freedom’s Sentinel and Resolute Support, logging 400 combat hours.
Fuhrmann holds a bachelor’s degree in aeronautics and astronautics from MIT and master’s degrees in flight test engineering and systems engineering from the U.S. Air Force Test Pilot School and Purdue University, respectively. While at MIT, he was a member of Air Force ROTC Detachment 365 and was selected as the third-ever student leader of the Bernard M. Gordon-MIT Engineering Leadership Program (GEL) in spring 2011.
“We are tremendously proud of Adam for this notable accomplishment, and we look forward to following his journey through astronaut candidate school and beyond,” says Leo McGonagle, GEL founding and executive director.
“It’s always a thrill to learn that one of our own has joined NASA's illustrious astronaut corps,” says Julie Shah, head of the MIT Department of Aeronautics and Astronautics and the H.N. Slater Professor in Aeronautics and Astronautics. “Adam is Course 16’s 19th astronaut alum. We take very seriously the responsibility to provide the very best aerospace engineering education, and it's so gratifying to see that those fundamentals continue to set individuals from our community on the path to becoming an astronaut.”
Learning to be a leader at MIT
McGonagle recalls that Fuhrmann was a very early participant in GEL from 2009 to 2011.
“The GEL Program was still in its infancy during this time and was in somewhat of a fragile state as we were seeking to grow and cement ourselves as a viable MIT program. As the fall 2010 semester was winding down, it was evident that the program needed an effective GEL2 student leader during the spring semester, who could lead by example and inspire fellow students and who was an example of what right looks like. I knew Adam was already an emerging leader as a senior cadet in MIT’s Air Force ROTC Detachment, so I tapped him for the role of spring student leader of GEL,” said McGonagle.
Fuhrmann initially sought to decline the role, citing his time as a leader in ROTC. But McGonagle, having led the Army ROTC program before GEL, felt that the GEL student leader role would challenge and develop Fuhrmann in other ways. In GEL, he would be charged with leading and inspiring students from a broad range of backgrounds, focusing exclusively on leadership within engineering contexts while engaging with engineering industry organizations.
“GEL needed strong student leadership at this time, so Adam took on the role, and it ended up being a win-win for both him and the program. He later expressed to me that the experience challenged him in ways that he hadn’t anticipated and complemented his Air Force ROTC leadership development. He was grateful for the opportunity, and the program stabilized and grew under Adam’s leadership. He was the right student at the right time and place,” said McGonagle.
Fuhrmann has remained connected to the GEL program. He asked McGonagle to administer his oath of commissioning into the U.S. Air Force, with his family in attendance, at the historic Bunker Hill Monument in Boston. “One of my proudest GEL memories,” said McGonagle, who is a former U.S. Army Lt. Colonel.
Throughout his time in service, which has included overseas deployments, Fuhrmann has actively participated in junior engineering leaders’ roundtable Engineering Leadership Labs (ELLs) with GEL students, and he has kept in touch with his GEL2 cohort.
“Adam’s GEL2 cohort meets informally once or twice a year, usually via Zoom, to share and discuss professional challenges, lessons learned, and life stories, and to keep in touch with each other. This small but excellent group of GEL alumni is committed to staying connected and supporting one another as part of the broader GEL community,” said McGonagle.
MIT’s work with Idaho National Laboratory advances America’s nuclear industry
At the center of nuclear reactors across the United States, a new type of chromium-coated fuel is being used to make the reactors more efficient and more resistant to accidents. The fuel is one of many innovations that have sprung from the collaboration between researchers at MIT and the Idaho National Laboratory (INL) — a relationship that has altered the trajectory of the country’s nuclear industry.
Amid renewed excitement around nuclear energy in America, MIT’s research community is working to further develop next-generation fuels, accelerate the deployment of small modular reactors (SMRs), and enable the first nuclear reactor in space.
Researchers at MIT and INL have worked closely for decades, and the collaboration takes many forms, including joint research efforts, student and postdoc internships, and a standing agreement that lets INL employees spend extended periods on MIT’s campus researching and teaching classes. MIT is also a founding member of the Battelle Energy Alliance, which has managed the Idaho National Laboratory for the Department of Energy since 2005.
The collaboration gives MIT’s community a chance to work on the biggest problems facing America’s nuclear industry while bolstering INL’s research infrastructure.
“The Idaho National Laboratory is the lead lab for nuclear energy technology in the United States today — that’s why it’s essential that MIT works hand in hand with INL,” says Jacopo Buongiorno, the Battelle Energy Alliance Professor in Nuclear Science and Engineering at MIT. “Countless MIT students and postdocs have interned at INL over the years, and a memorandum of understanding that strengthened the collaboration between MIT and INL in 2019 has been extended twice.”
Ian Waitz, MIT’s vice president for research, adds, “The strong collaborative history between MIT and the Idaho National Laboratory enables us to jointly contribute practical technologies to enable the growth of clean, safe nuclear energy. It’s a clear example of how rigorous collaboration across sectors, and among the nation’s top research facilities, can advance U.S. economic prosperity, health, and well-being.”
Research with impact
Much of MIT’s joint research with INL involves tests and simulations of new nuclear materials, fuels, and instrumentation. One of the largest collaborations was part of a global push for more accident-tolerant fuels in the wake of the nuclear accident that followed the 2011 earthquake and tsunami in Fukushima, Japan.
In a series of studies involving INL and members of the nuclear energy industry, MIT researchers helped identify and evaluate alloy materials that could be deployed in the near term to not only bolster safety but also offer higher densities of fuel.
“These new alloys can withstand much more challenging conditions during abnormal occurrences without reacting chemically with steam, which could result in hydrogen explosions during accidents,” explains Buongiorno, who is also the director of science and technology at MIT’s Nuclear Reactor Laboratory and the director of MIT’s Center for Advanced Nuclear Energy Systems. “The fuels can take much more abuse without breaking apart in the reactor, resulting in a higher safety margin.”
The fuels tested at MIT were eventually adopted by power plants across the U.S., starting with the Byron Clean Energy Center in Ogle County, Illinois.
“We’re also developing new materials, fuels, and instrumentation,” Buongiorno says. “People don’t just come to MIT and say, ‘I have this idea, evaluate it for me.’ We collaborate with industry and national labs to develop the new ideas together, and then we put them to the test, reproducing the environment in which these materials and fuels would operate in commercial power reactors. That capability is quite unique.”
Another major collaboration was led by Koroush Shirvan, MIT’s Atlantic Richfield Career Development Professor in Energy Studies. Shirvan’s team analyzed the costs associated with different reactor designs, eventually developing an open-source tool to help industry leaders evaluate the feasibility of different approaches.
“The reason we’re not building a single nuclear reactor in the U.S. right now is cost and financial risk,” Shirvan says. “The projects have gone over budget by a factor of two and their schedule has lengthened by a factor of 1.5, so we’ve been doing a lot of work assessing the risk drivers. There’s also a lot of different types of reactors proposed, so we’ve looked at their cost potential as well and how those costs change if you can mass manufacture them.”
Other INL-supported research of Shirvan’s involves exploring new manufacturing methods for nuclear fuels and testing materials for use in a nuclear reactor on the surface of the moon.
“You want materials that are lightweight for these nuclear reactors because you have to send them to space, but there isn’t much data around how those light materials perform in nuclear environments,” Shirvan says.
People and progress
Every summer, MIT students at every level travel to Idaho to conduct research in INL labs as interns.
“It’s an example of our students getting access to cutting-edge research facilities,” Shirvan says.
There are also several joint research appointments between the institutions. One such appointment is held by Sacit Cetiner, a distinguished scientist at INL who also currently runs the MIT and INL Joint Center for Reactor Instrumentation and Sensor Physics (CRISP) at MIT’s Nuclear Reactor Laboratory.
CRISP focuses its research on key technology areas in instrumentation and controls, which have long weighed on the economics of nuclear power generation.
“For the current light-water reactor fleet, operations and maintenance expenditures constitute a sizeable fraction of unit electricity generation cost,” says Cetiner. “In order to make advanced reactors economically competitive, it’s much more reasonable to address anticipated operational issues during the design phase. One such critical technology area is remote and autonomous operations. Working directly with INL, which manages the projects for the design and testing of several advanced reactors under a number of federal programs, gives our students, faculty, and researchers opportunities to make a real impact.”
The sharing of experts helps strengthen MIT and the nation’s nuclear workforce overall.
“MIT has a crucial role to play in advancing the country’s nuclear industry, whether that’s testing and developing new technologies or assessing the economic feasibility of new nuclear designs,” Buongiorno says.
MIT named No. 2 university by U.S. News for 2025-26
MIT has placed second in U.S. News and World Report’s annual rankings of the nation’s best universities, announced today.
As in past years, MIT’s engineering program continues to lead the list of undergraduate engineering programs at a doctoral institution. The Institute also placed first in five out of 10 engineering disciplines.
U.S. News placed MIT first in its evaluation of undergraduate computer science programs, ranking it No. 1 in four out of 10 computer science disciplines.
MIT also topped the list of undergraduate business programs, a ranking it shares with the University of Pennsylvania. Among business subfields, MIT is ranked first in two out of 10 specialties.
Within the magazine’s rankings of “academic programs to look for,” MIT topped the list in the category of undergraduate research and creative projects. The Institute also ranks as the second most innovative national university, according to the U.S. News peer assessment survey of top academics, and as the fourth best value.
MIT placed first in five engineering specialties: aerospace/aeronautical/astronautical engineering; chemical engineering; computer engineering; materials engineering; and mechanical engineering. It placed within the top five in two other engineering areas: biomedical engineering and electrical/electronic/communication engineering.
Other schools in the top five overall for undergraduate engineering programs are Stanford University, the University of California at Berkeley, Georgia Tech, Caltech, the University of Illinois at Urbana-Champaign, and the University of Michigan at Ann Arbor.
In computer science, MIT placed first in four specialties: artificial intelligence (shared with Carnegie Mellon University); biocomputing/bioinformatics/biotechnology; computer systems; and theory. It placed in the top five of six other disciplines: cybersecurity; data analytics/science; game/simulation development (shared with Carnegie Mellon); mobile/web applications; programming languages; and software engineering.
Other schools in the top five overall for undergraduate computer science programs are Carnegie Mellon, Stanford, UC Berkeley, Princeton University, and Georgia Tech.
Among undergraduate business specialties, the MIT Sloan School of Management leads in production/operations management and quantitative analysis. It also placed within the top five in five other categories: analytics; entrepreneurship; finance; management information systems; and supply chain management/logistics.
Other undergraduate business programs ranking in the top five include UC Berkeley, the University of Michigan at Ann Arbor, and New York University.
Recently, U.S. News & World Report ranked medium-to-large undergraduate economics programs based on a peer assessment survey; MIT’s economics program placed first in this ranking.
MIT affiliates win AI for Math grants to accelerate mathematical discovery
MIT Department of Mathematics researchers David Roe ’06 and Andrew Sutherland ’90, PhD ’07 are among the inaugural recipients of the Renaissance Philanthropy and XTX Markets’ AI for Math grants.
Four additional MIT alumni — Anshula Gandhi ’19, Viktor Kunčak SM ’01, PhD ’07; Gireeja Ranade ’07; and Damiano Testa PhD ’05 — were also honored for separate projects.
The first 29 winning projects will support mathematicians and researchers at universities and organizations working to develop artificial intelligence systems that help advance mathematical discovery and research across several key tasks.
Roe and Sutherland, along with Chris Birkbeck of the University of East Anglia, will use their grant to boost automated theorem proving by building connections between the L-Functions and Modular Forms Database (LMFDB) and the Lean4 mathematics library (mathlib).
“Automated theorem provers are quite technically involved, but their development is under-resourced,” says Sutherland. With AI technologies such as large language models (LLMs), the barrier to entry for these formal tools is dropping rapidly, making formal verification frameworks accessible to working mathematicians.
Mathlib is a large, community-driven mathematical library for the Lean theorem prover, a formal system that verifies the correctness of every step in a proof. Mathlib currently contains on the order of 10⁵ mathematical results (such as lemmas, propositions, and theorems). The LMFDB, a massive, collaborative online resource that serves as a kind of “encyclopedia” of modern number theory, contains more than 10⁹ concrete statements. Sutherland and Roe are managing editors of the LMFDB.
Roe and Sutherland’s grant will be used for a project that aims to augment both systems, making the LMFDB’s results available within mathlib as assertions that have not yet been formally proved, and providing precise formal definitions of the numerical data stored within the LMFDB. This bridge will benefit both human mathematicians and AI agents, and provide a framework for connecting other mathematical databases to formal theorem-proving systems.
The main obstacles to automating mathematical discovery and proof are the limited amount of formalized math knowledge, the high cost of formalizing complex results, and the gap between what is computationally accessible and what is feasible to formalize.
To address these obstacles, the researchers will use the funding to build tools for accessing the LMFDB from mathlib, making a large database of unformalized mathematical knowledge accessible to a formal proof system. This approach enables proof assistants to identify specific targets for formalization without the need to formalize the entire LMFDB corpus in advance.
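To make the shape of that bridge concrete, here is a minimal Lean 4 sketch of the idea. The names (`LMFDBCurve`, `curve11a1`) are illustrative placeholders, not actual mathlib or LMFDB identifiers; in the envisioned bridge, the record would be drawn from the live database rather than embedded by hand, and database-backed statements would stand as unproved formalization targets rather than trivially proved facts.

```lean
-- Hypothetical sketch only: names are illustrative, not real
-- mathlib or LMFDB identifiers.

/-- A placeholder record for an elliptic curve imported from the LMFDB. -/
structure LMFDBCurve where
  label     : String  -- LMFDB label, e.g. "11.a1"
  conductor : Nat     -- the conductor, a key invariant stored in the database
  rank      : Nat     -- the rank as recorded in the database

/-- Data transcribed by hand here; the envisioned bridge would supply it
    from the LMFDB itself. Curve 11.a1 has conductor 11 and rank 0. -/
def curve11a1 : LMFDBCurve :=
  { label := "11.a1", conductor := 11, rank := 0 }

/-- A statement surfaced as a proof target. Here it holds by `rfl` because
    the data is embedded; database-backed assertions would instead be
    tracked as unproved facts for agents and mathematicians to formalize. -/
theorem curve11a1_conductor : curve11a1.conductor = 11 := rfl
```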
“Making a large database of unformalized number-theoretic facts available within mathlib will provide a powerful technique for mathematical discovery, because the set of facts an agent might wish to consider while searching for a theorem or proof is exponentially larger than the set of facts that eventually need to be formalized in actually proving the theorem,” says Roe.
The researchers note that proving new theorems at the frontier of mathematical knowledge often involves steps that rely on a nontrivial computation. For example, Andrew Wiles’ proof of Fermat’s Last Theorem uses what is known as the “3-5 trick” at a crucial point in the proof.
“This trick depends on the fact that the modular curve X_0(15) has only finitely many rational points, and none of those rational points correspond to a semi-stable elliptic curve,” according to Sutherland. “This fact was known well before Wiles’ work, and is easy to verify using computational tools available in modern computer algebra systems, but it is not something one can realistically prove using pencil and paper, nor is it necessarily easy to formalize.”
While formal theorem provers are being connected to computer algebra systems for more efficient verification, tapping into computational outputs in existing mathematical databases offers several other benefits.
Using stored results leverages the thousands of CPU-years of computation time already spent in creating the LMFDB, saving money that would be needed to redo these computations. Having precomputed information available also makes it feasible to search for examples or counterexamples without knowing ahead of time how broad the search can be. In addition, mathematical databases are curated repositories, not simply a random collection of facts.
“The fact that number theorists emphasized the role of the conductor in databases of elliptic curves has already proved to be crucial to one notable mathematical discovery made using machine learning tools: murmurations,” says Sutherland.
“Our next steps are to build a team, engage with both the LMFDB and mathlib communities, start to formalize the definitions that underpin the elliptic curve, number field, and modular form sections of the LMFDB, and make it possible to run LMFDB searches from within mathlib,” says Roe. “If you are an MIT student interested in getting involved, feel free to reach out!”
New tool makes generative AI models more likely to create breakthrough materials
The artificial intelligence models that turn text into images are also useful for generating new materials. Over the last few years, generative materials models from companies like Google, Microsoft, and Meta have drawn on their training data to help researchers design tens of millions of new materials.
But when it comes to designing materials with exotic quantum properties like superconductivity or unique magnetic states, those models struggle. That’s too bad, because humans could use the help. For example, after a decade of research into a class of materials that could revolutionize quantum computing, called quantum spin liquids, only a dozen material candidates have been identified. The bottleneck means there are fewer materials to serve as the basis for technological breakthroughs.
Now, MIT researchers have developed a technique that lets popular generative materials models create promising quantum materials by following specific design rules. The rules, or constraints, steer models to create materials with unique structures that give rise to quantum properties.
“The models from these large companies generate materials optimized for stability,” says Mingda Li, MIT’s Class of 1947 Career Development Professor. “Our perspective is that’s not usually how materials science advances. We don’t need 10 million new materials to change the world. We just need one really good material.”
The approach is described in a paper published today in Nature Materials. The researchers applied their technique to generate millions of candidate materials with geometric lattice structures associated with quantum properties. From that pool, they synthesized two actual materials with exotic magnetic traits.
“People in the quantum community really care about these geometric constraints, like the Kagome lattices that are two overlapping, upside-down triangles. We created materials with Kagome lattices because those materials can mimic the behavior of rare earth elements, so they are of high technical importance,” Li says.
Li is the senior author of the paper. His MIT co-authors include PhD students Ryotaro Okabe, Mouyang Cheng, Abhijatmedhi Chotrattanapituk, and Denisse Cordova Carrizales; postdoc Manasi Mandal; undergraduate researchers Kiran Mak and Bowen Yu; visiting scholar Nguyen Tuan Hung; Xiang Fu ’22, PhD ’24; and professor of electrical engineering and computer science Tommi Jaakkola, who is an affiliate of the Computer Science and Artificial Intelligence Laboratory (CSAIL) and Institute for Data, Systems, and Society. Additional co-authors include Yao Wang of Emory University, Weiwei Xie of Michigan State University, YQ Cheng of Oak Ridge National Laboratory, and Robert Cava of Princeton University.
Steering models toward impact
A material’s properties are determined by its structure, and quantum materials are no different. Certain atomic structures are more likely to give rise to exotic quantum properties than others. For instance, square lattices can serve as a platform for high-temperature superconductors, while other shapes known as Kagome and Lieb lattices can support the creation of materials that could be useful for quantum computing.
To help a popular class of generative models known as diffusion models produce materials that conform to particular geometric patterns, the researchers created SCIGEN (short for Structural Constraint Integration in GENerative model). SCIGEN is computer code that ensures diffusion models adhere to user-defined constraints at each iterative generation step. With SCIGEN, users can give any generative AI diffusion model geometric structural rules to follow as it generates materials.
AI diffusion models work by sampling from their training dataset to generate structures that reflect the distribution of structures found in the dataset. SCIGEN blocks generations that don’t align with the structural rules.
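As an illustration of that blocking idea, here is a minimal, runnable Python sketch. It is not the released SCIGEN code, and the denoising update below is a stand-in for a trained diffusion model, but it shows one way a user-defined geometric rule can be re-imposed after every reverse-diffusion step so the constrained motif survives generation.

```python
# Illustrative sketch of constraint enforcement in a diffusion loop;
# not the actual SCIGEN implementation.
import numpy as np

rng = np.random.default_rng(0)

def denoise_step(x):
    """Stand-in for one reverse-diffusion update from a trained model."""
    return x - 0.1 * x + 0.05 * rng.standard_normal(x.shape)

def constrained_sample(n_atoms, steps, mask, target):
    """Generate 3D coordinates; mask[i] = True pins atom i to target[i]."""
    x = rng.standard_normal((n_atoms, 3))   # start from pure noise
    for _ in range(steps):
        x = denoise_step(x)                 # unconstrained model update
        x[mask] = target[mask]              # re-impose the geometric rule
    return x

# Example: pin 3 of 8 atoms to a triangular motif (hypothetical values).
mask = np.zeros(8, dtype=bool)
mask[:3] = True
target = np.zeros((8, 3))
target[:3] = [[0.0, 0.0, 0.0], [0.5, 0.0, 0.0], [0.25, 0.433, 0.0]]

coords = constrained_sample(8, steps=50, mask=mask, target=target)
print(coords[:3])  # the pinned atoms sit exactly on the motif
```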
To test SCIGEN, the researchers applied it to a popular AI materials generation model known as DiffCSP. They had the SCIGEN-equipped model generate materials with unique geometric patterns known as Archimedean lattices, which are collections of 2D lattice tilings of different polygons. Archimedean lattices can lead to a range of quantum phenomena and have been the focus of much research.
“Archimedean lattices give rise to quantum spin liquids and so-called flat bands, which can mimic the properties of rare earths without rare earth elements, so they are extremely important,” says Cheng, a co-corresponding author of the work. “Other Archimedean lattice materials have large pores that could be used for carbon capture and other applications, so it’s a collection of special materials. In some cases, there are no known materials with that lattice, so I think it will be really interesting to find the first material that fits in that lattice.”
The model generated over 10 million material candidates with Archimedean lattices. One million of those materials survived a screening for stability. Using supercomputers at Oak Ridge National Laboratory, the researchers then took a smaller sample of 26,000 materials and ran detailed simulations to understand how the materials’ underlying atoms behaved. They found magnetism in 41 percent of those structures.
From that subset, the researchers synthesized two previously undiscovered compounds, TiPdBi and TiPbSb, in Xie’s and Cava’s labs. Subsequent experiments showed that the AI model’s predictions largely aligned with the actual materials’ properties.
“We wanted to discover new materials that could have a huge potential impact by incorporating these structures that have been known to give rise to quantum properties,” says Okabe, the paper’s first author. “We already know that these materials with specific geometric patterns are interesting, so it’s natural to start with them.”
Accelerating material breakthroughs
Quantum spin liquids could unlock quantum computing by enabling stable, error-resistant qubits that serve as the basis of quantum operations. But no quantum spin liquid materials have been confirmed. Xie and Cava believe SCIGEN could accelerate the search for these materials.
“There’s a big search for quantum computer materials and topological superconductors, and these are all related to the geometric patterns of materials,” Xie says. “But experimental progress has been very, very slow,” Cava adds. “Many of these quantum spin liquid materials are subject to constraints: They have to be in a triangular lattice or a Kagome lattice. If the materials satisfy those constraints, the quantum researchers get excited; it’s a necessary but not sufficient condition. So, by generating many, many materials like that, it immediately gives experimentalists hundreds or thousands more candidates to play with to accelerate quantum computer materials research.”
“This work presents a new tool, leveraging machine learning, that can predict which materials will have specific elements in a desired geometric pattern,” says Drexel University Professor Steve May, who was not involved in the research. “This should speed up the development of previously unexplored materials for applications in next-generation electronic, magnetic, or optical technologies.”
The researchers stress that experimentation is still critical to assess whether AI-generated materials can be synthesized and how their actual properties compare with model predictions. Future work on SCIGEN could incorporate additional design rules into generative models, including chemical and functional constraints.
“People who want to change the world care about material properties more than the stability and structure of materials,” Okabe says. “With our approach, the ratio of stable materials goes down, but it opens the door to generate a whole bunch of promising materials.”
The work was supported, in part, by the U.S. Department of Energy, the National Energy Research Scientific Computing Center, the National Science Foundation, and Oak Ridge National Laboratory.
How are MIT entrepreneurs using AI?
The Martin Trust Center for MIT Entrepreneurship strives to teach students the craft of entrepreneurship. Over the last few years, no technology has changed that craft more than artificial intelligence.
While many are predicting a rapid and complete transformation in how startups are built, the Trust Center’s leaders have a more nuanced view.
“The fundamentals of entrepreneurship haven’t changed with AI,” says Trust Center Entrepreneur in Residence Macauley Kenney. “There’s been a shift in how entrepreneurs accomplish tasks, and that trickles down into how you build a company, but we’re thinking of AI as another new tool in the toolkit. In some ways the world is moving a lot faster, but we also need to make sure the fundamental principles of entrepreneurship are well-understood.”
That approach was on display during this summer’s delta v startup accelerator program, where many students regularly turned to AI tools but still ultimately relied on talking to their customers to make the right decisions for their business.
Students in this year’s cohort used AI tools to accelerate their coding, draft presentations, learn about new industries, and brainstorm ideas. The Trust Center is encouraging students to use AI as they see fit while also staying mindful of the technology’s limitations.
The Trust Center itself has also embraced AI, most notably through Jetpack, its generative AI app that walks users through the 24 steps of disciplined entrepreneurship outlined in Managing Director Bill Aulet’s book of the same name. When students input a startup idea, the tool can suggest customer segments, early markets to pursue, business models, pricing, and a product plan.
The way the Trust Center wants students to use Jetpack is apparent in its name: It’s inspired by the acceleration a jetpack provides, but users still need to guide its direction.
Even with AI technology’s current limitations, the Trust Center’s leaders acknowledge it can be a powerful tool for people at any stage of building a business, and their use of AI will continue to evolve with the technology.
“It’s undeniable we’re in the midst of an AI revolution right now,” says Entrepreneur in Residence Ben Soltoff. “AI is reshaping a lot of things we do, and it’s also shaping how we do entrepreneurship and how students build companies. The Trust Center has recognized that for years, and we’ve welcomed AI into how we teach entrepreneurship at all levels, from the earliest stages of idea formation to exploring and testing those ideas and understanding how to commercialize and scale them.”
AI’s strengths and weaknesses
For the past few years, when the Trust Center’s delta v staff get together for strategic retreats, AI has been a central topic. The delta v program’s organizers think about how students can get the most out of the technology each year as they plan their summer-long curriculum.
Everything starts with Orbit, the mobile app designed to help students find entrepreneurial resources, network with peers, access mentorship, and identify events and jobs. Jetpack was added to Orbit last year. It is trained on Aulet’s “Disciplined Entrepreneurship” as well as former Trust Center Executive Director Paul Cheek’s “Startup Tactics” book.
The Trust Center describes Jetpack’s outputs as first drafts designed to help students brainstorm their next steps.
“You need to verify everything when you are using AI to build a business,” says Kenney, who is also a lecturer at MIT Sloan and MIT D-Lab. “I have yet to meet anyone who will base their business on the output of something like ChatGPT without verifying everything first. Sometimes, the verification can take longer than if you had done the research yourself from the beginning.”
One company in this year’s cohort, Mendhai Health, uses AI and telehealth to offer personalized physical therapy for women struggling with pelvic floor dysfunction before and after childbirth.
“AI has definitely made the entrepreneurial process more efficient and faster,” says MBA student Aanchal Arora. “Still, overreliance on AI, at least at this point, can hamper your understanding of customers. You need to be careful with every decision you make.”
Kenney notes the way large language models are built can make them less useful for entrepreneurs.
“Some AI tools can increase your speed by doing things like automatically sorting your email or helping you vibe code apps, but many AI tools are built off averages, and those can be less effective when you’re trying to connect with a very specific demographic,” Kenney says. “It’s not helpful to have AI tell you about an average person, you need to personally have strong validation that your specific customer exists. If you try to build a tool for an average person, you may build a tool for no one at all.”
Students eager to embrace AI may also be overwhelmed by the sheer volume of tools available today. Fortunately, MIT students have a long history of being at the forefront of any new technology, and this year’s delta v cohort featured teams leveraging AI at the core of their solutions and in every step of their entrepreneurial journeys.
MIT Sloan MBA candidate Murtaza Jameel, whose company Cognify uses AI to simulate user interactions with websites and apps to improve digital experiences, describes his firm as an AI-native business.
“We’re building a design intelligence tool that replaces product testing with instant, predictive simulations of user behavior,” Jameel explains. “We’re trying to integrate AI into all of our processes: ideation, go to market, programming. All of our building has been done with AI coding tools. I have a custom bot that I’ve fed tons of information about our company to, and it’s a thought partner I’m speaking to every single day.”
The more things change…
One of the fundamentals the Trust Center doesn’t see changing is the need for students to get out of the lab or the classroom to talk to customers.
“There are ways that AI can unlock new capabilities and make things move faster, but we haven’t turned our curriculum on its head because of AI,” Soltoff says. “In delta v, we stress first and foremost: What are you building and who are you building it for? AI alone can’t tell you who your customer is, what they want, and how you can better serve their needs. You need to go out into the world to make that happen.”
Indeed, many of the biggest hurdles delta v teams faced this summer looked a lot like the hurdles entrepreneurs have always faced.
“We were prepared at the Trust Center to see a big change and to adapt to that, but the companies are still building and encountering the same challenges of customer identification, beachhead market identification, team dynamics,” Kenney says. “Those are still the big meaty challenges they’ve always been working on.”
Amid endless hype about AI agents and the future of work, many founders this summer still said the human side of delta v is what makes the program special.
“I came to MIT with one goal: to start a technology company,” Jameel says. “The delta v program was on my radar when I was applying to MIT. The program gives you incredible access to resources — networks, mentorship, advisors. Some of the top folks in our industry are advising us now on how to build our company. It’s really unique. These are folks who have done what you’re doing 10 or 20 years ago, all just rooting for you. That’s why I came to MIT.”
Power-outage exercises strengthen the resilience of US bases
In recent years, power outages caused by extreme weather or substation attacks have exposed the vulnerability of the electric grid. For the nation’s military bases, which are served by the grid, being ready for outages is a matter of national security. What better way to test readiness than to cut the power?
Lincoln Laboratory is doing just that with its Energy Resilience Readiness Exercises (ERREs). During an exercise, a base is disconnected from the grid, testing the ability of backup power systems and service members to work through failure. Lasting up to 15 hours, each exercise mimics a real outage event with limited forewarning to the base population.
“No one thought that this kind of real-world test would be accepted. We’ve now done it at 33 installations, impacting over 800,000 people,” says Jean Sack ’13, SM ’15, who leads the program with Christopher Lashway and Annie Weathers in the laboratory's Energy Systems Group.
According to a Department of Energy report, 70 percent of the nation’s transmission lines are approaching end of life. This aging infrastructure, combined with increasing power demands and interdependencies, threatens cascading failures. In response, the Department of Defense (DoD) has sharpened its focus on energy resilience, or the ability to anticipate, withstand, and recover from outages. On a base, an outage could disrupt critical missions, open the door to physical or cyberattacks, and cut off water supplies.
“Threats to this already-fragile system are increasing. That's why this work is so important,” Sack says.
Safely cutting power
Before an exercise, the laboratory team works closely with base leadership and infrastructure personnel to carefully plan how it will safely disconnect from utility power. Over multiple site visits, they study each building and mission to understand power capabilities, ensure health and safety, and develop contingency plans.
“We get people together who may never have spoken before, but depend on one another. We like to say ‘connecting mission owners to their utility providers,’” says Lashway, a former electrician turned energy-systems researcher. “The planning process is a huge learning opportunity, and a chance to fix issues ahead of the outage.”
On the day of the outage, laboratory staff are on site to ensure the process runs smoothly, but the base is meant to run the exercise. Since beginning in 2018, the ERRE campaign has reached huge installations, including Fort Bragg, a U.S. Army base in North Carolina that sees nearly 150,000 people daily, and sites as far away as England and Japan.
The key is not to limit an exercise’s scope. All facilities and missions, especially those that are critical, should be included, and service members are tasked with working through issues. To make exercises even more useful as evaluations of readiness, some are modified with scripted scenarios simulating real-world incidents. These scenarios might challenge personnel to handle a cyberattack on control systems, the shutdown of a backup power plant, or a rocket launch during an outage.
“We can do all the tabletop exercises in the world, but when you actually pull the plug, the question is, what actually goes on?” former assistant secretary of defense for sustainment Robert McMahon said at a joint House Armed Services subcommittee hearing about initial exercises. “Perhaps the most important lesson that I've seen is a lack of appreciation and understanding by our senior leaders at the installation level, all the way up to my level, of what we thought was going to happen versus what actually occurred, and then being able to apply those lessons learned.”
Illuminating issues
The ERREs have brought to light common issues across bases. One of them is a reliance on fragile or faulty backup systems. For example, electronic equipment experiences a hard shutdown if it isn't supported by a backup battery to bridge power transitions. In some instances, these battery systems failed or unexpectedly depleted due to age or generator issues. “We see a giant comms room drop out, and then phones and computers don’t work. It emphasizes the need for redundancies,” Lashway says.
Generators also present issues. Some fail because they aren’t regularly serviced or refueled through the long outage. Sometimes, personnel mistakenly assumed a generator would support their entire building, requiring reconfigurations after the fact. Air conditioning systems are often excluded from generator-supported emergency circuits, but rooms with a large number of computers generate a lot of heat, and overheated equipment quickly shuts down.
The exercises have also unveiled interdependencies and chain reactions. In one case, a fire-suppression system accidentally went off, dousing a hangar in foam. The cause was a pressure drop at the exact moment a switch reset.
“Executing an operation at this scale stresses how each of these factors need to work harmoniously and efficiently to ensure that the base, and ultimately missions, remain functional,” Lashway says.
Beyond resolving technical issues, the exercises have been valuable for practicing coordination and following chains of command. They’ve also revealed social challenges of operating through outages. For instance, some DoD guidance restricts the use of generators at daycare centers, so parents needed to coordinate care while maintaining their mission. 
After an exercise, the laboratory compiles all findings in a report for the base. It provides time stamps of significant events by building, identifies links between issues, and summarizes common problems site-wide. It then provides recommendations to address vulnerabilities. “Our goal is to provide as much justification as possible for the base to get the resources they need to fix a problem,” Sack says. 
The researchers also want to help bases prevent issues and avoid costly repairs. Recently, they’ve been using power meters to capture electrical data before, during, and after an exercise. These monitoring tools reveal power-quality issues that are otherwise hidden.
“Not all power is created equal, and standards must be followed to ensure equipment, especially specialized military equipment, operates properly and doesn’t get damaged over the long term. Power metering provides a view into that,” says Lashway.
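As a simple illustration of what meter data makes visible, here is a hypothetical Python sketch that scans logged RMS voltage for excursions outside a tolerance band. The nominal voltage, the ±10 percent band, and the sample readings are assumptions for illustration, not values or tools from the laboratory’s exercises.

```python
# Hypothetical example of a power-quality check on meter logs;
# thresholds and data are illustrative assumptions.
import numpy as np

NOMINAL_V = 480.0                      # assumed nominal service voltage
LOW, HIGH = 0.9 * NOMINAL_V, 1.1 * NOMINAL_V

def find_sags(timestamps, volts_rms):
    """Return (start, end) timestamp pairs where voltage left the band."""
    volts_rms = np.asarray(volts_rms, dtype=float)
    out_of_band = (volts_rms < LOW) | (volts_rms > HIGH)
    events, start = [], None
    for t, bad in zip(timestamps, out_of_band):
        if bad and start is None:
            start = t                  # excursion begins
        elif not bad and start is not None:
            events.append((start, t))  # excursion ends
            start = None
    if start is not None:              # excursion ran to the end of the log
        events.append((start, timestamps[-1]))
    return events

# Example: one-second samples with a brief sag during a generator transfer.
ts = list(range(10))
v = [480, 481, 479, 430, 425, 431, 478, 480, 482, 480]
print(find_sags(ts, v))  # -> [(3, 6)]
```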
Sparking resiliency ahead
Lincoln Laboratory’s ERRE campaign has resulted in legislation. In 2021, Congress passed a law requiring each military branch to perform at least five ERREs, or "Black Start Exercises," per year through 2027. That law was recently reauthorized until 2032. The team has transitioned the ERRE process to two private companies, as well as within the Air Force and Army, to conduct exercises in the coming years.
“It's very exciting that this got Congress' attention and has scaled across the DoD,” says Nick Judson, who leads the portfolio of energy, water, and natural hazard resilience efforts within the Energy Systems Group. “This idea started out as a way to enable change on DoD installations, and included a lot of difficult conversations about turning the power off to critical missions, and now we're seeing significant improvements to the readiness of bases and their missions.”
It may even be encouraging some healthy competition across the services, Lashway says. At a recent regional event in Colorado, three U.S. Space Force installations each vied to push the scope and duration of their exercises.
The team’s focus is now turning to related analysis, such as water resiliency. Water and wastewater systems are vulnerable to disruptions beyond power outages, including equipment failure, sabotage, or water source depletion.
“We are conducting tabletop exercises and workshops uniting stakeholders around the importance of water and wastewater systems to enable missions,” says Amelia Servi, who leads this work. “So far, we’ve seen great engagement from groups managing water systems who have been seeking funds to fix these aging systems, and from missions who have previously taken water for granted.”
They are also working on long-term energy planning, including ways for installations to be less dependent on the grid. One way is to install microgrids, which are self-sufficient systems that can tap into stored energy. According to Sack, microgrids are highly customized and complicated to operate, so one goal is to design a standardized system. The team's recent power-metering data is providing useful initial inputs into such a design.
The researchers are also considering how this work could improve energy resiliency for civilians. Large-scale exercises might not be feasible for the public, but they could be conducted in areas important to public safety, or in places that rely on military resources. During one exercise in Georgia, city residents partially depended upon a base's power plant, so that exercise included working with the city to ensure its resiliency to the outage.
“Striking that balance of testing readiness without causing harm is a big challenge in this field and a huge motivation for us,” Sack says. “We are encouraged by the outcomes. Our work is impacting the services at the highest level, rewriting infrastructure policy, and making sure people can better sustain operations during grid disruptions.”
What does the future hold for generative AI?
When OpenAI introduced ChatGPT to the world in 2022, it brought generative artificial intelligence into the mainstream and started a snowball effect that led to its rapid integration into industry, scientific research, health care, and the everyday lives of people who use the technology.
What comes next for this powerful but imperfect tool?
With that question in mind, hundreds of researchers, business leaders, educators, and students gathered at MIT’s Kresge Auditorium for the inaugural MIT Generative AI Impact Consortium (MGAIC) Symposium on Sept. 17 to share insights and discuss the potential future of generative AI.
“This is a pivotal moment — generative AI is moving fast. It is our job to make sure that, as the technology keeps advancing, our collective wisdom keeps pace,” said MIT Provost Anantha Chandrakasan to kick off this first symposium of the MGAIC, a consortium of industry leaders and MIT researchers launched in February to harness the power of generative AI for the good of society.
Underscoring the critical need for this collaborative effort, MIT President Sally Kornbluth said that the world is counting on faculty, researchers, and business leaders like those in MGAIC to tackle the technological and ethical challenges of generative AI as the technology advances.
“Part of MIT’s responsibility is to keep these advances coming for the world. … How can we manage the magic [of generative AI] so that all of us can confidently rely on it for critical applications in the real world?” Kornbluth said.
To keynote speaker Yann LeCun, chief AI scientist at Meta, the most exciting and significant advances in generative AI will most likely not come from continued improvements or expansions of large language models like Llama, GPT, and Claude. Through training, these enormous generative models learn patterns in huge datasets to produce new outputs.
Instead, LeCun and others are working on the development of “world models” that learn the same way an infant does — by seeing and interacting with the world around them through sensory input.
“A 4-year-old has seen as much data through vision as the largest LLM. … The world model is going to become the key component of future AI systems,” he said.
A robot with this type of world model could learn to complete a new task on its own with no training. LeCun sees world models as the best approach for companies to make robots smart enough to be generally useful in the real world.
But even if future generative AI systems do get smarter and more human-like through the incorporation of world models, LeCun doesn’t worry about robots escaping from human control.
Scientists and engineers will need to design guardrails to keep future AI systems on track, but as a society, we have already been doing this for millennia by designing rules to align human behavior with the common good, he said.
“We are going to have to design these guardrails, but by construction, the system will not be able to escape those guardrails,” LeCun said.
Keynote speaker Tye Brady, chief technologist at Amazon Robotics, also discussed how generative AI could impact the future of robotics.
For instance, Amazon has already incorporated generative AI technology into many of its warehouses to optimize how robots travel and move material to streamline order processing.
He expects many future innovations will focus on the use of generative AI in collaborative robotics by building machines that allow humans to become more efficient.
“GenAI is probably the most impactful technology I have witnessed throughout my whole robotics career,” he said.
Other presenters and panelists discussed the impacts of generative AI in businesses, from large-scale enterprises like Coca-Cola and Analog Devices to startups like health care AI company Abridge.
Several MIT faculty members also spoke about their latest research projects, including the use of AI to reduce noise in ecological image data, designing new AI systems that mitigate bias and hallucinations, and enabling LLMs to learn more about the visual world.
After a day spent exploring new generative AI technology and discussing its implications for the future, MGAIC faculty co-lead Vivek Farias, the Patrick J. McGovern Professor at MIT Sloan School of Management, said he hoped attendees left with “a sense of possibility, and urgency to make that possibility real.”
Meet the 2025 tenured professors in the School of Humanities, Arts, and Social Sciences
In 2025, six faculty were granted tenure in the MIT School of Humanities, Arts, and Social Sciences.
Sara Brown is an associate professor in the Music and Theater Arts Section. She develops stage designs for theater, opera, and dance by approaching the scenographic space as a catalyst for collective imagination. Her work is rooted in curiosity and interdisciplinary collaboration, and spans virtual environments, immersive performance installations, and evocative stage landscapes. Her recent projects include “Carousel” at the Boston Lyric Opera; the virtual dance performance “The Other Shore” at the Massachusetts Museum of Contemporary Art and Jacob’s Pillow; and “The Lehman Trilogy” at the Huntington Theatre Company. Her upcoming co-directed work, “Circlusion,” takes place within a fully immersive inflatable space and reimagines the female body’s response to power and violence. Her designs have been seen at the BAM Next Wave Festival in New York, the Festival d’Automne in Paris, and the American Repertory Theater in Cambridge.
Naoki Egami is a professor in the Department of Political Science. He is also a faculty affiliate of the MIT Institute for Data, Systems, and Society. Egami specializes in political methodology and develops statistical methods for questions in political science and the social sciences. His current research programs focus on three areas: external validity and generalizability; machine learning and AI for the social sciences; and causal inference with network and spatial data. His work has appeared in various academic journals in political science, statistics, and computer science, such as American Political Science Review, American Journal of Political Science, Journal of the American Statistical Association, Journal of the Royal Statistical Society (Series B), NeurIPS, and Science Advances. Before joining MIT, Egami was an assistant professor at Columbia University. He received a PhD from Princeton University (2020) and a BA from the University of Tokyo (2015).
Rachel Fraser is an associate professor in the Department of Linguistics and Philosophy. Before coming to MIT, Fraser taught at the University of Oxford, where she also completed her graduate work in philosophy. She has interests in epistemology, language, feminism, aesthetics, and political philosophy. At present, her main project is a book manuscript on the epistemology of narrative.
Brian Hedden PhD ’12 is a professor in the Department of Linguistics and Philosophy, with a shared appointment in the MIT Schwarzman College of Computing in the Department of Electrical Engineering and Computer Science. His research focuses on how we ought to form beliefs and make decisions. He works in epistemology, decision theory, and ethics, including ethics of AI. He is the author of “Reasons without Persons: Rationality, Identity, and Time” (Oxford University Press, 2015) and articles on topics including collective action problems, legal standards of proof, algorithmic fairness, and political polarization. Prior to joining MIT, he was a faculty member at the Australian National University and the University of Sydney, and a junior research fellow at Oxford. He received his BA from Princeton University in 2006 and his PhD from MIT in 2012.
Viola Schmitt is an associate professor in the Department of Linguistics and Philosophy. She is a linguist with a special interest in semantics. Much of her work focuses on trying to understand general constraints on human language meaning; that is, the principles regulating which meanings can be expressed by human languages and how languages can package meaning. Variants of this question were also central to grants she received from the Austrian and German research foundations. She earned her PhD in linguistics from the University of Vienna and held postdoctoral and lecturer positions at the universities of Vienna, Graz, and Göttingen, and at the University of California at Los Angeles. Her most recent position was as a junior professor at Humboldt University in Berlin.
Miguel Zenón is an associate professor in the Music and Theater Arts Section. The Puerto Rican alto saxophonist, composer, band leader, music producer, and educator is a Grammy Award winner, the recipient of a Guggenheim Fellowship, a MacArthur Fellowship, and a Doris Duke Artist Award. He also holds an honorary doctorate degree in the arts from Universidad del Sagrado Corazón. Zenón has released 18 albums as a band leader and collaborated with some of the great musicians and ensembles of his time. As a composer, Zenón has been commissioned by Chamber Music America, Logan Center for The Arts, The Hyde Park Jazz Festival, Miller Theater, The Hewlett Foundation, Peak Performances, and many of his peers. Zenón has given hundreds of lectures and master classes at institutions all over the world, and in 2011 he founded Caravana Cultural — a program that presents jazz concerts free of charge in rural areas of Puerto Rico.
Inflammation jolts “sleeping” cancer cells awake, enabling them to multiply again
Cancer cells have one relentless goal: to grow and divide. While most stick together within the original tumor, some rogue cells break away and travel to distant organs. There, they can lie dormant — undetectable and not dividing — for years, like landmines waiting to go off.
This migration of cancer cells, called metastasis, is especially common in breast cancer. For many patients, the disease can return months — or even decades — after initial treatment, this time in an entirely different organ.
Robert Weinberg, the Daniel K. Ludwig Professor for Cancer Research at MIT and a Whitehead Institute for Biomedical Research founding member, has spent decades unraveling the complex biology of metastasis and pursuing research that could improve survival rates among patients with metastatic breast cancer — or prevent metastasis altogether.
In his latest study, Weinberg, postdoc Jingwei Zhang, and colleagues ask a critical question: What causes these dormant cancer cells to erupt into a frenzy of growth and division? The group’s findings, published Sept. 1 in The Proceedings of the National Academy of Sciences (PNAS), point to a unique culprit.
This awakening of dormant cancer cells, they’ve discovered, isn’t a spontaneous process. Instead, the wake-up call comes from the inflamed tissue surrounding the cells. One trigger for this inflammation is bleomycin, a common chemotherapy drug that can scar and thicken lung tissue.
“The inflammation jolts the dormant cancer cells awake,” Weinberg says. “Once awakened, they start multiplying again, seeding new life-threatening tumors in the body.”
Decoding metastasis
There’s a lot that scientists still don’t know about metastasis, but this much is clear: Cancer cells must undergo a long and arduous journey to achieve it. The first step is to break away from their neighbors within the original tumor.
Normally, cells stick to one another using surface proteins that act as molecular “velcro,” but some cancer cells can acquire genetic changes that disrupt the production of these proteins and make them more mobile and invasive, allowing them to detach from the parent tumor. 
Once detached, they can penetrate blood vessels and lymphatic channels, which act as highways to distant organs.
While most cancer cells die at some point during this journey, a few persist. These cells exit the bloodstream and invade different tissues — lungs, liver, bone, and even the brain — to give birth to new, often more-aggressive tumors.
“Almost 90 percent of cancer-related deaths occur not from the original tumor, but when cancer cells spread to other parts of the body,” says Weinberg. “This is why it’s so important to understand how these ‘sleeping’ cancer cells can wake up and start growing again.”
Setting up shop in new tissue comes with changes in surroundings — the “tumor microenvironment” — to which the cancer cells may not be well-suited. These cells face constant threats, including detection and attack by the immune system. 
To survive, they often enter a protective state of dormancy that puts a pause on growth and division. This dormant state also makes them resistant to conventional cancer treatments, which often target rapidly dividing cells.
To investigate what makes this dormancy reversible months or years down the line, researchers in the Weinberg Lab injected human breast cancer cells into mice. These cancer cells were modified to produce a fluorescent protein, allowing the scientists to track their behavior in the body.
The group then focused on cancer cells that had lodged themselves in the lung tissue. By examining them for specific proteins — Ki67, ITGB4, and p63 — that act as markers of cell activity and state, the researchers were able to confirm that these cells were in a non-dividing, dormant state.
Previous work from the Weinberg Lab had shown that inflammation in organ tissue can provoke dormant breast cancer cells to start growing again. In this study, the team tested bleomycin, a chemotherapy drug known to cause lung inflammation, which can be given to patients after surgery to lower the risk of cancer recurrence.
The researchers found that lung inflammation from bleomycin was sufficient to trigger the growth of large lung cancer colonies in treated mice — and to shift these once-dormant cells toward a more invasive and mobile state.
Zeroing in on the tumor microenvironment, the team identified a type of immune cell, the M2 macrophage, as a driver of this process. These macrophages release molecules called epidermal growth factor receptor (EGFR) ligands, which bind to receptors on the surface of dormant cancer cells. This activates a cascade of signals that provokes dormant cancer cells to start multiplying rapidly.
But EGFR signaling is only the initial spark that ignites the fire. “We found that once dormant cancer cells are awakened, they retain what we call an ‘awakening memory,’” Zhang says. “They no longer require ongoing inflammatory signals from the microenvironment to stay active [growing and multiplying] — they remember the awakened state.”
While signals related to inflammation are necessary to awaken dormant cancer cells, exactly how much signaling is needed remains unclear. “This aspect of cancer biology is particularly challenging, because multiple signals contribute to the state change in these dormant cells,” Zhang says.
The team has already identified one key player in the awakening process, but understanding the full set of signals and how each contributes is far more complex — a question they are continuing to investigate in their new work. 
Studying these pivotal changes in the lives of cancer cells — such as their transition from dormancy to active growth — will deepen our scientific understanding of metastasis and, as researchers in the Weinberg Lab hope, lead to more effective treatments for patients with metastatic cancers.
Biogen groundbreaking stirs optimism in Kendall Square
Nearly 300 people gathered Tuesday to mark the ceremonial groundbreaking for Biogen’s new state-of-the-art facility in Kendall Square. The project is the first building to be constructed at MIT’s Kendall Common on the former Volpe federal site, and will serve as a consolidated headquarters for the pioneering biotechnology company, which has called Cambridge home for more than 40 years.
In marking the start of construction, Massachusetts Governor Maura Healey addressed the enthusiastic crowd, saying, “Massachusetts science saves lives — saves lives here, saves lives around the world. We celebrate that in Biogen today, we celebrate that in Kendall Common, and we celebrate that in this incredible ecosystem that extends all across our great state. Today, Biogen is not just building a new facility, they are building the future of medicine and innovation.”
Emceed by Kirk Taylor, president and CEO of the Massachusetts Life Sciences Center, the event featured a specially created Lego model of the new building and a historic timeline of Biogen’s origin story overlaid on Kendall Square’s transformation. The program’s theme — “Making breakthroughs happen in Kendall Square” — seemed to elicit a palpable sense of pride among the Biogen and MIT employees, business leaders, and public officials in attendance.
MIT President Sally Kornbluth reflected on the vibrancy of the local innovation ecosystem: “I sometimes say that Kendall Square’s motto might as well be ‘talent in proximity.’ By following that essential recipe, Biogen’s latest decision to intensify its presence here promises great things for the whole region.” Kornbluth described Biogen’s move as “a very important signal to the world right now.”
Biogen’s March 2025 announcement that it will centralize operations at 75 Broadway was lauded as a show of strength for the historic company and the life sciences sector. The 580,000-square-foot research and development headquarters, designed by Elkus Manfredi Architects, will optimize Biogen’s scientific discovery and clinical processes. The new facility is scheduled to open in 2028.
CEO Chris Viehbacher shared his thoughts on Biogen’s decision: “I am proud to stand here with so many individuals who have shaped our past and who are dedicated to our future in Kendall Square. … We decided to invest in the next chapter of Kendall Square because of what this community represents: talent, energy, ingenuity, and collaboration.” Biogen was founded in 1978 by Nobel laureates Phillip Sharp (an MIT Institute Professor and professor of biology emeritus) and Wally Gilbert, both of whom were not only present, but received an impromptu standing ovation, led by Viehbacher.
Kendall Common is being developed by MIT’s Investment Management Company (MITIMCo) and will ultimately include four commercial buildings, four residential buildings (including affordable housing), open space, retail, entertainment, and a community center. MITIMCo’s joint venture partner for the Biogen project is BioMed Realty, a Blackstone Real Estate portfolio company.
Senior Vice President Patrick Rowe, who oversees MITIMCo’s real estate group, says, “Biogen is such a critical anchor for the area. I’m excited for the impact that this project will have on Kendall Square, and for the way that the Kendall Common development can help to further advance our innovation ecosystem.”
Could a primordial black hole’s last burst explain a mysteriously energetic neutrino?
The last gasp of a primordial black hole may be the source of the highest-energy “ghost particle” detected to date, a new MIT study proposes.
In a paper appearing today in Physical Review Letters, MIT physicists put forth a strong theoretical case that a recently observed, highly energetic neutrino may have been the product of a primordial black hole exploding outside our solar system.
Neutrinos are sometimes referred to as ghost particles, for their invisible yet pervasive nature: They are the most abundant particle type in the universe, yet they leave barely a trace. Scientists recently identified signs of a neutrino with the highest energy ever recorded, but the source of such an unusually powerful particle has yet to be confirmed.
The MIT researchers propose that the mysterious neutrino may have come from the inevitable explosion of a primordial black hole. Primordial black holes (PBHs) are hypothetical black holes that are microscopic versions of the much more massive black holes that lie at the center of most galaxies. PBHs are theorized to have formed in the first moments following the Big Bang. Some scientists believe that primordial black holes could constitute most or all of the dark matter in the universe today.
Like their more massive counterparts, PBHs should leak energy and shrink over their lifetimes, in a process known as Hawking radiation, which was predicted by the physicist Stephen Hawking. The more a black hole radiates, the hotter it gets and the more high-energy particles it releases. This is a runaway process that should produce an incredibly violent explosion of the most energetic particles just before a black hole evaporates away.
The MIT physicists calculate that, if PBHs make up most of the dark matter in the universe, then a small subpopulation of them would be undergoing their final explosions today throughout the Milky Way galaxy. And there should be a non-negligible chance that such an explosion occurred relatively close to our solar system. The explosion would have released a burst of high-energy particles, including neutrinos, one of which could plausibly have struck a detector on Earth.
If such a scenario had indeed occurred, the recent detection of the highest-energy neutrino would represent the first observation of Hawking radiation, which has long been assumed, but has never been directly observed from any black hole. What’s more, the event might indicate that primordial black holes exist and that they make up most of dark matter — a mysterious substance that comprises 85 percent of the total matter in the universe, the nature of which remains unknown.
“It turns out there’s this scenario where everything seems to line up, and not only can we show that most of the dark matter [in this scenario] is made of primordial black holes, but we can also produce these high-energy neutrinos from a fluke nearby PBH explosion,” says study lead author Alexandra Klipfel, a graduate student in MIT’s Department of Physics. “It’s something we can now try to look for and confirm with various experiments.”
The study’s other co-author is David Kaiser, professor of physics and the Germeshausen Professor of the History of Science at MIT.
High-energy tension
In February, scientists at the Cubic Kilometer Neutrino Telescope, or KM3NeT, reported the detection of the highest-energy neutrino recorded to date. KM3NeT is a large-scale underwater neutrino detector located at the bottom of the Mediterranean Sea, where the environment is meant to mute the effects of any particles other than neutrinos.
The scientists operating the detector picked up signatures of a passing neutrino with an energy of over 100 peta-electron-volts (PeV). One PeV is equivalent to 1 quadrillion electron volts.
“This is an incredibly high energy, far beyond anything humans are capable of accelerating particles up to,” Klipfel says. “There’s not much consensus on the origin of such high-energy particles.”
Similarly high-energy neutrinos, though not as high as what KM3NeT observed, have been detected by the IceCube Observatory — a neutrino detector embedded deep in the ice at the South Pole. IceCube has detected about half a dozen such neutrinos, whose unusually high energies have also eluded explanation. Whatever their source, the IceCube observations enable scientists to work out a plausible rate at which neutrinos of those energies typically hit Earth. If this rate were correct, however, it would be extremely unlikely to have seen the ultra-high-energy neutrino that KM3NeT recently detected. The two detectors’ discoveries, then, seemed to be what scientists call “in tension.”
Kaiser and Klipfel, who had been working on a separate project involving primordial black holes, wondered: Could a PBH have produced both the KM3NeT neutrino and the handful of IceCube neutrinos, under conditions in which PBHs comprise most of the dark matter in the galaxy? If they could show a chance existed, it would raise an even more exciting possibility — that both observatories observed not only high-energy neutrinos but also the remnants of Hawking radiation.
“Our best chance”
The first step the scientists took in their theoretical analysis was to calculate how many particles would be emitted by an exploding black hole. All black holes should slowly radiate over time. The larger a black hole, the colder it is, and the lower-energy particles it emits as it slowly evaporates. Thus, any particles that are emitted as Hawking radiation from heavy stellar-mass black holes would be near impossible to detect. By the same token, however, much smaller primordial black holes would be very hot and emit high-energy particles in a process that accelerates the closer the black hole gets to disappearing entirely.
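The inverse relationship between a black hole's mass and its temperature can be made concrete with the standard Hawking temperature formula, T = ħc³ / (8πGMk_B). The short Python sketch below simply evaluates that formula for a few illustrative masses; the specific masses are assumptions for demonstration, not values from the study.

    # Evaluate the standard Hawking temperature formula,
    #   T = hbar * c^3 / (8 * pi * G * M * k_B),
    # for a few illustrative masses: heavy black holes are cold,
    # light ones are hot.
    import math

    HBAR = 1.054571817e-34    # reduced Planck constant, J*s
    C = 2.99792458e8          # speed of light, m/s
    G = 6.67430e-11           # gravitational constant, m^3 kg^-1 s^-2
    K_B = 1.380649e-23        # Boltzmann constant, J/K
    M_SUN = 1.989e30          # solar mass, kg

    def hawking_temperature(mass_kg):
        return HBAR * C**3 / (8 * math.pi * G * mass_kg * K_B)

    for label, mass in [
        ("10-solar-mass black hole", 10 * M_SUN),
        ("asteroid-mass PBH (1e12 kg)", 1e12),
        ("nearly evaporated PBH (1 kg)", 1.0),
    ]:
        print(f"{label}: T ~ {hawking_temperature(mass):.2e} K")

A 10-solar-mass black hole comes out around a hundred-millionth of a kelvin, far colder than the cosmic microwave background, while a nearly evaporated hole is almost unimaginably hot, which is why only the smallest primordial black holes could ever betray themselves through their radiation.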
“We don’t have any hope of detecting Hawking radiation from astrophysical black holes,” Klipfel says. “So if we ever want to see it, the smallest primordial black holes are our best chance.”
The researchers calculated the number and energies of particles that a black hole should emit, given its temperature and shrinking mass. They estimate that in its final nanosecond, once a black hole is smaller than an atom, it should emit a final burst of particles, including about 10^20 neutrinos, or roughly a hundred quintillion particles, with energies of about 100 peta-electron-volts (around the energy that KM3NeT observed).
They used this result to calculate the number of PBH explosions that would have to occur in a galaxy in order to explain the reported IceCube results. They found that, in our region of the Milky Way galaxy, about 1,000 primordial black holes should be exploding per cubic parsec per year. (A parsec is a unit of distance equal to about 3.26 light-years, or roughly 31 trillion kilometers.)
They then calculated the distance at which one such explosion in the Milky Way could have occurred, such that just a handful of the high-energy neutrinos could have reached Earth and produced the recent KM3NeT event. They find that a PBH would have to explode relatively close to our solar system, at about 2,000 times the distance between the Earth and the sun.
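As a rough plausibility check, the geometry of such a burst can be worked out from the figures quoted above. The sketch below assumes an isotropic burst of 10^20 neutrinos at about 2,000 astronomical units and asks how many would cross Earth's disk; this is back-of-envelope arithmetic, not the paper's actual calculation, and it ignores detector size and interaction probability.

    # Back-of-envelope check using figures quoted above: an isotropic burst
    # of ~1e20 neutrinos from ~2,000 AU away. Illustrative geometry only,
    # not the analysis in the paper.
    import math

    AU = 1.495978707e11        # astronomical unit, in meters
    N_EMITTED = 1e20           # neutrinos in the final burst (from the text)
    DISTANCE = 2000 * AU       # assumed distance of the explosion
    R_EARTH = 6.371e6          # Earth's radius, in meters

    fluence = N_EMITTED / (4 * math.pi * DISTANCE**2)    # per square meter
    crossing_earth = fluence * math.pi * R_EARTH**2      # Earth's disk

    print(f"fluence at Earth: {fluence:.2e} neutrinos per m^2")
    print(f"neutrinos crossing Earth: {crossing_earth:.2e}")
    # Only a minuscule fraction of those crossing would interact in any
    # detector, which is why even a single recorded event is notable.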
The particles emitted from such a nearby explosion would radiate in all directions. However, the team found there is a small, 8 percent chance that an explosion could happen close enough to the solar system, about once every 14 years, for enough ultra-high-energy neutrinos to hit the Earth.
“An 8 percent chance is not terribly high, but it’s well within the range for which we should take such chances seriously — all the more so because so far, no other explanation has been found that can account for both the unexplained very-high-energy neutrinos and the even more surprising ultra-high-energy neutrino event,” Kaiser says.
The team’s scenario seems to hold up, at least in theory. To confirm their idea will require many more detections of particles, including neutrinos at “insanely high energies.” Then, scientists can build up better statistics regarding such rare events.
“In that case, we could use all of our combined experience and instrumentation, to try to measure still-hypothetical Hawking radiation,” Kaiser says. “That would provide the first-of-its-kind evidence for one of the pillars of our understanding of black holes — and could account for these otherwise anomalous high-energy neutrino events as well. That’s a very exciting prospect!”
In tandem, other efforts to detect nearby PBHs could further bolster the hypothesis that these unusual objects make up most or all of the dark matter.
This work was supported, in part, by the National Science Foundation, MIT’s Center for Theoretical Physics – A Leinweber Institute, and the U.S. Department of Energy.
New 3D bioprinting technique may improve production of engineered tissue
The field of tissue engineering aims to replicate the structure and function of real biological tissues. This engineered tissue has potential applications in disease modeling, drug discovery, and implantable grafts.
3D bioprinting, which uses living cells, biocompatible materials, and growth factors to build three-dimensional tissue and organ structures, has emerged as a key tool in the field. To date, one of the most widely used approaches to bioprinting relies on additive manufacturing techniques and digital models, depositing 2D layers of bio-inks (cells suspended in a soft gel) into a support bath, layer by layer, to build a 3D structure. While these techniques enable fabrication of complex architectures with features that are not easy to build manually, current approaches have limitations.
“A major drawback of current 3D bioprinting approaches is that they do not integrate process control methods that limit defects in printed tissues. Incorporating process control could improve inter-tissue reproducibility and enhance resource efficiency, for example limiting material waste,” says Ritu Raman, the Eugene Bell Career Development Chair of Tissue Engineering and an assistant professor of mechanical engineering.
She adds, “Given the diverse array of available 3D bioprinting tools, there is a significant need to develop process optimization techniques that are modular, efficient, and accessible.”
The need motivated Raman to seek the expertise of Professor Bianca Colosimo of the Polytechnic University of Milan, also known as Polimi. Colosimo recently completed a sabbatical at MIT, which was hosted by John Hart, Class of 1922 Professor, co-director of MIT’s Initiative for New Manufacturing, director of the Center for Advanced Production Technologies, and head of the Department of Mechanical Engineering.
“Artificial Intelligence and data mining are already reshaping our daily lives, and their impact will be even more profound in the emerging field of 3D bioprinting, and in manufacturing at large,” says Colosimo. During her MIT sabbatical, she collaborated with Raman and her team to co-develop a solution that represents a first step toward intelligent bioprinting.
“This solution is now available in both our labs at Polimi and MIT, serving as a twin platform to exchange data and results across different environments and paving the way for many new joint projects in the years to come,” Colosimo says.
A new paper by Raman, Colosimo, and lead authors Giovanni Zanderigo, a Rocca Fellow at Polimi, and Ferdows Afghah of MIT, published this week in the journal Device, presents a technique that addresses this challenge. The team built and validated a modular, low-cost, and printer-agnostic monitoring technique that integrates a compact tool for layer-by-layer imaging. In their method, a digital microscope captures high-resolution images of tissues during printing and rapidly compares them to the intended design with an AI-based image analysis pipeline.
“This method enabled us to quickly identify print defects, such as depositing too much or too little bio-ink, thus helping us identify optimal print parameters for a variety of different materials,” says Raman. “The approach is a low-cost (less than $500), scalable, and adaptable solution that can be readily implemented on any standard 3D bioprinter. Here at MIT, the monitoring platform has already been integrated into the 3D bioprinting facilities in The SHED. Beyond MIT, our research offers a practical path toward greater reproducibility, improved sustainability, and automation in the field of tissue engineering. This research could have a positive impact on human health by improving the quality of the tissues we fabricate to study and treat debilitating injuries and disease.”
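While the team's actual pipeline is AI-based, the core layer-comparison step can be illustrated with a simple mask difference. The Python sketch below assumes the design and the imaged layer have already been reduced to boolean material masks, then flags over- and under-deposition against an illustrative tolerance; the function name and the 5 percent threshold are hypothetical, not from the paper.

    # Simplified illustration of a layer-by-layer print check: compare a
    # binarized microscope image of the deposited layer against the design
    # mask and flag over- or under-deposition. Names and tolerance are
    # hypothetical; the published pipeline uses AI-based image analysis.
    import numpy as np

    def layer_deviation(design_mask, print_mask):
        """Boolean arrays of equal shape (True = material present)."""
        design_area = design_mask.sum()
        over = (print_mask & ~design_mask).sum() / design_area   # excess ink
        under = (~print_mask & design_mask).sum() / design_area  # missing ink
        return over, under

    # Toy example: a square layer printed slightly oversized.
    design = np.zeros((100, 100), dtype=bool)
    design[30:70, 30:70] = True
    printed = np.zeros((100, 100), dtype=bool)
    printed[28:72, 28:72] = True

    over, under = layer_deviation(design, printed)
    if max(over, under) > 0.05:   # illustrative 5 percent tolerance
        print(f"defect flagged: over={over:.1%}, under={under:.1%}")

Running a check like this after every layer is what allows a printer to flag a defect immediately, rather than discovering it only after the full construct is finished.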
The authors indicate that the new method is more than a monitoring tool. It also serves as a foundation for intelligent process control in embedded bioprinting. By enabling real-time inspection, adaptive correction, and automated parameter tuning, the researchers anticipate that the approach can improve reproducibility, reduce material waste, and accelerate process optimization for real-world applications in tissue engineering.
A more precise way to edit the genome
A genome-editing technique known as prime editing holds potential for treating many diseases by transforming faulty genes into functional ones. However, the process carries a small chance of inserting errors that could be harmful.
MIT researchers have now found a way to dramatically lower the error rate of prime editing, using modified versions of the proteins involved in the process. This advance could make it easier to develop gene therapy treatments for a variety of diseases, the researchers say.
“This paper outlines a new approach to doing gene editing that doesn’t complicate the delivery system and doesn’t add additional steps, but results in a much more precise edit with fewer unwanted mutations,” says Phillip Sharp, an MIT Institute Professor Emeritus, a member of MIT’s Koch Institute for Integrative Cancer Research, and one of the senior authors of the new study.
With their new strategy, the MIT team was able to improve the error rate of prime editors from about one error in seven edits to one in 101 for the most-used editing mode, or from one error in 122 edits to one in 543 for a high-precision mode.
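Put differently, those reported rates imply roughly a 14-fold reduction in errors for the most-used mode and about a 4.5-fold reduction for the high-precision mode; the short calculation below simply restates the quoted figures.

    # Fold-improvement implied by the reported error rates (errors per edit).
    modes = {
        "most-used mode": (1 / 7, 1 / 101),
        "high-precision mode": (1 / 122, 1 / 543),
    }
    for name, (before, after) in modes.items():
        print(f"{name}: {before / after:.1f}x fewer errors")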
“For any drug, what you want is something that is effective, but with as few side effects as possible,” says Robert Langer, the David H. Koch Institute Professor at MIT, a member of the Koch Institute, and one of the senior authors of the new study. “For any disease where you might do genome editing, I would think this would ultimately be a safer, better way of doing it.”
Koch Institute research scientist Vikash Chauhan is the lead author of the paper, which appears today in Nature.
The potential for error
The earliest forms of gene therapy, first tested in the 1990s, involved delivering new genes carried by viruses. Subsequently, gene-editing techniques that use enzymes such as zinc finger nucleases to correct genes were developed. These nucleases are difficult to engineer, however, so adapting them to target different DNA sequences is a very laborious process.
Many years later, the CRISPR genome-editing system was discovered in bacteria, offering scientists a potentially much easier way to edit the genome. The CRISPR system consists of an enzyme called Cas9 that can cut double-stranded DNA at a particular spot, along with a guide RNA that tells Cas9 where to cut. Researchers have adapted this approach to cut out faulty gene sequences or to insert new ones, following an RNA template.
In 2019, researchers at the Broad Institute of MIT and Harvard reported the development of prime editing: a new system, based on CRISPR, that is more precise and has fewer off-target effects. A recent study reported that prime editors were successfully used to treat a patient with chronic granulomatous disease (CGD), a rare genetic disease that affects white blood cells.
“In principle, this technology could eventually be used to address many hundreds of genetic diseases by correcting small mutations directly in cells and tissues,” Chauhan says.
One of the advantages of prime editing is that it doesn’t require making a double-stranded cut in the target DNA. Instead, it uses a modified version of Cas9 that cuts just one of the complementary strands, opening up a flap where a new sequence can be inserted. A guide RNA delivered along with the prime editor serves as the template for the new sequence.
Once the new sequence has been copied, however, it must compete with the old DNA strand to be incorporated into the genome. If the old strand outcompetes the new one, the extra flap of new DNA hanging off may accidentally get incorporated somewhere else, giving rise to errors.
Many of these errors might be relatively harmless, but it’s possible that some could eventually lead to tumor development or other complications. With the most recent version of prime editors, this error rate ranges from one per seven edits to one per 122 edits for different editing modes.
“The technologies we have now are really a lot better than earlier gene therapy tools, but there’s always a chance for these unintended consequences,” Chauhan says.
Precise editing
To reduce those error rates, the MIT team decided to take advantage of a phenomenon they had observed in a 2023 study. In that paper, they found that while Cas9 usually cuts in the same DNA location every time, some mutated versions of the protein show a relaxation of those constraints. Instead of always cutting the same location, those Cas9 proteins would sometimes make their cut one or two bases further along the DNA sequence.
This relaxation, the researchers discovered, makes the old DNA strands less stable, so they get degraded, making it easier for the new strands to be incorporated without introducing any errors.
In the new study, the researchers were able to identify Cas9 mutations that dropped the error rate to 1/20th its original value. Then, by combining pairs of those mutations, they created a Cas9 editor that lowered the error rate even further, to 1/36th the original amount.
To make the editors even more accurate, the researchers incorporated their new Cas9 proteins into a prime editing system that has an RNA binding protein that stabilizes the ends of the RNA template more efficiently. This final editor, which the researchers call vPE, had an error rate just 1/60th of the original, ranging from one in 101 edits to one in 543 edits for different editing modes. These tests were performed in mouse and human cells.
The MIT team is now working on further improving the efficiency of prime editors, through further modifications of Cas9 and the RNA template. They are also working on ways to deliver the editors to specific tissues of the body, which is a longstanding challenge in gene therapy.
They also hope that other labs will begin using the new prime editing approach in their research studies. Prime editors are commonly used to explore many different questions, including how tissues develop, how populations of cancer cells evolve, and how cells respond to drug treatment.
“Genome editors are used extensively in research labs,” Chauhan says. “So the therapeutic aspect is exciting, but we are really excited to see how people start to integrate our editors into their research workflows.”
The research was funded by the Life Sciences Research Foundation, the National Institute of Biomedical Imaging and Bioengineering, the National Cancer Institute, and the Koch Institute Support (core) Grant from the National Cancer Institute.
Working to make fusion a viable energy source
George Tynan followed a nonlinear path to fusion.
After completing his undergraduate degree in aerospace engineering, Tynan worked in the industry, which spurred his interest in rocket propulsion technology. Because most methods for propulsion involve the manipulation of hot ionized matter, or plasmas, Tynan focused his attention on plasma physics.
It was then that he realized that plasmas could also drive nuclear fusion. “As a potential energy source, it could really be transformative, and the idea that I could work on something that could have that kind of impact on the future was really attractive to me,” he says.
That same drive, to realize the promise of fusion by researching both plasma physics and fusion engineering, drives Tynan today. It’s work he will be pursuing as the Norman C. Rasmussen Adjunct Professor in the Department of Nuclear Science and Engineering (NSE) at MIT.
An early interest in fluid flow
Tynan’s enthusiasm for science and engineering traces back to his childhood. His electrical engineer father found employment in the U.S. space program and moved the family to Cape Canaveral in Florida.
“This was in the ’60s, when we were launching Saturn V to the moon, and I got to watch all the launches from the beach,” Tynan remembers. That experience was formative, and Tynan became fascinated with how fluids flow.
“I would stick my hand out the window and pretend it was an airplane wing and tilt it with oncoming wind flow and see how the force would change on my hand,” Tynan laughs. The interest eventually led to an undergraduate degree in aerospace engineering at California State Polytechnic University in Pomona.
The switch to a new career would come after his work in the private sector, when Tynan discovered an interest in the use of plasmas for propulsion systems. He moved to the University of California at Los Angeles for graduate school, and it was there that he realized plasmas could also anchor fusion, a realization that drew him into the field.
This was in the ’80s, when climate change was not as much in the public consciousness as it is today. Even so, “I knew there’s not an infinite amount of oil and gas around, and that at some point we would have to have widespread adoption of nuclear-based sources,” Tynan remembers. He was also attracted by the sustained effort it would take to make fusion a reality.
Doctoral work
To create energy from fusion, it’s important to get an accurate measurement of the “energy confinement time,” which is a measure of how long it takes for the hot fuel to cool down when all heat sources are turned off. When Tynan started graduate school, this measure was still an empirical guess. He decided to focus his research on the physics behind the observed confinement time.
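For readers unfamiliar with the term, the standard definition is simple: the energy confinement time is the plasma's stored thermal energy divided by the power flowing out of it. The sketch below evaluates that ratio; the numbers are purely illustrative and not tied to any particular device.

    # Standard definition of the energy confinement time:
    #   tau_E = W / P_loss,
    # the stored thermal energy divided by the power leaving the plasma.
    # The values below are illustrative assumptions.

    def confinement_time(stored_energy_joules, loss_power_watts):
        return stored_energy_joules / loss_power_watts

    W = 350e6        # stored thermal energy, in joules (illustrative)
    P_LOSS = 100e6   # power leaving the plasma, in watts (illustrative)
    print(f"tau_E = {confinement_time(W, P_LOSS):.1f} seconds")   # 3.5 s

The longer the confinement time, the less heating power is needed to keep the fuel hot, which is why the quantity matters so much for making fusion a net energy source.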
It was during this doctoral research that Tynan was able to study the fundamental differences between the behavior of turbulence in plasmas and in conventional fluids. Typically, when an ordinary fluid is stirred with increasing vigor, its motion eventually becomes chaotic, or turbulent. Plasmas, however, can act in a surprising way: when heated sufficiently strongly, confined plasmas spontaneously quench the turbulent transport at their boundary.
An experiment in Germany had unexpectedly discovered this plasma behavior. While subsequent work on other experimental devices confirmed this surprising finding, all earlier experiments lacked the ability to measure the turbulence in detail.
Brian LaBombard, now a senior research scientist at MIT’s Plasma Science and Fusion Center (PSFC), was a postdoc at UCLA at the time. Under LaBombard’s direction, Tynan developed a set of Langmuir probes, which are reasonably simple diagnostics for plasma turbulence studies, to further investigate this unusual phenomenon. It formed the basis for his doctoral dissertation. “I happened to be at the right place at the right time so I could study this turbulence quenching phenomenon in much more detail than anyone else could, up until that time,” Tynan says.
As a PhD student and then postdoc, Tynan studied the phenomenon in depth, shuttling between research facilities in Germany, Princeton University’s Plasma Physics Laboratory, and UCLA.
Fusion at UCSD
After completing his doctorate and postdoctoral work, Tynan had been working at a startup for a few years when he learned that the University of California at San Diego was launching a new fusion research group in its engineering school. When the school reached out, Tynan joined the faculty and built a research program focused on plasma turbulence and plasma-material interactions in fusion systems. Eventually, he became associate dean of engineering and, later, chair of the Department of Mechanical and Aerospace Engineering, serving in these roles for nearly a decade.
Tynan visited MIT on sabbatical in 2023, when his conversations with NSE faculty members Dennis Whyte, Zach Hartwig, and Michael Short excited him about the challenges the private sector faces in making fusion a reality. He saw opportunities to solve important problems at MIT that complemented his work at UC San Diego.
Tynan is excited to tackle what he calls, “the big physics and engineering challenges of fusion plasmas” at NSE: how to remove the heat and exhaust generated by burning plasma so it doesn’t damage the walls of the fusion device and the plasma does not choke on the helium ash. He also hopes to explore robust engineering solutions for practical fusion energy, with a particular focus on developing better materials for use in fusion devices that will make them longer-lasting, while minimizing the production of radioactive waste.
“Ten or 15 years ago, I was somewhat pessimistic that I would ever see commercial exploitation of fusion in my lifetime,” Tynan says. But that outlook has changed, as he has seen collaborations between MIT and Commonwealth Fusion Systems (CFS) and other private-sector firms that seek to accelerate the timeline to the deployment of fusion in the real world.
In 2021, for example, MIT’s PSFC and CFS took a significant step toward commercial carbon-free power generation. They designed and built a high-temperature superconducting magnet, the strongest fusion magnet in the world.
The milestone was especially exciting because the promise of realizing the dream of fusion energy now felt closer. And being at MIT “seemed like a really quick way to get deeply connected with what’s going on in the efforts to develop fusion energy,” Tynan says.
In addition, “while on sabbatical at MIT, I saw how quickly research staff and students can capitalize on a suggestion of a new idea, and that intrigued me,” he adds.
Tynan brings a distinctive blend of expertise to the table. In addition to extensive experience in plasma physics, he has spent considerable time on core engineering issues, such as materials. “The key is to integrate the whole thing into a workable and viable system,” Tynan says.
Q&A: David Whelihan on the challenges of operating in the Arctic
To most, the Arctic can feel like an abstract place, difficult to imagine beyond images of ice and polar bears. But researcher David Whelihan of MIT Lincoln Laboratory's Advanced Undersea Systems and Technology Group is no stranger to the Arctic. Through Operation Ice Camp, a U.S. Navy–sponsored biennial mission to assess operational readiness in the Arctic region, he has traveled to this vast and remote wilderness twice over the past few years to test low-cost sensor nodes developed by the group to monitor loss in Arctic sea ice extent and thickness. The research team envisions establishing a network of such sensors across the Arctic that will persistently detect ice-fracturing events and correlate these events with environmental conditions to provide insights into why the sea ice is breaking up. Whelihan shared his perspectives on why the Arctic matters and what operating there is like.
Q: Why do we need to be able to operate in the Arctic?
A: Spanning approximately 5.5 million square miles, the Arctic is huge, and one of its salient features is that the ice covering much of the Arctic Ocean is decreasing in volume with every passing year. Melting ice opens up previously impassable areas, resulting in increasing interest from potential adversaries and allies alike for activities such as military operations, commercial shipping, and natural resource extraction. Through Alaska, the United States has approximately 1,060 miles of Arctic coastline that is becoming much more accessible because of reduced ice cover. So, U.S. operation in the Arctic is a matter of national security.
Q: What are the technological limitations to Arctic operations?
A: The Arctic is an incredibly harsh environment. The cold kills battery life, so collecting sensor data at high rates over long periods of time is very difficult. The ice is dynamic and can easily swallow or crush sensors. In addition, most deployments involve "boots-on-the-ice," which is expensive and at times dangerous. One of the technological limitations is how to deploy sensors while keeping humans alive.
Q: How does the group's sensor node R&D work seek to support Arctic operations?
A: A lot of the work we put into our sensors pertains to deployability. Our ultimate goal is to free researchers from going onto the ice to deploy sensors. This goal will become increasingly necessary as the shrinking ice pack becomes more dynamic, unstable, and unpredictable. At the last Operation Ice Camp (OIC) in March 2024, we built and rapidly tested deployable and recoverable sensors, as well as novel concepts such as using UAVs (uncrewed aerial vehicles), or drones, as "data mules" that can fly out to and interrogate the sensors to see what they captured. We also built a prototype wearable system that cues automatic download of sensor data over Wi-Fi so that operators don't have to take off their gloves.
Q: The Arctic Circle is the northernmost region on Earth. How do you reach this remote place?
A: We usually fly on commercial airlines from Boston to Seattle to Anchorage to Prudhoe Bay on the North Slope of Alaska. From there, the Navy flies us on small prop planes, like Single and Twin Otters, about 200 miles north and lands us on an ice runway built by the Navy's Arctic Submarine Lab (ASL). The runway is part of a temporary camp that ASL establishes on floating sea ice for their operational readiness exercises conducted during OIC.
Q: Think back to the first time you stepped foot in the Arctic. Can you paint a picture of what you experienced?
A: My first experience was at Prudhoe Bay, coming out of the airport, which is a corrugated metal building with a single gate. Before you open the door to the outside, a sign warns you to be on the lookout for polar bears. Walking out into the sheer desolation and blinding whiteness of everything made me realize I was experiencing something very new.
When I flew out onto the ice and stepped out of the plane, I was amazed that the area could somehow be even more desolate. Bright white snowy ice goes in every direction, broken up by pressure ridges that form when ice sheets collide. The sun is low, and seems to move horizontally only. It is very hard to tell the time. The air temperature is really variable. On our first trip in 2022, it really wasn't (relatively) that cold — only around minus 5 or 10 degrees during the day. On our second trip in 2024, we were hit by minus 30 almost every day, and with winds of 20 to 25 miles per hour. The last night we were on the ice that year, it warmed up a bit to minus 10 to 20, but the winds kicked up and started blowing snow onto the heaters attached to our tents. Those heaters started failing one by one as the blowing snow covered them, blocking airflow. After our heater failed, I asked myself, while warm in my bed, whether I wanted to go outside to the command tent for help or try to make it until dawn in my thick sleeping bag. I picked the first option, but mostly because the heater control was beeping loudly right next to my bunk, so I couldn’t sleep anyway. Shout-out to the ASL staff who ran around fixing heaters all night!
Q: How do you survive in a place generally inhospitable to humans?
A: In partnership with the native population, ASL brings a lot of gear — from insulated, heated tents and communications equipment to large snowblowers to keep the runway clear. A few months before OIC, participants attend training that covers the conditions they will be exposed to, how to protect themselves with appropriate clothing, and how to use survival gear in case of an emergency.
Q: Do you have plans to return to the Arctic?
A: We are hoping to go back this winter as part of OIC 2026! We plan to test a through-ice communication device. Communicating through 4 to 12 feet of ice is pretty tricky but could allow us to connect underwater drones and stationary sensors under the ice to the rest of the world. To support the through-ice communication system, we will repurpose our sensor-node boxes deployed during OIC 2024. If this setup works, those same boxes could be used as control centers for all sorts of undersea systems and relay information about the under-ice world back home via satellite.
Q: What lessons learned will you bring to your upcoming trip, and any potential future trips?
A: After the first trip, I had a visceral understanding of how hard operating there is. Prototyping of systems becomes a different game. Prototypes are often fragile, but fragility doesn't go over too well on the ice. So, there is a robustification step, which can take some time.
On this last trip, I realized that you have to really be careful with your energy expenditure and pace yourself. While the average adult may require about 2,000 calories a day, an Arctic explorer may burn several times more than that exerting themselves (we do a lot of walking around camp) and keeping warm. Usually, we live on the same freeze-dried food that you would take on camping trips. Each package only has so many calories, so you find yourself eating multiple of those and supplementing with lots of snacks such as Clif Bars or, my favorite, Babybel cheeses (which I bring myself). You also have to be really careful of dehydration. Your body's reaction to extreme cold is to reduce blood flow to your skin, which generally results in less liquid in your body. We have to drink constantly — water, cocoa, and coffee — to avoid dehydration.
We only have access to the ice every two years with the Navy, so we try to make the most of our time. In the several-day lead-up to our field expedition, my research partner Ben and I were really pushing ourselves to ready our sensor nodes for deployment and probably not eating and drinking as regularly as we should. When we ventured to our sensor deployment site about 5 kilometers outside of camp, I had to learn to slow down so I didn't sweat under my gear, as sweating in the extremely cold conditions can quickly lead to hypothermia. I also learned to pay more attention to exposed places on my face, as I got a bit of frostnip around my goggles.
Operating in the Arctic is a fine balance: you can't spend too much time out there, but you also can't rush.
Decoding the sounds of battery formation and degradation
Before batteries lose power, fail suddenly, or burst into flames, they tend to produce faint sounds over time that provide a signature of the degradation processes going on within their structure. But until now, nobody had figured out how to interpret exactly what those sounds meant, and how to distinguish between ordinary background noise and significant signs of possible trouble.
Now, a team of researchers in MIT’s Department of Chemical Engineering has done a detailed analysis of the sounds emanating from lithium-ion batteries and has been able to correlate particular sound patterns with specific degradation processes taking place inside the cells. The new findings could provide the basis for relatively simple, totally passive, and nondestructive devices that could continuously monitor the health of battery systems, for example in electric vehicles or grid-scale storage facilities, to provide ways of predicting useful operating lifetimes and forecasting failures before they occur.
The findings were reported Sept. 5 in the journal Joule, in a paper by MIT graduate students Yash Samantaray and Alexander Cohen, former MIT research scientist Daniel Cogswell PhD ’10, and Chevron Professor of Chemical Engineering and professor of mathematics Martin Z. Bazant.
“In this study, through some careful scientific work, our team has managed to decode the acoustic emissions,” Bazant says. “We were able to classify them as coming from gas bubbles that are generated by side reactions, or by fractures from the expansion and contraction of the active material, and to find signatures of those signals even in noisy data.”
“I think the core of this work is to look at a way to investigate internal battery mechanisms while they’re still charging and discharging, and to do this nondestructively,” Samantaray explains. “Out there in the world now, there are a few methods that exist, but most are very expensive and not really conducive to batteries in their normal format.”
To carry out their analysis, the team coupled electrochemical testing with recording of the acoustic emissions, under real-world charging and discharging conditions, using detailed signal processing to correlate the electrical and acoustic data. By doing so, he says, “we were able to come up with a very cost-effective and efficient method of actually understanding gas generation and fracture of materials.”
Gas generation and fracturing are two primary mechanisms of degradation and failure in batteries, so being able to detect and distinguish those processes, just by monitoring the sounds produced by the batteries, could be a significant tool for those managing battery systems.
Previous approaches have simply monitored the sounds and recorded times when the overall sound level exceeded some threshold. But in this work, by simultaneously monitoring the voltage and current as well as the sound characteristics, Bazant says, “We know that [sound] emissions happen at a certain potential [voltage], and that helps us identify what the process might be that is causing that emission.”
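To make that idea concrete, here is a minimal sketch of how one might time-align detected acoustic events with a cell’s voltage trace so that each emission can be tagged with the potential at which it occurred. All variable names, thresholds, and the synthetic data below are hypothetical illustrations, not the team’s published pipeline.

```python
import numpy as np

# Hypothetical sketch: tag each acoustic emission with the cell voltage at
# the moment it occurred. Signals and thresholds are invented for
# illustration; they are not taken from the study.

def detect_hits(acoustic, threshold):
    """Return sample indices where the acoustic envelope first crosses the threshold."""
    above = np.abs(acoustic) > threshold
    # Keep only rising edges so each burst is counted once.
    return np.flatnonzero(above[1:] & ~above[:-1]) + 1

def tag_hits_with_voltage(acoustic, fs_acoustic, t_volt, volt, threshold):
    """Interpolate the (more slowly sampled) voltage trace at each acoustic hit time."""
    hits = detect_hits(acoustic, threshold)
    t_hits = hits / fs_acoustic
    v_hits = np.interp(t_hits, t_volt, volt)
    return t_hits, v_hits

# Synthetic example: a slow charging ramp plus one injected "emission."
fs = 50_000                                 # acoustic sampling rate, Hz
t = np.arange(0, 10, 1 / fs)                # 10 s of cycling
acoustic = 0.01 * np.random.randn(t.size)   # background noise
acoustic[200_000] += 1.0                    # burst at t = 4 s
t_volt = np.linspace(0, 10, 1_000)          # voltage sampled more slowly
volt = 3.0 + 0.12 * t_volt                  # simple voltage ramp

for th, vh in zip(*tag_hits_with_voltage(acoustic, fs, t_volt, volt, 0.5)):
    print(f"emission at t = {th:.3f} s, cell potential ~ {vh:.3f} V")
```

In a real setup, the voltage channel would come from the battery cycler and the detection threshold would be set from the measured noise floor rather than chosen by hand.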
After these tests, they would then take the batteries apart and study them under an electron microscope to detect fracturing of the materials.
In addition, they applied a wavelet transform, essentially a way of encoding the frequency and duration of each captured signal, providing distinct signatures that can then be more easily extracted from background noise. “No one had done that before,” Bazant says, “so that was another breakthrough.”
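As a rough picture of what that encoding looks like, the sketch below runs a continuous wavelet transform over a synthetic acoustic window using the PyWavelets library. The Morlet wavelet, the scale range, and the two synthetic bursts are all assumptions made for illustration; the paper’s actual parameters are not reproduced here.

```python
import numpy as np
import pywt

# Minimal sketch of the time-frequency encoding step, assuming a Morlet
# wavelet and an arbitrary scale range chosen for illustration only.

fs = 1_000_000                       # acoustic sampling rate, Hz
t = np.arange(0, 0.002, 1 / fs)      # a 2 ms window around one emission

# Synthetic stand-ins: a short high-frequency burst (fracture-like) and a
# longer low-frequency burst (gas-bubble-like) on top of noise.
burst_hi = np.exp(-((t - 0.0005) / 5e-5) ** 2) * np.sin(2 * np.pi * 200e3 * t)
burst_lo = np.exp(-((t - 0.0014) / 2e-4) ** 2) * np.sin(2 * np.pi * 30e3 * t)
signal = burst_hi + burst_lo + 0.05 * np.random.randn(t.size)

# Continuous wavelet transform: rows are scales (frequencies), columns time.
scales = np.arange(1, 128)
coeffs, freqs = pywt.cwt(signal, scales, "morl", sampling_period=1 / fs)

# Each burst appears as a localized patch of high |coefficient| at its own
# characteristic frequency, a signature that can be thresholded or fed to
# a classifier instead of relying on the raw sound level alone.
power = np.abs(coeffs)
peak_scale, peak_time = np.unravel_index(power.argmax(), power.shape)
print(f"strongest component near {freqs[peak_scale] / 1e3:.0f} kHz "
      f"at t = {t[peak_time] * 1e3:.3f} ms")
```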
Acoustic emissions are widely used in engineering, he points out, for example to monitor structures such as bridges for signs of incipient failure. “It’s a great way to monitor a system,” he says, “because those emissions are happening whether you’re listening to them or not,” so by listening, you can learn something about internal processes that would otherwise be invisible.
With batteries, he says, “we often have a hard time interpreting the voltage and current information as precisely as we’d like, to know what’s happening inside a cell. And so this offers another window into the cell’s state of health, including its remaining useful life, and safety, too.” In a related paper with Oak Ridge National Laboratory researchers, the team has shown that acoustic emissions can provide an early warning of thermal runaway, a situation that can lead to fires if not caught. The new study suggests that these sounds can be used to detect gas generation prior to combustion, “like seeing the first tiny bubbles in a pot of heated water, long before it boils,” says Bazant.
The next step will be to take this new knowledge of how certain sounds relate to specific conditions, and develop a practical, inexpensive monitoring system based on this understanding. For example, the team has a grant from Tata Motors to develop a battery monitoring system for its electric vehicles. “Now, we know what to look for, and how to correlate that with lifetime and health and safety,” Bazant says.
One possible application of this new understanding, Samantaray says, is “as a lab tool for groups that are trying to develop new materials or test new environments, so they can actually determine gas generation or active material fracturing without having to open up the battery.”
Bazant adds that the system could also be useful for quality control in battery manufacturing. “The most expensive and rate-limiting process in battery production is often the formation cycling,” he says. This is the process in which batteries are cycled through charging and discharging to break them in, and part of that process involves chemical reactions that release some gas. The new system would allow detection of these gas formation signatures, he says, “and by sensing them, it may be easier to isolate well-formed cells from poorly formed cells very early, even before the useful life of the battery, when it’s being made.”
The work was supported by the Toyota Research Institute, the Center for Battery Sustainability, the National Science Foundation, and the Department of Defense, and made use of the facilities of MIT.nano.
A new community for computational science and engineering
For the past decade, MIT has offered doctoral-level study in computational science and engineering (CSE) exclusively through an interdisciplinary program designed for students applying computation within a specific science or engineering field.
As interest grew among students focused primarily on advancing CSE methodology itself, it became clear that a dedicated academic home for this group — students and faculty deeply invested in the foundations of computational science and engineering — was needed.
Now, with a stand-alone CSE PhD program, they have not only a space for fostering discovery in the cross-cutting methodological dimensions of computational science and engineering, but also a tight-knit community.
“This program recognizes the existence of computational science and engineering as a discipline in and of itself, so you don’t have to be doing this work through the lens of mechanical or chemical engineering, but instead in its own right,” says Nicolas Hadjiconstantinou, co-director of the Center for Computational Science and Engineering (CCSE).
Offered by CCSE and launched in 2023, the stand-alone program combines coursework and a thesis, much like other MIT PhD programs, yet its methodological focus sets it apart from other Institute offerings.
“What’s unique about this program is that it’s not hosted by one specific department. The stand-alone program is, at its core, about computational science and cross-cutting methodology. We connect this research with people in a lot of different application areas. We have oceanographers, people doing materials science, students with a focus on aeronautics and astronautics, and more,” says outgoing co-director Youssef Marzouk, now the associate dean of the MIT Schwarzman College of Computing.
Expanding horizons
Hadjiconstantinou, the Quentin Berg Professor of Mechanical Engineering, and Marzouk, the Breene M. Kerr Professor of Aeronautics and Astronautics, have led the center’s efforts since 2018, and developed the program and curriculum together. The duo was intentional about crafting a program that fosters students’ individual research while also exposing them to all the field has to offer.
To expand students’ horizons and continue to build a collaborative community, the PhD in CSE program features two popular seminar series: weekly community seminars that focus primarily on internal speakers (current graduate students, postdocs, research scientists, and faculty), and monthly distinguished seminars in CSE, which are Institute-wide and bring external speakers from various institutions and industry roles.
“Something surprising about the program has been the seminars. I thought it would be the same people I see in my classes and labs, but it’s much broader than that,” says Emily Williams, a fourth-year PhD student and a Department of Energy Computational Science graduate fellow. “One of the most interesting seminars was around simulating fluid flow for biomedical applications. My background is in fluids, so I understand that part, but seeing it applied in a totally different domain than what I work in was eye-opening,” says Williams.
That seminar, “Astrophysical Fluid Dynamics at Exascale,” presented by James Stone, a professor in the School of Natural Sciences at the Institute for Advanced Study and at Princeton University, represented one of many opportunities for CSE students to engage with practitioners in small groups, gaining academic insight as well as a wider perspective on future career paths.
Designing for impact
The interdisciplinary PhD program served as a departure point from which Hadjiconstantinou and Marzouk created a new offering that was uniquely its own.
For Marzouk, that meant designing the stand-alone program so it can keep growing and pivoting to stay relevant as technology accelerates: “In my view, the vitality of this program is that science and engineering applications nowadays rest on computation in a really foundational way, whether it’s engineering design or scientific discovery. So it’s essential to perform research on the building blocks of this kind of computation. This research also has to be shaped by the way that we apply it so that scientists or engineers will actually use it,” Marzouk says.
The curriculum is structured around six core focus areas, or “ways of thinking,” that are fundamental to CSE:
- Discretization and numerical methods for partial differential equations;
- Optimization methods;
- Inference, statistical computing, and data-driven modeling;
- High performance computing, software engineering, and algorithms;
- Mathematical foundations (e.g., functional analysis, probability); and
- Modeling (i.e., a subject that treats computational modeling in any science or engineering discipline).
Students select and build their own thesis committee that consists of faculty from across MIT, not just those associated with CCSE. The combination of a curriculum that’s “modern and applicable to what employers are looking for in industry and academics,” according to Williams, and the ability to build your own group of engaged advisors allows for a level of specialization that’s hard to find elsewhere.
“Academically, I feel like this program is designed in such a flexible and interdisciplinary way. You have a lot of control in terms of which direction you want to go in,” says Rosen Yu, a PhD student. Yu’s research is focused on engineering design optimization, an interest she discovered during her first year of research at MIT with Professor Faez Ahmed. The CSE PhD was about to launch, and it became clear that her research interests skewed more toward computation than the existing mechanical engineering degree; it was a natural fit.
“At other schools, you often see just a pure computer science program or an engineering department with hardly any intersection. But this CSE program, I like to say it’s like a glue between these two communities,” says Yu.
That “glue” is strengthening, with more students matriculating each year and more Institute faculty and staff becoming affiliated with CSE. While students’ thesis topics range from Williams’ stochastic methods for model reduction of multiscale chaotic systems to scalable and robust GPU-based optimization for energy systems, the goal of the program remains the same: to develop students and research that will make a difference.
“That's why MIT is an ‘Institute of Technology’ and not a ‘university.’ There’s always this question, no matter what you’re studying: what is it good for? Our students will go on to work in systems biology, simulators of climate models, electrification, hypersonic vehicles, and more, but the whole point is that their research is helping with something,” says Hadjiconstantinou.
