MIT Latest News

MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.

Youssef Marzouk appointed associate dean of MIT Schwarzman College of Computing

Fri, 08/01/2025 - 3:35pm

Youssef Marzouk ’97, SM ’99, PhD ’04, the Breene M. Kerr (1951) Professor in the Department of Aeronautics and Astronautics (AeroAstro) at MIT, has been appointed associate dean of the MIT Schwarzman College of Computing, effective July 1.

Marzouk, who has served as co-director of the Center for Computational Science and Engineering (CCSE) since 2018, will work in his new role to foster a stronger community among bilingual computing faculty across MIT. A key aspect of this work will be providing additional structure and support for faculty members who have been hired into shared positions in departments and the college.

Shared faculty at MIT represent a new generation of scholars whose research and teaching integrate the forefront of computing and another discipline (positions that were initially envisioned as “bridge faculty” in the 2019 Provost’s Task Force reports). Since 2021, the MIT Schwarzman College of Computing has been steadily growing this cohort. In collaboration with 24 departments across the Institute, 20 faculty have been hired in shared positions: three in the School of Architecture and Planning; four in the School of Engineering; seven in the School of Humanities, Arts, and Social Sciences; four in the School of Science; and two in the MIT Sloan School of Management.

“Youssef’s experience leading cross-cutting efforts in research and education in CCSE is of direct relevance to the broader goal of bringing MIT’s computing bilinguals together in meaningful ways. His insights and collaborative spirit position him to make a lasting impact in this role. We are delighted to welcome him to this new leadership position in the college,” says Dan Huttenlocher, dean of the MIT Schwarzman College of Computing and the Henry Ellis Warren Professor of Electrical Engineering and Computer Science.

“I’m excited that Youssef has agreed to take on this important role in the college. His thoughtful approach and nuanced understanding of MIT’s academic landscape make him ideally suited to support our shared faculty community. I look forward to working closely with him,” says Asu Ozdaglar, deputy dean of the MIT Schwarzman College of Computing, head of the Department of Electrical Engineering and Computer Science (EECS), and the MathWorks Professor of EECS.

Marzouk’s research interests lie at the intersection of computational mathematics, statistical inference, and physical modeling. He and his students develop and analyze new methodologies for uncertainty quantification, Bayesian computation, and machine learning in complex physical systems. His recent work has centered on algorithms for data assimilation and inverse problems; high-dimensional learning and surrogate modeling; optimal experimental design; and transportation of measure as a tool for statistical inference and generative modeling. He is strongly motivated by the interplay between theory, methods, and diverse applications, and has collaborated with other researchers at MIT on topics ranging from materials science to fusion energy to the geosciences.

In 2018, he was appointed co-director of CCSE with Nicolas Hadjiconstantinou, the Quentin Berg Professor of Mechanical Engineering. An interdisciplinary research and education center dedicated to advancing innovative computational methods and applications, CCSE became one of the academic units of the MIT Schwarzman College of Computing when it formally launched in 2020.

CCSE has grown significantly under Marzouk and Hadjiconstantinou’s leadership. Most recently, they spearheaded the design and launch of the center’s new standalone PhD program in computational science and engineering, which will welcome its second cohort in September. Collectively, CCSE’s standalone and interdisciplinary PhD programs currently enroll more than 70 graduate students.

Marzouk is also a principal investigator in the MIT Laboratory for Information and Decision Systems, and a core member of MIT’s Statistics and Data Science Center.

Among his many honors and awards, he was named a fellow of the Society for Industrial and Applied Mathematics (SIAM) in 2025. He was elected associate fellow of the American Institute of Aeronautics and Astronautics (AIAA) in 2018 and received the National Academy of Engineering Frontiers of Engineering Award in 2012, the MIT Junior Bose Award for Teaching Excellence in 2012, and the DOE Early Career Research Award in 2010. His recent external engagement includes service on multiple journal editorial boards; co-chairing major SIAM conferences and elected service on various SIAM committees; leadership of scientific advisory boards, including that of the Institute for Computational and Experimental Research in Mathematics (ICERM); and organizing many other international programs and workshops.

At MIT, in addition to co-directing CCSE, Marzouk has served as both graduate and undergraduate officer of the Department of AeroAstro. He also leads the MIT Center for the Exascale Simulation of Materials in Extreme Environments, an interdisciplinary computing effort sponsored by the U.S. Department of Energy’s Predictive Science Academic Alliance program.

Marzouk received his bachelor’s, master’s, and doctoral degrees from MIT. He spent four years at Sandia National Laboratories, as a Truman Fellow and a member of the technical staff, before joining the MIT faculty in 2009.

Ultrasmall optical devices rewrite the rules of light manipulation

Fri, 08/01/2025 - 12:30pm

In the push to shrink and enhance technologies that control light, MIT researchers have unveiled a new platform that pushes the limits of modern optics through nanophotonics, the manipulation of light on the nanoscale, or billionths of a meter.

The result is a class of ultracompact optical devices that are not only smaller and more efficient than existing technologies, but also dynamically tunable, or switchable, from one optical mode to another. Until now, this has been an elusive combination in nanophotonics.

The work is reported in the July 8 issue of Nature Photonics.

“This work marks a significant step toward a future in which nanophotonic devices are not only compact and efficient, but also reprogrammable and adaptive, capable of dynamically responding to external inputs. The marriage of emerging quantum materials and established nanophotonics architectures will surely bring advances to both fields,” says Riccardo Comin, MIT’s Class of 1947 Career Development Associate Professor of Physics and leader of the work. Comin is also affiliated with MIT’s Materials Research Laboratory and Research Laboratory of Electronics (RLE).

Comin’s colleagues on the work are Ahmet Kemal Demir, an MIT graduate student in physics; Luca Nessi, a former MIT postdoc who is now a postdoc at Politecnico di Milano; Sachin Vaidya, a postdoc in RLE; Connor A. Occhialini PhD ’24, who is now a postdoc at Columbia University; and Marin Soljačić, the Cecil and Ida Green Professor of Physics at MIT.

Demir and Nessi are co-first authors of the Nature Photonics paper.

Toward new nanophotonic materials

Nanophotonics has traditionally relied on materials like silicon, silicon nitride, or titanium dioxide. These are the building blocks of devices that guide and confine light using structures such as waveguides, resonators, and photonic crystals. The latter are periodic arrangements of materials that control how light propagates, much like how a semiconductor crystal affects electron motion.
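The way a periodic arrangement controls light can be sketched with the simplest photonic crystal, a one-dimensional quarter-wave stack, where reflections from every interface add in phase at the Bragg wavelength. This is a textbook-optics sketch; the two refractive indices below are generic illustrative values (roughly silicon and silica), not parameters from the work described here.

```python
# Stop band (photonic band gap) of a 1-D quarter-wave stack: each layer is a
# quarter of a wavelength thick, so partial reflections from all interfaces
# interfere constructively at the Bragg wavelength.

def quarter_wave_thicknesses(wavelength_nm, n_hi, n_lo):
    """Layer thicknesses (nm) that center the band gap at `wavelength_nm`."""
    return wavelength_nm / (4 * n_hi), wavelength_nm / (4 * n_lo)

def bragg_wavelength(d_hi, n_hi, d_lo, n_lo):
    """Bragg condition for a two-layer period: lambda = 2*(n1*d1 + n2*d2)."""
    return 2 * (n_hi * d_hi + n_lo * d_lo)

# Illustrative indices; target a band gap at 1550 nm (a common telecom band).
d_hi, d_lo = quarter_wave_thicknesses(1550, 3.48, 1.44)
print(round(d_hi, 1), round(d_lo, 1))                      # layer thicknesses, nm
print(round(bragg_wavelength(d_hi, 3.48, d_lo, 1.44), 1))  # recovers 1550.0
```

Note that the higher-index layer can be much thinner for the same optical effect, which is one reason high-index materials allow smaller devices.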

While highly effective, these materials are constrained by two major limitations. The first involves their refractive indices. These are a measure of how strongly a material interacts with light; the higher the refractive index, the more the material “grabs” or interacts with the light, bending it more sharply and slowing it down more. The refractive indices of silicon and other traditional nanophotonic materials are often modest, which limits how tightly light can be confined and how small optical devices can be made.
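The “grabbing” described above is just the textbook definition n = c/v combined with Snell’s law. The sketch below uses a standard literature value for silicon’s near-infrared index (about 3.5), not a figure from this paper.

```python
import math

C = 299_792_458  # speed of light in vacuum, m/s

def phase_velocity(n):
    """Light's phase velocity slows to c/n inside a medium of index n."""
    return C / n

def refraction_angle_deg(n1, n2, incidence_deg):
    """Snell's law: n1*sin(t1) = n2*sin(t2); a higher n2 bends light toward the normal."""
    s = n1 * math.sin(math.radians(incidence_deg)) / n2
    return math.degrees(math.asin(s))

# Air (n ~ 1.0) into silicon (n ~ 3.5): light travels ~3.5x slower and a
# 45-degree incident ray is bent sharply toward the surface normal.
print(f"{phase_velocity(3.5):.3e} m/s")
print(f"{refraction_angle_deg(1.0, 3.5, 45):.1f} deg")
```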

A second major limitation of traditional nanophotonic materials: once a structure is fabricated, its optical behavior is essentially fixed. There is usually no way to significantly reconfigure how it responds to light without physically altering it. “Tunability is essential for many next-gen photonics applications, enabling adaptive imaging, precision sensing, reconfigurable light sources, and trainable optical neural networks,” says Vaidya.

Introducing chromium sulfide bromide

These are the longstanding challenges that chromium sulfide bromide (CrSBr) is poised to solve. CrSBr is a layered quantum material with a rare combination of magnetic order and strong optical response. Central to its unique optical properties are excitons: quasiparticles formed when a material absorbs light and an electron is excited, leaving behind a positively charged “hole.” The electron and hole remain bound together by electrostatic attraction, forming a sort of neutral particle that can strongly interact with light.
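A common back-of-the-envelope for how tightly the electron and hole bind is the hydrogen-like (Wannier-Mott) model, which rescales the hydrogen binding energy by the pair’s reduced mass and the material’s dielectric screening. The mass and dielectric constant below are generic illustrative semiconductor values, not measured CrSBr parameters.

```python
RYDBERG_EV = 13.606  # hydrogen ground-state binding energy, eV

def exciton_binding_ev(reduced_mass_ratio, eps_r):
    """Wannier-Mott estimate: E_b = Ry * (mu/m_e) / eps_r**2."""
    return RYDBERG_EV * reduced_mass_ratio / eps_r**2

# Illustrative parameters: reduced mass mu = 0.2 m_e, dielectric constant 5.
e_b = exciton_binding_ev(0.2, 5.0)
print(f"{e_b * 1000:.0f} meV")  # ~109 meV: far weaker than hydrogen's 13.6 eV,
                                # yet strong enough to dominate optical response
```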

In CrSBr, excitons dominate the optical response and are highly sensitive to magnetic fields, which means they can be manipulated using external controls.

Because of these excitons, CrSBr exhibits an exceptionally large refractive index that allows researchers to sculpt the material to fabricate optical structures like photonic crystals that are up to an order of magnitude thinner than those made from traditional materials. “We can make optical structures as thin as 6 nanometers, or just seven layers of atoms stacked on top of each other,” says Demir.

And crucially, by applying a modest magnetic field, the MIT researchers were able to continuously and reversibly switch the optical mode. In other words, they demonstrated the ability to dynamically change how light flows through the nanostructure, all without any moving parts or changes in temperature. “This degree of control is enabled by a giant, magnetically induced shift in the refractive index, far beyond what is typically achievable in established photonic materials,” says Demir.

In fact, the interaction between light and excitons in CrSBr is so strong that it leads to the formation of polaritons, hybrid light-matter particles that inherit properties from both components. These polaritons enable new forms of photonic behavior, such as enhanced nonlinearities and new regimes of quantum light transport. And unlike conventional systems that require external optical cavities to reach this regime, CrSBr supports polaritons intrinsically.

While this demonstration uses standalone CrSBr flakes, the material can also be integrated into existing photonic platforms, such as integrated photonic circuits. This makes CrSBr immediately relevant to real-world applications, where it can serve as a tunable layer or component in otherwise passive devices.

The MIT results were achieved at very cold temperatures of up to 132 kelvins (-222 degrees Fahrenheit). Although this is below room temperature, there are compelling use cases, such as quantum simulation, nonlinear optics, and reconfigurable polaritonic platforms, where the unparalleled tunability of CrSBr could justify operation in cryogenic environments.

In other words, says Demir, “CrSBr is so unique with respect to other common materials that even going down to cryogenic temperatures will be worth the trouble, hopefully.”

That said, the team is also exploring related materials with higher magnetic ordering temperatures to enable similar functionality at more accessible conditions.

This work was supported by the U.S. Department of Energy, the U.S. Army Research Office, and a MathWorks Science Fellowship. The work was performed in part at MIT.nano.

Ushering in a new era of suture-free tissue reconstruction for better healing

Fri, 08/01/2025 - 12:00am

When surgeons repair tissues, they’re currently limited to mechanical solutions like sutures and staples, which can cause their own damage, or meshes and glues that may not adequately bond with tissues and can be rejected by the body.

Now, Tissium is offering surgeons a new solution based on a biopolymer technology first developed at MIT. The company’s flexible, biocompatible polymers conform to surrounding tissues, attaching to them in order to repair torn tissue after being activated using blue light.

“Our goal is to make this technology the new standard in fixation,” says Tissium co-founder Maria Pereira, who began working with polymers as a PhD student through the MIT Portugal Program. “Surgeons have been using sutures, staples, or tacks for decades or centuries, and they’re quite penetrating. We’re trying to help surgeons repair tissues in a less traumatic way.”

In June, Tissium reached a major milestone when it received marketing authorization from the Food and Drug Administration for its non-traumatic, sutureless solution to repair peripheral nerves. The FDA’s De Novo marketing authorization acknowledges the novelty of the company’s platform and enables commercialization of the MIT spinout’s first product. It came after studies showing the platform helped patients regain full flexion and extension of their injured fingers or toes without pain.

Tissium’s polymers can work with a range of tissue types, from nerves to cardiovascular tissue to the abdominal wall, and the company is eager to apply its programmable platform to other areas.

“We really think this approval is just the beginning,” Tissium CEO Christophe Bancel says. “It was a critical step, and it wasn’t easy, but we knew if we could get the first one, it would begin a new phase for the company. Now it’s our responsibility to show this works with other applications and can benefit more patients.”

From lab to patients

Years before he co-founded Tissium, Jeff Karp was a postdoc in the lab of MIT Institute Professor Robert Langer, where he worked to develop elastic materials that were biodegradable and photocurable for a range of clinical applications. Karp went on to become an affiliate faculty member in the Harvard-MIT Program in Health Sciences and Technology; he is also a faculty member at Harvard Medical School and Brigham and Women’s Hospital. In 2008, Pereira joined Karp’s lab as a visiting PhD student funded through the MIT Portugal Program, tuning the polymers’ thickness and water repellency to optimize how well the material attached to wet tissue.

“Maria took this polymer platform and turned it into a fixation platform that could be used in many areas in medicine,” Karp recalls. “[The cardiac surgeon] Pedro del Nido at Boston Children’s Hospital had alerted us to this major problem of a birth defect that causes holes in the heart of newborns. There were no suitable solutions, so that was one of the applications we began working on that Maria led.”

Pereira and her collaborators went on to demonstrate they could use the biopolymers to seal holes in the hearts of rats and pigs without bleeding or complications. Bancel, a pharmaceutical industry veteran, was introduced to the technology when he met with Karp, Pereira, and Langer during a visit to Cambridge in 2012, and he spent the next few months speaking with surgeons.

“I spoke with about 15 surgeons from a range of fields about their challenges,” Bancel says. “I realized if the technology could work in these settings, it would address a big set of challenges. All of the surgeons were excited about how the material could impact their practice.”

Bancel worked with MIT’s Technology Licensing Office to take the biopolymer technology out of the lab, including patents from Karp’s original work in Langer’s lab. Pereira moved to Paris upon completing her PhD, and Tissium was officially founded in 2013 by Pereira, Bancel, Karp, Langer, and others.

“The MIT and Harvard ecosystems are at the core of our success,” Pereira says. “From the get-go, we tried to solve problems that would be meaningful for patients. We weren’t just doing research for the sake of doing research. We started in the cardiovascular space, but we quickly realized we wanted to create new standards for tissue repair and tissue fixation.”

After licensing the technology, Tissium had a lot of work to do to make it scalable commercially. The founders partnered with companies that specialize in synthesizing polymers and created a method to 3D print a casing for polymer-wrapped nerves.

“We quickly realized the product is a combination of the polymer and the accessories,” Bancel says. “It was about how surgeons used the product. We had to design the right accessories for the right procedures.”

The new system is sorely needed. A recent meta-analysis of nerve repairs using sutures found that only 54 percent of patients achieved highly meaningful recovery following surgery. By eliminating sutures, Tissium’s flexible polymer technology offers an atraumatic way to reconnect nerves. In a recent trial of 12 patients, all who completed follow-up regained full flexion and extension of their injured digits and reported no pain 12 months after surgery.

“The current standard of care is suboptimal,” Pereira says. “There are variabilities in the outcome, sutures can create trauma, tension, misalignment, and all that can impact patient outcomes, from sensation to motor function and overall quality of life.”

Trauma-free tissue repair

Today Tissium has six products in development, including one ongoing clinical trial in the hernia space and another set to begin soon for a cardiovascular application.

“Early on, we had the intuition that if this were to work in one application, it would be surprising if it didn’t work in many other applications,” Bancel says.

The company also believes its 3D-printed production process will make it easier to expand.

“Not only can this be used for tissue fixation broadly across medicine, but we can leverage the 3D printing method to make all kinds of implantable medical devices from the same polymeric platform,” Karp explains. “Our polymers are programmable, so we can program the degradation, the mechanical properties, and this could open up the door to other exciting breakthroughs in medical devices with new capabilities.”

Now Tissium’s team is encouraging people in the medical field to reach out if they think their platform could improve on the standard of care — and they’re mindful that the first approval is a milestone worth celebrating unto itself.

“It’s the best possible outcome for your research to generate not just a paper, but a treatment with potential to improve the standard of care along with patients’ lives,” Karp says. “It’s the dream, and it’s an incredible feeling to be able to celebrate this with all the collaborators that have been involved along the way.”

Langer adds, “I agree with Jeff. It’s wonderful to see the research we started at MIT reach the point of FDA approval and change peoples’ lives.”

How government accountability and responsiveness affect tax payment

Thu, 07/31/2025 - 5:00pm

A fundamental problem for governments is getting citizens to comply with their laws and policies. They can’t monitor everyone and catch all the rule-breakers. “It’s a logistical impossibility,” says Lily L. Tsai, MIT’s Ford Professor of Political Science and the director and founder of the MIT Governance Lab.

Instead, governments need citizens to choose to follow the rules of their own accord. “As a government, you have to rely on them to voluntarily comply with the laws, policies, and regulations that are put into place,” Tsai says.

One particularly important thing governments need citizens to do is pay their taxes. In a paper in the October issue of the journal World Development, Tsai and her co-authors, including Minh Trinh ’22, a graduate of the Department of Political Science, look at different factors that might affect compliance with property tax laws in China. They found that study participants in an in-person tax-paying experiment were more likely to pay their taxes if they learned that the government was monitoring and punishing corrupt officials.

“When people think that government authorities are motivated by the public good, have moral character, and have integrity, then the requests that those authorities make of citizens are more likely to seem legitimate, and so they’re more likely to pay their taxes,” Tsai says.

In China, only two cities, Chongqing and Shanghai, collect property taxes. Officials have been concerned that citizens might resist property taxes because homeownership is the main source of urban household wealth in China. Private homeownership accounts for 64 percent of household wealth in China, compared to only 29 percent in the United States.

Tsai and her co-authors wanted to test how governments might make people more willing to pay their property taxes. Researchers have theorized that citizens are more likely to comply with tax laws when they feel like they’re getting something in return from the government. The government can be responsive to citizens’ demands for public services, for example. Or the government can punish officials who are corrupt or perform poorly.

In the first part of the study, a survey of Chinese citizens, respondents expressed preferences for different hypothetical property tax policies. The results suggested that participants wanted the government to be responsive to their needs and to hold officials accountable. People preferred a policy that allowed for citizen input on the use of tax revenue over one that did not, and a policy that allowed for the sanctioning of corrupt officials garnered more support than a policy that did not.

Survey participants also preferred a lighter penalty for not paying their taxes over a harsher penalty, and they supported a tax exemption for first apartments. Interestingly to the researchers, policies that allowed for government responsiveness and accountability received roughly the same support as these policies with economic benefits. “This is evidence to show that we should really pay attention to non-economic factors, because they can have similar magnitudes of impact on tax-paying behavior,” Tsai says.

For the second stage of the study, researchers recruited people for a lab experiment in Shanghai (one of the two cities that collects property taxes). Participants played a game on an iPad in which they chose repeatedly whether or not to pay property taxes. At the end of the game, they received an amount of real money that varied depending on how they and other participants played the game.

Participants were then randomly split into different groups. In one group, participants were given an opportunity to voice their preference for how their property tax revenue was used. Some were told the government incorporated their feedback, while others were told their preferences were not considered — in other words, participants learned whether or not the government was responsive to their needs. In another group, participants learned that a corrupt official had stolen money from property tax revenue. Some were told that the official had been caught and punished, while others were told the official got away with stealing.

The researchers measured whether game players’ willingness to pay property taxes changed after receiving this new information. They found that while the willingness of players who learned the government was responsive to their needs did not change significantly, players who learned the government punished corrupt officials paid their property taxes more frequently.
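A between-group difference in payment rates like the one described would typically be checked with a two-proportion z-test. The counts below are made-up illustrative numbers, not data from the study.

```python
import math

def two_proportion_z(success_a, n_a, success_b, n_b):
    """z statistic for H0: both groups share one underlying payment rate."""
    p_a, p_b = success_a / n_a, success_b / n_b
    pooled = (success_a + success_b) / (n_a + n_b)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

# Hypothetical counts: 72 of 100 paid after learning a corrupt official was
# punished, vs. 58 of 100 in the group told the official got away with it.
z = two_proportion_z(72, 100, 58, 100)
print(round(z, 2))  # |z| > 1.96 would be significant at the 5% level
```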

“It was kind of amazing to see that people care a lot about whether or not higher-level authorities are making sure that tax dollars are not being wasted through corruption,” Tsai says. She argues in her 2021 book, “When People Want Punishment: Retributive Justice and the Puzzle of Authoritarian Popularity,” that when authorities are willing to punish their own officials, it may signal to people that leaders have moral integrity and share the values of ordinary people, making them appear more legitimate.

While the researchers expected to see government responsiveness affect tax payment as well, Tsai says it’s not totally surprising that for people living in places without direct channels for citizen input, the opportunity to participate in the decision-making process in a lab setting might not resonate as strongly.

The findings don’t mean that government responsiveness isn’t important. But they suggest that even when there aren’t opportunities for citizens to make their voices heard, there are other ways for governments to appear legitimate and get people to comply with rules voluntarily.

As the strength of democratic institutions declines globally, scholars wonder whether perceptions of governments’ legitimacy will decline at the same time. “These findings suggest that maybe that’s not necessarily the case,” Tsai says.

School of Humanities, Arts, and Social Sciences welcomes 14 new faculty for 2025

Thu, 07/31/2025 - 4:15pm

Dean Agustín Rayo and the MIT School of Humanities, Arts, and Social Sciences (SHASS) recently welcomed 14 new professors to the MIT community. They arrive with diverse backgrounds and vast knowledge in their areas of research.

Naoki Egami joins MIT as an associate professor in the Department of Political Science. He is also a faculty affiliate of the Institute for Data, Systems, and Society. Egami specializes in political methodology and develops statistical methods for questions in political science and the social sciences. His current research programs focus on three areas: external validity and generalizability; machine learning and artificial intelligence for the social sciences; and causal inference with network and spatial data. His work has appeared in various academic journals in political science, statistics, and computer science, such as American Political Science Review, American Journal of Political Science, Journal of the American Statistical Association, Journal of the Royal Statistical Society (Series B), NeurIPS, and Science Advances. Before joining MIT, Egami was an assistant professor at Columbia University. He received a PhD from Princeton University (2020) and a BA from the University of Tokyo (2015).

Valentin Figueroa joins the Department of Political Science as an assistant professor. His research examines historical state building, ideological change, and scientific innovation, with a regional focus on Western Europe and Latin America. His current book project investigates the disestablishment of patrimonial administrations and the rise of bureaucratic states in early modern Europe. Before joining MIT, he was an assistant professor at the Pontificia Universidad Católica de Chile. Originally from Argentina, Figueroa holds a BA and an MA in political science from Universidad de San Andrés and Universidad Torcuato Di Tella, respectively, and a PhD in political science from Stanford University.

Bailey Flanigan is an assistant professor in the Department of Political Science, with a shared appointment in the MIT Schwarzman College of Computing in the Department of Electrical Engineering and Computer Science. Her research combines tools from across these disciplines — including social choice theory, game theory, algorithms, statistics, and survey methods — to advance political methodology and strengthen public participation in democracy. She is specifically interested in sampling algorithms, opinion measurement/preference elicitation, and the design of democratic innovations like deliberative minipublics and participatory budgeting. Before joining MIT, Flanigan was a postdoc at Harvard University’s Data Science Initiative. She earned her PhD in computer science from Carnegie Mellon University and her BS in bioengineering from the University of Wisconsin at Madison.

Rachel Fraser is an associate professor in the Department of Linguistics and Philosophy. Before coming to MIT, Fraser taught at Oxford University, where she also completed her graduate work in philosophy. She has interests in epistemology, language, feminism, aesthetics, and political philosophy. At present, her main project is a book manuscript on the epistemology of narrative.

Brian Hedden PhD ’12 is a professor in the Department of Linguistics and Philosophy, with a shared appointment in the MIT Schwarzman College of Computing in the Department of Electrical Engineering and Computer Science. His research focuses on how we ought to form beliefs and make decisions. He works in epistemology, decision theory, and ethics, including ethics of AI. He is the author of “Reasons without Persons: Rationality, Identity, and Time” (Oxford University Press, 2015) and articles on topics including collective action problems, legal standards of proof, algorithmic fairness, and political polarization, among others. Prior to joining MIT, he was a faculty member at the Australian National University and the University of Sydney, and a junior research fellow at Oxford. He received his BA from Princeton University in 2006 and his PhD from MIT in 2012.

Rebekah Larsen is an assistant professor in the Comparative Media Studies/Writing program. A media sociologist with a PhD from Cambridge University, her work uncovers and analyzes understudied media ecosystems, with special attention to sociotechnical change and power relations within these systems. Recent scholarly sites of inquiry include conservative talk radio stations in rural Utah (and ethnographic work in conservative spaces); the new global network of fact checkers funded by social media platform content moderation contracts; and search engine manipulation of journalists and activists around a controversial 2010s privacy regulation. Prior to MIT, Larsen held a Marie Curie grant at the University of Copenhagen, and was a visiting fellow at the Information Society Project (Yale Law School). She maintains current affiliations as a faculty associate at the Berkman Klein Center (Harvard Law School) and a research associate at the Center for Governance and Human Rights (Cambridge University).

Pascal Le Boeuf joins the Music and Theater Arts Section as an assistant professor. Described as “sleek, new,” “hyper-fluent,” and “a composer that rocks” by The New York Times, he is a Grammy Award-winning composer, jazz pianist, and producer whose works range from improvised music to hybridizing notation-based chamber music with production-based technology. Recent projects include collaborations with Akropolis Reed Quintet, Christian Euman, Jamie Lidell, Alarm Will Sound, Ji Hye Jung, Tasha Warren, Dave Eggar, Barbora Kolarova and Arx Duo, JACK Quartet, Friction Quartet, Hub New Music, Todd Reynolds, Sara Caswell, Jessica Meyer, Nick Photinos, Ian Chang, Dayna Stephens, Linda May Han Oh, Justin Brown, and Le Boeuf Brothers. He received a 2025 Grammy Award for Best Instrumental Composition, a 2024 Barlow Commission, a 2023 Guggenheim Fellowship, and a 2020 Copland House Residency Award. Le Boeuf is a Harold W. Dodds Honorific Fellow and PhD candidate in music composition at Princeton University.

Becca Lewis is an assistant professor in the Comparative Media Studies/Writing program. An interdisciplinary scholar who examines the rise of right-wing politics in Silicon Valley and online, she holds a PhD in communication theory and research from Stanford University and an MS in social science from the University of Oxford. Her work has been published in academic journals including New Media and Society, Social Media and Society, and American Behavioral Scientist, and in news outlets such as The Guardian and Business Insider. She previously worked as a researcher at the Data and Society Research Institute, where she published the organization’s flagship reports on media manipulation, disinformation, and right-wing digital media. In 2022, she served as an expert witness in the defamation lawsuit brought against Alex Jones by the parents of a Sandy Hook shooting victim.

Ben Lindquist is an assistant professor in the History Section, with a shared appointment in the MIT Schwarzman College of Computing in the Department of Electrical Engineering and Computer Science. His work examines the historical ways that computing has intertwined with ideas of religion, emotion, and divergent thinking. His first book, “The Feeling Machine,” under contract with the University of Chicago Press, follows the history of synthetic speech to ask how emotion became a subject of computer science. Before coming to MIT, he was a postdoc in the Science in Human Culture Program at Northwestern University and earned his PhD in history from Princeton University.

Bar Luzon joins the Department of Linguistics and Philosophy as an assistant professor. Luzon completed her BA in philosophy in 2017 at the Hebrew University of Jerusalem, and her PhD in philosophy in 2024 at New York University. Before coming to MIT, she was a Mellon Postdoctoral Fellow in the Philosophy Department at Rutgers University. She works in the philosophy of mind and language, metaphysics, and epistemology. Her research focuses on the nature of representation and the structure of reality. In the course of pursuing these issues, she writes about mental content, metaphysical determination, the vehicles of mental representation, and the connection between truth and different epistemic notions.

Mark Rau is an assistant professor in the Music and Theater Arts Section, with a shared appointment in the MIT Schwarzman College of Computing in the Department of Electrical Engineering and Computer Science. He is involved in developing graduate programming focused on music technology. He is interested in the fields of musical acoustics, vibration and acoustic measurement, audio signal processing, and physical modeling synthesis, among other areas. As a lifelong musician, his research focuses on musical instruments and creative audio effects. Before joining MIT, he was a postdoc at McGill University and a lecturer at Stanford University. He completed his PhD at Stanford’s Center for Computer Research in Music and Acoustics. He also holds an MA in music, science, and technology from Stanford, as well as a BS in physics and BMus in jazz from McGill University.

Viola Schmitt is an associate professor in the Department of Linguistics and Philosophy. She is a linguist with a special interest in semantics. Much of her work focuses on trying to understand general constraints on human language meaning; that is, the principles regulating which meanings can be expressed by human languages and how languages can package meaning. Variants of this question were also central to grants she received from the Austrian and German research foundations. She earned her PhD in linguistics from the University of Vienna and held postdoc and lecturer positions at the universities of Vienna, Graz, and Göttingen, and at the University of California at Los Angeles. Her most recent position was as a junior professor at Humboldt University of Berlin.

Angela Saini joins the Comparative Media Studies/Writing program as an assistant professor. A science journalist and author, she presents television and radio documentaries for the BBC and her writing has appeared in National Geographic, Wired, Science, and Foreign Policy. She has published four books, which have together been translated into 18 languages. Her bestselling 2019 book, “Superior: The Return of Race Science,” was a finalist for the LA Times Book Prize, and her latest, “The Patriarchs: The Origins of Inequality,” was a finalist for the Orwell Prize for Political Writing. She has an MEng from the University of Oxford, and was made an honorary fellow of her alma mater, Keble College, in 2023.

Paris Smaragdis SM ’97, PhD ’01 joins the Music and Theater Arts Section as a professor with a shared appointment in the MIT Schwarzman College of Computing in the Department of Electrical Engineering and Computer Science. He holds a BMus (cum laude ’95) from Berklee College of Music. His research lies at the intersection of signal processing and machine learning, especially as it relates to sound and music. He has been a research scientist at Mitsubishi Electric Research Labs, a senior research scientist at Adobe Research, and an Amazon Scholar with Amazon’s AWS. He spent 15 years as a professor at the University of Illinois Urbana-Champaign in the Computer Science Department, where he spearheaded the design of the CS+Music program, and served as an associate director of the School of Computer and Data Science.

How the brain distinguishes oozing fluids from solid objects

Thu, 07/31/2025 - 11:00am

Imagine a ball bouncing down a flight of stairs. Now think about a cascade of water flowing down those same stairs. The ball and the water behave very differently, and it turns out that your brain has different regions for processing visual information about each type of physical matter.

In a new study, MIT neuroscientists have identified parts of the brain’s visual cortex that respond preferentially when you look at “things” — that is, rigid or deformable objects like a bouncing ball. Other brain regions are more activated when looking at “stuff” — liquids or granular substances such as sand.

This distinction, which has never been seen in the brain before, may help the brain plan how to interact with different kinds of physical materials, the researchers say.

“When you’re looking at some fluid or gooey stuff, you engage with it in a different way than you do with a rigid object. With a rigid object, you might pick it up or grasp it, whereas with fluid or gooey stuff, you probably are going to have to use a tool to deal with it,” says Nancy Kanwisher, the Walter A. Rosenblith Professor of Cognitive Neuroscience; a member of the McGovern Institute for Brain Research and MIT’s Center for Brains, Minds, and Machines; and the senior author of the study.

MIT postdoc Vivian Paulun, who is joining the faculty of the University of Wisconsin-Madison this fall, is the lead author of the paper, which appears today in the journal Current Biology. RT Pramod, an MIT postdoc, and Josh Tenenbaum, an MIT professor of brain and cognitive sciences, are also authors of the study.

Stuff vs. things

Decades of brain imaging studies, including early work by Kanwisher, have revealed regions in the brain’s ventral visual pathway that are involved in recognizing the shapes of 3D objects, including an area called the lateral occipital complex (LOC). A region in the brain’s dorsal visual pathway, known as the frontoparietal physics network (FPN), analyzes the physical properties of materials, such as mass or stability.

Although scientists have learned a great deal about how these pathways respond to different features of objects, the vast majority of these studies have been done with solid objects, or “things.”

“Nobody has asked how we perceive what we call ‘stuff’ — that is, liquids or sand, honey, water, all sorts of gooey things. And so we decided to study that,” Paulun says.

These gooey materials behave very differently from solids. They flow rather than bounce, and interacting with them usually requires containers and tools such as spoons. The researchers wondered if these physical features might require the brain to devote specialized regions to interpreting them.

To explore how the brain processes these materials, Paulun used a software program designed for visual effects artists to create more than 100 video clips showing different types of things or stuff interacting with the physical environment. In these videos, the materials could be seen sloshing or tumbling inside a transparent box, being dropped onto another object, or bouncing or flowing down a set of stairs.

The researchers used functional magnetic resonance imaging (fMRI) to scan the visual cortex of people as they watched the videos. They found that both the LOC and the FPN respond to “things” and “stuff,” but that each pathway has distinctive subregions that respond more strongly to one or the other.

“Both the ventral and the dorsal visual pathway seem to have this subdivision, with one part responding more strongly to ‘things,’ and the other responding more strongly to ‘stuff,’” Paulun says. “We haven’t seen this before because nobody has asked that before.”

Roland Fleming, a professor of experimental psychology at Justus Liebig University of Giessen, described the findings as a “major breakthrough in the scientific understanding of how our brains represent the physical properties of our surrounding world.”

“We’ve known the distinction exists for a long time psychologically, but this is the first time that it’s been really mapped onto separate cortical structures in the brain. Now we can investigate the different computations that the distinct brain regions use to process and represent objects and materials,” says Fleming, who was not involved in the study.

Physical interactions

The findings suggest that the brain may have different ways of representing these two categories of material, similar to the artificial physics engines that are used to create video game graphics. These engines usually represent a 3D object as a mesh, while fluids are represented as sets of particles that can be rearranged.

“The interesting hypothesis that we can draw from this is that maybe the brain, similar to artificial game engines, has separate computations for representing and simulating ‘stuff’ and ‘things.’ And that would be something to test in the future,” Paulun says.
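The mesh-versus-particles contrast the researchers draw on can be sketched in a few lines of Python. The classes below are hypothetical simplifications of what a game-style physics engine does, not code from the study: a rigid "thing" keeps its shape because all vertices move under one transform, while "stuff" is a set of particles that rearrange independently.

```python
import numpy as np

class RigidBody:
    """A rigid 'thing': vertices move together under one transform."""
    def __init__(self, vertices):
        self.vertices = np.asarray(vertices, dtype=float)

    def translate(self, offset):
        # Every vertex shifts identically; the shape is preserved.
        self.vertices = self.vertices + np.asarray(offset, dtype=float)

class Fluid:
    """'Stuff': independent particles, each with its own velocity."""
    def __init__(self, positions):
        self.positions = np.asarray(positions, dtype=float)
        self.velocities = np.zeros_like(self.positions)

    def step(self, dt=0.1, gravity=(0.0, -9.8)):
        # Particles rearrange freely; the overall shape can change.
        self.velocities += np.asarray(gravity) * dt
        self.positions += self.velocities * dt

ball = RigidBody([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
ball.translate([0.0, -1.0])   # all vertices drop together

water = Fluid([[0.0, 0.0], [1.0, 0.0], [0.5, 1.0]])
water.step()                  # each particle falls on its own
```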

The researchers also hypothesize that these regions may have developed to help the brain understand important distinctions that allow it to plan how to interact with the physical world. To further explore this possibility, the researchers plan to study whether the areas involved in processing rigid objects are also active when a brain circuit involved in planning to grasp objects is active.

They also hope to look at whether any of the areas within the FPN correlate with the processing of more specific features of materials, such as the viscosity of liquids or the bounciness of objects. And in the LOC, they plan to study how the brain represents changes in the shape of fluids and deformable substances.

The research was funded by the German Research Foundation, the U.S. National Institutes of Health, and a U.S. National Science Foundation grant to the Center for Brains, Minds, and Machines.

Mapping cells in time and space: New tool reveals a detailed history of tumor growth

Wed, 07/30/2025 - 5:00pm

All life is connected in a vast family tree. Every organism exists in relationship to its ancestors, descendants, and cousins, and the path between any two individuals can be traced. The same is true of cells within organisms — each of the trillions of cells in the human body is produced through successive divisions from a fertilized egg, and can all be related to one another through a cellular family tree. In simpler organisms, such as the worm C. elegans, this cellular family tree has been fully mapped, but the cellular family tree of a human is many times larger and more complex.

In the past, MIT professor and Whitehead Institute for Biomedical Research member Jonathan Weissman and other researchers developed lineage tracing methods to track and reconstruct the family trees of cell divisions in model organisms in order to understand more about the relationships between cells and how they assemble into tissues, organs, and — in some cases — tumors. These methods could help to answer many questions about how organisms develop and diseases like cancer are initiated and progress.

Now, Weissman and colleagues have developed an advanced lineage tracing tool that not only captures an accurate family tree of cell divisions, but also combines that with spatial information: identifying where each cell ends up within a tissue. The researchers used their tool, PEtracer, to observe the growth of metastatic tumors in mice. Combining lineage tracing and spatial data provided the researchers with a detailed view of how elements intrinsic to the cancer cells and from their environments influenced tumor growth. Weissman, postdocs in his lab Luke Koblan, Kathryn Yost, and Pu Zheng, and graduate student William Colgan share the findings in a paper published July 24 in the journal Science.

“Developing this tool required combining diverse skill sets through the sort of ambitious interdisciplinary collaboration that’s only possible at a place like Whitehead Institute,” says Weissman, who is also a Howard Hughes Medical Institute investigator. “Luke came in with an expertise in genetic engineering, Pu in imaging, Katie in cancer biology, and William in computation, but the real key to their success was their ability to work together to build PEtracer.”

“Understanding how cells move in time and space is an important way to look at biology, and here we were able to see both of those things in high resolution. The idea is that by understanding both a cell’s past and where it ends up, you can see how different factors throughout its life influenced its behaviors. In this study, we use these approaches to look at tumor growth, though in principle we can now begin to apply these tools to study other biology of interest, like embryonic development,” Koblan says.

Designing a tool to track cells in space and time

PEtracer tracks cells’ lineages by repeatedly adding short, predetermined codes to the DNA of cells over time. Each piece of code, called a lineage tracing mark, is made up of five bases, the building blocks of DNA. These marks are inserted using a gene editing technology called prime editing, which directly rewrites stretches of DNA with minimal undesired byproducts. Over time, each cell acquires more lineage tracing marks, while also maintaining the marks of its ancestors. The researchers can then compare cells’ combinations of marks to figure out relationships and reconstruct the family tree.
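The accumulate-and-compare logic behind this kind of lineage reconstruction can be sketched in a few lines of Python. This is a simplified illustration, not the actual PEtracer chemistry: marks here are random five-base strings, and relatedness is read off the shared prefix of accumulated marks.

```python
import random

BASES = "ACGT"

def random_mark(k=5):
    # One lineage tracing mark: a short code of five DNA bases
    # (illustrative; real PEtracer marks are predetermined edits).
    return "".join(random.choice(BASES) for _ in range(k))

def divide(cell_marks):
    """One division: both daughters keep the ancestor's marks and
    each independently acquires a new mark of its own."""
    return (cell_marks + [random_mark()], cell_marks + [random_mark()])

def shared_ancestry(a, b):
    # Relatedness ~ length of the common prefix of accumulated marks.
    n = 0
    while n < min(len(a), len(b)) and a[n] == b[n]:
        n += 1
    return n

random.seed(0)
founder = [random_mark()]
d1, d2 = divide(founder)   # sister cells
g1, g2 = divide(d1)        # granddaughters via d1

# Sisters share only the founder's mark; granddaughters of the same
# parent additionally share that parent's mark, placing them closer
# together on the reconstructed family tree.
```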

“We used computational modeling to design the tool from first principles, to make sure that it was highly accurate, and compatible with imaging technology. We ran many simulations to land on the optimal parameters for a new lineage tracing tool, and then engineered our system to fit those parameters,” Colgan says.

When the tissue — in this case, a tumor growing in the lung of a mouse — had sufficiently grown, the researchers collected these tissues and used advanced imaging approaches to look at each cell’s lineage relationship to other cells via the lineage tracing marks, along with its spatial position within the imaged tissue and its identity (as determined by the levels of different RNAs expressed in each cell). PEtracer is compatible with both imaging approaches and sequencing methods that capture genetic information from single cells.

“Making it possible to collect and analyze all of this data from the imaging was a large challenge,” Zheng says. “What’s particularly exciting to me is not just that we were able to collect terabytes of data, but that we designed the project to collect data that we knew we could use to answer important questions and drive biological discovery.”

Reconstructing the history of a tumor

Combining the lineage tracing, gene expression, and spatial data let the researchers understand how the tumor grew. They could tell how closely related neighboring cells are and compare their traits. Using this approach, the researchers found that the tumors they were analyzing were made up of four distinct modules, or neighborhoods, of cells.

The tumor cells closest to the lung, the most nutrient-dense region, were the most fit, meaning their lineage history indicated the highest rate of cell division over time. Fitness in cancer cells tends to correlate to how aggressively tumors will grow.

The cells at the “leading edge” of the tumor, the far side from the lung, were more diverse and not as fit. Below the leading edge was a low-oxygen neighborhood of cells that might once have been leading edge cells, now trapped in a less-desirable spot. Between these cells and the lung-adjacent cells was the tumor core, a region with both living and dead cells, as well as cellular debris.

The researchers found that cancer cells across the family tree were equally likely to end up in most of the regions, with the exception of the lung-adjacent region, where a few branches of the family tree dominated. This suggests that the cancer cells’ differing traits were heavily influenced by their environments, or the conditions in their local neighborhoods, rather than their family history. Further evidence of this point was that expression of certain fitness-related genes, such as Fgf1/Fgfbp1, correlated to a cell’s location, rather than its ancestry. However, lung-adjacent cells also had inherited traits that gave them an edge, including expression of the fitness-related gene Cldn4 — showing that family history influenced outcomes as well.

These findings demonstrate how cancer growth is influenced both by factors intrinsic to certain lineages of cancer cells and by environmental factors that shape the behavior of cancer cells exposed to them.

“By looking at so many dimensions of the tumor in concert, we could gain insights that would not have been possible with a more limited view,” Yost says. “Being able to characterize different populations of cells within a tumor will enable researchers to develop therapies that target the most aggressive populations more effectively.”

“Now that we’ve done the hard work of designing the tool, we’re excited to apply it to look at all sorts of questions in health and disease, in embryonic development, and across other model species, with an eye toward understanding important problems in human health,” Koblan says. “The data we collect will also be useful for training AI models of cellular behavior. We’re excited to share this technology with other researchers and see what we all can discover.”

Creeping crystals: Scientists observe “salt creep” at the single-crystal scale

Wed, 07/30/2025 - 3:45pm

Salt creeping, a phenomenon that occurs in both natural and industrial processes, describes the collection and migration of salt crystals from evaporating solutions onto surfaces. Once they start collecting, the crystals climb, spreading away from the solution. This creeping behavior, according to researchers, can cause damage or be harnessed for good, depending on the context. New research published June 30 in the journal Langmuir is the first to show salt creeping at a single-crystal scale and beneath a liquid’s meniscus.

“The work not only explains how salt creeping begins, but why it begins and when it does,” says Joseph Phelim Mooney, a postdoc in the MIT Device Research Laboratory and one of the authors of the new study. “We hope this level of insight helps others, whether they’re tackling water scarcity, preserving ancient murals, or designing longer-lasting infrastructure.”

The work is the first to directly visualize how salt crystals grow and interact with surfaces underneath a liquid meniscus, something that’s been theorized for decades but never actually imaged or confirmed at this level, and it offers fundamental insights that could impact a wide range of fields — from mineral extraction and desalination to anti-fouling coatings, membrane design for separation science, and even art conservation, where salt damage is a major threat to heritage materials.

In civil engineering applications, for example, the research can help explain why and when salt crystals start growing across surfaces like concrete, stone, or building materials. “These crystals can exert pressure and cause cracking or flaking, reducing the long-term durability of structures,” says Mooney. “By pinpointing the moment when salt begins to creep, engineers can better design protective coatings or drainage systems to prevent this form of degradation.”

For a field like art conservation, where salt can be devastating to murals, frescoes, and ancient artifacts, often forming beneath the surface before visible damage appears, the work can help identify the exact conditions that cause salt to start moving and spreading, allowing conservators to act earlier and more precisely to protect heritage objects.

The work began during Mooney’s Marie Curie Fellowship at MIT. “I was focused on improving desalination systems and quickly ran into [salt buildup as] a major roadblock,” he says. “[Salt] was everywhere, coating surfaces, clogging flow paths, and undermining the efficiency of our designs. I realized we didn’t fully understand how or why salt starts creeping across surfaces in the first place.”

That experience led Mooney to team up with colleagues to dig into the fundamentals of salt crystallization at the air–liquid–solid interface. “We wanted to zoom in, to really see the moment salt begins to move, so we turned to in situ X-ray microscopy,” he says. “What we found gave us a whole new way to think about surface fouling, material degradation, and controlled crystallization.”

The new research may, in fact, allow better control of the crystallization processes required to remove salt from water in zero-liquid discharge systems. It can also be used to explain how and when scaling happens on equipment surfaces, and may support emerging climate technologies that depend on smart control of evaporation and crystallization.

The work also supports mineral and salt extraction applications, where salt creeping can be both a bottleneck and an opportunity. In these applications, Mooney says, “by understanding the precise physics of salt formation at surfaces, operators can optimize crystal growth, improving recovery rates and reducing material losses.”

Mooney’s co-authors on the paper include fellow MIT Device Lab researchers Omer Refet Caylan, Bachir El Fil (now an associate professor at Georgia Tech), and Lenan Zhang (now an associate professor at Cornell University); Jeff Punch and Vanessa Egan of the University of Limerick; and Jintong Gao of Cornell.

The research was conducted using in situ X-ray microscopy. Mooney says the team’s big realization moment occurred when they were able to observe a single salt crystal pinning itself to the surface, which kicked off a cascading chain reaction of growth.

“People had speculated about this, but we captured it on X-ray for the first time. It felt like watching the microscopic moment where everything tips, the ignition points of a self-propagating process,” says Mooney. “Even more surprising was what followed: The salt crystal didn’t just grow passively to fill the available space. It pierced through the liquid-air interface and reshaped the meniscus itself, setting up the perfect conditions for the next crystal. That subtle, recursive mechanism had never been visually documented before — and seeing it play out in real time completely changed how we thought about salt crystallization.”

The paper, “In Situ X-ray Microscopy Unraveling the Onset of Salt Creeping at a Single-Crystal Level,” is available now in the journal Langmuir. Research was conducted in MIT.nano. 

New algorithms enable efficient machine learning with symmetric data

Wed, 07/30/2025 - 12:00am

If you rotate an image of a molecular structure, a human can tell the rotated image is still the same molecule, but a machine-learning model might think it is a new data point. In computer science parlance, the molecule is “symmetric,” meaning the fundamental structure of that molecule remains the same if it undergoes certain transformations, like rotation.

If a drug discovery model doesn’t understand symmetry, it could make inaccurate predictions about molecular properties. But despite some empirical successes, it’s been unclear whether there is a computationally efficient method to train a good model that is guaranteed to respect symmetry.

A new study by MIT researchers answers this question, and shows the first method for machine learning with symmetry that is provably efficient in terms of both the amount of computation and data needed.

These results clarify a foundational question, and they could aid researchers in the development of more powerful machine-learning models that are designed to handle symmetry. Such models would be useful in a variety of applications, from discovering new materials to identifying astronomical anomalies to unraveling complex climate patterns.

“These symmetries are important because they are some sort of information that nature is telling us about the data, and we should take it into account in our machine-learning models. We’ve now shown that it is possible to do machine-learning with symmetric data in an efficient way,” says Behrooz Tahmasebi, an MIT graduate student and co-lead author of this study.

He is joined on the paper by co-lead author and MIT graduate student Ashkan Soleymani; Stefanie Jegelka, an associate professor of electrical engineering and computer science (EECS) and a member of the Institute for Data, Systems, and Society (IDSS) and the Computer Science and Artificial Intelligence Laboratory (CSAIL); and senior author Patrick Jaillet, the Dugald C. Jackson Professor of Electrical Engineering and Computer Science and a principal investigator in the Laboratory for Information and Decision Systems (LIDS). The research was recently presented at the International Conference on Machine Learning.

Studying symmetry

Symmetric data appear in many domains, especially the natural sciences and physics. A model that recognizes symmetries is able to identify an object, like a car, no matter where that object is placed in an image, for example.

Unless a machine-learning model is designed to handle symmetry, it could be less accurate and prone to failure when faced with new symmetric data in real-world situations. On the flip side, models that take advantage of symmetry could be faster and require fewer data for training.

But training a model to process symmetric data is no easy task.

One common approach is called data augmentation, where researchers transform each symmetric data point into multiple data points to help the model generalize better to new data. For instance, one could rotate a molecular structure many times to produce new training data, but if researchers want the model to be guaranteed to respect symmetry, this can be computationally prohibitive.
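A minimal sketch of that augmentation strategy, using 2D point sets for simplicity (the function names are illustrative, not from the study): each training shape is copied at several rotations so the model sees the same structure in many orientations.

```python
import numpy as np

def rotations(points, n=8):
    """Data augmentation sketch: generate n rotated copies of a 2D
    point set so a model sees the same shape in many orientations."""
    copies = []
    for k in range(n):
        theta = 2 * np.pi * k / n
        # Standard 2D rotation matrix for angle theta.
        R = np.array([[np.cos(theta), -np.sin(theta)],
                      [np.sin(theta),  np.cos(theta)]])
        copies.append(points @ R.T)
    return copies

molecule = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, 0.0]])
augmented = rotations(molecule)   # 8 training examples from 1
```

Note the cost: covering a symmetry group this way multiplies the dataset, which is why guarantees via augmentation can become computationally prohibitive.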

An alternative approach is to encode symmetry into the model’s architecture. A well-known example of this is a graph neural network (GNN), which inherently handles symmetric data because of how it is designed.
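The idea behind such architectures can be sketched in a few lines (this is a generic illustration of permutation symmetry in message passing, not the researchers' algorithm): summing over a node's neighbors gives the same answer no matter how the nodes are numbered, so the network respects the graph's symmetry by construction.

```python
import numpy as np

def message_pass(features, adjacency):
    # One round of neighborhood aggregation: each node sums its
    # neighbors' feature vectors. Summation is order-independent,
    # which is how a GNN bakes in symmetry under node relabeling.
    return adjacency @ features

# A triangle graph with three nodes.
A = np.array([[0, 1, 1],
              [1, 0, 1],
              [1, 1, 0]], dtype=float)
X = np.array([[1.0], [2.0], [3.0]])

out = message_pass(X, A)

# Relabel the nodes with a permutation P: the output is the same
# up to that same relabeling, i.e. the model is equivariant.
P = np.array([[0, 0, 1],
              [1, 0, 0],
              [0, 1, 0]], dtype=float)
out_perm = message_pass(P @ X, P @ A @ P.T)
assert np.allclose(out_perm, P @ out)
```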

“Graph neural networks are fast and efficient, and they take care of symmetry quite well, but nobody really knows what these models are learning or why they work. Understanding GNNs is a main motivation of our work, so we started with a theoretical evaluation of what happens when data are symmetric,” Tahmasebi says.

They explored the statistical-computational tradeoff in machine learning with symmetric data. This tradeoff means methods that require fewer data can be more computationally expensive, so researchers need to find the right balance.

Building on this theoretical evaluation, the researchers designed an efficient algorithm for machine learning with symmetric data.

Mathematical combinations

To do this, they borrowed ideas from algebra to shrink and simplify the problem. Then, they reformulated the problem using ideas from geometry that effectively capture symmetry.

Finally, they combined the algebra and the geometry into an optimization problem that can be solved efficiently, resulting in their new algorithm.

“Most of the theory and applications were focusing on either algebra or geometry. Here we just combined them,” Tahmasebi says.

The algorithm requires fewer data samples for training than classical approaches, which would improve a model’s accuracy and ability to adapt to new applications.

By proving that scientists can develop efficient algorithms for machine learning with symmetry, and demonstrating how it can be done, these results could lead to the development of new neural network architectures that could be more accurate and less resource-intensive than current models.

Scientists could also use this analysis as a starting point to examine the inner workings of GNNs, and how their operations differ from the algorithm the MIT researchers developed.

“Once we know that better, we can design more interpretable, more robust, and more efficient neural network architectures,” adds Soleymani.

This research is funded, in part, by the National Research Foundation of Singapore, DSO National Laboratories of Singapore, the U.S. Office of Naval Research, the U.S. National Science Foundation, and an Alexander von Humboldt Professorship.

“FUTURE PHASES” showcases new frontiers in music technology and interactive performance

Tue, 07/29/2025 - 5:00pm

Music technology took center stage at MIT during “FUTURE PHASES,” an evening of works for string orchestra and electronics, presented by the MIT Music Technology and Computation Graduate Program as part of the 2025 International Computer Music Conference (ICMC). 

The well-attended event was held last month in the Thomas Tull Concert Hall within the new Edward and Joyce Linde Music Building. Produced in collaboration with the MIT Media Lab’s Opera of the Future Group and Boston’s self-conducted chamber orchestra A Far Cry, “FUTURE PHASES” was the first event to be presented by the MIT Music Technology and Computation Graduate Program in MIT Music’s new space.

“FUTURE PHASES” offerings included two new works by MIT composers: the world premiere of “EV6,” by MIT Music’s Kenan Sahin Distinguished Professor Evan Ziporyn and professor of the practice Eran Egozy; and the U.S. premiere of “FLOW Symphony,” by the MIT Media Lab’s Muriel R. Cooper Professor of Music and Media Tod Machover. Three additional works were selected by a jury from an open call for works: “The Wind Will Carry Us Away,” by Ali Balighi; “A Blank Page,” by Celeste Betancur Gutiérrez and Luna Valentin; and “Coastal Portrait: Cycles and Thresholds,” by Peter Lane. Each work was performed by Boston’s own multi-Grammy-nominated string orchestra, A Far Cry.

“The ICMC is all about presenting the latest research, compositions, and performances in electronic music,” says Egozy, director of the new Music Technology and Computation Graduate Program at MIT. When approached to be a part of this year’s conference, “it seemed the perfect opportunity to showcase MIT’s commitment to music technology, and in particular the exciting new areas being developed right now: a new master’s program in music technology and computation, the new Edward and Joyce Linde Music Building with its enhanced music technology facilities, and new faculty arriving at MIT with joint appointments between MIT Music and Theater Arts (MTA) and the Department of Electrical Engineering and Computer Science (EECS).” These recently hired professors include Anna Huang, a keynote speaker for the conference and creator of the machine learning model Coconet that powered Google’s first AI Doodle, the Bach Doodle.

Egozy emphasizes the uniqueness of this occasion: “You have to understand that this is a very special situation. Having a full 18-member string orchestra [A Far Cry] perform new works that include electronics does not happen very often. In most cases, ICMC performances consist either entirely of electronics and computer-generated music, or perhaps a small ensemble of two-to-four musicians. So the opportunity we could present to the larger community of music technology was particularly exciting.”

To take advantage of this exciting opportunity, an open call was put out internationally to select the other pieces that would accompany Ziporyn and Egozy’s “EV6” and Machover’s “FLOW Symphony.” Three pieces were selected from a total of 46 entries to be a part of the evening’s program by a panel of judges that included Egozy, Machover, and other distinguished composers and technologists.

“We received a huge variety of works from this call,” says Egozy. “We saw all kinds of musical styles and ways that electronics would be used. No two pieces were very similar to each other, and I think because of that, our audience got a sense of how varied and interesting a concert can be for this format. A Far Cry was really the unifying presence. They played all pieces with great passion and nuance. They have a way of really drawing audiences into the music. And, of course, with the Thomas Tull Concert Hall being in the round, the audience felt even more connected to the music.”

Egozy continues, “We took advantage of the technology built into the Thomas Tull Concert Hall, which has 24 built-in speakers for surround sound, allowing us to broadcast unique, amplified sound to every seat in the house. Chances are that every person experienced the sound slightly differently, but there was always some sense of a multidimensional evolution of sound and music as the pieces unfolded.”

The five works of the evening employed a range of technological components that included playing synthesized, prerecorded, or electronically manipulated sounds; attaching microphones to instruments for use in real-time signal processing algorithms; broadcasting custom-generated musical notation to the musicians; utilizing generative AI to process live sound and play it back in interesting and unpredictable ways; and audience participation, where spectators use their cellphones as musical instruments to become a part of the ensemble.

Ziporyn and Egozy’s piece, “EV6,” took particular advantage of this last innovation: “Evan and I had previously collaborated on a system called Tutti, which means ‘together’ in Italian. Tutti gives an audience the ability to use their smartphones as musical instruments so that we can all play together.” Egozy developed the technology, which was first used in the MIT Campaign for a Better World in 2017. The original application involved a three-minute piece for cellphones only. “But for this concert,” Egozy explains, “Evan had the idea that we could use the same technology to write a new piece — this time, for audience phones and a live string orchestra as well.”

To explain the piece’s title, Ziporyn says, “I drive an EV6; it’s my first electric car, and when I first got it, it felt like I was driving an iPhone. But of course it’s still just a car: it’s got wheels and an engine, and it gets me from one place to another. It seemed like a good metaphor for this piece, in which a lot of the sound is literally played on cellphones, but still has to work like any other piece of music. It’s also a bit of an homage to David Bowie’s song ‘TVC 15,’ which is about falling in love with a robot.”

Egozy adds, “We wanted audience members to feel what it is like to play together in an orchestra. Through this technology, each audience member becomes a part of an orchestral section (winds, brass, strings, etc.). As they play together, they can hear their whole section playing similar music while also hearing other sections in different parts of the hall play different music. This allows an audience to feel a responsibility to their section, hear how music can move between different sections of an orchestra, and experience the thrill of live performance. In ‘EV6,’ this experience was even more electrifying because everyone in the audience got to play with a live string orchestra — perhaps for the first time in recorded history.”

After the concert, guests were treated to six music technology demonstrations that showcased the research of undergraduate and graduate students from both the MIT Music program and the MIT Media Lab. These included a gamified interface for harnessing just intonation systems (Antonis Christou); insights from a human-AI co-created concert (Lancelot Blanchard and Perry Naseck); a system for analyzing piano playing data across campus (Ayyub Abdulrezak ’24, MEng ’25); capturing music features from audio using latent frequency-masked autoencoders (Mason Wang); a device that turns any surface into a drum machine (Matthew Caren ’25); and a play-along interface for learning traditional Senegalese rhythms (Mariano Salcedo ’25). This last example led to the creation of Senegroove, a drumming-based application specifically designed for an upcoming edX online course taught by ethnomusicologist and MIT associate professor in music Patricia Tang, and world-renowned Senegalese drummer and MIT lecturer in music Lamine Touré, who provided performance videos of the foundational rhythms used in the system.

Ultimately, Egozy muses, “'FUTURE PHASES' showed how having the right space — in this case, the new Edward and Joyce Linde Music Building — really can be a driving force for new ways of thinking, new projects, and new ways of collaborating. My hope is that everyone in the MIT community, the Boston area, and beyond soon discovers what a truly amazing place and space we have built, and are still building here, for music and music technology at MIT.”

New transmitter could make wireless devices more energy-efficient

Tue, 07/29/2025 - 12:00am

Researchers from MIT and elsewhere have designed a novel transmitter chip that significantly improves the energy efficiency of wireless communications, which could boost the range and battery life of a connected device.

Their approach employs a unique modulation scheme to encode digital data into a wireless signal, which reduces the amount of error in the transmission and leads to more reliable communications.

The compact, flexible system could be incorporated into existing internet-of-things devices to provide immediate gains, while also meeting the more stringent efficiency requirements of future 6G technologies.

The versatility of the chip could make it well-suited for a range of applications that require careful management of energy for communications, such as industrial sensors that continuously monitor factory conditions and smart appliances that provide real-time notifications.

“By thinking outside the box, we created a more efficient, intelligent circuit for next-generation devices that is also even better than the state-of-the-art for legacy architectures. This is just one example of how adopting a modular approach to allow for adaptability can drive innovation at every level,” says Muriel Médard, the School of Science NEC Professor of Software Science and Engineering, a professor in the MIT Department of Electrical Engineering and Computer Science (EECS), and co-author of a paper on the new transmitter.

Médard’s co-authors include Timur Zirtiloglu, the lead author and a graduate student at Boston University; Arman Tan, a graduate student at BU; Basak Ozaydin, an MIT graduate student in EECS; Ken Duffy, a professor at Northeastern University; and Rabia Tugce Yazicigil, associate professor of electrical and computer engineering at BU. The research was recently presented at the IEEE Radio Frequency Circuits Symposium.

Optimizing transmissions

In wireless devices, a transmitter converts digital data into an electromagnetic signal that is sent over the airwaves to a receiver. The transmitter does this by mapping digital bits to symbols that represent the amplitude and phase of the electromagnetic signal, which is a process called modulation.
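
The bit-to-symbol mapping described above can be illustrated with a toy sketch. This is standard QPSK modulation, not the scheme used in the new chip: pairs of bits are mapped to complex symbols, each representing an amplitude and phase of the transmitted signal.

```python
import numpy as np

# Toy modulation example (illustrative, not the paper's scheme):
# each pair of bits maps to one QPSK symbol, a complex number whose
# magnitude is the amplitude and whose angle is the phase.
QPSK = {
    (0, 0): 1 + 1j,
    (0, 1): -1 + 1j,
    (1, 1): -1 - 1j,
    (1, 0): 1 - 1j,
}

def modulate(bits):
    """Map an even-length bit sequence to unit-energy QPSK symbols."""
    pairs = zip(bits[0::2], bits[1::2])
    return [QPSK[p] / np.sqrt(2) for p in pairs]  # normalize magnitude to 1

symbols = modulate([0, 1, 1, 0])
# All symbols share the same amplitude; the information is in the phase.
```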

Traditional systems transmit signals that are evenly spaced by creating a uniform pattern of symbols, which helps avoid interference. But this uniform structure lacks adaptability and can be inefficient, since wireless channel conditions are dynamic and often change rapidly.

As an alternative, optimal modulation schemes follow a non-uniform pattern that can adapt to changing channel conditions, maximizing the amount of data transmitted while minimizing energy usage.

But while optimal modulation can be more energy-efficient, it is also more susceptible to errors, especially in crowded wireless environments. When the signals aren’t uniform in length, it can be harder for the receiver to distinguish between symbols and noise that creeps into the transmission.

To overcome this problem, the MIT transmitter adds a small amount of padding, in the form of extra bits between symbols, so that every transmission is the same length.

This helps the receiver identify the beginning and end of each transmission, preventing misinterpretation of the message. At the same time, the device retains the energy-efficiency gains of the non-uniform, optimal modulation scheme.
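
The padding idea can be sketched minimally as follows. The frame length, filler value, and function names here are illustrative assumptions, not details from the paper:

```python
# Hypothetical sketch of the padding idea: bits from a non-uniform
# (variable-length) modulation scheme are padded with filler bits so
# that every transmission reaches the same fixed length.
FRAME_LEN = 8        # fixed length every transmission must reach (illustrative)
PAD_BIT = 0          # filler value appended after the encoded bits

def pad_to_frame(encoded_bits):
    """Append filler bits so the frame always contains FRAME_LEN bits."""
    if len(encoded_bits) > FRAME_LEN:
        raise ValueError("frame overflow")
    n_pad = FRAME_LEN - len(encoded_bits)
    return encoded_bits + [PAD_BIT] * n_pad, n_pad

frame, n_pad = pad_to_frame([1, 0, 1])   # a short, high-probability codeword
# The receiver, knowing FRAME_LEN, can guess n_pad to recover the message.
```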

This approach works because of a technique the researchers previously developed, known as GRAND, a universal decoding algorithm that can crack any code by guessing the noise that affected the transmission.

Here, they employ a GRAND-inspired algorithm to adjust the length of the received transmission by guessing the extra bits that have been added. In this way, the receiver can effectively reconstruct the original message.
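
The guessing principle behind GRAND can be sketched with a toy decoder. The codebook and noise ordering below are deliberately simplified (real GRAND orders noise guesses by their likelihood under the channel statistics):

```python
from itertools import combinations

def grand_decode(received, codebook, max_weight=3):
    """Toy GRAND-style decoder: guess noise patterns from most to least
    likely (fewest bit flips first), undo each guess, and test whether the
    result is a valid codeword. Illustrative only."""
    n = len(received)
    for weight in range(max_weight + 1):          # 0 flips, then 1, then 2...
        for flips in combinations(range(n), weight):
            candidate = list(received)
            for i in flips:
                candidate[i] ^= 1                 # undo the guessed noise bit
            if tuple(candidate) in codebook:
                return tuple(candidate)
    return None                                   # no guess succeeded

codebook = {(0, 0, 0, 0), (1, 1, 1, 1)}           # trivial repetition code
decoded = grand_decode([1, 1, 0, 1], codebook)    # one flipped bit is guessed
# decoded == (1, 1, 1, 1)
```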

“Now, thanks to GRAND, we can have a transmitter that is capable of doing these more efficient transmissions with non-uniform constellations of data, and we can see the gains,” Médard says.

A flexible circuit

The new chip, which has a compact architecture that allows the researchers to integrate additional efficiency-boosting methods, enabled transmissions with about one-quarter the signal error of methods that use optimal modulation.

Surprisingly, the device also achieved significantly lower error rates than transmitters that use traditional modulation.

“The traditional approach has become so ingrained that it was challenging to not get lured back to the status quo, especially since we were changing things that we often take for granted and concepts we’ve been teaching for decades,” Médard says.

This innovative architecture could be used to improve the energy efficiency and reliability of current wireless communication devices, while also offering the flexibility to be incorporated into future devices that employ optimal modulation.

Next, the researchers want to adapt their approach to leverage additional techniques that could boost efficiency and reduce the error rates in wireless transmissions.

“This optimal modulation transmitter radio frequency integrated circuit is a game-changing innovation over the traditional RF signal modulation. It’s set to play a major role for the next generation of wireless connectivity such as 6G and Wi-Fi,” says Rocco Tam, NXP Fellow for Wireless Connectivity SoC Research and Development at NXP Semiconductors, who was not involved with this research.

This work is supported, in part, by the U.S. Defense Advanced Research Projects Agency (DARPA), the National Science Foundation (NSF), and the Texas Analog Center for Excellence. 

Why animals are a critical part of forest carbon absorption

Mon, 07/28/2025 - 2:30pm

A lot of attention has been paid to how climate change can drive biodiversity loss. Now, MIT researchers have shown the reverse is also true: Reductions in biodiversity can jeopardize one of Earth’s most powerful levers for mitigating climate change.

In a paper published in PNAS, the researchers showed that, following deforestation, naturally regrowing tropical forests with healthy populations of seed-dispersing animals can absorb up to four times more carbon than similar forests with fewer seed-dispersing animals.

Because tropical forests are currently Earth’s largest land-based carbon sink, the findings improve our understanding of a potent tool to fight climate change.

“The results underscore the importance of animals in maintaining healthy, carbon-rich tropical forests,” says Evan Fricke, a research scientist in the MIT Department of Civil and Environmental Engineering and the lead author of the new study. “When seed-dispersing animals decline, we risk weakening the climate-mitigating power of tropical forests.”

Fricke’s co-authors on the paper include César Terrer, the Tianfu Career Development Associate Professor at MIT; Charles Harvey, an MIT professor of civil and environmental engineering; and Susan Cook-Patton of The Nature Conservancy.

The study combines a wide array of data on animal biodiversity, movement, and seed dispersal across thousands of animal species, along with carbon accumulation data from thousands of tropical forest sites.

The researchers say the results are the clearest evidence yet that seed-dispersing animals play an important role in forests’ ability to absorb carbon, and that the findings underscore the need to address biodiversity loss and climate change as connected parts of a delicate ecosystem rather than as separate problems in isolation.

“It’s been clear that climate change threatens biodiversity, and now this study shows how biodiversity losses can exacerbate climate change,” Fricke says. “Understanding that two-way street helps us understand the connections between these challenges, and how we can address them. These are challenges we need to tackle in tandem, and the contribution of animals to tropical forest carbon shows that there are win-wins possible when supporting biodiversity and fighting climate change at the same time.”

Putting the pieces together

The next time you see a video of a monkey or bird enjoying a piece of fruit, consider that the animal is actually playing an important role in its ecosystem. Research has shown that by swallowing seeds and depositing them elsewhere, animals can help with the germination, growth, and long-term survival of the plant.

Fricke has been studying animals that disperse seeds for nearly 15 years. His previous research has shown that without animal seed dispersal, trees have lower survival rates and a harder time keeping up with environmental changes.

“We’re now thinking more about the roles that animals might play in affecting the climate through seed dispersal,” Fricke says. “We know that in tropical forests, where more than three-quarters of trees rely on animals for seed dispersal, the decline of seed dispersal could affect not just the biodiversity of forests, but how they bounce back from deforestation. We also know that all around the world, animal populations are declining.”

Regrowing forests is an often-cited way to mitigate the effects of climate change, but the influence of biodiversity on forests’ ability to absorb carbon has not been fully quantified, especially at larger scales.

For their study, the researchers combined data from thousands of separate studies and used new tools for quantifying disparate but interconnected ecological processes. After analyzing data from more than 17,000 vegetation plots, the researchers decided to focus on tropical regions, looking at data on where seed-dispersing animals live, how many seeds each animal disperses, and how they affect germination.

The researchers then incorporated data showing how human activity impacts different seed-dispersing animals’ presence and movement. They found, for example, that animals move less when they consume seeds in areas with a bigger human footprint.

Combining all that data, the researchers created an index of seed-dispersal disruption that revealed a link between human activities and declines in animal seed dispersal. They then analyzed the relationship between that index and records of carbon accumulation in naturally regrowing tropical forests over time, controlling for factors like drought conditions, the prevalence of fires, and the presence of grazing livestock.
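
The kind of controlled analysis described above can be sketched schematically. Everything below — the variable names, the synthetic data, and the coefficients — is hypothetical, meant only to show how a regression can estimate the effect of a disruption index while holding confounders like drought fixed:

```python
import numpy as np

# Schematic sketch (synthetic data, hypothetical names): regress carbon
# accumulation on a seed-dispersal disruption index while controlling for
# a confounder such as drought severity.
rng = np.random.default_rng(0)
n = 200
disruption = rng.uniform(0, 1, n)   # index of seed-dispersal disruption
drought = rng.uniform(0, 1, n)      # control variable: drought severity
# Synthetic "ground truth": disruption lowers carbon accumulation.
carbon = 3.0 - 2.0 * disruption - 0.5 * drought + rng.normal(0, 0.1, n)

# Ordinary least squares with an intercept and both predictors.
X = np.column_stack([np.ones(n), disruption, drought])
coef, *_ = np.linalg.lstsq(X, carbon, rcond=None)
# coef[1] estimates the effect of disruption net of the drought control.
```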

“It was a big task to bring data from thousands of field studies together into a map of the disruption of seed dispersal,” Fricke says. “But it lets us go beyond just asking what animals are there to actually quantifying the ecological roles those animals are playing and understanding how human pressures affect them.”

The researchers acknowledged that the quality of available animal biodiversity data could be improved, and that this limitation introduces uncertainty into their findings. They also note that other processes, such as pollination, seed predation, and competition, influence seed dispersal and can constrain forest regrowth. Still, the findings were in line with recent estimates.

“What’s particularly new about this study is we’re actually getting the numbers around these effects,” Fricke says. “Finding that seed dispersal disruption explains a fourfold difference in carbon absorption across the thousands of tropical regrowth sites included in the study points to seed dispersers as a major lever on tropical forest carbon.”

Quantifying lost carbon

In forests identified as potential regrowth sites, the researchers found that seed-dispersal declines were linked to reductions in carbon absorption averaging 1.8 metric tons per hectare each year, equal to a 57 percent reduction in regrowth.
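
As a quick consistency check, the two reported figures together imply a baseline regrowth rate:

```python
# If a 1.8 t/ha/yr loss corresponds to a 57 percent reduction, the implied
# baseline carbon absorption rate (with dispersal intact) follows directly.
loss = 1.8          # metric tons of carbon per hectare per year
fraction = 0.57     # the reported 57 percent reduction
baseline = loss / fraction   # roughly 3.2 t/ha/yr
```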

The researchers say the results show natural regrowth projects will be more impactful in landscapes where seed-dispersing animals have been less disrupted, including areas that were recently deforested, are near high-integrity forests, or have higher tree cover.

“In the discussion around planting trees versus allowing trees to regrow naturally, regrowth is basically free, whereas planting trees costs money, and it also leads to less diverse forests,” Terrer says. “With these results, now we can understand where natural regrowth can happen effectively because there are animals planting the seeds for free, and we also can identify areas where, because animals are affected, natural regrowth is not going to happen, and therefore planting trees actively is necessary.”

To support seed-dispersing animals, the researchers encourage interventions that protect or improve their habitats and that reduce pressures on species, ranging from wildlife corridors to restrictions on wildlife trade. Restoring the ecological roles of seed dispersers is also possible by reintroducing seed-dispersing species where they’ve been lost or planting certain trees that attract those animals.

The findings could also make modeling the climate impact of naturally regrowing forests more accurate.

“Overlooking the impact of seed-dispersal disruption may overestimate natural regrowth potential in many areas and underestimate it in others,” the authors write.

The researchers believe the findings open up new avenues of inquiry for the field.

“Forests provide a huge climate subsidy by sequestering about a third of all human carbon emissions,” Terrer says. “Tropical forests are by far the most important carbon sink globally, but in the last few decades, their ability to sequester carbon has been declining. We will next explore how much of that decline is due to an increase in extreme droughts or fires versus declines in animal seed dispersal.”

Overall, the researchers hope the study helps improve our understanding of the planet’s complex ecological processes.

“When we lose our animals, we’re losing the ecological infrastructure that keeps our tropical forests healthy and resilient,” Fricke says.

The research was supported by the MIT Climate and Sustainability Consortium, the Government of Portugal, and the Bezos Earth Fund.

Staff members honored with 2025 Excellence Awards, Collier Medal, and Staff Award for Distinction in Service

Mon, 07/28/2025 - 11:50am

On Thursday, June 5, 11 individuals and four teams were awarded MIT Excellence Awards — the highest awards for staff at the Institute. Cheers from colleagues holding brightly colored signs and pompoms rang out in Kresge Auditorium in celebration of the honorees. In addition to the Excellence Awards, staff members received the Collier Medal, the Staff Award for Distinction in Service, and the Gordon Y. Billard Award.  

The Collier Medal honors the memory of Officer Sean Collier, who gave his life protecting and serving MIT. The medal recognizes an individual or group whose actions demonstrate the importance of community, and whose contributions exceed the boundaries of their profession. The Staff Award for Distinction in Service is presented to an individual whose service results in a positive, lasting impact on the MIT community. The Gordon Y. Billard Award is given to staff or faculty members, or MIT-affiliated individuals, who provide "special service of outstanding merit performed for the Institute."

The 2025 MIT Excellence Award recipients and their award categories are:

Bringing Out the Best

  • Timothy Collard
  • Whitney Cornforth
  • Roger Khazan

Embracing Inclusion

  • Denise Phillips

Innovative Solutions

  • Ari Jacobovits
  • Stephanie Tran
  • MIT Health Rebranding Team, Office of the Executive Vice President and Treasurer: Ann Adelsberger, Amy Ciarametaro, Kimberly Schive, Emily Wade

Outstanding Contributor

  • Sharon Clarke
  • Charles "Chip" Coldwell
  • Jeremy Mineweaser
  • Christopher "Petey" Peterson
  • MIT Health Accreditation Team, Office of the Executive Vice President and Treasurer: Christianne Garcia, David Podradchik, Janis Puibello, Kristen Raymond
  • MIT Museum Visitor Experience Supervisor Team, Associate Provost for the Arts: Mariah Crowley, Brianna Vega

Serving Our Community

  • Nada Miqdadi El-Alami
  • MIT International Scholars Office, Office of the Vice President for Research: Portia Brummitt-Vachon, Amanda Doran, Brianna L. Drakos, Fumiko Futai, Bay Heidrich, Benjamin Hull, Penny Rosser, Henry Rotchford, Patricia Toledo, Makiko Wada
  • Building 68 Kitchen Staff, Department of Biology, School of Science: Brikti Abera, AnnMarie Budhai, Nicholas Budhai, Daniel Honiker, Janet Katin, Umme Khan, Shuming Lin, Kelly McKinnon, Karen O'Leary

The 2025 Collier Medal recipient was Kathleen Monagle, associate dean and director of disability and access services, student support, and wellbeing in the Division of Student Life. Monagle oversees a team that supports almost 600 undergraduate, graduate, and MITx students with more than 4,000 accommodations. She works with faculty to ensure those students have the best possible learning experience — both in MIT’s classrooms and online.

This year’s recipient of the 2025 Staff Award for Distinction in Service was Stu Schmill, dean of admissions and student financial services in the Office of the Vice Chancellor. Schmill graduated from MIT in 1986 and has since served the Institute in a variety of roles. His colleagues admire his passion for sharing knowledge; his insight and integrity; and his deep love for MIT’s culture, values, and people.

Three community members were honored with a 2025 Gordon Y. Billard Award:

  • William "Bill" Cormier, project technician, Department of Mechanical Engineering, School of Engineering

  • John E. Fernández, professor, Department of Architecture, School of Architecture and Planning; and director of MIT Environmental Solutions Initiative, Office of the Vice President for Research

  • Tony Lee, coach, MIT Women's Volleyball Club, Student Organizations, Leadership, and Engagement, Division of Student Life

Presenters included President Sally Kornbluth; MIT Chief of Police John DiFava and Deputy Chief Steven DeMarco; Dean of the School of Science Nergis Mavalvala; Vice President for Human Resources Ramona Allen; Executive Vice President and Treasurer Glen Shor; Lincoln Laboratory Assistant Director Justin Brooke; Chancellor Melissa Nobles; and Provost Anantha Chandrakasan.

Visit the MIT Human Resources website for more information about the award recipients and categories, and to view photos and video of the event.

New system dramatically speeds the search for polymer materials

Mon, 07/28/2025 - 11:00am

Scientists often seek new materials derived from polymers. Rather than starting a polymer search from scratch, they save time and money by blending existing polymers to achieve desired properties.

But identifying the best blend is a thorny problem. Not only is there a practically limitless number of potential combinations, but polymers interact in complex ways, so the properties of a new blend are challenging to predict.

To accelerate the discovery of new materials, MIT researchers developed a fully autonomous experimental platform that can efficiently identify optimal polymer blends.

The closed-loop workflow uses a powerful algorithm to explore a wide range of potential polymer blends, feeding a selection of combinations to a robotic system that mixes chemicals and tests each blend.

Based on the results, the algorithm decides which experiments to conduct next, continuing the process until the new polymer meets the user’s goals.

During experiments, the system autonomously identified hundreds of blends that outperformed their constituent polymers. Interestingly, the researchers found that the best-performing blends did not necessarily use the best individual components.

“I found that to be good confirmation of the value of using an optimization algorithm that considers the full design space at the same time,” says Connor Coley, the Class of 1957 Career Development Assistant Professor in the MIT departments of Chemical Engineering and Electrical Engineering and Computer Science, and senior author of a paper on this new approach. “If you consider the full formulation space, you can potentially find new or better properties. Using a different approach, you could easily overlook the underperforming components that happen to be the important parts of the best blend.”

This workflow could someday facilitate the discovery of polymer blend materials that lead to advancements like improved battery electrolytes, more cost-effective solar panels, or tailored nanoparticles for safer drug delivery.

Coley is joined on the paper by lead author Guangqi Wu, a former MIT postdoc who is now a Marie Skłodowska-Curie Postdoctoral Fellow at Oxford University; Tianyi Jin, an MIT graduate student; and Alfredo Alexander-Katz, the Michael and Sonja Koerner Professor in the MIT Department of Materials Science and Engineering. The work appears today in Matter.

Building better blends

When scientists design new polymer blends, they are faced with a nearly endless number of possible polymers to start with. Once they select a few to mix, they still must choose the composition of each polymer and the concentration of polymers in the blend.

“Having that large of a design space necessitates algorithmic solutions and higher-throughput workflows because you simply couldn’t test all the combinations using brute force,” Coley adds.

While researchers have studied autonomous workflows for single polymers, less work has focused on polymer blends because of the dramatically larger design space.

In this study, the MIT researchers sought new random heteropolymer blends, made by mixing two or more polymers with different structural features. These versatile polymers have shown particular promise for high-temperature enzymatic catalysis, a process that increases the rate of chemical reactions.

Their closed-loop workflow begins with an algorithm that, based on the user’s desired properties, autonomously identifies a handful of promising polymer blends.

The researchers originally tried a machine-learning model to predict the performance of new blends, but it was difficult to make accurate predictions across the astronomically large space of possibilities. Instead, they utilized a genetic algorithm, which uses biologically inspired operations like selection and mutation to find an optimal solution.

Their system encodes the composition of a polymer blend into what is effectively a digital chromosome, which the genetic algorithm iteratively improves to identify the most promising combinations.

“This algorithm is not new, but we had to modify the algorithm to fit into our system. For instance, we had to limit the number of polymers that could be in one material to make discovery more efficient,” Wu adds.

In addition, because the search space is so large, they tuned the algorithm to balance its choice of exploration (searching for random polymers) versus exploitation (optimizing the best polymers from the last experiment).
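
The genetic-algorithm loop described above can be sketched as follows. The chromosome encoding, fitness function, and parameters are illustrative assumptions, not the paper's: a blend "chromosome" is a vector of concentrations over a small, capped set of candidate polymers, and the robotic measurement is replaced by a stand-in scoring function.

```python
import random

random.seed(0)          # for reproducibility of this sketch

N_POLYMERS = 6          # cap on polymers per blend, to keep discovery efficient
POP_SIZE = 96           # batch size sent to the robotic platform each round

def random_blend():
    """Exploration: a random point in the blend design space."""
    w = [random.random() for _ in range(N_POLYMERS)]
    total = sum(w)
    return [x / total for x in w]            # concentrations sum to 1

def mutate(blend, rate=0.2):
    """Perturb a good blend, then renormalize so concentrations stay valid."""
    child = [max(x + random.gauss(0, rate), 0.0) for x in blend]
    total = sum(child) or 1.0
    return [x / total for x in child]

def evolve(fitness, generations=20):
    pop = [random_blend() for _ in range(POP_SIZE)]
    for _ in range(generations):
        scored = sorted(pop, key=fitness, reverse=True)
        elite = scored[: POP_SIZE // 4]      # exploitation: keep the best blends
        pop = elite + [mutate(random.choice(elite))
                       for _ in range(POP_SIZE - len(elite))]
    return max(pop, key=fitness)

# Stand-in for the measured property (e.g., retained enzymatic activity):
# blends closer to a hypothetical ideal composition score higher.
target = [0.4, 0.3, 0.2, 0.1, 0.0, 0.0]
fitness = lambda b: -sum((x - t) ** 2 for x, t in zip(b, target))
best = evolve(fitness)
```

The elite fraction and mutation rate control the exploitation-versus-exploration balance mentioned above: a larger elite concentrates the search, while a higher mutation rate spreads it out.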

The algorithm sends 96 polymer blends at a time to the autonomous robotic platform, which mixes the chemicals and measures the properties of each.

The experiments were focused on improving the thermal stability of enzymes by optimizing the retained enzymatic activity (REA), a measure of how stable an enzyme is after mixing with the polymer blends and being exposed to high temperatures.

These results are sent back to the algorithm, which uses them to generate a new set of polymers until the system finds the optimal blend.

Accelerating discovery

Building the robotic system involved numerous challenges, such as developing a technique to evenly heat polymers and optimizing the speed at which the pipette tip moves up and down.

“In autonomous discovery platforms, we emphasize algorithmic innovations, but there are many detailed and subtle aspects of the procedure you have to validate before you can trust the information coming out of it,” Coley says.

When tested, the optimal blends their system identified often outperformed the polymers that formed them. The best overall blend performed 18 percent better than any of its individual components, achieving an REA of 73 percent.

“This indicates that, instead of developing new polymers, we could sometimes blend existing polymers to design new materials that perform even better than individual polymers do,” Wu says.

Moreover, their autonomous platform can generate and test 700 new polymer blends per day and only requires human intervention for refilling and replacing chemicals.

While this research focused on polymers for protein stabilization, their platform could be modified for other uses, like the development of new plastics or battery electrolytes.

In addition to exploring additional polymer properties, the researchers want to use experimental data to improve the efficiency of their algorithm and develop new algorithms to streamline the operations of the autonomous liquid handler.

“Technologically, there are urgent needs to enhance thermal stability of proteins and enzymes. The results demonstrated here are quite impressive. Being a platform technology and given the rapid advancement in machine learning and AI for material science, one can envision the possibility for this team to further enhance random heteropolymer performances or to optimize design based on end needs and usages,” says Ting Xu, an associate professor at the University of California at Berkeley, who was not involved with this work.

This work is funded, in part, by the U.S. Department of Energy, the National Science Foundation, and the Class of 1947 Career Development Chair.

Famous double-slit experiment holds up when stripped to its quantum essentials

Mon, 07/28/2025 - 12:00am

MIT physicists have performed an idealized version of one of the most famous experiments in quantum physics. Their findings demonstrate, with atomic-level precision, the dual yet evasive nature of light. They also happen to confirm that Albert Einstein was wrong about this particular quantum scenario.

The experiment in question is the double-slit experiment, which was first performed in 1801 by the British scholar Thomas Young to show how light behaves as a wave. Today, with the formulation of quantum mechanics, the double-slit experiment is now known for its surprisingly simple demonstration of a head-scratching reality: that light exists as both a particle and a wave. Stranger still, this duality cannot be simultaneously observed. Seeing light in the form of particles instantly obscures its wave-like nature, and vice versa.

The original experiment involved shining a beam of light through two parallel slits in a screen and observing the pattern that formed on a second, faraway screen. One might expect to see two overlapping spots of light, which would imply that light exists as particles, a.k.a. photons, like paintballs that follow a direct path. But instead, the light produces alternating bright and dark stripes on the screen, in an interference pattern similar to what happens when two ripples in a pond meet. This suggests light behaves as a wave. Even weirder, when one tries to measure which slit the light is traveling through, the light suddenly behaves as particles and the interference pattern disappears.

The double-slit experiment is taught today in most high school physics classes as a simple way to illustrate the fundamental principle of quantum mechanics: that all physical objects, including light, are simultaneously particles and waves.

Nearly a century ago, the experiment was at the center of a friendly debate between physicists Albert Einstein and Niels Bohr. In 1927, Einstein argued that a photon particle should pass through just one of the two slits and in the process generate a slight force on that slit, like a bird rustling a leaf as it flies by. He proposed that one could detect such a force while also observing an interference pattern, thereby catching light’s particle and wave nature at the same time. In response, Bohr applied the quantum mechanical uncertainty principle and showed that the detection of the photon’s path would wash out the interference pattern.

Scientists have since carried out multiple versions of the double-slit experiment, and they have all, to various degrees, confirmed the validity of the quantum theory formulated by Bohr. Now, MIT physicists have performed the most “idealized” version of the double-slit experiment to date. Their version strips down the experiment to its quantum essentials. They used individual atoms as slits, and used weak beams of light so that each atom scattered at most one photon. By preparing the atoms in different quantum states, they were able to modify what information the atoms obtained about the path of the photons. The researchers thus confirmed the predictions of quantum theory: The more information was obtained about the path (i.e. the particle nature) of light, the lower the visibility of the interference pattern was. 

In doing so, they demonstrated what Einstein got wrong: whenever an atom is “rustled” by a passing photon, the wave interference is diminished.
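The trade-off the team measured is commonly summarized by the Englert–Greenberger duality relation, a standard result of complementarity theory (stated here for context; it is not quoted from the paper):

```latex
% Wave-particle duality relation (Englert-Greenberger):
%   V = fringe visibility (wave character),
%   D = which-path distinguishability (particle character).
D^{2} + V^{2} \le 1
% Equality holds for pure quantum states: full path information
% (D = 1) forces V = 0, i.e., the interference pattern vanishes.
```

The inequality makes Bohr's point quantitative: any gain in path information necessarily costs interference visibility.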

“Einstein and Bohr would have never thought that this is possible, to perform such an experiment with single atoms and single photons,” says Wolfgang Ketterle, the John D. MacArthur Professor of Physics and leader of the MIT team. “What we have done is an idealized Gedanken experiment.”

Their results appear in the journal Physical Review Letters. Ketterle’s MIT co-authors include first author Vitaly Fedoseev, Hanzhen Lin, Yu-Kun Lu, Yoo Kyung Lee, and Jiahao Lyu, who all are affiliated with MIT’s Department of Physics, the Research Laboratory of Electronics, and the MIT-Harvard Center for Ultracold Atoms.

Cold confinement

Ketterle’s group at MIT experiments with atoms and molecules that they super-cool to temperatures just above absolute zero and arrange in configurations that they confine with laser light. Within these ultracold, carefully tuned clouds, exotic phenomena that only occur at the quantum, single-atom scale can emerge.

In a recent experiment, the team was investigating a seemingly unrelated question, studying how light scattering can reveal the properties of materials built from ultracold atoms.

“We realized we can quantify the degree to which this scattering process is like a particle or a wave, and we quickly realized we can apply this new method to realize this famous experiment in a very idealized way,” Fedoseev says.

In their new study, the team worked with more than 10,000 atoms, which they cooled to microkelvin temperatures. They used an array of laser beams to arrange the frozen atoms into an evenly spaced, crystal-like lattice configuration. In this arrangement, each atom is far enough away from any other atom that each can effectively be considered a single, isolated and identical atom. And 10,000 such atoms can produce a signal that is more easily detected, compared to a single atom or two.

The group reasoned that with this arrangement, they might shine a weak beam of light through the atoms and observe how a single photon scatters off two adjacent atoms, as a wave or a particle. This would be similar to how, in the original double-slit experiment, light passes through two slits.

“What we have done can be regarded as a new variant to the double-slit experiment,” Ketterle says. “These single atoms are like the smallest slits you could possibly build.”

Tuning fuzz

Working at the level of single photons required repeating the experiment many times and using an ultrasensitive detector to record the pattern of light scattered off the atoms. From the intensity of the detected light, the researchers could directly infer whether the light behaved as a particle or a wave.

They were particularly interested in the situation where half the photons they sent in behaved as waves, and half behaved as particles. They achieved this by using a method to tune the probability that a photon will appear as a wave versus a particle, by adjusting an atom’s “fuzziness,” or the certainty of its location. In their experiment, each of the 10,000 atoms is held in place by laser light that can be adjusted to tighten or loosen the light’s hold. The more loosely an atom is held, the fuzzier, or more “spatially extensive,” it appears. The fuzzier atom rustles more easily and records the path of the photon. Therefore, in tuning up an atom’s fuzziness, researchers can increase the probability that a photon will exhibit particle-like behavior. Their observations were in full agreement with the theoretical description.
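The link between an atom's spatial spread and the surviving interference can be captured by a Debye–Waller-type factor, a textbook result for light scattering from trapped atoms (included here as standard background, not as a formula quoted from the paper):

```latex
% Coherent (interfering) fraction of light scattered by a trapped atom.
%   q           = momentum transferred to the atom, divided by hbar,
%   <x^2>       = variance of the atom's position (its "fuzziness").
V \;\propto\; e^{-q^{2}\langle x^{2}\rangle}
% A tightly held atom (small <x^2>) scatters mostly coherently and
% preserves the interference pattern; a loosely held, fuzzier atom
% records the photon's recoil and washes the pattern out.
```

This matches the tuning knob described above: loosening the laser's hold increases $\langle x^{2}\rangle$ and shifts the light toward particle-like behavior.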

Springs away

In their experiment, the group tested Einstein’s idea about how to detect the path of the photon. Conceptually, if each slit were cut into an extremely thin sheet of paper suspended in the air by a spring, a photon passing through one slit should shake the corresponding spring by a degree that would signal the photon’s particle nature. In previous realizations of the double-slit experiment, physicists have incorporated such a spring-like ingredient, and the spring played a major role in describing the photon’s dual nature.

But Ketterle and his colleagues were able to perform the experiment without the proverbial springs. The team’s cloud of atoms is initially held in place by laser light, similar to Einstein’s conception of a slit suspended by a spring. The researchers reasoned that if they were to do away with their “spring,” and observe exactly the same phenomenon, then it would show that the spring has no effect on a photon’s wave/particle duality.

This, too, was what they found. Over multiple runs, they turned off the spring-like laser holding the atoms in place and then quickly took a measurement within a millionth of a second, before the atoms became fuzzier and eventually fell due to gravity. In this tiny amount of time, the atoms were effectively floating in free space. In this spring-free scenario, the team observed the same phenomenon: A photon’s wave and particle nature could not be observed simultaneously.

“In many descriptions, the springs play a major role. But we show, no, the springs do not matter here; what matters is only the fuzziness of the atoms,” Fedoseev says. “Therefore, one has to use a more profound description, which uses quantum correlations between photons and atoms.”

The researchers note that the year 2025 has been declared by the United Nations as the International Year of Quantum Science and Technology, celebrating the formulation of quantum mechanics 100 years ago. The discussion between Bohr and Einstein about the double-slit experiment took place only two years later.

“It’s a wonderful coincidence that we could help clarify this historic controversy in the same year we celebrate quantum physics,” says co-author Lee.

This work was supported, in part, by the National Science Foundation, the U.S. Department of Defense, and the Gordon and Betty Moore Foundation.

InvenTeams turns students into inventors

Fri, 07/25/2025 - 12:00am

In 2023, students from Calistoga Junior/Senior High School in California entered a year-long invention project run by the Lemelson-MIT Program. Tasked with finding problems to solve in their community, the students settled on an invention to keep firefighters and agricultural workers cool in hot working conditions.

Over the next 12 months, the students learned more about the problem from the workers, developed a prototype cooling system, and filed a patent for their invention. After presenting their solution at the program’s capstone Eurekafest event at MIT, the students were invited to the California State Capitol to share their work with lawmakers, and they went on to be selected as finalists in the student SXSW Innovation Awards.

For 20 years, the Lemelson-MIT InvenTeams Grant Initiative has inspired high school students across the country by supporting them through an extracurricular invention program that culminates in presentations on MIT’s campus each spring. The students select their own problems and invent their own solutions, receiving $7,500 in grants from Lemelson-MIT, along with mentorship, technical consultation, and more support to turn their ideas into reality.

In total, high school InvenTeams have been granted 19 U.S. patents since the program’s start, with many more teams, like the one from Calistoga, continuing work on their inventions after the program. Students often report an increased sense of confidence and interest in STEM subjects following their InvenTeams experience. In some cases, that new mindset changes students’ life trajectories.

“In a traditional school setting, students don’t always get the chance to show what they can do,” says Calistoga High School teacher Heather Brooks, who sponsored the 2023 team. “I was blown away by the students’ power and creativity.”

Turning students into inventors

The Lemelson Prize program started in 2004 with one $500,000 award given to a prolific inventor each year and smaller prizes given to inventor teams from MIT. In 2006, following a National Science Foundation report on the best ways to foster and support inventors, the program started awarding smaller grants to teams of high school students across the country.

“[Program founder] Jerome Lemelson wanted to inspire young people to become inventors and had a deep belief that America’s strength and innovation was driven by invention,” says Lemelson-MIT Executive Director Stephanie Couch. “He wanted young people to celebrate inventors like they celebrate rock stars and football players.”

When Couch arrived at MIT nine years ago, her research showed that giving small grants to younger students was the most successful way to increase students’ interest in STEM subjects.

Each year, the InvenTeams program receives between 50 and 80 applications from student teams across the country. From there, 20 to 30 teams are selected for Excite Awards. Those teams submit an in-depth application in which they describe the problem they’re solving, conduct patent research, and share early ideas for their solution. They also outline plans for community engagement, budget allocation, and additional background research.

Judges with a range of expertise select the finalists, who submit monthly updates throughout the year. Teams also meet regularly with the community members they are inventing solutions for.

“We see invention as a practice in empathy,” says Edwin Marrero, the interim invention education manager of the Lemelson-MIT program. “When you’re inventing, you’re inventing for somebody — and we like to say you’re inventing with somebody. Students learn to communicate and work in their communities. It’s a good skill to learn early in life.”

The final event at MIT, dubbed Eurekafest, is held every June. It features live presentations at the Stata Center that are open to the public and allow the students to showcase their inventions. Students stay in MIT dormitories for a few days leading up to the presentations and participate in a series of networking opportunities.

“The presentations are my favorite part, because people are peppering students with questions, and their depth of understanding, along with the confidence they project, is totally unlike anything you’ve ever seen from a high schooler,” Couch says.

This year’s teams presented ways to detect contamination in drinking water, help visually impaired people communicate, treat groundwater for use in agriculture, and more. Finalist teams hailed from Lubbock, Texas; Edison, New Jersey; Nitro, West Virginia; and — for the first time in the program’s history — MIT’s backyard of Cambridge, Massachusetts. The team from Cambridge invented a communication device for rowers on crew teams so they can hear from their coaches. They filed a patent for their invention.

“We’ve learned from our research that this one-year program really does transform the students’ perceptions of themselves, what they’re capable of, and what they’ll do next,” Couch says. “Also, by letting them pick what problem they want to solve and for whom they want to solve it, we’re giving them agency and tapping into that intrinsic motivation in life — to find meaning and purpose. How often in school do you get to find a problem versus being told which one to work on?”

Scaling invention education

There are many stories about the impact of the InvenTeam program on students. In 2016, a team of students on the autism spectrum developed a treadmill device and app to detect lameness in cows on dairy farms — a way to catch injury or disease in the animals. The students filed a patent for the device, which cost far less than other solutions on the market.

In 2018, a team from Garey High School in California developed a sensor device to help monitor foot health in diabetic patients and prevent amputations.

“Our school is one of the lowest-performing academically, and 99 percent of our students are low income,” says Antonio Gamboa, the school district’s former science department chair. “Before the Lemelson-MIT InvenTeams grant, district administrators said they didn’t have money to support science. Once they saw what these students could do, that turned around — not just in our school, but across the district.”

The InvenTeams program has been so successful that the Lemelson-MIT program created a membership program, called Partners in Invention Education, to help many more schools adopt invention education. The curriculum stretches from kindergarten all the way to the first two years of college.

“As a middle school math teacher in New York City Public Schools, I noticed kids are falling out of love with these STEM subjects at an early age,” Marrero says. “I think a big reason for that is it’s not taught in a way that’s meaningful to them. There often aren’t real-world applications in lessons. Lemelson-MIT’s invention education makes STEM subjects relevant to kids. They’re the drivers of the learning. They might discover they need math or science skills to solve the problem they’re working on, and it creates a different level of motivation.”

3 Questions: Applying lessons in data, economics, and policy design to the real world

Thu, 07/24/2025 - 3:45pm

Gevorg Minasyan MAP ’23 first discovered the MITx MicroMasters Program in Data, Economics, and Design of Policy (DEDP) — jointly led by the Abdul Latif Jameel Poverty Action Lab (J-PAL) and MIT Open Learning — when he was looking to better understand the process of building effective, evidence-based policies while working at the Central Bank of Armenia. After completing the MicroMasters program, Minasyan was inspired to pursue MIT’s Master’s in Data, Economics, and Design of Policy program.

Today, Minasyan is the director of the Research and Training Center at the Central Bank of Armenia. He has not only been able to apply what he has learned at MIT to his work, but he has also sought to institutionalize a culture of evidence-based policymaking at the bank and more broadly in Armenia. He spoke with MIT Open Learning about his journey through the DEDP programs, key takeaways, and how what he learned at MIT continues to guide his work.

Q: What initially drew you to the DEDP MicroMasters, and what were some highlights of the program?

A: Working at the Central Bank of Armenia, I was constantly asking myself: Can we build a system in which public policy decisions are grounded in rigorous evidence? Too often, I observed public programs that were well-intentioned and seemed to address pressing challenges, but ultimately failed to bring tangible change. Sometimes it was due to flawed design; other times, the goals simply didn’t align with what the public actually needed or expected. These experiences left a deep impression on me and sparked a strong desire to better understand what works, what doesn’t, and why.

That search led me to the DEDP MicroMasters program, which turned out to be a pivotal step in my professional journey. From the very first course, I realized that this was not just another academic program — it was a completely new way of thinking about development policy. The courses combined rigorous training in economics, data analysis, and impact evaluation with a strong emphasis on practical application. We weren’t just learning formulas or running regressions — we were being trained to ask the right questions, to think critically about causality, and to understand the trade-offs of policy choices.

Another aspect that set the MicroMasters apart was its blended structure. I was able to pursue a globally top-tier education while continuing my full-time responsibilities at the Central Bank. This made the learning deeply relevant and immediately applicable. Even as I was studying, I found myself incorporating insights from class into my day-to-day policy work, whether it was refining how we evaluated financial inclusion programs or rethinking the way we analyzed administrative data.

At the same time, the global nature of the program created a vibrant, diverse community. I engaged with students and professionals from dozens of countries, each bringing different perspectives. These interactions enriched the coursework and helped me to realize that despite the differences in context, the challenges of effective policy design — and the power of evidence to improve lives — were remarkably universal. It was a rare combination: intellectually rigorous, practically grounded, globally connected, and personally transformative.

Q: Can you describe your experiences in the Master’s in Data, Economics, and Design of Policy residential program?

A: The MicroMasters experience inspired me to go further, and I decided to apply for the full-time, residential master’s at MIT. That year was nothing short of transformative. It not only sharpened my technical and analytical skills, but also fundamentally changed the way I think about policymaking.

One of the most influential courses I took during the master’s program was 14.760 (Firms, Markets, Trade, and Growth). The analytical tools it provided mapped directly onto the systemic challenges I saw among Armenian firms. Motivated by this connection, I developed a similar course, which I now teach at the American University of Armenia. Each year, I work with students to investigate the everyday constraints that hinder firm performance, with the ultimate goal of producing data-driven research that could inform business strategy in Armenia.

The residential master’s program taught me that evidence-based decision-making starts with a mindset shift. It’s not just about applying tools, it’s about being open to questioning assumptions, being transparent about uncertainty, and being humble enough to let data challenge intuition. I also came to appreciate that truly effective policy design isn’t about finding one-off solutions, but about creating dynamic feedback loops that allow us to continuously learn from implementation.

This is essential to refining programs in real time, adapting to new information, and avoiding the trap of static, one-size-fits-all approaches. Equally valuable was becoming part of the MIT and J-PAL’s global network. The relationships I built with researchers, practitioners, and fellow students from around the world gave me lasting insights into how institutions can systematically embed analysis in their core operations. This exposure helped me to see the possibilities not just for my own work, but for how public institutions like central banks can lead the way in advancing an evidence-based culture.

Q: How are you applying what you’ve learned in the DEDP programs to the Central Bank of Armenia?

A: As director of the Research and Training Center at the Central Bank of Armenia, I have taken on a new kind of responsibility: leading the effort to scale evidence-based decision-making not only within the Central Bank, but across a broader ecosystem of public institutions in Armenia. This means building internal capacity, rethinking how research informs policy, and fostering partnerships that promote a culture of data-driven decision-making.

Beyond the classroom, the skills I developed through the DEDP program have been critical to my role in shaping real-world policy in Armenia. A particularly timely example is our national push toward a cashless economy — one of the most prominent and complex reform agendas today. In recent years, the government has rolled out a suite of bold policies aimed at boosting the adoption of non-cash payments, all part of a larger vision to modernize the financial system, reduce the shadow economy, and increase transparency. Key initiatives include a cashback program designed to encourage pensioners to use digital payments and the mandatory installation of non-cash payment terminals across businesses nationwide. In my role on an inter-agency policy team, I rely heavily on the analytical tools from DEDP to evaluate these policies and propose regulatory adjustments to ensure the transition is not only effective, but also inclusive and sustainable.

The Central Bank of Armenia recently collaborated with J-PAL Europe to co-design and host a policy design and evaluation workshop. The workshop brought together policymakers, central bankers, and analysts from various sectors and focused on integrating evidence throughout the policy cycle, from defining the problem to designing interventions and conducting rigorous evaluations. It’s just the beginning, but it already reflects how the ideas, tools, and values I absorbed at MIT are now taking institutional form back home.

Our ultimate goal is to institutionalize the use of policy evaluation as a standard practice — not as an occasional activity, but as a core part of how we govern. We’re working to embed a stronger feedback culture in policymaking, one that prioritizes learning before scaling. More experimentation, piloting, and iteration are essential before committing to large-scale rollouts of public programs. This shift requires patience and persistence, but it is critical if we want policies that are not only well-designed, but also effective, inclusive, and responsive to people’s needs.

Looking ahead, I remain committed to advancing this transformation, by building the systems, skills, and partnerships that can sustain evidence-based policymaking in Armenia for the long term. 

Robot, know thyself: New vision-based system teaches machines to understand their bodies

Thu, 07/24/2025 - 3:30pm

In an office at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), a soft robotic hand carefully curls its fingers to grasp a small object. The intriguing part isn’t the mechanical design or embedded sensors — in fact, the hand contains none. Instead, the entire system relies on a single camera that watches the robot’s movements and uses that visual data to control it.

This capability comes from a new system CSAIL scientists developed, offering a different perspective on robotic control. Rather than using hand-designed models or complex sensor arrays, it allows robots to learn how their bodies respond to control commands, solely through vision. The approach, called Neural Jacobian Fields (NJF), gives robots a kind of bodily self-awareness. An open-access paper about the work was published in Nature on June 25.

“This work points to a shift from programming robots to teaching robots,” says Sizhe Lester Li, MIT PhD student in electrical engineering and computer science, CSAIL affiliate, and lead researcher on the work. “Today, many robotics tasks require extensive engineering and coding. In the future, we envision showing a robot what to do, and letting it learn how to achieve the goal autonomously.”

The motivation stems from a simple but powerful reframing: The main barrier to affordable, flexible robotics isn’t hardware — it’s control, a capability that can be achieved in multiple ways. Traditional robots are built to be rigid and sensor-rich, making it easier to construct a digital twin, a precise mathematical replica used for control. But when a robot is soft, deformable, or irregularly shaped, those assumptions fall apart. Rather than forcing robots to match our models, NJF flips the script — giving robots the ability to learn their own internal model from observation.

Look and learn

This decoupling of modeling and hardware design could significantly expand the design space for robotics. In soft and bio-inspired robots, designers often embed sensors or reinforce parts of the structure just to make modeling feasible. NJF lifts that constraint. The system doesn’t need onboard sensors or design tweaks to make control possible. Designers are freer to explore unconventional, unconstrained morphologies without worrying about whether they’ll be able to model or control them later.

“Think about how you learn to control your fingers: you wiggle, you observe, you adapt,” says Li. “That’s what our system does. It experiments with random actions and figures out which controls move which parts of the robot.”

The system has proven robust across a range of robot types. The team tested NJF on a pneumatic soft robotic hand capable of pinching and grasping, a rigid Allegro hand, a 3D-printed robotic arm, and even a rotating platform with no embedded sensors. In every case, the system learned both the robot’s shape and how it responded to control signals, just from vision and random motion.

The researchers see potential far beyond the lab. Robots equipped with NJF could one day perform agricultural tasks with centimeter-level localization accuracy, operate on construction sites without elaborate sensor arrays, or navigate dynamic environments where traditional methods break down.

At the core of NJF is a neural network that captures two intertwined aspects of a robot’s embodiment: its three-dimensional geometry and its sensitivity to control inputs. The system builds on neural radiance fields (NeRF), a technique that reconstructs 3D scenes from images by mapping spatial coordinates to color and density values. NJF extends this approach by learning not only the robot’s shape, but also a Jacobian field, a function that predicts how any point on the robot’s body moves in response to motor commands.

To train the model, the robot performs random motions while multiple cameras record the outcomes. No human supervision or prior knowledge of the robot’s structure is required — the system simply infers the relationship between control signals and motion by watching.

Once training is complete, the robot only needs a single monocular camera for real-time closed-loop control, running at about 12 Hertz. This allows it to continuously observe itself, plan, and act responsively. That speed makes NJF more viable than many physics-based simulators for soft robots, which are often too computationally intensive for real-time use.
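The core idea, a per-point Jacobian that maps motor commands to point velocities and can be inverted for closed-loop control, can be sketched in toy form. The linear `jacobian_field` below is a hand-made placeholder for the learned neural network, and the points and targets are invented for illustration; this is a sketch of the concept, not the paper's implementation.

```python
import numpy as np

def jacobian_field(x):
    """Toy stand-in for a learned Jacobian field.

    For a 3D point x on the robot, return a matrix J(x) mapping a
    2-element command vector u to that point's velocity: v = J(x) @ u.
    (In NJF this map is a neural network trained from video; here it is
    a hypothetical linear placeholder whose sensitivity grows with
    distance from a base at the origin.)
    """
    r = np.linalg.norm(x)
    return r * np.array([[1.0, 0.0],
                         [0.0, 1.0],
                         [0.5, -0.5]])

def command_for_targets(points, targets, dt=0.1):
    """Least-squares command that moves observed points toward targets."""
    Js = np.vstack([jacobian_field(p) for p in points])           # stack J(x_i)
    desired = np.concatenate([(t - p) / dt                        # desired velocities
                              for p, t in zip(points, targets)])
    u, *_ = np.linalg.lstsq(Js, desired, rcond=None)              # solve Js @ u ~ desired
    return u

# Two tracked points on the robot and where we want them to go.
points = [np.array([1.0, 0.0, 0.0]), np.array([0.0, 1.0, 0.5])]
targets = [p + np.array([0.05, 0.0, 0.0]) for p in points]
u = command_for_targets(points, targets)
```

In a real closed loop this solve would run repeatedly (the paper reports roughly 12 Hz), with the camera re-observing the points after each command.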

In early simulations, even simple 2D fingers and sliders were able to learn this mapping using just a few examples. By modeling how specific points deform or shift in response to action, NJF builds a dense map of controllability. That internal model allows it to generalize motion across the robot’s body, even when the data are noisy or incomplete.

“What’s really interesting is that the system figures out on its own which motors control which parts of the robot,” says Li. “This isn’t programmed — it emerges naturally through learning, much like a person discovering the buttons on a new device.”

The future is soft

For decades, robotics has favored rigid, easily modeled machines — like the industrial arms found in factories — because their properties simplify control. But the field has been moving toward soft, bio-inspired robots that can adapt to the real world more fluidly. The trade-off? These robots are harder to model.

“Robotics today often feels out of reach because of costly sensors and complex programming. Our goal with Neural Jacobian Fields is to lower the barrier, making robotics affordable, adaptable, and accessible to more people. Vision is a resilient, reliable sensor,” says senior author and MIT Assistant Professor Vincent Sitzmann, who leads the Scene Representation group. “It opens the door to robots that can operate in messy, unstructured environments, from farms to construction sites, without expensive infrastructure.”

“Vision alone can provide the cues needed for localization and control — eliminating the need for GPS, external tracking systems, or complex onboard sensors. This opens the door to robust, adaptive behavior in unstructured environments, from drones navigating indoors or underground without maps to mobile manipulators working in cluttered homes or warehouses, and even legged robots traversing uneven terrain,” says co-author Daniela Rus, MIT professor of electrical engineering and computer science and director of CSAIL. “By learning from visual feedback, these systems develop internal models of their own motion and dynamics, enabling flexible, self-supervised operation where traditional localization methods would fail.”

While training NJF currently requires multiple cameras and must be redone for each robot, the researchers are already imagining a more accessible version. In the future, hobbyists could record a robot’s random movements with their phone, much like you’d take a video of a rental car before driving off, and use that footage to create a control model, with no prior knowledge or special equipment required.

The system doesn’t yet generalize across different robots, and it lacks force or tactile sensing, limiting its effectiveness on contact-rich tasks. But the team is exploring new ways to address these limitations: improving generalization, handling occlusions, and extending the model’s ability to reason over longer spatial and temporal horizons.

“Just as humans develop an intuitive understanding of how their bodies move and respond to commands, NJF gives robots that kind of embodied self-awareness through vision alone,” says Li. “This understanding is a foundation for flexible manipulation and control in real-world environments. Our work, essentially, reflects a broader trend in robotics: moving away from manually programming detailed models toward teaching robots through observation and interaction.”

This paper brought together the computer vision and self-supervised learning work from the Sitzmann lab and the expertise in soft robots from the Rus lab. Li, Sitzmann, and Rus co-authored the paper with CSAIL affiliates Annan Zhang SM ’22, a PhD student in electrical engineering and computer science (EECS); Boyuan Chen, a PhD student in EECS; Hanna Matusik, an undergraduate researcher in mechanical engineering; and Chao Liu, a postdoc in the Senseable City Lab at MIT. 

The research was supported by the Solomon Buchsbaum Research Fund through MIT’s Research Support Committee, an MIT Presidential Fellowship, the National Science Foundation, and the Gwangju Institute of Science and Technology.

Pedestrians now walk faster and linger less, researchers find

Thu, 07/24/2025 - 1:45pm

City life is often described as “fast-paced.” A new study suggests that’s more true than ever.

The research, co-authored by MIT scholars, shows that the average walking speed of pedestrians in three northeastern U.S. cities increased 15 percent from 1980 to 2010. The number of people lingering in public spaces declined by 14 percent in that time as well.

The researchers used machine-learning tools to assess 1980s-era video footage captured by renowned urbanist William Whyte in Boston, New York, and Philadelphia. They compared the old material with newer videos from the same locations.

“Something has changed over the past 40 years,” says MIT professor of the practice Carlo Ratti, a co-author of the new study. “How fast we walk, how people meet in public space — what we’re seeing here is that public spaces are working in somewhat different ways, more as a thoroughfare and less a space of encounter.”

The paper, “Exploring the social life of urban spaces through AI,” is published this week in the Proceedings of the National Academy of Sciences. The co-authors are Arianna Salazar-Miranda MCP ’16, PhD ’23, an assistant professor at Yale University’s School of the Environment; Zhuanguan Fan of the University of Hong Kong; Michael Baick; Keith N. Hampton, a professor at Michigan State University; Fabio Duarte, associate director of the Senseable City Lab; Becky P.Y. Loo of the University of Hong Kong; Edward Glaeser, the Fred and Eleanor Glimp Professor of Economics at Harvard University; and Ratti, who is also director of MIT’s Senseable City Lab.

The results could help inform urban planning, as designers seek to create new public areas or modify existing ones.

“Public space is such an important element of civic life, and today partly because it counteracts the polarization of digital space,” says Salazar-Miranda. “The more we can keep improving public space, the more we can make our cities suited for convening.”

Meet you at the Met

Whyte was a prominent social thinker whose famous 1956 book, “The Organization Man,” probing the apparent culture of corporate conformity in the U.S., became a touchstone of its decade.

However, Whyte spent the latter decades of his career focused on urbanism. The footage he filmed, from 1978 through 1980, was archived by a Brooklyn-based nonprofit organization called the Project for Public Spaces and later digitized by Hampton and his students.

Whyte filmed at four spots across the three cities: Boston’s Downtown Crossing area; New York City’s Bryant Park; the steps of the Metropolitan Museum of Art in New York, a famous gathering point and people-watching spot; and Philadelphia’s Chestnut Street.

In 2010, a group led by Hampton shot new footage at those locations, at the same times of day Whyte had, to compare current-day dynamics with those of Whyte’s time. To conduct the study, the co-authors used computer vision and AI models to summarize and quantify the activity in the videos.
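The walking-speed side of that quantification comes down to simple geometry once pedestrians have been tracked across video frames. The sketch below is only an illustration of the idea, not the authors’ pipeline: the position tracks are invented by hand here, whereas the study extracted them from footage with computer vision models.

```python
import math

def average_speed(track):
    """Mean speed (m/s) of one pedestrian from (time_s, x_m, y_m) samples."""
    dist = sum(
        math.hypot(x2 - x1, y2 - y1)
        for (_, x1, y1), (_, x2, y2) in zip(track, track[1:])
    )
    elapsed = track[-1][0] - track[0][0]
    return dist / elapsed

# Hypothetical tracks for illustration: (seconds, x meters, y meters)
track_1980 = [(0, 0.0, 0.0), (1, 1.2, 0.0), (2, 2.4, 0.0)]
track_2010 = [(0, 0.0, 0.0), (1, 1.4, 0.0), (2, 2.8, 0.0)]

v80 = average_speed(track_1980)  # 1.2 m/s
v10 = average_speed(track_2010)  # 1.4 m/s
print(f"speed change: {100 * (v10 - v80) / v80:.0f}%")
```

Averaged over thousands of such tracks per site, per era, this kind of statistic yields the 15 percent speed increase the study reports.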

The researchers found that some things have not changed greatly. The percentage of people walking alone barely moved, from 67 percent in 1980 to 68 percent in 2010. On the other hand, the percentage of individuals entering these public spaces who became part of a group declined: in 1980, 5.5 percent of the people approaching these spots met up with a group; in 2010, that figure was down to 2 percent.

“Perhaps there’s a more transactional nature to public space today,” Ratti says.

Fewer outdoor groups: Anomie or Starbucks?

If people’s behavioral patterns have altered since 1980, it’s natural to ask why. Certainly some of the visible changes seem consistent with the pervasive use of cellphones; people organize their social lives by phone now, and perhaps zip around more quickly from place to place as a result.

“When you look at the footage from William Whyte, the people in public spaces were looking at each other more,” Ratti says. “It was a place you could start a conversation or run into a friend. You couldn’t do things online then. Today, behavior is more predicated on texting first, to meet in public space.”

As the scholars note, if groups of people hang out together slightly less often in public spaces, there could be still another reason for that: Starbucks and its competitors. As the paper states, outdoor group socializing may be less common due to “the proliferation of coffee shops and other indoor venues. Instead of lingering on sidewalks, people may have moved their social interactions into air-conditioned, more comfortable private spaces.”

Certainly coffee shops were far less common in big cities in 1980, and the big chain coffee shops did not exist.

On the other hand, public-space behavior might have been evolving all this time regardless of Starbucks and the like. The researchers say the new study offers a proof of concept for its method and has encouraged them to conduct additional work. Ratti, Duarte, and other researchers from MIT’s Senseable City Lab have turned their attention to an extensive survey of European public spaces in an attempt to shed more light on the interaction between people and urban form.

“We are collecting footage from 40 squares in Europe,” Duarte says. “The question is: How can we learn at a larger scale? This is in part what we’re doing.” 

New machine-learning application to help researchers predict chemical properties

Thu, 07/24/2025 - 1:00pm

A fundamental goal shared by most chemistry researchers is predicting a molecule’s properties, such as its boiling or melting point. Once researchers can make those predictions, they can move forward with work yielding discoveries that lead to medicines, materials, and more. Historically, however, the traditional methods of obtaining these predictions have carried significant costs in time, wear and tear on equipment, and funds.

Enter a branch of artificial intelligence known as machine learning (ML). ML has lessened the burden of molecule property prediction to a degree, but the advanced tools that most effectively expedite the process — by learning from existing data to make rapid predictions for new molecules — require the user to have a significant level of programming expertise. This creates an accessibility barrier for many chemists, who may not have the significant computational proficiency required to navigate the prediction pipeline. 

To alleviate this challenge, researchers in the McGuire Research Group at MIT have created ChemXploreML, a user-friendly desktop app that helps chemists make these critical predictions without requiring advanced programming skills. Freely available, easy to download, and functional on mainstream platforms, this app is also built to operate entirely offline, which helps keep research data proprietary. The exciting new technology is outlined in an article published recently in the Journal of Chemical Information and Modeling.

One specific hurdle in chemical machine learning is translating molecular structures into a numerical language that computers can understand. ChemXploreML automates this complex process with powerful, built-in "molecular embedders" that transform chemical structures into informative numerical vectors. Next, the software implements state-of-the-art algorithms to identify patterns and accurately predict molecular properties like boiling and melting points, all through an intuitive, interactive graphical interface. 
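The embed-then-predict pipeline described above can be sketched in a few lines of Python. Everything here is a stand-in for illustration: the “embedder” merely counts a few atom symbols in a SMILES string (nothing like the real Mol2Vec or VICGAE embedders), the training set is invented, and the “model” is a one-nearest-neighbor lookup rather than the state-of-the-art algorithms ChemXploreML uses.

```python
from collections import Counter
import math

def embed(smiles):
    """Toy embedder: map a SMILES string to counts of a few atom symbols.
    (Counts single characters only, so e.g. 'Cl' is not handled correctly.)"""
    counts = Counter(smiles.upper())
    return [counts[atom] for atom in "CONS"]

def predict(smiles, training):
    """Predict a property via the nearest embedded training molecule."""
    vec = embed(smiles)
    nearest = min(training, key=lambda item: math.dist(vec, embed(item[0])))
    return nearest[1]

# Invented (SMILES, boiling point in kelvins) pairs, for illustration only
training = [("CCO", 351.0), ("CCCCCC", 342.0), ("O", 373.0)]
print(predict("CCCO", training))  # returns the neighbor's value, 351.0
```

The point of the sketch is the division of labor: once molecules live in a numerical vector space, any standard regression or pattern-matching algorithm can be swapped in, which is exactly the flexibility the app’s graphical interface exposes without code.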

"The goal of ChemXploreML is to democratize the use of machine learning in the chemical sciences,” says Aravindh Nivas Marimuthu, a postdoc in the McGuire Group and lead author of the article. “By creating an intuitive, powerful, and offline-capable desktop application, we are putting state-of-the-art predictive modeling directly into the hands of chemists, regardless of their programming background. This work not only accelerates the search for new drugs and materials by making the screening process faster and cheaper, but its flexible design also opens doors for future innovations.” 

ChemXploreML is designed to evolve over time, so as future techniques and algorithms are developed, they can be seamlessly integrated into the app, ensuring that researchers are always able to access and implement the most up-to-date methods. The application was tested on five key molecular properties of organic compounds — melting point, boiling point, vapor pressure, critical temperature, and critical pressure — and achieved accuracy scores of up to 93 percent for critical temperature. The researchers also demonstrated that a new, more compact method of representing molecules (VICGAE) was nearly as accurate as standard methods, such as Mol2Vec, but was up to 10 times faster.

“We envision a future where any researcher can easily customize and apply machine learning to solve unique challenges, from developing sustainable materials to exploring the complex chemistry of interstellar space,” says Marimuthu. Joining him on the paper is senior author and Class of 1943 Career Development Assistant Professor of Chemistry Brett McGuire.
