MIT Latest News
The Azraq refugee camp in Jordan hosts about 35,000 people displaced by the Syrian civil war, who live in rows of small white steel sheds. Several years ago, a camp resident named Majid Al-Kanaan undertook a project to combat the visual and existential monotony of camp life.
Using clay and stones from camp terrain, he built a colonnade of decorative arches in front of his shed, referencing the Arch of Triumph in Palmyra, Syria — and added elements alluding to Syria’s Citadel of Aleppo and the Umayyad desert palaces in Jordan.
“I was exploring what could be done with the sand and stone of this area,” Al-Kanaan says in a new book about life in the Azraq camp. The book was edited by an MIT-based team that worked with camp refugees on design projects for years.
As the team found, the Azraq camp is full of designers and builders who create objects despite having little to work with. Camp residents have used yogurt containers to build hanging gardens for plants, carved chess sets out of broom handles, made children’s toys from trash, and rigged up fountains from spare parts.
These projects “speak to the ingenuity of the human spirit,” says Azra Akšamija, an associate professor in MIT’s School of Architecture and Planning and a co-editor of the new book. “These inventions point to what is missing. People invent things because they are lacking.”
At the same time, she notes, the cultural and artistic aspects of these inventions are also critical: “Those are essential human needs, it’s not just food and a roof above your head.”
The book, “Design to Live: Everyday Inventions from a Refugee Camp,” has just been published by the MIT Press. The book’s co-editors are Akšamija, an artist, architectural historian, director of the MIT Future Heritage Lab, and director of the MIT Program in Art, Culture, and Technology; Raafat Majzoub, an architect, artist, and writer who is a lecturer at the American University of Beirut and director of a Lebanon-based NGO, The Khan: The Arab Association for Prototyping Cultural Practices; and Melina Philippou, an architect and urbanist who is program director of the MIT Future Heritage Lab.
“The book is a case study about the refugee camp, but it goes beyond that,” Akšamija says. “It’s also about the conditions of scarcity, and this kind of agency of design and art in conditions of displacement, which inevitably face our global society in the future.”
Majzoub adds: “Through the dissemination of this work, we aim to contribute to the valorization and prioritization of social and cultural activities in crisis zones, moving beyond the established paradigm of ‘basic needs.’”
Roughly 6.6 million Syrians have fled the country since war broke out in 2011, according to the United Nations. The Azraq camp opened in 2014, and the MIT Future Heritage Lab, founded by Akšamija, began helping refugees study and practice art and design in 2017 (facilitated by the humanitarian organization CARE and in collaboration with professors and students from the German-Jordanian University in Amman).
“Design to Live” details inventions Azraq residents developed before working with the MIT team. The book has text in both English and Arabic, abundant illustrations, and sections where Syrian refugees offer their own views. The volume has a tête-bêche structure — facing pages are upside down relative to each other — offering the viewpoints of people living both inside and outside the camp.
“We are not speaking for refugees, but we are highlighting their voices, while incorporating these multiple perspectives,” Akšamija says. “We want to bring out the significance of the cultural and artistic processes in the healing of society.”
She adds: “It was eye-opening to see toy trucks made out of trash and a chessboard made out of broomsticks. That is really about cultural expression and making life worth living, feeling like a human being, addressing issues of memory and hopelessness and idleness.”
Many refugees improvised forms of water storage; the book has blueprints for a fountain made from buckets and a hose. Some Azraq residents, barred from growing things in the soil, have created vertical gardens outside their sheds — with planters made from yogurt containers, where they grow traditional recipe ingredients.
“It’s impressive,” says Akšamija. “It’s about literally bringing spice to life. Plants are a beautiful metaphor for migration of culture and food, and maybe people, too. And [they’re] a way of continuing your tradition through cooking. Good food is a very important dimension of Syrian culture. People have minimal means, but they cook. You get this most incredible food in the middle of nothing. Continuing your traditions is a way of sustaining and surviving.”
As Philippou notes, “The designs of our Syrian collaborators like the vertical garden, the fountain, and the decorative arches carve space for personal and collective expression,” while emerging from conditions of “confinement, with limited resources and [often] against the regulative framework of the camp.”
As a section of the book titled “Intimacy” details, camp residents also built alternate, decorated entrance halls for their sheds; these transitional spaces limit direct views into houses from the street, to grant privacy to residents.
“Over time, we observed the impact of these designs on both other residents and NGOs,” Philippou says. “Fellow residents replicated and built upon the work of their neighbors, and NGOs adapted camp regulations to accommodate and support these popular designs. Syrian designers at the camp offer alternatives that feed back into evolving camp services on a systemic level.”
As Majzoub notes, “These designs are not singular or isolated but are rather parts of a complex process of sharing, co-creation, and world-making, where camp residents defy their realities, challenge the status quo, and create frameworks for cultural continuity in the harsh and sterile conditions of a standardized refugee camp in the middle of the desert.”
Acts of resilience
As Akšamija observes, creating objects is an act of resilience for refugees. Many camp residents are depressed, as they see no way out of their situation, but others find strength and inspiration in art and design. One elderly man making toys out of trash, Akšamija recollects, was “full of spirit, but I don’t know how.” His son is a camp resident who has been unable to find work elsewhere, despite being a professional engineer. Many people feel they “have nothing to do, no work, no future.”
Akšamija experienced war and forced displacement herself while growing up; her family fled Bosnia in the 1990s when war broke out, ultimately landing in Austria.
“In my own country, we had such an amazing life, and suddenly we had to start from scratch in a new place,” Akšamija says. “I think this can happen to anyone, and it’s important to think about it this way.”
Indeed, Akšamija says the ideas in the book are not only relevant to refugee camps; in a world of resource scarcity, where climate crisis and political strife are creating further dislocation, many people endure deep deprivation.
Moreover, most refugees remain in desperate circumstances. “It’s important not to exoticize these inventions,” Akšamija says. “It’s a brutal reality. We tried to show it. And we tried to show the power of art and design in creating a life worth living amid war, destruction, and displacement.”
In that vein, consider the colonnade of earthen arches at Azraq. Well-crafted as it was, the structure was destroyed by the elements within a few years. Only briefly, then, the arches “transformed the desert from a symbol of isolation to a place for the community and a medium for cultural expression,” as Akšamija, Majzoub, and Philippou write in the book.
Like everything, the structure was a transitory creation, vulnerable to collapse. In design, as in all areas of living, the Azraq refugees face a need for rebuilding and reconstruction, despite little support and enormous uncertainty.
“It’s life,” Akšamija says. “But it’s not like we say, ‘Oh, that’s how life is,’ and we accept it. Syrian refugees in Al Azraq camp showed us that we can and must do better to address the cultural and emotional needs of displaced people.”
Using machine learning, a computer model can teach itself to smell in just a few minutes. When it does, researchers have found, it builds a neural network that closely mimics the olfactory circuits that animal brains use to process odors.
Animals from fruit flies to humans all use essentially the same strategy to process olfactory information in the brain. But neuroscientists who trained an artificial neural network to take on a simple odor classification task were surprised to see it replicate biology’s strategy so faithfully.
“The algorithm we use has no resemblance to the actual process of evolution,” says Guangyu Robert Yang, an associate investigator at MIT’s McGovern Institute for Brain Research, who led the work as a postdoc at Columbia University. The similarities between the artificial and biological systems suggest that the brain’s olfactory network is optimally suited to its task.
Yang and his collaborators, who reported their findings Oct. 6 in the journal Neuron, say their artificial network will help researchers learn more about the brain’s olfactory circuits. The work also helps demonstrate artificial neural networks’ relevance to neuroscience. “By showing that we can match the architecture [of the biological system] very precisely, I think that gives more confidence that these neural networks can continue to be useful tools for modeling the brain,” says Yang, who is also an assistant professor in MIT’s departments of Brain and Cognitive Sciences and Electrical Engineering and Computer Science.
Mapping natural olfactory circuits
For fruit flies, the organism in which the brain’s olfactory circuitry has been best mapped, smell begins in the antennae. Sensory neurons there, each equipped with odor receptors specialized to detect specific scents, transform the binding of odor molecules into electrical activity. When an odor is detected, these neurons, which make up the first layer of the olfactory network, signal to the second layer: a set of neurons that reside in a part of the brain called the antennal lobe. In the antennal lobe, sensory neurons that share the same receptor converge onto the same second-layer neuron. “They’re very choosy,” Yang says. “They don’t receive any input from neurons expressing other receptors.” Because it has fewer neurons than the first layer, this part of the network is considered a compression layer. These second-layer neurons, in turn, signal to a larger set of neurons in the third layer. Puzzlingly, those connections appear to be random.
For Yang, a computational neuroscientist, and Columbia University graduate student Peter Yiliu Wang, this knowledge of the fly’s olfactory system represented a unique opportunity. Few parts of the brain have been mapped as comprehensively, and that has made it difficult to evaluate how well certain computational models represent the true architecture of neural circuits, they say.
Building an artificial smell network
Neural networks, in which artificial neurons rewire themselves to perform specific tasks, are computational tools inspired by the brain. They can be trained to pick out patterns within complex datasets, making them valuable for speech and image recognition and other forms of artificial intelligence. There are hints that the neural networks that do this best replicate the activity of the nervous system. But, says Wang, who is now a postdoc at Stanford University, differently structured networks could generate similar results, and neuroscientists still need to know whether artificial neural networks reflect the actual structure of biological circuits. With comprehensive anatomical data about fruit fly olfactory circuits, he says, “We’re able to ask this question: Can artificial neural networks truly be used to study the brain?”
Collaborating closely with Columbia neuroscientists Richard Axel and Larry Abbott, Yang and Wang constructed a network of artificial neurons comprising an input layer, a compression layer, and an expansion layer — just like the fruit fly olfactory system. They gave it the same number of neurons as the fruit fly system, but no inherent structure: connections between neurons would be rewired as the model learned to classify odors.
The scientists asked the network to assign data representing different odors to categories, and to correctly categorize not just single odors, but also mixtures of odors. This is something that the brain’s olfactory system is uniquely good at, Yang says. If you combine the scents of two different apples, he explains, the brain still smells apple. In contrast, if two photographs of cats are blended pixel by pixel, the brain no longer sees a cat. This ability is just one feature of the brain’s odor-processing circuits, but captures the essence of the system, Yang says.
It took the artificial network only minutes to organize itself. The structure that emerged was stunningly similar to that found in the fruit fly brain. Each neuron in the compression layer received inputs from a particular type of input neuron and connected, seemingly randomly, to multiple neurons in the expansion layer. What’s more, each neuron in the expansion layer received connections, on average, from six compression-layer neurons — exactly as occurs in the fruit fly brain.
“It could have been one, it could have been 50. It could have been anywhere in between,” Yang says. “Biology finds six, and our network finds about six as well.” Evolution found this organization through random mutation and natural selection; the artificial network found it through standard machine learning algorithms.
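The layered architecture described above can be illustrated with a minimal sketch. The layer sizes below (50 receptor types, 50 compression units, 2,500 expansion units) and the hand-wired six-input connectivity are assumptions drawn loosely from the fly numbers mentioned in this article; in the actual study this structure emerged from training rather than being built in, and this sketch only shows a forward pass through such a network.

```python
import numpy as np

rng = np.random.default_rng(0)

# Layer sizes loosely modeled on the fly olfactory system
# (assumed round numbers for illustration, not the paper's exact setup).
N_RECEPTOR, N_COMPRESS, N_EXPAND = 50, 50, 2500
K = 6  # each expansion-layer unit samples ~6 compression units, as in the fly

# Compression layer: each unit pools input from exactly one receptor type
# (the "choosy" one-to-one wiring described above), here an identity map.
W_compress = np.eye(N_COMPRESS, N_RECEPTOR)

# Expansion layer: sparse, random connectivity with exactly K inputs per unit.
W_expand = np.zeros((N_EXPAND, N_COMPRESS))
for i in range(N_EXPAND):
    W_expand[i, rng.choice(N_COMPRESS, size=K, replace=False)] = 1.0

def forward(odor):
    """Propagate a receptor-activation vector through both layers."""
    compressed = W_compress @ odor
    expanded = np.maximum(W_expand @ compressed, 0.0)  # ReLU nonlinearity
    return expanded

odor = rng.random(N_RECEPTOR)  # a single synthetic "odor" pattern
activity = forward(odor)       # 2,500-dimensional expansion-layer response
```

In a trained model, a readout layer on top of the expansion activity would assign odors (and mixtures) to categories; here the point is only the input/compression/expansion shape of the circuit.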
The surprising convergence provides strong support that the brain circuits that interpret olfactory information are optimally organized for their task, he says. Now, researchers can use the model to probe that structure further, examining how the network evolves under different conditions and manipulating the circuitry in ways that cannot be done experimentally.
Four members of the MIT community have been elected fellows of the American Physical Society for 2021. The APS fellowship, created in 1921, recognizes members of the physics community who have contributed to advances in physics through original research, innovative applications, teaching, and leadership.
Lydia Bourouiba is a physical applied mathematician and associate professor in the MIT Institute for Medical Engineering and Science, the Department of Mechanical Engineering, and the Department of Civil and Environmental Engineering, where she founded and directs the Fluid Dynamics of Disease Transmission Laboratory. She joined the MIT faculty in 2014. Bourouiba was selected by APS’s Division of Fluid Dynamics for “fundamental work in quantitatively elucidating the mechanisms of droplet impact and fragmentation and for pioneering a new field at the intersection of fluid dynamics and transmission of respiratory and foodborne pathogens with clear and tangible contributions to public health.”
Hong Liu is a professor of physics and is a member of the MIT Center for Theoretical Physics. He came to MIT in 2003. Liu was selected by APS’s Division of Particles and Fields for “new discoveries in string theory and the application of string theoretic methods to understanding quark-gluon plasma and its probes in heavy ion collisions, out-of-equilibrium dynamics and equilibration, non-Fermi liquids, black holes, quantum entanglement, and hydrodynamics.”
Thomas Peacock is a professor in mechanical engineering. He joined the MIT faculty in 2000 and now directs the Environmental Dynamics Laboratory. Peacock was selected by APS’s Division of Fluid Dynamics for “pioneering investigations into the dynamics of internal waves and internal tides in the ocean using imaginative laboratory experiments and field studies, for the identification of Lagrangian coherent structures in turbulent flow, and the application of fluid mechanics to deep-sea mining.”
Lindley Winslow is an associate professor of physics and holds the Jerrold R. Zacharias Career Development Professorship. She first came to MIT in 2008 as a postdoc and is now a member of the Laboratory for Nuclear Science and the MIT Statistics and Data Science Center. Winslow was selected by APS’s Division of Particles and Fields for “leadership in the search for axion-like particles that may be dark matter candidates, and for the establishment of the groundbreaking ABRACADABRA detector for this search, and also for valuable detector development for the field of neutrinoless double beta decay.”
Everyone knows the shortest distance between two points is a straight line. However, when you’re walking along city streets, a straight line may not be possible. How do you decide which way to go?
A new MIT study suggests that our brains are actually not optimized to calculate the so-called “shortest path” when navigating on foot. Based on a dataset of more than 14,000 people going about their daily lives, the MIT team found that instead, pedestrians appear to choose paths that seem to point most directly toward their destination, even if those routes end up being longer. They call this the “pointiest path.”
This strategy, known as vector-based navigation, has also been seen in studies of animals, from insects to primates. The MIT team suggests vector-based navigation, which requires less brainpower than actually calculating the shortest route, may have evolved to let the brain devote more power to other tasks.
“There appears to be a tradeoff that allows computational power in our brain to be used for other things — 30,000 years ago, to avoid a lion, or now, to avoid a perilous SUV,” says Carlo Ratti, a professor of urban technologies in MIT’s Department of Urban Studies and Planning and director of the Senseable City Laboratory. “Vector-based navigation does not produce the shortest path, but it’s close enough to the shortest path, and it’s very simple to compute it.”
Ratti is the senior author of the study, which appears today in Nature Computational Science. Christian Bongiorno, an associate professor at Université Paris-Saclay and a member of MIT’s Senseable City Laboratory, is the study’s lead author. Joshua Tenenbaum, a professor of computational cognitive science at MIT and a member of the Center for Brains, Minds, and Machines and the Computer Science and Artificial Intelligence Laboratory (CSAIL), is also an author of the paper.
Twenty years ago, while a graduate student at Cambridge University, Ratti walked the route between his residential college and his departmental office nearly every day. One day, he realized that he was actually taking two different routes — one on the way to the office and a slightly different one on the way back.
“Surely one route was more efficient than the other, but I had drifted into adopting two, one for each direction,” Ratti says. “I was consistently inconsistent, a small but frustrating realization for a student devoting his life to rational thinking.”
At the Senseable City Laboratory, one of Ratti’s research interests is using large datasets from mobile devices to study how people behave in urban environments. Several years ago, the lab acquired a dataset of anonymized GPS signals from cell phones of pedestrians as they walked through Boston and Cambridge, Massachusetts, over a period of one year. Ratti thought that these data, which included more than 550,000 paths taken by more than 14,000 people, could help to answer the question of how people choose their routes when navigating a city on foot.
The research team’s analysis of the data showed that instead of choosing the shortest routes, pedestrians chose routes that were slightly longer but minimized their angular deviation from the destination. That is, they chose paths that allowed them to face their endpoint more directly as they started the route, even if a path that began by heading more to the left or right might actually have ended up being shorter.
“Instead of calculating minimal distances, we found that the most predictive model was not one that found the shortest path, but instead one that tried to minimize angular displacement — pointing directly toward the destination as much as possible, even if traveling at larger angles would actually be more efficient,” says Paolo Santi, a principal research scientist in the Senseable City Lab and at the Italian National Research Council, and a corresponding author of the paper. “We have proposed to call this the pointiest path.”
This was true for pedestrians in Boston and Cambridge, which have a convoluted network of streets, and in San Francisco, which has a grid-style street layout. In both cities, the researchers also observed that people tended to choose different routes when making a round trip between two destinations, just as Ratti did back in his graduate school days.
“When we make decisions based on angle to destination, the street network will lead you to an asymmetrical path,” Ratti says. “Based on thousands of walkers, it is very clear that I am not the only one: Human beings are not optimal navigators.”
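The two strategies the study contrasts can be compared on a toy street network. The graph, coordinates, and node names below are invented for illustration, and the greedy rule ("always take the street that points most directly at the destination") is a simplified reading of vector-based navigation, not the study's actual predictive model.

```python
import heapq
import math

# A made-up street network: node -> (x, y) coordinates, plus adjacency.
coords = {"S": (0, 0), "A": (3, 0.3), "B": (2, 2),
          "C": (5, 3), "D": (10, 0)}
edges = {"S": ["A", "B"], "A": ["S", "C"], "B": ["S", "D"],
         "C": ["A", "D"], "D": ["B", "C"]}

def dist(u, v):
    (x1, y1), (x2, y2) = coords[u], coords[v]
    return math.hypot(x2 - x1, y2 - y1)

def angle_to_dest(u, v, dest):
    """Absolute angle between the step u->v and the straight line u->dest."""
    (ux, uy), (vx, vy), (dx, dy) = coords[u], coords[v], coords[dest]
    step = math.atan2(vy - uy, vx - ux)
    beeline = math.atan2(dy - uy, dx - ux)
    return abs((step - beeline + math.pi) % (2 * math.pi) - math.pi)

def pointiest_path(start, dest):
    """Greedy vector-based navigation: minimize angular deviation at each node."""
    path, node = [start], start
    while node != dest:
        options = [v for v in edges[node] if v not in path]
        node = min(options, key=lambda v: angle_to_dest(node, v, dest))
        path.append(node)
    return path

def shortest_path(start, dest):
    """Dijkstra's algorithm over the same network, for comparison."""
    queue, settled = [(0.0, start, [start])], set()
    while queue:
        d, node, path = heapq.heappop(queue)
        if node == dest:
            return path
        if node in settled:
            continue
        settled.add(node)
        for v in edges[node]:
            if v not in settled:
                heapq.heappush(queue, (d + dist(node, v), v, path + [v]))

def length(path):
    return sum(dist(u, v) for u, v in zip(path, path[1:]))

pointy = pointiest_path("S", "D")   # starts toward A (smallest angle)
short = shortest_path("S", "D")     # actually shorter via B
```

On this network the greedy walker heads toward the node that nearly lines up with the destination and ends up on a longer route than Dijkstra's, mirroring the asymmetry the study observed in real pedestrian traces.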
Moving around in the world
Studies of animal behavior and brain activity, particularly in the hippocampus, have also suggested that the brain’s navigation strategies are based on calculating vectors. This type of navigation is very different from the computer algorithms used by your smartphone or GPS device, which can calculate the shortest route between any two points nearly flawlessly, based on the maps stored in their memory.
Without access to those kinds of maps, the animal brain has had to come up with alternative strategies to navigate between locations, Tenenbaum says.
“You can’t have a detailed, distance-based map downloaded into the brain, so how else are you going to do it? The more natural thing might be to use information that’s more available to us from our experience,” he says. “Thinking in terms of points of reference, landmarks, and angles is a very natural way to build algorithms for mapping and navigating space based on what you learn from your own experience moving around in the world.”
“As smartphone and portable electronics increasingly couple human and artificial intelligence, it is becoming increasingly important to better understand the computational mechanisms used by our brain and how they relate to those used by machines,” Ratti says.
The research was funded by MIT Senseable City Lab Consortium; MIT’s Center for Brains, Minds, and Machines; the National Science Foundation; the MISTI/MITOR fund; and the Compagnia di San Paolo.
Nestled between buildings 12, 13, 24, and 31 is the North Corridor, an area students have dubbed the “Outfinite,” where members of the MIT community gathered for an Institute community social hosted by President L. Rafael Reif to welcome MIT’s new chancellor, Melissa Nobles. After about 18 months of virtual Zoom meetings, for many it was their first time seeing and reconnecting with friends and colleagues.
It was a sunny and picturesque fall afternoon of smiles, laughter, and in-person conversations that filled MIT’s once-silent campus spaces. Hundreds of students, staff, and faculty enjoyed autumnal snacks and light refreshments while taking the opportunity to meet the new chancellor. Chancellor Nobles greeted visitors with elbow bumps and spent the afternoon chatting with members of the community.
“It was wonderful to catch up with so many members of our community at the celebration,” Nobles says. “I am honored to be serving MIT in this new role, and I very much look forward to working alongside our amazing students and the wonderful teams throughout the Office of the Chancellor to educate the whole student and to deepen the meaning of an MIT education.”
Nobles is a professor of political science and served as the Kenan Sahin Dean of the School of Humanities, Arts, and Social Sciences from 2015 to 2021. She became chancellor on Aug. 18. In her new role, Nobles oversees all aspects of student life and learning at MIT, and also plays a leading role in strategic planning, faculty appointments, resource development, and campus planning activities.
Have a creative photo of campus life you'd like to share? Submit it to Scene at MIT.
On Saturday morning, NASA’s Lucy spacecraft launched from Cape Canaveral Air Force Station in Florida, beginning a 12-year, nearly 4-billion-mile mission to explore some of the oldest objects in the solar system. Named after the famous Australopithecus fossil “Lucy,” the spacecraft will make two slingshot trips around Earth before heading toward a cluster of asteroids that share Jupiter’s orbit, called the Trojan asteroids. These are believed to be nearly as old as the solar system itself.
Through imaging and spectral mapping, the spacecraft will give scientists the first close-up view of the topography and chemical composition of the Trojan asteroids, which could offer insights into the chemistry of the early solar system, how the planets formed, and the origin of the organic molecules that enable life.
Cathy Olkin ’88, PhD ’96, who received her bachelor’s degree from the Department of Aeronautics and Astronautics (AeroAstro) and her doctoral degree from the Department of Earth, Atmospheric and Planetary Sciences (EAPS), is second-in-command as the deputy principal investigator on the mission. While she was busy preparing the spacecraft for launch, project scientist Richard Binzel, professor of planetary sciences in EAPS with a joint appointment in AeroAstro, described the goals of the Lucy mission.
Q: What are the roots of the Lucy mission? And how long has it taken to get to this moment?
A: The Lucy mission itself has been about a five-year effort to go from the first proposal to the launch pad. But the story goes back many decades, trying to understand these objects out at the distance of Jupiter that we call the Trojan asteroids. They're asteroids that are stuck in a gravitational tug of war between the sun and Jupiter itself at what we call the Lagrange points, where the gravitational tug of the sun is equal to the gravitational tug of Jupiter. Once objects fall into that zone, they’re stable forever. So we think the Trojan asteroids are some of the earliest pieces of the formation of our solar system — we call them fossils of the solar system. And that's why we named the mission Lucy, after the Australopithecus fossil.
We think the Trojan asteroids date back to the very beginning of our solar system 4.56 billion years ago, which is older than any sample we can get from the Earth and any sample we've ever brought back from the moon. By studying the Trojan asteroids, we think we will be looking at some of the earliest pieces of the building blocks of planets.
Q: What are some of the outstanding questions that the Lucy mission expects to help answer?
A: We would like to know what the chemistry of the early solar system, particularly the organics, was like. Where did the organics, basically the carbon of life, come from? What was its earliest form? The Trojan asteroids are special because at Jupiter's distance most of the early chemistry is still literally frozen in time as it would have been at the beginning of our solar system. Their location, so much farther from the sun, is far colder than the Earth's, so essentially we think we are looking at pieces that have been frozen in time, not only in physical form but also chemically, since the very beginning.
For example, we think the earliest forms of water might be preserved in these objects. Once an object in space comes in close to the sun, about the Earth’s distance, any water present begins to evaporate. But we think the Trojan asteroids have been cold enough that the original water they might contain is still there, frozen, intact, and ready for us to explore and evaluate.
Q: What will the spacecraft's life look like from launch until it completes its mission?
A: Lucy is on an amazing race track across the solar system to visit the Trojan asteroids.
About a year from now, it will swing by the Earth to pick up a little bit of velocity. And then it does another Earth swing-by in late 2024. And that last swing-by of the Earth will put it on a path out towards the Trojan asteroids. We have to build up speed and momentum to get that far away, so we use Earth’s gravity to assist.
We will be out in the asteroid belt by 2025. First we will go by a small asteroid named “Donald Johanson.” Donald Johanson discovered the Lucy Australopithecus fossil, and when an MIT graduate student discovered this unnamed asteroid on our flight path, we were able to get it named after him.
Then, after we pass by Donald Johanson in the main belt, we will finally reach the Trojan asteroids six years from now, in August 2027. We will head first to one of the clouds of Trojan asteroids. These orbit 60 degrees in front of and 60 degrees behind Jupiter. And we’ll be in the leading group of Trojan asteroids, at what we call the L4 Lagrange point, in 2027. We have two encounters in 2027, a third encounter in April 2028, and a fourth encounter in November 2028. And then in 2030, we swing back around the Earth to go to the other side of Jupiter. We get to the cloud on the other side of Jupiter in 2033.
So, if you look at a map of the trajectory of the Lucy spacecraft, it is on a wild and crazy ride to get to both sides of Jupiter over the next 13 years or so.
Each of these objects is like a time capsule. And we'd like to see just how far back each time capsule is pushing our knowledge and understanding of how the Earth and planets came to be.
Q: How do you feel about this launch after so many years of studying the Trojan asteroids and preparing for this mission?
A: I started studying the Trojan asteroids myself back in the 1980s — in fact, the first paper I published on Trojan asteroids was with an Undergraduate Research Opportunity Program (UROP) student. It's almost surreal to think that we could go from seeing these objects as tiny points of light through a telescope to revealing them as real geological and geophysical worlds. And it takes decades. It takes a whole career to go from telescopic pinpoints to real, tangible objects. So in some ways, it's surreal. But in most ways, I am simply in awe of what this team has accomplished in a very challenging last few years.
Marsh plants, which are ubiquitous along the world’s shorelines, can play a major role in mitigating the damage to coastlines as sea levels rise and storm surges increase. Now, a new MIT study provides greater detail about how these protective benefits work under real-world conditions shaped by waves and currents.
The study combined laboratory experiments using simulated plants in a large wave tank along with mathematical modeling. It appears in the journal Physical Review Fluids, in a paper by former MIT visiting doctoral student Xiaoxia Zhang, now a postdoc at Dalian University of Technology, and professor of civil and environmental engineering Heidi Nepf.
It’s already clear that coastal marsh plants provide significant protection from surges and devastating storms. For example, it has been estimated that the damage caused by Hurricane Sandy was reduced by $625 million thanks to the damping of wave energy provided by extensive areas of marsh along the affected coasts. But the new MIT analysis incorporates details of plant morphology, such as the number and spacing of flexible leaves versus stiffer stems, and the complex interactions of currents and waves that may be coming from different directions.
This level of detail could enable coastal restoration planners to determine the area of marsh needed to mitigate expected amounts of storm surge or sea-level rise, and to decide which types of plants to introduce to maximize protection.
“When you go to a marsh, you often will see that the plants are arranged in zones,” says Nepf, who is the Donald and Martha Harleman Professor of Civil and Environmental Engineering. “Along the edge, you tend to have plants that are more flexible, because they are using their flexibility to reduce the wave forces they feel. In the next zone, the plants are a little more rigid and have a few more leaves.”
As the zones progress, the plants become stiffer, leafier, and more effective at absorbing wave energy thanks to their greater leaf area. The new modeling done in this research, which incorporated work with simulated plants in the 24-meter-long wave tank at MIT’s Parsons Lab, can enable coastal planners to take these kinds of details into account when planning protection, mitigation, or restoration projects.
“If you put the stiffest plants at the edge, they might not survive, because they’re feeling very high wave forces. By describing why Mother Nature organizes plants in this way, we can hopefully design a more sustainable restoration,” Nepf says.
Once established, the marsh plants provide a positive feedback cycle that helps to not only stabilize but also build up these delicate coastal lands, Zhang says. “After a few years, the marsh grasses start to trap and hold the sediment, and the elevation gets higher and higher, which might keep up with sea level rise,” she says.
Awareness of the protective effects of marshland has been growing, Nepf says. For example, the Netherlands has been restoring lost marshland outside the dikes that surround much of the nation’s agricultural land, finding that the marsh can protect the dikes from erosion; the marsh and dikes work together much more effectively than the dikes alone at preventing flooding.
But most such efforts so far have been largely empirical, trial-and-error plans, Nepf says. Now, planners could take advantage of this modeling to determine just how much marshland, with what types of plants, would be needed to provide the desired level of protection.
It also provides a more quantitative way to estimate the value provided by marshes, she says. “It could allow you to more accurately say, ‘40 meters of marsh will reduce waves this much and therefore will reduce overtopping of your levee by this much.’ Someone could use that to say, ‘I’m going to save this much money over the next 10 years if I reduce flooding by maintaining this marsh.’ It might help generate some political motivation for restoration efforts.”
Nepf herself is already trying to get some of these findings included in coastal planning processes. She serves on a practitioner panel led by Chris Esposito of the Water Institute of the Gulf, which serves the storm-battered Louisiana coastline. “We’d like to get this work into the coastal simulations that are used for large-scale restoration and coastal planning,” she says.
"Understanding the wave damping process in real vegetation wetlands is of critical value, as it is needed in the assessment of the coastal defense value of these wetlands," says Zhan Hu, an associate professor of marine sciences at Sun Yat-Sen University, who was not associated with this work. "The challenge, however, lies in the quantitative representation of the wave damping process, in which many factors are at play, such as plant flexibility, morphology, and coexisting currents."
The new study, Hu says, "neatly combines experimental findings and analytical modeling to reveal the impact of each factor in the wave damping process. ... Overall, this work is a solid step forward toward a more accurate assessment of wave damping capacity of real coastal wetlands, which is needed for science-based design and management of nature-based coastal protection."
The work was partly supported by the National Science Foundation and the China Scholarship Council.
In the Indian state of Karnataka, many smallholder farmers have traditionally sold their products to intermediaries — wholesale traders who turn around and resell the goods for a quick profit. Much of the dealing between farmers and those traders has occurred locally, and farmers do not typically know what should be a “fair” price for their products.
Recognizing that these farmers were not getting a reasonable share of the value of the products they grow, with many still living in poverty, the Karnataka state government wanted to create more transparent markets. They initiated a new digital market platform to connect over 150 previously isolated physical markets and unify all trading. To further improve the platform, the government started a collaboration with MIT Associate Professor Karen Zheng, an operations management scholar whose work often relies on field research to generate new, data-driven insights, especially about supply chains.
The first step in the collaboration was a rigorous empirical assessment of how much the platform has increased prices for farmers. The analysis revealed that although prices increased significantly for some products, the prices of others practically stayed the same. So Zheng, working with several colleagues and students, began looking for solutions to boost prices for these products. Eventually they designed a new two-stage auction, with a second round of bidding added on for the highest-offering traders from the first round, and ran a pilot of the new auction for a major lentils market.
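The mechanics of the two-stage design can be illustrated with a minimal sketch: traders submit sealed bids, the top few advance to a second round where they may raise their offers, and the highest final bid wins. The trader names, number of finalists, and random-markup model below are all hypothetical stand-ins, not the Karnataka platform's actual rules.

```python
import random

def two_stage_auction(bids, finalists=3):
    """Sketch of a two-stage auction: sealed first round, then a second
    round of bidding among the top first-round bidders."""
    # Stage 1: rank sealed bids and shortlist the top few traders
    stage1 = sorted(bids.items(), key=lambda kv: kv[1], reverse=True)
    shortlist = stage1[:finalists]
    floor = shortlist[-1][1]  # lowest qualifying bid from round one

    # Stage 2: each finalist may raise its bid (modeled here as a
    # random markup of up to 8 percent -- purely illustrative)
    final_bids = {trader: bid * (1 + random.uniform(0, 0.08))
                  for trader, bid in shortlist}
    winner = max(final_bids, key=final_bids.get)
    return winner, final_bids[winner], floor

random.seed(0)
bids = {"trader_a": 4200, "trader_b": 4350, "trader_c": 4100, "trader_d": 4500}
winner, price, floor = two_stage_auction(bids)
```

Because finalists can only bid upward in the second round, the clearing price can never fall below the best first-round offer, which is one intuition for why the redesign raised average prices.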
The result? Over a three-month period in the spring of 2019, average prices increased by about 5 percent, benefitting more than 10,000 farmers who traded under the new auction design, when compared to a similar market using the legacy auction format.
“Implementing the two-stage auction really led to significant improvements,” says Zheng, the George M. Bunker Associate Professor at the MIT Sloan School of Management. “It is equally exciting to see that the same improvements persist into the next selling season.”
Agricultural markets are hardly the only topic Zheng studies. She examines a wide variety of supply-chain issues, including matters of corporate social responsibility. But the Karnataka project does reflect the kind of work Zheng does more broadly, in that her research takes a multimethod approach, is based on extensive field work, and aims to have an applied impact.
“This whole process is really important to me as a researcher and reflects my perspective,” Zheng says. “I want to know how I can model a problem correctly and shape my research to capture what is happening in the real world. And then you bring practical solutions back to the field and evaluate their outcome. I really enjoy this entire cycle of doing research.”
For her research and teaching, Zheng was granted tenure at MIT last year.
Zheng, who grew up in China, credits her parents for fostering an environment focused on education.
“They put a lot of their influence and hope in me, but in a good way, not in a very pressurized way,” Zheng says. She attended Tsinghua University in Beijing, receiving her undergraduate diploma as well as a master’s degree in the field of automation — a blend of computer science, electrical engineering, and control theory.
Zheng then earned a PhD in management science and engineering at Stanford University, fulfilling a longstanding ambition of studying abroad.
“At that time, it wasn’t that I had determined I had to go into academia and become a professor,” Zheng says. “I always just wanted the experience.”
Zheng might have ultimately gone into the private sector, but at Stanford she became increasingly interested in the research challenges and practical applications of operations management.
“I like the applied aspect of math — to develop sensible mathematical models to capture practical challenges, and then solve them to create solutions that actually drive changes,” Zheng says. “I think that’s why I settled on operations management. I like to apply math to figure out how we can improve the world.”
Zheng received her PhD from Stanford in 2011 and joined the MIT faculty later that same year. She has remained at the Institute ever since. Having now gone through the rigors of the tenure track at MIT, Zheng adds that she appreciates “the transparency they [her senior colleagues] offered me throughout the process, and the frequent feedback on what I was doing. I’ve been very thankful for all the support I’ve received.”
Visibility, transparency, and responsibility
Zheng’s current and future research has multiple threads. While she continues to examine supply chain efficiencies and logistics in agriculture and other sectors, she also continues to pursue projects regarding environmental and social responsibility in supply chains.
“A lot of my work looks at social responsibility, especially in labor practices,” Zheng says. Referring to the 2013 Dhaka garment factory collapse in Bangladesh and the string of suicides at a Foxconn plant in China, she adds, “All of these tragedies were part of the motivation.”
A starting point for her research in this domain, Zheng says, is the lack of transparency in global supply chains. Even multinational brands cannot usually track the sourcing of many of the materials and parts in their products.
“It’s not only that consumers don’t have visibility, but companies often don’t know where products come from, how things are made,” Zheng says. “Traditionally, they at best know their first tier of suppliers, and have little knowledge beyond that.”
Using modeling, behavioral experiments, surveys, and on-the-ground studies of companies, Zheng has been studying the potential benefits of greater visibility into companies’ global supply chains. For one thing, her research has shown, creating transparency into the social responsibility practices in supply chains — outdoor wear firm Patagonia is a well-known example — pays some obvious dividends.
“If I’m Patagonia and I tell you 80 percent of my suppliers are compliant and another 20 percent need work, versus another company with no history of supply chain monitoring but making a similar claim, what is the reaction of stakeholders?” Zheng says. “We find differences, in that companies with better visibility into their supply chains gain more trust from their stakeholders.”
Ultimately, the goal of this area of Zheng’s research is to motivate firms and their suppliers to develop end-to-end transparency in the supply chain and to adopt more responsible practices, both for the environment and for the people.
“You need a coordinated effort from all companies,” Zheng says. “Look at it from the suppliers’ perspective. If I’m working with 100 companies and only one is telling me to change my practices, I’m not going to change as much as if 99 tell me to change. The bigger challenge is how to organize that coordinated industry-level, and even cross-industry-level, effort.”
That’s an ambitious goal, but as Zheng knows, creating work that can have an impact is a long-term endeavor that blends theory, field testing, and action.
“We have a theory of change,” Zheng says. “Can we create a system, together, that will generate value for all? I believe the answer is yes, though not an easy yes, and I am excited to be part of the MIT family to contribute to that effort.”
Mark Bear, Picower Professor of Neuroscience at MIT, recalls the “eureka moment” 20 years ago when he realized that a severe developmental brain disorder — fragile X syndrome — might be treated with drugs that inhibit a neurotransmitter receptor called mGluR5. The idea, that mGluR5 stimulates excessive protein synthesis in fragile X neurons that disrupts their functions, became well-validated by experiments in his lab and others worldwide using several animal models of the disease.
“There was great anticipation that this would be a breakthrough treatment for this disease,” says Bear, a faculty member of the Picower Institute for Learning and Memory and the Department of Brain and Cognitive Sciences. “Thus, it was a profound disappointment when the first human clinical trials using mGluR5 negative modulators failed to show a benefit.”
This finding led many to question the theory or the usefulness of the animal models, Bear acknowledges. But now a new study in mice provides substantial evidence that this promising treatment for fragile X syndrome missed the mark because the brain builds up resistance, or “tolerance,” to it. Importantly, the research also points to several new therapeutic opportunities that could still turn the tide against fragile X, the most common inherited form of autism.
Bear and his team led by postdoc David Stoppel showed that giving just a few doses early in life, while the brain is still developing, and then not giving further doses as the subjects got older, could produce lasting benefits in cognitive ability. That finding suggests that the timing and duration of mGluR5 inhibition are more important than previously recognized.
“The development of acquired treatment resistance to a medication is nothing new,” says Bear, senior author of the new paper in Frontiers in Psychiatry. “The fact that it happens doesn’t mean that, therefore, you give up all hope. It means that you have to be aware of it.”
In addition to the strategy of administering mGluR5 inhibitors at a young age and then stopping, the study also implies that patients could benefit if dosing were structured with breaks to prevent a buildup of resistance, Bear says. Moreover, the study also suggests that amid treatment resistance, fragile X mice resumed synthesis of an unknown protein that leads to symptoms. Identifying and targeting that protein, Bear adds, could also be a fertile new avenue for drug development.
These new findings follow on a 2020 study in Science Translational Medicine (STM) by Bear’s lab and scientists at the Broad Institute of MIT and Harvard in which they developed a compound, BRD0705, that acts downstream in the molecular pathway between mGluR5 and protein synthesis. BRD0705 did not incur treatment resistance in mature fragile X mice.
A hard lesson
Fragile X syndrome is caused by a mutation in which repeats of the nucleotides CGG disable a gene’s ability to make the protein FMRP. In the absence of FMRP, neurons exhibit excessive protein synthesis, degraded circuit connections called synapses, and hyperexcitability leading to symptoms such as cognitive disability. In the early 2000s, Bear’s lab recognized that inhibiting the mGluR5 receptor in brain cells could prevent the problems with protein synthesis and treat many fragile X symptoms. After successful animal tests, the treatment was tried in clinical trials.
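The CGG-repeat mechanism lends itself to a small illustration. Clinically, FMR1 repeat counts are commonly binned into normal, intermediate, premutation, and full-mutation ranges, with roughly 200 repeats as the threshold at which the gene is silenced and FMRP is lost; the exact cutoffs below follow commonly cited values but vary slightly by source, so treat this as a sketch rather than diagnostic logic.

```python
def classify_fmr1(cgg_repeats):
    """Classify an FMR1 CGG-repeat count using commonly cited
    clinical thresholds (illustrative; cutoffs vary by source)."""
    if cgg_repeats > 200:
        # Gene is silenced; no FMRP is made, causing fragile X syndrome
        return "full mutation (fragile X syndrome)"
    if cgg_repeats >= 55:
        return "premutation"
    if cgg_repeats >= 45:
        return "intermediate"
    return "normal"

print(classify_fmr1(30))   # typical repeat count
print(classify_fmr1(300))  # expansion past the silencing threshold
```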
One participant in the trial of the drug mavoglurant was Andy Tranfaglia of Massachusetts. At the time of treatment eight years ago, he was 24, says his father, Dr. Michael Tranfaglia, medical director of FRAXA Research Foundation, an organization working to find a cure for the disorder.
“Andy had an almost miraculous response to the drug and showed dramatic improvement in virtually all areas of function, behaviorally and cognitively, but he also had significant improvements in motor function and a complete resolution of lifelong, severe gastroesophageal reflux disease (GERD),” Tranfaglia says. “Unfortunately, after three to four months, the benefits of the treatment began to wane, and continued to decrease over time. The re-emergence of his GERD closely paralleled the return of his other symptoms, though he still showed some benefit after eight months, when the trials ended. This strongly suggested to us the possibility of tolerance to this treatment strategy.”
Indeed, a 2005 study in the journal Neuropharmacology by Tranfaglia and researchers at Columbia University found a treatment resistance effect in mature fragile X mice in a common test of an mGluR5 inhibitor: whether audio tones led to seizures. Until recently, though, the evidence that patients were acquiring treatment resistance wasn’t abundant, Bear says.
In the new study, Bear’s lab replicated the 2005 findings and showed that treatment resistance emerges in two other assays as well. After initial doses of the mGluR5 inhibitor CTEP caused improvements in neural hyperexcitability in the visual cortex, fragile X mice lost that benefit with chronic dosing over the next few days. Fragile X mice also gave up initial progress after chronic dosing in tamping down protein synthesis in a brain region called the hippocampus that is central for memory formation. The results therefore validate the treatment resistance hypothesis by showing it affecting three different tests that involve three different parts of the brain.
Routing around resistance
“This study suggests answers to important questions from the failed fragile X mGluR5 trials and about the preclinical research that inspired them,” Stoppel says. “It also highlights the kinds of experiments that are essential to consider as other therapeutic strategies are developed for fragile X or other neurodevelopmental disorders. Defining treatment resistance is just the first step, however. Our next goal is to uncover its mechanism and then generate strategies to bypass it altogether. We have some exciting preliminary hypotheses as this work begins.”
Given the evidence that treatment resistance can build, the researchers say, a more effective approach to sustaining benefits from the drugs may be to give patients breaks between doses to allow resistance to subside.
The experiments showing treatment resistance also yielded another important result. In each case, researchers were able to restore the benefits of the medication by adding a drug called CHX, which broadly suppresses protein synthesis. That finding suggests that amid resistance the fragile X mice resumed producing a protein that restored disease symptoms. Bear says a key next step for his lab will be to try to identify that protein.
Treat early, then stop?
The study also followed up on another finding in STM in 2019 by the lab of Peter Kind at the University of Edinburgh, which found that administering the drug lovastatin appeared to rescue memory formation and extinction in rats without any signs of treatment resistance. Looking at those results — Bear was a co-author — the MIT team focused on how the first dose was administered to the rats at the young age of five weeks, during a “critical period” of brain development. Bear, Stoppel, and their team reasoned that maybe the first dose produced an enduring effect into adulthood by changing the trajectory of development for the better.
In the new study, the MIT scientists treated some fragile X mice with CTEP a few times beginning 28 days after their birth — roughly equivalent to about 10 years old for humans — and left other fragile X mice untreated. Then, after no further treatment, when the mice were 60 days of age, the team administered a memory test in which the rodents were supposed to first learn that an area was associated with a risk of a mild electric shock, and then learn that the risk had abated. Fragile X mice left untreated during their youth showed difficulty with the test, but fragile X mice that were treated with CTEP while young were much more successful.
Bear says these findings are particularly significant because they replicate the results in Kind’s study using a different drug in a different species. They therefore seem more likely to generalize to other mammalian brains, including humans.
In fact, a new clinical trial of an mGluR5 inhibitor made by the drug company Novartis is underway in young children. Bear says the results from his new study make him feel more encouraged about that trial.
In addition to Bear and Stoppel, the paper’s other authors are Patrick McCamphill, Rebecca Senter, and Arnold Heynen.
FRAXA, the National Institutes of Health, and the JPB Foundation funded the research.
Artificial intelligence is transforming industries around the world — and health care is no exception. A recent Mayo Clinic study found that AI-enhanced electrocardiograms (ECGs) have the potential to save lives by speeding diagnosis and treatment in patients with heart failure who are seen in the emergency room.
The lead author of the study is Demilade “Demi” Adedinsewo, a noninvasive cardiologist at the Mayo Clinic who is actively integrating the latest AI advancements into cardiac care and drawing largely on her learning experience with MIT Professional Education.
Identifying AI opportunities in health care
A dedicated practitioner, Adedinsewo is a Mayo Clinic Florida Women's Health Scholar and director of research for the Cardiovascular Disease Fellowship program. Her clinical research interests include cardiovascular disease prevention, women's heart health, cardiovascular health disparities, and the use of digital tools in cardiovascular disease management.
Adedinsewo’s interest in AI emerged toward the end of her cardiology fellowship, when she began learning about its potential to transform the field of health care. “I started to wonder how we could leverage AI tools in my field to enhance health equity and alleviate cardiovascular care disparities,” she says.
During her fellowship at the Mayo Clinic, Adedinsewo began looking at how AI could be used with ECGs to improve clinical care. To determine the effectiveness of the approach, the team retrospectively used deep learning to analyze ECG results from patients with shortness of breath. They then compared the results with the current standard of care — a blood test analysis — to determine if the AI enhancement improved the diagnosis of cardiomyopathy, a condition where the heart is unable to adequately pump blood to the rest of the body. While she understood the clinical implications of the research, she found the AI components challenging.
“Even though I have a medical degree and a master’s degree in public health, those credentials aren’t really sufficient to work in this space,” Adedinsewo says. “I began looking for an opportunity to learn more about AI so that I could speak the language, bridge the gap, and bring those game-changing tools to my field.”
Bridging the gap at MIT
Adedinsewo’s desire to bring together advanced data science and clinical care led her to MIT Professional Education, where she recently completed the Professional Certificate Program in Machine Learning & AI. To date, she has completed nine courses, including AI Strategies and Roadmap.
“All of the courses were great,” Adedinsewo says. “I especially appreciated how the faculty, like professors Regina Barzilay, Tommi Jaakkola, and Stefanie Jegelka, provided practical examples from health care and non–health care fields to illustrate what we were learning.”
Adedinsewo’s goals align closely with those of Barzilay, the AI lead for the MIT Jameel Clinic for Machine Learning in Health. “There are so many areas of health care that can benefit from AI,” Barzilay says. “It’s exciting to see practitioners like Demi join the conversation and help identify new ideas for high-impact AI solutions.”
Adedinsewo also valued the opportunity to work and learn within the greater MIT community alongside accomplished peers from around the world, explaining that she learned different things from each person. “It was great to get different perspectives from course participants who deploy AI in other industries,” she says.
Putting knowledge into action
Armed with her updated AI toolkit, Adedinsewo was able to make meaningful contributions to Mayo Clinic’s research. The team successfully completed and published their ECG project in August 2020, with promising results. In analyzing the ECGs of about 1,600 patients, the AI-enhanced method was both faster and more effective — outperforming the standard blood tests with a performance measure (area under the curve, or AUC) of 0.89 versus 0.80. This improvement could enhance health outcomes by improving diagnostic accuracy and increasing the speed with which patients receive appropriate care.
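For readers unfamiliar with AUC: it equals the probability that a randomly chosen positive case receives a higher risk score than a randomly chosen negative case, so 0.89 versus 0.80 means the AI-enhanced ECG ranks diseased above healthy patients more reliably than the blood test. The sketch below computes AUC from that rank-based definition on a tiny set of made-up scores; the numbers are toy values, not the Mayo Clinic data.

```python
def auc(scores, labels):
    """Area under the ROC curve via the rank-sum (Mann-Whitney)
    identity: the fraction of positive/negative pairs in which the
    positive case scores higher (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical risk scores for six patients (1 = cardiomyopathy)
labels = [1, 1, 1, 0, 0, 0]
model_scores = [0.9, 0.8, 0.45, 0.5, 0.3, 0.1]   # AI-enhanced ECG (toy)
blood_scores = [0.7, 0.4, 0.6, 0.65, 0.2, 0.3]   # blood test (toy)

print(round(auc(model_scores, labels), 2))  # higher: better ranking
print(round(auc(blood_scores, labels), 2))
```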
But the benefits of Adedinsewo’s MIT experience go beyond a single project. Adedinsewo says that the tools and strategies she acquired have helped her communicate the complexities of her work more effectively, extending its reach and impact. “I feel more equipped to explain the research — and AI strategies in general — to my clinical colleagues. Now, people reach out to me to ask, ‘I want to work on this project. Can I use AI to answer this question?’” she says.
Looking to the AI-powered future
What’s next for Adedinsewo’s research? Taking AI mainstream within the field of cardiology. While AI tools are not currently widely used in evaluating Mayo Clinic patients, she believes they hold the potential to have a significant positive impact on clinical care.
“These tools are still in the research phase,” Adedinsewo says. “But I’m hoping that within the next several months or years we can start to do more implementation research to see how well they improve care and outcomes for cardiac patients over time.”
Bhaskar Pant, executive director of MIT Professional Education, says “We at MIT Professional Education feel particularly gratified that we are able to provide practitioner-oriented insights and tools in machine learning and AI from expert MIT faculty to frontline health researchers such as Dr. Demi Adedinsewo, who are working on ways to markedly enhance clinical care and health outcomes in cardiac and other patient populations. This is also very much in keeping with MIT’s mission of 'working with others for the betterment of humankind!'”
In the early solar system, a “protoplanetary disk” of dust and gas rotated around the sun and eventually coalesced into the planets we know today.
A new analysis of ancient meteorites by scientists at MIT and elsewhere suggests that a mysterious gap existed within this disk around 4.567 billion years ago, near the location where the asteroid belt resides today.
The team’s results, appearing today in Science Advances, provide direct evidence for this gap.
“Over the last decade, observations have shown that cavities, gaps, and rings are common in disks around other young stars,” says Benjamin Weiss, professor of planetary sciences in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “These are important but poorly understood signatures of the physical processes by which gas and dust transform into the young sun and planets.”
Likewise, the cause of such a gap in our own solar system remains a mystery. One possibility is that Jupiter may have been an influence. As the gas giant took shape, its immense gravitational pull could have pushed gas and dust toward the outskirts, leaving behind a gap in the developing disk.
Another explanation may have to do with winds emerging from the surface of the disk. Early planetary systems are governed by strong magnetic fields. When these fields interact with a rotating disk of gas and dust, they can produce winds powerful enough to blow material out, leaving behind a gap in the disk.
Regardless of its origins, a gap in the early solar system likely served as a cosmic boundary, keeping material on either side of it from interacting. This physical separation could have shaped the composition of the solar system’s planets. For instance, on the inner side of the gap, gas and dust coalesced into terrestrial planets, including the Earth and Mars, while gas and dust relegated to the farther side of the gap coalesced in icier regions into Jupiter and its neighboring gas giants.
“It’s pretty hard to cross this gap, and a planet would need a lot of external torque and momentum,” says lead author and EAPS graduate student Cauê Borlina. “So, this provides evidence that the formation of our planets was restricted to specific regions in the early solar system.”
Weiss and Borlina’s co-authors include Eduardo Lima, Nilanjan Chatterjee, and Elias Mansbach of MIT; James Bryson of Oxford University; and Xue-Ning Bai of Tsinghua University.
A split in space
Over the last decade, scientists have observed a curious split in the composition of meteorites that have made their way to Earth. These space rocks originally formed at different times and locations as the solar system was taking shape. Those that have been analyzed exhibit one of two isotope combinations. Rarely have meteorites been found to exhibit both — a conundrum known as the “isotopic dichotomy.”
Scientists have proposed that this dichotomy may be the result of a gap in the early solar system’s disk, but such a gap has not been directly confirmed.
Weiss’ group analyzes meteorites for signs of ancient magnetic fields. As a young planetary system takes shape, it carries with it a magnetic field, the strength and direction of which can change depending on various processes within the evolving disk. As ancient dust gathered into grains known as chondrules, electrons within chondrules aligned with the magnetic field in which they formed.
Chondrules can be smaller than the diameter of a human hair, and are found in meteorites today. Weiss’ group specializes in measuring chondrules to identify the ancient magnetic fields in which they originally formed.
In previous work, the group analyzed samples from one of the two isotopic groups of meteorites, known as the noncarbonaceous meteorites. These rocks are thought to have originated in a “reservoir,” or region of the early solar system, relatively close to the sun. Weiss’ group previously identified the ancient magnetic field in samples from this close-in region.
A meteorite mismatch
In their new study, the researchers wondered whether the magnetic field would be the same in the second isotopic, “carbonaceous” group of meteorites, which, judging from their isotopic composition, are thought to have originated farther out in the solar system.
They analyzed chondrules, each measuring about 100 microns, from two carbonaceous meteorites that were discovered in Antarctica. Using the superconducting quantum interference device, or SQUID, a high-precision microscope in Weiss’ lab, the team determined each chondrule’s original, ancient magnetic field.
Surprisingly, they found that the chondrules’ field strength was stronger than that of the closer-in noncarbonaceous meteorites they previously measured. As young planetary systems are taking shape, scientists expect that the strength of the magnetic field should decay with distance from the sun.
In contrast, Borlina and his colleagues found the far-out chondrules had a stronger magnetic field, of about 100 microteslas, compared to a field of 50 microteslas in the closer chondrules. For reference, the Earth’s magnetic field today is around 50 microteslas.
A planetary system’s magnetic field is a measure of its accretion rate, or the amount of gas and dust it can draw into its center over time. Based on the carbonaceous chondrules’ magnetic field, the solar system’s outer region must have been accreting much more mass than the inner region.
Using models to simulate various scenarios, the team concluded that the most likely explanation for the mismatch in accretion rates is the existence of a gap between the inner and outer regions, which could have reduced the amount of gas and dust flowing toward the sun from the outer regions.
“Gaps are common in protoplanetary systems, and we now show that we had one in our own solar system,” Borlina says. “This gives the answer to this weird dichotomy we see in meteorites, and provides evidence that gaps affect the composition of planets.”
This research was supported, in part, by NASA and the National Science Foundation.
The growing popularity of 3D printing for manufacturing all sorts of items, from customized medical devices to affordable homes, has created more demand for new 3D printing materials designed for very specific uses.
To cut down on the time it takes to discover these new materials, researchers at MIT have developed a data-driven process that uses machine learning to optimize new 3D printing materials with multiple characteristics, like toughness and compression strength.
By streamlining materials development, the system lowers costs and lessens the environmental impact by reducing the amount of chemical waste. The machine learning algorithm could also spur innovation by suggesting unique chemical formulations that human intuition might miss.
“Materials development is still very much a manual process. A chemist goes into a lab, mixes ingredients by hand, makes samples, tests them, and comes to a final formulation. But rather than having a chemist who can only do a couple of iterations over a span of days, our system can do hundreds of iterations over the same time span,” says Mike Foshey, a mechanical engineer and project manager in the Computational Design and Fabrication Group (CDFG) of the Computer Science and Artificial Intelligence Laboratory (CSAIL), and co-lead author of the paper.
Additional authors include co-lead author Timothy Erps, a technical associate in CDFG; Mina Konaković Luković, a CSAIL postdoc; Wan Shou, a former MIT postdoc who is now an assistant professor at the University of Arkansas; senior author Wojciech Matusik, professor of electrical engineering and computer science at MIT; and Hanns Hagen Geotzke, Herve Dietsch, and Klaus Stoll of BASF. The research was published today in Science Advances.
In the system the researchers developed, an optimization algorithm performs much of the trial-and-error discovery process.
A material developer selects a few ingredients, inputs details on their chemical compositions into the algorithm, and defines the mechanical properties the new material should have. Then the algorithm increases and decreases the amounts of those components (like turning knobs on an amplifier) and checks how each formula affects the material’s properties, before arriving at the ideal combination.
Then the developer mixes, processes, and tests that sample to find out how the material actually performs. The developer reports the results to the algorithm, which automatically learns from the experiment and uses the new information to decide on another formulation to test.
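The closed loop described above — propose a formulation, test it, feed the result back, propose again — can be sketched in miniature. This is an illustrative hill-climbing sketch, not the team's actual algorithm; the `measure` function is a hypothetical stand-in for the physical mix-and-test step, and the ingredient counts and targets are made up for demonstration.

```python
import random

# Hypothetical stand-in for a lab measurement: scores a formulation
# (fractions of six ingredients) on a single combined objective.
# In the real workflow this step is a physical mix-and-test experiment.
def measure(formulation):
    target = [0.3, 0.2, 0.2, 0.1, 0.1, 0.1]  # illustrative "ideal" mix
    return -sum((f - t) ** 2 for f, t in zip(formulation, target))

def normalize(f):
    """Rescale ingredient amounts so the fractions sum to 1."""
    s = sum(f)
    return [x / s for x in f]

def optimize(n_iters=200, seed=0):
    rng = random.Random(seed)
    # start from a random formulation
    best = normalize([rng.random() for _ in range(6)])
    best_score = measure(best)
    for _ in range(n_iters):
        # "turn the knobs": nudge each ingredient amount slightly
        candidate = normalize([max(1e-6, x + rng.gauss(0, 0.05)) for x in best])
        score = measure(candidate)  # report the experiment back to the loop
        if score > best_score:       # keep the formulation only if it improved
            best, best_score = candidate, score
    return best, best_score
```

In the real system the proposal step is a data-driven optimization algorithm rather than random perturbation, and each "measurement" costs a physical sample, which is why minimizing the number of iterations matters.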
“We think, for a number of applications, this would outperform the conventional method because you can rely more heavily on the optimization algorithm to find the optimal solution. You wouldn’t need an expert chemist on hand to preselect the material formulations,” Foshey says.
The researchers have created a free, open-source materials optimization platform called AutoOED that incorporates the same optimization algorithm. AutoOED is a full software package that also allows researchers to conduct their own optimization.
The researchers tested the system by using it to optimize formulations for a new 3D printing ink that hardens when it is exposed to ultraviolet light.
They identified six chemicals to use in the formulations and set the algorithm’s objective to uncover the best-performing material with respect to toughness, compression modulus (stiffness), and strength.
Maximizing these three properties manually would be especially challenging because they can be conflicting; for instance, the strongest material may not be the stiffest. Using a manual process, a chemist would typically try to maximize one property at a time, resulting in many experiments and a lot of waste.
The algorithm came up with 12 top-performing materials that had optimal tradeoffs of the three different properties after testing only 120 samples.
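With conflicting objectives like toughness, stiffness, and strength, "optimal tradeoffs" typically means a Pareto front: formulations that no other tested sample beats on every property at once. A minimal sketch of that filtering step, with made-up sample values (the actual algorithm and data are not shown in the article):

```python
# Minimal Pareto-front filter over tested samples, each scored on
# (toughness, stiffness, strength); all values here are illustrative.

def dominates(a, b):
    """True if sample a is at least as good as b on every property
    and strictly better on at least one (all properties maximized)."""
    return all(x >= y for x, y in zip(a, b)) and any(x > y for x, y in zip(a, b))

def pareto_front(samples):
    # keep samples that no other sample dominates
    return [s for s in samples
            if not any(dominates(other, s) for other in samples if other is not s)]

tested = [
    (9.0, 2.0, 5.0),   # tough but not stiff
    (3.0, 8.0, 4.0),   # stiff but not tough
    (6.0, 6.0, 6.0),   # balanced
    (2.0, 2.0, 2.0),   # dominated by the balanced sample
]
front = pareto_front(tested)
```

The dominated sample is dropped, while the three mutually non-dominated tradeoffs survive — a small-scale analogue of selecting 12 non-dominated materials from 120 tests.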
Foshey and his collaborators were surprised by the wide variety of materials the algorithm was able to generate, and say the results were far more varied than they expected based on the six ingredients. The system encourages exploration, which could be especially useful in situations when specific material properties can’t be easily discovered intuitively.
Faster in the future
The process could be accelerated even more through the use of additional automation. Researchers mixed and tested each sample by hand, but robots could operate the dispensing and mixing systems in future versions of the system, Foshey says.
Farther down the road, the researchers would also like to test this data-driven discovery process for uses beyond developing new 3D printing inks.
“This has broad applications across materials science in general. For instance, if you wanted to design new types of batteries that were higher efficiency and lower cost, you could use a system like this to do it. Or if you wanted to optimize paint for a car that performed well and was environmentally friendly, this system could do that, too,” he says.
Because it presents a systematic approach for identifying optimal materials, this work could be a major step toward realizing high performance structures, says Keith A. Brown, assistant professor in the Department of Mechanical Engineering at Boston University.
“The focus on novel material formulations is particularly encouraging as this is a factor that is often overlooked by researchers who are constrained by commercially available materials. And the combination of data-driven methods and experimental science allows the team to identify materials in an efficient manner. Since experimental efficiency is something with which all experimenters can identify, the methods here have a chance of motivating the community to adopt more data-driven practices,” he says.
The research was supported by BASF.
A new kind of fiber developed by researchers at MIT and in Sweden can be made into clothing that senses how much it is being stretched or compressed, and then provides immediate tactile feedback in the form of pressure, lateral stretch, or vibration. Such fabrics, the team suggests, could be used in garments that help train singers or athletes to better control their breathing, or that help patients recovering from disease or surgery to restore healthy breathing patterns.
The multilayered fibers contain a fluid channel in the center, which can be activated by a fluidic system. This system controls the fibers’ geometry by pressurizing and releasing a fluid medium, such as compressed air or water, into the channel, allowing the fiber to act as an artificial muscle. The fibers also contain stretchable sensors that can detect and measure the degree of stretching of the fibers. The resulting composite fibers are thin and flexible enough to be sewn, woven, or knitted using standard commercial machines.
The fibers, dubbed OmniFibers, are being presented this week at the Association for Computing Machinery’s User Interface Software and Technology online conference, in a paper by Ozgun Kilic Afsar, a visiting doctoral student and research affiliate at MIT; Hiroshi Ishii, the Jerome B. Wiesner Professor of Media Arts and Sciences; and eight others from the MIT Media Lab, Uppsala University, and KTH Royal Institute of Technology in Sweden.
The new fiber architecture has a number of key features. Its extremely narrow size and use of inexpensive material make it relatively easy to structure the fibers into a variety of fabric forms. It’s also compatible with human skin, since its outer layer is based on a material similar to common polyester. And, its fast response time and the strength and variety of the forces it can impart allow for a rapid feedback system for training or remote communications using haptics (based on the sense of touch).
Afsar says that the shortcomings of most existing artificial muscle fibers are that they are either thermally activated, which can cause overheating when used in contact with human skin, or they have low-power efficiency or arduous training processes. These systems often have slow response and recovery times, limiting their immediate usability in applications that require rapid feedback, she says.
As an initial test application of the material, the team made a type of undergarment that singers can wear to monitor and play back the movement of respiratory muscles, to later provide kinesthetic feedback through the same garment to encourage optimal posture and breathing patterns for the desired vocal performance. “Singing is particularly close to home, as my mom is an opera singer. She’s a soprano,” she says. In the design and fabrication process of this garment, Afsar has worked closely with a classically trained opera singer, Kelsey Cotton.
“I really wanted to capture this expertise in a tangible form,” Afsar says. The researchers had the singer perform while wearing the garment made of their robotic fibers, and recorded the movement data from the strain sensors woven into the garment. Then, they translated the sensor data to the corresponding tactile feedback. “We eventually were able to achieve both the sensing and the modes of actuation that we wanted in the textile, to record and replay the complex movements that we could capture from an expert singer’s physiology and transpose it to a nonsinger, a novice learner’s body. So, we are not just capturing this knowledge from an expert, but we are able to haptically transfer that to someone who is just learning,” she says.
Though this initial testing is in the context of vocal pedagogy, the same approach could be used to help athletes to learn how best to control their breathing in a given situation, based on monitoring accomplished athletes as they carry out various activities and stimulating the muscle groups that are in action, Afsar says. Eventually, the hope is that such garments could also be used to help patients regain healthy breathing patterns after major surgery or a respiratory disease such as Covid-19, or even as an alternative treatment for sleep apnea (which Afsar suffered from as a child, she says).
The physiology of breathing is actually quite complex, explains Afsar, who is carrying out this work as part of her doctoral thesis at KTH Royal Institute of Technology. “We are not quite aware of which muscles we use and what the physiology of breathing consists of,” she says. So, the garments they designed have separate modules to monitor different muscle groups as the wearer breathes in and out, and can replay the individual motions to stimulate the activation of each muscle group.
Ishii says he can foresee a variety of applications for this technology. “Everybody has to breathe. Breathing has a major impact on productivity, confidence, and performance,” he says. “Breathing is important for singing, but also this can help when recovering from surgery or depression. For example, breathing is so important for meditation.”
The system also might be useful for training other kinds of muscle movements besides breathing, he says. For example, “Many of our artists studied amazing calligraphy, but I want to feel the dynamics of the stroke of the brushes,” which might be accomplished with a sleeve and glove made of this closed-loop-feedback material. And Olympic athletes might sharpen their skills by wearing a garment that reproduces the movements of a top athlete, whether a weightlifter or a skier, he suggests.
The soft fiber composite, which resembles a strand of yarn, has five layers: the innermost fluid channel, a silicone-based elastomeric tube to contain the working fluid, a soft stretchable sensor that detects strain as a change in electrical resistance, a braided polymer stretchable outer mesh that controls the outer dimensions of the fiber, and a nonstretchy filament that provides a mechanical constraint on the overall extensibility.
“The fiber-level engineering and fabric-level design are nicely integrated in this study,” says Lining Yao, an assistant professor of human-computer interaction at Carnegie Mellon University, who was not associated with this research. The work’s “different machine knitting techniques, including inlay and active spacer fabric, advanced the state of the art regarding ways of embedding actuating fibers into textiles,” she says. “Integrating strain sensing and feedbacks is essential when we talk about wearable interactions with actuating fabrics.”
Afsar plans to continue working on making the whole system, including its control electronics and compressed air supply, even more miniaturized to keep it as unobtrusive as possible, and to develop the manufacturing system to be able to produce longer filaments. In coming months, she plans to begin experiments in using the system for transferring skills from an expert to a novice singer, and later to explore different kinds of movement practices, including those of choreographers and dancers.
The research was supported by the Swedish Foundation for Strategic Research. The team included Ali Shtarbanov, Hila Mor, Ken Nakagaki, and Jack Forman at MIT; Kristina Hook at KTH Royal Institute of Technology; and Karen Modrei, Seung Hee Jeong, and Klas Hjort at Uppsala University in Sweden.
Documentary short, “The Uprising,” showcases women in science who pressed for equal rights at MIT in the 1990s
The MIT Press today announced the digital release of “The Uprising,” a documentary short about the unprecedented behind-the-scenes effort that amassed irrefutable evidence of differential treatment of men and women on the MIT faculty in the 1990s. Directed by Ian Cheney and Sharon Shattuck, the film premiered on the MIT Press’ YouTube channel, and is now openly distributed.
A 13-minute film, “The Uprising” introduces the story behind the 1999 Study on the Status of Women Faculty in Science at MIT and its impact both at the Institute and around the globe. Featuring Nancy Hopkins, professor emerita of biology at MIT, the film chronicles the experiences of marginalization and discouragement that accompanied Hopkins’ research leading up to the study and further highlights the steps a group of 16 female faculty members took to make science more diverse and equitable.
The MIT report is today widely credited with advancing gender equity in universities both nationally and internationally. This ripple effect is highlighted in the film by Hopkins, who says, “Look at the talent of these women. This is what you lose when you do not solve this problem. It's true not just of women, it's true of minorities, it's true of all groups that get excluded. It's all of that talent that you lose. For me, the success of these women is the reward for the work we did. That's really what it's about. It's about the science.”
“The Uprising” features interviews with leading current and former MIT scientists, including social psychologist Lotte Bailyn, biomedical engineer Sangeeta N. Bhatia, chemist Sylvia Ceyer, ecologist Sallie “Penny” Chisholm, materials engineer Lorna Gibson, biologist Ruth Lehmann, geophysicist and National Academy of Sciences President Marcia McNutt, cognitive scientist Mary Potter, oceanographer Paola Rizzoli, geophysicist Leigh Royden, and biologist Lisa Steiner. “The Uprising” was produced in conjunction with the feature-length documentary film, “Picture a Scientist.”
“The Uprising” was funded by a grant from the Alfred P. Sloan Foundation, as well as support from Nancy Blachman and an anonymous donor. The film was produced by Manette Pottle, in collaboration with the MIT Press. Amy Brand, director and publisher at the MIT Press, served as executive producer.
As we interact with the world, we are constantly presented with information that is unreliable or incomplete — from jumbled voices in a crowded room to solicitous strangers with unknown motivations. Fortunately, our brains are well equipped to evaluate the quality of the evidence we use to make decisions, usually allowing us to act deliberately, without jumping to conclusions.
Now, neuroscientists at MIT’s McGovern Institute for Brain Research have homed in on key brain circuits that help guide decision-making under conditions of uncertainty. By studying how mice interpret ambiguous sensory cues, they’ve found neurons that stop the brain from using unreliable information.
The findings, published Oct. 6 in the journal Nature, could help researchers develop treatments for schizophrenia and related conditions, whose symptoms may be at least partly due to affected individuals’ inability to effectively gauge uncertainty.
“A lot of cognition is really about handling different types of uncertainty,” says MIT associate professor of brain and cognitive sciences Michael Halassa, explaining that we all must use ambiguous information to make inferences about what’s happening in the world. Part of dealing with this ambiguity involves recognizing how confident we can be in our conclusions. And when this process fails, it can dramatically skew our interpretation of the world around us.
“In my mind, schizophrenia spectrum disorders are really disorders of appropriately inferring the causes of events in the world and what other people think,” says Halassa, who is a practicing psychiatrist. Patients with these disorders often develop strong beliefs based on events or signals most people would dismiss as meaningless or irrelevant, he says. They may assume hidden messages are embedded in a garbled audio recording, or worry that laughing strangers are plotting against them. Such things are not impossible — but delusions arise when patients fail to recognize that they are highly unlikely.
Halassa and postdoc Arghya Mukherjee wanted to know how healthy brains handle uncertainty, and recent research from other labs provided some clues. Functional brain imaging had shown that when people are asked to study a scene but they aren’t sure what to pay attention to, a part of the brain called the mediodorsal thalamus becomes active. The less guidance people are given for this task, the harder the mediodorsal thalamus works.
The thalamus is a sort of crossroads within the brain, made up of cells that connect distant brain regions to one another. Its mediodorsal region sends signals to the prefrontal cortex, where sensory information is integrated with our goals, desires, and knowledge to guide behavior. Previous work in the Halassa lab showed that the mediodorsal thalamus helps the prefrontal cortex tune in to the right signals during decision-making, adjusting signaling as needed when circumstances change. Intriguingly, this brain region has been found to be less active in people with schizophrenia than it is in others.
Working with postdoc Norman Lam and Research Scientist Ralf Wimmer, Halassa and Mukherjee designed a set of animal experiments to examine the mediodorsal thalamus’s role in handling uncertainty. Mice were trained to respond to sensory signals according to audio cues that told them whether to focus on light or sound. When the animals were given conflicting cues, it was up to the animal to figure out which one was represented most prominently and act accordingly. The experimenters varied the uncertainty of this task by manipulating the numbers and ratio of the cues.
Division of labor
By manipulating and recording activity in the animals’ brains, the researchers found that the prefrontal cortex got involved every time mice completed this task, but the mediodorsal thalamus was only needed when the animals were given signals that left them uncertain how to behave. There was a simple division of labor within the brain, Halassa says. “One area cares about the content of the message — that’s the prefrontal cortex — and the thalamus seems to care about how certain the input is.”
Within the mediodorsal thalamus, Halassa and Mukherjee found a subset of cells that were especially active when the animals were presented with conflicting sound cues. These neurons, which connect directly to the prefrontal cortex, are inhibitory neurons, capable of dampening downstream signaling. So when they fire, Halassa says, they effectively stop the brain from acting on unreliable information. Cells of a different type were focused on the uncertainty that arises when signaling is sparse. “There’s a dedicated circuitry to integrate evidence across time to extract meaning out of this kind of assessment,” Mukherjee explains.
As Halassa and Mukherjee investigate these circuits more deeply, a priority will be determining whether they are disrupted in people with schizophrenia. To that end, they are now exploring the circuitry in animal models of the disorder. The hope, Mukherjee says, is to eventually target dysfunctional circuits in patients, using noninvasive, focused drug delivery methods currently under development. “We have the genetic identity of these circuits. We know they express specific types of receptors, so we can find drugs that target these receptors,” he says. “Then you can specifically release these drugs in the mediodorsal thalamus to modulate the circuits as a potential therapeutic strategy.”
This work was funded by grants from the National Institute of Mental Health.
The cost of DNA sequencing has plummeted at a rate faster than Moore’s Law, opening large markets in the sequencing space. Genomics for cancer care alone is predicted to hit $23 billion by 2025, but sample preparation costs for sequencing have stagnated, causing a significant bottleneck in the space.
Conventional sample preparation, converting DNA from a saliva sample, for example, into something that can be fed to a sequencing machine, relies on a liquid-handling robot. It is essentially a mechanical arm equipped with pipette tips that moves liquid samples to plastic plates and other instruments placed on the deck. These systems involve multiple fluidic transfers that lead to poor utilization of reagents and samples, which means less DNA sequenced. Moreover, they are systems of separate data silos that lack integration and rely on expensive consumables.
Unlike traditional liquid-handling automation, the suite of solutions developed by MIT Media Lab spinoff Volta Labs provides end-to-end integration for a wide variety of workflows. It’s a sleek alternative to costly liquid handling machines and manual pipetting. “Our technology is a small-scale, benchtop device that is low-cost and has minimal consumable usage, enabling rapid and flexible composition of new biological workflows,” says Volta Labs co-founder and Head of Engineering Will Langford SM ’14, PhD ’19.
The Volta platform is based on digital microfluidic technology developed at MIT by Langford's co-founder, Volta Labs CEO Udayan Umapathi SM ’17. The core principle behind the innovation is called electrowetting. It allows its users to manipulate droplets around a printed circuit board to perform biological reactions, automating from raw sample to prepared library that can be run on a sequencing machine.
Umapathi arrived at the Media Lab with what he describes as "a fascination for building automation from the ground up." Though trained as an engineer, Umapathi has applied his skills to a variety of fields. In 2015, he founded a startup that created web and physical tools to enable content creation for digital manufacturing. However, it was while working for a synthetic biology company, engineering liquid-handling systems for genome engineering solutions, that he identified the scaling up of automation as a pain point for the field.
Meanwhile, Langford spent his MIT days at the Center for Bits and Atoms, a proudly interdisciplinary program that explores the boundary between computer science and physical science. His research centered on the idea that engineering could learn from biology. Put another way, all of life is assembled from 20 amino acids, so, thought Langford, why not attempt something similar with engineering?
In practice, this meant he built integrated robots from a small set of millimeter-scale parts. "Ultimately, I was trying to make engineering more like biology,” he reflects. “I see Volta as an opportunity to flip that on its head and use automation to treat biology more like engineering. We want to give biologists tools to manipulate liquids and biological reactions at a finer granularity and with more digital flexibility.”
While Volta’s automation platform simplifies sample prep by integrating complicated workflows, it also drives down costs in the space with a new consumable construction. Between the circuit board and the sample board is a consumable layer, which is removed and replaced after each run. Conventional consumables are expensive, conductively coated plastics or large microfluidic structures. Volta, however, uses a simple plastic film to reduce the cost of consumables, which opens the door for the widespread adoption of gene sequencing.
All of this points to a more efficient and inclusionary model in the gene sequencing space. Thanks to Volta, soon, it won't be just large biotechnology companies with the ability to invest in automation. Academic labs, core facilities, and small-to-medium biotech companies won’t need to worry about whether they can afford an expensive mechanical robot. "The thing that excites me is that we’re providing early-stage and mid-to-low-throughput biotech companies with powerful tools that will allow them to compete with bigger players, which is good for the industry as a whole,” says Umapathi.
And the fact is that traditional automation machines used in the biotechnology space come with their own set of problems. They're error-prone and you can't scale them. Consider Illumina's NovaSeq sequencer. It's capable of sequencing 48 whole human genomes in under two days — that’s 20 billion unique reads — but there is currently no automation to feed those machines at scale. “To run those machines day in and day out, the cost simply doesn’t make sense, which is why we have to tackle the cost of sequencing and sample prep,” says Umapathi.
Volta's system is built on solid-state electronics, and the Boston-based startup is looking to leverage the scalability of the semiconductor fabrication industry and the PCB manufacturing industry. “The goal,” explains Langford, “is to enable biologists to create an experiment and modify it quickly, iterate on it, and generate the data necessary to see biology at scale.”
Beyond the sample prep bottleneck, eventually, the work of Umapathi and Langford will impact a variety of applications in the synthetic biology industry and the biopharma industry. Diagnostics will be transformed, according to Umapathi. “We can help the biology industry by cutting down on the use of pipette tips by 20 or 50 times. In specific workflows, we can almost entirely eliminate this bottleneck in the supply chain," he says.
To accomplish all of this, to truly innovate in a field as complex as biology, Umapathi and Langford insist that a multidisciplinary systems perspective is essential. It’s what informs the Volta approach to genomic sequencing in particular, and biology as a whole. “Volta is a new type of biotechnology company,” says Umapathi. “It’s inevitable that more engineers and systems thinkers and those who want to build tools to engineer biology better will join companies like ours or start their own.”
Turning biology into an engineering principle is no small feat, but according to Umapathi and Langford, it’s a necessity.
MIT announced today that unusually strong performance by its endowment will enable greater support for undergraduate and graduate students, and investment in research operations that will strengthen its capacity to advance breakthrough science and technology.
The Institute’s unitized pool of endowment and other MIT funds recorded an investment return of 55.5 percent during the fiscal year ending June 30, 2021, as measured using valuations received within one month of fiscal year end — its strongest annual performance in more than 20 years. At the end of the fiscal year the endowment’s value stood at $27.4 billion, an increase of $9 billion.
President L. Rafael Reif, Chancellor Melissa Nobles, and Vice President for Research Maria Zuber shared the news in a series of letters today to MIT faculty, staff, students, and postdocs. MIT’s Report of the Treasurer for fiscal year 2021 was also released today.
“This is a once-in-a-generation opportunity, and we must use it in a way that inspires big ideas and builds a stronger MIT at a time when the world needs breakthroughs in science more than ever,” President Reif says.
Reif says the new funds will be deployed in ways that benefit students at all levels, and make the Institute more capable of advancing the cutting-edge research and science the world needs. Task Force 2021 and Beyond (TF21), an MIT-wide process of mapping the future of the Institute that was launched shortly after the onset of Covid-19, has helped identify key priorities, as have reports of MIT’s visiting committees, which examine and advise on every aspect of the Institute. Priorities identified by TF21 and the visiting committees include:
- increasing financial support for MIT’s 7,000 graduate students, who face rising housing costs in the Boston area, and most of whom contribute directly to research and teaching;
- improving student life by continuing to modernize campus facilities, strengthening classroom and digital learning experiences, and providing greater support to ensure success at MIT; and
- growing MIT’s investments in core research infrastructure, such as computing power.
Beyond the opportunities for community feedback offered through TF21 — which has involved over 200 faculty, staff, and students, as well as alumni and student advisory boards — members of the MIT community will have opportunities to offer input on spending priorities through the budget process that unfolds each fall.
“Budgeting is an act of balancing,” Provost Martin Schmidt says. “Our mission of learning and discovery will be just as pivotal tomorrow as it is today. As we unlock some of these gains to strengthen MIT’s work right now, we have to do it in ways that keep our work strong in the long run. Often, breakthroughs in science and technology take years and years of steadfast support before they pay off. We aim to strike that balance in a compelling way.”
About the MIT endowment
MIT’s endowment is intended to support current and future generations of MIT scholars with the resources needed to advance knowledge, research, and innovation. As such, endowment funds are used for Institute activities including education, research, campus renewal, faculty work, and student financial aid.
In the last fiscal year, returns from the unitized investment pool containing MIT’s endowment provided $851 million, or about 30 percent of campus operating revenues. Under the Institute’s usual endowment spending formula, designed to smooth the highs and lows of market returns over time and ensure stable funding for its education and research, deployment of last year’s unexpected gains to support MIT’s operations would be deferred significantly.
The 55.5 percent return on the investment pool containing MIT’s endowment over the past fiscal year enables the Institute to put these gains to use more quickly, while still maintaining appropriate balances to meet future needs. Accordingly, MIT plans to increase endowment payout by 30 percent starting in fiscal year 2023 (which begins on July 1, 2022), aiming to “accelerate the work the world needs right now and deepen the support our students need,” Schmidt says.
Taking this step will provide an estimated $286 million in additional resources to support the work of the MIT community in the coming fiscal year. It will also set a new baseline for endowment support for the budget in following years.
“We have strived to craft a carefully calibrated approach,” Executive Vice President and Treasurer Glen Shor says. “The steps we are taking balance today’s needs for meaningful investment in the Institute, while taking care to preserve our endowment for future generations of learners.”
Endowment funds help the Institute cover everything from utilities, to support for teaching and learning, to the sudden need for a community-wide Covid testing system. Funds from the endowment are also critical to supporting the Institute’s need-blind undergraduate admissions policy, which ensures that an MIT education is accessible to all qualified candidates regardless of financial resources.
MIT works closely with all families who qualify for financial aid to develop an individual affordability plan tailored to their financial circumstances. In 2020-21, the average need-based MIT scholarship was $45,146. In the fall of 2020, MIT offered an additional $21.8 million in aid ($5,000 to every student) to help cover pandemic-related expenses. Fifty-seven percent of MIT undergraduates received need-based financial aid, and 38 percent of MIT undergraduate students received scholarship funding from MIT and other sources sufficient to cover the total cost of tuition.
The core of MIT’s endowment is made up of gifts from alumni and friends who make “endowment gifts” with the expectation that the funds will be invested in ways that enable ongoing benefits over many years. Those gifts are frequently restricted by donors for specific uses, such as need-based scholarships, and cannot be used for any other purpose. This means that MIT’s ability to set spending priorities for the new funds is limited by endowment restrictions.
Neural networks can learn to solve all sorts of problems, from identifying cats in photographs to steering a self-driving car. But whether these powerful, pattern-recognizing algorithms actually understand the tasks they are performing remains an open question.
For example, a neural network tasked with keeping a self-driving car in its lane might learn to do so by watching the bushes at the side of the road, rather than learning to detect the lanes and focus on the road’s horizon.
Researchers at MIT have now shown that a certain type of neural network is able to learn the true cause-and-effect structure of the navigation task it is being trained to perform. Because these networks can understand the task directly from visual data, they should be more effective than other neural networks when navigating in a complex environment, like a location with dense trees or rapidly changing weather conditions.
In the future, this work could improve the reliability and trustworthiness of machine learning agents that are performing high-stakes tasks, like driving an autonomous vehicle on a busy highway.
“Because these machine-learning systems are able to perform reasoning in a causal way, we can know and point out how they function and make decisions. This is essential for safety-critical applications,” says co-lead author Ramin Hasani, a postdoc in the Computer Science and Artificial Intelligence Laboratory (CSAIL).
Co-authors include electrical engineering and computer science graduate student and co-lead author Charles Vorbach; CSAIL PhD student Alexander Amini; Institute of Science and Technology Austria graduate student Mathias Lechner; and senior author Daniela Rus, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science and director of CSAIL. The research will be presented at the 2021 Conference on Neural Information Processing Systems (NeurIPS) in December.
An attention-grabbing result
Neural networks are a method for doing machine learning in which the computer learns to complete a task through trial-and-error by analyzing many training examples. And “liquid” neural networks change their underlying equations to continuously adapt to new inputs.
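A liquid neural network cell can be sketched as an ordinary differential equation whose effective time constant depends on the current input, which is the sense in which the network "changes its underlying equations" as inputs change. The following is a simplified, single-step Euler discretization for illustration; the update rule, shapes, and parameters here are assumptions, not the authors' implementation.

```python
import numpy as np

def ltc_step(x, u, W_in, W_rec, b, tau, A, dt=0.05):
    """One Euler step of a simplified liquid time-constant (LTC) cell.

    The nonlinearity f acts like an input-dependent conductance, so the
    cell's effective time constant (1/tau + f) shifts with the input u.
    """
    f = np.tanh(W_in @ u + W_rec @ x + b)   # input-dependent gate
    dxdt = -(1.0 / tau + f) * x + f * A     # liquid dynamics
    return x + dt * dxdt

rng = np.random.default_rng(0)
n, m = 4, 3                                 # neurons, input channels
x = np.zeros(n)
W_in = rng.normal(size=(n, m))
W_rec = rng.normal(size=(n, n))
b, tau, A = np.zeros(n), np.ones(n), np.ones(n)

for _ in range(100):                        # drive with a constant input
    x = ltc_step(x, np.ones(m), W_in, W_rec, b, tau, A)
```

Because the gate `f` is bounded, the damping coefficient stays nonnegative and the state remains stable under a small step size; the key design point is simply that the dynamics themselves are a function of the input.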
The new research draws on previous work in which Hasani and others showed how a brain-inspired type of deep learning system called a Neural Circuit Policy (NCP), built from liquid neural network cells, is able to autonomously control a self-driving vehicle, with a network of only 19 control neurons.
The researchers observed that the NCPs performing a lane-keeping task kept their attention on the road’s horizon and borders when making a driving decision, the same way a human would (or should) while driving a car. Other neural networks they studied didn’t always focus on the road.
“That was a cool observation, but we didn’t quantify it. So, we wanted to find the mathematical principles of why and how these networks are able to capture the true causation of the data,” Hasani says.
They found that, when an NCP is being trained to complete a task, the network learns to interact with the environment and account for interventions. In essence, the network recognizes if its output is being changed by a certain intervention, and then relates the cause and effect together.
During training, the network is run forward to generate an output, and then backward to correct for errors. The researchers observed that NCPs relate cause-and-effect during forward-mode and backward-mode, which enables the network to place very focused attention on the true causal structure of a task.
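In generic terms, the forward-then-backward training loop described above is ordinary gradient-based learning: run the network forward to produce an output, measure the error, and propagate a correction backward through the parameters. The toy example below shows that loop on a one-layer linear model in plain NumPy; it is a schematic illustration of the training procedure, not the NCP training code.

```python
import numpy as np

rng = np.random.default_rng(1)
X = rng.normal(size=(64, 5))        # toy inputs
w_true = rng.normal(size=5)
y = X @ w_true                      # toy targets

w = np.zeros(5)
lr = 0.5
for step in range(300):
    pred = X @ w                    # forward mode: generate an output
    err = pred - y                  # how wrong was it?
    grad = X.T @ err / len(X)       # backward mode: error gradient
    w -= lr * grad                  # correct for the error

# w now closely approximates w_true
```

The point of the passage is that, for NCPs, this same forward/backward cycle is where the cause-and-effect relationships get picked up, without any extra machinery bolted on.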
Hasani and his colleagues didn’t need to impose any additional constraints on the system or perform any special setup for the NCP to learn this causality.
“Causality is especially important to characterize for safety-critical applications such as flight,” says Rus. “Our work demonstrates the causality properties of Neural Circuit Policies for decision-making in flight, including flying in environments with dense obstacles such as forests and flying in formation.”
Weathering environmental changes
They tested NCPs through a series of simulations in which autonomous drones performed navigation tasks. Each drone used inputs from a single camera to navigate.
The drones were tasked with traveling to a target object, chasing a moving target, or following a series of markers in varied environments, including a redwood forest and a neighborhood. They also traveled under different weather conditions, like clear skies, heavy rain, and fog.
The researchers found that the NCPs performed as well as the other networks on simpler tasks in good weather, but outperformed them all on the more challenging tasks, such as chasing a moving object through a rainstorm.
“We observed that NCPs are the only networks that pay attention to the object of interest in different environments while completing the navigation task, wherever you test it, and in different lighting or environmental conditions,” Hasani says. “This is the only system that can do this causally and actually learn the behavior we intend the system to learn.”
Their results show that the use of NCPs could also enable autonomous drones to navigate successfully in environments with changing conditions, like a sunny landscape that suddenly becomes foggy.
“Once the system learns what it is actually supposed to do, it can perform well in novel scenarios and environmental conditions it has never experienced. This is a big challenge of current machine learning systems that are not causal. We believe these results are very exciting, as they show how causality can emerge from the choice of a neural network,” Hasani says.
In the future, the researchers want to explore the use of NCPs to build larger systems. Putting thousands or millions of networks together could enable them to tackle even more complicated tasks.
This research was supported by the United States Air Force Research Laboratory, the United States Air Force Artificial Intelligence Accelerator, and the Boeing Company.
With the tools of modern neuroscience, researchers can peer into the brain with unprecedented accuracy. Recording devices listen in on the electrical conversations between neurons, picking up the voices of hundreds of cells at a time. Genetic tools allow us to focus on specific types of neurons based on their molecular signatures. Microscopes zoom in to illuminate the brain’s circuitry, capturing thousands of images of elaborately branched dendrites. Functional MRIs detect changes in blood flow to map activity within a person’s brain, generating a complete picture by compiling hundreds of scans.
This deluge of data provides insights into brain function and dynamics at different levels — molecules, cells, circuits, and behavior — but the insights remain compartmentalized in separate research silos for each level. A new center at MIT’s McGovern Institute for Brain Research aims to synthesize them into a more complete picture of the brain’s inner workings.
The K. Lisa Yang Integrative Computational Neuroscience (ICoN) Center will create advanced mathematical models and computational tools to synthesize the deluge of data across scales and advance our understanding of the brain and mental health.
The center, funded by a $24 million donation from philanthropist Lisa Yang and led by McGovern Institute Associate Investigator Ila Fiete, will take a collaborative approach to computational neuroscience, integrating cutting-edge modeling techniques and data from MIT labs to explain brain function at every level, from the molecular to the behavioral.
“Our goal is that sophisticated, truly integrated computational models of the brain will make it possible to identify how ‘control knobs’ such as genes, proteins, chemicals, and environment drive thoughts and behavior, and to make inroads toward urgent unmet needs in understanding and treating brain disorders,” says Fiete, who is also a brain and cognitive sciences professor at MIT.
“Driven by technologies that generate massive amounts of data, we are entering a new era of translational neuroscience research,” says Yang, whose philanthropic investment in MIT research now exceeds $130 million. “I am confident that the multidisciplinary expertise convened by the ICoN center will revolutionize how we synthesize this data and ultimately understand the brain in health and disease.”
Connecting the data
It is impossible to separate the molecules in the brain from their effects on behavior — although those aspects of neuroscience have traditionally been studied independently, by researchers with vastly different expertise. The ICoN Center will eliminate the divides, bringing together neuroscientists and software engineers to deal with all types of data about the brain.
“The center’s highly collaborative structure, which is essential for unifying multiple levels of understanding, will enable us to recruit talented young scientists eager to revolutionize the field of computational neuroscience,” says Robert Desimone, director of the McGovern Institute. “It is our hope that the ICoN Center’s unique research environment will truly demonstrate a new academic research structure that catalyzes bold, creative research.”
To foster interdisciplinary collaboration, every postdoc and engineer at the center will work with multiple faculty mentors. In order to attract young scientists and engineers to the field of computational neuroscience, the center will also provide four graduate fellowships to MIT students each year in perpetuity. Interacting closely with three scientific cores, engineers and fellows will develop computational models and technologies for analyzing molecular data, neural circuits, and behavior, such as tools to identify patterns in neural recordings or automate the analysis of human behavior to aid psychiatric diagnoses. These technologies and models will be instrumental in synthesizing data into knowledge and understanding.
In its first five years, the ICoN Center will prioritize four areas of investigation: episodic memory and exploration, including functions like navigation and spatial memory; complex or stereotypical behavior, such as the perseverative behaviors associated with autism and obsessive-compulsive disorder; cognition and attention; and sleep. Models of complex behavior will be created in collaboration with clinicians and researchers at Children’s Hospital of Philadelphia.
The goal, Fiete says, is to model the neuronal interactions that underlie these functions so that researchers can predict what will happen when something changes — when certain neurons become more active or when a genetic mutation is introduced, for example. When paired with experimental data from MIT labs, the center’s models will help explain not just how these circuits work, but also how they are altered by genes, the environment, aging, and disease. These focus areas encompass circuits and behaviors often affected by psychiatric disorders and neurodegeneration, and models will give researchers new opportunities to explore their origins and potential treatment strategies.
“Lisa Yang is focused on helping the scientific community realize its goals in translational research,” says Nergis Mavalvala, dean of the School of Science and the Curtis and Kathleen Marble Professor of Astrophysics. “With her generous support, we can accelerate the pace of research by connecting the data to the delivery of tangible results.”
Thriving Stars: An initiative to improve gender representation in electrical engineering and computer science
The MIT Department of Electrical Engineering and Computer Science announced yesterday the Thriving Stars initiative, a new effort to improve gender representation in MIT’s largest doctoral program.
“All types of representation are vital to EECS at MIT, and Thriving Stars will unify multiple disparate efforts focusing on women and other underrepresented genders,” says Asu Ozdaglar, head of the Department of Electrical Engineering and Computer Science (EECS), MIT Schwarzman College of Computing deputy dean of academics, and MathWorks Professor of EECS.
The announcement was made on Ada Lovelace Day, a worldwide event celebrating women in science, technology, engineering, and mathematics. The day is named for the 19th-century British mathematician who is considered by some to be the first computer programmer for her early contributions to Charles Babbage's proposed Analytical Engine, an early computing machine. “It is particularly meaningful to announce Thriving Stars on Ada Lovelace Day, a moment when the world celebrates the achievements of women in STEM,” Ozdaglar says. “We honor the contributions of Ada Lovelace, and will continue to support women, and to achieve our goal of improving gender representation in our department and beyond.”
Addressing a critical gap
Computing and information technology are primary drivers of technological progress in the 21st century. Groundbreaking advances from electrical engineering (EE), computer science (CS), and artificial intelligence and decision-making (AI+D) are changing our day-to-day life. And yet, in all three fields, a substantial gender gap persists.
For the technology of tomorrow to meet the needs of all members of society, critical work is needed to close the gender gap in these crucial fields. With this new initiative, EECS aims to focus on increasing participation of female graduate students in the department, considering their wide-ranging influence as the scientific, technical, and policy leaders of tomorrow.
Thriving Stars will take a holistic, and concerted, approach to achieving this goal by providing support and information to students throughout every step of their PhD journey: from recruitment to admission all the way to graduation. The initiative will focus on efforts to navigate the application process and showcase research opportunities, interdisciplinary collaborations, and internships, as well as the diverse array of career opportunities accessible to doctoral EECS graduates. Thriving Stars will further strive not only to inspire women and other underrepresented genders to pursue graduate work at MIT, but also to build a more supportive and representative community for all graduate students.
MIT: Driving change
MIT EECS is uniquely positioned to make inroads toward improving gender representation in its graduate student body. Routinely ranked the top PhD program in the world, the department is an internationally recognized leader in EE, CS, and AI+D, with a long history of significant research contributions to all three fields. Importantly, the undergraduate population of EECS is steadily approaching gender balance (with an overall percentage of women at 42 percent), while the department is making strides toward better gender representation among the faculty and department leaders.
However, according to the National Science Foundation, only one in four doctoral degrees in engineering, computer science, and math is awarded to a woman, and one in three in the physical sciences. The situation within the graduate program at MIT EECS is no better: women represent only 25 percent of the graduate student body. For the future of all people whose lives will be impacted by computing and information technologies, that needs to change, and MIT EECS is ideally suited to lead the way. With a pipeline full of promising and talented undergraduate women (at MIT and elsewhere); a multitude of role models; the goal of building a vibrant, supportive, and inclusive community; and a sustained effort to provide fellowship support, Thriving Stars will effect change in the PhD journey for women and underrepresented genders.
Thriving Stars road map
Thriving Stars will take a multi-pronged approach, capitalizing on the department’s strong pipeline programs and wide variety of student-led support organizations. In partnership with the MIT Summer Research Program (MSRP), the EECS Graduate Application Assistance Program (GAAP), and Undergraduate Women in EECS (WiEECS), the department will work to unveil the vast world of exciting multidisciplinary research opportunities, including paths toward PhD degrees, for undergraduates. Through collaboration with Graduate Women in Course VI (GW6) and THRIVE (Tools for Honing Resilience and Inspiring Voices of Empowerment), and with mentorship from EECS faculty and alumni, EECS is committed to making the PhD journey supportive, exciting, rewarding, and an experience where each student thrives and makes meaningful contributions to their field. Additionally, Thriving Stars will showcase the groundbreaking research conducted by women in EE, CS, and AI+D in a series of research summits guaranteed to enlighten, educate, and inspire attendees.
The Thriving Stars Advisory Board, composed of highly accomplished academics and business leaders, will be at the forefront of the department’s efforts for systemic improvement for gender representation. Those board members include:
- Maria Klawe, president of Harvey Mudd College (co-chair);
- Asu Ozdaglar, MathWorks Professor, department head of EECS, and the deputy dean for academics in the MIT Stephen A. Schwarzman College of Computing (co-chair);
- Anne Dinning, managing director of D. E. Shaw & Co.;
- Susan Dumais, technical fellow at Microsoft and director of the Microsoft Research Labs in New England, New York City, and Montreal;
- Susan Hockfield, professor of neuroscience and president emerita at MIT;
- Leslie A. Kolodziejski, graduate officer and professor of electrical engineering in EECS;
- Aude Oliva, senior research scientist at MIT and director of strategic industry engagement at the MIT Schwarzman College of Computing; and
- Songyee Yoon, chief strategy officer and president of NCSoft.
Ozdaglar adds, “EECS will embark upon a variety of engagement and community-building methods during the first year of Thriving Stars; in subsequent years, we’ll further develop and deepen our most successful best practices, building a more supportive and inclusive community for all our graduate students. We hope that Thriving Stars will generate broad enthusiasm for creating similar positive change.”