MIT Latest News
Researchers at MIT’s McGovern Institute for Brain Research have discovered a bacterial enzyme that they say could expand scientists’ CRISPR toolkit, making it easy to cut and edit RNA with the kind of precision that, until now, has only been available for DNA editing. The enzyme, called Cas7-11, modifies RNA targets without harming cells, suggesting that in addition to being a valuable research tool, it provides a fertile platform for therapeutic applications.
“This new enzyme is like the Cas9 of RNA,” says McGovern Fellow Omar Abudayyeh, referring to the DNA-cutting CRISPR enzyme that has revolutionized modern biology by making DNA editing fast, inexpensive, and exact. “It creates two precise cuts and doesn't destroy the cell in the process, like other enzymes,” he adds.
Until now, only one other family of RNA-targeting enzymes, Cas13, has been extensively developed for RNA-targeting applications. However, when Cas13 recognizes its target, it shreds any RNAs in the cell, destroying the cell in the process. Like Cas9, Cas7-11 is part of a programmable system; it can be directed at specific RNA targets using a CRISPR guide. Abudayyeh, McGovern Fellow Jonathan Gootenberg, and their colleagues discovered Cas7-11 through a deep exploration of the CRISPR systems found in the microbial world. Their findings were recently reported in the journal Nature.
Exploring natural diversity
Like other CRISPR proteins, Cas7-11 is used by bacteria as a defense mechanism against viruses. After encountering a new virus, bacteria that employ the CRISPR system keep a record of the infection in the form of a small snippet of the pathogen’s genetic material. Should that virus reappear, the CRISPR system is activated, guided by a small piece of RNA to destroy the viral genome and eliminate the infection.
These ancient immune systems are widespread and diverse, with different bacteria deploying different proteins to counter their viral invaders.
“Some target DNA, some target RNA. Some are very efficient in cleaving the target but have some toxicity, and others do not. They introduce different types of cuts, they can differ in specificity — and so on,” says Eugene Koonin, an evolutionary biologist at the National Center for Biotechnology Information.
Abudayyeh, Gootenberg, and Koonin have been scouring genome sequences to learn about the natural diversity of CRISPR systems — and to mine them for potential tools. The idea, Abudayyeh says, is to take advantage of the work that evolution has already done in engineering protein machines.
“We don’t know what we’ll find,” Abudayyeh says, “but let’s just explore and see what’s out there.”
As the team was poring over public databases to examine the components of different bacterial defense systems, a protein from a bacterium that had been isolated from Tokyo Bay caught their attention. Its amino acid sequence indicated that it belonged to a class of CRISPR systems that use large, multiprotein machines to find and cleave their targets. But this protein appeared to have everything it needed to carry out the job on its own. Other known single-protein Cas enzymes, including the Cas9 protein that has been widely adopted for DNA editing, belong to a separate class of CRISPR systems — but Cas7-11 blurs the boundaries of the CRISPR classification system, Koonin says.
The enzyme, which the team eventually named Cas7-11, was attractive from an engineering perspective, because single proteins are easier to deliver to cells and make better tools than their complex counterparts. But its composition also signaled an unexpected evolutionary history. The team found evidence that through evolution, the components of a more complex Cas machine had fused together to make the Cas7-11 protein. Gootenberg equates this to discovering a bat when you had previously assumed that birds are the only animals that fly, thereby recognizing that there are multiple evolutionary paths to flight. “It totally changes the landscape of how these systems are thought about, both functionally and evolutionarily,” he says.
When Gootenberg and Abudayyeh produced the Cas7-11 protein in their lab and began experimenting with it, they realized this unusual enzyme offered a powerful means to manipulate and study RNA. When they introduced it into cells along with an RNA guide, it made remarkably precise cuts, snipping its targets while leaving other RNA undisturbed. This meant they could use Cas7-11 to change specific letters in the RNA code, correcting errors introduced by genetic mutations. They were also able to program Cas7-11 to either stabilize or destroy particular RNA molecules inside cells, which gave them the ability to adjust the levels of the proteins encoded by those RNAs.
Abudayyeh and Gootenberg also found that Cas7-11’s ability to cut RNA could be dampened by a protein that appeared likely to also be involved in triggering programmed cell death, suggesting a possible link between CRISPR defense and a more extreme response to infection.
The team showed that a gene therapy vector can deliver the complete Cas7-11 editing system to cells and that Cas7-11 does not compromise cells’ health. They hope that with further development, the enzyme might one day be used to edit disease-causing sequences out of a patient’s RNA so their cells can produce healthy proteins, or to dial down the level of a protein that is doing harm due to genetic disease.
“We think that the unique way that Cas7-11 cuts enables many interesting and diverse applications,” Gootenberg says, noting that no other CRISPR tool cuts RNA so precisely. “It's yet another great example of how these basic-biology driven explorations can yield new tools for therapeutics and diagnostics,” he adds. “And we're certainly still just scratching the surface of what's out there in natural diversity.”
Amblyopia is the most common cause of vision loss in children, according to the U.S. National Eye Institute. It arises when visual experience is disrupted during infancy, for example by a cataract in one eye. Even after the cataract is removed, vision through the affected eye is impaired because of a failure of this eye to develop strong connections in the brain. The current treatment of covering the “good” eye with a patch to strengthen the amblyopic one is only partially effective, and it cannot help after the end of a “critical period” that closes before age 8.
In a new study, MIT and Dalhousie University neuroscientists demonstrate that by temporarily anesthetizing the retina of the good eye, they could lastingly improve vision in the amblyopic one, even after the critical period, in two different mammal species.
The encouraging results support further preclinical testing of the novel therapy, in which the non-amblyopic eye’s retina is temporarily and reversibly silenced by an injection of tetrodotoxin (TTX), says Mark Bear, Picower Professor of Neuroscience in The Picower Institute for Learning and Memory at MIT and corresponding author of the study published in eLife.
“We observed a recovery in every animal,” says Bear, a faculty member of MIT’s Department of Brain and Cognitive Sciences. “We’ve done much better than anyone would have anticipated.”
The results provide hope that the approach can eventually be translated to people, adds Kevin Duffy, professor in the Department of Psychology and Neuroscience at Dalhousie.
“These are remarkable data that demonstrate an unequaled profile of recovery,” says Duffy, who co-led the study with Ming-fai Fong, a postdoc in Bear’s Picower Institute lab. “I am hopeful and optimistic that this study can provide a pathway for a new and more effective approach to amblyopia treatment. I am very proud to have been part of this rewarding collaboration.”
A new approach to amblyopia
The new approach is based on decades of underlying neuroscience discoveries led by Bear that have revealed how amblyopia develops. When input from an amblyopic eye is weak, key connections, or “synapses,” in neural circuits leading from the eye to the brain’s visual cortex wither via a process he discovered called “long-term depression.” But theoretical and experimental studies by his lab have also shown that completely but temporarily suspending visual input creates a condition in which the synaptic connections can fully restrengthen, almost as if they are being “rebooted.”
In 2016, Bear, Duffy, Fong, and colleagues showed they could restore vision in amblyopic mice past the critical period by temporarily inactivating both retinas with TTX, but in the new study they sought to determine whether vision could recover by temporarily suspending retinal activity in just the non-amblyopic eye in older animals, Fong says.
“These differences may seem small, but they are a big deal for a couple of reasons,” she says. “First, inactivating both retinas effectively eliminates vision; even if temporary, this presents some practical challenges. Therefore, our ability to limit inactivation just to one eye makes it potentially more tractable for clinical translation. Second, there is currently no treatment for adult amblyopia in humans. In our study we used mature amblyopic animals that are recalcitrant to any other treatment due to the decline in the capacity for plasticity that comes with age.”
The authors also sought to confirm their result in more than one species to ensure the effect generalizes to the mammalian brain. There is good reason to think so. Clinical observations in humans show that in some cases when a person with amblyopia loses their non-amblyopic eye to disease or injury, their amblyopic eye can improve even if they are adults.
In the new study the team therefore tested whether administering TTX in the non-amblyopic eye of animal models would produce a full recovery of visual response in their amblyopic eye beyond the critical period. Not only did it do so in each animal tested, but also, visual responses always recovered to normal levels in the eye that received the TTX.
“This is a very clear demonstration of how understanding principles of synaptic plasticity can yield a novel therapeutic strategy,” Bear says.
The researchers even showed that neurons that relay visual input to the visual cortex, which shrink with amblyopia, were able to regain normal size.
The effect of the therapy was stronger and more consistent than in the human clinical cases of non-amblyopic eye loss because once synaptic connections are rebooted for the amblyopic eye and retinal activity returns in the non-amblyopic eye, they become mutually reinforcing, Bear says.
The results lend support for the theory that temporary inactivation of the non-amblyopic eye sets the stage for permanent strengthening of the synapses from the amblyopic eye. This theory holds that when activity is completely absent from the non-amblyopic eye, the degree of input through the amblyopic eye becomes sufficient to trigger synaptic strengthening, or “long-term potentiation.”
The theoretical question, however, doesn’t need to be resolved for the promising results to advance toward clinical use, the authors note. Instead, Bear says he plans to pursue new studies to ensure that the approach would be safe and effective for people, such as adults for whom patch therapy is no longer viable.
In addition to Fong, Duffy, and Bear, the paper’s other authors are Madison Leet and Christian Candler of The Picower Institute.
The U.S. National Eye Institute, the Canadian Institutes of Health Research, and the JPB Foundation provided support for the research.
Sometimes patterns repeat in nature. Spirals appear in sunflowers and hurricanes. Branches occur in veins and lightning. Limiao Zhang, a doctoral student in MIT’s Department of Nuclear Science and Engineering, has found another similarity: between street traffic and boiling water, with implications for preventing nuclear meltdowns.
Growing up in China, Zhang enjoyed watching her father repair things around the house. He couldn’t fulfill his dream of becoming an engineer, instead joining the police force, but Zhang did have that opportunity and studied mechanical engineering at Three Gorges University. Being one of four girls among about 50 boys in the major didn’t discourage her. “My father always told me girls can do anything,” she says. She graduated at the top of her class.
In college, she and a team of classmates won a national engineering competition. They designed and built a model of a carousel powered by solar, hydroelectric, and pedal power. One judge asked how long the system could operate safely. “I didn’t have a perfect answer,” she recalls. She realized that engineering means designing products that not only function, but are resilient. So for her master’s degree, at Beihang University, she turned to industrial engineering and analyzed the reliability of critical infrastructure, in particular traffic networks.
“Among all the critical infrastructures, nuclear power plants are quite special,” Zhang says. “Although one can provide very enormous carbon-free energy, once it fails, it can cause catastrophic results.” So she decided to switch fields again and study nuclear engineering. At the time she had no nuclear background, and hadn’t studied in the United States, but “I tried to step out of my comfort zone,” she says. “I just applied and MIT welcomed me.” Her supervisor, Matteo Bucci, and her classmates explained the basics of fission reactions as she adjusted to the new material, language, and environment. She doubted herself — “my friend told me, ‘I saw clouds above your head’” — but she passed her first-year courses and published her first paper soon afterward.
Much of the work in Bucci’s lab deals with what’s called the boiling crisis. In many applications, such as nuclear plants and powerful computers, water cools things. When a hot surface boils water, bubbles cling to the surface before rising, but if too many form, they merge into a layer of vapor that insulates the surface. The heat has nowhere to go — a boiling crisis.
Bucci invited Zhang into his lab in part because she saw a connection between traffic and heat transfer. The data plots of both phenomena look surprisingly similar. “The mathematical tools she had developed for the study of traffic jams were a completely different way of looking into our problem,” Bucci says, “by using something which is intuitively not connected.”
One can view bubbles as cars. The more there are, the more they interfere with each other. People studying boiling had focused on the physics of individual bubbles. Zhang instead uses statistical physics to analyze collective patterns of behavior. “She brings a different set of skills, a different set of knowledge, to our research,” says Guanyu Su, a postdoc in the lab. “That’s very refreshing.”
In her first paper on the boiling crisis, published in Physical Review Letters, Zhang used theory and simulations to identify scale-free behavior in boiling: just as in traffic, the same patterns appear whether zoomed in or out, in terms of space or time. Both small and large bubbles matter. Using this insight, the team found certain physical parameters that could predict a boiling crisis. Zhang’s mathematical tools both explain experimental data and suggest new experiments to try. For a second paper, the team collected more data and found ways to predict the boiling crisis in a wider variety of conditions.
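The scale-free idea above can be illustrated generically with a power law, the textbook example of a pattern that looks the same whether you zoom in or out. (This is a toy sketch of scale invariance in general, not the actual model or parameters used in Zhang's paper.)

```python
# Toy illustration of scale-free (power-law) behavior, not the paper's model.
# A power law f(x) = c * x**(-alpha) has no characteristic scale: rescaling
# x by any factor k only multiplies f by a constant, so the shape of the
# curve is identical whether you "zoom in" or "zoom out."
def f(x, c=1.0, alpha=2.0):
    return c * x ** (-alpha)

k = 10.0  # zoom factor
for x in [1.0, 3.0, 7.0]:
    # f(k*x) / f(x) equals k**(-alpha) for every x -- the same pattern
    # appears at every scale, as in the traffic and boiling data plots.
    assert abs(f(k * x) / f(x) - k ** (-2.0)) < 1e-12
```

By contrast, a distribution with a characteristic scale (say, an exponential) would change shape under such rescaling, which is why scale-free behavior is a distinctive signature in data.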
Zhang’s thesis and third paper, both in progress, propose a universal law for explaining the crisis. “She translated the mechanism into a physical law, like F=ma or E=mc2,” Bucci says. “She came up with an equally simple equation.” Zhang says she’s learned a lot from colleagues in the department who are pioneering new nuclear reactors or other technologies, “but for my own work, I try to get down to the very basics of a phenomenon.”
Bucci describes Zhang as determined, open-minded, and commendably self-critical. Su says she’s careful, optimistic, and courageous. “If I imagine going from heat transfer to city planning, that would be almost impossible for me,” he says. “She has a strong mind.” Last year, Zhang gave birth to a boy, whom she’s raising on her own as she does her research. (Her husband is stuck in China during the pandemic.) “This, to me,” Bucci says, “is almost superhuman.”
Zhang will graduate at the end of the year, and has started looking for jobs back in China. She wants to continue in the energy field, though maybe not nuclear. “I will use my interdisciplinary knowledge,” she says. “I hope I can design safer and more efficient and more reliable systems to provide energy for our society.”
Most of the magnets we encounter daily are made of “ferromagnetic” materials. The north-south magnetic axes of most atoms in these materials are lined up in the same direction, so their collective force is strong enough to produce significant attraction. These materials form the basis for most of the data storage devices in today’s high-tech world.
Less common are magnets based on ferrimagnetic materials, with an “i.” In these, some of the atoms are aligned in one direction, but others are aligned in precisely the opposite way. As a result, the overall magnetic field they produce depends on the balance between the two types — if there are more atoms pointed one way than the other, that difference produces a net magnetic field in that direction.
In principle, because their magnetic properties are strongly influenced by external forces, ferrimagnetic materials should be able to produce data storage or logic circuits that are much faster and can pack more data into a given space than today’s conventional ferromagnets. But until now there has been no simple, fast, and reliable way of switching the orientation of these magnets, in order to flip from a 0 to a 1 in a data storage device.
Researchers at MIT and elsewhere have developed such a method, a way of rapidly switching the magnetic polarity of a ferrimagnet 180 degrees, using just a small applied voltage. The discovery could usher in a new era of ferrimagnetic logic and data storage devices, the researchers say.
The findings appear in the journal Nature Nanotechnology, in a paper by postdoc Mantao Huang, MIT professor of materials science and technology Geoffrey Beach, and professor of nuclear science and technology Bilge Yildiz, along with 15 others at MIT and in Minnesota, Germany, Spain, and Korea.
The new system uses a film of material called gadolinium cobalt, part of a class of materials known as rare earth transition metal ferrimagnets. In it, the two elements form interlocking lattices of atoms, and the gadolinium atoms preferentially have their magnetic axes aligned in one direction, while the cobalt atoms point the opposite way. The balance between the two in the composition of the alloy determines the material’s overall magnetization.
But the researchers found that by using a voltage to split water molecules along the film’s surface into oxygen and hydrogen, the oxygen can be vented away while the hydrogen atoms — or more precisely their nuclei, which are single protons — can penetrate deeply into the material, and this alters the balance of the magnetic orientations. The change is sufficient to switch the net magnetic field orientation by 180 degrees — exactly the kind of complete reversal that is needed for devices such as magnetic memories.
“We found that by loading hydrogen into this structure we can reduce the gadolinium’s magnetic moment by a lot,” Huang explains. Magnetic moment is a measure of the strength of the field produced by the atom’s spin axis alignment.
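Since the two sublattices point in opposite directions, the sign of the net magnetization is simply set by which sublattice's moment is larger, so weakening the gadolinium moment enough flips the overall polarity. (A toy sketch of that balance; the moment values below are made-up placeholders, not measurements from the paper.)

```python
# Toy model of ferrimagnetic net magnetization (illustrative values only;
# real GdCo sublattice moments depend on composition and temperature).
def net_magnetization(m_gd, m_co):
    """The Gd and Co sublattices point in opposite directions, so the net
    moment is their difference; its sign sets the overall polarity."""
    return m_gd - m_co

before = net_magnetization(m_gd=7.0, m_co=5.0)  # Gd dominates: net field "up"
after = net_magnetization(m_gd=4.0, m_co=5.0)   # hydrogen loading weakens Gd
assert before > 0 and after < 0  # the net polarity reverses 180 degrees
```

This is why a modest, reversible change to one sublattice's moment, here driven by voltage-controlled hydrogen loading, can produce a complete reversal of the net field rather than a partial one.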
Because the change is accomplished just by a change of voltage, rather than an applied electrical current that would cause heating and thus waste energy through heat dissipation, this process is highly energy efficient, says Beach, who is the co-director of MIT’s Materials Research Laboratory.
The process of pumping hydrogen nuclei into the material turns out to be remarkably benign, he says. “You would think that if you take some material and pump some other atoms or ions into that material, you would expand it and crack it. But it turns out for these films, and by virtue of the fact that the proton is such a small entity, it can infiltrate the bulk of this material without causing the kind of structural fatigue that leads to failure.”
That stability has been proved through grueling tests. The material was subjected to 10,000 polarity reversals with no signs of degradation, Huang says.
The material has additional properties that may find useful applications, Beach says. The magnetic alignment between the individual atoms in the material functions a bit like springs, he explains. If one atom starts to move out of alignment with the others, this spring-like force pulls it back. And when objects are connected by springs, they tend to generate waves that can travel along the material. “For this magnetic material, these are called spin waves. You get oscillations of magnetization in the material, and they can have very high frequencies.”
In fact, they can oscillate upward of the terahertz range, he says, “which makes them uniquely capable of generating or sensing very high-frequency electromagnetic radiation. Not a lot of materials can do that.”
Relatively simple applications of this phenomenon, in the form of sensors, could be possible within a few years, Beach says, but more complex ones such as data and logic circuits will take longer, partly because the whole field of ferrimagnet-based technology is relatively new.
The basic methodology, apart from these specific kinds of magnetic applications, could have other uses as well, he says. “This is a way to control properties inside the bulk of the material by using an electric field,” he explains. “That by itself is quite remarkable.” Other work has been done on controlling surface properties using applied voltages, but the fact that this hydrogen-pumping approach allows such deep alteration allows “control of a broad range of properties,” he says.
The team included researchers at the University of Minnesota; the ALBA Synchrotron Light Source in Barcelona, Spain; the Chemnitz University of Technology and Leibniz IFW in Germany; the Korea Institute of Science and Technology; and Yonsei University, in Seoul. The work was supported by the National Science Foundation; the Defense Advanced Research Projects Agency; the Center for Spintronic Materials for Advanced Information Technologies; the Korea Institute of Science and Technology; the German Science Foundation; the Ministry of Economy and Competitiveness of Spain; and the Kavanaugh Fellows Program in the Department of Materials Science and Engineering at MIT.
On his first day of classes at the Technical University of Athens’ School of Naval Architecture and Marine Engineering, Themistoklis Sapsis had a very satisfying realization.
“I realized that ships and other maritime structures are the only ones that operate at the interface of two different media: air and water,” says Sapsis. “This property alone creates so many challenges in terms of mathematical and computational modeling. And, of course, these media are not calm at all — they are random and often surprisingly unpredictable.”
In other words, Sapsis did not have to choose between his two great passions: huge, ocean-going ships and structures on the one hand, and mathematics on the other. Today, Sapsis, an associate professor of mechanical engineering at MIT, uses analytical and computational methods to try to predict behavior — such as that of ocean waves or instability inside a gas turbine — amid uncertain and occasionally extreme dynamics. His goal is to create designs for structures that are robust and safe even in a broad range of conditions. For example, he may study the loads acting on a ship during a storm, or the flow separation and lift reduction around a helicopter rotor blade during a difficult maneuver.
“These events are real — they often lead to big catastrophes and casualties,” Sapsis says. “My goal is to predict them and develop algorithms that can simulate them quickly. If we achieve this goal, then we could start talking about optimization and design of these systems with consideration of these extreme, rare, but possibly catastrophic events.”
Growing up in Athens, where great seafaring and mathematical traditions date back to ancient times, Sapsis lived in a house “full of machine elements, spare engines, and engineering blueprints,” the tools of his father’s trade as a superintendent engineer in the maritime industry.
His father traveled internationally to oversee major ship repairs, and Sapsis often went along.
“I think what made the biggest impression on me as a child was the size of these vessels and especially the engines. You had to climb five or six flights of stairs to see the whole thing,” he recalls.
Also in the Sapsis home were math and engineering books — “lots of them,” he says. His father insisted that he study math closely, at the same time that the young Sapsis was conducting physics experiments in the basement.
“This back-and-forth transition between dynamical systems — more generally mathematics — and naval architecture” was frequently on his mind, Sapsis says.
In college, Sapsis ended up taking every math class that was offered. He says he had the good fortune to get in touch early on with the most mathematically inclined professor in the School of Naval Architecture and Marine Engineering, who then mentored Sapsis for three years. In his spare time, Sapsis even attended classes in the university’s School of Applied Mathematics.
His undergraduate thesis was on probabilistic description of dynamical systems subjected to random excitations, a topic important to the understanding of the motions of large ships and loads. One of Sapsis’ most memorable research breakthroughs occurred while he was working on that thesis.
“I was given a nice problem by my thesis advisor,” Sapsis says. “He warned me that most likely I would not be able to get something new, as this was an old problem and many had tried in the past decades without success.”
Over the next six months, Sapsis went over every step of the methods that were in the academic literature, “again and again,” he says, trying to understand why various approaches failed. He started to discern a path toward deriving a new set of equations that could achieve his goal, but there were technical obstacles.
“Without a lot of hope, as I knew that this was an old problem, but with a lot of curiosity, I began working on the different steps,” Sapsis says. “After a few weeks of work, I realized that the steps were complete, and I had a new set of equations!”
“It was certainly one of my most enthusiastic moments,” Sapsis says, “when I heard my advisor saying, ‘Yes, this is new and it is important!’”
Since that early success, the engineering and architecture problems associated with building for the extreme and unpredictable ocean environment have provided Sapsis with plenty of research problems to solve.
“Naval architecture is one of the oldest professions, with many open problems remaining and many more new ones coming,” he says. “The theoretical tools should not be more complex than the problem itself. However, in this case there are some really challenging physical problems that require the development of fundamentally new mathematics and computational methods. I am always trying to begin with the fundamentals and build the right theoretical and computational tools to, hopefully, come closer to the modeling of certain complex phenomena.”
Sapsis, who joined the MIT faculty in 2013 and was tenured in 2019, says he loves the energy and pace of the Institute, where “there are so many things happening here that you can never feel you have achieved enough — but in a healthy way.”
“I always feel humbled by the amazing achievements of my colleagues and our students and postdocs,” he says. “It is a place filled with pure passion and talent, blended together for a good cause, to solve the world’s hardest problems.”
These days, Sapsis says it is his students who experience the pure excitement of finding solutions to problems in the field.
“My students and postdocs are now the ones who have the pleasure to be the first to find out when a new idea works,” Sapsis says. “I have to admit, however, that I save some problems for myself.”
In fact, Sapsis says he relaxes by “thinking about a nice problem: a high-risk and low-expectations one. I think of a strategy to go about it but know that most likely it will not work. This is something I don’t consider work.”
“There’s a stereotype of dictatorship where one person decides everything, but that’s not always how politics works in an authoritarian regime,” says Emilia Simison, a sixth-year doctoral student in political science. Since 2015, Simison has been able to access and study documents that chronicle the lawmaking machinery of some of the past century’s most notorious dictatorships. Her analysis of these voluminous materials suggests that autocracies do not routinely follow a single “strongman” model, and that some even make room for opposition groups and legislatures.
“I want to understand what makes autocracies different from democracies, by looking closely at the policymaking process inside dictatorships and determining if and how those policies change when a regime becomes democratic,” says Simison.
Mining previously inaccessible, declassified archives as well as vast public databases, Simison is creating a new and perhaps controversial picture of how autocratic regimes functioned — ranging from Francisco Franco’s Spain to more recent dictatorships in Brazil and Argentina.
“We expect policies to be different under democracies because we have elections where people vote for those who offer the political economy they want,” says Simison. But Simison finds that when autocracy gives way to democracy, policy change does not automatically follow.
“There are real-life implications to my research,” she says. “If we know what things are going to be different, and what are not, we can build realistic expectations in our democratic governments, activating the right mechanisms to make policies we need, such as those that reduce inequality and improve the provision of public goods.”
Turbulent times in Argentina
A native Argentine, Simison grew up amidst political turmoil and witnessed the devastating consequences of both military rule and unstable democracies. One of her earliest memories is from 2001, when she was in grade school: “The country was experiencing a huge economic crisis, and in Buenos Aires there was panic and looting because store shelves were empty, and police were clubbing demonstrators in the streets,” she recalls. As a teenager, she participated in protests herself.
Compelled by her country’s traumatic history and ongoing struggles, Simison determined to study political science at the University of Buenos Aires. After helping one of her professors write a history book, Simison realized she had a gift and passion for research, and set out on a master’s program in political science at Torcuato Di Tella University directly after completing her bachelor’s degree.
Then a life-changing opportunity arrived: “My advisor told me that an insane number of documents from one of Argentina’s dictatorships had recently been discovered and he invited me to help study them,” says Simison. “We were among the very first to gain access to these records.”
Simison and this professor, Alejandro Bonvecchi, combed through detailed records documenting el Proceso, the military junta that ruled Argentina between 1976 and 1983, a cruel government infamous for torturing and murdering citizens. This regime had established a legislative body drawn from the different military branches, “but the history of this congress had been erased,” says Simison. “In school we learned only that the junta made all the decisions.” Yet the archives proved otherwise.
“The records showed that this congress was relevant in shaping public policy during the dictatorship, offering amendments to bills supported by the junta, and delivering some legislative defeats,” says Simison. A paper describing this legislative power-sharing was published in Comparative Politics in 2017.
Historical data could illuminate the political machinery of regimes that were shrouded in secrecy, Simison realized, and there was much more to investigate — in Argentina and elsewhere. She decided to deepen her scholarship in historical analysis, and headed to MIT, where she believed training in rigorous, quantitative methods would enable her “to ask really interesting questions.”
Despots and legislatures
Simison quickly found rich topics to tap as she began her doctoral studies, starting with media and other accounts of the Brazilian dictatorship that ruled from 1964 to 1985. “The military created a bipartisan system, with pro-government and opposition parties,” she says. Her research, which applied machine learning to classify bills into policy topics, revealed that the pro-government party confined itself to proposing legislation on local issues, while the “opposition party introduced bills on out-of-bound national topics that generally didn’t pass.”
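The kind of bill classification described here can be sketched with a minimal naive Bayes text classifier that sorts short bill summaries into topics. This is only an illustration of the general technique, not Simison's actual pipeline: the training texts, labels, and topic names below are all invented.

```python
import math
from collections import Counter, defaultdict

# Toy training set: bill summaries labeled with a policy topic.
# All texts and labels are invented for illustration.
TRAIN = [
    ("funds for municipal road repair and local drainage", "local"),
    ("grant to the city library and neighborhood parks", "local"),
    ("reform of national banking regulation and currency policy", "national"),
    ("amendment to the national electoral law and federal courts", "national"),
]

def train_naive_bayes(examples, alpha=1.0):
    """Fit a multinomial naive Bayes model with Laplace smoothing."""
    word_counts = defaultdict(Counter)   # topic -> word frequencies
    topic_counts = Counter()             # topic -> number of bills
    for text, topic in examples:
        topic_counts[topic] += 1
        word_counts[topic].update(text.split())
    vocab = {w for counter in word_counts.values() for w in counter}
    return word_counts, topic_counts, vocab, alpha

def classify(model, text):
    """Return the most probable topic for a bill summary."""
    word_counts, topic_counts, vocab, alpha = model
    total_bills = sum(topic_counts.values())
    best_topic, best_score = None, float("-inf")
    for topic, n_bills in topic_counts.items():
        score = math.log(n_bills / total_bills)  # topic prior
        denom = sum(word_counts[topic].values()) + alpha * len(vocab)
        for word in text.split():
            score += math.log((word_counts[topic][word] + alpha) / denom)
        if score > best_score:
            best_topic, best_score = topic, score
    return best_topic

model = train_naive_bayes(TRAIN)
print(classify(model, "repair of neighborhood roads"))  # -> local
```

On a real corpus, the same idea scales to thousands of bills and many topics; the smoothing parameter keeps unseen words from zeroing out a topic's probability.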
Her dissertation emerged from a review of historical records of this Brazilian dictatorship, and those of the el Proceso Argentine autocracy. With guidance from thesis advisor Ben Ross Schneider, Ford International Professor of Political Science and a Latin America economics and politics expert, Simison is focusing on the evolution and impacts of policies produced during and after these two regimes.
One prominent policy area involves financing laws. Simison found that in both regimes, banking powers held the ears of junta leaders and legislators, influencing policies that stayed in place even after the regimes fell. There could sometimes be dissent: She found letters from Brazilian renters' associations complaining about inequities in the banking system, for instance. Yet financial regulations remained untouched. “People representing banks are always there, whether in democracies or dictatorships,” she says.
In contrast, policies touching health care, education, and housing generated enormous interest after regime change — particularly in Argentina — and were subject to significant shifts. “In democracies where there is space for people to mobilize, where elections are competitive, people will demand improvements on issues they deeply care about, such as high rents, and receive better policies from their elected officials.”
As she completes her dissertation, Simison is collaborating with scholars from Argentina and other Latin American countries to uncover and detail the policymaking mechanisms of additional authoritarian governments, the fate of legislation they create, and whether these policies advance or frustrate the social and economic interests of their citizens.
It is work, Simison believes, that will continue to yield essential insights for those concerned with strengthening democracies. Perhaps a better understanding of how authoritarian governments permit small openings for policymaking through legislation might shed light on “what can be done in authoritarian regimes to push for democratization,” she says. By the same token, it’s important to identify what is required in post-authoritarian democracies to achieve meaningful policy change. “At a time when some democracies are backsliding,” Simison says, “we must know what to expect from democracy, and to learn the mechanisms by which we can make those things happen that we want to happen.”
Using specialized nanoparticles embedded in plant leaves, MIT engineers have created a light-emitting plant that can be charged by an LED. After 10 seconds of charging, plants glow brightly for several minutes, and they can be recharged repeatedly.
These plants can produce light that is 10 times brighter than the first generation of glowing plants that the research group reported in 2017.
“We wanted to create a light-emitting plant with particles that will absorb light, store some of it, and emit it gradually,” says Michael Strano, the Carbon P. Dubbs Professor of Chemical Engineering at MIT and the senior author of the new study. “This is a big step toward plant-based lighting.”
“Creating ambient light with the renewable chemical energy of living plants is a bold idea,” says Sheila Kennedy, a professor of architecture at MIT and an author of the paper who has worked with Strano’s group on plant-based lighting. “It represents a fundamental shift in how we think about living plants and electrical energy for lighting.”
The particles can also boost the light production of any other type of light-emitting plant, including those Strano’s lab originally developed. Those plants use nanoparticles containing the enzyme luciferase, which is found in fireflies, to produce light. The ability to mix and match functional nanoparticles inserted into a living plant to produce new functional properties is an example of the emerging field of “plant nanobionics.”
Pavlo Gordiichuk, a former MIT postdoc, is the lead author of the new paper, which appears in Science Advances.
Strano’s lab has been working for several years in the new field of plant nanobionics, which aims to give plants novel features by embedding them with different types of nanoparticles. Their first generation of light-emitting plants contained nanoparticles that carry luciferase and luciferin, which work together to give fireflies their glow. Using these particles, the researchers generated watercress plants that could emit dim light, about one-thousandth the amount needed to read by, for a few hours.
In the new study, Strano and his colleagues wanted to create components that could extend the duration of the light and make it brighter. They came up with the idea of using a capacitor, which is a part of an electrical circuit that can store electricity and release it when needed. In the case of glowing plants, a light capacitor can be used to store light in the form of photons, then gradually release it over time.
To create their “light capacitor,” the researchers decided to use a type of material known as a phosphor. These materials can absorb either visible or ultraviolet light and then slowly release it as a phosphorescent glow. The researchers used a compound called strontium aluminate, which can be formed into nanoparticles, as their phosphor. Before embedding them in plants, the researchers coated the particles in silica, which protects the plant from damage.
The particles, which are several hundred nanometers in diameter, can be infused into the plants through the stomata — small pores located on the surfaces of leaves. The particles accumulate in a spongy layer called the mesophyll, where they form a thin film. A major conclusion of the new study is that the mesophyll of a living plant can be made to display these photonic particles without hurting the plant or sacrificing lighting properties, the researchers say.
This film can absorb photons either from sunlight or an LED. The researchers showed that after 10 seconds of blue LED exposure, their plants could emit light for about an hour. The light was brightest for the first five minutes and then gradually diminished. The plants can be continually recharged for at least two weeks, as the team demonstrated during an experimental exhibition at the Smithsonian Institute of Design in 2019.
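As a rough illustration of how a stored glow fades, here is a toy afterglow model. Persistent phosphors such as strontium aluminate are often described with power-law decay; the time constant and exponent below are assumptions chosen only to echo the qualitative pattern reported (brightest at first, dimming steeply over minutes, faintly visible an hour later), not measured values from the study.

```python
# Toy decay model for the phosphor "light capacitor."
# T_SCALE and the exponent are illustrative assumptions, not measurements.
T_SCALE = 60.0  # seconds; sets how quickly the early glow fades (assumed)

def brightness(t_seconds, initial=1.0, exponent=1.0):
    """Relative glow intensity t seconds after charging stops,
    using a power-law afterglow curve."""
    return initial / (1.0 + t_seconds / T_SCALE) ** exponent

# Bright at first, steep drop in the first minutes,
# still faintly glowing an hour later.
for minutes in (0, 5, 15, 60):
    print(f"t = {minutes:2d} min: relative brightness {brightness(60 * minutes):.3f}")
```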
“We need to have an intense light, delivered as one pulse for a few seconds, and that can charge it,” Gordiichuk says. “We also showed that we can use big lenses, such as a Fresnel lens, to transfer our amplified light a distance more than one meter. This is a good step toward creating lighting at a scale that people could use.”
“The Plant Properties exhibition at the Smithsonian demonstrated a future vision where lighting infrastructure from living plants is an integral part of the spaces where people work and live,” Kennedy says. “If living plants could be the starting point of advanced technology, plants might replace our current unsustainable urban electrical lighting grid for the mutual benefit of all plant-dependent species — including people.”
The MIT researchers found that the “light capacitor” approach can work in many different plant species, including basil, watercress, and tobacco. They also showed that they could illuminate the leaves of a plant called the Thailand elephant ear, which can be more than a foot wide — a size that could make the plants useful as an outdoor lighting source.
The researchers also investigated whether the nanoparticles interfere with normal plant function. They found that over a 10-day period, the plants were able to photosynthesize normally and to evaporate water through their stomata. Once the experiments were over, the researchers were able to extract about 60 percent of the phosphors from plants and reuse them in another plant.
Researchers in Strano’s lab are now working on combining the phosphor light capacitor particles with the luciferase nanoparticles used in their 2017 study, in hopes that combining the two technologies will yield plants that glow even brighter, for longer periods of time.
The research was funded by Thailand Magnolia Quality Development Corp., a Professor Amar G. Bose Research Grant, MIT’s Advanced Undergraduate Research Opportunities Program, the Singapore Agency of Science, Research, and Technology, a Samsung scholarship, and a German Research Foundation research fellowship.
Any houseplant owner knows that changes in the amount of water or sunlight a plant receives can put it under immense stress. A dying plant brings certain disappointment to anyone with a green thumb.
But for farmers who make their living by successfully growing plants, and whose crops may nourish hundreds or thousands of people, the devastation of failing flora is that much greater. As climate change is poised to cause increasingly unpredictable weather patterns globally, crops may be subject to more extreme environmental conditions like droughts, fluctuating temperatures, floods, and wildfire.
Climate scientists and food systems researchers worry about the stress climate change may put on crops, and on global food security. In an ambitious interdisciplinary project funded by the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS), David Des Marais, the Gale Assistant Professor in the Department of Civil and Environmental Engineering at MIT, and Caroline Uhler, an associate professor in the MIT Department of Electrical Engineering and Computer Science and the Institute for Data, Systems, and Society, are investigating how plant genes communicate with one another under stress. Their research results can be used to breed plants more resilient to climate change.
Crops in trouble
Governing plants’ responses to environmental stress are gene regulatory networks, or GRNs, which guide the development and behaviors of living things. A GRN may comprise thousands of genes and proteins that all communicate with one another. GRNs help a particular cell, tissue, or organism respond to environmental changes by signaling certain genes to turn their expression on or off.
Even seemingly minor or short-term changes in weather patterns can have large effects on crop yield and food security. An environmental trigger, like a lack of water during a crucial phase of plant development, can turn a gene on or off, and is likely to affect many others in the GRN. For example, without water, a gene enabling photosynthesis may switch off. This can create a domino effect, where the genes that rely on those regulating photosynthesis are silenced, and the cycle continues. As a result, when photosynthesis is halted, the plant may experience other detrimental side effects, like no longer being able to reproduce or defend against pathogens. The chain reaction could even kill a plant before it has the chance to be revived by a big rain.
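The domino effect described above can be sketched as a toy dependency graph: silencing one gene propagates to every gene that depends on it, directly or indirectly. The gene names and wiring below are invented for illustration; real GRNs involve thousands of genes and far richer regulatory logic.

```python
# A toy gene regulatory network as a directed dependency map:
# each gene lists the genes its expression requires.
# Names and structure are illustrative only.
DEPENDS_ON = {
    "photosynthesis": ["water_sensing"],
    "sugar_transport": ["photosynthesis"],
    "flowering": ["sugar_transport"],
    "pathogen_defense": ["sugar_transport"],
}

def silenced_genes(network, trigger):
    """Return every gene silenced, directly or indirectly, once
    `trigger` switches off -- the domino effect described above."""
    off = {trigger}
    changed = True
    while changed:  # keep propagating until no new gene switches off
        changed = False
        for gene, requirements in network.items():
            if gene not in off and any(r in off for r in requirements):
                off.add(gene)
                changed = True
    return off

# Drought: the water-sensing gene switches off, and the cascade follows.
print(sorted(silenced_genes(DEPENDS_ON, "water_sensing")))
```

In this sketch, shutting off a single upstream gene silences reproduction and defense genes several steps downstream, mirroring the chain reaction the article describes.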
Des Marais says he wishes there was a way to stop those genes from completely shutting off in such a situation. To do that, scientists would need to better understand how exactly gene networks respond to different environmental triggers. Bringing light to this molecular process is exactly what he aims to do in this collaborative research effort.
Solving complex problems across disciplines
Despite their crucial importance, GRNs are difficult to study because of how complex and interconnected they are. Usually, to understand how a particular gene is affecting others, biologists must silence one gene and see how the others in the network respond.
For years, scientists have aspired to an algorithm that could synthesize the massive amount of information contained in GRNs to “identify correct regulatory relationships among genes,” according to a 2019 article in the Encyclopedia of Bioinformatics and Computational Biology.
“A GRN can be seen as a large causal network, and understanding the effects that silencing one gene has on all other genes requires understanding the causal relationships among the genes,” says Uhler. “These are exactly the kinds of algorithms my group develops.”
Des Marais and Uhler’s project aims to unravel these complex communication networks and discover how to breed crops that are more resilient to the increased droughts, flooding, and erratic weather patterns that climate change is already causing globally.
Climate change is not the only pressure on the food system: by 2050, the world is projected to demand 70 percent more food to feed a booming population. “Food systems challenges cannot be addressed individually in disciplinary or topic area silos,” says Greg Sixt, J-WAFS’ research manager for climate and food systems. “They must be addressed in a systems context that reflects the interconnected nature of the food system.”
Des Marais’ background is in biology, and Uhler’s in statistics. “Dave's project with Caroline was essentially experimental,” says Renee J. Robins, J-WAFS’ executive director. “This kind of exploratory research is exactly what the J-WAFS seed grant program is for.”
Getting inside gene regulatory networks
Des Marais and Uhler’s work begins in a windowless basement on MIT’s campus, where 300 genetically identical Brachypodium distachyon plants grow in large, temperature-controlled chambers. The plant, which contains more than 30,000 genes, is a good model for studying important cereal crops like wheat, barley, maize, and millet. For three weeks, all plants receive the same temperature, humidity, light, and water. Then, half are slowly tapered off water, simulating drought-like conditions.
Six days into the forced drought, the plants are clearly suffering. Des Marais' PhD student Jie Yun takes tissues from 50 hydrated and 50 dry plants, freezes them in liquid nitrogen to immediately halt metabolic activity, grinds them up into a fine powder, and chemically separates the genetic material. The genes from all 100 samples are then sequenced at a lab across the street.
The team is left with a spreadsheet listing the 30,000 genes found in each of the 100 plants at the moment they were frozen, and how many copies there were. Uhler’s PhD student Anastasiya Belyaeva inputs the massive spreadsheet into the computer program she developed and runs her novel algorithm. Within a few hours, the group can see which genes were most active in one condition over another, how the genes were communicating, and which were causing changes in others.
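The first step of such a comparison can be sketched, in very reduced form, as a log fold-change ranking between watered and drought samples. The gene names and counts below are illustrative stand-ins, and this is deliberately simpler than Belyaeva's algorithm, which infers causal relationships rather than just differential expression.

```python
import math
from statistics import mean

# Toy expression table: gene -> (counts in watered plants, counts in drought plants).
# Real data would span ~30,000 genes across 100 samples; these values are invented.
COUNTS = {
    "drought_response_1": ([5, 6, 4], [40, 38, 45]),
    "photosynthesis_a": ([90, 85, 95], [20, 22, 18]),
    "housekeeping_x": ([50, 48, 52], [49, 51, 50]),
}

def log2_fold_changes(counts, pseudocount=1.0):
    """Rank genes by |log2 fold change| of drought vs. watered expression.
    The pseudocount avoids division by zero for unexpressed genes."""
    changes = {}
    for gene, (watered, drought) in counts.items():
        ratio = (mean(drought) + pseudocount) / (mean(watered) + pseudocount)
        changes[gene] = math.log2(ratio)
    return sorted(changes.items(), key=lambda kv: abs(kv[1]), reverse=True)

for gene, lfc in log2_fold_changes(COUNTS):
    print(f"{gene}: {lfc:+.2f}")
```

Genes that shift most under drought sort to the top, while stable “housekeeping” genes fall to the bottom; causal-network methods then ask which of the shifted genes are driving the others.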
The methodology captures important subtleties that could allow researchers to eventually alter gene pathways and breed more resilient crops. “When you expose a plant to drought stress, it's not like there's some canonical response,” Des Marais says. “There's lots of things going on. It's turning this physiologic process up, this one down, this one didn't exist before, and now suddenly is turned on.”
In addition to Des Marais and Uhler’s research, J-WAFS has funded projects in food and water from researchers in 29 departments across all five MIT schools as well as the MIT Schwarzman College of Computing. J-WAFS seed grants typically fund seven to eight new projects every year.
“The grants are really aimed at catalyzing new ideas, providing the sort of support [for MIT researchers] to be pushing boundaries, and also bringing in faculty who may have some interesting ideas that they haven’t yet applied to water or food concerns,” Robins says. “It’s an avenue for researchers all over the Institute to apply their ideas to water and food.”
Alison Gold is a student in MIT’s Graduate Program in Science Writing.
In May, responding to the world’s accelerating climate crisis, MIT issued an ambitious new plan, “Fast Forward: MIT’s Climate Action Plan for the Decade.” The plan outlines a broad array of new and expanded initiatives across campus to build on the Institute’s longstanding climate work.
Now, to unite these varied climate efforts, maximize their impact, and identify new ways for MIT to contribute climate solutions, the Institute has appointed more than a dozen faculty members to a new committee established by the Fast Forward plan, named the Climate Nucleus.
The committee includes leaders of a number of climate- and energy-focused departments, labs, and centers that have significant responsibilities under the plan. Its membership spans all five schools and the MIT Schwarzman College of Computing. Professors Noelle Selin and Anne White have agreed to co-chair the Climate Nucleus for a term of three years.
“I am thrilled and grateful that Noelle and Anne have agreed to step up to this important task,” says Maria T. Zuber, MIT’s vice president for research. “Under their leadership, I’m confident that the Climate Nucleus will bring new ideas and new energy to making the strategy laid out in the climate action plan a reality.”
The Climate Nucleus has broad responsibility for the management and implementation of the Fast Forward plan across its five areas of action: sparking innovation, educating future generations, informing and leveraging government action, reducing MIT’s own climate impact, and uniting and coordinating all of MIT’s climate efforts.
Over the next few years, the nucleus will aim to advance MIT’s contribution to a two-track approach to decarbonizing the global economy, an approach described in the Fast Forward plan. First, humanity must go as far and as fast as it can to reduce greenhouse gas emissions using existing tools and methods. Second, societies need to invest in, invent, and deploy new tools — and promote new institutions and policies — to get the global economy to net-zero emissions by mid-century.
The co-chairs of the nucleus bring significant climate and energy expertise, along with deep knowledge of the MIT community, to their task.
Selin is a professor with joint appointments in the Institute for Data, Systems, and Society and the Department of Earth, Atmospheric and Planetary Sciences. She is also the director of the Technology and Policy Program. She began at MIT in 2007 as a postdoc with the Center for Global Change Science and the Joint Program on the Science and Policy of Global Change. Her research uses modeling to inform decision-making on air pollution, climate change, and hazardous substances.
“Climate change affects everything we do at MIT. For the new climate action plan to be effective, the Climate Nucleus will need to engage the entire MIT community and beyond, including policymakers as well as people and communities most affected by climate change,” says Selin. “I look forward to helping to guide this effort.”
White is the School of Engineering’s Distinguished Professor of Engineering and the head of the Department of Nuclear Science and Engineering. She joined the MIT faculty in 2009 and has also served as the associate director of MIT’s Plasma Science and Fusion Center. Her research focuses on assessing and refining the mathematical models used in the design of fusion energy devices, such as tokamaks, which hold promise for delivering limitless zero-carbon energy.
“The latest IPCC report underscores the fact that we have no time to lose in decarbonizing the global economy quickly. This is a problem that demands we use every tool in our toolbox — and develop new ones — and we’re committed to doing that,” says White, referring to an August 2021 report from the Intergovernmental Panel on Climate Change, a UN climate science body, that found that climate change has already affected every region on Earth and is intensifying. “We must train future technical and policy leaders, expand opportunities for students to work on climate problems, and weave sustainability into every one of MIT’s activities. I am honored to be a part of helping foster this Institute-wide collaboration.”
A first order of business for the Climate Nucleus will be standing up three working groups to address specific aspects of climate action at MIT: climate education, climate policy, and MIT’s own carbon footprint. The working groups will be responsible for making progress on their particular areas of focus under the plan and will make recommendations to the nucleus on ways of increasing MIT’s effectiveness and impact. The working groups will also include student, staff, and alumni members, so that the entire MIT community has the opportunity to contribute to the plan’s implementation.
The nucleus, in turn, will report and make regular recommendations to the Climate Steering Committee, a senior-level team consisting of Zuber; Richard Lester, the associate provost for international activities; Glen Shor, the executive vice president and treasurer; and the deans of the five schools and the MIT Schwarzman College of Computing. The new plan created the Climate Steering Committee to ensure that climate efforts will receive both the high-level attention and the resources needed to succeed.
Together the new committees and working groups are meant to form a robust new infrastructure for uniting and coordinating MIT’s climate action efforts in order to maximize their impact. They replace the Climate Action Advisory Committee, which was created in 2016 following the release of MIT’s first climate action plan.
In addition to Selin and White, the members of the Climate Nucleus are:
- Bob Armstrong, professor in the Department of Chemical Engineering and director of the MIT Energy Initiative;
- Dara Entekhabi, professor in the departments of Civil and Environmental Engineering and Earth, Atmospheric and Planetary Sciences;
- John Fernández, professor in the Department of Architecture and director of the Environmental Solutions Initiative;
- Stefan Helmreich, professor in the Department of Anthropology;
- Christopher Knittel, professor in the MIT Sloan School of Management and director of the Center for Energy and Environmental Policy Research;
- John Lienhard, professor in the Department of Mechanical Engineering and director of the Abdul Latif Jameel Water and Food Systems Lab;
- Julie Newman, director of the Office of Sustainability and lecturer in the Department of Urban Studies and Planning;
- Elsa Olivetti, professor in the Department of Materials Science and Engineering and co-director of the Climate and Sustainability Consortium;
- Christoph Reinhart, professor in the Department of Architecture and director of the Building Technology Program;
- John Sterman, professor in the MIT Sloan School of Management and director of the Sloan Sustainability Initiative;
- Rob van der Hilst, professor and head of the Department of Earth, Atmospheric and Planetary Sciences; and
- Chris Zegras, professor and head of the Department of Urban Studies and Planning.
Encountering concrete is a common, even routine, occurrence. And that’s exactly what makes concrete exceptional.
As the most consumed material after water, concrete is indispensable to the many essential systems — from roads to buildings — in which it is used.
But because of that extensive use, concrete production also accounts for around 1 percent of emissions in the United States and remains one of several carbon-intensive industries globally. Tackling climate change, then, will mean reducing the environmental impacts of concrete, even as its use continues to increase.
In a new paper in the Proceedings of the National Academy of Sciences, a team of current and former researchers at the MIT Concrete Sustainability Hub (CSHub) outlines how this can be achieved.
They present an extensive life-cycle assessment of the building and pavements sectors that estimates how greenhouse gas (GHG) reduction strategies — including those for concrete and cement — could minimize the cumulative emissions of each sector and how those reductions would compare to national GHG reduction targets.
The team found that, if reduction strategies were implemented, the emissions for pavements and buildings between 2016 and 2050 could fall by up to 65 percent and 57 percent, respectively, even if concrete use accelerated greatly over that period. These are close to U.S. reduction targets set as part of the Paris Climate Accords. The solutions considered would also enable concrete production for both sectors to attain carbon neutrality by 2050.
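The bookkeeping behind a cumulative estimate like this can be sketched with toy numbers: sum each year's emissions from 2016 to 2050 under different reduction trajectories and compare the totals. The baseline value and the 5-percent-per-year cut below are invented; only the arithmetic mirrors the kind of tally the study performs.

```python
# Toy cumulative-emissions comparison over the study's 2016-2050 window.
# Annual values and the reduction rate are invented for illustration.
YEARS = range(2016, 2051)

def cumulative(annual_2016, yearly_cut):
    """Total emissions over 2016-2050 if the annual figure shrinks
    by `yearly_cut` (a fraction) each year."""
    total, annual = 0.0, annual_2016
    for _ in YEARS:
        total += annual
        annual *= 1 - yearly_cut
    return total

baseline = cumulative(100.0, 0.0)    # no reduction strategies
ambitious = cumulative(100.0, 0.05)  # emissions fall 5% per year
print(f"cumulative reduction: {1 - ambitious / baseline:.0%}")
```

Even a modest constant annual cut compounds into a large cumulative reduction over 35 years, which is why the timing of interventions matters as much as their depth.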
Despite continued grid decarbonization and increases in fuel efficiency, they found that the vast majority of the GHG emissions from new buildings and pavements during this period would derive from operational energy consumption rather than so-called embodied emissions — emissions from materials production and construction.
Sources and solutions
The consumption of concrete, due to its versatility, durability, constructability, and role in economic development, has been projected to increase around the world.
While it is essential to consider the embodied impacts of ongoing concrete production, it is equally essential to place these initial impacts in the context of the material’s life cycle.
Due to concrete’s unique attributes, it can influence the long-term sustainability performance of the systems in which it is used. Concrete pavements, for instance, can reduce vehicle fuel consumption, while concrete structures can endure hazards without needing energy- and materials-intensive repairs.
Concrete’s impacts, then, are as complex as the material itself — a carefully proportioned mixture of cement powder, water, sand, and aggregates. Untangling concrete’s contribution to the operational and embodied impacts of buildings and pavements is essential for planning GHG reductions in both sectors.
Set of scenarios
In their paper, CSHub researchers forecast the potential greenhouse gas emissions from the building and pavements sectors as numerous emissions reduction strategies were introduced between 2016 and 2050.
Since both of these sectors are immense and rapidly evolving, modeling them required an intricate framework.
“We don’t have details on every building and pavement in the United States,” explains Randolph Kirchain, a research scientist at the Materials Research Laboratory and co-director of CSHub.
“As such, we began by developing reference designs, which are intended to be representative of current and future buildings and pavements. These were adapted to be appropriate for 14 different climate zones in the United States and then distributed across the U.S. based on data from the U.S. Census and the Federal Highway Administration.”
To reflect the complexity of these systems, their models had to have the highest resolution possible.
“In the pavements sector, we collected the current stock of the U.S. network based on high-precision 10-mile segments, along with the surface conditions, traffic, thickness, lane width, and number of lanes for each segment,” says Hessam AzariJafari, a postdoc at CSHub and a co-author on the paper.
“To model future paving actions over the analysis period, we assumed four climate conditions; four road types; asphalt, concrete, and composite pavement structures; as well as major, minor, and reconstruction paving actions specified for each climate condition.”
Using this framework, they analyzed a “projected” and an “ambitious” scenario of reduction strategies and system attributes for buildings and pavements over the 34-year analysis period. The scenarios were defined by the timing and intensity of GHG reduction strategies.
As its name might suggest, the projected scenario reflected current trends. For the building sector, solutions encompassed expected grid decarbonization and improvements to building codes and energy efficiency that are currently being implemented across the country. For pavements, the sole projected solution was improvements to vehicle fuel economy. That’s because as vehicle efficiency continues to increase, excess vehicle emissions due to poor road quality will also decrease.
Both the projected scenarios for buildings and pavements featured the gradual introduction of low-carbon concrete strategies, such as recycled content, carbon capture in cement production, and the use of captured carbon to produce aggregates and cure concrete.
“In the ambitious scenario,” explains Kirchain, “we went beyond projected trends and explored reasonable changes that exceed current policies and [industry] commitments.”
Here, the building sector strategies were the same, but implemented more aggressively. The pavements sector also abided by more aggressive targets and incorporated several novel strategies, including investing more to yield smoother roads, selectively applying concrete overlays to produce stiffer pavements, and introducing more reflective pavements — which can change the Earth’s energy balance by sending more energy out of the atmosphere.
As the grid becomes greener and new homes and buildings become more efficient, many experts have predicted that the operational impacts of new construction projects will shrink in comparison to their embodied emissions.
“What our life-cycle assessment found,” says Jeremy Gregory, the executive director of the MIT Climate Consortium and the lead author on the paper, “is that [this prediction] isn’t necessarily the case.”
“Instead, we found that more than 80 percent of the total emissions from new buildings and pavements between 2016 and 2050 would derive from their operation.”
In fact, the study found that operations will create the majority of emissions through 2050 unless all energy sources — electrical and thermal — are carbon-neutral by 2040. This suggests that ambitious interventions to the electricity grid and other sources of operational emissions can have the greatest impact.
Their predictions for emissions reductions generated additional insights.
For the building sector, they found that the projected scenario would lead to a reduction of 49 percent compared to 2016 levels, and that the ambitious scenario provided a 57 percent reduction.
As most buildings during the analysis period were existing rather than new, energy consumption dominated emissions in both scenarios. Consequently, decarbonizing the electricity grid and improving the efficiency of appliances and lighting led to the greatest improvements for buildings, they found.
In contrast to the building sector, the pavements scenarios had a sizeable gulf between outcomes: the projected scenario led to only a 14 percent reduction while the ambitious scenario had a 65 percent reduction — enough to meet U.S. Paris Accord targets for that sector. This gulf derives from the lack of GHG reduction strategies being pursued under current projections.
“The gap between the pavement scenarios shows that we need to be more proactive in managing the GHG impacts from pavements,” explains Kirchain. “There is tremendous potential, but seeing those gains requires action now.”
These gains from both ambitious scenarios could occur even as concrete use tripled over the analysis period in comparison to the projected scenarios — a reflection of not only concrete’s growing demand but its potential role in decarbonizing both sectors.
Though only one of their reduction scenarios (the ambitious pavement scenario) met the Paris Accord targets, that doesn’t preclude the achievement of those targets: many other opportunities exist.
“In this study, we focused on mainly embodied reductions for concrete,” explains Gregory. “But other construction materials could receive similar treatment.
“Further reductions could also come from retrofitting existing buildings and by designing structures with durability, hazard resilience, and adaptability in mind in order to minimize the need for reconstruction.”
This study answers a paradox in the field of sustainability. For the world to become more equitable, more development is necessary. And yet, that very same development may portend greater emissions.
The MIT team found that isn’t necessarily the case. Even as America continues to use more concrete, the benefits of the material itself and the interventions made to it can make climate targets more achievable.
The MIT Concrete Sustainability Hub is a team of researchers from several departments across MIT working on concrete and infrastructure science, engineering, and economics. Its research is supported by the Portland Cement Association and the Ready Mixed Concrete Research and Education Foundation.
Sachin Bhagchandani, a graduate student in the Department of Chemical Engineering currently working at the Koch Institute for Integrative Cancer Research, has won the National Cancer Institute Predoctoral to Postdoctoral Fellow Transition (F99/K00) Award. Bhagchandani is the first student at MIT to receive the award.
The fellowship is given to outstanding graduate students with high potential and interest in becoming independent cancer researchers. Bhagchandani is one of 24 candidates selected for the fellowship this year. Nominations were limited to one student per institution. The award provides six years of funding, which will support Bhagchandani as he completes his PhD in chemical engineering and help him transition into a mentored, cancer-focused postdoctoral research position — one that draws on his wide-ranging interests and newfound experiences in synthetic chemistry and immunology.
Bhagchandani’s research has evolved since his undergraduate days studying chemical engineering at the Indian Institute of Technology, Roorkee. He describes the experience as rigorous, but constraining. While at MIT, he has found more opportunities to explore, leading to highly interdisciplinary projects that allow him to put his training in chemical engineering in service of human health.
Before Bhagchandani arrived at his doctoral project, many pieces had to fall into place. While completing his master’s thesis, Bhagchandani discovered his interest in the biomedical space while working on a project advised by MIT Institute Professor Robert Langer and Harvard Medical School Professor Jeffrey Karp developing different biomaterials for the sustained delivery of drugs for treating arthritis. As a PhD candidate, he joined the laboratory of chemistry Professor Jeremiah Johnson to learn macromolecular synthesis with a focus on nanomaterials designed for drug delivery. The final piece would fall into place with Bhagchandani’s early forays into immunology — with Darrell Irvine, the Underwood-Prescott Professor of Biological Engineering and Materials Science and Engineering at MIT, and Stefani Spranger, the Howard S. (1953) and Linda B. Stern Career Development Professor and assistant professor of biology at MIT.
“When I was exposed to immunology, I learned how relevant the immune system is to our daily life. I found that the biomedical challenges I was working on could be encapsulated by immunology,” Bhagchandani explains. “Drug delivery was my way in, but immunology is my path forward, where I think I will be able to make a contribution to improving human health.”
As a result, his interests have shifted toward cancer immunotherapy — aiming to make these treatments more viable for more patients by making them less toxic. Supported, in part, by the Koch Institute Frontier Research Program, which provides seed funding for high-risk, high-reward early-stage research, Bhagchandani is focusing on imidazoquinolines, a promising class of drugs that activates the immune system to fight cancer, but can also trigger significant side effects when administered intravenously. In the clinic, topical administration has been shown to minimize these side effects in certain localized cancers, but additional challenges remain for metastatic cancers that have spread throughout the body.
In order to administer imidazoquinolines systemically with minimal toxicity to treat both primary and metastatic tumors, Bhagchandani is adapting a bottlebrush-shaped molecule developed in the Johnson lab to inactivate imidazoquinolines and carry them safely to tumors. Bhagchandani is fine-tuning linking molecules that release as little of the drug as possible while circulating in the blood, and then slowly release the drug once inside the tumor. He is also optimizing the size and architecture of the bottlebrush molecule so that it accumulates in the desired immune cells present in the tumor tissue.
“A lot of students work on interdisciplinary projects as part of a larger team, but Sachin is a one-man crew, able to synthesize new polymers using cutting-edge chemistry, characterize these materials, and then test them in animal models of cancer and evaluate their effects on the immune system,” said Irvine. “His knowledge spans polymer chemistry to cancer modeling to immunology.”
Prior to enrolling at MIT, Bhagchandani already had a personal connection to cancer, both through his grandfather, who passed away from prostate cancer, and through working at a children’s hospital in his hometown of Mumbai, spending time with children with cancer. Once on campus, he discovered that working in the biomedical space would allow him to put his skills as a chemical engineer in service of addressing unmet medical needs. In addition, he found that the interdisciplinary nature of the work offered a variety of perspectives on which to build his career.
His doctoral project sits at the nexus of polymer chemistry, drug delivery, and immunology, and requires the collaboration of several laboratories, all members of the Koch Institute for Integrative Cancer Research at MIT. In addition to the Johnson lab, Bhagchandani is working with the Irvine lab for its expertise in immune engineering and the Langer lab for its expertise in drug delivery, and collaborating with the Spranger lab for its expertise in cancer immunology.
“For me, working at the Koch Institute has been one of the most formative experiences of my life, because I've gone from traditional chemical engineering training to being exposed to experts in all these different fields with many different perspectives,” said Bhagchandani. When working from the perspective of chemical engineering alone, Bhagchandani said he could not always find solutions to problems that arose.
“I was making the materials and testing them in mouse models, but I couldn’t understand why my experiments weren’t working,” he says. “But by having scientists and engineers who understand immunology, immune engineering, and drug delivery together in the same room, looking at the problem from different angles, that’s when you get that ‘a-ha’ moment, when a project actually works.”
“It is wonderful having brilliant, interdisciplinary scientists like Sachin in my group,” said Johnson. “He was the first student from the Chemical Engineering department to join my group in the Department of Chemistry for their PhD studies, and his ability to bring new perspectives to our work has been highly impactful. Now, led by Sachin, and through our collaborations with Darrell Irvine, Bob Langer, Stefani Spranger, and many others in the Koch Institute, we are able to translate our chemistry in ways we couldn't have imagined before.”
In his postdoctoral training, Bhagchandani plans to dive deeper into the regulation of the immune system, particularly how different dosing regimens change the body’s response to immunotherapies. Ultimately, he hopes to continue his work as a faculty member leading his own immunology lab — one that focuses on understanding and harnessing early immune responses in cancer therapies.
“I would love to get to a point where I can recreate a lab environment for chemists, engineers, and immunologists to come together and interact and work on interdisciplinary problems. For cancer especially, you need to attack the problem on all different fronts.”
As well as advancing his work in the biomedical space, Bhagchandani hopes to serve as a mentor for future students figuring out their own paths.
“I feel like a lot of people at MIT, myself included, face challenges throughout their PhD where they start to lose belief: 'Am I the right person, am I good enough for this?' Having overcome a lot of challenging times when the project wasn't working as we hoped it would, I think it is important to share these experiences with young trainees to empower them to pursue careers in research. Winning this award helps me look back at those challenges, and persevere, and know, yes, I’m still on the right path. Because I genuinely felt that this is what I want to do with my life and I've always felt really passionate coming in to work, that this is where I belong.”
After nearly 20 years, the U.S. has withdrawn its troops from Afghanistan, and the Taliban has regained control over the country. In light of those developments, a panel of foreign-policy experts on Tuesday addressed two separate but related questions: Why did the U.S. military action in Afghanistan fall short, and what comes next for the strife-ridden country?
The event occurred as observers are still digesting the rapid collapse of the U.S.-backed national government in Afghanistan, which could not maintain power as the U.S. undertook its military withdrawal.
“Even I didn’t think they would go down in 10 days,” said Vanda Felbab-Brown PhD ’07, a senior fellow at the Brookings Institution’s Center for Security, Strategy, and Technology.
The virtual event, “U.S., Afghanistan, 9/11: Finished or Unfinished Business?” was the latest in the Starr Forum series held by MIT’s Center for International Studies, which examines key foreign-policy and international issues. Barry Posen, the Ford International Professor of Political Science at MIT, moderated the event.
As to why the U.S. could not help build a more solid state in Afghanistan given 20 years, the panelists offered multiple answers.
Juan Cole, a professor of history at the University of Michigan who specializes in the Middle East, suggested that large-scale military ambitions in Afghanistan constituted a case of strategic overreach. The Taliban controlled much of Afghanistan from 1996 to 2001, providing a haven for the Al Qaeda terrorist group that carried out the Sept. 11, 2001 attacks on the U.S. But any military activities beyond those aimed at dismantling Al Qaeda, he stated, were likely to be quixotic.
“The initial U.S. attack on Afghanistan could be justified,” Cole said. “Al Qaeda had training camps there which were used to plot out 9/11, and so destroying those camps, making sure they couldn’t continue to operate, was a legitimate military mission.”
However, Cole proposed, “occupying an entire country of millions of people, and a difficult country to run and occupy” was “foredoomed to fail.” The U.S. inevitably worked more closely with some ethnic groups and not others; local elites siphoned off foreign aid; and some militarized factions who had been aligned with the U.S. reacted strongly against seeing foreign troops in the country. All this meant U.S. expectations were soon “met with reality on the ground,” Cole said.
Felbab-Brown emphasized two long-running factors that helped undermine U.S. efforts to build a new Afghan state. For one thing, she noted, neither the U.S. nor any other country could reorient neighboring Pakistan away from its decades-long alignment with the Taliban.
“Essentially, the United States never resolved how to dissuade Pakistan from providing multifaceted support for the Taliban, down to the last days of July and August … and throughout the entire 20 years, the material support, safe havens, and all kinds of other support,” she said.
Secondly, in a country where 40 to 50 percent of income in the last two decades has come from foreign aid, Felbab-Brown noted, the U.S. and its allies were not able to determine “how to persuade local governing elites to moderate their role” and create more satisfactory habits of local administration.
All that said, Felbab-Brown pointed to positive consequences of U.S. efforts in Afghanistan over the last 20 years, including economic benefits and educational gains for women in particular.
“There is still a big difference between the poverty of today [in Afghanistan] and the mass starvation and huge degradation of civil and human rights that was the case in the 1990s,” Felbab-Brown said.
So, where is Afghanistan headed, assuming the Taliban consolidate control over most or all of the country?
“The worst outcome is rule that over time will come to look like the 1990s,” Felbab-Brown said, referring to the highly repressive Taliban policies that provided virtually no rights for women and massive restrictions on cultural activity.
Alternately, Felbab-Brown suggested, “The best outcome is an Iran-like system, with both the political structures of Iran … and a set of political freedoms where women can have education, can have jobs, can leave a house without a guardian, a crucial condition.” That would still represent a restrictive state by Western standards, and as Felbab-Brown suggested, it is also possible that the Taliban will settle on a more restrictive set of policies.
The international-relations repercussions of a Taliban-controlled Afghanistan remain uncertain as well, noted Carol Saivetz, a senior advisor and Russia specialist with the MIT Security Studies Program. She observed that while some in Russia might take satisfaction in watching the U.S. struggle while departing Afghanistan, Russia itself has long-running concerns about the spread of radical Islamic groups in its sphere of influence.
“I think that it’s a short-term gain … that longer-term I think could be very problematic for the Russians,” Saivetz said. “I think they are really scared of any kind of threat of Islamist terrorism overtaking Russia again.”
Saivetz also observed that the Soviet invasion and occupation of Afghanistan, which lasted from 1979 to 1989, indicated the difficulties of trying to transform the country, especially in its rural settings.
“The Soviet experience in Afghanistan was really very similar to ours,” Saivetz said.
In his concluding thoughts, Posen called the winding up of the U.S. military presence “a tragic chapter in a 20-year book” and noted that with so much of the Afghanistan economy having consisted of foreign aid programs now seemingly about to end, outside countries still have difficult decisions to make about what sort of relationship they might pursue with the country’s new leaders.
“The West has a lot of deep ethical choices to make here, about its relationship, not just with the Taliban, but the Afghan people,” Posen said.
Over the past decade, scientists have been exploring vaccination as a way to help fight cancer. These experimental cancer vaccines are designed to stimulate the body’s own immune system to destroy a tumor, by injecting fragments of cancer proteins found on the tumor.
So far, none of these vaccines have been approved by the FDA, but some have shown promise in clinical trials to treat melanoma and some types of lung cancer. In a new finding that may help researchers decide what proteins to include in cancer vaccines, MIT researchers have found that vaccinating against certain cancer proteins can boost the overall T cell response and help to shrink tumors in mice.
The research team found that vaccinating against the types of proteins they identified can help to reawaken dormant T cell populations that target those proteins, strengthening the overall immune response.
“This study highlights the importance of exploring the details of immune responses against cancer deeply. We can now see that not all anticancer immune responses are created equal, and that vaccination can unleash a potent response against a target that was otherwise effectively ignored,” says Tyler Jacks, the David H. Koch Professor of Biology, a member of the Koch Institute for Integrative Cancer Research, and the senior author of the study.
MIT postdoc Megan Burger is the lead author of the new study, which appears today in Cell.
T cell competition
When cells begin to turn cancerous, they start producing mutated proteins not seen in healthy cells. These cancerous proteins, also called neoantigens, can alert the body’s immune system that something has gone wrong, and T cells that recognize those neoantigens start destroying the cancerous cells.
Eventually, these T cells experience a phenomenon known as “T cell exhaustion,” which occurs when the tumor creates an immunosuppressive environment that disables the T cells, allowing the tumor to grow unchecked.
Scientists hope that cancer vaccines could help to rejuvenate those T cells and help them to attack tumors. In recent years, they have worked to develop methods for identifying neoantigens in patient tumors to incorporate into personalized cancer vaccines. Some of these vaccines have shown promise in clinical trials to treat melanoma and non-small cell lung cancer.
“These therapies work amazingly in a subset of patients, but the vast majority still don’t respond very well,” Burger says. “A lot of the research in our lab is aimed at trying to understand why that is and what we can do therapeutically to get more of those patients responding.”
Previous studies have shown that of the hundreds of neoantigens found in most tumors, only a small number generate a T cell response.
The new MIT study helps to shed light on why that is. In studies of mice with lung tumors, the researchers found that as tumor-targeting T cells arise, subsets of T cells that target different cancerous proteins compete with each other, eventually leading to the emergence of one dominant population of T cells. After these T cells become exhausted, they still remain in the environment and suppress any competing T cell populations that target different proteins found on the tumor.
However, Burger found that if she vaccinated these mice with one of the neoantigens targeted by the suppressed T cells, she could rejuvenate those T cell populations.
“If you vaccinate against antigens that have suppressed responses, you can unleash those T cell responses,” she says. “Trying to identify these suppressed responses and specifically targeting them might improve patient responses to vaccine therapies.”
In this study, the researchers found that they had the most success when vaccinating with neoantigens that bind weakly to immune cells that are responsible for presenting the antigen to T cells. When they used one of those neoantigens to vaccinate mice with lung tumors, they found the tumors shrank by an average of 27 percent.
“The T cells proliferate more, they target the tumors better, and we see an overall decrease in lung tumor burden in our mouse model as a result of the therapy,” Burger says.
After vaccination, the T cell population included a type of cells that have the potential to continuously refuel the response, which could allow for long-term control of a tumor.
In future work, the researchers hope to test therapeutic approaches that would combine this vaccination strategy with cancer drugs called checkpoint inhibitors, which can take the brakes off exhausted T cells, stimulating them to attack tumors. Supporting that approach, the results published today also indicate that vaccination boosts the number of a specific type of T cells that have been shown to respond well to checkpoint therapies.
The research was funded by the Howard Hughes Medical Institute, the Ludwig Center at Harvard University, the National Institutes of Health, the Koch Institute Support (core) Grant from the National Cancer Institute, the Bridge Project of the Koch Institute and Dana-Farber/Harvard Cancer Center, and fellowship awards from the Jane Coffin Childs Memorial Fund for Medical Research and the Ludwig Center for Molecular Oncology at MIT.
Aviation became a reality in the early 20th century, but it took 20 years before the proper safety precautions enabled widespread adoption of air travel. Today, the future of fully autonomous vehicles is similarly cloudy, due in large part to safety concerns.
To accelerate that timeline, graduate student Heng “Hank” Yang and his collaborators have developed the first set of “certifiable perception” algorithms, which could help protect the next generation of self-driving vehicles — and the vehicles they share the road with.
Though Yang is now a rising star in his field, it took many years before he decided to research robotics and autonomous systems. Raised in China’s Jiangsu province, he completed his undergraduate degree with top honors from Tsinghua University. His time in college was spent studying everything from honeybees to cell mechanics. “My curiosity drove me to study a lot of things. Over time, I started to drift more toward mechanical engineering, as it intersects with so many other fields,” says Yang.
Yang went on to pursue a master’s in mechanical engineering at MIT, where he worked on improving an ultrasound imaging system to track liver fibrosis. To reach his engineering goal, Yang decided to take a class about designing algorithms to control robots.
“The class also covered mathematical optimization, which involves adapting abstract formulas to model almost everything in the world,” says Yang. “I learned a neat solution to tie up the loose ends of my thesis. It amazed me how powerful computation can be toward optimizing design. From there, I knew it was the right field for me to explore next.”
Algorithms for certified accuracy
Yang is now a graduate student in the Laboratory for Information and Decision Systems (LIDS), where he works with Luca Carlone, the Leonardo Career Development Associate Professor in Engineering, on the challenge of certifiable perception. When robots sense their surroundings, they must use algorithms to make estimations about the environment and their location. “But these perception algorithms are designed to be fast, with little guarantee of whether the robot has succeeded in gaining a correct understanding of its surroundings,” says Yang. “That’s one of the biggest existing problems. Our lab is working to design ‘certified’ algorithms that can tell you if these estimations are correct.”
For example, robot perception begins with the robot capturing an image, such as a self-driving car taking a snapshot of an approaching car. The image goes through a machine-learning system called a neural network, which generates keypoints within the image about the approaching car’s mirrors, wheels, doors, etc. From there, lines are drawn that seek to trace the detected keypoints on the 2D car image to the labeled 3D keypoints in a 3D car model. “We must then solve an optimization problem to rotate and translate the 3D model to align with the keypoints on the image,” Yang says. “This 3D model will help the robot understand the real-world environment.”
Each traced line must be analyzed to see if it has created a correct match. Since there are many keypoints that could be matched incorrectly (for example, the neural network could mistakenly recognize a mirror as a door handle), this problem is “non-convex” and hard to solve. Yang says that his team’s algorithm, which won the Best Paper Award in Robot Vision at the International Conference on Robotics and Automation (ICRA), relaxes the non-convex problem into a convex one and finds successful matches. “If the match isn’t correct, our algorithm will know how to continue trying until it finds the best solution, known as the global minimum. A certificate is given when there are no better solutions,” he explains.
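To give a flavor of what “provably optimal alignment” means, here is a minimal sketch in Python. It is not the team’s algorithm — their work handles wrongly matched keypoints through convex relaxations — but in the simpler outlier-free case, the classic Kabsch/Procrustes solution via SVD is the exact global minimum of the least-squares alignment error, a toy version of the certificate idea. All names here are illustrative.

```python
import numpy as np

def kabsch_align(model_pts, scene_pts):
    """Least-squares rigid alignment (Kabsch/Procrustes).

    For outlier-free matched keypoints, this closed-form SVD solution
    is the provable global minimum of the point-alignment error.
    model_pts, scene_pts: (N, 3) arrays of matched 3D keypoints.
    Returns rotation R (3x3), translation t (3,), and RMS residual.
    """
    mu_m = model_pts.mean(axis=0)
    mu_s = scene_pts.mean(axis=0)
    # Cross-covariance of the centered point sets.
    H = (model_pts - mu_m).T @ (scene_pts - mu_s)
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = mu_s - R @ mu_m
    residual = np.sqrt(np.mean(
        np.sum((scene_pts - (model_pts @ R.T + t)) ** 2, axis=1)))
    return R, t, residual

# Usage: recover a known rotation and translation from clean keypoints.
rng = np.random.default_rng(0)
model = rng.normal(size=(8, 3))
theta = 0.3
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([1.0, -2.0, 0.5])
scene = model @ R_true.T + t_true
R, t, res = kabsch_align(model, scene)
```

A near-zero residual here plays the role of the certificate: it tells the downstream system the estimate can be trusted, while a large residual would flag a perception failure.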
“These certifiable algorithms have a huge potential impact, because tools like self-driving cars must be robust and trustworthy. Our goal is to make it so a driver will receive an alert to take over the steering wheel if the perception system has failed.”
Adapting their model to different cars
When matching the 2D image with the 3D model, one assumption is that the 3D model will align with the identified type of car. But what happens if the imaged car has a shape that the robot has never seen in its library? “We now need to both estimate the position of the car and reconstruct the shape of the model,” says Yang.
The team has figured out a way to navigate around this challenge. The 3D model gets morphed to match the 2D image by undergoing a linear combination of previously identified vehicles. For example, the model could shift from being an Audi to a Hyundai as it registers the correct build of the actual car. Identifying the approaching car’s dimensions is key to preventing collisions. This work earned Yang and his team a Best Paper Award Finalist at the Robotics: Science and Systems (RSS) Conference, where Yang was also named an RSS Pioneer.
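The linear-combination idea can be sketched in a few lines. This is a simplification (the actual method estimates pose and shape jointly, with certifiable guarantees); here the pose is assumed known, and the unknown car’s keypoints are fit as a weighted sum of a small library of reference models. All variable names are hypothetical.

```python
import numpy as np

def fit_shape_weights(basis, observed, reg=1e-9):
    """Fit observed keypoints as a linear combination of basis shapes.

    basis: (K, N, 3) array -- K reference car models, N keypoints each.
    observed: (N, 3) array -- keypoints of the unknown car, assumed
    already pose-aligned with the basis models.
    Returns per-model weights (K,) via ridge-regularized least squares.
    """
    K = basis.shape[0]
    A = basis.reshape(K, -1).T          # (3N, K) design matrix
    b = observed.reshape(-1)            # (3N,) flattened keypoints
    w = np.linalg.solve(A.T @ A + reg * np.eye(K), A.T @ b)
    return w

# Usage: an "observed" car that is 70% model 0 and 30% model 1.
rng = np.random.default_rng(1)
basis = rng.normal(size=(3, 10, 3))      # three toy car models
target = 0.7 * basis[0] + 0.3 * basis[1]
w = fit_shape_weights(basis, target)
```

The recovered weights describe where the observed car sits between the reference models, which is the sense in which the model can “shift from being an Audi to a Hyundai.”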
In addition to presenting at international conferences, Yang enjoys discussing and sharing his research with the general public. He recently shared his work on certifiable perception during MIT’s first research SLAM public showcase. He also co-organized the first virtual LIDS student conference alongside industry leaders. His favorite talks focused on ways to combine theory and practice, such as Kimon Drakopoulos’ use of AI algorithms to guide how to allocate Greece’s Covid-19 testing resources. “Something that stuck with me was how he really emphasized what these rigorous analytical tools can do to benefit social good,” says Yang.
Yang plans to continue researching challenging problems that address safe and trustworthy autonomy by pursuing a career in academia. His dream of becoming a professor is also fueled by his love of mentoring others, which he enjoys doing in Carlone’s lab. He hopes his future work will lead to more discoveries that will work to protect people’s lives. “I think many are realizing that the existing set of solutions we have to promote human safety are not sufficient,” says Yang. “In order to achieve trustworthy autonomy, it is time for us to embrace a diverse set of tools to design the next generation of safe perception algorithms.”
“There must always be a failsafe, since none of our human-made systems can be perfect. I believe it will take the power of both rigorous theory and computation to revolutionize what we can successfully unveil to the public.”
Within the last decade, scientists have adapted CRISPR systems from microbes into gene editing technology, a precise and programmable system for modifying DNA. Now, scientists at MIT’s McGovern Institute for Brain Research and the Broad Institute of MIT and Harvard have discovered a new class of programmable DNA modifying systems called OMEGAs (Obligate Mobile Element Guided Activity), which may naturally be involved in shuffling small bits of DNA throughout bacterial genomes.
These ancient DNA-cutting enzymes are guided to their targets by small pieces of RNA. While they originated in bacteria, they have now been engineered to work in human cells, suggesting they could be useful in the development of gene editing therapies, particularly as they are small (about 30 percent of the size of Cas9), making them easier to deliver to cells than bulkier enzymes. The discovery, reported Sept. 9 in the journal Science, provides evidence that natural RNA-guided enzymes are among the most abundant proteins on Earth, pointing toward a vast new area of biology that is poised to drive the next revolution in genome editing technology.
The research was led by McGovern Investigator Feng Zhang, who is the James and Patricia Poitras Professor of Neuroscience at MIT, a Howard Hughes Medical Institute investigator, and a Core Institute Member of the Broad Institute. Zhang’s team has been exploring natural diversity in search of new molecular systems that can be rationally programmed.
“We are super excited about the discovery of these widespread programmable enzymes, which have been hiding under our noses all along,” says Zhang. “These results suggest the tantalizing possibility that there are many more programmable systems that await discovery and development as useful technologies.”
Programmable enzymes, particularly those that use an RNA guide, can be rapidly adapted for different uses. For example, CRISPR enzymes naturally use an RNA guide to target viral invaders, but biologists can direct Cas9 to any target by generating their own RNA guide. “It’s so easy to just change a guide sequence and set a new target,” says Soumya Kannan, MIT graduate student in biological engineering and co-first author of the paper. “So one of the broad questions that we’re interested in is trying to see if other natural systems use that same kind of mechanism.”
The first hints that OMEGA proteins might be directed by RNA came from the genes for proteins called IscBs. The IscBs are not involved in CRISPR immunity and were not known to associate with RNA, but they looked like small, DNA-cutting enzymes. The team discovered that each IscB had a small RNA encoded nearby that directed the IscB enzyme to cut specific DNA sequences. They named these RNAs “ωRNAs.”
The team’s experiments showed that two other classes of small proteins, known as IsrBs and TnpBs, the latter among the most abundant genes in bacteria, also use ωRNAs that act as guides to direct the cleavage of DNA.
IscB, IsrB, and TnpB are found in mobile genetic elements called transposons. Han Altae-Tran, MIT graduate student in biological engineering and co-first author on the paper, explains that each time these transposons move, they create a new guide RNA, allowing the enzyme they encode to cut somewhere else.
It’s not clear how bacteria benefit from this genomic shuffling — or whether they do at all. Transposons are often thought of as selfish bits of DNA, concerned only with their own mobility and preservation, Kannan says. But if hosts can “co-opt” these systems and repurpose them, hosts may gain new abilities, as with CRISPR systems that confer adaptive immunity.
IscBs and TnpBs appear to be predecessors of Cas9 and Cas12 CRISPR systems. The team suspects they, along with IsrB, likely gave rise to other RNA-guided enzymes, too — and they are eager to find them. They are curious about the range of functions that might be carried out in nature by RNA-guided enzymes, Kannan says, and suspect evolution likely already took advantage of OMEGA enzymes like IscBs and TnpBs to solve problems that biologists are keen to tackle.
"A lot of the things that we have been thinking about may already exist naturally in some capacity,” says Altae-Tran. “Natural versions of these types of systems might be a good starting point to adapt for that particular task.”
The team is also interested in tracing the evolution of RNA-guided systems further into the past. “Finding all these new systems sheds light on how RNA-guided systems have evolved, but we don't know where RNA-guided activity itself comes from,” Altae-Tran says. Understanding those origins, he says, could pave the way to developing even more classes of programmable tools.
This work was made possible with support from the Simons Center for the Social Brain at MIT, the National Institutes of Health and its Intramural Research Program, Howard Hughes Medical Institute, Open Philanthropy, G. Harold and Leila Y. Mathers Charitable Foundation, Edward Mallinckrodt, Jr. Foundation, Poitras Center for Psychiatric Disorders Research at MIT, Hock E. Tan and K. Lisa Yang Center for Autism Research at MIT, Yang-Tan Center for Molecular Therapeutics at MIT, Lisa Yang, Phillips family, R. Metcalfe, and J. and P. Poitras.
Ultrathin materials made of a single layer of atoms have riveted scientists’ attention since the isolation of the first such material — graphene — about 17 years ago. Among other advances since then, researchers including those from a pioneering lab at MIT have found that stacking individual sheets of the 2D materials, and sometimes twisting them at a slight angle to each other, can give them new properties, from superconductivity to magnetism.
Now MIT physicists from that lab and colleagues have done just that with boron nitride, known as “white graphene,” in part because it has an atomic structure similar to its famous cousin. The team has shown that when two single sheets of boron nitride are stacked parallel to each other, the material becomes ferroelectric: positive and negative charges spontaneously migrate to opposite sides, or poles, of the material. Upon the application of an external electric field, those charges switch sides, reversing the polarization. Importantly, all of this happens at room temperature.
The new material, which works via a mechanism that is completely different from existing ferroelectric materials, could have many applications.
“Wide varieties of physical properties have already been discovered in various 2D materials. Now we can easily stack the ferroelectric boron nitride with other families of materials to generate emergent properties and novel functionalities,” says Pablo Jarillo-Herrero, the Cecil and Ida Green Professor of Physics and leader of the work, which was reported this summer in the journal Science. Jarillo-Herrero is also affiliated with MIT’s Materials Research Laboratory.
In addition to Jarillo-Herrero, coauthors of the paper are Kenji Yasuda, an MIT postdoc; Xirui Wang, an MIT graduate student in physics; and Kenji Watanabe and Takashi Taniguchi of the National Institute for Materials Science in Japan.
Among the potential applications of the new ultrathin ferroelectric material, “one exciting possibility is to use it for denser memory storage,” says Yasuda, lead author of the Science paper. That’s because switching the polarization of the material could be used to encode ones and zeros — digital information — and that information will be stable over time. It won’t change unless an electric field is applied. In the Science paper the team reports a proof-of-principle experiment showing this stability.
Because the new material is only billionths of a meter thick — it’s one of the thinnest ferroelectrics ever made — it could also allow much denser computer memory storage.
The team further found that twisting the parallel sheets of boron nitride at a slight angle to each other resulted in yet another “completely new type of ferroelectric state,” Yasuda says. This general approach, known as twistronics, was pioneered by the Jarillo-Herrero group, which used it to achieve unconventional superconductivity in graphene.
The new ultrathin ferroelectric material is also exciting because it involves new physics. The mechanism behind how it works is completely different from that of conventional ferroelectric materials.
Says Yasuda, “The out-of-plane ferroelectric switching occurs through the in-plane sliding motion between two boron nitride sheets. This unique coupling between vertical polarization and horizontal motion is enabled by the lateral rigidity of boron nitride.”
Toward other ferroelectrics
Yasuda notes that other new ferroelectrics could be produced using the same technique described in Science. “Our method for turning a non-ferroelectric starting material into an ultrathin ferroelectric applies to other materials with atomic structures similar to boron nitride, so we can vastly expand the family of ferroelectrics. Only a few ultrathin ferroelectrics exist today,” he says. The researchers are currently working to that end and have had some promising results.
The Jarillo-Herrero lab is a pioneer at manipulating and exploring ultrathin, two-dimensional materials like graphene. Nevertheless, the conversion of ultrathin boron nitride to a ferroelectric was unexpected.
Says Xirui Wang: “I still remember when we were doing the measurements and we saw an unusual jump in the data. We decided that we should run the experiment again, and when we did it again and again we confirmed that there was something new happening.”
This work was funded by the U.S. Department of Energy Office of Science; the Army Research Office; the Gordon and Betty Moore Foundation; the U.S. National Science Foundation; the Ministry of Education, Culture, Sports, Science and Technology (MEXT) of Japan; and the Japan Society for the Promotion of Science.
The MIT School of Engineering recently honored outstanding faculty, graduate, and undergraduate students with its 2021 awards.
The Bose Award for Excellence in Teaching, given to a faculty member whose contributions have been characterized by dedication, care, and creativity, was presented to Elsa Olivetti, the Esther and Harold E. Edgerton Associate Professor in Materials Science and Engineering.
The Junior Bose Award, for an outstanding contributor to education from among the junior faculty of the School of Engineering, went to Ellen Roche, the W.M. Keck Career Development Professor in Biomedical Engineering.
Ruth and Joel Spira Awards for Excellence in Teaching are awarded annually to one faculty member each in three departments — Electrical Engineering and Computer Science, Mechanical Engineering, and Nuclear Science and Engineering — to acknowledge “the tradition of high-quality engineering education at MIT.” A fourth award rotates among the School of Engineering’s five other academic departments. This year's recipients were:
- Polina Golland, the Henry Ellis Warren (1894) Professor of Electrical Engineering and Computer Science
- Mingda Li, the Norman C. Rasmussen Assistant Professor of Nuclear Science and Engineering
- Warren Seering, the Weber-Shaughness Professor of Mechanical Engineering
- Greg Wornell, the Sumitomo Electric Industries Professor of Electrical Engineering and Computer Science
The Barry M. Goldwater Scholarship, given to students who exhibit outstanding potential and intend to pursue careers in mathematics, the natural sciences, or engineering disciplines that contribute significantly to technological advances in the United States, was awarded to engineering students Spencer Compton and Tara Venkatadri.
The Henry Ford II Award, presented to a senior engineering student who has maintained a cumulative average of 5.0 at the end of their seventh term and who has exceptional potential for leadership in the profession of engineering and in society, was presented to Jarek Kwiecinski '21, who majored in civil and environmental engineering.
The Capers and Marion McDonald Award for Excellence in Mentoring and Advising, awarded to a faculty member who has demonstrated a lasting commitment to personal and professional development, was presented to George Verghese, the Henry Ellis Warren (1894) Professor of Electrical and Biomedical Engineering.
The Graduate Student Extraordinary Teaching and Mentoring Award, given to a graduate student in the School of Engineering who has demonstrated extraordinary teaching and mentoring as a teaching or research assistant, was presented to Brian Chang and Michael Braun.
The newly launched School of Engineering Distinguished Educator Award, recognizing outstanding contributions to undergraduate and/or graduate education by members of its faculty and teaching staff (lecturers or instructors), was awarded to Katrina LaCurts and Simona Socrate.
3 Questions: Daniel Cohn on the benefits of high-efficiency, flexible-fuel engines for heavy-duty trucking
The California Air Resources Board has adopted a regulation that requires truck and engine manufacturers to reduce the nitrogen oxide (NOx) emissions from new heavy-duty trucks by 90 percent starting in 2027. NOx from heavy-duty trucks is one of the main sources of air pollution, creating smog and threatening respiratory health. This regulation requires the largest air pollution cuts in California in more than a decade. How can manufacturers achieve this aggressive goal efficiently and affordably?
Daniel Cohn, a research scientist at the MIT Energy Initiative, and Leslie Bromberg, a principal research scientist at the MIT Plasma Science and Fusion Center, have been working on a high-efficiency, gasoline-ethanol engine that is cleaner and more cost-effective than existing diesel engine technologies. Here, Cohn explains the flexible-fuel engine approach and why it may be the most realistic solution — in the near term — to help California meet its stringent vehicle emission reduction goals. The research was sponsored by the Arthur Samberg MIT Energy Innovation fund.
Q. How does your high-efficiency, flexible-fuel gasoline engine technology work?
A. Our goal is to provide an affordable way for heavy-duty vehicle (HDV) engines to produce nitrogen oxide (NOx) emissions low enough to meet California’s NOx regulations, while also quick-starting gasoline-consumption reductions in a substantial fraction of the HDV fleet.
Presently, large trucks and other HDVs generally use diesel engines, mainly because of their high efficiency, which reduces fuel cost — a key factor for commercial trucks (especially long-haul trucks) because of the large number of miles they drive. However, the NOx emissions from these diesel-powered vehicles are around 10 times greater than those from spark-ignition engines powered by gasoline or ethanol.
Spark-ignition gasoline engines are primarily used in cars and light trucks (light-duty vehicles), which employ a three-way catalyst exhaust treatment system (generally referred to as a catalytic converter) that reduces vehicle NOx emissions by at least 98 percent and at a modest cost. The use of this highly effective exhaust treatment system is enabled by the capability of spark-ignition engines to be operated at a stoichiometric air/fuel ratio (where the amount of air matches what is needed for complete combustion of the fuel).
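As a rough illustration of what “stoichiometric” means here, the ideal air/fuel mass ratio can be derived from the combustion chemistry. The sketch below uses pure octane (C8H18) as a stand-in for gasoline — an assumption, since real gasoline is a blend whose stoichiometric ratio is usually quoted as about 14.7:1.

```python
# Stoichiometric air/fuel ratio, using octane (C8H18) as a proxy for gasoline.
# Balanced combustion: C8H18 + 12.5 O2 -> 8 CO2 + 9 H2O

M_C, M_H, M_O = 12.011, 1.008, 15.999   # atomic masses, g/mol
m_fuel = 8 * M_C + 18 * M_H             # one mole of octane, ~114.2 g
m_O2 = 12.5 * (2 * M_O)                 # oxygen required, ~400 g
O2_MASS_FRACTION_AIR = 0.232            # air is roughly 23.2% O2 by mass
m_air = m_O2 / O2_MASS_FRACTION_AIR     # total air needed for complete combustion

afr = m_air / m_fuel                    # stoichiometric air/fuel mass ratio
print(f"Stoichiometric AFR (octane): {afr:.1f} : 1")  # ~15.1, near the ~14.7 quoted for gasoline
```

The three-way catalyst works only when the engine holds the mixture close to this ratio, which is why diesel engines, running lean, cannot use it.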
Diesel engines do not operate with stoichiometric air/fuel ratios, making it much more difficult to reduce NOx emissions. Their state-of-the-art exhaust treatment system is much more complex and expensive than catalytic converters, and even with it, vehicles produce NOx emissions around 10 times higher than spark-ignition engine vehicles. Consequently, it is very challenging for diesel engines to further reduce their NOx emissions to meet the new California regulations.
Our approach uses spark-ignition engines that can be powered by gasoline, ethanol, or mixtures of gasoline and ethanol as a substitute for diesel engines in HDVs. Gasoline has the attractive feature of being widely available and having a comparable or lower cost than diesel fuel. In addition, presently available ethanol in the U.S. produces up to 40 percent less greenhouse gas (GHG) emissions than diesel fuel or gasoline and has a widely available distribution system.
To make gasoline- and/or ethanol-powered spark-ignition engine HDVs attractive for widespread HDV applications, we developed ways to make spark-ignition engines more efficient, so their fuel costs are more palatable to owners of heavy-duty trucks. Our approach provides diesel-like high efficiency and high power in gasoline-powered engines by using various methods to prevent engine knock (unwanted self-ignition that can damage the engine) in spark-ignition gasoline engines. This enables greater levels of turbocharging and use of higher engine compression ratios. These features provide high efficiency, comparable to that provided by diesel engines. Plus, when the engine is powered by ethanol, the required knock resistance is provided by the intrinsic high knock resistance of the fuel itself.
Q. What are the major challenges to implementing your technology in California?
A. California has always been the pioneer in air pollutant control, with states such as Washington, Oregon, and New York often following suit. As the most populous state, California has a lot of sway — it’s a trendsetter. What happens in California has an impact on the rest of the United States.
The main challenge to implementation of our technology is the argument that a better internal combustion engine technology is not needed because battery-powered HDVs — particularly long-haul trucks — can play the required role in reducing NOx and GHG emissions by 2035. We think that substantial market penetration of battery electric vehicles (BEV) in this vehicle sector will take a considerably longer time. In contrast to light-duty vehicles, there has been very little penetration of battery power into the HDV fleet, especially in long-haul trucks, which are the largest users of diesel fuel. One reason for this is that long-haul trucks using battery power face the challenge of reduced cargo capability due to substantial battery weight. Another challenge is the substantially longer charging time for BEVs compared to that of most present HDVs.
Hydrogen-powered trucks using fuel cells have also been proposed as an alternative to BEV trucks, which might limit interest in adopting improved internal combustion engines. However, hydrogen-powered trucks face the formidable challenges of producing zero GHG hydrogen at affordable cost, as well as the cost of storage and transportation of hydrogen. At present the high purity hydrogen needed for fuel cells is generally very expensive.
Q. How does your idea compare overall to battery-powered and hydrogen-powered HDVs? And how will you persuade people that it is an attractive pathway to follow?
A. Our design uses existing propulsion systems and can operate on existing liquid fuels, and for these reasons, in the near term, it will be economically attractive to the operators of long-haul trucks. In fact, it can even be a lower-cost option than diesel power because of the significantly less-expensive exhaust treatment and smaller-size engines for the same power and torque. This economic attractiveness could enable the large-scale market penetration that is needed to have a substantial impact on reducing air pollution. By contrast, we think it could take at least 20 years longer for BEVs or hydrogen-powered vehicles to reach the same level of market penetration.
Our approach also uses existing corn-based ethanol, which can provide a greater near-term GHG reduction benefit than battery- or hydrogen-powered long-haul trucks. While the GHG reduction from using existing ethanol would initially be in the 20 percent to 40 percent range, the scale at which the market is penetrated in the near-term could be much greater than for BEV or hydrogen-powered vehicle technology. The overall impact in reducing GHGs could be considerably greater.
Moreover, we see a migration path beyond 2030 where further reductions in GHG emissions from corn ethanol can be possible through carbon capture and sequestration of the carbon dioxide (CO2) that is produced during ethanol production. In this case, overall CO2 reductions could potentially be 80 percent or more. Technologies for producing ethanol (and methanol, another alcohol fuel) from waste at attractive costs are emerging, and can provide fuel with zero or negative GHG emissions. One pathway for providing a negative GHG impact is through finding alternatives to landfilling for waste disposal, as this method leads to potent methane GHG emissions. A negative GHG impact could also be obtained by converting biomass waste into clean fuel, since the biomass waste can be carbon neutral and CO2 from the production of the clean fuel can be captured and sequestered.
In addition, our flex-fuel engine technology may be synergistically used as a range extender in plug-in hybrid HDVs, which use a limited battery capacity, obviating the cargo-capability reduction and fueling disadvantages of long-haul trucks powered by battery alone.
With the growing threats from air pollution and global warming, our HDV solution is an increasingly important option for near-term reduction of air pollution and offers a faster start in reducing heavy-duty fleet GHG emissions. It also provides an attractive migration path for longer-term, larger GHG reductions from the HDV sector.
You might think robots and other forms of workplace automation gain traction due to intrinsic advances in technology — that innovations naturally find their way into the economy. But a study co-authored by an MIT professor tells a different story: Robots are more widely adopted where populations become notably older, filling the gaps in an aging industrial work force.
“Demographic change — aging — is one of the most important factors leading to the adoption of robotics and other automation technologies,” says Daron Acemoglu, an MIT economist and co-author of a new paper detailing the results of the study.
The study finds that when it comes to the adoption of robots, aging alone accounts for 35 percent of the variation among countries. Within the U.S., the research shows the same pattern: Metro areas where the population is getting older at a faster rate are the places where industry invests more in robots.
“We provide a lot of evidence to bolster the case that this is a causal relationship, and it is driven by precisely the industries that are most affected by aging and have opportunities for automating work,” Acemoglu adds.
The paper, “Demographics and Automation,” has been published online by The Review of Economic Studies, and will be appearing in a forthcoming print edition of the journal. The authors are Acemoglu, an Institute Professor at MIT, and Pascual Restrepo PhD ’16, an assistant professor of economics at Boston University.
An “amazing frontier,” but driven by labor shortages
The current study is the latest in a series of papers Acemoglu and Restrepo have published about automation, robots, and the workforce. They have previously quantified job displacement in the U.S. due to robots, looked at the firm-level effects of robot use, and identified the late 1980s as a key moment when automation started replacing more jobs than it was creating.
This study involves multiple layers of demographic, technological, and industry-level data, largely from the early 1990s through the mid-2010s. First, Acemoglu and Restrepo found a strong relationship between an aging work force — defined by the ratio of workers 56 and older to those ages 21 to 55 — and robot deployment in 60 countries. Aging alone accounted for not only 35 percent of the variation in robot use among countries, but also 20 percent of the variation in imports of robots, the researchers found.
Other data points involving particular countries also stand out. South Korea is both the most rapidly aging country and the most extensive adopter of robotics. And Germany’s relatively older population accounts for 80 percent of the difference in robot implementation between that country and the U.S.
Overall, Acemoglu says, “Our findings suggest that quite a bit of investment in robotics is not driven by the fact that this is the next ‘amazing frontier,’ but because some countries have shortages of labor, especially middle-aged labor that would be necessary for blue-collar work.”
Digging into a wide variety of industry-level data across 129 countries, Acemoglu and Restrepo concluded that what holds for robots also applies to other, nonrobotic types of automation.
“We find the same thing when we look at other automation technologies, such as numerically controlled machinery or automated machine tools,” Acemoglu says. Significantly, at the same time, he observes, “We do not find similar relationships when we look at nonautomated machinery, for example nonautomated machine tools or things such as computers.”
The research likely sheds light on larger-scale trends as well. In recent decades, workers have fared better economically in Germany than in the U.S. The current research suggests there is a difference between adopting automation in response to labor shortages, as opposed to adopting automation as a cost-cutting, worker-replacing strategy. In Germany, robots have entered the workplace more to compensate for the absence of workers; in the U.S., relatively more robot adoption has displaced a slightly younger workforce.
“This is a potential explanation for why South Korea, Japan, and Germany — the leaders in robot investment and the most rapidly aging countries in the world — have not seen labor market outcomes [as bad] as those in the U.S.,” Acemoglu notes.
Back in the U.S.
Having examined demographics and robot usage globally, Acemoglu and Restrepo applied the same techniques to studying automation in the roughly 700 “commuting zones” (essentially, metro areas) in the U.S. from 1990 to 2015, while controlling for factors like the industrial composition of the local economy and labor trends.
Overall, the same global trend also applied within the U.S.: Older workforce populations saw greater adoption of robots after 1990. Specifically, the study found that a 10-percentage-point increase in local population aging led to a 6.45-percentage-point increase in the local presence of robot “integrators” — firms specializing in installing and maintaining industrial robots.
The study’s data sources included population and economic statistics from multiple United Nations sources, including the UN Comtrade data on international economic activity; technology and industry data from the International Federation of Robotics; and U.S. demographic and economic statistics from multiple government sources. On top of their other layers of analysis, Acemoglu and Restrepo also studied patent data and found a “strong association” between aging and patents in automation, as Acemoglu puts it. “Which makes sense,” he adds.
For their part, Acemoglu and Restrepo are continuing to look at the effects of artificial intelligence on the workforce, and to research the relationship between workplace automation and economic inequality.
Support for the study was provided, in part, by Google, Microsoft, the National Science Foundation, the Sloan Foundation, the Smith Richardson Foundation, and the Toulouse Network on Information Technology.
You know things are getting closer to normal at MIT when people gather at Kresge Auditorium to celebrate student entrepreneurs. That was the case Friday at the delta v summer accelerator’s capstone event, Demo Day.
None of the students who presented their startup success stories could have predicted the changing pandemic landscape they’d have to work through this summer. But the presentations — each interspersed with enthusiastic cheers from the audience as students announced company milestones — showed they were able to adapt.
“Everyone’s impressed, not only with the problems these teams are trying to solve, but also with the entrepreneurs behind them,” says Carly Chase, a senior lecturer at the MIT Sloan School of Management who directed the delta v program this year.
Each startup’s success was also a testament to the Martin Trust Center for MIT Entrepreneurship, which runs the delta v program and puts an emphasis on entrepreneurship education in addition to startup success. The Trust Center took lessons learned from running the program virtually last year and incorporated them into this summer’s hybrid model.
This year’s delta v cohort featured 20 teams. Chase says about half of participants were master’s students while the other half was evenly split between PhD candidates and undergraduates. Chase says the Trust Center continues to see increased interest from undergraduates.
“Students are coming to campus not just with ideas, but more baked ideas,” Chase says. “The popularity [of the program] across campus just keeps increasing somehow.”
Entrepreneurs on a mission
This year’s delta v cohort included more social impact startups than ever before. It was also the first to include more students from the School of Engineering than the Sloan School of Management.
The startups in this year’s cohort are solving problems in commercial industries like logistics, recruiting, and agriculture as well as problems that impact consumers more directly, like personal health and fitness.
One of this year’s startups, Almond Finance, is building a system to allow for money transfers between the hundreds of mobile wallets in use around the world. Most mobile wallets today exist in technological silos that make transferring funds extremely cumbersome, but Almond’s system will allow users to send money through their phones quickly and affordably.
Almond’s founders say there are over 250 different mobile wallets in use around Southeast Asia alone.
“In Asia, mobile wallets are more advanced [than U.S. services like Venmo],” co-founder Yunus Sevimli MBA ’21 says. “People use them for everything from making purchases to giving loans and sending money — but there’s no way for mobile wallets to talk to each other right now.”
Another startup, La Firme, is helping low- and middle-income families in Latin America construct homes with a digital platform for navigating the design and construction processes. La Firme’s founders say over 60 percent of families in Latin America build their own homes by partnering with a general contractor.
“Our online platform matches families with local professionals to design their plans,” says co-founder Mora Orensanz, a graduate student in the Department of Urban Studies and Planning. “We’re trying to make it cost-efficient by frontloading data collection about the family’s spatial needs and characteristics, and also educating the families to increase their construction literacy.”
Both Almond Finance and La Firme pivoted this summer to refocus their businesses, something Chase says is a good sign.
“In delta v, one thing we always hope is teams will have the confidence to pivot, and we had a lot of pivoters this year, whether it was the market, or the use case, or narrowing down their application to start,” Chase says. “That’s something we’re proud of and a good thing to normalize. You don’t need to come in here and think you know everything. You have to learn, and I think teams will be better for that in the long run.”
Ivu Biologics is building a platform to manufacture and deliver microbes based on biodegradable, non-microplastic materials. The team interviewed more than 150 potential customers as they explored different markets to enter before deciding their first use case will be in the agriculture space.
“Delta v is structured so that every month you have a set of goals, and having those goals pushes the teams to achieve them,” says Augustine Zvinavashe ’16, a current PhD candidate in the Department of Civil and Environmental Engineering. “The program does a really good job of helping you build the muscles for interviewing your customers, identifying the biggest problems they’re having, and building a solution to those problems.”
Other startups in this year’s cohort include Underdog Coaching, which helps university admissions teams with an online platform that lets prospective students find current students and alumni to mentor them during the application journey; Havvi Fitness, which provides virtual fitness classes to users for free by monetizing other features of its service; Surge Employment Solutions, a workforce development, staffing, and mentoring company for formerly incarcerated people; Robigo, which is developing a technology to engineer plant microbiomes to reduce disease without using pesticides; Hibiscus Monkey, which aims to be India's first body specialist brand, addressing neglected parts of the body through skin actives and plant-powered ingredients; Project Restore Us, which is partnering with community organizations and leveraging restaurant supply chains to provide meals to food-insecure families in the Greater Boston area; and many more.
Back in action
Although the format of delta v changed as the circumstances of the pandemic evolved, teams were often able to come to campus to use workspace, hold strategy sessions, network, and more. Most student participants were in the Boston area, but like last year, some participated from around the world.
“We spent a lot of time making sure it was equitable for everyone,” Chase says. “When we ran an in-person event in Boston, we did one virtually simultaneously.”
Student teams were also split into rotating “pods” of five and matched with entrepreneurs in residence (EiRs) in an effort to build tighter relationships and ensure accountability.
“The pods were meant to help students get to know people on other teams because meeting online with 20 teams can make that hard,” Chase says, noting the Trust Center hosted a number of social functions for each pod in addition to weekly meetings with EiRs. “I got to know my pod really well, and it removed the friction of having to get to know everyone. With a smaller group, you can share so much more.”
Other changes this year included a new version of investor day for teams and further involvement of delta v alumni, who talked to this year’s participants about what they got out of the program and answered questions.
Things weren’t quite back to normal at Demo Day, with numerous safety precautions in place, but the event showed things are trending in the right direction on MIT’s campus.
“It’s great to be back in a room with people and feel the energy,” Bill Aulet, the managing director of the Trust Center and a professor of the practice at MIT Sloan, said in his opening remarks. “It’s been a long time.”