MIT Latest News
When humans pump large volumes of fluid into the ground, they can set off potentially damaging earthquakes, depending on the underlying geology. This has been the case in certain oil- and gas-producing regions, where wastewater, often mixed with oil, is disposed of by injecting it back into the ground — a process that has triggered sizable seismic events in recent years.
Now MIT researchers, working with an interdisciplinary team of scientists from industry and academia, have developed a method to manage such human-induced seismicity, and have demonstrated that the technique successfully reduced the number of earthquakes occurring in an active oil field.
Their results, appearing today in Nature, could help mitigate earthquakes caused by the oil and gas industry, not just from the injection of wastewater produced with oil, but also from wastewater produced by hydraulic fracturing, or “fracking.” The team’s approach could also help prevent quakes from other human activities, such as the filling of water reservoirs and aquifers, and the sequestration of carbon dioxide in deep geologic formations.
“Triggered seismicity is a problem that goes way beyond producing oil,” says study lead author Bradford Hager, the Cecil and Ida Green Professor of Earth Sciences in MIT’s Department of Earth, Atmospheric and Planetary Sciences. “This is a huge problem for society that will have to be confronted if we are to safely inject carbon dioxide into the subsurface. We demonstrated the kind of study that will be necessary for doing this.”
The study’s co-authors include Ruben Juanes, professor of civil and environmental engineering at MIT, and collaborators from the University of California at Riverside, the University of Texas at Austin, Harvard University, and Eni, a multinational oil and gas company based in Italy.
Both natural and human-induced earthquakes occur along geologic faults, or fractures between two blocks of rock in the Earth’s crust. In stable periods, the rocks on either side of a fault are held in place by the pressures generated by surrounding rocks. But when a large volume of fluid is suddenly injected at high rates, it can upset a fault’s fluid stress balance. In some cases, this sudden injection can lubricate a fault and cause rocks on either side to slip and trigger an earthquake.
The most common source of such fluid injections is from the oil and gas industry’s disposal of wastewater that is brought up along with oil. Field operators dispose of this water through injection wells that continuously pump the water back into the ground at high pressures.
“There’s a lot of water produced with the oil, and that water is injected into the ground, which has caused a large number of quakes,” Hager notes. “So, for a while, oil-producing regions in Oklahoma had more magnitude 3 quakes than California, because of all this wastewater that was being injected.”
In recent years, a similar problem arose in southern Italy, where injection wells on oil fields operated by Eni triggered microseisms in an area where large naturally occurring earthquakes had previously occurred. The company, looking for ways to address the problem, sought consultation from Hager and Juanes, both leading experts in seismicity and subsurface flows.
“This was an opportunity for us to get access to high-quality seismic data about the subsurface, and learn how to do these injections safely,” Juanes says.
The team made use of detailed information, accumulated by the oil company over years of operation in the Val D’Agri oil field, a region of southern Italy that lies in a tectonically active basin. The data included information about the region’s earthquake record, dating back to the 1600s, as well as the structure of rocks and faults, and the state of the subsurface corresponding to the various injection rates of each well.
The researchers integrated these data into a coupled subsurface flow and geomechanical model, which predicts how the stresses and strains of underground structures evolve as the volume of pore fluid, such as from the injection of water, changes. They connected this model to an earthquake mechanics model in order to translate the changes in underground stress and fluid pressure into a likelihood of triggering earthquakes. They then quantified the rate of earthquakes associated with various rates of water injection, and identified scenarios that were unlikely to trigger large quakes.
When they ran the models using data from 1993 through 2016, the predictions of seismic activity matched the earthquake record for this period, validating their approach. They then ran the models forward in time, through the year 2025, to predict the region’s seismic response to three different injection rates: 2,000, 2,500, and 3,000 cubic meters per day. The simulations showed that large earthquakes could be avoided if operators kept injection rates at 2,000 cubic meters per day — a flow rate comparable to a small public fire hydrant.
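The scenario-screening logic described above can be sketched in a few lines. This is a deliberately simplified toy, not the team’s coupled flow and geomechanical model: the linear pressure response and the triggering threshold below are hypothetical stand-ins, chosen only to illustrate how candidate injection rates might be compared against a safety criterion.

```python
# Illustrative toy model only -- NOT the study's coupled geomechanical model.
# All constants here are hypothetical placeholders.

def steady_pressure_increase(rate_m3_per_day, injectivity=0.004):
    """Hypothetical steady-state pore-pressure rise (MPa) at the fault,
    assumed to scale linearly with the injection rate."""
    return injectivity * rate_m3_per_day

def exceeds_threshold(rate_m3_per_day, threshold_mpa=9.0):
    """Flag scenarios whose assumed pressure rise exceeds an assumed
    triggering threshold for large events."""
    return steady_pressure_increase(rate_m3_per_day) > threshold_mpa

for rate in (2000, 2500, 3000):  # the three scenarios considered in the study
    verdict = "exceeds" if exceeds_threshold(rate) else "below"
    print(f"{rate} m^3/day -> {verdict} threshold")
```

With these invented constants, only the 2,000 cubic-meters-per-day scenario stays below the threshold, mirroring the qualitative outcome the article describes.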
Eni field operators implemented the team’s recommended rate at the oil field’s single water injection well over a 30-month period between January 2017 and June 2019. In this time, the team observed only a few tiny seismic events, which coincided with brief periods when operators went above the recommended injection rate.
“The seismicity in the region has been very low in these two-and-a-half years, with around four quakes of magnitude 0.5, as opposed to the hundreds of quakes, of up to magnitude 3, that were happening between 2006 and 2016,” Hager says.
The results demonstrate that operators can successfully manage earthquakes by adjusting injection rates, based on the underlying geology. Juanes says the team’s modeling approach may help to prevent earthquakes related to other processes, such as the building of water reservoirs and the sequestration of carbon dioxide — as long as there is detailed information about a region’s subsurface.
“A lot of effort needs to go into understanding the geologic setting,” says Juanes, who notes that, if carbon sequestration were carried out on depleted oil fields, “such reservoirs could have this type of history, seismic information, and geologic interpretation that you could use to build similar models for carbon sequestration. We show it’s at least possible to manage seismicity in an operational setting. And we offer a blueprint for how to do it.”
This research was supported, in part, by Eni.
As a not-so-distant future of space tourism and off-planet living approaches, the MIT Media Lab Space Exploration Initiative is designing and researching the activities humans will pursue in new, weightless environments.
Since 2017, the Space Exploration Initiative (SEI) has orchestrated regular parabolic flights through the ZERO-G Research Program to test experiments that rely on microgravity. This May, the SEI supported researchers from the Media Lab; MIT's departments of Aeronautics and Astronautics (AeroAstro), Earth, Atmospheric and Planetary Sciences (EAPS), and Mechanical Engineering; the MIT Kavli Institute; the MIT Program in Art, Culture, and Technology; the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL); the John A. Paulson School of Engineering and Applied Sciences (SEAS) at Harvard University; the Center for Collaborative Arts and Media at Yale University; the multi-affiliated Szostak Laboratory; and the Harvard-MIT Program in Health Sciences and Technology to fly 22 different projects exploring research as diverse as fermentation, reconfigurable space structures, and the search for life in space.
Most of these projects resulted from the 2019 or 2020 iterations of MAS.838 / 16.88 (Prototyping Our Sci-Fi Space Future), taught by Ariel Ekblaw, SEI founder and director, who began teaching the class in 2018. (Due to the Covid-19 pandemic, the 2020 flight was postponed, leading to two cohorts being flown this year.)
“The course is intentionally titled ‘Prototyping our Sci-Fi Space Future,’” she says, “because this flight opportunity that SEI wrangles, for labs across MIT, is meant to incubate and curate the future artifacts for life in space and robotic exploration — bringing the Media Lab's uniqueness, magic, and creativity into the process.”
The class prepares researchers for the realities of parabolic flights, which involve conducting experiments in short, 20-second bursts of zero gravity. As the course continues to offer hands-on research and logistical preparation, and as more of these flights are executed, the projects themselves are demonstrating increasing ambition and maturity.
“Some students are repeat flyers who have matured their experiments, and [other experiments] come from researchers across the MIT campus from a record number of MIT departments, labs, and centers, and some included alumni and other external collaborators,” says Maria T. Zuber, MIT’s vice president for research and SEI faculty advisor. “In short, there was stiff competition to be selected, and some of the experiments are sufficiently far along that they’ll soon be suitable for spaceflight.”
Dream big, design bold
Both the 2020 and 2021 flight cohorts included daring new experiments that speak to SEI’s unique focus on research across disciplines. Some look to capitalize on the advantages of microgravity, while others seek to help find ways of living and working without the force that governs every moment of life on Earth.
Che-Wei Wang, Sands Fish, and Mehak Sarang from SEI collaborated on Zenolith, a free-flying pointing device to orient space travelers in the universe — or, as the research team puts it, a 3D space compass. “We were able to perform some maneuvers in zero gravity and confirm that our control system was functioning quite well, the first step towards having the device point to any spot in the solar system,” says Sarang. “We'll still have to tweak the design as we work towards our ultimate goal of sending the device to the International Space Station!”
Then there’s the Gravity Loading Countermeasure Skinsuit project by Rachel Bellisle, a doctoral student in the Harvard-MIT Program in Health Sciences and Technology and a Draper Fellow. The Skinsuit is designed to replicate the effects of Earth gravity for use in exercise on future missions to the moon or to Mars, and to further attenuate microgravity-induced physiological effects in current ISS mission scenarios. The suit has a 10-plus-year history of development at MIT and internationally, with prior parabolic flight experiments. Skinsuit originated in the lab of Dava Newman, who now serves as Media Lab director.
“Designing, flying, and testing an actual prototype is the best way that I know of to prepare our suit designs for actual long-term spaceflight missions,” says Newman. “And flying in microgravity and partial gravity on the ZERO-G plane is a blast!”
Alongside the Skinsuit are two more projects flown this spring that involve wearables and suit prototypes: the Peristaltic Suit, developed by Media Lab researcher Irmandy Wicaksono, and the Bio-Digital Wearables for Space Health Enhancement project by Media Lab researcher Pat Pataranutaporn.
“Wearables have the potential to play a critical role in monitoring, supporting, and sustaining human life in space, lessening the need for human medical expert intervention,” Pataranutaporn says. “Also, having this microgravity experience after our SpaceCHI workshop ... gave me so many ideas for thinking about other on-body systems that can augment humans in space — that I don’t think I would get from just reading a research paper.”
AgriFuge, from Somayajulu Dhulipala and Manwei Chan (graduate students in MIT's departments of Mechanical Engineering and AeroAstro, respectively), offers future astronauts a rotating plant habitat that provides simulated gravity as well as a controllable irrigation system. AgriFuge anticipates a future of long-duration missions where the crew will grow their own plants — to replenish oxygen and food, as well as for the psychological benefits of caring for plants. Two more cooking-related projects flew this spring: H0TP0T, by Larissa Zhou from Harvard SEAS, and Gravity Proof, by Maggie Coblentz of the SEI — each of which helps demonstrate a growing portfolio of practical “life in space” research being tested on these flights.
The human touch
In addition to the increasingly ambitious and sophisticated individual projects, an emerging theme in SEI’s microgravity endeavor is a focus on approaches to different aspects of life and culture in space — not only in relation to cooking, but also architecture, music, and art.
Sanjana Sharma of the SEI flew her Fluid Expressions project this spring, which centers around the design of a memory capsule that functions as both a traveler’s painting kit for space and an embodied, material reminder of home. During the flight, she was able to produce three abstract watercolor paintings. “The most important part of this experience for me,” she says, “was the ability to develop a sense of what zero gravity actually feels like, as well as how the motions associated with painting differ during weightlessness.”
Ekblaw has been mentoring two new architectural projects as part of the SEI’s portfolio, building on her own TESSERAE work for in-space self-assembly: Self Assembling Space Frames by SEI’s Che-Wei Wang and Reconfigurable space structures by Martin Nisser of MIT CSAIL. Wang envisions his project as a way to build private spaces in zero-gravity environments. “You could think of it like a pop-up tent for space,” he says. “The concept can potentially scale to much larger structures that self-assemble in space, outside space stations.”
Onward and upward
Two projects that explore different notions of the search for life in space include Ø-scillation, a collaboration between several scientists at the MIT Kavli Institute, Media Lab, EAPS, and Harvard; and the Electronic Life-detection Instrument (ELI) by Chris Carr, former MIT EAPS researcher and current Georgia Tech faculty member, and Daniel Duzdevich, a postdoc at the Szostak Laboratory.
The ELI project is a continuation of work within Zuber’s lab, and has been flown on previous flights. “Broadly, our goals are to build a low-mass life-detection instrument capable of detecting life as we know it — or as we don't know it,” says Carr. During the 2021 flight, the researchers tested upgraded hardware that permits automatic real-time sub-nanometer gap control to improve the measurement fidelity of the system — with generally successful results.
Microgravity Hybrid Extrusion, led by SEI’s mission integrator, Sean Auffinger, alongside Ekblaw, Nisser, Wang, and MIT Undergraduate Research Opportunities Program student Aiden Padilla, was tested on both flights this spring and works toward building in situ, large-scale space structures — it’s also one of the selected projects being flown on an ISS mission in December 2021. The SEI is also planning a prospective "Astronaut Interaction" mission on the ISS in 2022, where artifacts like Zenolith will have the chance to be manipulated by astronauts directly.
This is a momentous fifth anniversary year for SEI. As these annual flights continue, and the experiments aboard them keep growing more advanced, researchers are setting their sights higher — toward designing and preparing for the future of interplanetary civilization.
Keylime, a cloud security software architecture, is being adopted into IBM's cloud fleet. Originally developed at MIT Lincoln Laboratory to allow system administrators to ensure the security of their cloud environment, Keylime is now a Cloud Native Computing Foundation sandbox technology with more than 30 open-source developers contributing to it from around the world. The software will enable IBM to remotely attest to the security of its thousands of cloud servers.
"It is exciting to see the hard work of the growing Keylime community coming to fruition," says Charles Munson, a researcher in the Secure Resilient Systems and Technology Group at Lincoln Laboratory who created Keylime with Nabil Schear, now at Netflix. "Adding integrated support for Keylime into IBM's cloud fleet is an important step towards enabling cloud customers to have a zero-trust capability of 'never trust, always verify.'"
In a blog post announcing IBM's integration of Keylime, George Almasi of IBM Research said, "IBM has planned a rapid rollout of Keylime-based attestation to the entirety of its cloud fleet in order to meet requirements for a strong security posture from its financial services and other enterprise customers. This will leverage work done on expanding the scalability and resilience of Keylime to manage large numbers of nodes, allowing Keylime-based attestation to be operationalized at cloud data center scale."
Keylime is a key bootstrapping and integrity management software architecture. It was first developed to enable organizations to check for themselves that the servers storing and processing their data are as secure as cloud service providers claim they are. Today, many organizations use a form of cloud computing called infrastructure-as-a-service, whereby they rent computing resources from a cloud provider who is responsible for the security of the underlying systems.
To enable remote cloud-security checks, Keylime leverages a piece of hardware called a trusted platform module, or TPM, an industry-standard and widely used hardware security chip. A TPM generates a hash, a short string of numbers representing a much larger amount of data. If data are tampered with even slightly, the hash will change significantly, a security alarm that Keylime can detect and react to in under a second.
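The tamper-detection principle can be illustrated with an ordinary cryptographic hash. This is a minimal sketch of the idea only, assuming SHA-256 as the digest; a real TPM computes and cryptographically signs such measurements in dedicated hardware, which this software stand-in does not capture.

```python
# Sketch of hash-based tamper detection, the principle behind TPM-backed
# attestation. hashlib stands in here for the hardware TPM, for illustration.
import hashlib

def measure(data: bytes) -> str:
    """Return a short, fixed-length digest of arbitrarily large data."""
    return hashlib.sha256(data).hexdigest()

baseline = measure(b"kernel image v1.0")

# Even a tiny change to the data produces a completely different digest,
# which an attestation service like Keylime can compare against the baseline.
tampered = measure(b"kernel image v1.1")

print(baseline == measure(b"kernel image v1.0"))  # unchanged data verifies
print(baseline == tampered)                        # tampering is detected
```

The digest is a fixed 64-character string regardless of input size, which is what lets a verifier check large system states quickly.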
Before Keylime, TPMs were incompatible with cloud technology, slowing down systems and forcing engineers to change software to accommodate the module. Keylime gets around these problems by serving as a piece of intermediary software that allows users to leverage the security benefits of the TPM without having to make all of their software compatible with it.
In 2019, Keylime was transitioned into the CNCF as a sandbox technology with the help of Red Hat, one of the world's leading open-source software companies. This transition better incorporated Keylime into the Linux open-source ecosystem, making it simpler for users to adopt. In 2020, the Lincoln Laboratory team that developed Keylime was awarded an R&D 100 Award, recognizing the software among the year's 100 most innovative new technologies available for sale or license.
In certain parts of the deep ocean, scattered across the seafloor, lie baseball-sized rocks layered with minerals accumulated over millions of years. A region of the central Pacific, called the Clarion Clipperton Fracture Zone (CCFZ), is estimated to contain vast reserves of these rocks, known as “polymetallic nodules,” that are rich in nickel and cobalt — minerals that are commonly mined on land for the production of lithium-ion batteries in electric vehicles, laptops, and mobile phones.
As demand for these batteries rises, efforts are moving forward to mine the ocean for these mineral-rich nodules. Such deep-sea-mining schemes propose sending down tractor-sized vehicles to vacuum up nodules and send them to the surface, where a ship would clean them and discharge any unwanted sediment back into the ocean. But the impacts of deep-sea mining — such as the effect of discharged sediment on marine ecosystems and how these impacts compare to traditional land-based mining — are currently unknown.
Now oceanographers at MIT, the Scripps Institution of Oceanography, and elsewhere have carried out an experiment at sea for the first time to study the turbulent sediment plume that mining vessels would potentially release back into the ocean. Based on their observations, they developed a model that makes realistic predictions of how a sediment plume generated by mining operations would be transported through the ocean.
The model predicts the size, concentration, and evolution of sediment plumes under various marine and mining conditions. These predictions, the researchers say, can now be used by biologists and environmental regulators to gauge whether and to what extent such plumes would impact surrounding sea life.
“There is a lot of speculation about [deep-sea-mining’s] environmental impact,” says Thomas Peacock, professor of mechanical engineering at MIT. “Our study is the first of its kind on these midwater plumes, and can be a major contributor to international discussion and the development of regulations over the next two years.”
The team’s study appears today in Nature Communications: Earth and Environment.
Peacock’s co-authors at MIT include lead author Carlos Muñoz-Royo, Raphael Ouillon, Chinmay Kulkarni, Patrick Haley, Chris Mirabito, Rohit Supekar, Andrew Rzeznik, Eric Adams, Cindy Wang, and Pierre Lermusiaux, along with collaborators at Scripps, the U.S. Geological Survey, and researchers in Belgium and South Korea.
Out to sea
Current deep-sea-mining proposals are expected to generate two types of sediment plumes in the ocean: “collector plumes” that vehicles generate on the seafloor as they drive around collecting nodules 4,500 meters below the surface; and possibly “midwater plumes” that are discharged through pipes that descend 1,000 meters or more into the ocean’s aphotic zone, where sunlight rarely penetrates.
In their new study, Peacock and his colleagues focused on the midwater plume and how the sediment would disperse once discharged from a pipe.
“The science of the plume dynamics for this scenario is well-founded, and our goal was to clearly establish the dynamic regime for such plumes to properly inform discussions,” says Peacock, who is the director of MIT’s Environmental Dynamics Laboratory.
To pin down these dynamics, the team went out to sea. In 2018, the researchers boarded the research vessel Sally Ride and set sail 50 kilometers off the coast of Southern California. They brought with them equipment designed to discharge sediment 60 meters below the ocean’s surface.
“Using foundational scientific principles from fluid dynamics, we designed the system so that it fully reproduced a commercial-scale plume, without having to go down to 1,000 meters or sail out several days to the middle of the CCFZ,” Peacock says.
Over one week the team ran a total of six plume experiments, using novel sensor systems such as a Phased Array Doppler Sonar (PADS) and epsilometer developed by Scripps scientists to monitor where the plumes traveled and how they evolved in shape and concentration. The collected data revealed that the sediment, when initially pumped out of a pipe, was a highly turbulent cloud of suspended particles that mixed rapidly with the surrounding ocean water.
“There was speculation this sediment would form large aggregates in the plume that would settle relatively quickly to the deep ocean,” Peacock says. “But we found the discharge is so turbulent that it breaks the sediment up into its finest constituent pieces, and thereafter it becomes dilute so quickly that the sediment then doesn’t have a chance to stick together.”
The team had previously developed a model to predict the dynamics of a plume that would be discharged into the ocean. When they fed the experiment’s initial conditions into the model, it produced the same behavior that the team observed at sea, proving the model could accurately predict plume dynamics within the vicinity of the discharge.
The researchers used these results to provide the correct input for simulations of ocean dynamics to see how far currents would carry the initially released plume.
“In a commercial operation, the ship is always discharging new sediment. But at the same time the background turbulence of the ocean is always mixing things. So you reach a balance. There’s a natural dilution process that occurs in the ocean that sets the scale of these plumes,” Peacock says. “What is key to determining the extent of the plumes is the strength of the ocean turbulence, the amount of sediment that gets discharged, and the environmental threshold level at which there is impact.”
Based on their findings, the researchers have developed formulae to calculate the scale of a plume depending on a given environmental threshold. For instance, if regulators determine that a certain concentration of sediments could be detrimental to surrounding sea life, the formula can be used to calculate how far a plume above that concentration would extend, and what volume of ocean water would be impacted over the course of a 20-year nodule mining operation.
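As a rough illustration of how an environmental threshold maps to a plume extent, the sketch below assumes a hypothetical power-law dilution of concentration with distance. This is not the formulae derived in the study; the functional form and every constant are invented purely for illustration.

```python
# Illustrative only: a hypothetical power-law dilution model, NOT the
# study's formulae. It shows how a threshold concentration maps to an
# impacted extent under an assumed mixing law.

def plume_extent_km(source_conc_mg_l, threshold_mg_l,
                    dilution_scale_km=1.0, decay_exponent=1.5):
    """Distance (km) at which concentration falls to the threshold,
    assuming C(x) = C0 * (x / dilution_scale_km) ** -decay_exponent."""
    ratio = source_conc_mg_l / threshold_mg_l
    return dilution_scale_km * ratio ** (1.0 / decay_exponent)

# A stricter (lower) threshold implies a larger impacted extent:
print(plume_extent_km(100.0, 1.0))   # lenient threshold
print(plume_extent_km(100.0, 0.1))   # 10x stricter threshold, larger extent
```

The qualitative point survives the invented constants: tightening the regulatory threshold enlarges the volume of ocean counted as impacted, which is why the threshold itself is a key input to the team’s calculations.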
“At the heart of the environmental question surrounding deep-sea mining is the extent of sediment plumes,” Peacock says. “It’s a multiscale problem, from micron-scale sediments, to turbulent flows, to ocean currents over thousands of kilometers. It’s a big jigsaw puzzle, and we are uniquely equipped to work on that problem and provide answers founded in science and data.”
The team is now working on collector plumes, having recently returned from several weeks at sea to perform the first environmental monitoring of a nodule collector vehicle in the deep ocean in over 40 years.
This research was supported in part by the MIT Environmental Solutions Initiative, the UC Ship Time Program, the MIT Policy Lab, the 11th Hour Project of the Schmidt Family Foundation, the Benioff Ocean Initiative, and Fundación Bancaria “la Caixa.”
Michael Short came to MIT in the fall of 2001 as an 18-year-old first-year who grew up on Boston’s North Shore. He immediately felt at home, so much so that he’s never really left. It’s not that Short has no interest in exploring the world beyond the confines of the Institute, as he is an energetic and venturesome fellow. It’s just that almost everything he hopes to achieve in his scientific career can, in his opinion, be best pursued at this university.
Last year — after collecting four MIT degrees and joining the faculty of the Department of Nuclear Science and Engineering (NSE) in 2013 — he was promoted to the status of tenured associate professor.
Short’s enthusiasm for MIT began early in high school when he attended weekend programs that were mainly taught by undergraduates. “It was a program filled with my kind of people,” he recalls. “My high school was very good, but this was at a different level — at the level I was seeking and hoping to achieve. I felt more at home here than I did in my hometown, and the Saturdays at MIT were the highlight of my week.” He loved his four-year experience as an MIT undergraduate, including the research he carried out in the Uhlig Corrosion Laboratory, and he wasn’t ready for it to end.
After graduating in 2005 with two BS degrees (one in NSE and another in materials science and engineering), he took on some computer programming jobs and worked half time in the Uhlig lab under the supervision of Ronald Ballinger, a professor in both NSE and the Department of Materials Science and Engineering. Short soon realized that computer programming was not for him, and he started graduate studies with Ballinger as his advisor, earning a master’s and a PhD in nuclear science and engineering in 2010.
Even as an undergraduate, Short was convinced that nuclear power was essential to our nation’s (and the world’s) energy future, especially in light of the urgent need to move toward carbon-free sources of power. During his first year, he was told by Ballinger that the main challenge confronting nuclear power was to find materials, and metals in particular, that could last long enough in the face of radiation and the chemically destructive effects of corrosion.
Those words, persuasively stated, led him to his double major. “Materials and radiation damage have been at the core of my research ever since,” Short says. “Remarkably, the stuff I started studying in my first year of college is what I do today, though I’ve extended this work in many directions.”
Corrosion has proven to be an unexpectedly rich subject. “The traditional view is to expose metals to various things and see what happens — ‘cook and look,’ as it’s called,” he says. “A lot of folks view it that way, but it’s actually much more complex. In fact, some members of our own faculty don’t want to touch corrosion because it’s too complicated, too dirty. But that’s what I like about it.”
In a 2020 paper published in Nature Communications, Short, his student Weiyue Zhou, and other colleagues made a surprising discovery. “Most people think radiation is bad and makes everything worse, but that’s not always the case,” Short maintains. His team found a specific set of conditions under which a metal (a nickel-chromium alloy) performs better when it is irradiated while undergoing corrosion in a molten salt mixture. Their finding is relevant, he adds, “because these are the conditions under which people are hoping to run the next generation of nuclear reactors.” Leading candidates for alternatives to today’s water-cooled reactors are molten salt and liquid metal (specifically liquid lead and sodium) cooled reactors. To this end, Short and his colleagues are currently carrying out similar experiments involving the irradiation of metal alloys immersed in liquid lead.
Meanwhile, Short has pursued another multiyear project: trying to devise a new standard to serve as “a measurable unit of radiation damage.” In fact, these were the very words he wrote in his research statement when applying for his first faculty position at MIT, although he admits that he didn’t know then how to realize that goal. But the effort is finally paying off, as Short and his collaborators are about to submit their first big paper on the topic. He has found that radiation damage can’t be reduced to a single number, as people have tried to do in the past; that is too simplistic. Instead, their new standard relates to the density of defects — the number of radiation-induced defects (or unintentional changes to the lattice structure) per unit volume for a given material.
“Our approach is based on a theory that everyone agrees on — that defects have energy,” Short explains. However, many people told him and his team that the amount of energy stored within those defects would be too small to measure. But that just spurred them to try harder, making measurements at the microjoule level, at the very limits of detection.
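The arithmetic behind a defect-density standard can be illustrated as follows. This toy calculation is not the group’s actual protocol: the assumed formation energy per defect and the sample figures are hypothetical, and the sketch simply converts a measured stored energy into an implied number of defects per unit volume.

```python
# Toy calculation, not the group's protocol: estimating defect density from
# a measured stored energy, given an ASSUMED formation energy per defect.

EV_TO_J = 1.602176634e-19  # electron volts to joules

def defect_density_per_m3(stored_energy_j, sample_volume_m3,
                          energy_per_defect_ev=4.0):
    """Defects per cubic meter implied by the measured stored energy,
    assuming each defect stores energy_per_defect_ev (hypothetical)."""
    energy_per_defect_j = energy_per_defect_ev * EV_TO_J
    return stored_energy_j / (energy_per_defect_j * sample_volume_m3)

# e.g. 5 microjoules of stored energy in a 1 mm^3 sample:
n = defect_density_per_m3(5e-6, 1e-9)
print(f"{n:.2e} defects per m^3")
```

The microjoule-scale energies mentioned above divided by a per-defect energy of a few electron volts yield enormous defect counts, which is why the measurement must be so sensitive even though the defects are plentiful.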
Short is convinced that their new standard will become “universally useful, but it will take years of testing on many, many materials followed by more years of convincing people using the classic method: Repeat, repeat, repeat, making sure that each time you get the same result. It’s the unglamorous side of science, but that’s the side that really matters.”
The approach has already led Short, in collaboration with NSE proliferation expert Scott Kemp, into the field of nuclear security. Equipped with new insights into the signatures left behind by radiation damage, students co-supervised by Kemp and Short have devised methods for determining how much fissionable material has passed through a uranium enrichment facility, for example, by scrutinizing the materials exposed to these radioactive substances. “I never thought my preliminary work on corrosion experiments as an undergraduate would lead to this,” Short says.
He has also turned his attention to “microreactors” — nuclear reactors with power ratings as small as a single megawatt, as compared to the 1,000-megawatt behemoths of today. Flexibility in the size of future power plants is essential to the economic viability of nuclear power, he insists, “because nobody wants to pay $10 billion for a reactor now, and I don’t blame them.”
But the proposed microreactors, he says, “pose new material challenges that I want to solve. It comes down to cramming more material into a smaller volume, and we don’t have a lot of knowledge about how materials perform at such high densities.” Short is currently conducting experiments with the Idaho National Laboratory, irradiating possible microreactor materials to see how they change using a laser technique, transient grating spectroscopy (TGS), which his MIT group has had a big hand in advancing.
It’s been an exhilarating 20 years at MIT for Short, and he has even more ambitious goals for the next 20 years. “I’d like to be one of those who came up with a way to verify the Iran nuclear deal and thereby helped clamp down on nuclear proliferation worldwide,” he says. “I’d like to choose the materials for our first power-generating nuclear fusion reactors. And I’d like to have influenced perhaps 50 to 100 former students who chose to stay in science because they truly enjoy it.
“I see my job as creating scientists, not science,” he says, “though science is, of course, a convenient byproduct.”
A new MIT study of how a mammalian brain remembers what it sees shows that while individual images are stored in the visual cortex, the ability to recognize a sequence of sights critically depends on guidance from the hippocampus, a deeper structure strongly associated with memory, though exactly how it contributes has remained mysterious.
By suggesting that the hippocampus isn't needed so much for the basic storage of images as for identifying the chronological relationships among them, the new research, published in Current Biology, brings neuroscientists closer to understanding how the brain coordinates long-term visual memory across key regions.
“This offers the opportunity to actually understand, in a very concrete way, how the hippocampus contributes to memory storage in the cortex,” says senior author Mark Bear, the Picower Professor of Neuroscience in the Picower Institute for Learning and Memory and the Department of Brain and Cognitive Sciences.
Essentially, the hippocampus acts to influence how images are stored in the cortex if they have a sequential relationship, says lead author Peter Finnie, a former postdoc in Bear’s lab.
“The exciting part of this is that the visual cortex seems to be involved in encoding both very simple visual stimuli and also temporal sequences of them, and yet the hippocampus is selectively involved in how that sequence is stored,” Finnie says.
To have hippocampus and have not
To make their findings, the researchers, including former postdoc Rob Komorowski, trained mice with two forms of visual recognition memory discovered in Bear’s lab. The first form of memory, called stimulus selective response plasticity (SRP), involves learning to recognize a nonrewarding, nonthreatening single visual stimulus after it has been presented over and over. As learning occurs, visual cortex neurons produce an increasingly strong electrical response and the mouse ceases paying attention to the once-novel, but now decidedly uninteresting, image. The second form of memory, visual sequence plasticity, involves learning to recognize and predict a sequence of images. Here, too, the once-novel but now-familiar and innocuous sequence comes to evoke an elevated electrical response, and it is much greater than what is observed if the same stimuli are presented in reverse order or at a different speed.
In prior studies Bear’s lab has shown that the images in each form of memory are stored in the visual cortex, and are even specific to which eye beheld them, if only one did.
But the researchers were curious about whether and how the hippocampus might contribute to these forms of memory and cortical plasticity. After all, like some other forms of memory that depend on the hippocampus, SRP only takes hold after a period of “consolidation,” for instance overnight during sleep. To test whether there is a role for the hippocampus, they chemically removed large portions of the structure in a group of mice and looked for differences between groups in the telltale electrical response each kind of recognition memory should evoke.
Mice with or without a hippocampus performed equally well in learning SRP (measured not only electrophysiologically but also behaviorally), suggesting that the hippocampus was not needed for that form of memory. It appears to arise, and even consolidate, entirely within the visual cortex.
Visual sequence plasticity, however, did not occur without an intact hippocampus, the researchers found. Mice without the structure showed no elevated electrical response to the sequences when tested, no ability to recognize them in reverse or when delayed, and no inclination to “fill in the blank” when one was missing. It was as if the visual sequence — and even each image in the sequence — was not familiar.
“Together these findings are consistent with a specific role for the hippocampus in predictive response generation during exposure to familiar temporal patterns of visual stimulation,” the authors wrote.
New finding from a classic approach
The experiments follow in a long tradition of attempting to understand the hippocampus by assessing what happens when it’s damaged. For decades, neuroscientists at MIT and elsewhere were able to learn from a man known as “H.M.,” who had undergone hippocampal removal to relieve epileptic seizures. His memory of his past before the surgery remained intact, but he exhibited an inability to form “declarative” memories of new experiences, such as meeting someone or performing an activity. Over time, however, scientists realized that he could be trained to learn motor tasks better, even though he wouldn’t remember the training itself. The experiments helped to reveal that for many different forms of memory there is a “division of labor” among regions of the brain that may or may not include the hippocampus.
The new study, Bear and Finnie say, produces a clear distinction in the division of labor in visual memory between simple recognition of images and the more complex task of recognizing sequence structure.
“It’s a nice dividing line,” Bear says. “It’s the same region of the brain, the same method of an animal looking at images on a screen. All we are changing is the temporal structure of the stimulus.”
Previous research in the lab showed that SRP and visual sequence plasticity arise via different molecular mechanisms. SRP can be disrupted by blocking receptors for the neurotransmitter glutamate on involved neurons while sequence plasticity depends on receptors for acetylcholine.
The next question Bear wants to address, therefore, is whether an acetylcholine-producing circuit links the hippocampus to the visual cortex to accomplish sequence learning. Neurons that release acetylcholine in the cortex happen to be among the earliest disrupted in Alzheimer’s disease.
If the circuit for sequence learning indeed runs through those neurons, Bear speculates, then assessing people for differences in SRP and sequence learning could become a way to diagnose early onset of dementia progression.
The National Eye Institute of the National Institutes of Health and the JPB Foundation funded the research.
Eesha Khare has always seen a world of matter. The daughter of a hardware engineer and a biologist, she has an insatiable interest in what substances — both synthetic and biological — have in common. Not surprisingly, that perspective led her to the study of materials.
“I recognized early on that everything around me is a material,” she says. “How our phones respond to touches, how trees give us both structural wood and foldable paper, or how we are able to make high skyscrapers with steel and glass, it all comes down to the fundamentals: This is materials science and engineering.”
As a rising fourth-year PhD student in the MIT Department of Materials Science and Engineering (DMSE), Khare now studies the metal-coordination bonds that allow mussels to bind to rocks along turbulent coastlines. But Khare’s scientific enthusiasm has also led to expansive interests, from science policy to climate advocacy and entrepreneurship.
A material world
A Silicon Valley native, Khare recalls vividly how excited she was about science as a young girl, both at school and at myriad science fairs and high school laboratory internships. One such internship at the University of California at Santa Cruz introduced her to the study of nanomaterials, or materials that are smaller than a single human cell. The project piqued her interest in how research could lead to energy-storage applications, and she began to ponder the connections between materials, science policy, and the environment.
As an undergraduate at Harvard University, Khare pursued a degree in engineering sciences and chemistry while also working at the Harvard Kennedy School Institute of Politics. There, she grew fascinated by environmental advocacy in the policy space, working for then-professor Gina McCarthy, who is currently serving in the Biden administration as the first-ever White House climate advisor.
Following her academic explorations in college, Khare wanted to consider science in a new light before pursuing her doctorate in materials science and engineering. She deferred her program acceptance at MIT in order to attend Cambridge University in the U.K., where she earned a master’s degree in the history and philosophy of science. “Especially in a PhD program, it can often feel like your head is deep in the science as you push new research frontiers, but I wanted to take a step back and be inspired by how scientists in the past made their discoveries,” she says.
Her experience at Cambridge was both challenging and informative, but Khare quickly found that her mechanistic curiosity remained persistent — a realization that came in the form of a biological material.
“My very first master’s research project was about environmental pollution indicators in the U.K., and I was looking specifically at lichen to understand the social and political reasons why they were adopted by the public as pollution indicators,” Khare explains. “But I found myself wondering more about how lichen can act as pollution indicators. And I found that to be quite similar for most of my research projects: I was more interested in how the technology or discovery actually worked.”
Enthusiasm for innovation
Fittingly, these bioindicators confirmed for her that studying materials at MIT was the right course. Now Khare works on a different organism altogether, conducting research on the metal-coordination chemical interactions of a biopolymer secreted by mussels.
“Mussels secrete this thread and can adhere to ocean walls. So, when ocean waves come, mussels don’t get dislodged that easily,” Khare says. “This is partly because of how metal ions in this material bind to different amino acids in the protein. There’s no input from the mussel itself to control anything there; all the magic is in this biological material that is not only very sticky, but also doesn’t break very readily, and if you cut it, it can re-heal that interface as well! If we could better understand and replicate this biological material in our own world, we could have materials self-heal and never break and thus eliminate so much waste.”
To study this natural material, Khare combines computational and experimental techniques, experimentally synthesizing her own biopolymers and studying their properties with in silico molecular dynamics. Her co-advisors — Markus Buehler, the Jerry McAfee Professor of Engineering in Civil and Environmental Engineering, and Niels Holten-Andersen, professor of materials science and engineering — have embraced this dual approach to her project, as well as her abundant enthusiasm for innovation.
Khare likes to take one exploratory course per semester, and a recent offering in the MIT Sloan School of Management inspired her to pursue entrepreneurship. These days she is spending much of her free time on a startup called Taxie, formed with fellow MIT students after taking the course 15.390 (New Enterprises). Taxie attempts to electrify the rideshare business by making electric rental cars available to rideshare drivers. Khare hopes this project will initiate some small first steps in making the ridesharing industry environmentally cleaner — and in democratizing access to electric vehicles for rideshare drivers, who often hail from lower-income or immigrant backgrounds.
“There are a lot of goals thrown around for reducing emissions or helping our environment. But we are slowly getting physical things on the road, physical things to real people, and I like to think that we are helping to accelerate the electric transition,” Khare says. “These small steps are helpful for learning, at the very least, how we can make a transition to electric or to a cleaner industry.”
Alongside her startup work, Khare has pursued a number of other extracurricular activities at MIT, including co-organizing her department’s Student Application Assistance Program and serving on DMSE’s Diversity, Equity, and Inclusion Council. Her varied interests also have led to a diverse group of friends, which suits her well, because she is a self-described “people-person.”
In a year where maintaining connections has been more challenging than usual, Khare has focused on the positive, spending her spring semester with family in California and practicing Bharatanatyam, a form of Indian classical dance, over Zoom. As she looks to the future, Khare hopes to bring even more of her interests together, like materials science and climate.
“I want to understand the energy and environmental sector at large to identify the most pressing technology gaps and how I can use my knowledge to contribute. My goal is to figure out where I personally can make a difference and where it can have a bigger impact to help our climate,” she says. “I like being outside of my comfort zone.”
Thailand has become an economic leader in Southeast Asia in recent decades, but while the country has rapidly industrialized, many Thai citizens have been left behind. As a child growing up in Bangkok, Pavarin Bhandtivej would watch the news and wonder why families in the nearby countryside had next to nothing. He aspired to become a policy researcher and create beneficial change.
But Bhandtivej knew his goal wouldn’t be easy. He was born with a visual impairment, making it challenging for him to see, read, and navigate. This meant he had to work twice as hard in school to succeed. It took achieving the highest grades for Bhandtivej to break through stigmas and have his talents recognized. Still, he persevered, with a determination to uplift others. “I would return to that initial motivation I had as a kid. For me, to make even the smallest contribution to improving my country would be my dream,” he says.
“When I would face these obstacles, I would tell myself that struggling people are waiting for someone to design policies for them to have better lives. And that person could be me. I cannot fall here in front of these obstacles. I must stay motivated and move on.”
Bhandtivej completed his undergraduate degree in economics at Thailand’s top college, Chulalongkorn University. His classes introduced him to many debates about development policy, such as universal basic income. During one debate, after both sides made compelling arguments about how to alleviate poverty, Bhandtivej realized there was no clear winner. “A question came to my mind: Who's right?” he says. “In terms of theory, both sides were correct. But how could we know what approach would work in the real world?”
A new approach to higher education
The search for those answers would lead Bhandtivej to become interested in data analysis. He began investigating online courses, eventually finding the MIT MicroMasters Program in Data, Economics, and Development Policy (DEDP), which was created by MIT’s Department of Economics and the Abdul Latif Jameel Poverty Action Lab (J-PAL). The program requires learners to complete five online courses that teach quantitative methods for evaluating social programs, leading to a MicroMasters credential. Students who pass the courses’ proctored exams are then eligible to apply for a full-time, accelerated, on-campus master’s program at MIT, led by professors Esther Duflo, Abhijit Banerjee, and Benjamin Olken.
The program’s mission to make higher education more accessible worked well for Bhandtivej. He studied tirelessly, listening and relistening to online lectures and pausing to scrutinize equations. By the end, his efforts paid off — Bhandtivej was the MicroMasters program’s top scorer. He was soon admitted into the second cohort of the highly selective DEDP master’s program.
“You can imagine how time-consuming it was to use text-to-speech to get through a 30-page reading with numerous equations, tables, and graphs,” he explains. “Luckily, Disability and Access Services provided accommodations for timed exams, and I was able to push through.”
In the gap year before the master’s program began, Bhandtivej returned to Chulalongkorn University as a research assistant with Professor Thanyaporn Chankrajang. He began applying his newfound quantitative skills to study the impacts of climate change in Thailand. His contributions helped uncover how rising temperatures and irregular rainfall are leading to reduced rice crop yields. “Thailand is the world’s second largest exporter of rice, and the vast majority of Thais rely heavily on rice for its nutritional and commercial value. We need more data to encourage leaders to act now,” says Bhandtivej. “As a Buddhist, it was meaningful to be part of generating this evidence, as I am always concerned about my impact on other humans and sentient beings.”
Staying true to his mission
Now pursuing his master’s on campus, Bhandtivej is taking courses like 14.320 (Econometric Data Science) and studying how to design, conduct, and analyze empirical studies. “The professors I’ve had have opened a whole new world for me,” says Bhandtivej. “They’ve inspired me to see how we can take rigorous scientific practices and apply them to make informed policy decisions. We can do more than rely on theories.”
The final portion of the program requires a summer capstone experience, which Bhandtivej is using to work at Innovations for Poverty Action. He has recently begun to analyze how remote learning interventions in Bangladesh have performed since Covid-19. Many teachers are concerned, since disruptions in childhood education can lead to intergenerational poverty. “We have tried interventions that connect students with teachers, provide discounted data packages, and send information on where to access adaptive learning technologies and other remote learning resources,” he says. “It will be interesting to see the results. This is a truly urgent topic, as I don’t believe Covid-19 will be the last pandemic of our lifetime.”
Enhancing education has always been one of Bhandtivej’s priority interests. He sees education as the gateway that brings a person’s innate talent to light. “There is a misconception in many developing countries that disabled people cannot learn, which is untrue,” says Bhandtivej. “Education provides a critical signal to future employers and overall society that we can work and perform just as well, as long as we have appropriate accommodations.”
In the future, Bhandtivej plans on returning to Thailand to continue his journey as a policy researcher. While he has many issues he would like to tackle, his true purpose still lies in doing work that makes a positive impact on people’s lives. “My hope is that my story encourages people to think of not only what they are capable of achieving themselves, but also what they can do for others.”
“You may think you are just a small creature on a large planet. That you have just a tiny role to play. But I think — even if we are just a small part — whatever we can do to make life better for our communities, for our country, for our planet ... it’s worth it.”
MIT has granted tenure to five faculty members in the MIT School of Science in the departments of Brain and Cognitive Sciences, Chemistry, and Physics.
Physicist Joseph Checkelsky investigates exotic electronic states of matter through the synthesis, measurement, and control of solid-state materials. His research aims to uncover new physical phenomena that expand the boundaries of understanding of quantum mechanical condensed matter systems and open doorways to new technologies by realizing emergent electronic and magnetic functionalities. Checkelsky joined the Department of Physics in 2014 after a postdoc appointment at Japan’s Institute for Physical and Chemical Research and a lectureship at the University of Tokyo. He earned a bachelor’s degree in physics from Harvey Mudd College in 2004 and a doctoral degree in physics from Princeton University in 2010.
A molecular neurobiologist and geneticist, Myriam Heiman studies the selective vulnerability and pathophysiology seen in neurodegenerative diseases of the brain’s basal ganglia, including Huntington’s disease and Parkinson’s disease. Using a revolutionary transcriptomic technique called translating ribosome affinity purification, she aims to understand the early molecular changes that eventually lead to cell death in these diseases. Heiman joined the Department of Brain and Cognitive Sciences, the Picower Institute for Learning and Memory, and the Broad Institute of Harvard and MIT in 2011 after completing her postdoctoral training at The Rockefeller University. She holds a PhD from Johns Hopkins University and a BA from Princeton University.
Particle physicist Kerstin Perez is interested in using cosmic particles to look beyond Standard Model physics, in particular evidence of dark matter interactions. Her work focuses on opening sensitivity to unexplored cosmic signatures with impact at the intersection of particle physics, astrophysics, and advanced instrumental techniques. Perez joined the Department of Physics in 2016, after a National Science Foundation astronomy and astrophysics postdoctoral fellowship at Columbia University and a faculty appointment at Haverford College. She earned her BA in physics from Columbia University in 2005, and her PhD from Caltech in 2011.
Alexander Radosevich works at the interface of inorganic and organic chemistry to design new chemical reactions. In particular, his interests concern the invention of compositionally new classes of molecular catalysts based on inexpensive and Earth-abundant elements of the p-block. This research explores the connection between molecular structure and reactivity in an effort to discover new efficient and sustainable approaches to chemical synthesis. Radosevich returned to the MIT Department of Chemistry, where he also held a postdoctoral appointment in 2016, after serving on the faculty at The Pennsylvania State University. He received a BS from the University of Notre Dame in 2002, and a PhD from the University of California at Berkeley in 2007.
Alex K. Shalek creates and implements new experimental and computational approaches to identify the cellular and molecular features that inform tissue-level function and dysfunction across the spectrum of human health and disease. This encompasses both the development of broadly enabling technologies, such as Seq-Well, and their application to characterize, model, and rationally control complex multicellular systems. In addition to sharing this toolbox to empower mechanistic scientific inquiry across the global research community, Shalek is applying it to uncover principles that inform a wide range of problems in immunology, infectious diseases, and cancer. Shalek joined the Department of Chemistry and the Institute for Medical Engineering and Science in 2014 after postdoctoral training at Harvard University and the Broad Institute. He received his BA in chemical physics at Columbia University in 2004, followed by a PhD from Harvard University in 2011.
Weather is a tricky science — even more so at very high altitudes, with a mix of plasma and neutral particles.
Sudden stratospheric warmings (SSWs) are large meteorological disturbances in which the temperature of the polar stratosphere rapidly increases and the polar vortex weakens. SSWs also have profound atmospheric effects at great distances, causing changes in the hemisphere opposite the original SSW that extend all the way to the upper thermosphere and ionosphere.
A study published on July 16 in Geophysical Research Letters by MIT Haystack Observatory’s Larisa Goncharenko and colleagues examines the effects of a recent major Antarctic SSW on the Northern Hemisphere by studying changes observed in the upper atmosphere over North America and Europe.
In an SSW-caused anomaly, changes over the pole cause changes in the opposite hemisphere. This important interhemispheric linkage was identified through drastic shifts at altitudes greater than 100 km — for example, in total electron content (TEC) measurements and in the thermospheric O/N2 ratio.
SSWs are more frequent over the Arctic; these cause TEC and other related anomalies in the Southern Hemisphere, so more observations have been made of this linkage. Because Antarctic SSWs are less common, there are fewer opportunities to study their effects on the Northern Hemisphere. However, the greater density of TEC observation sites in the Northern Hemisphere allows for precise measurement of these upper-atmospheric anomalies when they do occur.
In September 2019, an extreme, record-breaking SSW event occurred over Antarctica. Goncharenko and colleagues found significant resulting changes in the upper atmosphere at mid-latitudes over the Northern Hemisphere following this event; more observations are available for this region than for the Southern Hemisphere. The changes were notable not only in severity, but also because they were limited to a narrow (20–40 degree) longitude range, differed between North America and Europe, and persisted for a long time.
In the figure above, red areas show where afternoon TEC levels shifted over North America and Europe; red indicates an increase of up to 80 percent versus baseline levels, and blue indicates a decrease of up to 40 percent. This TEC shift persisted throughout September 2019 over the western United States, but was short-lived over Europe, indicating different mechanisms at play.
The authors suggest that a change in the thermospheric zonal (east–west) winds is one reason for the variance between regions. Another factor is differences in magnetic declination angles; in areas with greater declination, the zonal winds can more efficiently transport plasma to higher or lower altitudes, leading to the build-up or depletion of plasma density.
More study is needed to determine the precise extent to which these factors affect the linkage between polar stratospheric events and near-Earth space in the opposite hemisphere. These studies remain a challenge, given the relative rarity of Antarctic SSWs and sparse availability of ionospheric data in the Southern Hemisphere.
As the Covid-19 pandemic has shown, we live in a richly connected world, facilitating not only the efficient spread of a virus but also of information and influence. What can we learn by analyzing these connections? This is a core question of network science, a field of research that models interactions across physical, biological, social, and information systems to solve problems.
The 2021 Graph Exploitation Symposium (GraphEx), hosted by MIT Lincoln Laboratory, brought together top network science researchers to share the latest advances and applications in the field.
"We explore and identify how exploitation of graph data can offer key technology enablers to solve the most pressing problems our nation faces today," says Edward Kao, a symposium organizer and technical staff in Lincoln Laboratory's AI Software Architectures and Algorithms Group.
The themes of the virtual event revolved around some of the year's most relevant issues, such as analyzing disinformation on social media, modeling the pandemic's spread, and using graph-based machine learning models to speed drug design.
"The special sessions on influence operations and Covid-19 at GraphEx reflect the relevance of network and graph-based analysis for understanding the phenomenology of these complicated and impactful aspects of modern-day life, and also may suggest paths forward as we learn more and more about graph manipulation," says William Streilein, who co-chaired the event with Rajmonda Caceres, both of Lincoln Laboratory.
Several presentations at the symposium focused on the role of network science in analyzing influence operations (IO), or organized attempts by state and/or non-state actors to spread disinformation narratives.
Lincoln Laboratory researchers have been developing tools to classify and quantify the influence of social media accounts that are likely IO accounts, such as those willfully spreading false Covid-19 treatments to vulnerable populations.
"A cluster of IO accounts acts as an echo chamber to amplify the narrative. The vulnerable population is then engaging in these narratives," says Erika Mackin, a researcher developing the tool, called RIO or Reconnaissance of Influence Operations.
To classify IO accounts, Mackin and her team trained an algorithm to detect probable IO accounts in Twitter networks based on a specific hashtag or narrative. One example they studied was #MacronLeaks, a disinformation campaign targeting Emmanuel Macron during the 2017 French presidential election. The algorithm is trained to label accounts within this network as being IO on the basis of several factors, such as the number of interactions with foreign news accounts, the number of links tweeted, or number of languages used. Their model then uses a statistical approach to score an account's level of influence in spreading the narrative within that network.
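As a toy illustration of the kind of feature-based scoring Mackin describes, the sketch below combines three hypothetical account features into a logistic score. The feature names, weights, and bias are all invented for illustration; this is not the actual RIO model.

```python
import math

# Hypothetical feature weights -- illustrative only, not the RIO model.
WEIGHTS = {
    "foreign_news_interactions": 0.8,
    "links_tweeted": 0.05,
    "languages_used": 0.6,
}
BIAS = -3.0

def io_score(features):
    """Logistic score in [0, 1]: higher means more IO-like (toy model)."""
    z = BIAS + sum(WEIGHTS[k] * features.get(k, 0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))

account = {"foreign_news_interactions": 4, "links_tweeted": 20, "languages_used": 3}
print(round(io_score(account), 3))  # -> 0.953
```

A real classifier would learn such weights from labeled accounts rather than fixing them by hand, and would then score an account's influence within the network separately.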
The team has found that their classifier outperforms existing detectors of IO accounts, because it can identify both bot accounts and human-operated ones. They've also discovered that IO accounts that pushed the 2017 French election disinformation narrative largely overlap with accounts now influential in spreading Covid-19 disinformation. "This suggests that these accounts will continue to transition to disinformation narratives," Mackin says.
Throughout the Covid-19 pandemic, leaders have been looking to epidemiological models, which predict how disease will spread, to make sound decisions. Alessandro Vespignani, director of the Network Science Institute at Northeastern University, has been leading Covid-19 modeling efforts in the United States, and shared a keynote on this work at the symposium.
Besides taking into account the biological facts of the disease, such as its incubation period, Vespignani's model is especially powerful in its inclusion of community behavior. To run realistic simulations of disease spread, he develops "synthetic populations" that are built by using publicly available, highly detailed datasets about U.S. households. "We create a population that is not real, but is statistically real, and generate a map of the interactions of those individuals," he says. This information feeds back into the model to predict the spread of the disease.
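The synthetic-population approach rests on simulating disease spread over a network of contacts. A stdlib-only sketch of that core idea, an SIR (susceptible-infected-recovered) process on a random contact graph with all parameters invented for illustration, might look like:

```python
import random

def simulate_sir(n=200, avg_contacts=8, p_transmit=0.05, p_recover=0.1,
                 seed_infected=5, steps=100, rng_seed=42):
    """Toy SIR epidemic on a random contact network (illustration only)."""
    rng = random.Random(rng_seed)
    # Build a random contact graph: each node linked to roughly avg_contacts others.
    neighbors = [set() for _ in range(n)]
    for i in range(n):
        for _ in range(avg_contacts // 2):
            j = rng.randrange(n)
            if j != i:
                neighbors[i].add(j)
                neighbors[j].add(i)
    state = ["S"] * n
    for i in rng.sample(range(n), seed_infected):
        state[i] = "I"
    for _ in range(steps):
        new_state = state[:]
        for i in range(n):
            if state[i] == "I":
                if rng.random() < p_recover:
                    new_state[i] = "R"  # infected node recovers
                for j in neighbors[i]:
                    if state[j] == "S" and rng.random() < p_transmit:
                        new_state[j] = "I"  # infection passes along a contact edge
        state = new_state
    return state.count("S"), state.count("I"), state.count("R")

s, i, r = simulate_sir()
print(f"susceptible={s} infected={i} recovered={r}")
```

Models like the one described above replace this random graph with data-driven synthetic populations and calibrate transmission and recovery parameters to the actual disease.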
Today, Vespignani is considering how to integrate genomic analysis of the virus into this kind of population modeling in order to understand how variants are spreading. "It's still a work in progress that is extremely interesting," he says, adding that this approach has been useful in modeling the dispersal of the Delta variant of SARS-CoV-2.
As researchers model the virus' spread, Lucas Laird at Lincoln Laboratory is considering how network science can be used to design effective control strategies. He and his team are developing a model for customizing strategies for different geographic regions. The effort was spurred by the differences in Covid-19 spread across U.S. communities, and what the researchers found to be a gap in intervention modeling to address those differences.
As examples, they applied their planning algorithm to three counties in Florida, Massachusetts, and California. Taking into account the characteristics of a specific geographic center, such as the number of susceptible individuals and number of infections there, their planner institutes different strategies in those communities throughout the outbreak duration.
"Our approach eradicates disease in 100 days, but it also is able to do it with much more targeted interventions than any of the global interventions. In other words, you don't have to shut down a full country." Laird adds that their planner offers a "sandbox environment" for exploring intervention strategies in the future.
Machine learning with graphs
Graph-based machine learning is receiving increasing attention for its potential to "learn" the complex relationships within graph-structured data, and thus extract new insights or predictions about those relationships. This interest has given rise to a new class of algorithms called graph neural networks. Today, graph neural networks are being applied in areas such as drug discovery and material design, with promising results.
"We can now apply deep learning much more broadly, not only to medical images and biological sequences. This creates new opportunities in data-rich biology and medicine," says Marinka Zitnik, an assistant professor at Harvard University who presented her research at GraphEx.
Zitnik's research focuses on the rich networks of interactions between proteins, drugs, disease, and patients, at the scale of billions of interactions. One application of this research is discovering drugs to treat diseases with no or few approved drug treatments, such as for Covid-19. In April, Zitnik's team published a paper on their research that used graph neural networks to rank 6,340 drugs for their expected efficacy against SARS-CoV-2, identifying four that could be repurposed to treat Covid-19.
At Lincoln Laboratory, researchers are similarly applying graph neural networks to the challenge of designing advanced materials, such as those that can withstand extreme radiation or capture carbon dioxide. Like the process of designing drugs, the trial-and-error approach to materials design is time-consuming and costly. The laboratory's team is developing graph neural networks that can learn relationships between a material’s crystalline structure and its properties. This network can then be used to predict a variety of properties from any new crystal structure, greatly speeding up the process of screening materials with desired properties for specific applications.
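The laboratory's specific architecture isn't described here, but the basic mechanics of a graph neural network for this kind of task, message passing over a bond graph followed by permutation-invariant pooling into a single property prediction, can be sketched with plain NumPy. The weights below are random stand-ins for learned parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

def message_passing_layer(node_feats, adjacency, weight):
    """One graph-convolution step: each node averages its neighbors'
    features (plus its own), then applies a linear map and ReLU."""
    n = adjacency.shape[0]
    a_hat = adjacency + np.eye(n)               # add self-loops
    deg = a_hat.sum(axis=1, keepdims=True)
    h = (a_hat / deg) @ node_feats              # neighborhood mean
    return np.maximum(h @ weight, 0.0)          # linear + ReLU

def predict_property(node_feats, adjacency, weights, readout_w):
    """Stack layers, then pool node embeddings into one graph-level
    scalar (e.g., a predicted material property)."""
    h = node_feats
    for w in weights:
        h = message_passing_layer(h, adjacency, w)
    graph_embedding = h.mean(axis=0)            # permutation-invariant pooling
    return float(graph_embedding @ readout_w)

# Toy "crystal": 4 atoms in a ring-shaped bond graph, 3 features per atom.
adj = np.array([[0, 1, 0, 1],
                [1, 0, 1, 0],
                [0, 1, 0, 1],
                [1, 0, 1, 0]], dtype=float)
feats = rng.normal(size=(4, 3))
weights = [rng.normal(size=(3, 8)), rng.normal(size=(8, 8))]
readout = rng.normal(size=8)
print(predict_property(feats, adj, weights, readout))
```

The key property is that relabeling the atoms leaves the prediction unchanged, so the network learns from the structure itself rather than from any arbitrary atom ordering, which is what makes rapid screening of new crystal structures possible.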
"Graph representation learning has emerged as a rich and thriving research area for incorporating inductive bias and structured priors during the machine learning process, with broad applications such as drug design, accelerated scientific discovery, and personalized recommendation systems," Caceres says.
A vibrant community
Lincoln Laboratory has hosted the GraphEx Symposium annually since 2010, with the exception of last year's cancellation due to Covid-19. "One key takeaway is that despite the postponement from last year and the need to be virtual, the GraphEx community is as vibrant and active as it's ever been," Streilein says. "Network-based analysis continues to expand its reach and is applied to ever-more important areas of science, society, and defense with increasing impact."
In addition to those from Lincoln Laboratory, technical committee members and co-chairs of the GraphEx Symposium included researchers from Harvard University, Arizona State University, Stanford University, Smith College, Duke University, the U.S. Department of Defense, and Sandia National Laboratories.
MIT physicists have observed signs of a rare type of superconductivity in a material called magic-angle twisted trilayer graphene. In a study appearing today in Nature, the researchers report that the material exhibits superconductivity at surprisingly high magnetic fields of up to 10 Tesla, which is three times higher than what the material is predicted to endure if it were a conventional superconductor.
The results strongly imply that magic-angle trilayer graphene, which was initially discovered by the same group, is a very rare type of superconductor, known as a “spin-triplet,” that is impervious to high magnetic fields. Such exotic superconductors could vastly improve technologies such as magnetic resonance imaging, which uses superconducting wires under a magnetic field to resonate with and image biological tissue. MRI machines are currently limited to magnetic fields of 1 to 3 Tesla. If they could be built with spin-triplet superconductors, MRI could operate under higher magnetic fields to produce sharper, deeper images of the human body.
The new evidence of spin-triplet superconductivity in trilayer graphene could also help scientists design stronger superconductors for practical quantum computing.
“The value of this experiment is what it teaches us about fundamental superconductivity, about how materials can behave, so that with those lessons learned, we can try to design principles for other materials which would be easier to manufacture, that could perhaps give you better superconductivity,” says Pablo Jarillo-Herrero, the Cecil and Ida Green Professor of Physics at MIT.
His co-authors on the paper include postdoc Yuan Cao and graduate student Jeong Min Park at MIT, and Kenji Watanabe and Takashi Taniguchi of the National Institute for Materials Science in Japan.
Superconducting materials are defined by their super-efficient ability to conduct electricity without losing energy. When exposed to an electric current, electrons in a superconductor couple up in “Cooper pairs” that then travel through the material without resistance, like passengers on an express train.
In a vast majority of superconductors, these passenger pairs have opposite spins, with one electron spinning up, and the other down — a configuration known as a “spin-singlet.” These pairs happily speed through a superconductor, except under high magnetic fields, which can shift the energy of each electron in opposite directions, pulling the pair apart. In this way, and through other mechanisms, high magnetic fields can derail superconductivity in conventional spin-singlet superconductors.
“That’s the ultimate reason why in a large-enough magnetic field, superconductivity disappears,” Park says.
But there exists a handful of exotic superconductors that are impervious to magnetic fields, up to very large strengths. These materials superconduct through pairs of electrons with the same spin — a property known as “spin-triplet.” When exposed to high magnetic fields, the energies of both electrons in a Cooper pair shift in the same direction, so the pair is not pulled apart but continues superconducting unperturbed, regardless of the magnetic field’s strength.
Jarillo-Herrero’s group was curious whether magic-angle trilayer graphene might harbor signs of this more unusual spin-triplet superconductivity. The team has produced pioneering work in the study of graphene moiré structures — layers of atom-thin carbon lattices that, when stacked at specific angles, can give rise to surprising electronic behaviors.
The researchers initially reported such curious properties in two angled sheets of graphene, which they dubbed magic-angle bilayer graphene. They soon followed up with tests of trilayer graphene, a sandwich configuration of three graphene sheets that turned out to be even stronger than its bilayer counterpart, retaining superconductivity at higher temperatures. When the researchers applied a modest magnetic field, they noticed that trilayer graphene was able to superconduct at field strengths that would destroy superconductivity in bilayer graphene.
“We thought, this is something very strange,” Jarillo-Herrero says.
A super comeback
In their new study, the physicists tested trilayer graphene’s superconductivity under increasingly higher magnetic fields. They fabricated the material by peeling away atom-thin layers of carbon from a block of graphite, stacking three layers together, and rotating the middle one by 1.56 degrees with respect to the outer layers. They attached an electrode to either end of the material to run a current through and measure any energy lost in the process. Then they turned on a large magnet in the lab, with a field which they oriented parallel to the material.
As they increased the magnetic field around trilayer graphene, they observed that superconductivity held strong up to a point before disappearing, but then curiously reappeared at higher field strengths — a comeback that is highly unusual and not known to occur in conventional spin-singlet superconductors.
“In spin-singlet superconductors, if you kill superconductivity, it never comes back — it’s gone for good,” Cao says. “Here, it reappeared again. So this definitely says this material is not spin-singlet.”
They also observed that after “re-entry,” superconductivity persisted up to 10 Tesla, the maximum field strength that the lab’s magnet could produce. This is about three times higher than what the superconductor should withstand if it were a conventional spin-singlet, according to the Pauli limit, a theory that predicts the maximum magnetic field at which a material can retain superconductivity.
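For a conventional spin-singlet superconductor, the Pauli (Clogston-Chandrasekhar) limit in the weak-coupling BCS picture is approximately B_P ≈ 1.86 tesla per kelvin of critical temperature. The article does not give trilayer graphene's critical temperature, so the value used below is purely hypothetical, chosen only to illustrate the "about three times" scale:

```python
def pauli_limit_tesla(tc_kelvin):
    """Weak-coupling BCS estimate of the Pauli (Clogston-Chandrasekhar)
    paramagnetic limit: B_P ~ 1.86 T/K * Tc."""
    return 1.86 * tc_kelvin

# Hypothetical example: a Tc of 2 K would give a Pauli limit near 3.7 T,
# so superconductivity surviving to 10 T would sit roughly 3x beyond it.
print(pauli_limit_tesla(2.0))  # -> 3.72
```

Exceeding this limit by such a wide margin is exactly the kind of evidence that points away from spin-singlet pairing.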
Trilayer graphene’s reappearance of superconductivity, paired with its persistence at higher magnetic fields than predicted, rules out the possibility that the material is a run-of-the-mill superconductor. Instead, it is likely a very rare type, possibly a spin-triplet, hosting Cooper pairs that speed through the material, impervious to high magnetic fields. The team plans to drill down on the material to confirm its exact spin state, which could help to inform the design of more powerful MRI machines, and also more robust quantum computers.
“Regular quantum computing is super fragile,” Jarillo-Herrero says. “You look at it and, poof, it disappears. About 20 years ago, theorists proposed a type of topological superconductivity that, if realized in any material, could [enable] a quantum computer where states responsible for computation are very robust. That would give infinite more power to do computing. The key ingredient to realize that would be spin-triplet superconductors, of a certain type. We have no idea if our type is of that type. But even if it’s not, this could make it easier to put trilayer graphene with other materials to engineer that kind of superconductivity. That could be a major breakthrough. But it’s still super early.”
This research was supported by the U.S. Department of Energy, the National Science Foundation, the Gordon and Betty Moore Foundation, the Fundacion Ramon Areces, and the CIFAR Quantum Materials Program.
A critical challenge in meeting the Paris Agreement’s long-term goal of keeping global warming well below 2 degrees Celsius is to vastly reduce carbon dioxide (CO2) and other greenhouse gas emissions generated by the most energy-intensive industries. According to a recent report by the International Energy Agency, these industries — cement, iron and steel, chemicals — account for about 20 percent of global CO2 emissions. Emissions from these industries are notoriously difficult to abate because, in addition to emissions associated with energy use, a significant portion of industrial emissions come from the process itself.
For example, in the cement industry, about half the emissions come from the decomposition of limestone into lime and CO2. While a shift to zero-carbon energy sources such as solar or wind-powered electricity could lower CO2 emissions in the power sector, there are no easy substitutes for emissions-intensive industrial processes.
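The limestone figure follows directly from stoichiometry: calcining calcium carbonate releases CO2 regardless of how the kiln is heated, which is why clean electricity alone cannot abate this portion of the emissions. A quick check of the mass balance (molar masses are standard values):

```python
# Calcination, the process-emission step in cement making:
#   CaCO3 (limestone) -> CaO (lime) + CO2
M_CACO3 = 100.09   # g/mol
M_CO2 = 44.01      # g/mol

co2_mass_fraction = M_CO2 / M_CACO3
# ~0.44: each tonne of limestone calcined releases roughly 440 kg of CO2
print(round(co2_mass_fraction, 3))
```

Those ~440 kg per tonne are emitted by the chemistry itself, on top of whatever CO2 comes from fueling the kiln.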
Enter industrial carbon capture and storage (CCS). This technology, which extracts point-source carbon emissions and sequesters them underground, has the potential to remove 90 to 99 percent of CO2 emissions from an industrial facility, including both energy-related and process emissions. That raises the question: Might CCS alone enable hard-to-abate industries to continue to grow while keeping nearly all of the CO2 emissions they generate out of the atmosphere?
The answer is an unequivocal yes in a new study in the journal Applied Energy co-authored by researchers at the MIT Joint Program on the Science and Policy of Global Change, MIT Energy Initiative, and ExxonMobil.
Using an enhanced version of the MIT Economic Projection and Policy Analysis (EPPA) model that represents different industrial CCS technology choices — and assuming that CCS is the only greenhouse gas emissions mitigation option available to hard-to-abate industries — the study assesses the long-term economic and environmental impacts of CCS deployment under a climate policy aimed at capping the rise in average global surface temperature at 2 C above preindustrial levels.
The researchers find that absent industrial CCS deployment, the global costs of implementing the 2 C policy are higher by 12 percent in 2075 and 71 percent in 2100, relative to policy costs with CCS. They conclude that industrial CCS enables continued growth in the production and consumption of energy-intensive goods from hard-to-abate industries, along with dramatic reductions in the CO2 emissions they generate. Their projections show that as industrial CCS gains traction mid-century, this growth occurs globally as well as within geographical regions (primarily in China, Europe, and the United States) and the cement, iron and steel, and chemical sectors.
“Because it can enable deep reductions in industrial emissions, industrial CCS is an essential mitigation option in the successful implementation of policies aligned with the Paris Agreement’s long-term climate targets,” says Sergey Paltsev, the study’s lead author and a deputy director of the MIT Joint Program and senior research scientist at the MIT Energy Initiative. “As the technology advances, our modeling approach offers decision-makers a pathway for projecting the deployment of industrial CCS across industries and regions.”
But such advances will not take place without substantial, ongoing funding.
“Sustained government policy support across decades will be needed if CCS is to realize its potential to promote the growth of energy-intensive industries and a stable climate,” says Howard Herzog, a co-author of the study and senior research engineer at the MIT Energy Initiative.
The researchers also find that advanced CCS options such as cryogenic carbon capture (CCC), in which extracted CO2 is cooled to solid form using far less power than conventional coal- and gas-fired CCS technologies, could help expand the use of CCS in industrial settings through further production cost and emissions reductions.
The study was supported by sponsors of the MIT Joint Program and by ExxonMobil through its membership in the MIT Energy Initiative.
Professor Emeritus Justin “Jake” Kerwin, an expert in propeller design and ship hydrodynamics, dies at 90
Justin “Jake” Kerwin ’53, SM ’54, PhD ’61, professor emeritus of naval architecture, passed away at the age of 90 on May 23. Kerwin, who served on MIT’s ocean engineering faculty for four decades, was an internationally recognized expert in propeller design, ship hydrodynamics, and predicting racing yacht performance.
Kerwin had an international upbringing, growing up in the Netherlands, London, and eventually New York. He first arrived at MIT as an undergraduate in 1949. In addition to studying naval architecture, Kerwin was an avid sailor and member of the MIT Sailing Team. His passion for sailing would carry throughout his career.
After receiving his bachelor’s degree from MIT in 1953 and his master’s degree in 1954, he was named a Fulbright Scholar. For his scholarship, he returned to the Netherlands, where he studied marine propeller hydrodynamics at the Delft University of Technology. Upon completing his Fulbright, Kerwin joined the U.S. Air Force as a first lieutenant. During his time in the Air Force, he worked on rescue boats.
Kerwin returned to MIT in 1957 to pursue his doctoral degree in marine propeller hydrodynamics while serving as a full-time lecturer. He was invited to join what was then the Department of Naval Architecture and Marine Engineering (now part of the Department of Mechanical Engineering) as an assistant professor in 1960, one year before receiving his PhD.
For 40 years, Kerwin led the marine propeller research program at MIT. He was a pioneer in the use of computational techniques for marine propeller design and helped develop an open-source code used in propeller and turbine design. He also served as director of the Marine Hydrodynamics Water Tunnel, a water tank originally used for testing ship propellers.
In addition to propeller research, Kerwin conducted research on his lifelong passion of sailing. Alongside fellow faculty member Professor J.N. “Nick” Newman, he co-organized the H. Irving Pratt Ocean Racing Handicapping Project. The project greatly improved predictions of the speed of sailing yachts and resulted in the International Measurement System of handicapping yachts during races. He also pursued his passion in his personal life, often sailing and racing his sailboat “Chantey” with his family.
Throughout his long career, Kerwin was celebrated with a number of prestigious awards. He was a member of the Society of Naval Architects and Marine Engineers (SNAME) and received SNAME’s Joseph H. Linnard Prize for exceptional publications four times. Kerwin was awarded the David W. Taylor Medal for outstanding achievements in naval architecture in 1992. Several years later, he was honored with the Gibbs Brothers Medal from the National Academy of Sciences for outstanding contributions in the field of naval architecture and marine engineering. In 2000, he was elected to the National Academy of Engineering.
After retiring as professor emeritus in 2001, Kerwin and his wife Marilyn played jazz alongside fellow retired MIT ocean engineering faculty in a band known as the “Ancient Mariners.” He served as pianist and she played bass. The band was extremely active, playing gigs across New England and throughout the US.
Kerwin’s beloved wife Marilyn passed away just one month after him, on June 21. They are survived by their daughter Melinda and son John. A private celebration of life event has been organized by the Kerwin family.
“You get the high field, you get the performance.”
Senior Research Scientist Brian LaBombard is summarizing what might be considered a guiding philosophy behind designing and engineering fusion devices at MIT’s Plasma Science and Fusion Center (PSFC). Beginning in 1972 with the Alcator A tokamak, through Alcator C (1978) and Alcator C-Mod (1991), the PSFC has used magnets with high fields to confine the hot plasma in compact, high-performance tokamaks. Joining what was then the Plasma Fusion Center as a graduate student in 1978, just as Alcator A was finishing its run, LaBombard is one of the few who has worked with each iteration of the high-field concept. Now he has turned his attention to the PSFC’s latest fusion venture, a fusion energy project called SPARC.
Designed in collaboration with MIT spinoff Commonwealth Fusion Systems (CFS), SPARC employs novel high-temperature superconducting (HTS) magnets at high field to achieve fusion that will produce net energy gain. Some of these magnets will wrap toroidally around the tokamak’s doughnut-shaped vacuum chamber, confining fusion reactions and preventing damage to the walls of the device.
The PSFC has spent three years researching, developing, and manufacturing a scaled version of these toroidal field (TF) coils — the toroidal field model coil, or TFMC. Before the TF coils can be built for SPARC, LaBombard and his team need to test the model coil under the conditions that it will experience in this tokamak.
HTS magnets need to be cooled in order to remain superconducting, and to be protected from the heat generated by current. For testing, the TFMC will be enclosed in a cryostat, cooled to the low temperatures needed for eventual tokamak operation, and charged with current to produce magnetic field. How the magnet responds as the current is provided to the coil will determine if the technology is in hand to construct the 18 TF coils for SPARC.
A history of achievement
That LaBombard is part of the PSFC’s next fusion project is not unusual; that he is involved in designing, engineering, and testing the magnets is. Until 2018, when he led the R&D research team for one of the magnet designs being considered for SPARC, LaBombard’s 30-plus years of celebrated research had focused on other areas of the fusion question.
As a graduate student, he gained early acclaim for the research he reported in his PhD thesis. Working on Alcator C, he made groundbreaking discoveries about the plasma physics in the “boundary” region of the tokamak, between the edge of the fusing core and the wall of the machine. With typical modesty, LaBombard credits some of his success to the fact that the topic was not well-studied, and that Alcator C provided measurements not possible on other machines.
“People knew about the boundary, but nobody was really studying it in detail. On Alcator C, there were interesting phenomena, such as marfes [multifaceted asymmetric radiation from the edge], being detected for the first time. This pushed me to make boundary layer measurements in great detail that no one had ever seen before. It was all new territory, so I made a big splash.”
That splash established him as a leading researcher in the field of boundary plasmas. After a two-year turn at the University of California at Los Angeles working on a plasma-wall test facility called PISCES, LaBombard, who grew up in New England, was happy to return to MIT to join the PSFC’s new Alcator C-Mod project.
Over the next 28 years of C-Mod’s construction phase and operation, LaBombard continued to make groundbreaking contributions to understanding tokamak edge and divertor plasmas, and to design internal components that can survive the harsh conditions and provide plasma control — including C-Mod’s vertical target plate divertor and a unique divertor cryopump system. That experience led him to conceive of the "X-point target divertor" for handling extreme fusion power exhaust and to propose a national Advanced Divertor tokamak eXperiment (ADX) to test such ideas.
All along, LaBombard’s true passion was in creating revolutionary diagnostics to unfold boundary layer physics and in guiding graduate students to do the same: an Omegatron, to measure impurity concentrations directly in the boundary plasma, resolved by charge-to-mass ratio; fast-scanning Langmuir-Mach probes to measure plasma flows; a Shoelace Antenna to provide insight into plasma fluctuations at the edge; the invention of a Mirror Langmuir Probe for the real-time measurements of plasma turbulence at high bandwidth.
His expertise established, he could have continued this focus on the edge of the plasma through collaborations with other laboratories and at the PSFC. Instead, he finds himself on the other side of the vacuum chamber, immersed in magnet design and technology. Challenged with finding an effective HTS magnet design for SPARC, he and his team were able to propose a winning strategy, one that seemed most likely to achieve the compact high field and high performance that PSFC tokamaks have been known for.
LaBombard is stimulated by his new direction and excited about the upcoming test of the TFMC. His new role takes advantage of his physics background in electricity and magnetism. It also supports his passion for designing and building things, which he honed as high school apprentice to his machinist father and explored professionally building systems for Alcator C-Mod.
“I view my principal role is to make sure the TF coil works electrically, the way it's supposed to,” he says. “So it produces the magnetic field without damaging the coil.”
A successful test would validate the understanding of how the new magnet technology works, and would prepare the team to build magnets for SPARC.
Among those overseeing the hours of TFMC testing will be graduate students, current and former, reminding LaBombard of his own student days working on Alcator C, and of his years supervising students on Alcator C-Mod.
“Those students were directly involved with Alcator C-Mod. They would jump in, make things happen — and as a team. This team spirit really enabled everyone to excel.
“And looking to when SPARC was taking shape, you could see that across the board, from the new folks to the younger folks, they really got engaged by the spirit of Alcator — by recognition of the plasma performance that can be made possible by high magnetic fields.”
He laughs as he looks to the past and to the future.
“And they are taking it to SPARC.”
Paul Lagacé, a professor of aeronautics and astronautics at MIT, died July 16 in his home in Wilmington, Massachusetts. He was 63.
A longtime member of the MIT community, Lagacé graduated from Course 16 (aeronautics and astronautics) with his bachelor's degree in 1978, his master's in 1979, and his PhD in 1982. He joined the faculty in the Department of Aeronautics and Astronautics in 1982.
Lagacé’s research focused on the design and manufacture of composite structures and materials mainly used in the aerospace industry. The work of his research laboratory, the Technology Laboratory for Advanced Materials and Structures, or TELAMS, ranged from characterizing a basic understanding of composite materials to exploring their behavior in specific structural configurations to computational modeling in solid mechanics. The lab also worked on the design, fabrication, and testing of micro-electromechanical systems (MEMS), along with their associated materials and processes.
"Paul's most significant research contributions were in building an intellectual bridge between the material properties of emerging advanced composites, and their application to and certification in aircraft structures," says Edward Crawley, the Ford Professor of Engineering and a longtime colleague of Lagacé in the Department of Aeronautics and Astronautics.
Lagacé was widely recognized for his research expertise, particularly as it applied to the response and failure of composite structures, and the development of composite structures technology and the safety of aircraft structural systems. He was highly sought-after as an advisor and consultant to industry and government agencies on aspects of structural technology and broader engineering systems. He served as a consultant, expert witness, and member of committees and panels in the investigation of accidents and their implications.
Lagacé held fellowships with the American Institute of Aeronautics and Astronautics, the American Society for Composites, and the American Society for Testing and Materials (now known as ASTM International). He served as president of the International Committee on Composite Materials and was recognized as a World Fellow of Composites and Honorary Member of the Executive Council.
In addition to his research, he was actively involved in education and service to MIT. Lagacé taught courses in mechanics of materials and structures with special emphasis on composite materials and their structures. In 1995, he was named a MacVicar Faculty Fellow, an honor that recognizes outstanding classroom teaching, significant innovations in education, and dedication to helping others achieve teaching excellence. He served as co-director of the MIT Leaders for Manufacturing (LFM — now Leaders for Global Operations, or LGO) and Systems Design and Management (SDM) programs, which are both co-sponsored by the School of Engineering and the Sloan School of Management. Drawing on his own experience, Lagacé was instrumental in launching MIT’s First-Generation Program. For many years, he was the lead faculty marshal during MIT’s commencement celebration, leading the faculty procession at the beginning and end of the ceremony.
Lagacé, a first-generation college student, grew up in Lewiston, Maine. He described his hometown as a “blue-collar town” that attracted many French-Canadian immigrants like his grandparents to work in the mills that used energy generated from the nearby Androscoggin River to produce shoes, textiles, and bricks. Lagacé’s parents — a self-employed painter and paper-hanger and a stay-at-home mother who worked part-time as a bookkeeper for her husband’s business — supported his goal to further his education, making it possible for him to attend a Jesuit high school and expanding his horizons by giving him a book about colleges one year for his birthday. Growing up during the advent of the Space Age had a profound effect on Lagacé; his interest in aerospace coupled with his academic strengths in math and science led him to MIT.
“In my many years of working with Paul, it was always clear to me that he loved MIT and he especially loved our students,” says Daniel Hastings, head of the Department of Aeronautics and Astronautics and Cecil and Ida Green Education Professor.
Outside of MIT, Lagacé was a passionate sports fan particularly devoted to the Boston Red Sox. After seeing his first game at Fenway Park with his grandfather in 1968, Lagacé wove the Red Sox into his life in countless ways, from lining up a business trip with their travel game schedule to taking his future wife to a game on their first date. To the delight of local media, Lagacé also found a way to integrate his love of the Red Sox with his aeronautical knowledge into a real-life problem set for his students.
In the early 1990s, Lagacé observed that fewer balls seemed to reach the center field stands. He worked with his undergraduate students to construct a model of Fenway Park, which they then tested in MIT’s Wright Brothers Wind Tunnel to simulate the wind and baseball trajectory pathways. He concluded that a recently constructed press box created a wind vortex that prevented baseballs from reaching as far as they used to.
Lagacé is survived by his wife of 38 years, Robin, his brother, Daniel, and sister-in-law, Elyse, as well as aunts, uncles, relatives, and friends. Donations can be made in Paul's memory to the Epilepsy Foundation or the Jimmy Fund.
Since the Covid-19 pandemic began last year, face masks and other personal protective equipment have become essential for health care workers. Disposable N95 masks have been in especially high demand to help prevent the spread of SARS-CoV-2, the virus that causes Covid-19.
All of those masks carry both financial and environmental costs. The Covid-19 pandemic is estimated to generate up to 7,200 tons of medical waste every day, much of which is disposable masks. And even as the pandemic slows down in some parts of the world, health care workers are expected to continue wearing masks most of the time.
That toll could be dramatically cut by adopting reusable masks, according to a new study from MIT that has calculated the financial and environmental cost of several different mask usage scenarios. Decontaminating regular N95 masks so that health care workers can wear them for more than one day drops costs and environmental waste by at least 75 percent, compared to using a new mask for every encounter with a patient.
“Perhaps unsurprisingly, the approaches that incorporate reusable aspects stand to have not only the greatest cost savings, but also significant reduction in waste,” says Giovanni Traverso, an MIT assistant professor of mechanical engineering, a gastroenterologist at Brigham and Women’s Hospital, and the senior author of the study.
The study also found that fully reusable silicone N95 masks could offer an even greater reduction in waste. Traverso and his colleagues are now working on developing such masks, which are not yet commercially available.
Jacqueline Chu, a physician at Massachusetts General Hospital, is the lead author of the study, which appears in the British Medical Journal Open.
Reduce and reuse
In the early stages of the Covid-19 pandemic, N95 masks were in short supply. At many hospitals, health care workers were forced to wear one mask for a full day, instead of switching to a new one for each patient they saw. Later on, some hospitals, including MGH and Brigham and Women’s Hospital in Boston, began using decontamination systems that use hydrogen peroxide vapor to sterilize masks. This allows one mask to be worn for a few days.
Last year, Traverso and his colleagues began developing a reusable N95 mask that is made of silicone rubber and contains an N95 filter that can be either discarded or sterilized after use. The masks are designed so they can be sterilized with heat or bleach and reused many times.
“Our vision was that if we had a reusable system, we could reduce the cost,” Traverso says. “The majority of disposable masks also have a significant environmental impact, and they take a very long time to degrade. During a pandemic, there’s a priority to protect people from the virus, and certainly that remains a priority, but for the longer term, we have to catch up and do the right thing, and strongly consider and minimize the potential negative impact on the environment.”
Throughout the pandemic, hospitals in the United States have been using different mask strategies, based on availability of N95 masks and access to decontamination systems. The MIT team decided to model the impacts of several different scenarios, which encompassed usage patterns before and during the pandemic, including: one N95 mask per patient encounter; one N95 mask per day; reuse of N95 masks using ultraviolet decontamination; reuse of N95 masks using hydrogen peroxide sterilization; and one surgical mask per day.
They also modeled the potential cost and waste generated by the reusable silicone mask that they are now developing, which could be used with either disposable or reusable N95 filters.
According to their analysis, if every health care worker in the United States used a new N95 mask for each patient they encountered during the first six months of the pandemic, the total number of masks required would be about 7.4 billion, at a cost of $6.4 billion. This would lead to 84 million kilograms of waste (the equivalent of 252 Boeing 747 airplanes).
They also found that any of the reusable mask strategies would lead to a significant reduction in cost and in waste generated. If each health care worker were able to reuse N95 masks that were decontaminated with hydrogen peroxide or ultraviolet light, costs would drop to $1.4 billion to $1.7 billion over six months, and 13 million to 18 million kilograms of waste would result (the equivalent of 39 to 56 747s).
Those numbers could potentially be reduced even further with a reusable, silicone N95 mask, especially if the filters were also reusable. The researchers estimated that over six months, this type of mask could reduce costs to $831 million and waste to 1.6 million kilograms (about five 747s).
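The scenario comparison above can be reduced to a simple per-mask model. A minimal sketch, with per-mask cost and weight back-derived from the article's own totals (7.4 billion masks, $6.4 billion, 84 million kilograms); these derived constants are illustrative assumptions, not figures taken from the paper itself:

```python
# Back-of-envelope model of the mask-scenario comparison described above.
# Per-mask constants are derived from the article's aggregate figures and
# are assumptions for illustration only.

MASK_COST_USD = 6.4e9 / 7.4e9    # ~= $0.86 per disposable N95 mask
MASK_WEIGHT_KG = 8.4e7 / 7.4e9   # ~= 11 g per mask
KG_PER_747 = 8.4e7 / 252         # the article's waste-to-747 conversion

def scenario(n_masks):
    """Return (total cost in USD, waste in kg, 747-equivalents)
    for a usage pattern consuming n_masks over six months."""
    cost = n_masks * MASK_COST_USD
    waste = n_masks * MASK_WEIGHT_KG
    return cost, waste, waste / KG_PER_747

# One new mask per patient encounter (the baseline scenario):
cost, waste, jumbos = scenario(7.4e9)
print(f"${cost / 1e9:.1f}B, {waste / 1e6:.0f}M kg, {jumbos:.0f} 747s")
# -> $6.4B, 84M kg, 252 747s
```

Plugging in smaller mask counts for the decontamination and reusable-silicone scenarios reproduces the same cost-and-waste reductions the study reports, since both scale linearly with the number of disposable masks consumed.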
“Masks are here to stay for the foreseeable future, so it’s critical that we incorporate sustainability into their use, as well as the use of other disposable personal protective equipment that contribute to medical waste,” Chu says.
The data the researchers used for this study were gathered during the first six months of the pandemic in the United States (late March 2020 to late September 2020). Their calculations are based on the total number of health care workers in the United States, the number of Covid-19 patients at the time, and the length of hospital stay per patient, among other factors. Their calculations do not include any data on mask usage by the general public.
“Our focus here was on health care workers, so it’s likely an underrepresentation of the total cost and environmental burden,” Traverso notes.
While vaccination has helped to reduce the spread of Covid-19, Traverso believes health care workers will likely continue to wear masks for the foreseeable future, to protect against not only Covid-19 but also other respiratory diseases such as influenza.
He and others have started a company called Teal Bio that is now working on further refining and testing their reusable silicone mask and developing methods for mass manufacturing it. They plan to seek regulatory approval for the mask later this year. While cost and environmental impact are important factors to consider, the effectiveness of the masks also needs to be a priority, Traverso says.
“Ultimately, we want the systems to protect us, so it’s important to appreciate whether the decontamination system is compromising the filtering capacity or not,” he says. “Whatever you’re using, you want to make sure you’re using something that’s going to protect you and others.”
The research was funded by the MIT Undergraduate Research Opportunities Program, the National Institutes of Health, and MIT’s Department of Mechanical Engineering. Other authors of the paper include Omkar Ghenand, an MIT undergraduate; Joy Collins, a senior clinical research coordinator at Brigham and Women’s Hospital and a former MIT technical associate; James Byrne, a radiation oncologist at Brigham and Women’s Hospital and research affiliate at MIT’s Koch Institute for Integrative Cancer Research; Adam Wentworth, a research engineer at Brigham and Women’s Hospital and a research affiliate at the Koch Institute; Peter Chai, an emergency medicine physician at Brigham and Women’s Hospital; Farah Dadabhoy, an MIT research affiliate; and Chin Hur, a professor of medicine and epidemiology at Columbia University.
It can be difficult to get drugs to disease sites along the gastrointestinal tract, which spans the mouth, esophagus, stomach, small and large intestine, and anus. Invasive treatments can take hours as patients wait for adequate amounts of drugs to be absorbed at the right location. The same problem is holding back newer treatments like gene-altering therapies.
Now the MIT spinout Suono Bio is advancing a new approach that uses ultrasound to deliver drugs, including nucleic acids like DNA and RNA, to the GI tract more effectively. The company believes its technology can be used to get a broad array of therapeutic molecules into the areas of the body that have proven most difficult to drug.
“Ultrasound is a well-known technology that’s been used for decades in the clinic,” Suono co-founder and CTO Carl Schoellhammer PhD ’15 says. “But now we’re doing something really unique and novel with it to facilitate the delivery of things that couldn’t be delivered before.”
Suono’s technology is the culmination of more than three decades of discoveries made in MIT labs by researchers including Schoellhammer and fellow Suono co-founders Robert Langer, who is the David H. Koch Institute Professor at MIT, and Giovanni Traverso, an assistant professor at MIT. The platform takes advantage of a phenomenon in which ultrasound waves create little jets in liquid that can be used to push drugs into cells.
The company’s first treatment program targets ulcerative colitis. Last week, Suono announced a funding round to advance that program and others in its pipeline into clinical trials.
Beyond that first program, the founders say the platform could be used to deliver a range of molecules, from nucleic acids to peptides and larger proteins, to any part of the GI tract. And although the first iteration of Suono’s delivery platform will leverage hand-held systems, the founders believe the technology could one day be contained in a battery-powered, ingestible pill.
“That [first drug candidate] is the proof of concept where we could potentially solve a very pressing clinical problem and do a lot of good for a lot of patients,” Schoellhammer says. “But then you’ve de-risked the whole platform, because the trial is applying ultrasound to a mucosal surface, and your entire GI tract is one big mucosal surface. So, all the subsequent products that we do, even in other form factors, will build on each other.”
A discovery with promise
Schoellhammer was a PhD candidate in chemical engineering between 2010 and 2015. During that time, he was co-advised by Daniel Blankschtein, the Herman P. Meissner Professor of Chemical Engineering, and Langer, who has co-founded over 40 companies.
In 1995, Langer and Blankschtein first discovered that ultrasound waves can be used to help drugs pass through the skin. When ultrasound waves pass through a fluid, they create tiny, imploding bubbles that, upon popping, generate forces capable of delivering drugs into cells before the drugs degrade. Nearly two decades later, Schoellhammer and collaborators at MIT took that discovery a step further by applying two different beams of ultrasound waves to skin simultaneously to further enhance the cell-penetrating forces.
At the time, Traverso was a gastroenterology fellow at Massachusetts General Hospital completing the research portion of his training in Langer’s lab. Schoellhammer, Traverso, and other collaborators decided to see if ultrasound could enhance drug delivery to the GI tract. “It seemed to work so well on skin we figured why not try other places in the body,” Schoellhammer remembers.
Drugs typically need to be encapsulated by a protective coating to be delivered into the body without degrading. For the researchers’ first experiment, they combined raw biologic drugs and ultrasound waves. To their surprise, the drugs were absorbed effectively by the GI tract. The method worked for the delivery of proteins, DNA, RNA, and forms of RNA used in treatments, such as mRNA and siRNA.
“Long story short, we just found that everything works,” Schoellhammer says. “We could deliver a broad range of classes of drugs without formulation. The GI tract is designed to absorb, but it generally absorbs small molecules. Anything larger, whether it be biologics, proteins, gene therapies, are degraded because at the same time the GI tract is a very inhospitable environment. It has a low pH and a wealth of proteases and nucleases to chew up all these molecules. So, delivery of those sorts of compounds to the GI tract is kind of the holy grail.”
The breakthrough convinced Schoellhammer the technology could one day improve treatment options for patients, and he went on to work with the Deshpande Center for Technological Innovation, participate in the MIT $100K Entrepreneurship Competition, receive funding from The Engine investment fund, and embrace a number of other educational experiences he says were integral to starting Suono.
“It’s mentors like Bob, mentors like Gio, being able to take classes at MIT’s business school, working with the Technology Licensing Office at MIT and getting to learn from their perspective in terms of what they’re looking for in protecting technology and engaging external groups, support from the Deshpande Center where we got an early grant; I was also the recipient of the 2015 Lemelson-MIT Program’s student prize,” Schoellhammer says of the things that helped his entrepreneurial journey. “Without all those pieces, Suono doesn’t exist, and the technology doesn’t exist to hopefully one day get to patients.”
Subsequent research confirmed the ultrasound delivery method could be used to deliver drugs anywhere along the gastrointestinal tract. It also showed the drugs were absorbed far more efficiently and had more positive effects than treatments that used other delivery methods.
“The breadth of molecules that can be delivered is extremely unusual for a drug delivery technology, so that’s really exciting,” Traverso says. “Those observations are further bolstered by the recoveries we’ve seen when ultrasound has been applied in GI disease models.”
Getting to patients
Suono expects to begin clinical trials in the next 12 to 18 months. The founders believe getting one drug approved will not only validate the efficacy of their approach but simplify regulatory hurdles for future drugs, even if later treatments look much different from what’s being administered today.
“Ultrasound can be packaged in many different form factors, so it could be in a system that’s giving an enema, on an endoscope, or in a pill,” Traverso says. “Using ultrasound in all of those ways opens up many new opportunities. The work now is identifying the top opportunities given that so many things could be done.”
In addition to inflammatory bowel disease, Suono is exploring treatments for many other disorders of the GI tract. The localized delivery platform could make treatments of certain cancers, for example, more precise and effective.
“Like any company, we have to think very hard about the logical lead indication,” Schoellhammer says. “And so, we’re starting by targeting ulcerative colitis. But that’s not where we’re ending. That will build the value of the whole platform, which will ultimately one day be fully ingestible systems for oral delivery of anything: oral delivery of biologics, oral delivery of nucleic acids. It’s that long-term vision we’re focused on with this path.”
When the Voyager 1 and Voyager 2 spacecraft launched in 1977, they each carried a Golden Record, a special project spearheaded by astrophysicist Carl Sagan, in addition to the scientific instruments necessary for their mission to explore the outer reaches of our solar system. Part time capsule, part symbolic ambassador of goodwill, the Golden Record comprises sounds, images, music, and greetings in 59 languages, providing a snapshot of life on Earth for the edification of any intelligent extraterrestrial beings the spacecraft might encounter.
Today, while Voyager 1 and 2 hurtle on through interstellar space more than 14 billion and 12 billion miles away, respectively, the Golden Record and the iconic etching on its cover have inspired a new student-run initiative, the Humanity United with MIT Art and Nanotechnology in Space (HUMANS) project, which aims to send a message that hits a little closer to home: that space is for everyone.
“We want to invite the world to submit a message to our project website — either text or audio, or both! — sharing what space means to them and to humanity in their native languages,” says project co-founder Maya Nasr, a graduate student in the Department of Aeronautics and Astronautics. “Our goal is to use art and nanotechnology to create a symbol of unity that promotes global representation in space and brings awareness to the need for expanded access to the space sector worldwide.”
Nasr and her fellow HUMANS project co-founder Lihui Lydia Zhang '21, a graduate of MIT's Technology Policy Program, are collecting submissions this summer into the fall semester via a submission portal on their website, humans.mit.edu. Taking inspiration from One.MIT, a project to etch more than 270,000 names from the MIT community on a 6-inch wafer, they have partnered with MIT.nano to etch both text and audio waveforms onto a 6-inch disk.
Finally, in collaboration with the Space Exploration Initiative (SEI) at the MIT Media Lab, this new “record of our voices” will travel to the International Space Station (ISS) on a future mission.
For both Nasr and Zhang, the philosophy “space for all” is personal. The two bonded over their shared experience as international students whose own passion for space brought them to MIT: Nasr grew up in Lebanon, while Zhang grew up in China. In their journeys in the space sector, they have both faced persistent challenges and struggles that kept them from fully contributing their learning and passion.
These challenges generated a shared frustration, but more importantly, a vision that space should be more accessible and representative for more people around the world. As classmates in 16.891 (Space Policy Seminar) with Professor Dava Newman, they came across an open call for proposals for developing suborbital and ISS payloads from the SEI. Nasr and Zhang put their heads together to create their proposal for the HUMANS project.
“The International Space Station is one of the few avenues that represents international cooperation in space, but there are still so many countries around the world that aren't included in that representation,” says Zhang. “The HUMANS project won't solve this problem, but we hope it will be a small step forward to help us advocate for expanding global access to space.”
In addition to Nasr and Zhang, HUMANS project collaborators include faculty advisor Jeffrey Hoffman, professor of the practice in aeronautics and astronautics; advisor Ariel Ekblaw, director of SEI; website developer and rising senior Claire Cheng; Xin Lu and Sean Auffinger from SEI; Professor Craig Carter from the Department of Materials Science and Engineering (DMSE); and Georgios Varnavides, a graduate student in DMSE.
To participate in the HUMANS project, visit humans.mit.edu to submit a text and/or audio message. Messages must follow project guidelines to be included on the final disk that will be sent into space.
Much of the effort to make businesses sustainable centers on their supply chains, which were severely disrupted during the Covid-19 pandemic. Yet, according to new research from the MIT Center for Transportation and Logistics (CTL), supply chain sustainability (SCS) investments hardly slowed, even as the pandemic raged.
The finding, contained in the 2021 State of Supply Chain Sustainability report, puts companies on notice that they ignore the sustainability of their supply chains at their peril. This is particularly the case for enterprises with a low or moderate commitment to SCS, such as organizations classed as “Low Effort” and “Dreamer” in the new SCS Firm Typology that appears in the report for the first time.
The research also highlights the increasing pressure companies are under to devote resources to SCS. This pressure came from various stakeholders last year and suggests that sustainability in supply chains is a business trend, and not a fad.
CTL publishes the 2021 State of Supply Chain Sustainability report in collaboration with the Council of Supply Chain Management Professionals (CSCMP), a leading professional membership association. This year’s report is sponsored by Blue Yonder, C.H. Robinson, KPMG, Intel, and Sam’s Club.
Sustainability efforts undaunted by Covid-19
“We believe cooperation between sectors is vital to thoroughly understand the complexity and evolution of sustainability efforts more broadly,” says David Correll, CTL research scientist. “Our work with CSCMP and our sponsors helps us to embed this essential research and its findings within the context of the real-life practice of supply chain management.”
The research included a large-scale international survey of supply chain professionals with over 2,400 respondents — more than double the number received for the previous report. The survey was conducted in late 2020. In addition, 21 in-depth executive interviews were completed, and relevant news items, social media content, and reports were analyzed for the report.
More than 80 percent of survey respondents said the pandemic either had no impact on, or increased, their firms’ commitments to SCS. Eighty-three percent of the executives interviewed said that Covid-19 had either accelerated SCS activity or, at the very least, increased awareness and brought urgency to this growing field.
The pressure to support sustainability in supply chains came from multiple sources, both internal and external, but increased the most among investors and industry associations. Internally, company executives were standout champions of SCS.
Although there are many approaches to investing in SCS, interest in human rights protection and worker welfare, along with energy savings and renewable energy, increased significantly last year. Supplier development was the most common mechanism used by firms to deliver on their SCS promises.
Increasing investment, some speed bumps
Given the momentum behind SCS, the future will likely bring more investment in this increasingly important area of supply chain management. And practitioners — who bring deep domain expertise and well-rounded views of enterprises to the table — will become more influential as sustainability advocates.
But there are some formidable obstacles to overcome, too. For example, it is notable that most of the momentum behind SCS appeared to come from large (1,000-plus employees) and very large (10,000-plus employees) companies covered by the research. Small- to medium-sized enterprises were far less committed, and more work is needed to bring them into the fold through a better understanding of the barriers they face.
A broader concern is that more attention from stakeholders — notably consumers, investors, and regulators — will bring more scrutiny of firms’ SCS track records, and less tolerance of token efforts to make supply chains sustainable. Improved supply chain transparency and disclosure are critical to firms’ responses, the report suggests.
Some high-profile issues, such as combating social injustice and mitigating climate change, will continue to stoke the pressure on companies to invest in meaningful SCS initiatives. It follows that the connection between companies’ SCS performance and their profitability is likely to strengthen over the next few years.
Will companies follow through?
As companies grapple with these issues, they will face some difficult decisions. For example, the chief operating officer of a consumer goods company interviewed for the report described operating through pandemic constraints as a “moral calculus” where some sustainability commitments had to be temporarily sacrificed to achieve others. Such a calculus will likely challenge many companies as they juggle their responses to SCS demands. A key question is to ascertain the degree to which companies’ recent net-zero commitments will translate into effective SCS actions over the next few years.
The CTL and CSCMP research teams are laying the groundwork for the 2022 State of Supply Chain Sustainability report. This annual status report aims to help practitioners and the industry to make more effective and informed sustainability decisions. The questionnaire for next year’s report will open in September.