MIT Latest News

MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.

School of Engineering welcomes new faculty

Thu, 04/01/2021 - 4:15pm

The School of Engineering is welcoming 15 new faculty members to its departments, institutes, labs, and centers. With research and teaching activities ranging from the development of robotics and AI technologies to the modeling and optimization of renewable energy systems, they are poised to make significant contributions in new directions across the school and to a wide range of research efforts around the Institute.

“I am happy to welcome our wonderful new faculty,” says Anantha Chandrakasan, dean of the School of Engineering. “Their talents and expertise as educators, researchers, collaborators, and mentors will enhance the engineering community and strengthen our global impact.”

Navid Azizan will join the MIT faculty with dual appointments in the Department of Mechanical Engineering and the Institute for Data, Systems, and Society (IDSS) as an assistant professor in September. He is currently a postdoc in the Autonomous Systems Laboratory at Stanford University. He received his PhD in computing and mathematical sciences from Caltech in 2020, his MS from the University of Southern California in 2015, and his BS from Sharif University of Technology in 2013. Additionally, he was a research intern at Google DeepMind in 2019. Azizan’s research interests broadly lie in machine learning, control theory, mathematical optimization, and network science. He has made fundamental contributions to various aspects of intelligent systems, including the design and analysis of optimization algorithms for nonconvex and networked problems with applications to the smart grid, distributed computation, epidemics, and autonomy. Azizan’s work has been recognized by several awards, including the 2020 Information Theory and Applications (ITA) Graduation-Day Gold Award. He was named an Amazon Fellow in Artificial Intelligence in 2017 and a PIMCO Fellow in Data Science in 2018. His research on electricity markets received the ACM GREENMETRICS Best Student Paper Award in 2016. He was also the first-place winner and a gold medalist at the 2008 National Physics Olympiad in Iran. He co-organizes the popular “Control meets Learning” virtual seminar series.

Rodrigo Freitas joined the Department of Materials Science and Engineering (DMSE) in January as the AMAX Assistant Professor. He received his BS and MS degrees in physics from the University of Campinas in Brazil, and MS and PhD degrees in materials science and engineering from the University of California at Berkeley, followed by postdoctoral work at Stanford University. During his PhD, he was also a Livermore Graduate Scholar in the Materials Science Division of the Lawrence Livermore National Laboratory. He uses a combination of theoretical, computational, and data-driven approaches to study the mechanisms of microstructure evolution in materials. This research area is critical to understanding and controlling materials kinetics at the microstructure level, and its broad potential impact and applications will lead to collaborations across DMSE and in the MIT Stephen A. Schwarzman College of Computing.

Marzyeh Ghassemi will join the Institute for Medical Engineering and Science and the Department of Electrical Engineering and Computer Science as an assistant professor in July. She received her PhD in computer science from MIT; her MS in biomedical engineering from Oxford University; and two BS degrees, in electrical engineering and computer science, from New Mexico State University. Her research focuses on creating and applying machine learning to improve human health. Ghassemi’s work has been published in top conferences and journals including NeurIPS, FAccT, The Lancet Digital Health, JAMA, the AMA Journal of Ethics, and Nature Medicine, and featured in media outlets such as MIT News, NVIDIA, and the Huffington Post. A British Marshall Scholar and American Goldwater Scholar who has completed graduate fellowships at organizations including Xerox and the NIH, Ghassemi has been named one of MIT Technology Review’s 35 Innovators Under 35. Ghassemi organized MIT's first Hacking Discrimination event and was awarded MIT’s 2018 Seth J. Teller Award for Excellence, Inclusion and Diversity. She also is on the Senior Advisory Council of Women in Machine Learning (WiML) and founded the ACM Conference on Health, Inference and Learning (ACM CHIL).

Dylan Hadfield-Menell will join the Department of Electrical Engineering and Computer Science as an assistant professor in July. Hadfield-Menell received his PhD in computer science from the University of California at Berkeley, and his MS and BS (both in computer science and electrical engineering) from MIT. His research focuses on the value alignment problem in artificial intelligence, and aims to help create algorithms that pursue the intended goals of their users. He is also interested in work that bridges the gap between AI theory and practical robotics, and the problem of integrated task and motion planning. Hadfield-Menell is an NSF Graduate Research Fellowship Recipient and a Berkeley Fellow, with multiple conference papers published in the AAAI/ACM Conference on AI, Ethics, and Society and the ACM/IEEE International Conference on Human-Robot Interaction, among others. He was the technical lead on The Future Starts Here Exhibit for the Victoria and Albert Museum, and has interned at Facebook and Microsoft.

Jack Hare joined the Department of Nuclear Science and Engineering as an assistant professor in January. He received his BA (2010) and his MS (2011) in natural sciences from the University of Cambridge, his MA in plasma physics from Princeton University in 2013, and his PhD in plasma physics from Imperial College London in 2017. After his PhD, he held postdoc appointments at Imperial College London, where he has researched magnetized turbulence in high-energy-density plasmas, and at the Max-Planck Institute for Plasma Physics, where he worked on the design of diagnostics for the ITER fusion reactor project. At MIT, his research will focus on fundamental plasma processes in magnetized high energy density plasmas, such as magnetic reconnection and magneto-hydrodynamic turbulence. These plasmas are created using intense pulses of electrical current generated by the new PUFFIN pulsed-power facility, hosted on campus at the Plasma Science and Fusion Center.

Samuel Hopkins will join the Department of Electrical Engineering and Computer Science as an assistant professor in January 2022. Hopkins received his PhD in computer science from Cornell University, and his BS in computer science and mathematics from the University of Washington. His research focuses on algorithms, optimization, and theoretical machine learning, especially through the lens of convex programming relaxations. He is a Miller Fellow, an NSF Graduate Research Fellow, a Microsoft Research Fellow, and has won the Cornell Computer Science Dissertation Award. Hopkins' publications include papers in FOCS, STOC, and the Annals of Statistics. Before coming to MIT, Hopkins was a Miller Fellow in the theory group at the University of California at Berkeley.

Michael F. Howland will join the Department of Civil and Environmental Engineering as an assistant professor in September. Currently he is a postdoc at Caltech in the Department of Aerospace Engineering. He received his BS from Johns Hopkins University and his MS and PhD from Stanford University, all in mechanical engineering. His research encompasses the flow physics of Earth’s atmosphere and the modeling, optimization, and control of renewable energy generation systems. Howland’s work sits at the intersection of fluid mechanics, weather and climate modeling, uncertainty quantification, and optimization and control, with an emphasis on renewable energy systems. He uses synergistic approaches including simulations, laboratory and field experiments, and modeling to understand the operation of renewable energy systems, with the goal of improving the efficiency, predictability, and reliability of low-carbon energy generation. He was the recipient of the Robert George Gerstmyer Award, the Creel Family Teaching Award, and the James F. Bell Award from Johns Hopkins University. He received the Tau Beta Pi scholarship, an NSF Graduate Research Fellowship, and a Stanford Graduate Fellowship.

Yoon Kim will join the Department of Electrical Engineering and Computer Science as an assistant professor in July. Currently a research scientist at the MIT-IBM Watson AI Lab, Kim received a PhD in computer science from Harvard University, an MS in data science from New York University, an MA in statistics from Columbia, and dual BA degrees in mathematics and economics from Cornell University. Kim’s research focuses on machine learning and natural language processing. He is the recipient of a Google Fellowship.

Adrián Lozano-Duran joined the Department of Aeronautics and Astronautics at MIT as an assistant professor in January. He received his PhD in aerospace engineering from the Technical University of Madrid in 2015, with a dissertation on the use of graph theory to unravel the dynamics of chaotic patterns in fluids. From 2016 to 2020, he was a postdoc at Stanford University working on high-fidelity simulations of external aerodynamic applications. His research is focused on solving outstanding problems in the physics and modeling of turbulent flows using transformative tools and creativity. His work includes turbulence theory and modeling by artificial intelligence, information theory, and quantum computing, with applications ranging from unmanned aerial vehicles and commercial airliners to hypersonic vehicles. He is the recipient of the Milton van Dyke Award from the American Physical Society (2017), the Center for Turbulence Research Fellowship from Stanford University (2016), and the Da Vinci Award for the top five European dissertations on fluid mechanics (2015).

Kelly A. Metcalf Pate joined the Department of Biological Engineering as an assistant professor and director of the Division of Comparative Medicine in March. As director of DCM, Pate will oversee the group of veterinarians and staff who serve as experts in animal models for the MIT community. Pate's research focuses on the role of platelets in the pathogenesis of viral infection, with an emphasis on HIV and cytomegalovirus, and on the refinement and development of animal models. 

Anand Natarajan joined the Department of Electrical Engineering and Computer Science as an assistant professor in September 2020. Natarajan received his PhD in physics from MIT, and an MS in computer science and BS in physics from Stanford University. Natarajan’s interests center upon theoretical quantum information, particularly nonlocality (e.g., Bell inequalities and nonlocal games), quantum complexity theory (especially the power of quantum interactive proof systems), and semidefinite programming hierarchies. He is the co-winner of the Best Paper Award at FOCS ’19 for the paper “NEEXP ⊆ MIP*,” with John Wright, and is a gold medalist in the International Physics Olympiad. His conference papers have been published in the Proceedings of ITCS, Proceedings of FOCS, and Proceedings of CCC, among others. Before joining MIT, Natarajan was a postdoc at the Institute for Quantum Information and Matter at Caltech.

Jelena Notaros joined the Department of Electrical Engineering and Computer Science in June 2020 as an assistant professor, a principal investigator in the Research Laboratory of Electronics, and a core faculty member of the Microsystems Technology Laboratories. Notaros received her PhD and MS degrees from MIT in 2020 and 2017, respectively, and her BS degree from the University of Colorado Boulder in 2015. Her research interests are in integrated silicon photonics devices, systems, and applications, with an emphasis on integrated optical phased arrays for lidar and augmented reality. Notaros was a Top-Three DARPA Riser, a 2018 DARPA D60 Plenary Speaker, a 2021 Forbes 30 Under 30 Listee, a 2020 MIT RLE Early Career Development Award recipient, an MIT Presidential Fellow, a National Science Foundation Graduate Research Fellow, a 2019 OSA CLEO Chair's Pick Award recipient, a 2014 IEEE R5 Student Paper Competition First Place Award recipient, a 2019 MIT MARC Best Paper Award recipient, a 2018 MIT EECS Rising Star, and a 2015 CU Boulder Chancellor’s Recognition Award recipient, among other honors.

Carlos Portela joined the Department of Mechanical Engineering as an assistant professor in August 2020. He received his PhD in mechanical engineering from Caltech in 2019. He was a postdoc at Caltech under the guidance of professors Julia Greer, Dennis Kochmann, and Chiara Daraio. Portela’s research lies at the intersection of materials science, mechanics, and nano-to-macro fabrication with the objective of designing and testing novel materials — with features spanning from nanometers to centimeters — that yield unprecedented mechanical, optical, and acoustic properties. His recent accomplishments have provided routes for fabrication of these so-called "nano-architected materials" in scalable processes as well as testing nanomaterials in real-world conditions such as supersonic impact, in collaboration with researchers at MIT’s Institute for Soldier Nanotechnologies. His present application areas involve the creation of novel lightweight armor materials, ultrasonic devices for medical purposes, and new generations of ultra-strong structural materials. Portela is the recipient of several awards including the Gold Paper Award at the Society of Engineering Science Meeting in 2019, the Centennial Award for the Best Thesis in Mechanical and Civil Engineering at Caltech, and the Caltech Rolf H. Sabersky Graduate Fellowship.

Ashia Wilson joined the Department of Electrical Engineering and Computer Science as an assistant professor in January. Wilson received her PhD in statistics from the University of California at Berkeley, and her BA in applied mathematics from Harvard University. Her research centers upon optimization, algorithmic decision-making, dynamical systems, and fairness within large-scale machine learning. A National Science Foundation Graduate Research Fellow, Wilson has received the NeurIPS ’17 Spotlight Paper Award for "The Marginal Value of Adaptive Methods in Machine Learning," and has performed research with Microsoft and Google AI. Her papers have been published in the Proceedings of the National Academy of Science, Advances in Neural Information Processing Systems, and the International Conference of Machine Learning, among others. Additionally, she has served as a reviewer for NeurIPS and the Journal of Machine Learning.

Sixian You joined the Department of Electrical Engineering and Computer Science as an assistant professor in March. You received her PhD and MS in bioengineering from the University of Illinois at Urbana-Champaign, and her BS in optical and electronic information from Huazhong University of Science and Technology. Her research interests are in biophotonics, microscopy, and computational imaging. She has won the Microscopy Innovation Award and the Nikon Photomicrography Competition Image of Distinction award, and her work has been featured on the PNAS cover and as a Nature Communications Editors’ Highlight, among other honors. The You Lab focuses on developing optical imaging tools to enable noninvasive, deeper, faster, and richer visualization of dynamic biological processes and disease pathology. She was recently a postdoc at the University of California at Berkeley, and has been an engineering intern at Apple.

A streamlined approach to determining thermal properties of crystalline solids and alloys

Thu, 04/01/2021 - 3:55pm

In a September 2020 essay in Nature Energy, three scientists posed several “grand challenges” — one of which was to find suitable materials for thermal energy storage devices that could be used in concert with solar energy systems. Fortuitously, Mingda Li — the Norman C. Rasmussen Assistant Professor of Nuclear Science and Engineering at MIT, who heads the department’s Quantum Matter Group — was already thinking along similar lines. In fact, Li and nine collaborators (from MIT, Lawrence Berkeley National Laboratory, and Argonne National Laboratory) were developing a new methodology, involving a novel machine-learning approach, that would make it faster and easier to identify materials with favorable properties for thermal energy storage and other uses.

The results of their investigation appear this month in a paper for Advanced Science. “This is a revolutionary approach that promises to accelerate the design of new functional materials,” comments physicist Jaime Fernandez-Baca, a distinguished staff member at Oak Ridge National Laboratory.

A central challenge in materials science, Li and his coauthors write, is to “establish structure-property relationships” — to figure out the characteristics a material with a given atomic structure would have. Li’s team focused, in particular, on using structural knowledge to predict the “phonon density of states,” which has a critical bearing on thermal properties.

To understand that term, it’s best to start with the word phonon. “A crystalline material is composed of atoms arranged in a lattice structure,” explains Nina Andrejevic, a PhD student in materials science and engineering. “We can think of these atoms as spheres connected by springs, and thermal energy causes the springs to vibrate. And those vibrations, which only occur at discrete [quantized] frequencies or energies, are what we call phonons.”

The phonon density of states is simply the number of vibrational modes, or phonons, found within a given frequency or energy range. Knowing the phonon density of states, one can determine a material’s heat-carrying capacity as well as its thermal conductivity, which relates to how readily heat passes through a material, and even the superconducting transition temperature in a superconductor. “For thermal energy storage purposes, you want a material with a high specific heat, which means it can take in heat without a sharp rise in temperature,” Li says. “You also want a material with low thermal conductivity so that it retains its heat longer.”
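The link between the phonon density of states and heat capacity can be made concrete with the standard harmonic-phonon formula, in which each vibrational mode contributes according to its Bose-Einstein occupation. The sketch below (pure Python; the single-peak toy DOS is invented for illustration, not any real material's spectrum) numerically integrates that formula:

```python
import math

HBAR = 1.054571817e-34  # reduced Planck constant, J*s
KB = 1.380649e-23       # Boltzmann constant, J/K

def heat_capacity(freqs_thz, dos, temperature):
    """Lattice heat capacity (in J/K, up to the normalization of the DOS)
    from a phonon density of states sampled on a uniform frequency grid,
    using the standard harmonic-phonon formula:
        C_v = k_B * sum_over_modes g(w) * x^2 * e^x / (e^x - 1)^2 * dw,
    where x = hbar*w / (k_B*T)."""
    d_omega = (freqs_thz[1] - freqs_thz[0]) * 1e12 * 2 * math.pi  # grid step, rad/s
    cv = 0.0
    for f_thz, g in zip(freqs_thz, dos):
        omega = f_thz * 1e12 * 2 * math.pi  # angular frequency, rad/s
        if omega <= 0:
            continue
        x = HBAR * omega / (KB * temperature)
        cv += KB * g * x**2 * math.exp(x) / (math.exp(x) - 1.0)**2 * d_omega
    return cv

# Toy DOS: a single Einstein-like peak near 5 THz
freqs = [0.1 * i for i in range(1, 101)]                 # 0.1 .. 10 THz
dos = [math.exp(-((f - 5.0) ** 2) / 0.5) for f in freqs]

# Heat capacity rises with temperature and eventually saturates
print(heat_capacity(freqs, dos, 100.0) < heat_capacity(freqs, dos, 300.0))  # True
```

The per-mode factor is the textbook Einstein contribution; summing it weighted by the DOS is what makes knowing the full curve, rather than a single number, so valuable.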

The phonon density of states, however, is a difficult term to measure experimentally or to compute theoretically. “For a measurement like this, one has to go to a national laboratory to use a large instrument, about 10 meters long, in order to get the energy resolution you need,” Li says. “That’s because the signal we’re looking for is very weak.”

“And if you want to calculate the phonon density of states, the most accurate way of doing so relies on density functional perturbation theory (DFPT),” notes Zhantao Chen, a mechanical engineering PhD student. “But those calculations scale with the fourth order of the number of atoms in the crystal’s basic building block, which could require days of computing time on a CPU cluster.” For alloys, which contain two or more elements, the calculations become much harder, possibly taking weeks or even longer.

The new method, says Li, could reduce those computational demands to a few seconds on a PC. Rather than trying to calculate the phonon density of states from first principles, which is clearly a laborious task, his team employed a neural network approach, utilizing artificial intelligence algorithms that enable a computer to learn from example. The idea was to present the neural network with enough data on a material’s atomic structure and its associated phonon density of states that the network could discern the key patterns connecting the two. After “training” in this fashion, the network would hopefully make reliable density of states predictions for a substance with a given atomic structure.
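To illustrate the general idea of learning a curve-valued mapping from structure to spectrum — and only the idea; this toy nearest-neighbor sketch is not the team's actual Euclidean-network architecture, and the feature vectors and curves are invented stand-ins for real structural descriptors and DOS data — a predictor can average the known curves of the most similar training structures:

```python
import math

def predict_dos(query_features, training_set, k=3):
    """Toy curve-valued regressor: predict a phonon DOS curve for a new
    structure by averaging the curves of its k nearest neighbors in
    feature space. Each training item is (feature_vector, dos_curve)."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    neighbors = sorted(training_set,
                       key=lambda item: dist(item[0], query_features))[:k]
    n_bins = len(neighbors[0][1])
    return [sum(curve[i] for _, curve in neighbors) / k for i in range(n_bins)]

# Hypothetical training data: (structure features, DOS on 4 energy bins)
train = [
    ([1.0, 0.0], [0.9, 0.1, 0.0, 0.0]),
    ([0.9, 0.1], [0.8, 0.2, 0.0, 0.0]),
    ([0.0, 1.0], [0.0, 0.0, 0.2, 0.8]),
]

# A query near the first two structures gets a blend of their curves
print(predict_dos([0.95, 0.05], train, k=2))
```

The real model replaces this hand-rolled similarity with a trained neural network, but the input/output shape is the same: a structural description in, an entire spectrum out.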

Predictions are difficult, Li explains, because the phonon density of states cannot be described by a single number but rather by a curve (analogous to the spectrum of light given off at different wavelengths by a luminous object). “Another challenge is that we only have trustworthy [density of states] data for about 1,500 materials. When we first tried machine learning, the dataset was too small to support accurate predictions.”

His group then teamed up with Lawrence Berkeley physicist Tess Smidt '12, a co-inventor of so-called Euclidean neural networks. “Training a conventional neural network normally requires datasets containing hundreds of thousands to millions of examples,” Smidt says. A significant part of that data demand stems from the fact that a conventional neural network does not understand that a 3D pattern and a rotated version of the same pattern are related and actually represent the same thing. Before it can recognize 3D patterns — in this case, the precise geometric arrangement of atoms in a crystal — a conventional neural network first needs to be shown the same pattern in hundreds of different orientations.

“Because Euclidean neural networks understand geometry — and recognize that rotated patterns still ‘mean’ the same thing — they can extract the maximal amount of information from a single sample,” Smidt adds. As a result, a Euclidean neural network trained on 1,500 examples can outperform a conventional neural network trained on 500 times more data.

Using the Euclidean neural network, the team predicted phonon density of states for 4,346 crystalline structures. They then selected the materials with the 20 highest heat capacities, comparing the predicted density of states values with those obtained through time-consuming DFPT calculations. The agreement was remarkably close.

The approach can be used to pick out promising thermal energy storage materials, in keeping with the aforementioned “grand challenge,” Li says. “But it could also greatly facilitate alloy design, because we can now determine the density of states for alloys just as easily as for crystals. That, in turn, offers a huge expansion in possible materials we could consider for thermal storage, as well as many other applications.”

Some applications have, in fact, already begun. Computer code from the MIT group has been installed on machines at Oak Ridge, enabling researchers to predict the phonon density of states of a given material based on its atomic structure.

Andrejevic points out, moreover, that Euclidean neural networks have even broader potential that is as yet untapped. “They can help us figure out important material properties besides the phonon density of states. So this could open up the field in a big way.”

This research was funded by the U.S. Department of Energy Office of Science, National Science Foundation, and Lawrence Berkeley National Laboratory.

A robot that senses hidden objects

Thu, 04/01/2021 - 12:00am

In recent years, robots have gained artificial vision, touch, and even smell. “Researchers have been giving robots human-like perception,” says MIT Associate Professor Fadel Adib. In a new paper, Adib’s team is pushing the technology a step further. “We’re trying to give robots superhuman perception,” he says.

The researchers have developed a robot that uses radio waves, which can pass through walls, to sense occluded objects. The robot, called RF-Grasp, combines this powerful sensing with more traditional computer vision to locate and grasp items that might otherwise be blocked from view. The advance could one day streamline e-commerce fulfillment in warehouses or help a machine pluck a screwdriver from a jumbled toolkit.

The research will be presented in May at the IEEE International Conference on Robotics and Automation. The paper’s lead author is Tara Boroushaki, a research assistant in the Signal Kinetics Group at the MIT Media Lab. Her MIT co-authors include Adib, who is the director of the Signal Kinetics Group; and Alberto Rodriguez, the Class of 1957 Associate Professor in the Department of Mechanical Engineering. Other co-authors include Junshan Leng, a research engineer at Harvard University, and Ian Clester, a PhD student at Georgia Tech.

As e-commerce continues to grow, warehouse work is still usually the domain of humans, not robots, despite sometimes-dangerous working conditions. That’s in part because robots struggle to locate and grasp objects in such a crowded environment. “Perception and picking are two roadblocks in the industry today,” says Rodriguez. Using optical vision alone, robots can’t perceive the presence of an item packed away in a box or hidden behind another object on the shelf — visible light waves, of course, don’t pass through walls.

But radio waves can.

For decades, radio frequency (RF) identification has been used to track everything from library books to pets. RF identification systems have two main components: a reader and a tag. The tag is a tiny computer chip that gets attached to — or, in the case of pets, implanted in — the item to be tracked. The reader then emits an RF signal, which gets modulated by the tag and reflected back to the reader.

The reflected signal provides information about the location and identity of the tagged item. The technology has gained popularity in retail supply chains — Japan aims to use RF tracking for nearly all retail purchases in a matter of years. The researchers realized this profusion of RF could be a boon for robots, giving them another mode of perception.
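Conceptually, backscatter communication reduces to the tag switching its reflection on and off in time with its ID bits. The toy model below is a deliberately idealized on-off-keying sketch (not any specific RFID air-interface standard such as EPC Gen2) showing how the reflected carrier encodes identity:

```python
def tag_backscatter(carrier, tag_id_bits):
    """Toy RFID backscatter: the tag 'modulates' the reader's carrier by
    reflecting it only during its 1-bits, so the reflection pattern
    carries the tag's identity back to the reader."""
    return [c if bit else 0.0 for c, bit in zip(carrier, tag_id_bits)]

def read_bits(reflected):
    """Reader side: recover the ID bits from where the carrier came back."""
    return [1 if abs(r) > 1e-9 else 0 for r in reflected]

carrier = [1.0] * 8                  # idealized constant carrier samples
tag_id = [1, 0, 1, 1, 0, 0, 1, 0]    # hypothetical 8-bit tag ID

reflected = tag_backscatter(carrier, tag_id)
print(read_bits(reflected) == tag_id)  # True
```

Real tags modulate far more subtly (and the reader also extracts phase information to localize the tag), but the round trip — emit, modulate, reflect, decode — is the same.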

“RF is such a different sensing modality than vision,” says Rodriguez. “It would be a mistake not to explore what RF can do.”

RF Grasp uses both a camera and an RF reader to find and grab tagged objects, even when they’re fully blocked from the camera’s view. It consists of a robotic arm attached to a grasping hand. The camera sits on the robot’s wrist. The RF reader stands independent of the robot and relays tracking information to the robot’s control algorithm. So, the robot is constantly collecting both RF tracking data and a visual picture of its surroundings. Integrating these two data streams into the robot’s decision making was one of the biggest challenges the researchers faced.

“The robot has to decide, at each point in time, which of these streams is more important to think about,” says Boroushaki. “It’s not just eye-hand coordination, it’s RF-eye-hand coordination. So, the problem gets very complicated.”

The robot initiates the seek-and-pluck process by pinging the target object’s RF tag for a sense of its whereabouts. “It starts by using RF to focus the attention of vision,” says Adib. “Then you use vision to navigate fine maneuvers.” The sequence is akin to hearing a siren from behind, then turning to look and get a clearer picture of the siren’s source.

With its two complementary senses, RF Grasp zeroes in on the target object. As it gets closer and even starts manipulating the item, vision, which provides much finer detail than RF, dominates the robot’s decision making.
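One simple way to picture that handoff from RF to vision is a distance-dependent blend of the two position estimates. This is a deliberately simplified sketch — the linear weighting schedule and the handoff threshold are invented for illustration, not taken from the RF-Grasp paper:

```python
def fuse_estimates(rf_position, vision_position, distance_to_target,
                   handoff_range=0.5):
    """Toy sensor fusion: far from the target, trust the coarse RF-tag
    estimate; within `handoff_range` (meters), slide the weighting
    linearly toward the finer-grained vision estimate."""
    w_vision = max(0.0, min(1.0, 1.0 - distance_to_target / handoff_range))
    return tuple(w_vision * v + (1.0 - w_vision) * r
                 for r, v in zip(rf_position, vision_position))

rf_est = (2.0, 1.0, 0.5)     # coarse RF-tag localization (x, y, z in m)
cam_est = (2.1, 0.9, 0.45)   # finer visual estimate once the object is seen

print(fuse_estimates(rf_est, cam_est, distance_to_target=2.0))  # pure RF
print(fuse_estimates(rf_est, cam_est, distance_to_target=0.0))  # pure vision
```

The actual robot makes this trade-off inside its control policy rather than with a fixed formula, but the qualitative behavior — RF for coarse guidance, vision for the final approach — matches the sequence Adib describes.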

RF Grasp proved its efficiency in a battery of tests. Compared to a similar robot equipped with only a camera, RF Grasp was able to pinpoint and grab its target object with about half as much total movement. Plus, RF Grasp displayed the unique ability to “declutter” its environment — removing packing materials and other obstacles in its way in order to access the target. Rodriguez says this demonstrates RF Grasp’s “unfair advantage” over robots without penetrative RF sensing. “It has this guidance that other systems simply don’t have.”

RF Grasp could one day perform fulfillment in packed e-commerce warehouses. Its RF sensing could even instantly verify an item’s identity without the need to manipulate the item, expose its barcode, then scan it. “RF has the potential to improve some of those limitations in industry, especially in perception and localization,” says Rodriguez.

Adib also envisions potential home applications for the robot, like locating the right Allen wrench to assemble your Ikea chair. “Or you could imagine the robot finding lost items. It’s like a super-Roomba that goes and retrieves my keys, wherever the heck I put them.”

The research is sponsored by the National Science Foundation, NTT DATA, Toppan, Toppan Forms, and the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS).

Why we need a more precise understanding of vaccine hesitation

Thu, 04/01/2021 - 12:00am

As Covid-19 vaccinations continue to be distributed, equity is vital to the process. Vaccination availability for individuals from minority groups is an important issue — partly because of longstanding inequitable access to quality care, and partly because of public sentiment, reflected in polls showing that Black Americans’ mistrust of health care and medical research may influence their feelings about getting vaccinated. Charles Senteio, a health informatics scholar focused on equity and one of MIT’s 2020-21 MLK Visiting Professors, has been conducting an ongoing study of this issue, along with David Rand, the Erwin H. Schell professor at the MIT Sloan School of Management. Senteio, Rand, and their interdisciplinary team have found that attitudes toward Covid-19 vaccines vary significantly within Black communities, and they contend that effective vaccine outreach should factor in these differences.

Senteio is visiting MIT from Rutgers University, where he is an assistant professor in the School of Communication and Information. An expert in the delivery of health care to at-risk patients and communities, he helps use technology to develop sustainable models for better care. MIT News spoke with Senteio about his work.

Q: You’re working on a study, still in progress, about vaccine hesitation among people in underserved communities. What are your findings so far?

A: We tend to at times view minority groups as a homogenous lot. But that’s not correct. After running several experiments with thousands of participants, we found that Black Americans who are more immersed in African American culture have substantially more negative attitudes toward the Covid-19 vaccine. These cultural differences exist above and beyond variation accounted for by basic demographics like age, gender, education, and income — and they show how problematic it is to describe health perceptions for Black Americans as if they are the same.

As a researcher who does community-based work, I think if we can be more precise about understanding the vaccine hesitation within groups, and efforts to access the vaccines, specifically among Black Americans, then we can better target messaging to address disinformation or other barriers to accessing health information. Our research will test the current version of the old trope that says, “Blacks tend to have higher levels of mistrust, therefore they’re less willing to be vaccinated. So let’s assume that their vaccination rates will be lower.” That kind of concession is the result of bad science at worst, and sloppy science at best.

We intend to translate what we learn about how Covid-19 perceptions, and other potential barriers to care, influence vaccination as well as other chronic conditions that are characterized by persistent racial health inequity, like cancer, diabetes, and kidney disease. We’re in the process of trying to refine our understanding of how perceptions, and potential barriers to care, may influence health behaviors.

Describing intragroup differences is novel. In the various polling being done on vaccine perceptions and intentions, much of it reported by race and ethnicity, I’m not aware of one that describes intragroup differences. I think that’s super-important.

Q: What caused you to look at this particular issue?

A: Last year when the pandemic was descending upon all of us I, like many, was having discussions with different people, including medical research colleagues and nonscientists, trying to understand what the heck was going on in the country and in the world. During discussions among people of color from my own personal network, I observed vast differences in Covid-19 perceptions and beliefs. This reinforced the importance of understanding intragroup differences when conducting health equity work because ethnic minority groups can have substantial diversity within them, and these differences are not consistently reflected in racial equity health research.

And for me, as an equity researcher, around last April and into May, I was recognizing a potential new tsunami of inequity looming. I and other equity researchers were realizing that the same structural factors that affect racial inequity for common chronic diseases, such as diabetes and cancer, are similar to the structural factors that appeared to be influencing racial inequity in Covid-19 infection, hospitalization, and mortality. For example, people of color are disproportionally represented in “essential” jobs, like meat processing, food service, and courier work.

I didn’t realize that the bench scientists would do such fantastic work so quickly to develop safe and highly effective vaccines, and that the FDA would act so quickly to approve them. But last summer I had a sense that whenever vaccines became available, we social scientists would be needed to translate the tremendous vaccine research from the bench to communities to help promote vaccination. Because vaccines don’t make us safe, vaccinations do. And that’s where social science comes in.

Q: We’ve seen from news coverage that it has been a struggle in some places to distribute the vaccine to all groups. Has it been a struggle to get people to recognize the inequity issues in the first place — or not?

A: The reporting of racial inequity in Covid-19 infection, hospitalization, and mortality and the killing of George Floyd made it much more difficult to deny or claim ignorance of the longstanding structural factors which are the best explanations of persistent racial inequity. I’ve probably had more conversations about health equity in the past year than I’ve had in the previous decade. I think what Covid-19 has done, among other things, is expose causes of persistent inequity, the structural factors which we refer to as social determinants of health. For example, it’s just not as easy for some of us to socially distance. If I work in a supermarket, if I work in the service industry, if I deliver packages for a living, if I work as a flight attendant — I don’t have the luxury and privilege to rearrange a few bookcases behind my desk, buy a cool camera, and still do my job while adhering to social distancing guidelines.

As the grocery clerks, food delivery drivers, and couriers became so important to us last spring, I think many of us realized that structural factors, like type of employment, matter quite a bit to our maintaining our health and controlling our risk. In my view, the awareness and appreciation of the drivers of health inequity have never been greater than they are now. And that is a good thing. We equity researchers are working hard to take advantage of this heightened awareness, however long it may last.

Q: Although it’s not that researchers haven’t been trying, right? It’s taken an event of this magnitude to get people to more explicitly acknowledge these structural issues.

A: I submit that the first empirical study of racial health inequity in the U.S. was W.E.B. Du Bois’ seminal work “The Philadelphia Negro,” which was published in 1899. I’ve gone back to this important work quite a bit since I’ve been asked to give talks over the past eight months or so about Covid-19 and racial health inequity. Du Bois the sociologist found, in studying the racial health inequities in the racially and economically diverse 7th ward of Philadelphia, that structural factors and social living conditions were the primary explanatory factors for racial differences in health. And these differences had been known for decades up to that point. Du Bois’ research refuted the “scientific” view that genetics was the primary cause of racial differences, which some posited as an explanatory factor both then and now to explain racial inequity across a variety of areas beyond health, like educational attainment and criminal justice.

I think when good, reasonable people try to explain racial inequities, whether in criminal justice or health or the environment, they may reflexively search for explanations separate from structural inequality and racism. It is just easier to assign responsibility, or blame, to individuals. But as one of my public health professors said in a class I took early in my health research training: Our behavior is primarily a function of our options. I think oftentimes any of us can lose track of that. I do too sometimes, despite decades of research I’m familiar with that finds simply living near a poor neighborhood increases your risk of an early death. And for many of us, these types of facts, which help illuminate the importance of structural factors, can be difficult to come to grips with.

Q: So, do you think if people start to understand health disparities better because of Covid-19, and find some ways to turn that into action, it could at least make a difference in the future?

A: If we come out of this pandemic with a heightened awareness and understanding of how people’s lived experience exerts influence on their perspectives and health behaviors, then that might help result in some measure of good in the face of 550,000-plus lives lost in the U.S., and almost 3 million lost worldwide, in addition to so many other lives forever impacted.

Connecting history with the present moment

Wed, 03/31/2021 - 4:15pm

In spring 2020, as people all over the world confronted the daily reality of living through the Covid-19 pandemic, many wondered how previous generations of humans navigated similar crises. At MIT, an interdisciplinary team of humanistic faculty decided to explore this question in a course that broke ground as a live, free MIT class, held in an open public webinar format so that anyone who wanted to attend could do so, from anywhere in the world.

As the course began, hundreds of people from around the world responded to the opportunity and joined students in 21H.000 (History of Now: Plagues and Pandemics). In hour-long, weekly sessions, they heard experts explain the origins and ramifications of a wide range of devastating pandemics — from the Black Death, which killed as many as 200 million people during the Middle Ages, to the 1918 flu pandemic, as well as many lesser-known plagues.

Live, in real-time, around the world

“This was a live MIT class happening in real-time that was open to an external audience,” says Malick Ghachem, an associate professor of history who created and taught the First-Year Discovery subject in concert with two of his History Section colleagues — Associate Professor Sana Aiyar and Professor Elizabeth Wood — as well as two faculty from the Program in Science, Technology, and Society, Professor Kate Brown and Associate Professor Dwai Banerjee.

MIT's History of Now, a recent course concept that had run once before, enables students to take a deep dive into issues in the current headlines, exploring them with the added context of historical perspective. In the first iteration of the course, MIT history professors rotated in to give students a presentation connecting present-day issues to their areas of research and expertise. Ghachem, for example, engaged the students in a comparison of impeachment in the 18th century and today.

Webinar format expanded the range of expertise

For the pandemic edition, the History of Now course format changed abruptly from an in-person classroom experience to a webinar series. The new format greatly expanded the scope of expertise that was available to students, Ghachem says. Guest speakers included faculty members from Columbia University, Georgetown University, and the University of Cambridge in the United Kingdom, for example, sharing expertise in fields ranging from microbial biology to economics, anthropology, and medicine. Weekly discussion topics included “Public Health, Biopower, and Inequality,” “Immigration and Contagion,” "Race and Pandemics," and “Sovereignties, Plagues, and Policing.”

“These were eye-opening sessions with people who have studied these issues in great detail. Covid-19 was something they could put in very deep context,” Ghachem says.

Explorations from public health to immigration to biopower

First-year student Sagnik Anupam says he particularly enjoyed the talk by Kathryn Olivarius, an assistant professor of history at Stanford University who described how disease status has historically been used as a dividing line in society, conferring privilege on those considered immune (due to prior disease exposure, for example). “I found Professor Olivarius’s comments on the weaponization of immunoprivilege the most interesting aspect of the course,” he says. “She highlighted how in New Orleans in the 19th century, yellow fever was weaponized to extend both economic as well as racial divides.”

First-year undergraduate Charvi Sharma says the class “opened her eyes” to the broad range of factors that determine the course of a pandemic. “For example, when thinking about when a pandemic ‘ends,' we can’t only look at the number of cases of disease,” she says. “While this is an indicative factor of the decline of a pandemic, there are many other social, cultural, and economic implications that can’t be ignored. By discussing past plagues and pandemics, we were able to uncover a great deal about the present Covid-19 pandemic.”

Senior Helen Wang was especially interested in Professor Cindy Ermus' comment "that living through a pandemic had provided her invaluable insight on those in the past who also experienced life during a pandemic. I found this concept fascinating," says Wang. "Until hearing this remark, I believed that the study of history was intended to shed light on present conditions, rather than enlightenment happening in the other direction. This course was a steady reminder of the role that history plays in informing our lives, as well as the active role we play in interpreting it."

The hunger for historical knowledge

Faculty members note that there were some drawbacks to the novel webinar format. For example, since the instructors could not interact with students during the one-hour class sessions, they set up extra time to discuss the materials with the enrolled MIT students. “While virtual learning environments open up possibilities for international collaborative pedagogy, they also present their own challenges,” Banerjee says. “The feeling of Zoom fatigue, an outcome of the loss of social connectivity during this crisis, continues to push us to imagine new ways of learning.”

That said, since MIT’s First-Year Discovery classes are one-credit classes with intentionally light loads, the faculty felt comfortable opening History of Now up to other learners. “People have a hunger for historical knowledge. If people have this hunger and we can satisfy it, why not?” Ghachem says, comparing the class to some of the educational offerings of other schools, such as the Executive Education arm of the MIT Sloan School of Management, which offers training and certificate programs. “In a way this was an experiment in a kind of MIT-SHASS extension school.”

While also similar in some ways to the free MIT courses offered to the public via the MITx and edX platforms, the History of Now experimental webinar course was live, rather than prerecorded, and thus far more economical to produce.

A contribution to engaged citizenship

Faculty members are now discussing what the History of Now will look like in fall 2021. They are considering a new, six-unit, half-semester version of the course — and they continue to think about ways to fine-tune the format of a webinar course so it could simultaneously expand public access to knowledge and provide enrolled MIT students with ample, meaningful engagement with instructors.
 
Ghachem notes that “One thing this course taught us is that there are a lot of people out there who, if they could sit in on an MIT class, they would." Wood agrees, observing that webinar courses like "History of Now" that invite the public to think together about the public good "are one way for universities to contribute to engaged citizenship."

Brown says the experiment gave her a new appreciation of the value of online education. “I would love to see such a course directed at high school students aspiring to attend an institution such as MIT,” she says. “One major issue when you don’t live near a large metropolitan area is getting access to libraries, educators, and learning experiences. We have learned in the pandemic that such barriers no longer need exist. We can reach far more people than before. That’s an exciting horizon.”

Story prepared by MIT SHASS Communications
Editor and designer: Emily Hiestand, communications director
Senior Writer: Katherine O'Neill

Tactile textiles sense movement via touch

Wed, 03/31/2021 - 3:50pm

In recent years there have been exciting breakthroughs in wearable technologies, like smartwatches that can monitor your breathing and blood oxygen levels. 

But what about a wearable that can detect how you move as you do a physical activity or play a sport, and could potentially even offer feedback on how to improve your technique? 

And, as a major bonus, what if the wearable were something you’d actually already be wearing, like a shirt or a pair of socks?

That’s the idea behind a new set of MIT-designed clothing that uses special fibers to sense a person’s movement via touch. Among other things, the researchers showed that their clothes can determine whether someone is sitting, walking, or doing particular poses.

The group from MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) says that their clothes could be used for athletic training and rehabilitation. With patients’ permission, they could even help passively monitor the health of residents in assisted-care facilities and determine if, for example, someone has fallen or is unconscious.  

The researchers have developed a range of prototypes, from socks and gloves to a full vest. The team’s “tactile electronics” use a mix of more typical textile fibers alongside a small amount of custom-made functional fibers that sense pressure from the person wearing the garment.

According to CSAIL graduate student Yiyue Luo, a key advantage of the team’s design is that, unlike many existing wearable electronics, theirs can be incorporated into traditional large-scale clothing production. The machine-knitted tactile textiles are soft, stretchable, breathable, and can take a wide range of forms. 

“Traditionally it’s been hard to develop a mass-production wearable that provides high-accuracy data across a large number of sensors,” says Luo, lead author on a new paper about the project that is appearing in this month’s edition of Nature Electronics. “When you manufacture lots of sensor arrays, some of them will not work and some of them will work worse than others, so we developed a self-correcting mechanism that uses a self-supervised machine learning algorithm to recognize and adjust when certain sensors in the design are off-base.”
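The self-correction Luo describes can be illustrated with a toy sketch. This is not the team's actual algorithm — the sensor count, the miscalibration model (per-sensor gain and offset), and the correction rule are all assumptions for illustration — but it shows the core idea: when every sensor in an array observes the same underlying pressure signal, consistent per-sensor errors can be corrected from the readings alone, with no labeled data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy tactile array: 16 sensors observing a shared pressure signal over time.
t = rng.random((500, 1))             # true pressure at each of 500 time steps
gain = rng.uniform(0.5, 2.0, 16)     # each sensor has an unknown gain...
offset = rng.uniform(-1.0, 1.0, 16)  # ...and an unknown offset (miscalibration)
readings = t * gain + offset + 0.01 * rng.standard_normal((500, 16))

# Self-supervised correction: standardize each sensor over time, so that
# constant per-sensor gain and offset errors cancel out of the readings.
calibrated = (readings - readings.mean(axis=0)) / readings.std(axis=0)

# After correction, all sensors agree up to noise: pairwise correlations ~1,
# even though the raw gains varied by a factor of four.
corr = np.corrcoef(calibrated.T)
print(corr.min())
```

A real garment would need a richer model (drifting offsets, dead sensors, spatially varying signals), but the principle — exploiting redundancy among neighboring sensors instead of ground-truth labels — is the same.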

The team’s clothes have a range of capabilities. Their socks predict motion by looking at how different sequences of tactile footprints correlate to different poses as the user transitions from one pose to another. The full-sized vest can also detect the wearer’s pose, activity, and the texture of contacted surfaces.

The authors imagine a coach using the sensors to analyze people’s postures and offer suggestions for improvement. They could also be used by an experienced athlete to record their posture so that beginners can learn from it. In the long term, the researchers even imagine that robots could be trained to learn how to do different activities using data from the wearables.

“Imagine robots that are no longer tactilely blind, and that have ‘skins’ that can provide tactile sensing just like we have as humans,” says corresponding author Wan Shou, a postdoc at CSAIL. “Clothing with high-resolution tactile sensing opens up a lot of exciting new application areas for researchers to explore in the years to come.”

The paper was co-written by MIT professors Antonio Torralba, Wojciech Matusik, and Tomás Palacios, alongside PhD students Yunzhu Li, Pratyusha Sharma, and Beichen Li; postdoc Kui Wu; and research engineer Michael Foshey. 

The work was partially funded by Toyota Research Institute.

Matthew Vander Heiden named director of the Koch Institute

Wed, 03/31/2021 - 1:00pm

Matthew Vander Heiden, an MIT professor of biology and a pioneer in the field of cancer cell metabolism, has been named the next director of MIT’s Koch Institute for Integrative Cancer Research, effective April 1.

Vander Heiden will succeed Tyler Jacks, who has served as director for more than 19 years, first for the MIT Center for Cancer Research and then for its successor, the Koch Institute.

“Matt Vander Heiden has been a part of the Koch Institute almost from the beginning,” says MIT President L. Rafael Reif. “He knows firsthand that incredible discoveries emerge when scientists and engineers come together, in one space, to collaborate and learn from each other. We are thrilled that he will be carrying forward the institute’s groundbreaking work at the frontiers of cancer research.”

The MIT Center for Cancer Research (CCR) was founded by Nobel laureate Salvador Luria in 1974, shortly after the federal government declared a “war on cancer,” with the mission of unravelling the molecular basis of cancer. Working alongside colleagues such as Associate Director Jacqueline Lees, Jacks oversaw the evolution of the CCR into the Koch Institute in 2007, as well as the construction of the institute’s new home in Building 76, completed in 2010.

“I’m very grateful for all of the wonderful things that Tyler’s leadership has led to, because I think this really positions us to build on all of those successes and move forward to do more amazing things over the next decade,” Vander Heiden says.

Vander Heiden, who became a member of the Koch Institute in 2010 and has served as an associate director since 2017, is “an excellent choice for the Koch’s next director,” Jacks says. “Matt knows the landscape of cancer research deeply. He is very well-positioned to guide our existing programs and to develop new ones that take advantage of the unique strengths at the Koch and at MIT more broadly, at the intersection of science and engineering for cancer. I am looking forward to watching him lead the Institute’s exciting next chapter.”

Over the past several decades, cancer researchers have made significant strides in their understanding of the genetic underpinnings of the disease. They’ve also identified molecular signatures that distinguish different types of tumors, leading to the development of targeted treatments for specific types of cancer.

Vander Heiden says that he sees great opportunity in the field of cancer research for making new fundamental discoveries regarding the disease, and also for translating existing knowledge into better treatments. He expects that one key area of focus in the coming years will be applying the power of machine learning and artificial intelligence to understanding cancer.

“With the MIT Schwarzman College of Computing coming online, there’s tremendous opportunity in using the rapid advances in machine learning and computer science for health care,” Vander Heiden says. “I think that’s something MIT absolutely should be a leader on, especially as it applies to cancer.”

“Matt Vander Heiden will be a wonderful director,” says Phillip Sharp, an MIT Institute Professor and a member of the Koch Institute, who chaired the search committee for the new director. “His innovative research on cancer metabolism, service as associate director, and ability to ‘think like an engineer’ has earned him deep admiration from colleagues.”

Vander Heiden, who grew up in Wisconsin, earned his bachelor’s degree, MD, and PhD from the University of Chicago. While a graduate student, he became interested in studying the abnormal metabolism seen in cancer cells, which was first discovered nearly 100 years ago by the German chemist Otto Warburg. Instead of breaking down sugar using aerobic respiration, as healthy mammalian cells do, cancer cells switch to an alternative metabolic pathway called fermentation, which is less efficient.

As a postdoc in 2008, Vander Heiden and his colleagues at Harvard Medical School made the discovery that cancer cells shift their metabolism to fermentation by activating an enzyme called PKM2. While at Harvard, Vander Heiden also worked on a paper that contributed to the eventual development of drugs that target cancer cells with a mutation in the IDH gene. These drugs, the first modern FDA-approved cancer drugs that target metabolism, shut off an alternative pathway used by cancer cells with the IDH mutation.

In 2010, Vander Heiden became one of the first new faculty members hired after the creation of the Koch Institute. The Koch Institute was formed with the mission of bringing scientists and engineers together to work on cancer problems, an experimental approach that has had great success, Vander Heiden says.

“When I look at the Koch Institute today, I don’t think of my colleagues as being scientists or engineers. I just view them as people who are asking interesting questions in cancer, trying to solve translational problems, and trying to solve basic problems,” he says. “We have broken down all these barriers, these traditional silos of fields, and I think that uniquely positions us to answer the big questions about cancer going forward.”

While serving as director, Vander Heiden plans to continue his own research program on the role of cell metabolism in the development and progression of cancer. He also plans to continue his work as a medical oncologist at Dana-Farber Cancer Institute, where he treats prostate cancer patients.

“Having a personal link to the clinic helps keep me grounded in the realities of how patients experience cancer, and hopefully that will help me be a better steward of the Koch Institute and help us have even more impact with the work that we’re doing,” he says.

Mice naturally engage in physical distancing, study finds

Wed, 03/31/2021 - 11:00am

When someone is sick, it’s natural to want to stay as far from them as possible. It turns out this is also true for mice, according to an MIT study that also identified the brain circuit responsible for this distancing behavior.

In a study that explores how otherwise powerful instincts can be overridden in some situations, researchers from MIT’s Picower Institute for Learning and Memory found that when male mice encountered a female mouse showing signs of illness, the males interacted very little with the females and made no attempts to mate with them as they normally would. The researchers also showed that this behavior is controlled by a circuit in the amygdala, which detects distinctive odors from sick animals and triggers a warning signal to stay away.

“As a community, it’s very important for animals to be able to socially distance themselves from sick individuals,” says Gloria Choi, an associate professor of brain and cognitive sciences at MIT and a member of the Picower Institute. “Especially in species like mice, where mating is instinctively driven, it’s imperative to be able to have a mechanism that can shut it down when the risk is high.”

Choi’s lab has previously studied how illness influences behavior and neurological development in mice, including the development of autism-like behaviors following maternal illness during pregnancy. The new study, which appears today in Nature, is her first to reveal how illness can affect healthy individuals’ interactions with those who are sick.

The paper’s lead author is MIT postdoc Jeong-Tae Kwon. Other authors of the paper include Myriam Heiman, the Latham Family Career Development Associate Professor of Neuroscience and a member of the Picower Institute, and Hyeseung Lee, a postdoc in Heiman’s lab.

Keeping a distance

For mice and many other animals, certain behaviors such as mating and fighting are innately programmed, meaning that the animals automatically engage in them when certain stimuli are present. However, there is evidence that under certain circumstances, these behaviors can be overridden, Choi says.

“We wanted to see whether there's a brain mechanism that would be engaged when an animal encounters a sick member of the same species that would modulate these innate, automatic social behaviors,” she says.

Previous studies have shown that mice can distinguish between healthy mice and mice that have been injected with a bacterial component called LPS, which induces mild inflammation when given at a low dose. These studies suggested that mice use odor, processed by their vomeronasal organ, to identify sick individuals.

To explore whether mice would change their innate behavior when exposed to sick animals, the researchers placed male mice in the same cage with either a healthy female or a female that was showing LPS-induced signs of illness. They found that the males engaged much less with the sick females and made no effort to mount them.

The researchers then tried to identify the brain circuit underlying this behavior. The vomeronasal organ, which processes pheromones, feeds into a part of the amygdala called the COApm, and the MIT team found that this region is activated by the presence of LPS-injected animals.

Further experiments revealed that activity in the COApm is necessary to suppress the males’ mating behavior in the presence of sick females. When COApm activity was turned off, males would try to mate with sick females. Additionally, artificially stimulating the COApm suppressed mating behavior in males even when they were around healthy females.

Sickness behavior

The researchers also showed that the COApm communicates with another part of the amygdala called the medial amygdala, and this communication, carried by a hormone called thyrotropin releasing hormone (TRH), is necessary to suppress mating behavior. The link to TRH is intriguing, Choi says, because thyroid dysfunction has been implicated in depression and social withdrawal in humans. She now plans to explore the possibility that internal factors (such as mental state) can alter TRH levels in the COApm circuits to modulate social behavior.

“This is something we are trying to probe in the future: whether there's a link between thyroid dysfunction and modulation of this amygdala circuit that controls social behavior,” she says.

This study is part of a larger effort in Choi’s lab to study the role of neuro-immune interactions in coordinating “sickness behaviors.” One area they are investigating, for example, is whether pathogens might attempt to exert control over the animals’ behavior and stimulate them to socialize more, allowing viruses or bacteria to spread further.

“Pathogens may also have the ability to utilize immune systems, including cytokines and other molecules, to engage the same circuits in the opposite way, to promote more engagement,” Choi says. “This is a sort of far-flung, but very interesting and exciting idea. We want to examine host-pathogen interactions at a network level to understand how the same neuro-immune mechanisms can be differently employed by the host versus pathogen to either contain or spread the infection, respectively, within a community. For example, we want to follow sick animals through their interactions within the community while controlling their immune status and manipulating their neural circuits.”

The research was funded by the National Institute of Mental Health, the JPB Foundation, the Simons Center for the Social Brain Postdoctoral Fellowship program, and the Picower Fellowship program.

How industrialized life remodels the microbiome

Wed, 03/31/2021 - 11:00am

Thousands of different bacterial species live within the human gut. Most are beneficial, while others can be harmful. A new study from an MIT-led team has revealed that these bacterial populations can remake themselves within the lifetime of their host, by passing genes back and forth.

The researchers also showed that this kind of gene transfer occurs more frequently in the microbiomes of people living in industrialized societies, possibly in response to their specific diets and lifestyles.

“One unexpected consequence of humans living in cities may be that we've created conditions that are very conducive to the bacteria that inhabit our guts exchanging genes with each other,” says Eric Alm, director of MIT’s Center for Microbiome Informatics and Therapeutics, a professor of biological engineering and of civil and environmental engineering at MIT, a member of the Broad Institute of MIT and Harvard, and the senior author of the new study.

The study is the first major paper from the Global Microbiome Conservancy (GMbC), a consortium that is collecting microbiome samples from underrepresented human populations around the world in an effort to preserve bacterial species that are at risk of being lost as humanity becomes more exposed to industrialized diets and lifestyles globally.

“Most of the species that we find in rural and isolated populations are species that you wouldn't see in the industrialized world,” says Mathieu Groussin, an MIT research associate and one of the lead authors of the paper. “The composition of the microbiome shifts completely, and along with this, the number of different species is diminishing. This lower diversity of the industrialized microbiome might be a reflection of poor intestinal health.”

MIT research associate Mathilde Poyet is also a lead author of the study, which appears today in Cell. Other authors of the paper include researchers from institutions in Denmark, France, South Africa, Cameroon, Canada, Finland, New Zealand, Tanzania, Spain, Sweden, Ghana, and Nigeria.

Microbe diversity

The GMbC launched in 2016, with the mission of preserving human microbiome diversity before it is lost. So far, the project has gathered samples from 34 human populations worldwide. The GMbC consortium includes scientists from each country where samples are being collected.

“This effort is being led by MIT, but it is really a global collaboration,” Poyet says. “With our international consortium, we’re putting time and effort into collecting and preserving the individual bacterial strains so that we can keep them indefinitely into future generations, but all of those bacteria and their derivatives are still owned by the participants who provide them.”

Previous work has shown that the composition of the microbiome in people living in industrialized societies is very different from that of rural peoples living in relative isolation. Nonindustrialized populations usually have a larger bacterial biodiversity, including many species that are not seen in industrialized populations. Differences in diet, antibiotic use, and exposure to soil bacteria are hypothesized to contribute to these differences.

In the Cell study, the researchers explored the phenomenon of horizontal gene transfer, which occurs when bacteria living in the same environment pass genes among each other. In 2011, Alm’s lab discovered that the human gut is a hotspot for this type of gene exchange. However, with the technique the researchers were using at the time, they were only able to determine that these gene transfers had likely happened sometime within the past 5,000 years.

In their new study, the researchers were able to estimate much more precisely when these transfers occurred. To do this, they compared the genetic differences between different species of gut bacteria. When they compared pairs of bacterial species that came from the same person, they found a much higher rate of genetic similarity than that seen in the same pairs taken from two different people, confirming that horizontal gene transfer can happen within the lifetime of an individual person.
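The comparison described above can be sketched in a few lines. This is an illustrative toy, not the authors' pipeline: the sequences and the identity threshold are invented, and real analyses align whole genes across many genomes. The core idea is simply that two copies of a gene found in one person's gut are more similar than copies of the same gene taken from two different people.

```python
# Toy sketch (not the authors' analysis): compare sequence identity for a
# gene shared within one person vs. across two people. Near-perfect identity
# within one person points to a recent horizontal transfer.

def identity(seq_a, seq_b):
    """Fraction of positions at which two equal-length sequences match."""
    assert len(seq_a) == len(seq_b)
    matches = sum(a == b for a, b in zip(seq_a, seq_b))
    return matches / len(seq_a)

# Hypothetical gene sequences from two bacterial species.
# Same person: nearly identical copies (transfer within one lifetime).
within_person = identity("ATGGCTAAGCTT", "ATGGCTAAGCTA")
# Different people: the copies have had longer to diverge.
across_people = identity("ATGGCTAAGCTT", "ATGACTGAGCTA")

print(f"within-person identity:  {within_person:.2f}")  # 0.92
print(f"across-person identity: {across_people:.2f}")  # 0.75
```

The gap between those two numbers, aggregated over many gene pairs and people, is what lets the researchers conclude the transfers are recent rather than millennia old.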

“One of the really exciting things about this paper is we were finally able to answer the question of whether the rate of horizontal transfer has been high in the human microbiome over the last few millennia, or is it true that within each person’s lifetime, the bugs in their gut are constantly trading genes back and forth with each other,” Alm says.

Exchanging traits

Depending on the species, the researchers found that bacteria might obtain between 10 and 100 new genes every year. The researchers also found that the rate of gene exchange was significantly higher in people living in industrialized societies, and they also saw differences in the types of genes that were most commonly exchanged.

As one example, they found that among pastoralist populations who treat their livestock with antibiotics, genes for antibiotic resistance are among those exchanged at the highest rates. They also found that people from nonindustrialized societies, especially hunter-gatherers, had high rates of gene exchange for genes that are involved in fiber degradation. This makes sense because those populations usually consume much more dietary fiber than industrialized populations, the researchers say.

Among the microbes found in industrialized populations, the researchers found especially high rates of exchange for genes whose role is to facilitate gene transfer. These microbes also have higher exchange rates for genes involved in virulence. The researchers are now investigating how those genes may influence inflammatory diseases such as irritable bowel syndrome, which is seen much more often in industrialized societies than nonindustrialized societies.

The research was funded by the Center for Microbiome Informatics and Therapeutics at MIT, the Rasmussen Family Foundation, and a BroadNext10 award from the Broad Institute.

Call to share input on diversity and inclusion proposal

Wed, 03/31/2021 - 9:00am

The following letter was sent to the MIT community on March 30 by members of MIT's leadership team.

To the members of the MIT community,

With this letter, we share the first draft of MIT's Five-year Strategic Action Plan for Diversity, Equity, and Inclusion, and we invite your input. You may reply to this email or offer your feedback through the plan's website: deiactionplan.mit.edu.

Last July, President Reif announced this effort to develop and implement a comprehensive, Institute-wide action plan for diversity, equity, and inclusion – one that will be central to MIT’s overall goals and strategy. Since then, the Strategic Action Plan Steering Team has developed a draft set of commitments, which were further refined and endorsed by the senior administration.

Much of the decision-making at MIT is decentralized. As an organizational strategy, this matches the intellectual and creative freedom we cherish. However, decentralization presents special challenges when we need – as we do now – to make meaningful change together. This plan aims to provide a framework for making real progress by establishing mutually agreed-upon goals. It is MIT’s attempt to deliver an explicit, directional, and aspirational set of actions to be taken by the Institute, meaning all of us. We encourage every member of the MIT community to read the draft plan and offer your questions and ideas.

For those of you in parts of the Institute already engaged in plans and programs to address issues of diversity, equity, and inclusion, we hope you will consider this MIT-wide commitment an encouraging step that can help make your local efforts more lasting and effective.

Please watch for an invitation to an upcoming community engagement session.

The comment period will be open until April 30. We expect to share a final plan by the end of the semester. 

Sincerely,

Martin A. Schmidt, Provost

Cynthia Barnhart, Chancellor

Maria T. Zuber, Vice President for Research

Sanjay Sarma, Vice President for Open Learning

Glen Shor, Executive Vice President and Treasurer

John Dozier, Institute Community and Equity Officer

Big data dreams for tiny technologies

Tue, 03/30/2021 - 4:35pm

Small-molecule therapeutics treat a wide variety of diseases, but their effectiveness is often diminished because of their pharmacokinetics — what the body does to a drug. After administration, the body dictates how much of the drug is absorbed, which organs the drug enters, and how quickly the body metabolizes and excretes the drug again.

Nanoparticles, usually made out of lipids, polymers, or both, can improve the pharmacokinetics, but they can be complex to produce and often carry very little of the drug.

Some combinations of small-molecule cancer drugs and two small-molecule dyes have been shown to self-assemble into nanoparticles with extremely high payloads of drugs, but it is difficult to predict which small-molecule partners will form nanoparticles among the millions of possible pairings.

MIT researchers have developed a screening platform that combines machine learning with high-throughput experimentation to identify self-assembling nanoparticles quickly. In a study published in Nature Nanotechnology, researchers screened 2.1 million pairings of small-molecule drugs and “inactive” drug ingredients, identifying 100 new nanoparticles with potential applications that include the treatment of cancer, asthma, malaria, and viral and fungal infections.

“We have previously described some of the negative and positive effects that inactive ingredients can have on drugs, and here, through a concerted collaboration across our laboratories and core facilities, describe an approach focusing on the potential positive effects these can have on nanoformulation,” says Giovanni Traverso, the Karl Van Tassel (1925) Career Development Professor of Mechanical Engineering, and senior corresponding author of the study.

Their findings point to a strategy that addresses both the complexity of producing nanoparticles and the difficulty of loading large amounts of drugs onto them.

“So many drugs out there don’t live up to their full potential because of insufficient targeting, low bioavailability, or rapid drug metabolism,” says Daniel Reker, lead author of the study and a former postdoc in the laboratory of Robert Langer. “By working at the interface of data science, machine learning, and drug delivery, our hope is to rapidly expand our tool set for making sure a drug gets to the place it needs to be and can actually treat and help a human being.”

Langer, the David H. Koch Institute Professor at MIT and a member of the Koch Institute for Integrative Cancer Research, is also a senior author of the paper.

A cancer therapy meets its match

In order to develop a machine learning algorithm capable of identifying self-assembling nanoparticles, researchers first needed to build a dataset on which the algorithm could train. They selected 16 self-aggregating small-molecule drugs with a variety of chemical structures and therapeutic applications and a diverse set of 90 widely available compounds, including ingredients that are already added to drugs to make them taste better, last longer, or make them more stable. Because both the drugs and the inactive ingredients are already FDA-approved, the resulting nanoparticles are likely to be safer and move through the FDA approval process more quickly.

The team then tested every combination of small-molecule drug and inactive ingredient, enabled by the Swanson Biotechnology Center, a suite of core facilities providing advanced technical services within the Koch Institute. After mixing pairings and loading 384 samples at a time onto nanowell plates using robotics in the High Throughput Sciences core, researchers walked the plates, often with quickly degrading samples, next door to the Peterson (1957) Nanotechnology Materials Core Facility to measure the size of particles with high-throughput dynamic light scattering.

Now trained on 1,440 data points (with 94 nanoparticles already identified), the machine learning platform could be turned on a much bigger library of compounds. Screening 788 small-molecule drugs against more than 2,600 inactive drug ingredients, the platform identified 38,464 potential self-assembling nanoparticles from 2.1 million possible combinations.
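The train-then-screen workflow can be illustrated with a deliberately simplified sketch. Everything here is hypothetical: the single "logP-like" descriptor per compound, the training labels, and the nearest-neighbor model stand in for the much richer chemical features and machine learning used in the actual study.

```python
# Minimal sketch of the screen-then-predict idea (not the authors' model):
# train a classifier on measured drug/excipient pairings, then score a
# larger virtual library and keep pairs predicted to self-assemble.

def knn_predict(train, query, k=3):
    """k-nearest-neighbor vote over (features, label) training pairs."""
    ranked = sorted(
        train,
        key=lambda row: sum((a - b) ** 2 for a, b in zip(row[0], query)),
    )
    votes = [label for _, label in ranked[:k]]
    return max(set(votes), key=votes.count)

# Hypothetical training data: (drug descriptor, excipient descriptor) -> 1
# if the pair formed nanoparticles in the plate assay, else 0.
train = [
    ((4.0, 3.5), 1), ((3.8, 3.9), 1), ((4.2, 3.1), 1),
    ((0.5, 0.2), 0), ((1.0, 0.8), 0), ((0.2, 1.1), 0),
]

# "Virtual screen" of a larger candidate library.
library = [(4.1, 3.3), (0.4, 0.9), (3.9, 3.7)]
hits = [pair for pair in library if knn_predict(train, pair) == 1]
print(hits)  # pairs predicted to self-assemble
```

In the study, the same pattern ran at scale: roughly 1,440 measured pairings trained the model, which then scored 2.1 million candidate combinations.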

The researchers selected six nanoparticles for further validation, including one composed of sorafenib, a treatment commonly used for advanced liver and other cancers, and glycyrrhizin, a compound frequently used as both a food and drug additive and most commonly known as licorice flavoring. Although sorafenib is the standard of care for advanced liver cancer, its effectiveness is limited.

In human liver cancer cell cultures, the sorafenib-glycyrrhizin nanoparticles worked twice as well as sorafenib by itself because more of the drug could enter the cells. Working with the Preclinical Modeling, Imaging and Testing facility at the Koch Institute, researchers treated mouse models of liver cancer to compare the effects of sorafenib-glycyrrhizin nanoparticles versus either compound by itself. They found that the nanoparticle significantly reduced levels of a marker associated with liver cancer progression compared to sorafenib alone, and that mice given the nanoparticle lived longer than mice given sorafenib or glycyrrhizin alone. The sorafenib-glycyrrhizin nanoparticle also showed improved targeting to the liver when compared to oral delivery of sorafenib, the current standard in the clinic, or when injecting sorafenib after it has been combined with cremophor, a commonly used drug vehicle that improves water solubility but has toxic side effects.

Personalized drug delivery

The new platform may have useful applications beyond optimizing the efficiency of active drugs: it could be used to customize inactive compounds to suit the needs of individual patients. In earlier work, members of the team found that inactive ingredients could provoke adverse allergic reactions in some patients. Now, with the expanded machine learning toolbox, more options could be generated to provide alternatives for these patients.

“We have an opportunity to think about matching the delivery system to the patient,” explains Reker, now an assistant professor of biomedical engineering at Duke University. “We can account for things like drug absorption, genetics, even allergies to reduce side effects upon delivery. Whatever the mutation or medical condition, the right drug is only the right drug if it actually works for the patient.”

The tools for safe, efficacious drug delivery exist, but putting all the ingredients together can be a slow process. The combination of machine learning, rapid screening, and the ability to predict interactions among different combinations of materials will accelerate the design of drugs and the nanoparticles used to deliver them throughout the body.

In ongoing work, the team is looking not just to improve effective delivery of drugs but also for opportunities to create medications for people for whom standard formulations are not a good option, using big data to solve problems in small populations by looking at genetic history, allergies, and food reactions.

Comprehensive report on pandemic response solutions developed by 180 leading experts

Tue, 03/30/2021 - 4:10pm

When the World Health Organization declared the Covid-19 outbreak a pandemic, the health crisis catalyzed a global effort to accelerate innovation and stop the novel virus’s spread. To help streamline that effort, the MIT Center for Collective Intelligence, MIT Media Lab’s Community Biotechnology Initiative, and MilliporeSigma, the life science business of Merck KGaA, Darmstadt, Germany, together convened more than 180 thought leaders from around the globe to collaborate asynchronously and rapidly identify solutions. A comprehensive report that synthesizes data-driven insights from this expert group, known as the “Pandemic Response Supermind,” has now been published, outlining the most promising solutions for pandemic response.

“We engaged with hundreds of creative minds and leaders of scientific research, public health, industry and beyond in a collective intelligence exercise to generate numerous innovative solutions to the extraordinary challenges posed by the global pandemic,” says David Sun Kong, director of the Community Biotechnology Initiative and primary author of the report. “It was inspiring to see the generative nature of the collective creativity that emerged. I believe both the insights and the methodology we developed can help guide not only our society’s emergence from the pandemic, but also prepare us for future challenges.” 

The Pandemic Response Supermind Report is the result of a six-month global collaboration and expert synthesis that applied an accelerated approach to identify solutions in the midst of the pandemic. During the initial invitation-only exercise that ran for three weeks in May 2020, the Supermind convened using the Center for Collective Intelligence’s software platform and methodology to address a central challenge question: How can we develop pandemic resilience — the ability for society to recover quickly from global disease outbreaks — both in resolving the current Covid-19 pandemic and in building the public health and other infrastructure to prepare for future pandemics?  

During this sprint, the Supermind identified gaps and innovative solutions across five key technical areas of pandemic response, including: transmission control; diagnostics and monitoring; access to therapies and vaccines; sharing and communicating scientific insights; and pandemic preparedness. Synthesizing the results of this exercise, the Supermind Report was published in thematic installments from June to November 2020 and is now available in full.

“It is exciting to see how the collective intelligence of this group of experts and our teams could be mobilized so quickly and effectively to come up with innovative suggestions for pandemic response,” says Kathleen Kennedy, executive director of the Center for Collective Intelligence, who oversaw the Supermind platform.

The Supermind provided an early signal on key solutions to combat Covid-19, identifying innovative ideas such as using sewage monitoring to detect and prevent the spread of SARS-CoV-2 and other infectious diseases within communities. This wastewater epidemiology strategy has since been leveraged more broadly in the fight against Covid-19 by universities, cities, and various communal settings around the world. The Supermind Report also details key findings on novel designs for face masks, accelerating clinical trials by implementing real-world trials and Bayesian statistics, rapid reproduction of critical research, and building equity into pandemic funding and implementation, among other innovative strategies identified throughout the exercise.  

The initiative’s unique methodology applied natural language processing to cluster and synthesize contributions from the 180 participants. Participants contributed to the online platform asynchronously with daily facilitation. In total, 243 individual ideas were put forward during the exercise, garnering more than 1,200 votes cast for the top solutions during an evaluation phase.
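As a rough illustration of the clustering step, the sketch below groups short idea texts by word overlap. It is a bare-bones stand-in: the idea texts are invented, and the actual exercise used more sophisticated natural language processing than bag-of-words cosine similarity with a greedy merge.

```python
# Illustrative only: group short idea texts by vocabulary overlap, a
# bare-bones version of clustering participant contributions.
from collections import Counter
import math

def cosine(a, b):
    """Cosine similarity between bag-of-words vectors of two texts."""
    va, vb = Counter(a.lower().split()), Counter(b.lower().split())
    dot = sum(va[w] * vb[w] for w in va)
    norm = (math.sqrt(sum(c * c for c in va.values()))
            * math.sqrt(sum(c * c for c in vb.values())))
    return dot / norm if norm else 0.0

ideas = [
    "monitor sewage to detect viral spread",
    "monitor wastewater to detect outbreaks",
    "novel face mask filter designs",
]

# Greedy grouping: join an idea to the first cluster it resembles.
clusters = []
for idea in ideas:
    for cluster in clusters:
        if cosine(idea, cluster[0]) > 0.2:
            cluster.append(idea)
            break
    else:
        clusters.append([idea])

print(len(clusters))  # the two sewage ideas merge; masks stand alone
```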

The Pandemic Response Supermind demonstrates the power of collective intelligence in identifying the most feasible, impactful solutions to fight Covid-19 and better prepare public health infrastructure for future pandemics. This body of work also informs a public crowdsourcing effort on the Pandemic Response CoLab, an open platform from the Center for Collective Intelligence and Community Biotechnology Initiative.

“This exercise proves that global conversations and artificial intelligence algorithms can play a vital role in finding approaches that address complex challenges,” says Patrick Schneider, head of strategy, business development, and innovation for the Research Solutions business unit at MilliporeSigma and chair of the company’s Innovation Board. “Looking to a post-pandemic world, these learnings and accelerated pathways to solutions have the potential to usher in a paradigm shift across scientific industries.”

The three collaborating organizations will expand upon their work in 2021, convening a second “Supermind.” By broadening their approach to include new synchronous and asynchronous methodologies, the Supermind will seek to identify and apply the lessons learned from Covid-19 to develop actionable solutions that will usher in the future of life science in a post-pandemic world.

Learn more about the Supermind’s solutions for pandemic response and explore the full report.

Synthetic mucus can mimic the real thing

Tue, 03/30/2021 - 10:00am

More than just a sign of illness, mucus is a critical part of our body’s defenses against disease. Every day, our bodies produce more than a liter of the slippery substance, covering a surface area of more than 400 square meters to trap and disarm microbial invaders.

Mucus is made from mucins — proteins that are decorated with sugar molecules. Many scientists are trying to create synthetic versions of mucins in hopes of replicating their beneficial traits. In a new study, researchers from MIT have now generated synthetic mucins with a polymer backbone that more accurately mimic the structure and function of naturally occurring mucins. The team also showed that these synthetic mucins could effectively neutralize the bacterial toxin that causes cholera.

The findings could help give researchers a better idea of which features of mucins contribute to different functions, especially their antimicrobial functions, says Laura Kiessling, the Novartis Professor of Chemistry at MIT. Replicating those functions in synthetic mucins could eventually lead to new ways to treat or prevent infectious disease, and such materials may be less likely to lead to the kind of resistance that occurs with antibiotics, she says.

“We would really like to understand what features of mucins are important for their activities, and mimic those features so that you could block virulence pathways in microbes,” says Kiessling, who is the senior author of the new study.

Kiessling’s lab worked on this project with Katharina Ribbeck, the Mark Hyman, Jr. Career Development Professor of Biological Engineering, and Richard Schrock, the F.G. Keyes Professor Emeritus of Chemistry, who are also authors of the paper. The lead authors of the paper, which appears today in ACS Central Science, are former MIT graduate student Austin Kruger and MIT postdoc Spencer Brucks.

Inspired by mucus

Kiessling and Ribbeck joined forces to try to create mucus-inspired materials in 2018, with funding from a Professor Amar G. Bose Research Grant. The primary building blocks of mucus are mucins — long, bottlebrush-like proteins with many sugar molecules called glycans attached. Ribbeck has discovered that these mucins disrupt many key functions of infectious bacteria, including their ability to secrete toxins, communicate with each other, and attach to cellular surfaces.

Those features have led many scientists to try to generate artificial versions that could help prevent or treat bacterial infection. However, mucins are so large that it has been difficult to replicate their structure accurately. Each mucin polymer has a long backbone consisting of thousands of amino acids, and many different glycans can be attached to these backbones.

In the new study, the researchers decided to focus on the backbone of the polymer. To try to replicate its structure, they used a reaction called ring-opening metathesis polymerization. During this type of reaction, a carbon-containing ring is opened up to form a linear molecule containing a carbon-carbon double bond. These molecules can then be joined together to form long polymers.

In 2005, Schrock shared the Nobel Prize in Chemistry for his work developing catalysts that can drive this type of reaction. Later, he developed a catalyst that could yield specifically the “cis” configuration of the products. Each carbon atom in the double bond usually has one other chemical group attached to it, and in the cis configuration, both of these groups are on the same side of the double bond. In the “trans” configuration, the groups are on opposite sides.

To create their polymers, the researchers used Schrock’s catalyst, which is based on tungsten, to form cis versions of mucin mimetic polymers. They compared these polymers to those produced by a different, ruthenium-based catalyst, which creates trans versions. They found that the cis versions were much more similar to natural mucins — that is, they formed very elongated, water-soluble polymers. In contrast, the trans polymers formed globules that clumped together instead of stretching out.

Mimicking mucins

The researchers then tested the synthetic mucins’ ability to mimic the functions of natural mucins. When exposed to the toxin produced by Vibrio cholerae, the elongated cis polymers were much better able to capture the toxin than the trans polymers, the researchers found. In fact, the synthetic cis mucin mimics were even more effective than naturally occurring mucins.

The researchers also found that their elongated polymers were much more soluble in water than the trans polymers, which could make them useful for applications such as eye drops or skin moisturizers.

Now that they can create synthetic mucins that effectively mimic the real thing, the researchers plan to study how mucins’ functions change when different glycans are attached to the backbones. By altering the composition of the glycans, they hope to develop synthetic mucins that can dampen virulence pathways of a variety of microbes.

“We're thinking about ways to even better mimic mucins, but this study is an important step in understanding what's relevant,” Kiessling says.

In addition to the Bose grant, the research was funded by the National Institute of Biomedical Imaging and Bioengineering, the National Science Foundation, and the National Institute of Allergy and Infectious Diseases.

MIT graduate engineering, business, economics programs ranked highly by U.S. News for 2022

Tue, 03/30/2021 - 12:00am

MIT’s graduate program in engineering has again earned a No. 1 spot in U.S. News and World Report’s annual rankings, a place it has held since 1990, when the magazine first ranked such programs.

The MIT Sloan School of Management also placed highly. It occupies the No. 5 spot for the best graduate business programs, a placement it shares with Harvard University.

Among individual engineering disciplines, MIT placed first in six areas: aerospace/aeronautical/astronautical engineering (tied with Caltech), chemical engineering, computer engineering, electrical/electronic/communications engineering (tied with Stanford University and the University of California at Berkeley), materials engineering, and mechanical engineering. It placed second in nuclear engineering.

In the rankings of individual MBA specialties, MIT placed first in three areas: business analytics, production/operations, and project management. It placed second in information systems and supply chain/logistics.

U.S. News does not issue annual rankings for all doctoral programs but revisits many every few years. This year, the magazine ranked the nation’s top PhD programs in the social sciences and humanities, which it last evaluated for 2018. MIT’s economics program earned a No. 1 ranking overall, shared with Harvard, Stanford, Princeton University, University of Chicago, and Yale University; it also earned first- or second-place rankings for six economics specialties. MIT’s political science program placed among the top 10 in the nation as well.

MIT ranked in the top five for 24 of the 37 science disciplines evaluated for 2019.

The magazine bases its rankings of graduate schools of engineering and business on two types of data: reputational surveys of deans and other academic officials, and statistical indicators that measure the quality of a school’s faculty, research, and students. The magazine’s less-frequent rankings of programs in the sciences, social sciences, and humanities are based solely on reputational surveys.

Homing in on longer-lasting perovskite solar cells

Tue, 03/30/2021 - 12:00am

Materials called perovskites are widely heralded as a likely replacement for silicon as the material of choice for solar cells, but their greatest drawback is their tendency to degrade relatively rapidly. Over recent years, the usable lifetime of perovskite-based cells has gradually improved from minutes to months, but it still lags far behind the decades expected from silicon, the material currently used for virtually all commercial solar panels.

Now, an international interdisciplinary team led by MIT has come up with a new approach to narrowing the search for the best candidates for long-lasting perovskite formulations, out of a vast number of potential combinations. Already, their system has zeroed in on one composition that in the lab has improved on existing versions more than tenfold. Even under real-world conditions at full solar cell level, beyond just a small sample in a lab, this type of perovskite has performed three times better than the state-of-the-art formulations.

The findings appear in the journal Matter, in a paper by MIT research scientist Shijing Sun; MIT professors Moungi Bawendi, John Fisher, and Tonio Buonassisi, who is also a principal investigator at the Singapore-MIT Alliance for Research and Technology (SMART); and 16 others from MIT, Germany, Singapore, Colorado, and New York.

Perovskites are a broad class of materials characterized by the way atoms are arranged in their layered crystal lattice. These layers, described by convention as A, B, and X, can each consist of a variety of different atoms or compounds. So, searching through the entire universe of such combinations to find the best candidates to meet specific goals — longevity, efficiency, manufacturability, and availability of source materials — is a slow and painstaking process, and largely one without any map for guidance.

“If you consider even just three elements, the most common ones in perovskites that people sub in and out are on the A site of the perovskite crystal structure,” which can each easily be varied by 1-percent increments in their relative composition, Buonassisi says. “The number of steps becomes just preposterous. It becomes very, very large” and thus impractical to search through systematically. Each step involves the complex synthesis process of creating a new material and then testing its degradation, which even under accelerated aging conditions is a time-consuming process.

The key to the team’s success is what they describe as a data fusion approach. This iterative method uses an automated system to guide the production and testing of a variety of formulations, then uses machine learning to go through the results of those tests, combined again with first-principles physical modeling, to guide the next round of experiments. The system keeps repeating that process, refining the results each time.

Buonassisi likes to compare the vast realm of possible compositions to an ocean, and he says most researchers have stayed very close to the shores of known formulations that have achieved high efficiencies, for example, by tinkering just slightly with those atomic configurations. However, “once in a while, somebody makes a mistake or has a stroke of genius and departs from that and lands somewhere else in composition space, and hey, it works better! A happy bit of serendipity, and then everybody moves over there” in their research. “But it's not usually a structured thought process.”

This new approach, he says, provides a way to explore far offshore areas in search of better properties, in a more systematic and efficient way. In their work so far, by synthesizing and testing less than 2 percent of the possible combinations among three components, the researchers were able to zero in on what seems to be the most durable formulation of a perovskite solar cell material found to date.

“This story is really about the fusion of all the different sets of tools” used to find the new formulation, says Sun, who coordinated the international team that carried out the work, including the development of a high-throughput automated degradation test system that monitors the breakdown of the material through its changes in color as it darkens. To confirm the results, the team went beyond making a tiny chip in the lab and incorporated the material into a working solar cell.

“Another point of this work is that we actually demonstrate, all the way from the chemical selection until we actually make a solar cell in the end,” she says. “And it tells us that the machine-learning-suggested chemical is not only stable in its own freestanding form. They can also be translated into real-life solar cells, and they lead to improved reliability.” Some of their lab-scale demonstrations achieved longevity as much as 17 times greater than the baseline formula they started with, but even the full-cell demonstration, which includes the necessary interconnections, outlasted the existing materials by more than three times, she says.

Buonassisi says the method the team developed could also be applied to other areas of materials research involving similarly large ranges of choice in composition. “It really opens the door for a mode of research where you can have these short, quick loops of innovation happening, maybe at a subcomponent or a material level. And then once you zero in on the right composition, you bump it up into a longer loop that involves device fabrication, and you test it out” at that next level.

“It's one of the big promises of the field to be able to do this type of work,” he says. “To see it actually happen was one of those [highly memorable] moments. I remember the exact place I was when I received the call from Shijing about these results — when you start to actually see these ideas come to life. It was really stunning.”

“What is particularly exciting about [this] advance is that the authors use physics to guide the intuition of the [optimization] process, rather than limiting the search space with hard constraints,” says University Professor Edward Sargent of the University of Toronto, a specialist in nanotechnology who was not connected with this research. “This approach will see widespread exploitation as machine learning continues to move toward solving real problems in materials science.”

The team included researchers at MIT, the Helmholtz Institute in Germany, the Colorado School of Mines, Brookhaven National Laboratory in New York, the Singapore-MIT Alliance for Research and Technology, and the Institute of Materials for Electronics and Energy Technology in Erlangen, Germany. The work was supported by DARPA, Total SA, the National Science Foundation, and the Skoltech NGP program.

Accounting for firms’ positive impacts on the environment

Mon, 03/29/2021 - 5:00pm

Gregory Norris is an expert on quantifying firms’ impacts on the environment over the life cycles of their products and processes. His analyses help decision-makers opt for more sustainable, Earth-friendly outputs.

He and others in this field of life-cycle assessment (LCA) have largely gone about their work by determining firms’ negative impacts on the environment, or footprints, a term most people are familiar with. But Norris felt something was missing. What about the positive impacts firms can have by, for example, changing behaviors or creating greener manufacturing processes that become available to competitors? Could they be added to the overall LCA tally?

Introducing handprints, the term Norris coined for those positive impacts and the focus of MIT’s Sustainability and Health Initiative for NetPositive Enterprise (SHINE). SHINE is co-led by Norris and Randolph Kirchain, who both have appointments through MIT’s Materials Research Laboratory (MRL).

Positive impacts

“If you ask LCA practitioners what they track to determine a product’s sustainability, 99 out of 100 will talk about footprints, these negative impacts,” Norris says. “We’re about expanding that to include handprints, or positive impacts.”

Says Kirchain, “We’re trying to make the [LCA] metrics more encompassing so firms are motivated to make positive changes as well.” And that could ultimately “increase the scope of activities that firms engage in for environmental benefits.”

In a February 2021 paper in the International Journal of Life Cycle Assessment, Norris, Kirchain, and colleagues lay out the methodology for not only estimating handprints but also combining them with footprints. Additional authors of the paper are Jasmina Burek, Elizabeth A. Moore, and Jeremy Gregory, who are also affiliated with the MRL.

“By giving handprints a defendable methodology, we get closer to the ideal place where everything that counts can be counted,” says Jeff Zeman, principal of TrueNorth Collective, a consulting firm for sustainability. Zeman was not involved in the work.

As a result, Zeman continues, “designers can see the positive impact of their work show up in an organization’s messaging, as progress toward its sustainability goals, and bridge their work with other good actors to create shared benefits. Handprints have been a powerful influence on me and my team — and continue to be.”

How it works

Handprints are measured with the same metrics used for quantifying different footprints. For example, a classic metric for determining a product’s water footprint is the liters of water used to create that product. The same product’s water handprint would be calculated by determining the liters of water saved through a positive change such as instituting a new manufacturing process involving recycled materials. Both footprints and handprints are measured using existing life-cycle inventory databases, software, and calculation methods.

The SHINE team has demonstrated the impact of adding handprints to LCA analyses through case studies with several companies. One such study described in the paper involved Interface, a manufacturer of flooring materials. The SHINE team calculated the company’s handprints associated with the use of “recycled” gas to help heat its manufacturing facility. Specifically, Interface captured and burned methane gas from a landfill. That gas would otherwise have been released to the atmosphere, contributing to climate change.

After calculating both the company’s handprints and footprints, the SHINE team found that Interface had a net positive impact. As the team wrote in their paper, “with the SHINE handprint framework, we can help actors to create handprints greater than, and commensurate with, their footprints.”
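The combined accounting described here, with handprints and footprints measured in the same units and then netted against each other, can be sketched in a few lines. The quantities below are invented for illustration and are not from the Interface case study:

```python
# Sketch of SHINE-style net-impact accounting: handprints (savings from
# positive changes) and footprints (negative impacts) share one metric,
# e.g. liters of water or kg CO2-equivalent. Numbers are illustrative.

def net_impact(footprint, handprint):
    """Net impact in a shared unit; a positive result means 'net positive'
    (handprints greater than, and commensurate with, footprints)."""
    return handprint - footprint

# Hypothetical product: a 500-liter water footprint per unit, and a
# process change that saves 120 liters per unit (the water handprint).
water_net = net_impact(footprint=500, handprint=120)
print(water_net)  # negative: the footprint still dominates
```

A firm like Interface, whose methane-capture handprint outweighed its footprint, would show a positive value under the same arithmetic.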

Concludes Norris: “With this paper, we hope that work on sustainability will get stronger by making these tools available to more people.”

This work was supported by the SHINE consortium.

Powering the energy transition with better storage

Mon, 03/29/2021 - 12:00pm

“The overall question for me is how to decarbonize society in the most affordable way,” says Nestor Sepulveda SM ’16, PhD ’20. As a postdoc at MIT and a researcher with the MIT Energy Initiative (MITEI), he worked with a team over several years to investigate what mix of energy sources might best accomplish this goal. The group’s initial studies suggested the “need to develop energy storage technologies that can be cost-effectively deployed for much longer durations than lithium-ion batteries,” says Dharik Mallapragada, a research scientist with MITEI.  

In a new paper published in Nature Energy, Sepulveda, Mallapragada, and colleagues from MIT and Princeton University offer a comprehensive cost and performance evaluation of the role of long-duration energy storage (LDES) technologies in transforming energy systems. LDES, a term that covers a class of diverse, emerging technologies, can respond to the variable output of renewables, discharging electrons for days and even weeks, providing resilience to an electric grid poised to deploy solar and wind power on a large scale.

“If we want to rely overwhelmingly on wind and solar power for electricity — increasingly the most affordable way to decrease carbon emissions — we have to deal with their intermittency,” says Jesse Jenkins SM ’14, PhD ’18, an assistant professor of mechanical and aerospace engineering and the Andlinger Center for Energy and the Environment at Princeton University and former researcher at MITEI.

In their paper, the researchers analyzed whether LDES paired with renewable energy sources and short-duration energy storage options like lithium-ion batteries could indeed power a massive and cost-effective transition to a decarbonized grid. They also investigated whether LDES might even eliminate the need for available-on-demand, or firm, low-carbon energy sources such as nuclear power and natural gas with carbon capture and sequestration.

“The message here is that innovative and low-cost LDES technologies could potentially have a big impact, making a deeply decarbonized electricity system more affordable and reliable,” says lead author Sepulveda, who now works as a consultant with McKinsey and Company.  But, he notes, “We will still be better off retaining firm low-carbon energy sources among our options.”

In addition to Jenkins and Mallapragada, the paper’s coauthors include Aurora Edington SM ’19, a MITEI research assistant at the time of this research and now a consultant at The Cadmus Group; and Richard K. Lester, the Japan Steel Industry Professor and associate provost at MIT, and former head of the Department of Nuclear Science and Engineering.

“As the world begins to focus more seriously on how to achieve deep decarbonization goals in the coming decades, the insights from these system-level studies are essential,” says Lester. “Researchers, innovators, investors, and policymakers will all benefit from knowledge of the cost and technical performance targets that are suggested by this work.” 

Performance and cost

The team set out to assess the impacts of LDES solutions in hypothetical electric systems that reflect real-world conditions, where technologies are scrutinized not merely by their standalone attributes, but by their relative value when matched against other energy sources.

“We need to decarbonize at an affordable cost to society, and we wanted to know if LDES can increase our probability of success while also reducing overall system cost, given the other technologies competing in the space,” says Sepulveda.

In pursuit of this goal, the team deployed an electricity system capacity expansion model, GenX, developed earlier by Jenkins and Sepulveda while at MIT. This simulation tool made it possible to evaluate the potential system impact of LDES technologies, including those currently in development and others that could plausibly emerge, across a range of future low-carbon electric grid scenarios characterized by the cost and performance of renewable generation, the types of firm generation available, and alternative electricity demand projections. The study, says Jenkins, was “the first extensive use of this sort of experimental method of applying wide-scale parametric uncertainty and long-term systems-level analysis to evaluate and identify target goals regarding cost and performance for emerging long-duration energy storage technologies.”

For their study, the researchers surveyed a range of long-duration technologies — some backed by the U.S. Department of Energy’s Advanced Research Projects Agency-Energy (ARPA-E) program — to define the plausible cost and performance attributes of future LDES systems based on five key parameters that encompass a range of mechanical, chemical, electrochemical, and thermal approaches. These include pumped hydropower storage, vanadium redox flow batteries, aqueous sulfur flow batteries, and firebrick resistance-heated thermal storage, among others.

“Think of a bathtub, where the parameter of energy storage capacity is analogous to the volume of the tub,” explains Jenkins. Continuing the analogy, another important parameter, charge power capacity, is the size of the faucet filling the tub, and discharge power capacity, the size of the drain. In the most generalized version of an LDES technology, each attribute of the system can be independently sized. In optimizing an energy system where LDES technology functions as “an economically attractive contributor to a lower-cost, carbon-free grid,” says Jenkins, the researchers found that the parameter that matters the most is energy storage capacity cost.
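The bathtub analogy maps onto three independently sized parameters, which a minimal sketch can make concrete. The plant sizes below are hypothetical; the $20/kWh figure is the study's reported threshold for energy storage capacity cost:

```python
# The bathtub analogy in numbers: energy capacity (tub volume),
# charge power (faucet), and discharge power (drain) can each be
# sized independently in a generalized LDES design.

def discharge_duration_hours(energy_capacity_mwh, discharge_power_mw):
    """How long the 'tub' can drain at full rate: volume / drain size."""
    return energy_capacity_mwh / discharge_power_mw

def energy_capacity_cost(energy_capacity_mwh, cost_per_kwh):
    """Total cost of the 'tub' itself, the parameter the study
    found matters most to system value."""
    return energy_capacity_mwh * 1_000 * cost_per_kwh

# Hypothetical 100 MW / 10,000 MWh plant at the $20/kWh threshold:
hours = discharge_duration_hours(10_000, 100)   # multi-day storage
dollars = energy_capacity_cost(10_000, 20)
print(hours, dollars)
```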

“For a comprehensive assessment of LDES technology design and its economic value to decarbonized grids, we evaluated nearly 18,000 distinctive cases,” Edington explains, “spanning variations in load and renewable resource availability, northern and southern latitude climates, different combinations of LDES technologies and LDES design parameters, and choice of competing firm low-carbon generation resources.”

Some of the key takeaways from the researchers’ rigorous analysis:

  • LDES technologies can offer more than a 10 percent reduction in the costs of deeply decarbonized electricity systems if the storage energy capacity cost (the cost to increase the size of the bathtub) remains under the threshold of $20/kilowatt-hour. This value could increase to 40 percent if energy capacity cost of future technologies is reduced to $1/kWh and to as much as 50 percent for the best combinations of parameters modeled in the space. For purposes of comparison, the current storage energy capacity cost of batteries is around $200/kWh.
  • Given today’s prevailing electricity demand patterns, the LDES energy capacity cost must fall below $10/kWh to replace nuclear power; for LDES to replace all firm power options entirely, the cost must fall below $1/kWh.
  • In scenarios with extensive electrification of transportation and other end-uses to meet economy-wide deep decarbonization goals, it will be more challenging in northern latitudes to displace firm generation under any likely future combination of costs and efficiency performance range for known LDES technologies. This is primarily due to greater peak electricity demand resulting from heating needs in colder climates.
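Taken together, the reported thresholds trace a rough step function from energy capacity cost to system savings. The piecewise mapping below is a simplification for illustration, not the study's model:

```python
# Rough restatement of the thresholds reported above; the step
# function is an illustrative simplification, not the GenX model.

def system_cost_reduction_pct(energy_capacity_cost_per_kwh):
    """Approximate decarbonized-system cost reduction from LDES."""
    c = energy_capacity_cost_per_kwh
    if c <= 1:
        return 40   # ~40%, up to ~50% for the best parameter combos
    if c <= 20:
        return 10   # "more than a 10 percent reduction" under $20/kWh
    return 0        # above the threshold, little system-level benefit

print(system_cost_reduction_pct(200))  # today's batteries: ~$200/kWh
```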

Actionable insights

While breakthroughs in fusion energy, next-generation nuclear power, or carbon capture could well shake up their models, the researchers believe that insights from their study can make an impact right now.

“People working with LDES can see where their technology fits in to the future electricity mix and ask: 'Does it make economic sense from a system perspective?'” says Mallapragada. “And it’s a call for action in policy and investment in innovation, because we show where the technology gaps lie and where we see the greatest value for research breakthroughs in LDES technology development.”

Not all LDES technologies can clear the bar in this design space, nor can there be reliance on LDES as the exclusive means to expand wind and solar swiftly in the near term, or to enable a complete transition to a zero-carbon economy by 2050.

“We show how promising LDES technologies could be,” says Sepulveda. “But we also show that these technologies are not the one solution, and that we are still better off with them complementing firm resources.”

Jenkins spies niche market opportunities for LDES immediately, such as places with a lot of wind and solar deployed and limits on transmission to export that power. In such locations, storage could fill up when transmission is at its limit, and export power later while maximizing use of the power line capacity. But LDES technologies must be ready to make a major impact by the late 2030s and 2040s, he believes, by which time economies might need to be weaned completely off of natural gas dependency if decarbonization is to succeed.

“We must develop and deploy LDES and improve other low-carbon technologies this decade, so we can present real alternatives to policymakers and power system operators,” he says.

In light of this urgent need, Jenkins at Princeton and Mallapragada at MIT are now working to evaluate and advance technologies with the greatest potential in the storage and energy fields to hasten the zero-carbon goal. With help from ARPA-E and MITEI, they are making the state-of-the-art GenX electricity system planning model an open-source tool for public use as well. If their research and modeling approach can show developers and policymakers what kind of designs are most impactful, says Sepulveda, “We could have a decarbonized system that’s less expensive than today’s system if we do things right.”

This research was supported by a grant from the National Science Foundation, and by MITEI’s Low-Carbon Energy Center for Electric Power Systems.

Physicists flip particle accelerator setup to gain a clearer view of atomic nuclei

Mon, 03/29/2021 - 11:00am

Physicists at MIT and elsewhere are blasting beams of ions at clouds of protons — like throwing nuclear darts at the speed of light — to map the structure of an atom’s nucleus.

The experiment inverts the usual particle accelerator setup, in which electrons are hurled at atomic nuclei to probe their structures. The team used this “inverse kinematics” approach to sift out the messy quantum mechanical influences within a nucleus and provide a clear view of a nucleus’ protons and neutrons, as well as its short-range correlated (SRC) pairs. These are pairs of protons or neutrons that briefly bind to form super-dense droplets of nuclear matter and that are thought to dominate the ultradense environments in neutron stars.

The results, published today in Nature Physics, demonstrate that inverse kinematics may be used to characterize the structure of more unstable nuclei — essential ingredients scientists can use to understand the dynamics of neutron stars and the processes by which they generate heavy elements.

“We’ve opened the door for studying SRC pairs, not only in stable nuclei but also in neutron-rich nuclei that are very abundant in environments like neutron star mergers,” says study co-author Or Hen, assistant professor of physics at MIT. “That gets us closer to understanding such exotic astrophysical phenomena.”

Hen’s co-authors include Julian Kahlbow and Efrain Segarra of MIT, Eli Piasetzky of Tel-Aviv University, and researchers from the Technical University of Darmstadt, the Joint Institute for Nuclear Research (JINR) in Russia, the French Alternative Energies and Atomic Energy Commission (CEA), and the GSI Helmholtz Center for Heavy Ion Research in Germany.

An inverted accelerator

Particle accelerators typically probe nuclear structures through electron scattering, in which high-energy electrons are beamed at a stationary cloud of target nuclei. When an electron hits a nucleus, it knocks out protons and neutrons, and the electron loses energy in the process. Researchers measure the energy of the electron beam before and after this interaction to calculate the original energies of the protons and neutrons that were kicked away.

While electron scattering is a precise way to reconstruct a nucleus’ structure, it is also a game of chance. The probability that an electron will hit a nucleus is relatively low, given that a single electron is vanishingly small in comparison to an entire nucleus. To increase this probability, beams are loaded with ever-higher electron densities.

Scientists also use beams of protons instead of electrons to probe nuclei, as protons are comparably larger and more likely to hit their target. But protons are also more complex, and made of quarks and gluons, the interactions of which can muddy the final interpretation of the nucleus itself.

To get a clearer picture, physicists in recent years have inverted the traditional setup: By aiming a beam of nuclei, or ions, at a target of protons, scientists can not only directly measure the knocked out protons and neutrons, but also compare the original nucleus with the residual nucleus, or nuclear fragment, after it has interacted with a target proton.

“With inverted kinematics, we know exactly what happens to a nucleus when we remove its protons and neutrons,” Hen says.

Quantum sifting

The team took this inverted kinematics approach to ultrahigh energies, using JINR’s particle accelerator facility to target a stationary cloud of protons with a beam of carbon-12 nuclei, which they shot out at 48 billion electron-volts — orders of magnitude higher than the energies found naturally in nuclei.

At such high energies, any nucleon that interacts with a proton will stand out in the data, compared with noninteracting nucleons that pass through at much lower energies. In this way, the researchers can quickly isolate any interactions that did occur between a nucleus and a proton.

From these interactions, the team picked through the residual nuclear fragments, looking for boron-11 — a configuration of carbon-12, minus a single proton. If a nucleus started out as carbon-12 and wound up as boron-11, it could only mean that it encountered a target proton in a way that knocked out a single proton. If the target proton knocked out more than one proton, it would have been the result of quantum mechanical effects within the nucleus that would be difficult to interpret. The team isolated boron-11 as a clear signature and discarded any lighter, quantumly influenced fragments.

The team calculated the energy of the proton knocked out of the original carbon-12 nucleus, based on each interaction that produced boron-11. When they plotted these energies, the pattern fit exactly with carbon-12’s well-established distribution — a validation of the inverted, high-energy approach.

They then turned the technique on short-range correlated pairs, looking to see if they could reconstruct the respective energies of each particle in a pair —  fundamental information for ultimately understanding the dynamics in neutron stars and other neutron-dense objects.

They repeated the experiment and this time looked for boron-10, a configuration of carbon-12, minus a proton and a neutron. Any detection of boron-10 would mean that a carbon-12 nucleus interacted with a target proton, which knocked out a proton, and its bound partner, a neutron. The scientists could measure the energies of both the target and the knocked out protons to calculate the neutron’s energy and the energy of the original SRC pair.

In all, the researchers observed 20 SRC interactions and from them mapped carbon-12’s distribution of SRC energies, which fit well with previous experiments. The results suggest that inverse kinematics can be used to characterize SRC pairs in more unstable and even radioactive nuclei with many more neutrons.

“When everything is inverted, this means a beam driving through could be made of unstable particles with very short lifetimes that live for a millisecond,” says Julian Kahlbow, a joint postdoc at MIT and Tel-Aviv University and a co-leading author of the paper. “That millisecond is enough for us to create it, let it interact, and let it go. So now we can systematically add more neutrons to the system and see how these SRCs evolve, which will help us inform what happens in neutron stars, which have many more neutrons than anything else in the universe.”

Method offers inexpensive imaging at the scale of virus particles

Mon, 03/29/2021 - 11:00am

Using an ordinary light microscope, MIT engineers have devised a technique for imaging biological samples with accuracy at the scale of 10 nanometers — which should enable them to image viruses and potentially even single biomolecules, the researchers say.

The new technique builds on expansion microscopy, an approach that involves embedding biological samples in a hydrogel and then expanding them before imaging them with a microscope. For the latest version of the technique, the researchers developed a new type of hydrogel that maintains a more uniform configuration, allowing for greater accuracy in imaging tiny structures.

This degree of accuracy could open the door to studying the basic molecular interactions that make life possible, says Edward Boyden, the Y. Eva Tan Professor in Neurotechnology, a professor of biological engineering and brain and cognitive sciences at MIT, and a member of MIT’s McGovern Institute for Brain Research and Koch Institute for Integrative Cancer Research.

“If you could see individual molecules and identify what kind they are, with single-digit-nanometer accuracy, then you might be able to actually look at the structure of life. And structure, as a century of modern biology has told us, governs function,” says Boyden, who is the senior author of the new study.

The lead authors of the paper, which appears today in Nature Nanotechnology, are MIT Research Scientist Ruixuan Gao and Chih-Chieh “Jay” Yu PhD ’20. Other authors include Linyi Gao PhD ’20; former MIT postdoc Kiryl Piatkevich; Rachael Neve, director of the Gene Technology Core at Massachusetts General Hospital; James Munro, an associate professor of microbiology and physiological systems at University of Massachusetts Medical School; and Srigokul Upadhyayula, a former assistant professor of pediatrics at Harvard Medical School and an assistant professor in residence of cell and developmental biology at the University of California at Berkeley.

Low cost, high resolution

Many labs around the world have begun using expansion microscopy since Boyden’s lab first introduced it in 2015. With this technique, researchers physically enlarge their samples about fourfold in linear dimension before imaging them, allowing them to generate high-resolution images without expensive equipment. Boyden’s lab has also developed methods for labeling proteins, RNA, and other molecules in a sample so that they can be imaged after expansion.

“Hundreds of groups are doing expansion microscopy. There’s clearly pent-up demand for an easy, inexpensive method of nanoimaging,” Boyden says. “Now the question is, how good can we get? Can we get down to single-molecule accuracy? Because in the end, you want to reach a resolution that gets down to the fundamental building blocks of life.”

Other techniques such as electron microscopy and super-resolution imaging offer high resolution, but the equipment required is expensive and not widely accessible. Expansion microscopy, however, enables high-resolution imaging with an ordinary light microscope.

In a 2017 paper, Boyden’s lab demonstrated resolution of around 20 nanometers, using a process in which samples were expanded twice before imaging. This approach, as well as the earlier versions of expansion microscopy, relies on an absorbent polymer made from sodium polyacrylate, assembled using a method called free radical synthesis. These gels swell when exposed to water; however, one limitation of these gels is that they are not completely uniform in structure or density. This irregularity leads to small distortions in the shape of the sample when it’s expanded, limiting the accuracy that can be achieved.
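The arithmetic behind these resolution figures is simple division: physically enlarging a sample by a linear factor divides the microscope's effective resolution by the same factor. The sketch below assumes a ~300 nm diffraction limit for a conventional light microscope, a typical textbook value that is an assumption rather than a number from the article:

```python
# Effective resolution of expansion microscopy: enlarging a sample
# N-fold in linear dimension lets a diffraction-limited microscope
# resolve features N times smaller. The 300 nm limit is an assumed,
# typical value for visible-light optics.

def effective_resolution_nm(diffraction_limit_nm, linear_expansion):
    return diffraction_limit_nm / linear_expansion

single = effective_resolution_nm(300, 4)    # ~75 nm after one ~4x round
double = effective_resolution_nm(300, 16)   # ~19 nm after two rounds (4x * 4x)
print(single, double)
```

Under these assumed numbers, two rounds of expansion land near the ~20-nanometer resolution the 2017 paper reported.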

To overcome this, the researchers developed a new gel called tetra-gel, which forms a more predictable structure. By combining tetrahedral PEG molecules with tetrahedral sodium polyacrylates, the researchers were able to create a lattice-like structure that is much more uniform than the free-radical synthesized sodium polyacrylate hydrogels they previously used.

The researchers demonstrated the accuracy of this approach by using it to expand particles of herpes simplex virus type 1 (HSV-1), which have a distinctive spherical shape. After expanding the virus particles, the researchers compared the shapes to the shapes obtained by electron microscopy and found that the distortion was lower than that seen with previous versions of expansion microscopy, allowing them to achieve an accuracy of about 10 nanometers.

“We can look at how the arrangements of these proteins change as they are expanded and evaluate how close they are to the spherical shape. That's how we validated it and determined how faithfully we can preserve the nanostructure of the shapes and the relative spatial arrangements of these molecules,” Ruixuan Gao says.

Single molecules

The researchers also used their new hydrogel to expand cells, including human kidney cells and mouse brain cells. They are now working on ways to improve the accuracy to the point where they can image individual molecules within such cells. One limitation on this degree of accuracy is the size of the antibodies used to label molecules in the cell, which are about 10 to 20 nanometers long. To image individual molecules, the researchers would likely need to create smaller labels or to add the labels after expansion was complete.

They are also exploring whether other types of polymers, or modified versions of the tetra-gel polymer, could help them realize greater accuracy.

If they can achieve accuracy down to single molecules, many new frontiers could be explored, Boyden says. For example, scientists could glimpse how different molecules interact with each other, which could shed light on cell signaling pathways, immune response activation, synaptic communication, drug-target interactions, and many other biological phenomena.

“We’d love to look at regions of a cell, like the synapse between two neurons, or other molecules involved in cell-cell signaling, and to figure out how all the parts talk to each other,” he says. “How do they work together and how do they go wrong in diseases?”

The research was funded by Lisa Yang, John Doerr, Open Philanthropy, the National Institutes of Health, the Howard Hughes Medical Institute Simons Faculty Scholars Program, the Intelligence Advanced Research Projects Activity, the U.S. Army Research Laboratory, the US-Israel Binational Science Foundation, the National Science Foundation, the Friends of the McGovern Fellowship, and the Fellows program of the Image and Data Analysis Core at Harvard Medical School.

Supporting the Covid-19 vaccine rollout with extra-strength glass

Sun, 03/28/2021 - 12:00am

Some people are actually able to bottle their success, and Mark Kurz SM ’95 is one of the lucky few. Kurz is at the forefront of the fight against Covid-19 as a manufacturing supply chain leader at Corning, the New York-based pioneer in glass science and manufacturing technology.

Corning produces Valor Glass vials, a primary mode of delivery for vaccines as part of the U.S. government’s Operation Warp Speed. In his role as director of Corning’s Pharmaceutical Technologies manufacturing and operations, Kurz oversaw a four-fold acceleration of production capacity for vials. Production is slated to increase 10-fold by the end of this year.

Kurz never expected to be at the forefront of a pandemic — who does? — but his participation in the MIT Leaders for Global Operations (LGO) program positioned him well. He joined LGO (known at the time as Leaders for Manufacturing) as an operations professional at Kodak in 1993, receiving his SM from the Department of Mechanical Engineering as well as an SM from the MIT Sloan School of Management.

“I was really drawn to the MIT LGO program because it covered three main areas: engineering and technology — there's no better school in the world for that — but also business management and leadership. Understanding people and how you lead them is so important, and that really drew me to this MIT program. It’s been the cornerstone of my career,” Kurz says.

Today, MIT LGO students earn an MBA from the MIT Sloan School of Management and a master’s degree from one of eight participating departments in the School of Engineering while participating in an immersive, six-month research fellowship at partner organizations, ranging from Amazon to Corning.

Kurz says that Corning was the perfect place to bring his operations skills, with its 170-year history in glass production. (In the early days, it helped to develop encasements for Edison’s light bulb.) Kurz became part of a team exploring ways to make stronger pharmaceutical glass packaging, a process that hadn’t changed in a century — making glass that was more efficient to fill, crack-resistant, and more durable for vials.

Valor Glass offers multiple improvements over conventional borosilicate vials by chemically strengthening the glass with molten salt. Not only does Valor Glass provide safety benefits with reduced surface delamination and fewer particles, but it is also more efficient for pharmaceutical filling operations by enabling higher production speeds. Corning also found that it was less prone to breaking during standard lyophilization, a freeze-drying process used with some pharmaceutical products. Cold temperature performance is a critical attribute for some vaccines. Valor Glass addresses expensive pain points for partners like Pfizer and Merck. 

“When the Covid demand materialized, we had already laid the groundwork. It’s this lasting investment [with our partners] that positioned us to be able to go fast. We’d been positioning ourselves for expansion,” Kurz says. “We had some capacity that was ready to come up if we had the demand. We knew what needed to be done. It turned from planning to execution.”

Corning joined Operation Warp Speed to supply the vials, shifting to 24/7 production, delivering in weeks instead of months, streamlining the product line to a standard-sized vial, and focusing on quality and volume. In June 2020, Corning received $204 million in funding from the U.S. government to ramp up manufacturing capacity.

“It came back to the fundamentals I learned in the MIT LGO program — really challenging technical problems, understanding the business environment you're operating in, and leading people. I've used my MIT background in every job I've ever been in. And this challenge required all three,” he says.

One particularly important task? Rapidly spotting defects with a relatively new product. Kurz implemented a machine-vision system to examine every vial for flaws. The stakes were high.

“In early manufacturing, there are bugs. We didn’t want to throw out good vials. We also didn’t want to pass bad products along. So we put a lot of effort into that and actually were able to improve our output by more than 10 percent. If you look at it as doses of the vaccine getting to people, it's inspiring,” he says.

Kurz has subsequently shifted roles to focus on global health services, implementing protective measures in Corning factories worldwide. Meanwhile, he’s confident that Valor Glass will have pharmaceutical uses long after the pandemic fades, such as for cancer treatments. On that note, he’s eager to tell the MIT community what a valuable partner Corning is and what a forward-thinking company it continues to be.

“It’s easy to think, ‘Well, it's just glass, right?’ Yet we keep innovating, 170 years after the company was formed. We keep investing nearly a billion dollars a year in R&D and finding new ways to take glass into places that it’s never been before,” he says, from Edison’s original light bulbs to ultra-durable, anti-microbial Gorilla Glass for wearable devices.

For Kurz, his pride in glass manufacturing is clear.

“People want a noble cause to work for. From a leadership standpoint, there can’t be a more noble goal than to try to help people overcome Covid,” he says.
