MIT Latest News
Most people’s experience with seagrass, if any, amounts to little more than a tickle on their ankles while wading in shallow coastal waters. But it turns out these ubiquitous plants, varieties of which exist around the world, could play a key role in protecting vulnerable shores as they face onslaughts from rising sea levels.
New research for the first time quantifies, through experiments and mathematical modeling, just how large and how dense a continuous meadow of seagrass must be to provide adequate damping of waves in a given geographic, climatic, and oceanographic setting.
In a pair of papers appearing in the May issues of two research journals, Coastal Engineering and the Journal of Fluids and Structures, MIT professor of civil and environmental engineering Heidi Nepf and doctoral student Jiarui Lei describe their findings and the significant environmental benefits seagrass offers. These include not only preventing beach erosion and protecting seawalls and other structures, but also improving water quality and sequestering carbon to help limit future climate change.
Those services, coupled with better-known services such as providing habitat for fish and food for other marine creatures, mean that submerged aquatic vegetation including seagrass provides an overall value of more than $4 trillion globally every year, as earlier studies have shown. Yet today, some important seagrass areas such as the Chesapeake Bay are down to about half of their historic seagrass coverage (having rebounded from a low of just 2 percent), thus limiting the availability of these valuable services.
Nepf and Lei recreated artificial versions of seagrass, assembled from materials of different stiffness to reproduce the long, flexible blades and much stiffer bases that are typical of seagrass plants such as Zostera marina, also known as common eelgrass. They set up a meadow-like collection of these artificial plants in a 79-foot-long (24-meter) wave tank in MIT’s Parsons Laboratory, which can mimic conditions of natural waves and currents. They subjected the meadow to a variety of conditions, including still water, strong currents, and wave-like sloshing back and forth. Their results validated predictions made earlier using a computerized model of individual plants.
Researchers used a 79-foot-long wave tank at MIT, loaded with simulated seagrass plants, to study how seagrass acts to attenuate waves under various conditions. In this video, the simulated plants are exposed to strong waves.
In further tests in the MIT tank, simulated seagrass plants are subjected to very low-velocity waves.
The researchers used the physical and numerical models to analyze how the seagrass and waves interact under a variety of conditions of plant density, blade lengths, and water motions. The study describes how the motion of the plants changes with blade stiffness, wave period, and wave amplitude, providing a more precise prediction of wave damping over seagrass meadows. While other research has modeled some of these conditions, the new work more faithfully reproduces real-world conditions and provides a more realistic platform for testing ideas about seagrass restoration or ways of optimizing the beneficial effects of such submerged meadows, they say.
To test the validity of the model, the team then compared its predictions against one specific seagrass meadow off the shore of the Spanish island of Mallorca, in the Mediterranean Sea, which is known to attenuate incoming waves by about 50 percent on average. Using measurements of meadow morphology and wave velocities collected in a previous study led by Professor Eduardo Infantes, currently of Gothenburg University, Lei was able to confirm the predictions made by the model, which analyzed the way the tips of the grass blades and particles suspended in the water both tend to follow circular paths as waves go by, forming circles of motion known as orbitals.
The observations there matched the predictions very well, Lei says: The way wave strength and seagrass motion varied with distance from the edge of the meadow to its interior agreed with the model. So, “with this model the engineers and practitioners can assess different scenarios for seagrass restoration projects, which is a big deal right now,” he says. That could make a significant difference, he says, because some restoration projects are now considered too expensive to undertake, whereas a better analysis could show that a smaller area, less expensive to restore, might be capable of providing the desired level of protection. In other areas, the analysis might show that a project is not worth doing at all, because the characteristics of the local waves or currents would limit the grasses’ effectiveness.
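The kind of scenario assessment described above can be sketched in a few lines of Python. The snippet below assumes a simple exponential wave-height decay through the meadow, a common parameterization in the wave-attenuation literature; the damping coefficient `k_d` and its value are illustrative stand-ins, not quantities from the Nepf and Lei papers.

```python
import math

def wave_height(h0, k_d, x):
    """Wave height after traveling x meters into a meadow, assuming
    simple exponential damping H(x) = H0 * exp(-k_d * x). In practice
    an effective damping coefficient k_d would depend on shoot density,
    blade length and stiffness, and the wave period."""
    return h0 * math.exp(-k_d * x)

def meadow_length_for(frac_reduction, k_d):
    """Meadow length needed to cut wave height by the given fraction."""
    return -math.log(1.0 - frac_reduction) / k_d

k_d = 0.0035  # 1/m -- hypothetical value for a dense meadow, not measured
print(f"{meadow_length_for(0.50, k_d):.0f} m of meadow for a 50% reduction")
```

A restoration planner could invert the relationship this way: given a target level of wave attenuation, solve for the minimum meadow extent, then weigh the restoration cost of that extent against the protection it buys.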
The particular seagrass meadow in Mallorca that they studied is known to be very dense and uniform, so one future project is to extend the comparison to other seagrass areas, including those that are patchier or less thickly vegetated, Nepf says, to demonstrate that the model can indeed be useful under a variety of conditions.
By attenuating the waves and thus providing protection against erosion, the seagrass can trap fine sediment on the seabed. This can significantly reduce or prevent runaway growth of algae fed by the nutrients associated with the fine sediment, which in turn causes a depletion of oxygen that can kill off much of the marine life, a process called eutrophication.
Seagrass also has significant potential for sequestering carbon, both through its own biomass and by filtering out fine organic material from the surrounding water, according to Nepf, and this is a focus of her and Lei’s ongoing research. An acre of seagrass can store about three times as much carbon as an acre of rainforest, and Lei says preliminary calculations suggest that globally, seagrass meadows are responsible for more than 10 percent of carbon buried in the ocean, even though they occupy just 0.2 percent of the area.
While other researchers have studied the effects of seagrass in steady currents, or in oscillating waves, “they are the first to combine these two types of flows, which are what real plants are typically subjected to. Despite the added complexity, they really sort out the physics and define different flow regimes with different behaviors,” says Frédérick Gosselin, professor of mechanical engineering at Polytechnique Montréal, in Canada, who was not connected to this research.
Gosselin adds, “This line of research is critical. Land developers are quick to fill and dredge wetlands without much thinking about the role these humid environments play.” This study “demonstrates how submerged vegetation has a precisely quantifiable effect on damping incoming waves. This means we can now evaluate exactly how much a meadow protects the coast from erosion. … This information would allow better decisions by our lawmakers.”
The work was funded by the U.S. National Science Foundation.
Climate change is confusing.
At least, that’s the impression the average American might get if they tried to learn about the subject from the flurry of journal articles, policy papers, and action plans that dominate the conversation among scientists and environmental advocates. To help demystify the science, solutions, and policies behind climate change, the MIT Environmental Solutions Initiative (ESI) has launched a podcast series called TILclimate, airing eight episodes in its first season over the 2019 spring semester.
“There’s a lot of information out there about why climate change is happening, how it will affect human life, and the solutions that are on the table. But it’s hard to find sources that you trust,” says Laur Hesse Fisher, program director for ESI and host of the new series. “And even then, there’s still a lot of jargon and technicalities that you have to wade through.
“We’re trying to solve that problem.”
In each 10-minute episode, Hesse Fisher speaks to an expert from the MIT community to break down a clear, focused question related to climate change. In the first batch of episodes, these questions have included: What do clouds have to do with climate change? Why are different parts of the world experiencing different climate impacts? How does carbon pricing work?
The podcast is part of a broader ESI project called MIT Climate, a community-building effort built around a common web portal where users can share climate change-related projects, news stories, and learning resources at MIT and beyond. MIT Climate is intended to draw individuals and groups working on climate issues at MIT closer together, and eventually become a platform for worldwide, science-based learning and engagement on climate change. You can see a prototype of the portal at climate.mit.edu.
“We named the podcast TILclimate after the popular Reddit hashtag TIL, which stands for Today I Learned,” says Hesse Fisher. “We hope to signify that these episodes are accessible. Even if you have no prior knowledge of climate science or policy, after 10 minutes you know enough to start being a part of the conversation.”
To hear this approach in action, you can listen to the podcast’s first episode, “TIL about planes,” where Hesse Fisher interviews MIT professor of aeronautics and astronautics Steven Barrett, head of MIT's Laboratory for Aviation and the Environment. Together, they walk listeners through the two primary ways that air travel impacts Earth’s climate: by releasing carbon dioxide high into the atmosphere and by producing heat-trapping condensation trails.
“Most of the CO2 that aviation's ever emitted is still in the atmosphere because it lasts so long,” Barrett says in the interview. To help illustrate his point, Hesse Fisher adds: “Think about fighter planes circling Europe in World War I, or Charles Lindbergh flying across the Atlantic Ocean in 1927. The CO2 from those flights is still in the atmosphere.”
“We steer clear of jargon whenever possible and make a real attempt to define the terms and concepts that we use,” says Hesse Fisher. “The point is, we hope the podcast will appeal to the ‘climate curious’ — people who are just interested enough in climate change that they’d listen to something around 10 minutes.”
Those who do want to dig deeper into the content can head to TILclimate’s profile on the MIT Climate website, where each episode posting includes a “More Info” tab with links to external resources.
Season 1 concluded on May 1, comprising eight episodes about planes, clouds, materials, hurricanes, uncertainty, climate impacts, carbon pricing, and geoengineering. You can listen to TILclimate on iTunes, Spotify, Google Podcasts, or wherever you get your podcasts.
In an event attended by more than 100 members of the MIT community — friends, family, faculty, staff, students, and sponsors — the MIT Motorsports team unveiled their 2019 electric race car in Lobby 13 in April. Introducing the evening, team captain and MIT junior Serena Grown-Haeberli thanked each student on the 50-member team for putting their heart and soul into this car. “This student hands-on project is really the 'mens et manus,' the hands-on portion, of our education,” she added, referring to MIT's motto of "mind and hand."
Cheers, hoots of appreciation, and applause could be heard when Grown-Haeberli peeled back the red tarp. No longer the MIT Pantone 201 red, the 551-pound vehicle is now black, bearing the names of sponsors who have generously helped the team get to where they are now.
Short talks by team members took place during the event. Mechanical lead, driver, and junior Jeremy Noel, a team member since 2017, described the vehicle’s new four-wheel-drive configuration. By adding two motors to drive the front wheels, in addition to the single rear motor driving both rear wheels, “we can deliver more power to the ground and the car will be faster,” Noel said.
There’s been a redesign of the battery pack, switching from forced-air cooling with fans to water cooling by pumping water through the modules. The team has also developed control systems for the vehicle including power limiting, launch control, and torque vectoring.
Acknowledging the quirks in the team’s past, senior Cheyenne Hua — former team captain and now a member of the mechanical sub-team — talked about what has changed and what has stayed the same since the team began in 2001, co-founded by mathematics major Nick Gidwani ’04 and mechanical engineering major Richard James ’04, SM ’06.
“Nick got a couple of buddies and was like, ‘OK, let's start this racecar team.’ He would actually drive to people's houses and knock down their doors, and be like, ‘Hey, it's shop day. Come to shop.’ He'd pick them up and carry them to the shop and get them to do [the] work,” related Hua.
“And now we can't get [students] to leave shop,” Hua said. “I think these past couple of weeks, we've had at least five people in shop at any given time of day, including at 6 a.m.”
What has stayed the same is the team’s predilection for collecting useful (and not so useful) tools. In 2010 the team found an enormous shearer — a cheap metal-cutting machine — in Ohio. They rented a forklift for the weekend, drove to Ohio, picked up the shearer, came back to Cambridge, and installed it. They kept the forklift for the rest of the weekend, installed some hooks in the ceiling, and decorated hard-to-reach places in the shop. Today team members Ethan Perrin, a junior and the battery sub-team lead, and Noel take weekly trips to random Craigslist destinations to pick up various tools.
“Every time I walk in, it seems like there's a new tool in the shop, like 12-inch calipers,” remarked Hua. “How big are your calipers?” Hua asked Noel.
“Twenty inches, 20-inch calipers,” answered Noel.
For the students, the unveiling was an opportunity to show their family and friends the fruit of their labors, to explain why they come home covered in grease and metal chips, and why they practically live in the Edgerton Center’s Area 51 student shop in Building N51.
With almost weekly trips to Palmer Motorsports Park for test driving, the team is looking forward to SAE International’s Collegiate Design Series competition coming up in June.
MIT neuroscientists have performed the most rigorous testing yet of computational models that mimic the brain’s visual cortex.
Using their current best model of the brain’s visual neural network, the researchers designed a new way to precisely control individual neurons and populations of neurons in the middle of that network. In an animal study, the team then showed that the information gained from the computational model enabled them to create images that strongly activated specific brain neurons of their choosing.
The findings suggest that the current versions of these models are similar enough to the brain that they could be used to control brain states in animals. The study also helps to establish the usefulness of these vision models, which have generated vigorous debate over whether they accurately mimic how the visual cortex works, says James DiCarlo, the head of MIT’s Department of Brain and Cognitive Sciences, an investigator in the McGovern Institute for Brain Research and the Center for Brains, Minds, and Machines, and the senior author of the study.
“People have questioned whether these models provide understanding of the visual system,” he says. “Rather than debate that in an academic sense, we showed that these models are already powerful enough to enable an important new application. Whether you understand how the model works or not, it’s already useful in that sense.”
MIT postdocs Pouya Bashivan and Kohitij Kar are the lead authors of the paper, which appears in the May 2 online edition of Science.
Over the past several years, DiCarlo and others have developed models of the visual system based on artificial neural networks. Each network starts out with an arbitrary architecture consisting of model neurons, or nodes, that can be connected to each other with different strengths, also called weights.
The researchers then train the models on a library of more than 1 million images. As the researchers show the model each image, along with a label for the most prominent object in the image, such as an airplane or a chair, the model learns to recognize objects by changing the strengths of its connections.
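The training procedure described here — adjusting connection strengths until the network's output matches the image labels — can be sketched in miniature. The snippet below trains a single-layer softmax classifier on synthetic 16-dimensional "images"; the real models are deep convolutional networks trained on more than a million photographs, so this shows only the core weight-update idea, not the actual architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in: 200 random 16-"pixel" images, labeled by a simple rule
n, d, classes = 200, 16, 2
X = rng.normal(size=(n, d))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # label plays the role of "airplane vs. chair"

W = np.zeros((d, classes))  # connection strengths ("weights"), initially blank
for _ in range(500):
    logits = X @ W
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)            # softmax class probabilities
    grad = X.T @ (p - np.eye(classes)[y]) / n    # cross-entropy gradient
    W -= 0.5 * grad                              # adjust weights to reduce label errors

acc = ((X @ W).argmax(axis=1) == y).mean()
print(f"training accuracy: {acc:.2f}")
```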
It’s difficult to determine exactly how the model achieves this kind of recognition, but DiCarlo and his colleagues have previously shown that the “neurons” within these models produce activity patterns very similar to those seen in the animal visual cortex in response to the same images.
In the new study, the researchers wanted to test whether their models could perform some tasks that previously have not been demonstrated. In particular, they wanted to see if the models could be used to control neural activity in the visual cortex of animals.
“So far, what has been done with these models is predicting what the neural responses would be to other stimuli that they have not seen before,” Bashivan says. “The main difference here is that we are going one step further and using the models to drive the neurons into desired states.”
To achieve this, the researchers first created a one-to-one map of neurons in the brain’s visual area V4 to nodes in the computational model. They did this by showing images to animals and to the models, and comparing their responses to the same images. There are millions of neurons in area V4, but for this study, the researchers created maps for subpopulations of five to 40 neurons at a time.
“Once each neuron has an assignment, the model allows you to make predictions about that neuron,” DiCarlo says.
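One simple way to build such a neuron-to-node map is to show the same images to both systems and pair each recorded neuron with the model node whose responses correlate best with it. The sketch below implements that correlation-matching scheme on synthetic data; it is an illustrative assumption, and the paper's actual fitting procedure may differ (for example, regressing neuron responses onto model activations).

```python
import numpy as np

def map_neurons_to_nodes(neuron_resp, node_resp):
    """Assign each recorded neuron to the model node whose responses
    across the same image set correlate best with it.
    neuron_resp: (n_images, n_neurons); node_resp: (n_images, n_nodes)."""
    zn = (neuron_resp - neuron_resp.mean(0)) / neuron_resp.std(0)
    zm = (node_resp - node_resp.mean(0)) / node_resp.std(0)
    corr = zn.T @ zm / len(neuron_resp)  # (n_neurons, n_nodes) correlations
    return corr.argmax(axis=1)

rng = np.random.default_rng(1)
nodes = rng.normal(size=(100, 8))          # 8 model nodes, 100 images
true_map = np.array([3, 0, 5])             # ground-truth assignment
# Simulated neurons: noisy copies of their matching nodes
neurons = nodes[:, true_map] + 0.3 * rng.normal(size=(100, 3))
print(map_neurons_to_nodes(neurons, nodes))  # should recover the true assignment
```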
The researchers then set out to see if they could use those predictions to control the activity of individual neurons in the visual cortex. The first type of control, which they called “stretching,” involves showing an image that will drive the activity of a specific neuron far beyond the activity usually elicited by “natural” images similar to those used to train the neural networks.
The researchers found that when they showed animals these “synthetic” images, which are created by the models and do not resemble natural objects, the target neurons did respond as expected. On average, the neurons showed about 40 percent more activity in response to these images than when they were shown natural images like those used to train the model. This kind of control has never been reported before.
“That they succeeded in doing this is really amazing. It’s as if, for that neuron at least, its ideal image suddenly leaped into focus. The neuron was suddenly presented with the stimulus it had always been searching for,” says Aaron Batista, an associate professor of bioengineering at the University of Pittsburgh, who was not involved in the study. “This is a remarkable idea, and to pull it off is quite a feat. It is perhaps the strongest validation so far of the use of artificial neural networks to understand real neural networks.”
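The "stretching" procedure can be sketched as gradient ascent on a model neuron's activation: start from noise and repeatedly nudge the image in the direction that increases the neuron's response, under a fixed image-norm constraint. The real work back-propagates through a deep network to the image pixels; the toy below uses a single linear "neuron" so the optimum is known in advance (the image aligned with the neuron's weight vector), making the idea easy to verify.

```python
import numpy as np

def synthesize_stretch_image(neuron_weights, n_steps=200, lr=0.1):
    """Gradient-ascent image synthesis: climb the gradient of one model
    neuron's activation while keeping the image norm fixed. For a linear
    unit the gradient of the activation w.x with respect to x is just w."""
    rng = np.random.default_rng(0)
    x = rng.normal(size=neuron_weights.shape)          # start from noise
    target_norm = np.linalg.norm(neuron_weights)
    for _ in range(n_steps):
        x += lr * neuron_weights                       # ascend the activation
        x *= target_norm / np.linalg.norm(x)           # enforce norm constraint
    return x

w = np.array([1.0, -2.0, 0.5, 3.0])  # toy "neuron" (hypothetical weights)
img = synthesize_stretch_image(w)
print("activation:", img @ w)  # approaches the constrained maximum, |w|^2
```

Under the norm constraint, the best achievable activation is |w|² = 14.25 here, and the synthesized "image" converges to the weight vector itself — the linear analogue of a neuron's ideal stimulus leaping into focus.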
In a similar set of experiments, the researchers attempted to generate images that would drive one neuron maximally while also keeping the activity in nearby neurons very low, a more difficult task. For most of the neurons they tested, the researchers were able to enhance the activity of the target neuron with little increase in the surrounding neurons.
“A common trend in neuroscience is that experimental data collection and computational modeling are executed somewhat independently, resulting in very little model validation, and thus no measurable progress. Our efforts bring back to life this ‘closed loop’ approach, engaging model predictions and neural measurements that are critical to the success of building and testing models that will most resemble the brain,” Kar says.
The researchers also showed that they could use the model to predict how neurons of area V4 would respond to synthetic images. Most previous tests of these models have used the same type of naturalistic images that were used to train the model. The MIT team found that the models were about 54 percent accurate at predicting how the brain would respond to the synthetic images, compared to nearly 90 percent accuracy when the natural images are used.
“In a sense, we’re quantifying how accurate these models are at making predictions outside the domain where they were trained,” Bashivan says. “Ideally the model should be able to predict accurately no matter what the input is.”
The researchers now hope to improve the models’ accuracy by allowing them to incorporate the new information they learn from seeing the synthetic images, which was not done in this study.
This kind of control could be useful for neuroscientists who want to study how different neurons interact with each other, and how they might be connected, the researchers say. Further in the future, this approach could potentially be useful for treating mood disorders such as depression. The researchers are now working on extending their model to the inferotemporal cortex, which feeds into the amygdala, which is involved in processing emotions.
“If we had a good model of the neurons that are engaged in experiencing emotions or causing various kinds of disorders, then we could use that model to drive the neurons in a way that would help to ameliorate those disorders,” Bashivan says.
The research was funded by the Intelligence Advanced Research Projects Agency, the MIT-IBM Watson AI Lab, the National Eye Institute, and the Office of Naval Research.
A new study provides potential new targets for treating epilepsy and new fundamental insights into the relationship between neurons and their glial “helper” cells. In eLife, scientists at MIT’s Picower Institute for Learning and Memory report finding a key sequence of molecular events in which the genetic mutation in a fruit fly model of epilepsy leaves neurons vulnerable to becoming hyperactivated by stress, leading to seizures.
About 60 million people worldwide have epilepsy, a neurological condition characterized by seizures resulting from excessive neural activity. The “zydeco” model flies in the study experience seizures in a similar fashion. Since discovering zydeco, the lab of MIT neurobiologist Troy Littleton, the Menicon Professor in Neuroscience, has been investigating why the flies’ zydeco mutation makes it a powerful model of epilepsy.
Heading into the study, the team led by postdoc Shirley Weiss knew that the zydeco mutation was specifically expressed by cortex glial cells and that the protein it makes helps to pump calcium ions out of the cells. But that didn’t explain much about why a glial cell’s difficulty maintaining a natural ebb and flow of calcium ions would lead adjacent neurons to become too active under seizure-inducing stresses, such as fever-grade temperatures or the fly being jostled around.
The activity of neurons rises and falls based on the flow of ions — for a neuron to “fire,” for instance, it takes in sodium ions, and then to calm back down it releases potassium ions. But the ability of neurons to do that depends on there being a conducive balance of ions outside the cell. For instance, too much potassium outside makes it harder to get rid of potassium and calm down.
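The dependence of a neuron's resting state on the outside potassium concentration follows directly from the Nernst equation. The sketch below uses typical textbook concentration values (not measurements from the fly study) to show how elevated extracellular K+ pulls the potassium equilibrium potential toward zero, leaving neurons sitting depolarized and less able to calm down after firing.

```python
import math

def nernst_mV(conc_out, conc_in, z=1, T=295.0):
    """Nernst equilibrium potential in millivolts:
    E = (RT / zF) * ln([out]/[in])."""
    R, F = 8.314, 96485.0  # gas constant, Faraday constant
    return 1000.0 * (R * T) / (z * F) * math.log(conc_out / conc_in)

# Textbook-style K+ concentrations: high inside, low outside
print(f"normal   [K+]out =  4 mM: {nernst_mV(4, 140):+.1f} mV")
print(f"elevated [K+]out = 12 mM: {nernst_mV(12, 140):+.1f} mV")
# Tripling extracellular K+ shifts E_K by tens of millivolts toward zero,
# which is why poor potassium clearance around neurons promotes hyperactivity.
```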
The need for an ion balance — and the way it is upset by the zydeco mutation — turned out to be the key to the new study. In a four-year series of experiments, Weiss, Littleton, and their co-authors found that excess calcium in cortex glia cells causes them to hyper-activate a molecular pathway that leads them to withdraw many of the potassium channels that they typically deploy to remove potassium from around neurons. With too much potassium left around, neurons can’t calm down when they are excited, and seizures ensue.
“No one has really shown how calcium signaling in glia could directly communicate with this more classical role of glial cells in potassium buffering,” Littleton says. “So this is a really important discovery linking an observation that’s been found in glia for a long time — these calcium oscillations that no one really understood — to a real biological function in glial cells, where it’s contributing to their ability to regulate ionic balance around neurons.”
Weiss’s work lays out a detailed sequence of events, implicating several specific molecular players and processes. That richly built knowledge meant that along the way, she and the team found multiple steps in which they could intervene to prevent seizures.
She started working the problem from the calcium end. With too much calcium afoot, she asked, what genes might be in a related pathway such that, if their expression was prevented, seizures would not occur? She interfered with expression in 847 potentially related genes and found that about 50 affected seizures. Among those, one stood out both for being closely linked to calcium regulation and for being expressed in the key cortex glia cells of interest: calcineurin. Inhibiting calcineurin activity, for instance with the immunosuppressant medications cyclosporine A or FK506, blocked seizures in zydeco mutant flies.
Weiss then looked at the genes affected by the calcineurin pathway and found several. One day at a conference where she was presenting a poster of her work, an onlooker mentioned that glial potassium channels could be involved. Sure enough, she found a particular one called “sandman” that, when knocked down, led to seizures in the flies. Further research showed that hyperactivation of calcineurin in zydeco glia led to an increase in a cellular process called endocytosis, in which the cell was bringing too much sandman back into the cell body. Without sandman staying on the cell membrane, the glia couldn’t effectively remove potassium from the outside.
When Weiss and her co-authors interfered to suppress endocytosis in zydeco flies, they also were able to reduce seizures, because that allowed more sandman to persist where it could reduce potassium. Sandman, notably, is equivalent to a protein in mammals called TRESK.
“Pharmacologically targeting glial pathways might be a promising avenue for future drug development in the field,” the authors wrote in eLife.
In addition to that clinical lead, the study also offers some new insights for more fundamental neuroscience, Littleton and Weiss said. While zydeco flies are good models of epilepsy, Drosophila’s cortex glia do have a property not found in mammals: They contact only the cell body of neurons, not the synaptic connections on their axon and dendrite branches. That makes them an unusually useful test bed to learn how glia interact with neurons via their cell body versus their synapses. The new study, for instance, shows a key mechanism for maintaining ionic balance for the neurons.
In addition to Weiss and Littleton, the paper’s other authors are Jan Melom, who helped lead the discovery of zydeco, postdoc Kiel Ormerod, and former postdoc Yao Zhang.
The National Institutes of Health and the JPB Foundation funded the research.
The following press release was issued jointly today by the LIGO Laboratory, LIGO Scientific Collaboration, and Virgo Collaboration.
On April 25, 2019, the National Science Foundation's Laser Interferometer Gravitational-Wave Observatory (LIGO) and the European-based Virgo detector registered gravitational waves from what appears likely to be a crash between two neutron stars — the dense remnants of massive stars that previously exploded. One day later, on April 26, the LIGO-Virgo network spotted another candidate source with a potentially interesting twist: It may in fact have resulted from the collision of a neutron star and black hole, an event never before witnessed.
"The universe is keeping us on our toes," says Patrick Brady, spokesperson for the LIGO Scientific Collaboration and a professor of physics at the University of Wisconsin-Milwaukee. "We're especially curious about the April 26 candidate. Unfortunately, the signal is rather weak. It's like listening to somebody whisper a word in a busy café; it can be difficult to make out the word or even to be sure that the person whispered at all. It will take some time to reach a conclusion about this candidate."
"NSF's LIGO, in collaboration with Virgo, has opened up the universe to future generations of scientists," says NSF Director France Cordova. "Once again, we have witnessed the remarkable phenomenon of a neutron star merger, followed up closely by another possible merger of collapsed stars. With these new discoveries, we see the LIGO-Virgo collaborations realizing their potential of regularly producing discoveries that were once impossible. The data from these discoveries, and others sure to follow, will help the scientific community revolutionize our understanding of the invisible universe."
The discoveries come just weeks after LIGO and Virgo turned back on. The twin detectors of LIGO — one in Washington and one in Louisiana — along with Virgo, located at the European Gravitational Observatory (EGO) in Italy, resumed operations April 1, after undergoing a series of upgrades to increase their sensitivities to gravitational waves — ripples in space and time. Each detector now surveys larger volumes of the universe than before, searching for extreme events such as smash-ups between black holes and neutron stars.
"Joining human forces and instruments across the LIGO and Virgo collaborations has been once again the recipe of an incomparable scientific month, and the current observing run will comprise 11 more months," says Giovanni Prodi, the Virgo data analysis coordinator, at the University of Trento and the Istituto Nazionale di Fisica Nucleare (INFN) in Italy. "The Virgo detector works with the highest stability, covering the sky 90 percent of the time with useful data. This is helping in pointing to the sources, both when the network is in full operation and at times when only one of the LIGO detectors is operating. We have a lot of groundbreaking research work ahead."
In addition to the two new candidates involving neutron stars, the LIGO-Virgo network has, in this latest run, spotted three likely black hole mergers. In total, since making history with the first-ever direct detection of gravitational waves in 2015, the network has spotted evidence for two neutron star mergers; 13 black hole mergers; and one possible black hole-neutron star merger.
When two black holes collide, they warp the fabric of space and time, producing gravitational waves. When two neutron stars collide, they not only send out gravitational waves but also light. That means telescopes sensitive to light waves across the electromagnetic spectrum can witness these fiery impacts together with LIGO and Virgo. One such event occurred in August 2017: LIGO and Virgo initially spotted a neutron star merger in gravitational waves and then, in the days and months that followed, about 70 telescopes on the ground and in space witnessed the explosive aftermath in light waves, including everything from gamma rays to optical light to radio waves.
In the case of the two recent neutron star candidates, telescopes around the world once again raced to track the sources and pick up the light expected to arise from these mergers. Hundreds of astronomers eagerly pointed telescopes at patches of sky suspected to house the signal sources. However, at this time, neither of the sources has been pinpointed.
"The search for explosive counterparts of the gravitational-wave signal is challenging due to the amount of sky that must be covered and the rapid changes in brightness that are expected," says Brady. "The rate of neutron star merger candidates being found with LIGO and Virgo will give more opportunities to search for the explosions over the next year."
The April 25 neutron star smash-up, dubbed S190425z, is estimated to have occurred about 500 million light-years away from Earth. Only one of the twin LIGO facilities picked up its signal along with Virgo (LIGO Livingston witnessed the event but LIGO Hanford was offline). Because only two of the three detectors registered the signal, estimates of the location in the sky from which it originated were not precise, leaving astronomers to survey nearly one-quarter of the sky for the source.
The possible April 26 neutron star-black hole collision (referred to as S190426c) is estimated to have taken place roughly 1.2 billion light-years away. It was seen by all three LIGO-Virgo facilities, which helped better narrow its location to regions covering about 1,100 square degrees, or about 3 percent of the total sky.
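A quick back-of-the-envelope calculation (not part of the LIGO-Virgo analysis) shows how these localization areas translate into fractions of the sky, which spans about 41,253 square degrees in total:

```python
# Back-of-the-envelope check of the quoted localization areas.
# The celestial sphere spans 4*pi steradians, and one steradian
# is (180/pi)^2 square degrees.
import math

FULL_SKY_SQ_DEG = 4 * math.pi * (180 / math.pi) ** 2  # ~41,253 square degrees

def sky_fraction(area_sq_deg: float) -> float:
    """Fraction of the whole sky covered by a localization region."""
    return area_sq_deg / FULL_SKY_SQ_DEG

print(round(sky_fraction(10_000), 2))  # S190425z: 0.24, roughly a quarter of the sky
print(round(sky_fraction(1_100), 3))   # S190426c: 0.027, about 3 percent
```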
"The latest LIGO-Virgo observing run is proving to be the most exciting one so far," says David H. Reitze of Caltech, executive director of LIGO. "We're already seeing hints of the first observation of a black hole swallowing a neutron star. If it holds up, this would be a trifecta for LIGO and Virgo — in three years, we'll have observed every type of black hole and neutron star collision. But we've learned that claims of detections require a tremendous amount of painstaking work — checking and rechecking — so we'll have to see where the data takes us."
LIGO is funded by NSF and operated by Caltech and MIT, which conceived of LIGO and led the Initial and Advanced LIGO projects. Financial support for the Advanced LIGO project was led by the NSF with Germany (Max Planck Society), the U.K. (Science and Technology Facilities Council) and Australia (Australian Research Council-OzGrav) making significant commitments and contributions to the project. Approximately 1,300 scientists from around the world participate in the effort through the LIGO Scientific Collaboration, which includes the GEO Collaboration. A list of additional partners is available at https://my.ligo.org/census.php.
The Virgo Collaboration is currently composed of approximately 350 scientists, engineers, and technicians from about 70 institutes from Belgium, France, Germany, Hungary, Italy, the Netherlands, Poland, and Spain. The European Gravitational Observatory (EGO) hosts the Virgo detector near Pisa in Italy, and is funded by Centre National de la Recherche Scientifique (CNRS) in France, the Istituto Nazionale di Fisica Nucleare (INFN) in Italy, and Nikhef in the Netherlands. A list of the Virgo Collaboration members can be found at http://public.virgo-gw.eu/the-virgo-collaboration/. More information is available on the Virgo website at http://www.virgo-gw.eu.
It’s been just three weeks since LIGO resumed its hunt for cosmic ripples through space-time, and already the gravitational-wave hunter is off to a running start.
One of the detections researchers are now poring over is a binary neutron star merger — a collision of two incredibly dense stars, nearly 500 million light-years away. The power of this stellar impact set off gravitational waves across the cosmos, which eventually reached Earth as infinitesimally small ripples and were picked up by LIGO (the Laser Interferometer Gravitational-wave Observatory, operated jointly by Caltech and MIT), as well as by Virgo, LIGO’s counterpart in Italy, on April 25 at 4:18 a.m. ET. Researchers have determined that the source of the gravitational-wave signal is likely a binary neutron star merger, which they’ve dubbed #S190425z. This is the second time that LIGO has discovered such a source.
The other neutron star merger, detected in 2017, was the first event captured by LIGO that was also observed using optical telescopes. As astronomers around the world pointed telescopes at this first neutron star merger, they were able to see the brilliant “kilonova” explosion generated as the two stars merged. They also detected signatures of gold and platinum in the aftermath — direct evidence for how heavy elements are produced in the universe.
With LIGO’s new detection, astronomers are again pointing telescopes to the skies and searching for optical traces of the stellar merger and any resulting cosmic goldmine.
MIT News caught up with Salvatore Vitale, assistant professor of physics at MIT and a member of the LIGO Scientific Collaboration, about this newest stellar discovery and hints of even more “cosmic whispers” on the horizon — including the tantalizing possibility that LIGO has also captured the collision of a black hole and a neutron star.
Q: Walk us through the moment of discovery. When did this signal come in, and what told you that it was likely a binary neutron star merger?
A: The signal hit Earth at 4:18 a.m. EDT. Unfortunately, at that time the LIGO detector in Hanford, Washington, was not collecting data. The signal was thus detected by the LIGO instrument in Livingston, Louisiana, and the Virgo detector in Italy. Having only two detectors online did not affect our confidence that it was real, since neutron star binaries spend more than one minute in our detectors and these kinds of very long chirps cannot easily be confused with instrumental artifacts or other sources of noise. Similarly, we were able to measure extremely well the mass of the source, which told us it was a binary neutron star, the second ever detected by LIGO and Virgo.
The main consequence of only having two detectors online was that it hurt our ability to localize the source in the sky. The sky map we sent out had a very large uncertainty, over 10,000 square degrees, which is a huge area to follow up if you are looking for an electromagnetic counterpart.
Q: Since the notice from LIGO went out, astronomers have been training telescopes on the sky. What have they been able to find about this new merger, and how is it different from the one LIGO detected in 2017?
A: When two neutron stars smash one against the other, they trigger a cataclysmic explosion that releases huge amounts of energy and creates some of the heaviest elements in the universe (gold, among others). Finding both gravitational and electromagnetic waves can tell us about the environment in which these systems form, how they shine, their role in enriching galaxies with metals, and about the universe. This is why we routinely and automatically send public alerts to astronomers, so that they can try to identify the sources of our gravitational-wave events.
This is challenging for S190425z, since it was poorly localized (compare 10,000 square degrees for S190425z with 30 square degrees for the first binary neutron star merger, GW170817). Another important difference is that S190425z was nearly four times farther away. Both these factors make it harder to successfully find an electromagnetic counterpart to S190425z: you must scan a much larger area, and you are looking for a weaker and more distant source. This doesn’t mean that astronomers are not trying hard! In fact, in the last 36 hours there have been dozens of observations. So far nothing too convincing, but a lot of excitement! It is nice to see the broader community so engaged with the follow-up of LIGO and Virgo’s events.
Q: Since it started its newest observing run, LIGO has been detecting at least one gravitational wave source per week. What does this say about what sort of extreme phenomena are happening in the universe, on a daily basis?
A: The last few weeks have been incredibly exciting! So far we are making discoveries at roughly the rate we were expecting: one binary black hole a week and one binary neutron star a month. This confirms our expectations that gravitational waves can really play a major role in understanding the most extreme objects of the universe.
It also says that it is not uncommon for two stellar-mass black holes to merge, which was not obvious at all before LIGO and Virgo discovered them. We still don’t know if the black hole pairs we are seeing had been together their whole cosmic life, first as normal stars, then as black holes, or if instead they were born separately and then just happened to meet and form a binary system. Both avenues are possible, and with a few more tens of detections we should be able to tell which of these two scenarios happens more often.
Then there is always the possibility of detecting something new and unexpected! As I started drafting these answers, we detected #S190426c, which, if of astrophysical origin, could be the first neutron star colliding into a black hole ever detected by humans. We will know more in the next few weeks, and we will keep listening for these faint and remote cosmic whispers.
One of the most fundamental chemical reactions that take place in energy-conversion systems — including catalysts, flow batteries, high-capacity energy-storing supercapacitors, and systems to make fuels using solar energy — has now been analyzed in detail. The results could inform the development of new electrode or catalyst materials with properties precisely tuned to match the energy levels needed for their functions.
The findings are described today in the journal ACS Central Science, in a paper by MIT graduate student Megan Jackson, postdoc Michael Pegis, and professor of chemistry Yogesh Surendranath.
Virtually every energy-conversion reaction involves protons and electrons reacting with each other, and in functional devices these reactions typically take place on the surface of a solid, such as a battery electrode. Until now, Surendranath says, “we haven’t had a very good fundamental understanding of what governs the thermodynamics of electrons and protons coming together at an electrode. We don’t understand those thermodynamics at a molecular level,” and without that knowledge, selecting materials for energy devices comes down largely to trial and error.
Much research has been devoted to understanding electron-proton reactions in molecules, he says. In those cases, the amount of energy needed to bind a proton to the molecule, a factor called pKa, can be distinguished from the energy needed to bind an electron to that molecule, called the reduction potential.
Knowing those two numbers for a given molecule makes it possible to predict and subsequently tune reactivity. But when the reactions are taking place on an electrode surface instead, there has been no way to separate out the two different factors, because proton transfer and electron transfer occur simultaneously.
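For molecules, the two energies combine through a standard thermochemical cycle. The sketch below illustrates that bookkeeping using the textbook Bordwell-type relation, not the authors' surface framework; the constants are for kcal/mol at 298 K, and the site values and solvent constant C_G are illustrative assumptions, not figures from the paper:

```python
# Splitting an overall proton-coupled electron transfer (PCET) free energy
# into a proton term (set by pKa) and an electron term (set by reduction
# potential), via the textbook Bordwell-type thermochemical cycle for
# molecules. Units: kcal/mol at 298 K. C_G is a solvent/reference constant.
RT_LN10 = 1.37   # 2.303*R*T in kcal/mol per pKa unit at 298 K
F_KCAL = 23.06   # Faraday constant in kcal/(mol*V)

def pcet_free_energy(pka: float, e_red: float, c_g: float = 0.0) -> float:
    """Overall PCET thermodynamics = proton-transfer term + electron-transfer
    term (+ a constant fixed by the solvent and reference electrode)."""
    proton_term = RT_LN10 * pka     # cost of transferring just the proton
    electron_term = F_KCAL * e_red  # cost of transferring just the electron
    return proton_term + electron_term + c_g

# Two hypothetical sites can have nearly the same overall PCET energy while
# partitioning it differently between the proton and electron "levers":
site_a = pcet_free_energy(pka=5.0, e_red=1.0)     # weak proton binding, strong oxidant
site_b = pcet_free_energy(pka=10.0, e_red=0.703)  # strong proton binding, mild oxidant
print(round(site_a, 2), round(site_b, 2))
```

The point of the toy comparison is the same as the paper's: once the overall energy is partitioned, pKa and reduction potential become independently tunable knobs.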
A new framework
On a metallic surface, electrons can flow so freely that every time a proton binds to the surface, an electron comes in and binds to it instantaneously. “So it’s very hard to determine how much energy it takes to transfer just the electron and how much energy it takes to transfer just the proton, because doing one leads to the other,” Surendranath says.
“If we knew how to split up the energy into a proton transfer term and an electron transfer term, it would guide us in designing a new catalyst or a new battery or a new fuel cell in which those reactions must occur at the right energy levels to store or release energy with the optimal efficiency.” The reason no one had this understanding before, he says, is that it has historically been almost impossible to control electrode surface sites with molecular precision. Even estimating a pKa for the surface site to try to get at the energy associated with proton transfer first requires molecular-level knowledge of the site.
A new approach makes this kind of molecular-level understanding possible. Using a method they call “graphite conjugation,” Surendranath and his team incorporate specifically chosen molecules that can donate and accept protons into graphite electrodes such that the molecules become part of the electrodes.
By electronically conjugating the selected molecules to graphite electrodes, “we have the power to design surface sites with molecular precision,” Jackson says. “We know where the proton is binding to the surface at a molecular level, and we know the energy associated with the proton transfer reaction at that site.”
By conjugating molecules with a wide range of pKa values and experimentally measuring the corresponding energies for proton-coupled electron transfer at the graphite-conjugated sites, they were able to construct a framework that describes the entire reaction.
Two design levers
“What we’ve developed here is a molecular-level model that allows us to partition the overall thermodynamics of simultaneously transferring an electron and a proton to the surface of an electrode into two separate components: one for protons and one for electrons,” Jackson says. This model closely mirrors the models used to describe this class of reactions in molecules, and should thus enable researchers to better design electrocatalysts and battery materials using simple molecular design principles.
“What this teaches us,” Surendranath says, “is that if we want to design a surface site that can transfer and accept protons and electrons at the optimal energy, there are two design levers we can control. We can control the sites on the surface and their local affinity for the proton — that’s their pKa. And we can also tune it by changing the intrinsic energy of the electrons in the solid,” which is correlated to a factor called the work function.
That means, according to Surendranath, that “we now have a general framework for understanding and designing proton-coupled electron transfer reactions at electrode surfaces, using the intuition that chemists have about what types of sites are very basic or acidic, and what types of materials are very oxidizing or reducing.” In other words, the framework now provides researchers with “systematic design principles” that can help guide the selection of electrode materials for energy-conversion reactions.
The new insights can be applied to many electrode materials, he says, including metal oxides in supercapacitors, catalysts involved in making hydrogen or reducing carbon dioxide, and the electrodes operating in fuel cells, because all of those processes involve the transfer of electrons and protons at the electrode surface.
Electron-proton transfer reactions are ubiquitous in virtually all electrochemical catalytic reactions, says Surendranath, “so knowing how they occur on a surface is the first step toward being able to design catalytic materials with a molecular-level understanding. And we’re now, fortunately, able to cross that milestone.”
This work “is truly pathbreaking,” says James Mayer, a professor of chemistry at Yale University, who was not involved in this work. “The interconversion of chemical and electrical energy — electrocatalysis — is a core part of many new scenarios for renewable energy. This is often accomplished with expensive rare metals such as platinum. This work shows, in an unexpected way, a new behavior of relatively simple carbon electrodes. This opens opportunities for new ways of thinking and eventually new technologies for energy conversions.”
Jeff Warren, an assistant professor of chemistry at Simon Fraser University in Burnaby, British Columbia, who was not associated with this research, says that this work bridges the extensive research on such proton-electron reactions in molecules and the comparative lack of such understanding for reactions on solid surfaces.
“This creates a fundamental knowledge gap that workers in the field (myself included) have been grappling with for at least a decade,” he says. “This work addresses this problem in a truly satisfying way. I anticipate that the ideas described in this manuscript will drive thinking in the field for quite some time and will build crucial bridges between fundamental and applied/engineering researchers.”
This research was supported by the U.S. Department of Energy, the National Institutes of Health, the Sloan Foundation, and the Research Corporation for Science Advancement.
Artistic sketches can be used to capture details of a scene in a simpler image. MIT researchers are now bringing that concept to computational biology, with a novel method that extracts comprehensive samples — called “sketches” — of massive cell datasets that are easier to analyze for biological and medical studies.
Recent years have seen an explosion in profiling single cells from a diverse range of human tissues and organs — such as neurons, muscle cells, and immune cells — to gain insight into human health and treating disease. The largest datasets contain anywhere from around 100,000 to 2 million cells, and growing. The long-term goal of the Human Cell Atlas, for instance, is to profile about 10 billion cells. Each cell itself contains tons of data on RNA expression, which can provide insight about cell behavior and disease progression.
With enough computation power, biologists can analyze full datasets, but it takes hours or days. Without those resources, it’s impractical. Sampling methods can be used to extract small subsets of the cells for faster, more efficient analysis, but they don’t scale well to large datasets and often miss less abundant cell types.
In a paper being presented next week at the Research in Computational Molecular Biology conference, the MIT researchers describe a method that captures a fully comprehensive “sketch” of an entire dataset that can be shared and merged easily with other datasets. Instead of sampling cells with equal probability, it evenly samples cells from across the diverse cell types present in the dataset.
“These are like sketches on paper, where an artist will try to preserve all the important features of a main image,” says Bonnie Berger, the Simons Professor of Mathematics at MIT, a professor of electrical engineering and computer science, and head of the Computation and Biology group.
In experiments, the method generated sketches from datasets of millions of cells in a few minutes — as opposed to a few hours — with far more even representation of rare cells from across the datasets. The sketches even captured, in one instance, a rare subset of inflammatory macrophages that other methods missed.
“Most biologists analyzing single-cell data are just working on their laptops,” says Brian Hie, a PhD student in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and a researcher in the Computation and Biology group. “Sketching gives a compact summary of a very large dataset that tries to preserve as much biological information as possible … so people don’t need to use so much computational power.”
Joining Hie and Berger on the paper are: CSAIL PhD student Hyunghoon Cho; Benjamin DeMeo, a graduate student at MIT and Harvard Medical School; and Bryan Bryson, an MIT assistant professor of biological engineering.
Humans have hundreds of categories and subcategories of cells, and each cell expresses a diverse set of genes. Techniques such as RNA sequencing capture all cell information in massive tables, where each row represents a cell and each column represents some measurement of gene expression. Cells are points scattered around a sprawling multidimensional space where each dimension corresponds to the expression of a different gene.
As it happens, cell types with similar gene diversity — both common and rare — form similar-sized clusters that take up roughly the same space. But the density of cells within those clusters varies greatly: 1,000 cells may reside in a common cluster, while the equally diverse rare cluster will contain 10 cells. That’s a problem for traditional sampling methods that extract a target-size sample of single cells.
“If you take a 10-percent sample, and there are 10 cells in a rare cluster and 1,000 cells in a common cluster, you’re more likely to grab tons of common cells, but miss all rare cells,” Hie says. “But rare cells can lead to important biological discoveries.”
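The odds Hie describes can be made concrete with a small hypergeometric calculation (a toy illustration using only the numbers from the quote, not an analysis from the paper):

```python
# Probability that a uniform 10 percent sample misses every rare cell,
# using the toy numbers from the quote: 1,000 common cells + 10 rare cells.
from math import comb

total, n_rare, sample = 1_010, 10, 101  # sample = 10 percent of 1,010

# Hypergeometric: all `sample` draws land among the common cells.
p_miss_all = comb(total - n_rare, sample) / comb(total, sample)
print(round(p_miss_all, 2))  # roughly a one-in-three chance of losing the rare type entirely
```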
The researchers modified a class of algorithms that lay shapes over datasets. Their algorithm covers the entire computational space with what they call a “plaid covering,” which is like a grid of equal-sized squares but in many dimensions. It only lays these multidimensional squares where there’s at least one cell, and skips over any empty regions. In the end, the grid’s empty columns will be much wider or skinnier than occupied columns — hence the “plaid” description. That technique saves tons of computation to help the covering scale to massive datasets.
Capturing rare cells
Occupied squares may contain only one cell or 1,000 cells, but they will all have the exact same sampling weight. The algorithm then finds a target sample — of, say, 20,000 cells — by selecting a set number of cells from each occupied square uniformly at random. The resulting sketch contains a far more equal distribution of cell types — for example, 10 common cells from a cluster of 100 and eight rare cells from a cluster of 10.
“We take advantage of these cell types occupying similar volumes of space,” Hie says. “Because we sample according to volume, instead of density, we get a more even coverage of the biological space … and we’re naturally preserving the rare cell types.”
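A minimal sketch of that volume-based sampling idea, in pure Python, might look like the following (a simplified stand-in for the authors' plaid covering, with made-up toy data and an arbitrary grid width):

```python
# A simplified, volume-based sampler in the spirit of the "plaid covering":
# bucket points into equal-width grid cells (storing only occupied cells),
# then draw evenly across occupied cells so sparse regions keep representation.
import random
from collections import defaultdict

def grid_sketch(points, cell_width, target):
    cells = defaultdict(list)
    for p in points:
        # Index of the grid cell this point falls in, one index per dimension.
        key = tuple(int(coord // cell_width) for coord in p)
        cells[key].append(p)

    sketch, occupied = [], list(cells.values())
    # Round-robin over occupied cells: each cell contributes with equal
    # weight regardless of how many points it holds.
    while len(sketch) < target and any(occupied):
        for members in occupied:
            if members and len(sketch) < target:
                sketch.append(members.pop(random.randrange(len(members))))
    return sketch

# Toy 2-D data: a dense "common" cluster of 1,000 points near the origin and
# a sparse "rare" cluster of 10 points far away.
random.seed(0)
common = [(random.gauss(0, 1), random.gauss(0, 1)) for _ in range(1000)]
rare = [(random.gauss(50, 1), random.gauss(50, 1)) for _ in range(10)]

sketch = grid_sketch(common + rare, cell_width=5.0, target=100)
rare_kept = sum(1 for x, _ in sketch if x > 25)
print(len(sketch), rare_kept)  # all 10 rare points survive in a 100-point sketch
```

Because the rare cluster occupies its own grid cells, the round-robin draw reaches it every pass, whereas a uniform 10-percent sample would likely miss it entirely.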
They applied their sketching method to a dataset of around 250,000 umbilical cord cells that contained two subsets of rare macrophages — inflammatory and anti-inflammatory. All other traditional sampling methods clustered both subsets together, while the sketching method separated them. Additional in-depth studies of these macrophage subpopulations could help reveal insight into inflammation and how to modulate inflammatory processes in response to disease, the researchers say.
“That’s a benefit in working at the interface of fields,” Berger says. “We’re trained as mathematicians, but we understand what biological data science problems are, so we can bring the best technologies to their analysis.”
Three MIT professors — Edward Boyden, Paula Hammond, and Aviv Regev — are among the 100 new members and 25 foreign associates elected to the National Academy of Sciences on April 30. Forty percent of the newly elected members are women, the most ever elected in any one year to date.
Membership in the National Academy of Sciences is considered one of the highest honors that a scientist or engineer can receive. Current membership totals approximately 2,380 members and nearly 485 foreign associates.
Edward S. Boyden is the Y. Eva Tan Professor in Neurotechnology at MIT; leader of the Synthetic Neurobiology Group in the MIT Media Lab; associate professor of biological engineering and of brain and cognitive sciences; a McGovern Institute investigator; co-director of the MIT Center for Neurobiological Engineering; and a member of the MIT Center for Environmental Health Sciences, Computational and Systems Biology Initiative, and Koch Institute for Integrative Cancer Research at MIT.
Boyden develops new tools for probing, analyzing, and engineering brain circuits. He uses a range of approaches, including synthetic biology, nanotechnology, chemistry, electrical engineering, and optics to develop tools capable of revealing fundamental mechanisms underlying complex brain processes. He pioneered the development of optogenetics, a powerful method that enables neuronal activity to be controlled with light. He also led the team that invented expansion microscopy, in which a specimen is embedded in a gel that swells as it absorbs water, thereby expanding nanoscale features to a size where they can be seen using conventional microscopes. He is now seeking to systematically integrate these technologies to create detailed maps and models of brain circuitry.
Paula T. Hammond is the David H. Koch Chair Professor of Engineering and the head of the Department of Chemical Engineering; a founding member of the MIT Institute for Soldier Nanotechnology; and a member of the MIT Energy Initiative and Koch Institute.
Hammond’s research in nanomedicine encompasses the development of new biomaterials to enable drug delivery from surfaces with spatio-temporal control. She also investigates novel responsive polymer architectures for targeted nanoparticle drug and gene delivery, and has developed self-assembled materials systems for electrochemical energy devices. She has designed multilayered nanoparticles to deliver a synergistic combination of siRNA or inhibitors with chemotherapy drugs in a staged manner to tumors, leading to significant decreases in tumor growth and a great lowering of toxicity.
Aviv Regev is a professor of biology; a core member of the Broad Institute of MIT and Harvard; and a Howard Hughes Medical Institute investigator.
Regev studies the molecular circuitry that governs the function of mammalian cells in health and disease and has pioneered many leading experimental and computational methods for the reconstruction of circuits, including in single-cell genomics. Her work focuses on dissecting complex molecular networks to determine how they function and evolve in the face of genetic and environmental changes, as well as during differentiation, evolution, and disease.
The National Academy of Sciences is a private, non-profit society of distinguished scholars. Established in 1863 by an Act of Congress, signed by President Abraham Lincoln, the academy was charged with “providing independent, objective advice to the nation on matters related to science and technology.” Scientists are elected by their peers to membership for outstanding contributions to research. The NAS is committed to furthering science in America, and its members are active contributors to the international scientific community.
The following announcement was released jointly by MIT and the International Food Policy Research Institute.
Successful global efforts to substantially limit greenhouse gas emissions would likely boost GDP growth of poorer countries over the next 30 years, according to new research published in Climatic Change.
Researchers examined the impact global climate change mitigation would have on the economies of poorer countries — specifically Malawi, Mozambique, and Zambia. Devastation in Mozambique and Malawi recently caused by cyclones Idai and Kenneth vividly demonstrates the crippling impact that extreme weather events can have on these economies. Climate change is widely expected to increase the intensity and frequency of extreme weather events such as extreme heat, droughts, and floods as well as to magnify the destructive power of cyclones like Idai and Kenneth due to sea-level rise.
The study shows that beyond the benefits of reduced extreme weather in the long term, global mitigation efforts would also lower oil prices in coming decades, resulting in a significant economic boon for most poorer countries.
“It is abundantly clear that many low-income countries will bear the brunt of climate change impacts over the long term, and that successful efforts to rein in emissions will lessen this blow,” says lead author Channing Arndt, director of the Environment and Production Technology Division at the International Food Policy Research Institute (IFPRI). “Our research now provides another rationale for robust climate action: the economic benefits of mitigation arrive much sooner than previously thought.”
Lowering greenhouse gas emissions creates two sources of economic gain for poorer countries. First, effective global mitigation policies would reduce changes in local weather patterns and lower the odds of damaging extreme events, allowing for more economic growth than if climate change is unimpeded and more extreme weather damages economic activity.
Second, successful mitigation policies would cause oil prices to drop due to a reduction in oil demand. If richer nations take the lead in restraining their oil use, lower-income countries will be able to transition somewhat later while benefiting from much lower oil prices during the transition period. Since nearly all low-income countries are net oil importers, such price drops would represent a significant economic windfall.
The research suggests that by 2050 these two sources of economic benefit together could increase the average GDP of Malawi, Mozambique, and Zambia by 2 to 6 percent — gains that cannot occur if greenhouse gas emissions continue unabated.
“Previous research into the economic impacts of global climate mitigation has tended to group oil exporters, such as Nigeria and Angola, and oil importers, such as Malawi and Zambia, together in a single aggregate region that both exports and imports oil,” says Sergey Paltsev, deputy director of the MIT Joint Program on the Science and Policy of Global Change. “When you look at the impacts on a country level though, most low-income countries benefit not only from having a more stable climate but also from lower fuel prices, because they are net fuel importers and the import volumes are large relative to the size of their economies.”
How emissions policies should be structured globally remains an open question. The models producing these results assume that low-income countries are afforded space to transition more slowly because their contributions to global emissions are relatively low and such exemption allows low-income countries to proceed with the benefit of experience accumulated elsewhere. But the researchers caution that for climate mitigation to be effective, some developing countries cannot be exempted for long — many middle-income countries will soon need to adhere to required emissions reductions.
“The impact of climate change is not likely to be distributed equally across the planet, and neither are any costs associated with reducing emissions,” says Arndt. “We want to limit the deleterious effects of climate change on the environment and on people, particularly poor people, while avoiding harming development prospects in the process. The gains from effective mitigation shown by this research could help us achieve this goal.”
MIT Solve, an MIT initiative that advances solutions from tech entrepreneurs to address the world’s most pressing issues, has announced a prize pool of $1.25 million for its next class of Solver teams. Prize sponsors include General Motors, Patrick J. McGovern Foundation, Vodafone Americas Foundation, Schmidt Futures, Everytown for Gun Safety Support Fund, the Abu Dhabi Crown Prince Court, and the Andan Foundation. The prize sponsors will convene at Solve at MIT from May 7-9 in Cambridge, Massachusetts, with the rest of the Solve community, including 2018 Solver teams, members, sponsors, and MIT faculty, staff, and students.
Solve seeks solutions from tech innovators around the world for its 2019 Global Challenges: Circular Economy, Community-Driven Innovation, Early Childhood Development, and Healthy Cities. Anyone can submit a solution and apply for the $1.25 million in prize funding by July 1. Finalists will be invited to pitch their solutions at Solve Challenge Finals during United Nations General Assembly Week in New York City on Sept. 22. At the event, leading cross-sector experts will select 35 of the most promising tech-based innovators to become Solver teams. They will work with Solve for the next year to scale their solutions with the support of funding, networking, mentorship, marketing, and more from the Solve community.
2019 MIT Solve Prizes available for selected Solver teams include:
Solver Funding: MIT Solve will award a $10,000 grant to all Solver teams selected during Solve Challenge Finals in September by the cross-sector judging panels of each of Solve’s four Global Challenges.
GM Prizes, supported by General Motors:
Solutions that foster prosperity and social mobility for underrepresented community members — including through STEM education — are eligible for the GM Prize on Community-Driven Innovation. Up to $50,000 will be granted to two recipients.
Solutions that help communities shift towards a more circular economy through zero waste and zero carbon — including through STEM education for new design and manufacturing techniques — are eligible for the GM Prize on Circular Economy. Up to $50,000 will be granted to two recipients.
AI Innovations Prize, supported by the Patrick J. McGovern Foundation and Schmidt Futures: Solutions that are propelled by advanced computing techniques or that leverage artificial intelligence to address any of the four challenges are eligible for a prize up to $500,000, granted across several recipients.
Innovation for Women Prize, supported by the Vodafone Americas Foundation: Solutions that use technology to empower and enrich the lives of women and girls are eligible for up to $75,000 across up to three Solver teams addressing any of Solve’s Global Challenges.
Everytown for Gun Safety Prize, supported by Everytown for Gun Safety Support Fund: Holistic, community-based Healthy Cities solutions that use technology to make cities safer are eligible for up to $100,000 in grant funding.
Innovating Together for Healthy Cities Prize, supported by the Abu Dhabi Crown Prince Court: This $75,000 prize will be awarded to a single recipient and is open to projects that focus on preventing or managing infectious disease or vector-borne illness in cities or slums.
Innovation for Refugee Inclusion Prize, supported by the Andan Foundation: Solutions that use innovation to advance economic, financial, and political inclusion of refugees in their hosting communities are eligible for this prize of up to $50,000. Eligible Solver teams will be selected from the Community-Driven Innovation Challenge.
“We are thrilled to work with such a diverse array of leading organizations to secure much needed funding for solutions to the world’s most intractable challenges,” said Alex Amouyel, executive director at MIT Solve. “There are innovators solving world challenges all around the world, but too few of them have access to the capital and expertise they need to scale. At Solve, we’re helping to bridge the pioneer gap in social impact, which is critical to achieving the UN Sustainable Development Goals.”
MIT Solve invites the MIT community to attend Tech for Equality, the opening plenary of Solve at MIT 2019, on May 7 from 4 to 5:30 p.m. at Kresge Auditorium. Tickets are free and those interested can RSVP here. Media interested in attending can apply for media credentials by emailing firstname.lastname@example.org.
Solve issues four Global Challenges each year to find the most promising Solver teams who will drive transformational change. Solve then deploys its global community of private, public, and nonprofit leaders to form the partnerships these Solver teams need to scale their impact. In the last two years, Solve has brokered more than $7.5 million in grant funding to Solver teams, in addition to in-kind support. Last year, more than 1,150 people from 110 countries submitted solutions to Solve’s four Global Challenges.
MIT researchers have performed the first comprehensive analysis of the genes that are expressed in individual brain cells of patients with Alzheimer’s disease. The results allowed the team to identify distinctive cellular pathways that are affected in neurons and other types of brain cells.
This analysis could offer many potential new drug targets for Alzheimer’s, which afflicts more than 5 million people in the United States.
“This study provides, in my view, the very first map for going after all of the molecular processes that are altered in Alzheimer’s disease in every single cell type that we can now reliably characterize,” says Manolis Kellis, a professor of computer science and a member of MIT’s Computer Science and Artificial Intelligence Laboratory and of the Broad Institute of MIT and Harvard. “It opens up a completely new era for understanding Alzheimer’s.”
The study revealed that a process called axon myelination is significantly disrupted in patients with Alzheimer’s. The researchers also found that the brain cells of men and women vary significantly in how their genes respond to the disease.
Kellis and Li-Huei Tsai, director of MIT’s Picower Institute for Learning and Memory, are the senior authors of the study, which appears in the May 1 online edition of Nature. MIT postdocs Hansruedi Mathys and Jose Davila-Velderrain are the lead authors of the paper.
The researchers analyzed postmortem brain samples from 24 people who exhibited high levels of Alzheimer’s disease pathology and 24 people of similar age who did not have these signs of disease. All of the subjects were part of the Religious Orders Study, a longitudinal study of aging and Alzheimer’s disease. The researchers also had data on the subjects’ performance on cognitive tests.
The MIT team performed single-cell RNA sequencing on about 80,000 cells from these subjects. Previous studies of gene expression in Alzheimer’s patients have measured overall RNA levels from a section of brain tissue, but these studies don’t distinguish between cell types, which can mask changes that occur in less abundant cell types, Tsai says.
“We wanted to know if we could distinguish whether each cell type has differential gene expression patterns between healthy and diseased brain tissue,” she says. “This is the power of single-cell-level analysis: You have the resolution to really see the differences among all the different cell types in the brain.”
Using the single-cell sequencing approach, the researchers were able to analyze not only the most abundant cell types, which include excitatory and inhibitory neurons, but also rarer, non-neuronal brain cells such as oligodendrocytes, astrocytes, and microglia. The researchers found that each of these cell types showed distinct gene expression differences in Alzheimer’s patients.
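The advantage of resolving cell types can be illustrated with a toy differential-expression test on simulated data. This is a minimal sketch, not the authors' actual analysis pipeline: the cell counts, effect sizes, and gene labels (such as MBP) are hypothetical, chosen only to show how a change confined to one cell type can be detected once cells are grouped by type.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Toy data: 200 cells x 3 genes, each cell labeled with a type and a condition.
cell_types = rng.choice(["excitatory", "oligodendrocyte"], size=200)
condition = rng.choice(["AD", "control"], size=200)
expr = rng.normal(5.0, 1.0, size=(200, 3))

# Inject a myelination-like downshift in gene 0, but only in
# oligodendrocytes from the simulated Alzheimer's donors.
expr[(cell_types == "oligodendrocyte") & (condition == "AD"), 0] -= 2.0

genes = ["MBP", "PLP1", "SNAP25"]  # illustrative gene labels
for ct in ["excitatory", "oligodendrocyte"]:
    for g, name in enumerate(genes):
        ad = expr[(cell_types == ct) & (condition == "AD"), g]
        ctl = expr[(cell_types == ct) & (condition == "control"), g]
        _, p = stats.mannwhitneyu(ad, ctl)
        print(f"{ct:16s} {name}: p = {p:.3g}")
```

In a bulk-tissue analysis, the same shift would be averaged together with the unchanged cell types and could easily be washed out; grouping cells by type first is what exposes it.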
Some of the most significant changes occurred in genes related to axon regeneration and myelination. Myelin is a fatty sheath that insulates axons, helping them to transmit electrical signals. The researchers found that in the individuals with Alzheimer’s, genes related to myelination were affected in both neurons and oligodendrocytes, the cells that produce myelin.
Most of these cell-type-specific changes in gene expression occurred early in the development of the disease. In later stages, the researchers found that most cell types had very similar patterns of gene expression change. Specifically, most brain cells turned up genes related to stress response, programmed cell death, and the cellular machinery required to maintain protein integrity.
Bruce Yankner, a professor of genetics and neurology at Harvard Medical School, described the study as “a tour de force of molecular pathology.”
“This is the first comprehensive application of single-cell RNA sequencing technology to Alzheimer’s disease,” says Yankner, who was not involved in the research. “I anticipate this will be a very valuable resource for the field and will advance our understanding of the molecular basis of the disease.”
The researchers also discovered correlations between gene expression patterns and other measures of Alzheimer’s severity such as the level of amyloid plaques and neurofibrillary tangles, as well as cognitive impairments. This allowed them to identify “modules” of genes that appear to be linked to different aspects of the disease.
“To identify these modules, we devised a novel strategy that involves the use of an artificial neural network and which allowed us to learn the sets of genes that are linked to the different aspects of Alzheimer’s disease in a completely unbiased, data-driven fashion,” Mathys says. “We anticipate that this strategy will be valuable to also identify gene modules associated with other brain disorders.”
The most surprising finding, the researchers say, was the discovery of a dramatic difference between brain cells from male and female Alzheimer’s patients. Excitatory neurons and other brain cells from male patients showed less pronounced gene-expression changes than cells from female patients, even though the two groups showed similar pathology, including amyloid plaques, and similar cognitive impairments. Cells from female patients, by contrast, showed dramatically more severe gene-expression changes and an expanded set of altered pathways.
“That’s when we realized there’s something very interesting going on. We were just shocked,” Tsai says.
So far, it is unclear why this discrepancy exists. The sex difference was particularly stark in oligodendrocytes, which produce myelin, so the researchers performed an analysis of patients’ white matter, which is mainly made up of myelinated axons. Using a set of MRI scans from 500 additional subjects from the Religious Orders Study group, the researchers found that female subjects with severe memory deficits had much more white matter damage than matched male subjects.
More study is needed to determine why men and women respond so differently to Alzheimer’s disease, the researchers say, and the findings could have implications for developing and choosing treatments.
“There is mounting clinical and preclinical evidence of a sexual dimorphism in Alzheimer’s predisposition, but no underlying mechanisms are known. Our work points to differential cellular processes involving non-neuronal myelinating cells as potentially having a role. It will be key to figure out whether these discrepancies protect or damage the brain cells only in one of the sexes — and how to balance the response in the desired direction on the other,” Davila-Velderrain says.
The researchers are now using mouse and human induced pluripotent stem cell models to further study some of the key cellular pathways that they identified as associated with Alzheimer’s in this study, including those involved in myelination. They also plan to perform similar gene expression analyses for other forms of dementia related to Alzheimer’s, as well as other brain disorders such as schizophrenia, bipolar disorder, and psychosis.
The research was funded by the National Institutes of Health, the JBP Foundation, and the Swiss National Science Foundation.
Minutes before dawn on Sept. 14, 2015, the Laser Interferometer Gravitational-wave Observatory (LIGO) became the first-ever instrument on Earth to directly detect a gravitational wave. This work, led by the LIGO Scientific Collaboration with prominent roles from MIT and Caltech, was the first confirmation of this consequence of Albert Einstein’s theory of general relativity — 100 years after he first predicted it. The groundbreaking detection represented an enormous step forward in the field of astrophysics. In the years since, scientists have striven to achieve even greater sensitivity in the LIGO detectors.
New research has taken investigators one step closer to this goal. Nergis Mavalvala, the Curtis and Kathleen Marble Professor of Astrophysics at MIT, postdoc Robert Lanza, graduate student Nancy Aggarwal, and their collaborators at Louisiana State University (LSU) recently conducted experiments that could help overcome a future limitation in Advanced LIGO. In their laboratory study, the team successfully measured a type of noise that will soon hold the LIGO instruments back from detecting gravitational waves with greater sensitivity.
Their study, reported recently in Nature, was the first to measure an important source of quantum noise at room temperature and at frequencies relevant to gravitational wave detectors. Funded by the National Science Foundation, this work could enable researchers to understand this limiting noise source and test ideas for circumventing it to further increase LIGO’s sensitivity to gravitational waves.
In addition to future applications for improving LIGO’s detection abilities, Mavalvala says these observations of quantum effects at room temperature could help scientists learn more about how quantum mechanics can disturb the precision of measurements generally — and how best to get around these quantum noise limits.
“This result was important for the gravitational wave community,” says Mavalvala. “But more broadly, this is essentially a room-temperature quantum resource, and that's something that many communities should care about.”
LIGO has undergone upgrades since its first gravitational wave searches in 2002; the currently operating version of the instrumentation, known as Advanced LIGO, came online after major upgrades in 2015. But to get LIGO to its maximum design sensitivity, Mavalvala says her team needs to be able to conduct experiments and test improvement strategies in the laboratory rather than on the LIGO instruments themselves. LIGO’s astrophysical detection work is too important to interfere with, so she and her collaborators have developed instruments in the lab that can mimic the sensitivity of the real thing. In this case, the team aimed to reproduce processes that occur in LIGO to measure a type of noise called quantum radiation pressure noise (QRPN).
In LIGO, gravitational waves are detected by using lasers to probe the motion of mirrors. The mirrors are suspended as pendulums, allowing them to have periodic motion similar to a mass on a spring. When laser beams hit the movable mirrors, the momentum carried by the light applies pressure on the mirrors and causes them to move slightly.
“I like to think of it like a pool table,” says Aggarwal. “When your white cue ball strikes the ball in front of it, the cue ball comes back but it still moves the other ball. When a photon that was traveling forward then travels backwards, the momentum went somewhere; [in this case] that momentum went into the mirror.”
The quantum nature of light, which is made up of photons, dictates that there are quantum fluctuations in the number of photons hitting the mirrors, creating an uncertain amount of force on the mirrors at any given moment. This uncertainty results in random perturbations of the mirror. When the laser power is high enough, this QRPN can interfere with gravitational wave detection. At Advanced LIGO’s full design sensitivity, with many hundreds of kilowatts of laser power hitting 40-kilogram mirrors, QRPN will become a dominant limitation.
To address this imminent issue, Mavalvala, Aggarwal, and their collaborators designed an experiment to recreate the effects of QRPN in a laboratory setting. One challenge was that the team could not use lasers as powerful as those in Advanced LIGO in their lab experiments. The greater the laser power and the lighter the mirror oscillator, the stronger the radiation-pressure-driven motion, so to detect this motion with less laser power they needed an extremely low-mass mirror oscillator. They scaled the 40-kilogram mirrors of Advanced LIGO down to a 100-nanogram mirror oscillator (less than the mass of a grain of salt).
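The mass-versus-power tradeoff can be sketched with a back-of-the-envelope estimate. For a single, perfectly reflecting free mirror in a coherent beam (a simplification; the real instruments use optical cavities, and the power levels below are illustrative rather than the published experimental parameters), photon-number shot noise produces a radiation-pressure force with amplitude spectral density sqrt(4*hbar*omega*P)/c, and the mirror's displacement response above its pendulum resonance falls off as 1/(m*(2*pi*f)^2):

```python
import numpy as np

hbar = 1.054571817e-34  # reduced Planck constant, J*s
c = 2.99792458e8        # speed of light, m/s
lam = 1064e-9           # Nd:YAG laser wavelength, m
omega = 2 * np.pi * c / lam  # optical angular frequency, rad/s

def qrpn_displacement(P, m, f):
    """Displacement amplitude spectral density (m/sqrt(Hz)) of a free
    mirror of mass m driven by quantum radiation pressure noise from
    laser power P, single bounce, at frequency f above resonance."""
    force_asd = np.sqrt(4 * hbar * omega * P) / c   # N/sqrt(Hz)
    return force_asd / (m * (2 * np.pi * f) ** 2)   # free-mass response

f = 100.0  # Hz, in the detectors' sensitive band
print(qrpn_displacement(400e3, 40.0, f))   # hundreds of kW on a 40 kg mirror
print(qrpn_displacement(0.1, 100e-9, f))   # ~100 mW on a 100 ng oscillator
```

Running this shows the 100-nanogram oscillator being pushed around orders of magnitude more than the 40-kilogram mirror, despite receiving millions of times less laser power, which is why shrinking the oscillator makes the effect measurable in the lab.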
The team also faced the significant challenge of designing a mirror oscillator that could exhibit quantum behavior at room temperature. Previously, observing quantum effects like QRPN required cryogenic cooling so that the thermal motion of the oscillator would not mask the QRPN. But cryogenic cooling is challenging and impractical, and its associated vibrations interfere with LIGO’s operation, so experiments conducted at room temperature are more readily applicable to LIGO itself. After many iterations of design and testing, Mavalvala and her MIT colleagues designed a mirror oscillator with thermally driven fluctuations low enough that the mirror motion was dominated by QRPN at room temperature — the first-ever study to do so.
“It’s really pretty mind-boggling that we can observe this room-temperature, macroscopic object — you can see it with the naked eye if you squint enough — being pushed around by quantum fluctuations,” Mavalvala says. “Its thermal jitter is small enough that it’s being tickled ever-so-slightly by quantum fluctuations, and we can measure that.”
This was also the first study to detect QRPN at frequencies relevant to gravitational wave detectors. Their success means that they can now design additional experiments that reflect the radiation pressure conditions in Advanced LIGO itself.
“This experiment mimics an important noise source in Advanced LIGO,” says Mavalvala. “It's now a test bed where we can try out new ideas for improving Advanced LIGO without impinging on the instrument’s own operating time.”
Advanced LIGO does not yet run its lasers at strong enough power for QRPN to be a limiting factor in gravitational wave detections. But, as the instruments become more sensitive, this type of noise will soon become a problem and limit Advanced LIGO’s capabilities. When Mavalvala and her collaborators recognized QRPN as an imminent issue, they strove to recreate its effects in the laboratory so that they can start exploring ways to overcome this challenge.
“We've known for a long time that this QRPN would be a limitation for Advanced LIGO,” says Mavalvala. “Now that we are able to reproduce that effect in a laboratory setting, we can start to test ideas for how to improve that limit.”
Mavalvala’s primary collaborator at LSU was Thomas Corbitt, an associate professor of physics and astronomy. Corbitt was formerly a graduate student and postdoctoral scholar in Mavalvala’s lab at MIT, and the two have collaborated for many years since.
“This is the first time this effect has been observed in a system similar to gravitational wave interferometers and in LIGO’s frequency band,” says Corbitt. “While this work was motivated by the imperative to make ever-more-sensitive gravitational wave detectors, it is of wide interest.”
Since the original detection of a binary black hole merger in 2015, LIGO has also captured signals from collisions of neutron stars, as well as additional black hole collisions. These waves ripple outward from interactions that can take place more than a billion light years away. While LIGO’s capabilities are impressive, Mavalvala and her team plan to continue finding ways to make LIGO even more powerful.
Before they collide, black holes, for example, orbit each other slowly and at low frequencies. As the two black holes get closer, their orbits speed up and they swirl around each other at high speeds and high frequencies. If Advanced LIGO becomes sensitive enough to pick up lower frequencies, Mavalvala says, we may someday detect these systems earlier in the process, before the pair collides, allowing us to draw an ever-clearer picture of these distant spacetime phenomena. She and her team aim to make sure that factors such as QRPN don’t limit Advanced LIGO’s growing power.
“At this moment in time, Advanced LIGO is the best it can be at its job: to look out at the sky and detect gravitational wave events,” says Mavalvala. “In parallel, we have all of these ideas for making it better, and we have to be able to try those out in laboratories. This measurement allows that to happen with QRPN for the first time.”
MIT’s Plan for Action on Climate Change, released by President L. Rafael Reif in October 2015, has already begun to catalyze new research on climate issues at the Institute and a tighter focus on building a sustainable campus here in Kendall Square. But MIT will be passing up important opportunities to make an impact on climate change if it does not look beyond its own borders and forge partnerships far outside the realm of academic research.
That was the message MIT Vice President for Research Maria Zuber brought to MIT Climate Night as the university’s environment and sustainability groups gathered on April 25 to discuss climate issues on campus.
“One of the pillars of MIT’s Climate Action Plan,” said Zuber at the event, “is that we’ve decided that we should engage with all comers. Climate change represents a global problem, and the only way that we can really address it is to partner with as many organizations and people as we can.”
Zuber appeared with John Fernández, director of the MIT Environmental Solutions Initiative (ESI), and Robert Armstrong, director of the MIT Energy Initiative (MITEI), to discuss their personal experiences working on climate issues and where they believe the Institute can be most influential. All three agreed that, while MIT’s role as a powerhouse of basic research is important, it has been equally energizing to delve into practical, policy-based engagement with unexpected partners.
Fernández, for example, pointed to ESI’s collaboration with the nonprofit Center for Coalfield Justice in Greene County, Pennsylvania, a region where the coal industry has long been the dominant employer. While residents of Greene County can be resistant to environmental groups, he said, it’s not because they refuse to accept that their economy is changing. Instead, they can see keenly that more prodding to abandon coal is not the help they need.
“Everyone knows there’s going to be a transition,” said Fernández. “In fact, maybe they know better than anyone, because they’ve seen companies go bankrupt and pull up and move.” Coalfield Justice and ESI have been able to work constructively with residents because their research centers on finding paths to a humane economic transition, and communicating those paths to help the county weather the decline of its major industry.
Armstrong, meanwhile, discussed MITEI’s work with developing countries through the Tata Center for Technology and Design, to bring energy solutions to areas without energy access, including in India and sub-Saharan Africa. “These regions are in energy poverty and in desperate need of getting energy in a carbon-free way so that they can engage in the global economy,” Armstrong said. Here, innovations in financing small grids can have a triple benefit: improving quality of life, bringing in new work opportunities, and adding carbon-free energy in countries where the dirtiest fossil fuels might otherwise expand.
Armstrong also takes heart from MITEI’s ongoing conversations with established energy companies, some of which are beginning to make large investments in carbon-free power. “Part of the Plan for Climate Action has been engaging with industry,” he said, citing cement, chemicals, and metals as important energy-intensive sectors to decarbonize. “This has to be economy-wide,” he said. “We’re working to engage a broader range of industries.”
MIT Climate Night was co-sponsored by ESI and MITEI and brought together representatives from more than a dozen departments, centers, and student and alumni groups whose missions include climate action on campus and beyond. It was the first real-world event to grow out of the MIT Climate Portal, an online community for engagement on climate science and solutions.
Attendees joined discussions on topics that encompassed both a global context and actions MIT can take on its own, including the energy transition, climate finance, and carbon offsets. They also took in the three headline speakers’ thoughts about how the Institute can not only advance research, but also inspire change in the wider world. As Zuber recalled, the flourishing environmental movement of the 1970s needed both scientific discovery and a mass change in consciousness, inspired by moments like the Apollo space missions, to succeed.
Zuber projected a single slide for her talk at Climate Night: the famous “Earthrise” image taken from the Apollo 8 spacecraft in December of 1968 — seven months before the Apollo 11 lunar landing and just over a year before the first Earth Day in April 1970. “Images are important, and I spend a lot of time thinking about, ‘What is the image, what is the message, that it’s going to take to globally change opinion … so that we’re all taking better care of our Earth?’” she said. “This image of actually seeing the fragile Earth from space and everything that we know and love sitting out there in space, alone, was one of the things that really inspired the environmental movement.”
MIT’s climate community may need little encouragement to look outside their own silos, as the diverse attendees who came to Climate Night to meet new allies on campus can attest. “I think that’s fundamentally in the DNA at MIT,” noted Armstrong in his opening remarks. “I’ve found that to be extraordinarily stimulating [at MITEI], both because of the enthusiasm across campus for addressing energy, but also because of the enthusiasm for working together across disciplines.
“I don’t know anywhere else among universities where there’s this low a barrier to collaboration.”
The audience at MIT Climate Night submitted more questions than the speakers could answer during the event. For responses to more audience questions from the offices of the Vice President for Research, ESI, and MITEI, visit climate.mit.edu over the coming weeks.
The Abdul Latif Jameel Clinic for Machine Learning in Health (J-Clinic) has announced more than $2.3 million in funding for 18 projects involving principal investigators from departments and labs within engineering, architecture and planning, science, and management. J-Clinic received a total of 43 proposals.
Launched this fall, J-Clinic is the fourth major collaborative effort between MIT and Community Jameel, the social enterprise organization founded by Mohammed Abdul Latif Jameel ’78. J-Clinic aims to create high-precision, affordable, and scalable machine learning technologies in areas of health care ranging from diagnostics to pharmaceuticals.
The projects will harness the power of artificial intelligence technologies to optimize early detection and prevention of conditions including cancer, epilepsy, mental illness, cognitive impairment, and congestive heart failure. Other projects include repurposing existing drugs and optimizing electronic health records. In addition, a $50,000 grant funded by J-Clinic in collaboration with the MIT Deshpande Center for Technological Innovation will support AI-focused research into the rapid diagnosis of bacterial infection.
“We were impressed by the depth, creativity, and scope of the proposals we received,” says Anantha P. Chandrakasan, dean of the School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science, who chairs J-Clinic.
“We are excited to be taking this important step with the inaugural round of J-Clinic research funding,” says Fady Jameel, president of international operations at Community Jameel. “Through the research funded by these grants, J-Clinic is harnessing the power of machine learning and taking the fight to cancer, Alzheimer’s, and other diseases that affect the lives of people around the world.”
The technologies and solutions will be applied to numerous health care systems and clinical settings around the globe, in developed and developing countries alike, to prevent and detect disease regardless of resources.
Stephon Henry-Rerrie grew up in Brooklyn as the oldest of five siblings. He loved math puzzles from a young age and chose a premed track in his specialized high school. He never thought he’d study at MIT, but after being accepted to MIT’s Weekend Immersion in Science and Engineering (WISE), a program for high school seniors from underrepresented communities to learn about the MIT experience, he changed his mind.
Before visiting MIT, “I could never see myself here, because it was just this ivory-tower-looking place,” he says. “Whereas when I was here, and I was talking with people, I was like, ‘Oh, wow, I can hang.’ Maybe I do belong here.”
Henry-Rerrie, now a senior, has discovered many passions during his time at the Institute. He realized early on that he didn’t want to pursue medicine, and chose to major in chemical engineering. Then, after realizing how versatile physics could be, he picked that up as a second major. In four years, he has helped create particle simulations, worked on a trading floor, conducted research in the chemical engineering industry, and mentored younger MIT students. He would never have predicted ending up where he is now — but he wouldn’t trade it.
“I have a very weird, nonlinear trajectory that I’ve taken,” he says. “But along the way I’ve learned lots of things about myself and about the world.”
In the market for growth
When Henry-Rerrie accepted an internship at Morgan Stanley the summer after his first year, he had no idea that he’d be working on the trading floor. Some similarities to the movie “Wall Street” were uncanny, he says — he was surrounded by bond traders, and his mentor underwrote municipal bonds. He says the experience of working in finance fundamentally changed his life. Not only did he learn to speak up among many powerful voices, he also realized that science and engineering are directly tied into economics. Research doesn’t happen in a vacuum — when scientists make discoveries, that impacts the economy.
“I think I needed that exposure,” he says. “Because if I hadn’t, I feel like I wouldn’t have the perspective that I have now on, what does this all mean? What is going on? What’s this larger system that we exist in?”
He really enjoyed working within the financial sector. And, after meeting a number of former physicists (and chemical engineers) now working in financial roles at Morgan Stanley, he realized that studying physics rather than economics wouldn’t hurt his chances of getting a job in finance — so he took on a double major and was thrilled to study another area he’s always been fascinated by.
In his sophomore year, he worked in the lab of Assistant Professor James Swan, creating particle simulations with PhD student Zachary Sherman. The pair looked at how varying two different kinds of interactions between nanoparticles in solution affected the particles’ behavior. Henry-Rerrie likens it to having a bunch of people (representing the particles) in a room where temperature and wind are controlled by two knobs. As you turn up the temperature knob, the wind knob, or both in varying amounts, the people will react.
“What will those people be feeling? What will they do? ... I can turn those knobs and record, what did those people do at each specific value? And then after that, can we see a trend in how people will react?”
The following summer, Henry-Rerrie took an internship at the chemical engineering company Praxair. The people there were great, he says, but as he considered his options for the future, he found his heart was with financial markets. The summer after that, he took a job at the investment management company BlackRock.
“I also found that finance touches everything, everybody’s life, in a very real way that you can’t get away from, at least now,” he says.
For him, BlackRock was the perfect compromise between chemical engineering and finance. As much of his role involved risk and quantitative analysis, he was able to practice many of the techniques he learned in engineering, as well as do real work in the finance sector.
“At my internship at BlackRock, I was able to apply everything that I learned,” he says. “Not necessarily the technical stuff, but the way of problem solving, of thinking.”
When Henry-Rerrie was first visiting MIT, he was introduced to a living group called Chocolate City, in New House. The group consisted of black and Latino men supporting each other socially, academically, and professionally.
“When I saw that, that was the signal to me that MIT is just a special place,” he says.
He was accepted to live in Chocolate City his first year and has been there ever since. He has served in a variety of roles, including athletics chair, social chair, co-chair, and now resident peer mentor. He describes himself as the big brother of the house, working to get people to socialize and bond with each other. Living in the group has had its challenges, as its members come from diverse backgrounds and often have conflicting opinions. But that’s all part of the learning experience that makes it so valuable, he says.
“Being in that ecosystem has, I think, developed me into the person I am now, and helped me to feel like I can take on, I can take on anything after I graduate here.”
Henry-Rerrie loves being part of Chocolate City, and is grateful for how much it has developed him as a person. That’s why he’s chosen to give back to the other residents this year as the resident peer mentor, and why he plans to continue to help out as an alumnus. To him, Chocolate City is much more than a place to sleep and study.
“I feel like I’m home,” he says of being a part of the living group. “I don’t feel like I’m at a dorm; I feel like I’m home.”
Science in context
Henry-Rerrie is grateful for the context that his humanities, arts, and social sciences (HASS) classes have given him in his scientific pursuits. He recalls one class, STS.042 / 8.225 (Physics in the Twentieth Century), that introduced him to an entire world of physics history. He learned everything from the politics underlying physics to the fact that Erwin Schrödinger himself was skeptical of quantum theory — he only made the cat analogy to show how crazy it was.
“A lot of ways that we evaluate people and what they’ve done can be super muddled if we don’t understand the history of how things came about,” he says.
It’s that kind of learning, bridging concepts that he never assumed were related, that Henry-Rerrie really enjoys. The applications to engineering and broader society are what drew him to finance; his research and economic work at BlackRock was so fulfilling that he’s accepted an offer to return after graduation full-time.
Longer term, Henry-Rerrie isn’t sure where exactly he’ll end up. He’s considering business school in his five-year plan and would love to end up back at MIT for that. His broader goal, at least right now, is to figure out where his skills can be put to the greatest use.
“I’m all about finding connections. Between, I guess, very weird things. Things that don’t seem that related,” he says.
MIT and Liberty Mutual Insurance today announced a $25 million, five-year collaboration to support artificial intelligence research in computer vision, computer language understanding, data privacy and security, and risk-aware decision making, among other topics.
The new collaboration launched today at a meeting between leadership from both institutions, including Liberty Mutual Chairman and CEO David Long and MIT President Rafael L. Reif. The collaboration will span MIT’s five schools and be led by MIT’s Stephen A. Schwarzman College of Computing through the Quest for Intelligence, MIT’s research initiative focusing on the science and engineering of intelligence.
“With the Quest, MIT is working to accelerate progress on techniques and technologies that can help countless industries seize the transformative opportunities of AI. Our collaboration with Liberty Mutual will advance research in an interdisciplinary, problem-focused way that will feel very familiar to our community,” says Reif.
“AI tools and technologies are reshaping industry, and insurance is no exception,” says Antonio Torralba, director of the Quest for Intelligence and a professor of computer science and electrical engineering. “We look forward to working with Liberty Mutual to develop methods to make AI systems fair, secure, transparent, and more risk-aware.”
Based in Boston, Liberty Mutual employs 50,000 people globally, holds $126 billion in assets, and is the fourth-largest property and casualty insurer in the United States. The collaboration with MIT is designed to produce a range of intelligence tools and technologies.
“We are excited to embark on this project with MIT and look forward to leveraging their leading AI research to identify, develop, and ultimately operationalize several transformational AI-enabled solutions,” says Long, of Liberty Mutual. “Through this collaboration we intend to challenge the insurance industry status quo and be at the forefront of AI breakthroughs.”
Research topics under discussion include making decision-making algorithms transparent to customers and regulators; using computer vision to identify dangerous driving conditions and roadways and thereby reduce crashes; further protecting the anonymity and security of personal data; applying computer language understanding to insurance claims to speed processing and compensation; and structuring investment portfolios.
“We are excited to be working with Liberty Mutual and hope that this represents the first of many such collaborations that will help us advance the science of machine learning and natural intelligence,” says Michael Sipser, dean of the MIT School of Science and the Donner Professor of Mathematics.
Is it appropriate to evaluate the causes of suicide but dismiss mental illness as a contributing factor? What happens when you talk about war deaths as colored wedges on a chart? Does that change the conversation in important ways?
MIT students grappled with these and similar questions this spring in STS.047 (Quantifying People), a new subject focused on the history of the quest to understand human society scientifically. William Deringer, the Leo Marx Career Development Assistant Professor of Science, Technology, and Society, says he developed the class to enable students to explore the questions that motivate much of his own research: “Why do we invest so much trust in numbers, and what are the consequences for who we are?”
Deringer has written a book on the subject, "Calculated Values: Finance, Politics, and the Quantitative Age" (Harvard University Press, 2018), in which he examines the history of human efforts to use statistics to influence opinions and shape policy. “Many MIT students will likely be practitioners in the data field, so I want to encourage them to think about these issues,” he says.
The class has certainly gotten Jordan Browne thinking. “There’s this idea that by working with numbers people aren’t making moral judgments, but that’s a really dangerous assumption,” says Browne, a senior in the class who is majoring in mathematical economics. “This should be a required class.”
In fact, STS.047 will be one of several courses featured in a new MIT undergraduate HASS concentration focused on Computational Cultures, which "brings together perspectives from the humanities and social sciences for students to understand and improve the social, cultural, and political impact of the computing tools and digital devices that shape our lives."
Are numbers neutral?
STS.047 covers the history of science from the 17th century to the present as seen through the eyes of early statisticians and sociologists — people who were building new fields by attempting to understand social life through quantification.
One goal of the class, Deringer says, is to prompt students to consider the ways in which the tools we use to understand issues today can themselves reflect biases. “Thinking about old projects of quantification — the ways things look weird, wrong, or biased — helps you see how subjective elements might play out in current practice,” he says.
In the late 1850s, for example, British nurse, social reformer, and statistician Florence Nightingale gathered mortality data from the Crimean War and created visualizations to show that wounded soldiers were dying from disease due to poor sanitation in military hospitals. Those deaths were represented as blue wedges on a diagram, prompting Nightingale to make this impassioned plea to save lives: “Expunge the blue wedges.”
“That really struck me,” Deringer says. “There is some sort of strange transmutation that happens when you take data, turn it into something visual, then that is what you act on. That’s an interesting way of interacting with the world.”
Students discussing the work during one class session this spring wondered whether Nightingale had abstracted the problem to make it seem easier to solve, while others found it odd that she had effectively dehumanized those who had died.
The students in class that day also discussed the work of 19th century French sociologist Emile Durkheim, who studied the correlation of suicide to such social circumstances as religion, marital status, and economic class. While Nightingale was using statistics in an attempt to change policy and save lives, Durkheim took an abstract approach that was less focused on solutions — and many students were unsettled by his dry assessment of the suicide data.
“They’re not just statistics, they’re people too,” says Yiran He.
The complicated history of quantitative methods
A junior in the Department of Materials Science and Engineering, He says she signed up for STS.047 to gain insight into today’s data-driven society. “Numbers rule everything I see in the rest of my life: measurements and results in academia in science and engineering, statistics in politics and policy decisions, models in economic decisions, and everything between and beyond,” she says. “I felt it was important to understand the origins of the statistics we use.”
For example, students in STS.047 learned that many tools in use today — including regression analysis — were developed through eugenics research. “These tools that every student here uses have this really insidious beginning,” Browne says.
This supports a point Deringer makes right in the syllabus for STS.047. “Social science and quantitative methods have a complicated history. There is much to celebrate and also much to criticize.”
This complex interplay of science and society is precisely what attracted Rhea Lin to the subject. “I wanted to take a humanities course that would give me the opportunity to reflect on how society has been impacted by science in the past and how my work as an engineer might affect people in the future,” says Lin, a senior majoring in electrical engineering and computer science.
“From this class, I have learned that technology and science are not always the answer to our problems. We've studied social scientists who have thrown statistics and theories at society in questionable ways, and I think it's important to remember that science is not effective if not used correctly,” Lin says.
Story prepared by MIT SHASS Communications
Editorial and Design Director: Emily Hiestand
Senior Writer: Kathryn O'Neill
Ten impact-driven student teams tackling a wide range of problems around the world took home $100,000 in combined awards at the MIT IDEAS innovation and social entrepreneurship showcase and awards Saturday.
A total of 34 teams, many of them already in the early stages of building companies, displayed their projects at the event.
The grand prize of $15,000 was awarded to Myco Diagnostics, a startup that is working on a urine-based test to cheaply and quickly diagnose tuberculosis in low-resource areas of India.
Tuberculosis, or TB, is traditionally diagnosed through chest X-rays or laboratory tests that require medical professionals to examine samples under a microscope.
But for impoverished people in developing countries dealing with common TB symptoms like coughing or a fever, it can be difficult to travel to the nearest clinic. The result is that only about 35 percent of the nearly 3 million people with TB in India get diagnosed. That makes it much more likely the disease will be transmitted to family members and people in the local community.
“If you’re living on $2 a day, you don’t necessarily have the financial freedom to travel to these clinics,” Myco Diagnostics co-founder Eric Miller, a PhD candidate in the Department of Chemical Engineering, told MIT News. “We’re trying to replace that diagnostic with something that is decentralized, that can go to the patient.”
To accomplish that, Myco has re-engineered a set of proteins to bind with urine-based biomarkers of tuberculosis while staying stable enough to be stored at room temperature for long periods of time. It is also developing methods for cheaply grouping those proteins on paper so that, when exposed to urine, they can quickly show test results.
“Studies have shown that if you could cut the time to diagnosis from six months down to three months, over the course of 10 years you could reduce incidence of TB by 65 percent,” Miller says.
Myco’s team is made up of Miller; Naht Nguyen, an MBA candidate at the MIT Sloan School of Management; and Aditi Trehan, a research scientist at the Broad Institute of MIT and Harvard. Miller credits his PhD advisor, Hadley Sikes, the Esther and Harold E. Edgerton Career Development Professor at MIT, with helping him focus his research on areas where it could make an impact. Myco has also been supported by the Tata Center, the Deshpande Center, the Singapore-MIT Alliance for Research and Technology, and the Sandbox Fund.
Each year, IDEAS serves as a launchpad for new ventures with sustainable impact. Saturday’s showcase and awards were the culmination of a process that began at the start of the academic year, when 68 teams entered the MIT IDEAS program. Throughout the program, teams received feedback and support from industry experts, past IDEAS winners, and more than a dozen centers and programs across campus.
Each of the projects fell into one of nine categories: water and sanitation, education and training, agriculture and food, health and medical, emergency and disaster relief, housing and transportation, energy and environment, mobile device and communication, and finance and entrepreneurship.
About 200 people attended the event on the seventh floor of the Samberg Conference Center. Thirty-five judges spoke with teams earlier in the day and deliberated on the floor below the event. They ultimately awarded nine other teams prizes of $10,000 or $7,500 each. Those teams were:
- Retired Talent ($7,500): a company that employs retirees by matching their skills with needs in the local community;
- Sustainable AI ($7,500): an organization that provides accurate forest inventory to aid reforestation efforts;
- Animo ($10,000): an affordable, noninvasive wristband that reduces hand tremors in patients with Parkinson’s disease;
- Precavida ($10,000): a digital matching platform that connects uninsured patients to health care providers;
- SciTeens ($10,000): a free online social network for high school STEM students designed to encourage sharing, reviewing, and collaborating;
- SiPure ($10,000): a company that develops silicon membrane technology that removes arsenic from fresh water;
- Req Staffing ($10,000): a company that develops contracts between energy companies and formerly incarcerated individuals to meet human capital needs;
- InSanirator ($10,000): a company that fills the gap in the sanitation value chain by converting fecal sludge into energy and clean water; and
- Frolic ($10,000): a company that pairs landowners who want to age in place with middle-income, first-time homebuyers.
IDEAS is run by the PKG Center at MIT.
This year marked the 18th year of IDEAS. Since its inception, the program has awarded more than $1 million to 160 social ventures operating in 44 countries. After winning, IDEAS teams have gone on to secure more than $65 million in additional funding, and about half of them are still operating today.
This year’s teams included 16 undergraduate students, 50 graduate students, and 11 other MIT community members.
Last year’s winning teams have already gone on to develop a multipurpose sleeping bag for refugee families in the Middle East, design low-cost, inflatable seat cushions for wheelchair users in Indonesia, and launch mobile apps to do things such as access indoor air quality information in China and empower communities to document and preserve their indigenous languages.