MIT Latest News

MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.

Plug-and-play technology automates chemical synthesis

Thu, 09/20/2018 - 2:06pm

Designing a new chemical synthesis can be a laborious process with a fair amount of drudgery involved — mixing chemicals, measuring temperatures, analyzing the results, then starting over again if it doesn’t work out.

MIT researchers have now developed an automated chemical synthesis system that can take over many of the more tedious aspects of chemical experimentation, freeing up chemists to spend more time on the more analytical and creative aspects of their research.

“Our goal was to create an easy-to-use system that would allow scientists to come up with the best conditions for making their molecules of interest — a general chemical synthesis platform with as much flexibility as possible,” says Timothy F. Jamison, head of MIT’s Department of Chemistry and one of the leaders of the research team.

This system could cut the amount of time required to optimize a new reaction, from weeks or months down to a single day, the researchers say. They have patented the technology and hope that it will be widely used in both academic and industrial chemistry labs.

“When we set out to do this, we wanted it to be something that was generally usable in the lab and not too expensive,” says Klavs F. Jensen, the Warren K. Lewis Professor of Chemical Engineering at MIT, who co-led the research team. “We wanted to develop technology that would make it much easier for chemists to develop new reactions.”

Former MIT postdoc Anne-Catherine Bédard and former MIT research associate Andrea Adamo are the lead authors of the paper, which appears in the Sept. 20 online edition of Science.

Going with the flow

The new system makes use of a type of chemical synthesis known as continuous flow. With this approach, the chemical reagents flow through a series of tubes, and new chemicals can be added at different points. Other processes such as separation can also occur as the chemicals flow through the system.

In contrast, traditional “batch chemistry” requires performing each step separately, and human intervention is required to move the reagents along to the next step.

A few years ago, Jensen and Jamison developed a continuous flow system that can rapidly produce pharmaceuticals on demand. They then turned their attention to smaller-scale systems that could be used in research labs, in hopes of eliminating much of the repetitive manual experimentation needed to develop a new process to synthesize a particular molecule.

To achieve that, the team designed a plug-and-play system with several different modules that can be combined to perform different types of synthesis. Each module is about the size of a large cell phone and can be plugged into a port, just as computer components can be connected via USB ports. Some of the modules perform specific reactions, such as those catalyzed by light or by a solid catalyst, while others separate out the desired products. In the current system, five of these components can be connected at once.

The person using the machine comes up with a plan for how to synthesize a desired molecule and then plugs in the necessary modules. The user then tells the machine what reaction conditions (temperature, concentration of reagents, flow rate, etc.) to start with. For the next day or so, the machine uses a general optimization program to explore different conditions and ultimately to determine which conditions generate the highest yield of the desired product.
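
The article does not describe the optimization routine itself, but the idea of automatically screening reaction conditions for the best yield can be sketched in a few lines of code. The following is a minimal illustration, not the MIT system’s actual software: a simple grid search over hypothetical temperature, flow-rate, and concentration settings, where run_reaction is an invented stand-in for driving the hardware and measuring yield.

```python
# Minimal sketch (not the MIT system's software): screen hypothetical reaction
# conditions and keep whichever combination gives the best measured yield.
from itertools import product

def run_reaction(temperature_c, flow_rate_ml_min, concentration_m):
    """Toy stand-in for driving the flow modules and assaying product yield."""
    # Invented response surface peaking near 75 C, 0.5 mL/min, 0.1 mol/L.
    return (100.0
            - (temperature_c - 75) ** 2 / 10
            - 40 * (flow_rate_ml_min - 0.5) ** 2
            - 500 * (concentration_m - 0.1) ** 2)

def optimize_conditions():
    temperatures = [25, 50, 75, 100]        # degrees Celsius
    flow_rates = [0.1, 0.5, 1.0]            # mL/min
    concentrations = [0.05, 0.1, 0.2]       # mol/L
    best_conditions, best_yield = None, float("-inf")
    for t, f, c in product(temperatures, flow_rates, concentrations):
        y = run_reaction(t, f, c)           # one automated experiment
        if y > best_yield:
            best_conditions, best_yield = (t, f, c), y
    return best_conditions, best_yield

print(optimize_conditions())  # conditions giving the highest observed yield
```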

Meanwhile, instead of manually mixing chemicals together and then isolating and testing the products, the researcher can go off to do something else.

“While the optimizations are being performed, the users could be talking to their colleagues about other ideas, they could be working on manuscripts, or they could be analyzing data from previous runs. In other words, doing the more human aspects of research,” Jamison says.

Rapid testing

In the new study, the researchers created about 50 different organic compounds, and they believe the technology could help scientists more rapidly design and produce compounds that could be tested as potential drugs or other useful products. This system should also make it easier for chemists to reproduce reactions that others have developed, without having to reoptimize every step of the synthesis.

“If you have a machine where you just plug in the components, and someone tries to do the same synthesis with a similar machine, they ought to be able to get the same results,” Jensen says.

The researchers are now working on a new version of the technology that could take over even more of the design work, including coming up with the order and type of modules to be used. 

The research was funded by the Defense Advanced Research Projects Agency (DARPA).

Recognizing the partially seen

Thu, 09/20/2018 - 12:40pm

When we open our eyes in the morning and take in that first scene of the day, we don’t give much thought to the fact that our brain is processing the objects within our field of view with great efficiency and that it is compensating for a lack of information about our surroundings — all in order to allow us to go about our daily functions. The glass of water you left on the nightstand when preparing for bed is now partially blocked from your line of sight by your alarm clock, yet you know that it is a glass.

This seemingly simple ability for humans to recognize partially occluded objects — defined in this situation as the effect of one object in a 3-D space blocking another object from view — has been a complicated problem for the computer vision community. Martin Schrimpf, a graduate student in the DiCarlo lab in the Department of Brain and Cognitive Sciences at MIT, explains that machines have become increasingly adept at recognizing whole items quickly and confidently, but when something covers part of an item from view, the models struggle to recognize it accurately.

“For models from computer vision to function in everyday life, they need to be able to digest occluded objects just as well as whole ones — after all, when you look around, most objects are partially hidden behind another object,” says Schrimpf, co-author of a paper on the subject that was recently published in the Proceedings of the National Academy of Sciences (PNAS).

In the new study, he says, “we dug into the underlying computations in the brain and then used our findings to build computational models. By recapitulating visual processing in the human brain, we are thus hoping to also improve models in computer vision.”

How are we as humans able to perform this everyday task repeatedly, without putting much thought or energy into it, identifying whole scenes quickly and accurately after ingesting just pieces? Researchers in the study started with the human visual cortex as a model for how to improve the performance of machines in this setting, says Gabriel Kreiman, an affiliate of the MIT Center for Brains, Minds, and Machines. Kreiman is a professor of ophthalmology at Boston Children’s Hospital and Harvard Medical School and was lead principal investigator for the study.

In their paper, "Recurrent computations for visual pattern completion," the team showed how they developed a computational model, inspired by physiological and anatomical constraints, that was able to capture the behavioral and neurophysiological observations during pattern completion. In the end, the model provided useful insights towards understanding how to make inferences from minimal information.
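
The paper’s model is a recurrent neural network constrained by physiology and anatomy; as a rough illustration of the general idea of recurrent pattern completion (and not of the authors’ architecture), a classic Hopfield-style associative memory can recover a stored pattern from a partially occluded input through repeated recurrent updates:

```python
# Illustrative only: a Hopfield-style associative memory completing an occluded
# pattern through recurrent updates. A textbook stand-in for the idea of
# recurrent pattern completion, not the model described in the PNAS paper.
import numpy as np

def train_hopfield(patterns):
    """Hebbian weights from a set of +/-1 patterns (one per row)."""
    n = patterns.shape[1]
    w = patterns.T @ patterns / n
    np.fill_diagonal(w, 0.0)
    return w

def complete(weights, partial, steps=10):
    """Iteratively update the state until it settles on a stored pattern."""
    state = partial.copy()
    for _ in range(steps):
        state = np.sign(weights @ state)
        state[state == 0] = 1
    return state

# Store one 8-unit pattern, corrupt ("occlude") half of it, and recover the whole.
stored = np.array([[1, -1, 1, 1, -1, -1, 1, -1]], dtype=float)
w = train_hopfield(stored)
occluded = stored[0].copy()
occluded[4:] = 1              # the "hidden" half is replaced by a constant
print(complete(w, occluded))  # converges back to the stored pattern
```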

Work for this study was conducted at the Center for Brains, Minds and Machines within the McGovern Institute for Brain Research at MIT.

A game changer takes on cricket’s statistical problem

Thu, 09/20/2018 - 12:00pm

Jehangir Amjad has done something few people can: He found a way to combine his favorite sport with his work. A longtime cricket enthusiast and player, he’s currently tackling an important statistical problem in the game — how to declare a winner when a match must end prematurely, due to weather or other circumstances. Given cricket’s global popularity, and the fact that matches can last for several hours, it’s a problem of great interest to fans and players alike.

For Amjad, it’s also a project that incorporates his passion for operations research. And the Laboratory for Information and Decision Systems (LIDS) was the perfect place for him to explore it.

Amjad took a circuitous path to MIT. Born and raised in Pakistan, he received a scholarship to complete his last two years of high school at the Red Cross Nordic United World College in Norway. Along with the school’s 200 other students, who came from over 100 countries, he studied, made personal and professional connections, and learned how to live with people of many different cultures during his time there. He then returned home to teach for a year (following in the footsteps of his parents, who are both professors), before attending Princeton University for a bachelor's in electrical engineering.

He graduated in 2010 and, assuming he was finished with school, went to Microsoft to be a product manager. After several years there, though, he felt restless. Realizing that he’d found himself increasingly drawn to data science and machine learning since starting at Microsoft, he says he figured he could either stay in the tech industry and learn more about these fields on the job, or “go back to school to master the mathematical nuances of this field.” He chose academics and came to MIT in 2013 as a graduate student in the Operations Research Center. There, he collaborated frequently with LIDS students and researchers, under the supervision of MIT Professor Devavrat Shah.

Because Shah is also a cricket fan, he and Amjad had been discussing the cricket problem for years, although Amjad didn’t land on his research project immediately. In fact, the theory that he is now applying to the cricket problem — robust synthetic control — is mostly used in economics, health policy, and political science. But because all of his work is interdisciplinary, he was able to see how to connect them. “A lot of what we train on [at LIDS] is the methods, but the applications are and should be very diverse,” Amjad says.

The current standard for international cricket games is to use the Duckworth-Lewis-Stern (DLS) method, created by British statisticians in the mid-1990s, to determine the winner when a game has to be called early. Amjad is viewing this as a forecasting problem.

“We aren’t just interested in predicting what the final score would be; we actually project out the entire trajectory for every ball, we project out what might happen on average,” he says.

In collaboration with Shah and Vishal Misra, a professor of computer science at Columbia University, Amjad has used the robust synthetic control method to propose a solution to the forecasting problem, which has also led to a target revision algorithm like the Duckworth-Lewis-Stern method. Having back-tested their cricket results on many games, they are confident in the approach. They are currently comparing it to DLS, he says, and planning “what statistical argument we can make so that we can hopefully convince people that we have a viable alternative.”

Broadly, synthetic control is a statistical method for evaluating the effects of an intervention. In many cases, the intervention is the introduction of a new law or regulation.

“Let’s say that 10 years ago, Massachusetts introduced a new labor law, and you wanted to study the impact of that law,” Amjad explains. “This theory says you can use a data-driven approach to come up with a synthetic Massachusetts, one that mimics Massachusetts as well as possible before the law was in place, so that you can then project what would have happened in Massachusetts had this law not been introduced.”

This creates a useful comparison point to the real Massachusetts, where the law has been in place. Placing the two side-by-side — the synthetic Massachusetts data and the real Massachusetts data — gives a sense of the law’s impact.

Amjad and his collaborators have developed a robust generalization of the classical method, known as robust synthetic control. When a problem is examined this way, limited and missing data are no longer insurmountable obstacles. Instead, these sorts of difficulties can be accommodated, which is especially useful in the social sciences, where there may not be many common data points available.

Continuing his example, he says, “the method is about using data about other states … to construct a synthetic unit. So, specifically, coming up with a synthetic Massachusetts that ends up being 20 percent like New York, 10 percent Wyoming, 5 percent something else — coming up with a weighted average of those. And those weights are essentially what is known as the synthetic control because now you’ve fixed those weights and you’re going to project that out into the future to say, ‘This is what would have happened had the law not been introduced.’”
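
As a concrete sketch of the classical method Amjad describes (not the robust variant he and his collaborators developed), the synthetic-control weights can be fit as a constrained least-squares problem: nonnegative weights over donor states that sum to one, chosen so the weighted pre-intervention data best match the treated state. All data, state counts, and weight values below are hypothetical.

```python
# Minimal sketch of classical synthetic control weight fitting. Donor states get
# nonnegative weights summing to one, chosen so the weighted pre-intervention
# series best matches the treated unit's pre-intervention series. Toy data only.
import numpy as np
from scipy.optimize import minimize

def fit_synthetic_control(treated_pre, donors_pre):
    """treated_pre: (T,) outcomes for the treated unit before the intervention.
    donors_pre: (T, J) outcomes for J donor units over the same periods."""
    J = donors_pre.shape[1]
    loss = lambda w: np.sum((treated_pre - donors_pre @ w) ** 2)
    w0 = np.full(J, 1.0 / J)
    res = minimize(loss, w0, method="SLSQP",
                   bounds=[(0.0, 1.0)] * J,
                   constraints=[{"type": "eq", "fun": lambda w: np.sum(w) - 1.0}])
    return res.x

# Hypothetical example: 12 pre-law quarters, 3 donor states.
rng = np.random.default_rng(0)
donors = rng.normal(size=(12, 3)).cumsum(axis=0)
massachusetts = 0.2 * donors[:, 0] + 0.7 * donors[:, 1] + 0.1 * donors[:, 2]
weights = fit_synthetic_control(massachusetts, donors)
print(weights)  # roughly recovers [0.2, 0.7, 0.1]
# Applying the fitted weights to donor data after the intervention gives the
# "synthetic Massachusetts" counterfactual trajectory.
```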

Eventually, as research continues and more data become available to add to the synthetic unit, the accuracy of the results should improve, he says.

Amjad has used robust synthetic control in this more traditional way, as well. One of his other projects has been a collaboration with a team at the University of Washington on a study of alcohol and marijuana use to assess whether various laws have, over time, affected their sale and use. Another example he mentions as being a particularly good fit is any situation where a randomized control trial isn’t possible, such as studying the effect of distributing international aid in a crisis. Here, the moral and ethical implications of denying certain people aid make it impossible to use a randomized trial. Instead, observational studies are in order.

“You [the researcher] can’t control who gets the treatment and who doesn’t,” he says, but the results of it can be watched, recorded, and studied. As his work evolves, he’s also looking towards the future, thinking about time series forecasting and imputation.

“My work has converged on imputation and forecasting methods, whether it’s synthetic control or just pure time-series analysis,” he says.

This intersection is an emerging field of study. Econometricians historically used small data sets and classical statistics for problem solving, but with modern machine learning, options now exist that use lots of data to do approximate inference instead. Combining these approaches means researchers can explore both the why of a problem and the prediction itself.

“You care both about the explanatory power and the predictive power, using these algorithms,” Amjad says. “These are designed for a larger scale, where you can still be prescriptive as well as predictive.” Election forecasting is just one important example of the areas in which this work could be put to use.

Having defended his thesis earlier this year, Amjad is now a lecturer in machine learning at MIT’s Computer Science and Artificial Intelligence Laboratory. He says he is grateful for his time at LIDS — and for all of the inspirational individuals he’s met and the groundbreaking ideas he’s come across here.

“The biggest lesson of my PhD is that it’s a journey,” he says. “LIDS is very accepting of you breaking the norm. They let people wander. And what that really helps you with is to understand that you can deal with ambiguity. If there is a problem that I don’t know about, I may never be able to completely solve it, but that won’t prevent me from thinking about it in a systematic way to hope to solve some parts of it.”

Reducing false positives in credit card fraud detection

Thu, 09/20/2018 - 12:00am

Have you ever used your credit card at a new store or location only to have it declined? Has a sale ever been blocked because you charged a higher amount than usual?

Consumers’ credit cards are declined surprisingly often in legitimate transactions. One cause is that fraud-detecting technologies used by a consumer’s bank have incorrectly flagged the sale as suspicious. Now MIT researchers have employed a new machine-learning technique to drastically reduce these false positives, saving banks money and easing customer frustration.

Using machine learning to detect financial fraud dates back to the early 1990s and has advanced over the years. Researchers train models to extract behavioral patterns from past transactions, called “features,” that signal fraud. When you swipe your card, the card pings the model and, if the features match fraud behavior, the sale gets blocked.

Behind the scenes, however, data scientists must dream up those features, which mostly center on blanket rules for amount and location. If any given customer spends more than, say, $2,000 on one purchase, or makes numerous purchases in the same day, they may be flagged. But because consumer spending habits vary, even in individual accounts, these models are sometimes inaccurate: A 2015 report from Javelin Strategy and Research estimates that only one in five fraud predictions is correct and that the errors can cost a bank $118 billion in lost revenue, as declined customers then refrain from using that credit card.

The MIT researchers have developed an “automated feature engineering” approach that extracts more than 200 detailed features for each individual transaction — say, if a user was present during purchases, and the average amount spent on certain days at certain vendors. By doing so, it can better pinpoint when a specific card holder’s spending habits deviate from the norm.

Tested on a dataset of 1.8 million transactions from a large bank, the model reduced false positive predictions by 54 percent over traditional models, which the researchers estimate could have saved the bank 190,000 euros (around $220,000) in lost revenue.

“The big challenge in this industry is false positives,” says Kalyan Veeramachaneni, a principal research scientist at MIT’s Laboratory for Information and Decision Systems (LIDS) and co-author of a paper describing the model, which was presented at the recent European Conference for Machine Learning. “We can say there’s a direct connection between feature engineering and [reducing] false positives. … That’s the most impactful thing to improve accuracy of these machine-learning models.”

Paper co-authors are: lead author Roy Wedge '15, a former researcher in the Data to AI Lab at LIDS; James Max Kanter ’15, SM ’15; and Santiago Moral Rubio and Sergio Iglesias Perez of Banco Bilbao Vizcaya Argentaria.

Extracting “deep” features

Three years ago, Veeramachaneni and Kanter developed Deep Feature Synthesis (DFS), an automated approach that extracts highly detailed features from any data, and decided to apply it to financial transactions.

Enterprises will sometimes host competitions where they provide a limited dataset along with a prediction problem such as fraud. Data scientists develop prediction models, and a cash prize goes to the most accurate model. The researchers entered one such competition and achieved top scores with DFS.

However, they realized the approach could reach its full potential if trained on several sources of raw data. “If you look at what data companies release, it’s a tiny sliver of what they actually have,” Veeramachaneni says. “Our question was, ‘How do we take this approach to actual businesses?’”

Backed by the Defense Advanced Research Projects Agency’s Data-Driven Discovery of Models program, Kanter and his team at Feature Labs — a spinout commercializing the technology — developed an open-source library for automated feature extraction, called Featuretools, which was used in this research.

The researchers obtained a three-year dataset provided by an international bank, which included granular information about transaction amount, times, locations, vendor types, and terminals used. It contained about 900 million transactions from around 7 million individual cards. Of those transactions, around 122,000 were confirmed as fraud. The researchers trained and tested their model on subsets of that data.

In training, the model looks for patterns of transactions and among cards that match cases of fraud. It then automatically combines all the different variables it finds into “deep” features that provide a highly detailed look at each transaction. From the dataset, the DFS model extracted 237 features for each transaction. Those represent highly customized variables for card holders, Veeramachaneni says. “Say, on Friday, it’s usual for a customer to spend $5 or $15 dollars at Starbucks,” he says. “That variable will look like, ‘How much money was spent in a coffee shop on a Friday morning?’”

It then creates an if/then decision tree for that account of features that do and don’t point to fraud. When a new transaction is run through the decision tree, the model decides in real time whether or not the transaction is fraudulent.
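
As a minimal stand-in for that last step (not the researchers’ code), here is how a decision tree trained on a few invented “deep” features could score a new transaction in real time, using scikit-learn:

```python
# Minimal stand-in (not the researchers' code): train an if/then decision tree
# on engineered transaction features and score a new transaction. Feature values
# and labels are invented for illustration.
import numpy as np
from sklearn.tree import DecisionTreeClassifier

# Each row is one historical transaction described by "deep" features, e.g.
# [avg_spend_coffee_friday_am, minutes_since_last_purchase, km_from_last_purchase]
X_train = np.array([
    [12.0,  45.0,   1.2],
    [ 8.5, 120.0,   0.4],
    [ 9.0,  30.0, 320.0],   # large jump in location -> fraud in this toy set
    [11.0,  20.0, 410.0],
])
y_train = np.array([0, 0, 1, 1])  # 1 = confirmed fraud

clf = DecisionTreeClassifier(max_depth=3, random_state=0)
clf.fit(X_train, y_train)

new_transaction = np.array([[10.0, 25.0, 350.0]])
print(clf.predict(new_transaction))  # flags the transaction as likely fraud
```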

Pitted against a traditional model used by a bank, the DFS model generated around 133,000 false positives versus 289,000 false positives, about 54 percent fewer incidents. That, along with a smaller number of false negatives — actual fraud that went undetected — could save the bank an estimated 190,000 euros, the researchers say.

Stacking primitives

The backbone of the model consists of creatively stacked “primitives,” simple functions that take two inputs and give an output. For example, calculating an average of two numbers is one primitive. That can be combined with a primitive that looks at the time stamp of two transactions to get an average time between transactions. Stacking another primitive that calculates the distance between two addresses from those transactions gives an average time between two purchases at two specific locations. Another primitive could determine if the purchase was made on a weekday or weekend, and so on.
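
The stacking idea can be illustrated in plain Python (the research itself used the Featuretools library mentioned above). Each primitive below is a small function, and composing them produces richer features such as the average time and average distance between consecutive purchases; the transaction history shown is invented.

```python
# Sketch of "stacking primitives" in plain Python (illustrative only; the
# research used the Featuretools library). Each primitive is a small function,
# and stacking them yields richer features such as the average time and average
# distance between consecutive purchases.
from datetime import datetime
from math import dist

def pairwise(values):
    """Primitive: consecutive pairs from a sequence."""
    return list(zip(values, values[1:]))

def time_gap_minutes(t1, t2):
    """Primitive: minutes elapsed between two timestamps."""
    return abs((t2 - t1).total_seconds()) / 60.0

def mean(xs):
    """Primitive: average of a list of numbers."""
    return sum(xs) / len(xs)

# Toy transaction history for one card: (timestamp, (x, y) location in km).
history = [
    (datetime(2018, 9, 20, 8, 0), (0.0, 0.0)),
    (datetime(2018, 9, 20, 8, 45), (1.0, 0.5)),
    (datetime(2018, 9, 20, 10, 30), (2.5, 1.0)),
]
times = [t for t, _ in history]
places = [p for _, p in history]

# Stacked features: average gap between purchases, average distance between them.
avg_gap = mean([time_gap_minutes(a, b) for a, b in pairwise(times)])
avg_distance = mean([dist(a, b) for a, b in pairwise(places)])
print(avg_gap, avg_distance)
```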

“Once we have those primitives, there is no stopping us for stacking them … and you start to see these interesting variables you didn’t think of before. If you dig deep into the algorithm, primitives are the secret sauce,” Veeramachaneni says.

One important feature that the model generates, Veeramachaneni notes, is the distance between two consecutive purchase locations and whether the purchases happened in person or remotely. If someone buys something in person at, say, the Stata Center and, a half hour later, buys something in person 200 miles away, there’s a high probability of fraud. But if one purchase occurred through a mobile phone, the fraud probability drops.

“There are so many features you can extract that characterize behaviors you see in past data that relate to fraud or nonfraud use cases,” Veeramachaneni says.

What makes an educational video game work well?

Thu, 09/20/2018 - 12:00am

To succeed at “Lure of the Labyrinth,” a video game created by designers in MIT’s Education Arcade, players rescue pets from an underground lair inhabited by monsters. In so doing, they solve mathematical puzzles, decipher maps, wear monster costumes as disguises, and cooperate with Iris, daughter of Hermes from classical mythology. With tenacity, players can foil the monsters’ plot and free hundreds of pets.

“Labyrinth” is intended for schoolkids. Funded by the U.S. Department of Education’s Star Schools program, the game was tested in Baltimore and rural Maryland, refined with feedback from teachers, and is intended to improve middle-school mathematics and literacy. But it is also meant to be a compelling, competitive challenge in and of itself.

“It’s both a good game and a good educational experience,” says Professor Eric Klopfer, director of MIT’s Scheller Teacher Education Program and the Education Arcade.

As such, “Labyrinth” represents many things about the philosophy of the Education Arcade, a design program situated at the junction of gaming and learning. As the program’s principals write, the world of “Labyrinth” is intended to help kids “succeed in school and life through perseverance and collaboration.”

A game, in short, can enhance a growth mindset.

“Our goal is not just to make games that are interesting interludes in a classroom, but really to connect to the students’ deeper appreciation for learning and their own trajectory in life,” says Scot Osterweil, a game designer and creative director at the Education Arcade.

Now, members of the Education Arcade detail that philosophy in a book, “Resonant Games,” published by the MIT Press. The authors are Klopfer; Osterweil; Jason Haas, a game designer and research assistant at the MIT Media Lab and the Education Arcade; and Louisa Rosenheck, a designer and research manager at the Education Arcade.

Reflecting on more than a decade of research and design, the authors discuss a core set of principles — “honor the whole learner,” for starters — and list ideas for educational game design, while underlining that they, themselves, keep learning about their craft.

“We have a list of principles,” Klopfer says. “It’s not a formula.”

Surprise, surprise

Indeed, one of the motifs of “Resonant Games” is that, while experience and data offer useful feedback, game design remains an unpredictable task: It’s never entirely clear how well certain games will catch on with an audience.

“Good design is usually surprising,” Osterweil says. “It first surprises you, then it surprises the player.”

Take “Vanished,” a 2011 game the Education Arcade developed in conjunction with the Smithsonian Institution.

The premise of this two-month-long game with thousands of participants was that people from the future contacted us in the present, with a question: What event, after our present but before their time, led to the loss of civilization’s historical records? Decoding clues, players had to find and provide information about Earth’s current condition, including temperature and species data.

“With ‘Vanished’ we were continuously learning new things about the players as the game went on, and we changed the game by what we were sensing in the players,” Osterweil says. “They surprised us by the depth of their engagement.”

Sometimes the surprise comes not from how people play a game, but who plays it. Take a pair of games the Education Arcade developed in 2008 and 2009, respectively. “Palmagotchi” is a secondary-school-level game about evolutionary biology, simulating an island where players help manage the ecosystem. “Weatherlings,” created in collaboration with Nanyang Technological University in Singapore, is a Pokemon-style game with online collectible cards that represent weather-dependent creatures that battle each other in U.S. cities.

The Education Arcade researchers were warned to expect a highly gendered response to the games, but in reality, that did not occur.

“We got lots of feedback from people who were saying, ‘Oh, that first game is only going to appeal to girls. Boys are not going to want to play that game,’” Klopfer says. “The second one, they were like, boys play Pokemon. Girls aren’t going to want to play that. And we found in fact that boys and girls equally engaged in them. We had high school boys on the verge of tears because their virtual birds had died.”

Art projects

For reasons such as this, the Education Arcade researchers emphasize that a rote approach to design is likely to fail. It is better for designers to pursue a game topic that they find fascinating and hope others will as well.

“You kind of have to see it as an art project,” says Haas, a game designer and PhD candidate at the MIT Media Lab. “You have to feel this is something that people could fall in love with.”

Still, “Resonant Games” is filled with organizing principles for thinking about educational games, including four major ones the authors list at the start. The notion that we should “honor the whole learner,” for instance, means we should remember that learners are “full human beings with a range of passions, likes, and dislikes,” who often need to be pulled in with interesting stories, puzzles, and challenges.

“People love solving problems,” Osterweil observes. “People tend to tune out when the problem seems too big or too obscure. But if you can make a problem graspable, people tend to want to solve it. And that’s what we’re trying to harness.”

Other scholars in the discipline have praised the book. Jan L. Plass, a professor in digital media and learning sciences at New York University, has called it a “highly original book” and a “very valuable resource” for other designers. Still, as the authors note, the Education Arcade team does not claim to have all the answers for creating fun, fulfilling educational games. But they can at least suggest how other designers can find success.

“The future we imagine,” Osterweil says, “is not one where our game becomes the math game for every middle schooler, but rather a whole universe of possibilities so that kids can find themselves in any number of meaningful experiences.”

School of Science welcomes 10 professors

Wed, 09/19/2018 - 10:45am

The MIT School of Science recently welcomed 10 new professors in the departments of Biology; Brain and Cognitive Sciences; Chemistry; Physics; Mathematics; and Earth, Atmospheric and Planetary Sciences.

Tristan Collins conducts research at the intersection of geometric analysis, partial differential equations, and algebraic geometry. In joint work with Valentino Tosatti, Collins described the singularity formation of the Ricci flow on Kähler manifolds in terms of algebraic data. In recent work with Gabor Szekelyhidi, he gave a necessary and sufficient algebraic condition for the existence of Ricci-flat metrics, which play an important role in string theory and mathematical physics. This result led to the discovery of infinitely many new Einstein metrics on the 5-dimensional sphere. With Shing-Tung Yau and Adam Jacob, Collins is currently studying the relationship between categorical stability conditions and the existence of solutions to differential equations arising from mirror symmetry.

Collins earned his BS in mathematics at the University of British Columbia in 2009, after which he completed his PhD in mathematics at Columbia University in 2014 under the direction of Duong H. Phong. Following a four-year appointment as a Benjamin Peirce Assistant Professor at Harvard University, Collins joins MIT as an assistant professor in the Department of Mathematics.

Julien de Wit develops and applies new techniques to study exoplanets, their atmospheres, and their interactions with their stars. While a graduate student in the Sara Seager group at MIT, he developed innovative analysis techniques to map exoplanet atmospheres, studied the radiative and tidal planet-star interactions in eccentric planetary systems, and constrained the atmospheric properties and mass of exoplanets solely from transmission spectroscopy. He plays a critical role in the TRAPPIST/SPECULOOS project, headed by the Université de Liège, leading the atmospheric characterization of the newly discovered TRAPPIST-1 planets, for which he has already obtained significant results with the Hubble Space Telescope. De Wit’s efforts are now also focused on expanding the SPECULOOS network of telescopes in the northern hemisphere to continue the search for new potentially habitable TRAPPIST-1-like systems.

De Wit earned a BEng in physics and mechanics from the Université de Liège in Belgium in 2008, an MS in aeronautic engineering and an MRes in astrophysics, planetology, and space sciences from the Institut Supérieur de l’Aéronautique et de l’Espace at the Université de Toulouse, France in 2010; he returned to the Université de Liège for an MS in aerospace engineering, completed in 2011. After finishing his PhD in planetary sciences in 2014 and a postdoc at MIT, both under the direction of Sara Seager, he joins the MIT faculty in the Department of Earth, Atmospheric and Planetary Sciences as an assistant professor.

Ila Fiete uses computational and theoretical tools to better understand the dynamical mechanisms and coding strategies that underlie computation in the brain, with a focus on elucidating how plasticity and development shape networks to perform computation and why information is encoded the way that it is. Her recent focus is on error control in neural codes, rules for synaptic plasticity that enable neural circuit organization, and questions at the nexus of information and dynamics in neural systems, such as understanding how coding and statistics fundamentally constrain dynamics and vice versa.

After earning a BS in mathematics and physics at the University of Michigan, Fiete obtained her PhD in 2004 at Harvard University in the Department of Physics. While holding an appointment at the Kavli Institute for Theoretical Physics at the University of California, Santa Barbara from 2004 to 2006, she was also a visiting member of the Center for Theoretical Biophysics at the University of California at San Diego. Fiete subsequently spent two years at Caltech as a Broad Fellow in brain circuitry, and in 2008 joined the faculty of the University of Texas at Austin. She joins the MIT faculty in the Department of Brain and Cognitive Sciences as an associate professor with tenure.

Ankur Jain explores the biology of RNA aggregation. Several genetic neuromuscular disorders, such as myotonic dystrophy and amyotrophic lateral sclerosis, are caused by expansions of nucleotide repeats in their cognate disease genes. Such repeats cause the transcribed RNA to form pathogenic clumps or aggregates. Jain uses a variety of biophysical approaches to understand how the RNA aggregates form, and how they can be disrupted to restore normal cell function. Jain will also study the role of RNA-DNA interactions in chromatin organization, investigating whether the RNA transcribed from telomeres (the protective repetitive sequences that cap the ends of chromosomes) undergoes the phase separation that characterizes repeat expansion diseases.

Jain completed a bachelor of technology degree in biotechnology and biochemical engineering at the Indian Institute of Technology Kharagpur, India in 2007, followed by a PhD in biophysics and computational biology at the University of Illinois at Urbana-Champaign under the direction of Taekjip Ha in 2013. After a postdoc at the University of California at San Francisco, he joins the MIT faculty in the Department of Biology as an assistant professor with an appointment as a member of the Whitehead Institute for Biomedical Research.

Kiyoshi Masui works to understand fundamental physics and the evolution of the universe through observations of the large-scale structure — the distribution of matter on scales much larger than galaxies. He works principally with radio-wavelength surveys to develop new observational methods such as hydrogen intensity mapping and fast radio bursts. Masui has shown that such observations will ultimately permit precise measurements of properties of the early and late universe and enable sensitive searches for primordial gravitational waves. To this end, he is working with a new generation of rapid-survey digital radio telescopes that have no moving parts and rely on signal processing software running on large computer clusters to focus and steer, including work on the Canadian Hydrogen Intensity Mapping Experiment (CHIME).

Masui obtained a BSCE in engineering physics at Queen’s University, Canada in 2008 and a PhD in physics at the University of Toronto in 2013 under the direction of Ue-Li Pen. After postdoctoral appointments at the University of British Columbia as the Canadian Institute for Advanced Research Global Scholar and the Canadian Institute for Theoretical Astrophysics National Fellow, Masui joins the MIT faculty in the Department of Physics as an assistant professor.

Phiala Shanahan studies theoretical nuclear and particle physics, in particular the structure and interactions of hadrons and nuclei from the fundamental (quark and gluon) degrees of freedom encoded in the Standard Model of particle physics. Shanahan’s recent work has focused on the role of gluons, the force carriers of the strong interactions described by quantum chromodynamics (QCD), in hadron and nuclear structure by using analytic tools and high-performance supercomputing. She recently achieved the first calculation of the gluon structure of light nuclei, making predictions that will be testable in new experiments proposed at Jefferson National Accelerator Facility and at the planned Electron-Ion Collider. She has also undertaken extensive studies of the role of strange quarks in the proton and light nuclei that sharpen theory predictions for dark matter cross-sections in direct detection experiments. To overcome computational limitations in QCD calculations for hadrons and in particular for nuclei, Shanahan is pursuing a program to integrate modern machine learning techniques in computational nuclear physics studies.

Shanahan obtained her BS in 2012 and her PhD in 2015, both in physics, from the University of Adelaide. She completed postdoctoral work at MIT in 2017, then held a joint position as an assistant professor at the College of William and Mary and senior staff scientist at the Thomas Jefferson National Accelerator Facility until 2018. She returns to MIT in the Department of Physics as an assistant professor.

Nike Sun works in probability theory at the interface of statistical physics and computation. Her research focuses in particular on phase transitions in average-case (randomized) formulations of classical computational problems. Her joint work with Jian Ding and Allan Sly establishes the satisfiability threshold of random k-SAT for large k, and relatedly the independence ratio of random regular graphs of large degree. Both are long-standing open problems where heuristic methods of statistical physics yield detailed conjectures, but few rigorous techniques exist. More recently she has been investigating phase transitions of dense graph models.

Sun completed a BA in mathematics and an MA in statistics at Harvard in 2009, and an MASt in mathematics at Cambridge in 2010. She received her PhD in statistics from Stanford University in 2014 under the supervision of Amir Dembo. She held a Schramm fellowship at Microsoft New England and MIT Mathematics in 2014-2015 and a Simons postdoctoral fellowship at the University of California at Berkeley in 2016, and joined the Berkeley Department of Statistics as an assistant professor in 2016. She returns to the MIT Department of Mathematics as an associate professor with tenure.

Alison Wendlandt focuses on the development of selective, catalytic reactions using the tools of organic and organometallic synthesis and physical organic chemistry. Mechanistic study plays a central role in the development of these new transformations. Her projects involve the design of new catalysts and catalytic transformations, identification of important applications for selective catalytic processes, and elucidation of new mechanistic principles to expand powerful existing catalytic reaction manifolds.

Wendlandt received a BS in chemistry and biological chemistry from the University of Chicago in 2007, an MS in chemistry from Yale University in 2009, and a PhD in chemistry from the University of Wisconsin at Madison in 2015 under the direction of Shannon S. Stahl. Following an NIH Ruth L. Kirschstein Postdoctoral Fellowship at Harvard University, Wendlandt joins the MIT faculty in the Department of Chemistry as an assistant professor.

Chenyang Xu specializes in higher-dimensional algebraic geometry, an area that involves classifying algebraic varieties, primarily through the minimal model program (MMP). The MMP was introduced by Fields Medalist S. Mori in the early 1980s to make advances in higher dimensional birational geometry, and was further developed by Hacon and McKernan in the mid-2000s so that it could be applied to other questions. Collaborating with Hacon, Xu expanded the MMP to varieties under certain conditions, such as those of characteristic p, and, with Hacon and McKernan, proved a fundamental conjecture on the MMP, generating a great deal of follow-up activity. In collaboration with Chi Li, Xu proved a conjecture of Gang Tian concerning higher-dimensional Fano varieties, a significant achievement. In a series of papers with different collaborators, he successfully applied the MMP to singularities.

Xu received his BS in 2002 and MS in 2004 in mathematics from Peking University, and completed his PhD at Princeton University under János Kollár in 2008. He came to MIT as a CLE Moore Instructor in 2008-2011, and was subsequently appointed assistant professor at the University of Utah. He returned to Peking University as a research fellow at the Beijing International Center of Mathematical Research in 2012, and was promoted to professor in 2013. Xu joins the MIT faculty as a full professor in the Department of Mathematics.

Zhiwei Yun’s research is at the crossroads between algebraic geometry, number theory, and representation theory. He studies geometric structures aiming at solving problems in representation theory and number theory, especially those in the Langlands program. While he was a CLE Moore Instructor at MIT, he started to develop the theory of rigid automorphic forms, and used it to answer an open question of J-P Serre on motives, which also led to a major result on the inverse Galois problem in number theory. More recently, in joint work with Wei Zhang, he gave a geometric interpretation of higher derivatives of automorphic L-functions in terms of intersection numbers, which sheds new light on the geometric analogue of the Birch and Swinnerton-Dyer conjecture.

Yun earned his BS at Peking University in 2004, after which he completed his PhD at Princeton University in 2009 under the direction of Robert MacPherson. After appointments at the Institute for Advanced Study and as a CLE Moore Instructor at MIT, he held faculty appointments at Stanford and Yale. He returned to the MIT Department of Mathematics as a full professor in the spring of 2018.

Feng Zhang wins 2018 Keio Medical Science Prize

Wed, 09/19/2018 - 10:30am

Molecular biologist Feng Zhang has been named a winner of the prestigious Keio Medical Science Prize. He is being recognized for the groundbreaking development of CRISPR-Cas9-mediated genome engineering in cells and its application for medical science. 

Zhang is the James and Patricia Poitras Professor of Neuroscience at MIT, an associate professor in the departments of Brain and Cognitive Sciences and Biological Engineering, a Howard Hughes Medical Institute investigator, an investigator at the McGovern Institute for Brain Research, and a core member of the Broad Institute of MIT and Harvard.

“We are delighted that Feng is now a Keio Prize laureate,” says McGovern Institute Director Robert Desimone. “This truly recognizes the remarkable achievements that he has made at such a young age.”

Zhang is a molecular biologist who has contributed to the development of multiple molecular tools to accelerate the understanding of human disease and create new therapeutic modalities. During his graduate work, Zhang contributed to the development of optogenetics, a system for activating neurons using light, which has advanced our understanding of brain connectivity.

Zhang went on to pioneer the deployment of the microbial CRISPR-Cas9 system for genome engineering in eukaryotic cells. The ease and specificity of the system has led to its widespread use across the life sciences and it has groundbreaking implications for disease therapeutics, biotechnology, and agriculture. He has continued to mine bacterial CRISPR systems for additional enzymes with useful properties, leading to the discovery of Cas13, which targets RNA, rather than DNA, and may potentially be a way to treat genetic diseases without altering the genome. Zhang has also developed a molecular detection system called SHERLOCK based on the Cas13 family, which can sense trace amounts of genetic material, including viruses and alterations in genes that might be linked to cancer.

“I am tremendously honored to have our work recognized by the Keio Medical Prize,” says Zhang. “It is an inspiration to us to continue our work to improve human health.”

Now in its 23rd year, the Keio Medical Science Prize is awarded to a maximum of two scientists each year. The other 2018 laureate, Masashi Yanagisawa, director of the International Institute for Integrative Sleep Medicine at the University of Tsukuba, is being recognized for his seminal work on sleep control mechanisms.

The prize is offered by Keio University, and the selection committee specifically looks for laureates that have made an outstanding contribution to medicine or the life sciences. The prize was initially endowed by Mitsunada Sakaguchi in 1994, with the express condition that it be used to commend outstanding science, promote advances in medicine and the life sciences, expand researcher networks, and contribute to the wellbeing of humankind. The winners receive a certificate of merit, a medal, and a monetary award of approximately $90,000.

The prize ceremony will be held on Dec. 18 at Keio University in Tokyo.

3 Questions: Richard Lester on the MIT China Summit

Tue, 09/18/2018 - 11:59pm

On Nov. 12 and 13, leaders in industry, government, and academia will convene at the inaugural MIT China Summit in Beijing, to explore topics at the frontiers of science and technology and the role of research and education in shaping tomorrow’s world. MIT News spoke with Richard Lester, the associate provost of MIT who oversees international activities, about the summit and its significance for the Institute.

Q: Why is MIT doing a summit in China?

A: The idea for the MIT China Summit came out of the MIT Global Strategy report published by my office last year. One of the report’s recommendations was for the Institute to convene periodic summits in targeted regions of the world, to demonstrate MIT’s interest in working with and learning from partners in these regions; to increase regional knowledge of who we are, how we work, and what we stand for; and to provide a focus for developing new collaborations. This is the first such summit.

Many people may wonder why we’re doing this summit now in light of the current tensions between the U.S. and China. I think the answer is that even though there are aggravated political strains over trade and technology, at the same time there are opportunities for us to work together on issues that are important to both countries and also to the rest of the world — issues like climate change mitigation, clean energy, environmental sustainability, urbanization, and food and water security. Indeed, now may be an especially important time for a university like MIT to focus attention on the possibilities for U.S.-China cooperation in applying science and technology to help solve great global challenges.

As President Reif recently noted in an op-ed for The New York Times, China is advancing rapidly in critical fields of science and technology. In areas such as quantum computing, 5G technology, and facial and spoken language recognition, China is a leader. The Chinese are also making bold national investments in key areas of research like biotechnology and space, and directly supporting startups and recruiting talent from around the world. Chinese researchers are increasingly present at the frontiers of science and technology, where MIT faculty and students must also be. 

At the same time, we understand that interactions with China must be approached thoughtfully and should be carefully reviewed, as is the case with our other international engagements. The summit will provide an opportunity for us to talk about what is most important to us in possible future China collaborations.

Q: What will happen at the summit?

A: The Summit in Beijing will be a two-day event. On the first day, several MIT programs will be hosting activities and events around the city. These will provide snapshots of different facets of MIT and what we are doing in China. For example, Professor Siqi Zheng, Director of the China Future City Lab at DUSP [Department of Urban Studies and Planning], will hold a symposium on new urban developments in China. MIT Sloan Global Programs will hold its inaugural International Faculty Fellows Conference at Tsinghua University. And the Institute will also host MIT Better World (Beijing), a special reception for alumni from China, Taiwan, and Hong Kong featuring a fireside chat with President Reif.

The second day will consist of a plenary conference. This event will engage the audience — some 500 invited guests from academia, industry, finance, and government, as well as MIT alums — in an MIT-mediated survey of some of the most exciting topics at the frontiers of science as well as potential solutions to some of the world’s most challenging problems. The conference will be opened by President Reif, and the program will feature 15 leading MIT professors, together with prominent Chinese scientific and business leaders. Two key themes of the event will be: What can we learn from each other, and what might we do together to address major global challenges?

Our local host for all these events is the Chinese Academy of Sciences.

Q: How far back does MIT’s engagement with China go?

A: The first Chinese student came to MIT in 1877, to study at MIT’s School of Mechanic Arts. Over the next 50 years, 400 Chinese students were enrolled here at MIT, many of them sent by the Chinese government. This history was beautifully told by Professor Emma Teng in an exhibit she curated last year called China Comes to Tech. She showed us that these early Chinese students were deeply involved in all aspects of the MIT community. They came to study subjects like railroad engineering, mining engineering, and naval architecture, and they also participated in athletics, debate, theater, the professional societies, and virtually every aspect of student life.

Chinese students have brought so much to our campus over the past 140 years. And our Chinese faculty and alumni, people like I.M. Pei, Sam Ting, Charles Zhang, and others, have enjoyed great entrepreneurial and academic success and made important contributions in the U.S., in China, and around the world. The MIT China Summit will celebrate this history, while looking ahead to what we hope will be a new century of even more creative and dynamic interactions between MIT and China.

An extinction without warning

Tue, 09/18/2018 - 11:59pm

The most severe mass extinction in Earth’s history occurred with almost no early warning signs, according to a new study by scientists at MIT, in China, and elsewhere.

The end-Permian mass extinction, which took place 251.9 million years ago, killed off more than 96 percent of the planet’s marine species and 70 percent of its terrestrial life — a global annihilation that marked the end of the Permian Period.

The new study, published today in the GSA Bulletin, reports that in the approximately 30,000 years leading up to the end-Permian extinction, there is no geologic evidence of species starting to die out. The researchers also found no signs of any big swings in ocean temperature or dramatic fluxes of carbon dioxide in the atmosphere. When ocean and land species did die out, they did so en masse, over a period that was geologically instantaneous.

“We can say for sure that there were no initial pulses of extinction coming in,” says study co-author Jahandar Ramezani, a research scientist in MIT’s Department of Earth, Atmospheric, and Planetary Sciences. “A vibrant marine ecosystem was continuing until the very end of Permian, and then bang — life disappears. And the big outcome of this paper is that we don’t see early warning signals of the extinction. Everything happened geologically very fast.”

Ramezani’s co-authors include Samuel Bowring, professor of geology at MIT, along with scientists from the Chinese Academy of Sciences, the National Museum of Natural History, and the University of Calgary.

Finding missing pieces

For over two decades, scientists have tried to pin down the timing and duration of the end-Permian mass extinction to gain insights into its possible causes. Most attention has been devoted to well-preserved layers of fossil-rich rocks in eastern China, in a place known to geologists as the Meishan section. Scientists have determined that this section of sedimentary rocks was deposited in an ancient ocean basin, just before and slightly after the end-Permian extinction. As such, the Meishan section is thought to preserve signs of how Earth’s life and climate fared leading up to the calamitous event. 

“However, the Meishan section was deposited in a deep water setting and is highly condensed,” says Shuzhong Shen of the Nanjing Institute of Geology and Palaeontology in China, who led the study. “The rock record may be incomplete.” The whole extinction interval at Meishan comprises just 30 centimeters of ancient sedimentary layers, and he says it’s likely that there were periods in this particular ocean setting when sediments did not settle, creating “depositional gaps” during which any evidence of life or environmental conditions may not have been recorded. 

In 1994, Shen took Bowring, along with paleobiologist Doug Erwin, now curator of paleozoic invertebrates at the National Museum of Natural History and a co-author of the paper, to look for a more complete extinction record in Penglaitan, a much less-studied section of rock in southern China’s Guangxi province. The Penglaitan section is what geologists consider “highly expanded.” Compared with Meishan’s 30 centimeters of sediments, Penglaitan’s sedimentary layers make up a much more expanded 27 meters that were deposited over the same period of time, just before the main extinction event occurred.

“It’s from a different part of the ancient ocean basin, that was closer to the continent, where you might find coral reefs and a lot more sedimentation and biological activity,” Ramezani says. “So we can see a lot more, as in what’s happening in the environment and with life, in this same period of time.”

The researchers painstakingly collected and analyzed samples from multiple layers of the Penglaitan section, including samples from ash beds that were deposited by volcanic activity that occurred as nearby seafloor was crushed slowly under continental crust. These ash beds contain zircons — tiny mineral grains that contain uranium and lead, the ratios of which researchers can measure to determine the age of the zircon, and the ash bed from which it came.

Ramezani and his colleagues used this geochronology technique, developed to a large extent by Bowring, to determine with high precision the age of multiple ash bed layers throughout the Penglaitan section. From their analysis, they were able to determine that the end-Permian extinction occurred suddenly, around 252 million years ago, give or take 31,000 years.
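
The underlying calculation is the standard uranium-lead age equation: for the 238U-to-206Pb system, the age follows from the measured daughter-to-parent ratio and the known decay constant of 238U. The sketch below is a minimal illustration of that equation, not the lab’s actual data-reduction software.

```python
# Minimal illustration of the uranium-lead age equation used in zircon
# geochronology (not the lab's data-reduction software). For the
# 238U -> 206Pb system, t = ln(1 + [206Pb/238U]) / lambda_238.
import math

LAMBDA_238 = 1.55125e-10  # decay constant of 238U, per year

def u_pb_age_years(pb206_u238_ratio):
    """Age implied by a measured radiogenic 206Pb/238U ratio."""
    return math.log(1.0 + pb206_u238_ratio) / LAMBDA_238

# A ratio of about 0.0399 corresponds to roughly 252 million years,
# the approximate age of the end-Permian extinction discussed above.
print(u_pb_age_years(0.0399) / 1e6, "million years")
```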

“A sudden punch”

The team also analyzed sedimentary layers for fossils, as well as oxygen and carbon isotopes, which can tell something about the ocean temperature and the state of its carbon cycle at the time the sediments were deposited. From the fossil record, they expected to see waves of species going extinct in the lead-up to the final extinction horizon. Similarly, they anticipated big changes in ocean temperature and chemistry that would signal the oncoming disaster.

“We thought we would see a gradual decline in the diversity of life forms or, for example, certain species that are known to be less resilient than others, we would expect them to die out early on, but we don’t see that,” Ramezani says. “Disappearances are very random and don’t conform to any kind of physiologic process or environmental effect. That makes us believe that the changes we are seeing before the event horizon are not really reflecting extinction.”

For example, the researchers found signs that the ocean temperature rose from 30 to 35 degrees Celsius from the base to the top of the 27-meter interval — a period that encompasses about 30,000 years before the main extinction event. This temperature swing, however, is not very significant compared with a much larger heat-up that took place after most species already had died out.

“Big changes in temperature come right after the extinction, when the ocean gets really hot and uncomfortable,” Ramezani says. “So we can rule out that ocean temperature was a driver of the extinction.”

So what could have caused the sudden, global wipeout? The leading hypothesis is that the end-Permian extinction was caused by massive volcanic eruptions that spewed more than 4 million cubic kilometers of lava over what is now known as the Siberian Traps, in Siberia, Russia. Such immense and sustained eruptions likely released huge amounts of sulfur dioxide and carbon dioxide into the air, heating the atmosphere and acidifying the oceans.

Prior work by Bowring and his former graduate student Seth Burgess determined that the timing of the Siberian Traps eruptions matches the timing of the end-Permian extinction. But according to the team’s new data from the Penglaitan section, even though increased global volcanic activity dominated the last 400,000 years of the Permian, it doesn’t appear that there were any dramatic die-outs of marine species or any significant changes in ocean temperature and atmospheric carbon in the 30,000 years leading up to the main extinction.

“We can say there was extensive volcanic activity before and after the extinction, which could have caused some environmental stress and ecologic instability. But the global ecologic collapse came with a sudden blow, and we cannot see its smoking gun in the sediments that record extinction,” Ramezani says. “The key in this paper is the abruptness of the extinction. Any hypothesis that says the extinction was caused by gradual environmental change during the late Permian — all those slow processes, we can rule out. It looks like a sudden punch comes in, and we’re still trying to figure out what it meant and what exactly caused it.”

“This study adds very much to the growing evidence that Earth's major extinction events occur on very short timescales, geologically speaking,” says Jonathan Payne, professor of geological sciences and biology at Stanford University, who was not involved in the research. “It is even possible that the main pulse of Permian extinction occurred in just a few centuries. If it turns out to reflect an environmental tipping point within a longer interval of ongoing environmental change, that should make us particularly concerned about potential parallels to global change happening in the world around us right now.”

This research was supported, in part, by the Chinese Academy of Sciences and the National Natural Science Foundation of China.

Creating 3-D-printed “motion sculptures” from 2-D videos

Tue, 09/18/2018 - 11:59pm

New England Patriots quarterback Tom Brady has often credited his success to spending countless hours studying his opponent’s movements on film. This understanding of movement is necessary for all living species, whether it’s figuring out the best angle for throwing a ball, or perceiving the motion of predators and prey. But simple videos can’t actually give us the full picture.

That’s because traditional videos and photos for studying motion are two-dimensional, and don’t show us the underlying 3-D structure of the person or subject of interest. Without the full geometry, we can’t inspect the small and subtle movements that help us move faster or make sense of the precision needed to perfect our athletic form.

Recently, though, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have come up with a way to get a better handle on this understanding of complex motion. 

The new system uses an algorithm that can take 2-D videos and turn them into 3-D-printed “motion sculptures” that show how a human body moves through space.

In addition to being an intriguing aesthetic visualization of shape and time, the “MoSculp” system could enable a much more detailed study of motion for professional athletes, dancers, or anyone who wants to improve their physical skills.

“Imagine you have a video of Roger Federer serving a ball in a tennis match, and a video of yourself learning tennis,” says PhD student Xiuming Zhang, lead author of a new paper about the system. “You could then build motion sculptures of both scenarios to compare them and more comprehensively study where you need to improve.”

Because motion sculptures are 3-D, users can use a computer interface to navigate around the structures and see them from different viewpoints, revealing motion-related information inaccessible from the original viewpoint.

Zhang wrote the paper alongside MIT professors of electrical engineering and computer science William Freeman and Stefanie Mueller, PhD student Jiajun Wu, Google researchers Qiurui He and Tali Dekel, as well as former CSAIL PhD students Andrew Owens and Tianfan Xue.

Bodies in motion

Artists and scientists have long struggled to gain better insight into movement, limited by their own camera lenses and what they could provide.

Previous work has mostly used so-called “stroboscopic” photography techniques, which look a lot like the images in a flip book stitched together. But since these photos only show snapshots of movement, viewers wouldn’t be able to see as much of the trajectory of a person’s arm when they’re hitting a golf ball, for example.

What’s more, these photographs also require laborious preshoot setup, such as using a clean background and specialized depth cameras and lighting equipment. All MoSculp needs is a video sequence.

Given an input video, the system first automatically detects 2-D key points on the subject’s body, such as the hip, knee, and ankle of a ballerina while she’s doing a complex dance sequence. Then, it takes the best possible poses from those points to be turned into 3-D “skeletons.”

After stitching these skeletons together, the system generates a motion sculpture that can be 3-D-printed, showing the smooth, continuous path of movement traced out by the subject. Users can customize their figures to focus on different body parts, assign different materials to distinguish among parts, and even customize lighting.
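
In outline, the pipeline turns per-frame 2-D joints into stacked 3-D skeletons that trace one continuous shape through time. The sketch below is only a schematic of that data flow under assumed inputs: the function names are invented, and MoSculp’s actual pose lifting is a learned model rather than the toy depth pairing shown here.

```python
import numpy as np

def lift_to_3d(keypoints_2d: np.ndarray, depth_guess: np.ndarray) -> np.ndarray:
    """Toy 'lift': pair each 2-D (x, y) joint with an estimated depth.
    MoSculp uses learned pose estimation; this stand-in only shows the data flow."""
    return np.concatenate([keypoints_2d, depth_guess[..., None]], axis=-1)

def motion_sculpture(frames_2d: list, depths: list) -> np.ndarray:
    """Stack per-frame 3-D skeletons along time into one point set that a
    later step could surface and 3-D print."""
    skeletons = [lift_to_3d(kp, d) for kp, d in zip(frames_2d, depths)]
    return np.stack(skeletons)  # shape: (num_frames, num_joints, 3)

# Hypothetical input: 100 frames, 17 joints each.
frames = [np.random.rand(17, 2) for _ in range(100)]
depths = [np.random.rand(17) for _ in range(100)]
print(motion_sculpture(frames, depths).shape)  # (100, 17, 3)
```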

In user studies, the researchers found that over 75 percent of subjects felt that MoSculp provided a more detailed visualization for studying motion than the standard photography techniques.

“Dance and highly skilled athletic motions often seem like ‘moving sculptures’ but they only create fleeting and ephemeral shapes,” says Aaron Hertzmann, a principal scientist at Adobe's Creative Intelligence Lab who was not involved in the research. “This work shows how to take motions and turn them into real sculptures with objective visualizations of movement, providing a way for athletes to analyze their movements for training, requiring no more equipment than a mobile camera and some computing time.”

The system works best for larger movements, like throwing a ball or taking a sweeping leap during a dance sequence. It also works for situations that might obstruct or complicate movement, such as if people are wearing loose clothing or carrying objects.

Currently, the system handles only single-person scenarios, but the team hopes to soon expand to multiple people. This could open up the potential to study things like social disorders, interpersonal interactions, and team dynamics.

The team will present their paper on the system next month at the User Interface Software and Technology (UIST) conference in Berlin, Germany.

Anant Agarwal, MIT professor and edX CEO, wins Yidan Prize

Tue, 09/18/2018 - 1:30pm

On Sept. 15, the Yidan Prize named MIT professor and edX co-founder Anant Agarwal as one of two 2018 laureates. The Yidan Prize judging panel, led by former Director-General of UNESCO Koichiro Matsuura, took more than six months to consider over 1,000 nominations spanning 92 countries. The Yidan Prize consists of two awards: the Yidan Prize for Education Development, awarded to Agarwal for making education more accessible to people around the world via the edX online platform, and the Yidan Prize for Education Research, awarded to Larry V. Hedges of Northwestern University for his groundbreaking statistical methods for meta-analysis.

Agarwal is the CEO of edX, the online learning platform founded by MIT and Harvard University in 2012. He taught the first MITx course on edX, which drew 155,000 students from 162 countries. Agarwal has been leading the organization’s rapid growth since its founding. EdX currently offers over 2,000 online courses from more than 130 leading institutions to more than 17 million people around the world.

MITx, MIT’s portfolio of MOOCs delivered through edX, has also continued to expand its offerings, launching the MicroMasters credential in 2015. The credential has now been adopted by over 20 edX partners who have launched 50 different MicroMasters programs.

“I am extremely honored to receive this incredible recognition on behalf of edX, our worldwide partners and learners, from Dr. Charles Chen Yidan and the Yidan Prize Foundation. I also want to thank MIT and Harvard, our founding partners, for their pivotal role in making edX the transformative force in education that it is today. Yidan’s mission to create a better world through education is at the heart of what edX strives to do. This award will help us fulfill our commitment to reimagine education and further our mission to expand access to high-quality education for everyone, everywhere,” says Agarwal.

The Yidan Prize

Founded in 2016 by Charles Chen Yidan, the Yidan Prize aims to create a better world through education. The Yidan Prize for Education Research and the Yidan Prize for Education Development will be awarded in Hong Kong in December 2018 by The Honorable Mrs. Carrie Lam Cheng Yuet-ngor, chief executive of the Hong Kong Special Administrative Region.

Following the ceremony, the laureates will be joined by about 350 practitioners, researchers, policymakers, business leaders, philanthropists, and global leaders in education to launch the 2018 edition of the Worldwide Educating for the Future Index (WEFFI), the first comprehensive index to evaluate inputs into education systems rather than outputs, such as test scores.

Dorothy K. Gordon, chair of UNESCO IFAP and head of the judging panel, commends Professor Agarwal for his work behind the MOOC movement. “EdX gives people the tools to decide where to learn, how to learn, and what to learn. It brings education into the sharing economy, enabling access for people who were previously excluded from the traditional system of education because of financial, geographic, or social constraints. It is the ultimate disrupter with the ability to reach every corner of the world that is internet enabled, decentralizing and democratizing education.”

Vice President for Open Learning Sanjay Sarma praises edX for creating a platform “where learners from all over the world can access high-quality education and also for enabling MIT faculty and other edX university partners to rethink how digital technologies can enhance on-campus education by providing a platform that empowers researchers to advance the understanding of teaching through online learning.”  

South Africa startup Wala wins Zambezi Prize for micropayments platform

Tue, 09/18/2018 - 11:10am

The Legatum Center for Development and Entrepreneurship at MIT, with support from the Mastercard Foundation, has named South African startup Wala as the grand prize winner of the 2018 Zambezi Prize for Innovation in Financial Inclusion, as well as the regional winner of the MIT Inclusive Innovation Challenge (IIC) in the Financial Inclusion category. 

Wala is a mobile financial platform geared toward consumers operating outside the formal financial system. Using a blockchain system, it enables zero-fee, instant, borderless micro-payments for emerging market consumers. Through the Wala platform, users receive a cryptocurrency wallet and can access transactional banking, remittances, loans, and insurance.

Wala was chosen from among 10 finalists, all of whom joined leaders from the MIT and African tech ecosystems for the 2018 MIT Open Mic Africa Summit at Strathmore University in Nairobi in late August. The summit culminated in the award ceremony for the Zambezi Prize and for the IIC Africa, with Wala being honored as the $100,000 grand prize winner.

Other Zambezi Prize winners included Tulaa (Kenya) and RecyclePoints (Nigeria), which each won $30,000 as runners-up. Each of the remaining Zambezi finalists received $5,000: Apollo Agriculture (Kenya), Bidhaa Sasa (Kenya), FarmDrive (Kenya), Farmerline (Ghana), LanteOTC (South Africa), MaTontine (Senegal), and OZÉ (Ghana).

“Innovators like Wala and the other Zambezi and IIC finalists are vital to driving a more inclusive prosperity,” says Georgina Campbell Flatter, executive director of the MIT Legatum Center. “We’re excited to work with them.”

Megan Mitchell, director of fellowship and student programs for the Legatum Center, says MIT was a big winner too. “By bringing these transformative entrepreneurs into the MIT and Legatum Center universe, we’re giving ourselves more inputs and role models to integrate into our curriculum, student programming, and thought leadership.”

All 10 prize finalists will attend the Zambezi boot camp on the MIT campus during the IIC gala in Boston Nov. 5-9. As the Zambezi Prize winner, Wala will join Lynk, Wefarm, and Solar Freeze, the three other winners of the IIC Africa Prize, to represent Africa at the global tournament, which awards over $1 million in prizes. The IIC event is part of the MIT Initiative on the Digital Economy and, along with the MIT Legatum Center’s initiatives, exemplifies MIT’s global commitment to the future of work.

The Mastercard Foundation and several leading organizations attended this year’s Open Mic Africa Summit to engage with the new cohort and other area entrepreneurs. Prior to the prize announcement, event participants took part in cohort-building, panel discussions, a conversation on Kenya’s entrepreneurship ecosystem led by MIT Regional Entrepreneurship Acceleration Program Executive Director Sarah Jane Maxted, and an MIT-style hackathon led by MIT Sloan senior lecturer Anjali Sastry and Martin Trust Center for MIT Entrepreneurship entrepreneur-in-residence Nick Meyer.

The summit also wrapped up the 2018 Open Mic Africa tour, a Pan-African event series designed to invigorate and celebrate entrepreneurial ecosystems across the continent. The Zambezi Prize and the Open Mic Africa tour are pillars of the Legatum Center’s Africa Strategy — a global vision to leverage MIT’s ecosystem to improve lives through principled entrepreneurial leadership. The Legatum Center’s strategy is also a core component of the MIT-Africa initiative, an Institute-wide commitment to coordinate and expand MIT connections with the continent.

Legatum Center Global Programs Manager Ali Diallo emphasized the scale of collaboration necessary to execute initiatives like Open Mic Africa and the Zambezi Prize: “We’re especially grateful to the Zambezi Prize Board, the MIT community, our global ecosystem collaborators, and the 40 African tech leaders who served as Zambezi judges and whose dedication to entrepreneurship and financial inclusion helped us discover this new generation of innovators.”

The Zambezi Prize competition, which awards a total of $200,000 in cash prizes, was established in 2015 to discover Africa’s most innovative early-stage startups that address the continent’s biggest financial inclusion challenges.

The challenges were selected by the Zambezi Board Members in collaboration with the Mastercard Foundation. Board Members include Georgina Campbell Flatter, executive director of the Legatum Center and lecturer at MIT Sloan School of Management; Christian Catalini, an assistant professor at MIT Sloan; Sam Epee-Bounya, managing director at Wellington Management; Xavier Faz, a lead at Digital Finance Frontiers, CGAP; Professor Simon Johnson of MIT Sloan; Jake Kendall, director of DFL Labs at Caribou Digital; and Shari Loessberg, a senior lecturer at MIT Sloan.

Book explores milestones of astronomical discovery

Tue, 09/18/2018 - 12:00am

Here’s a quick rule of thumb about the universe: Everything old is new again.

The materials used when new stars or planets form are just recycled cosmic matter, after all. And even our latest scientific discoveries may not be as new as they seem.

That’s one insight from Marcia Bartusiak’s new book, “Dispatches from Planet 3,” published by Yale University Press, a tour of major discoveries in astronomy and astrophysics that digs into the history behind these breakthroughs.

“No discovery comes out of the blue,” says Bartusiak, professor of the practice in MIT’s Graduate Program in Science Writing. “Sometimes it takes decades of preparation for [discoveries] to be built, one brick at a time.”

The book, drawn from her columns in Natural History, underscores that point by highlighting unheralded scientists whose work influenced later discoveries.

Moreover, as Bartusiak observes in the book, recent scientific debates often echo older arguments. Take the kerfuffle last decade about whether or not Pluto should be regarded as a proper planet in our solar system. As Bartusiak recounts in the book, the same thing happened multiple times in the 19th century, when objects called Ceres, Vesta, and Juno first gained and then lost membership in the club of planets. 

“Ceres in the 19th century was a certified planet, along with Vesta and Juno, the big asteroids, until they got demoted into the general asteroid belt,” Bartusiak says. “Then the same thing happened again, and everyone said, ‘Poor Pluto, it’s not a planet any more.’ Well, I’m sure in the 19th century there were people going ‘Poor Ceres, it’s not a planet.’ We’ll get over it.”

(Demoting Pluto, by the way, is a judgment Bartusiak is comfortable with: “They made the right decision. Pluto is a dwarf planet. It’s part of the Kuiper Belt. I’m sure I’ll get a lot of people mad with me, [but] it makes sense to have Pluto in that group, rather than … with the big terrestrial planets and the gas giants.”)

One astronomer who made a crucial Pluto-related discovery was Jane X. Luu, who helped locate asteroids orbiting the sun from even farther away. Luu is just one of many women in “Dispatches from Planet 3” — although, Bartusiak says, that was not by design, but simply a consequence of hunting for the origins of important advances. 

“I did not have an agenda for this book,” Bartusiak says. “I have always been the type of writer that wanted to follow my nose on what the most interesting findings, discoveries, and theories were, without worrying about who was doing them.”

But as it happens, many stories about the development of scientific knowledge involve accomplished female scientists who did not immediately become household names.

Consider the astronomer Cecilia Payne-Gaposchkin, who in the 1920s, Bartusiak notes, “first knew that hydrogen is the major element of the universe. A major discovery! This is the fuel for stars. It was central to astronomical studies. And yet, the greatest astronomer of the time, Henry Norris Russell, made her take [the idea] out of her thesis before they would accept it at Harvard.”

Bartusiak’s book also recounts the career of Beatrice Tinsley, an astrophysicist who in the 1970s developed important work about the ways galaxies change over time, before she died in her early 40s.

“Who really started thinking about galaxy evolution?” Bartusiak asks. “Beatrice Tinsley, ignored when she first started doing this, [produced] one of the most accomplished PhD theses in astronomical history. She was the first to really take it seriously.”

The notion that galaxies evolve, Bartusiak’s book reminds us, is a relatively recent concept, running counter to ages of conventional wisdom. 

“People thought of the universe as being serene [and that] every galaxy was like the Milky Way,” Bartusiak says. “And that was based on what they could see.” Deep in the postwar era, our empirical knowledge expanded, and so did our conception of galactic-scale activity.

In fairness, the Milky Way is pretty placid at the moment.

“It will get active again when we collide with Andromeda, 4 billion years from now,” Bartusiak says. “We’re lucky we’re not in the galactic center or in a very active star cluster. You have stars blowing up, and it probably would be hard for life to start if you were in an area where X-rays were raining down on you, or if a supernova was going off nearby. We’re off in a little spur in a very quiet part of the Milky Way galaxy, which has enabled life on Earth here to evolve and flourish without a cosmic incident raining havoc down upon us.”

Bartusiak closes the book with chapters on black holes, the idea of the multiverse, and our problems in conceptualizing what it means to think that the universe had a beginning.

“We think that black holes and gravitational waves are strange, but there may be stranger things to come,” Bartusiak says. “As I say in a chapter with [Harvard theoretical physicist] Lisa Randall, experimenters and theorists used to work in tandem … and now the theorists have moved so far from observations that it’s a little frightening. There’s a need for new instrumentation, the new James Webb telescopes, the new particle accelerators.”

Which ultimately brings Bartusiak to another part of science that definitely has precedent: the need for funding to support research.

“The bigger the instrument, the further out you can see, or the further down into spacetime you can see, so I want people to realize that if you want these stories to continue, you’re going to need a further investment,” Bartusiak says. “But that’s what makes us a civilization. That we can take at least some of our wealth and use it to expand our knowledge about where we live. And that includes the universe, not just the Earth.”

Machine-learning system tackles speech and object recognition, all at once

Tue, 09/18/2018 - 12:00am

MIT computer scientists have developed a system that learns to identify objects within an image, based on a spoken description of the image. Given an image and an audio caption, the model will highlight in real-time the relevant regions of the image being described.

Unlike current speech-recognition technologies, the model doesn’t require manual transcriptions and annotations of the examples it’s trained on. Instead, it learns words directly from recorded speech clips and objects in raw images, and associates them with one another.

The model can currently recognize only several hundred different words and object types. But the researchers hope that one day their combined speech-object recognition technique could save countless hours of manual labor and open new doors in speech and image recognition.

Speech-recognition systems such as Siri, for instance, require transcriptions of many thousands of hours of speech recordings. Using these data, the systems learn to map speech signals with specific words. Such an approach becomes especially problematic when, say, new terms enter our lexicon, and the systems must be retrained.

“We wanted to do speech recognition in a way that’s more natural, leveraging additional signals and information that humans have the benefit of using, but that machine learning algorithms don’t typically have access to. We got the idea of training a model in a manner similar to walking a child through the world and narrating what you’re seeing,” says David Harwath, a researcher in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Spoken Language Systems Group. Harwath co-authored a paper describing the model that was presented at the recent European Conference on Computer Vision.

In the paper, the researchers demonstrate their model on an image of a young girl with blonde hair and blue eyes, wearing a blue dress, with a white lighthouse with a red roof in the background. The model learned to associate which pixels in the image corresponded with the words “girl,” “blonde hair,” “blue eyes,” “blue dress,” “white lighthouse,” and “red roof.” When an audio caption was narrated, the model then highlighted each of those objects in the image as they were described.

One promising application is learning translations between different languages, without need of a bilingual annotator. Of the estimated 7,000 languages spoken worldwide, only 100 or so have enough transcription data for speech recognition. Consider, however, a situation where two different-language speakers describe the same image. If the model learns speech signals from language A that correspond to objects in the image, and learns the signals in language B that correspond to those same objects, it could assume those two signals — and matching words — are translations of one another.

“There’s potential there for a Babel Fish-type of mechanism,” Harwath says, referring to the fictitious living earpiece in the “Hitchhiker’s Guide to the Galaxy” novels that translates different languages to the wearer.

The CSAIL co-authors are: graduate student Adria Recasens; visiting student Didac Suris; former researcher Galen Chuang; Antonio Torralba, a professor of electrical engineering and computer science who also heads the MIT-IBM Watson AI Lab; and Senior Research Scientist James Glass, who leads the Spoken Language Systems Group at CSAIL.

Audio-visual associations

This work expands on an earlier model developed by Harwath, Glass, and Torralba that correlates speech with groups of thematically related images. In the earlier research, they put images of scenes from a classification database on the crowdsourcing Mechanical Turk platform. They then had people describe the images as if they were narrating to a child, for about 10 seconds. They compiled more than 200,000 pairs of images and audio captions, in hundreds of different categories, such as beaches, shopping malls, city streets, and bedrooms.

They then designed a model consisting of two separate convolutional neural networks (CNNs). One processes images, and one processes spectrograms, a visual representation of audio signals as they vary over time. The highest layer of the model computes outputs of the two networks and maps the speech patterns with image data.

The researchers would, for instance, feed the model caption A and image A, which is correct. Then, they would feed it a random caption B with image A, which is an incorrect pairing. After comparing thousands of wrong captions with image A, the model learns the speech signals corresponding with image A, and associates those signals with words in the captions. As described in a 2016 study, the model learned, for instance, to pick out the signal corresponding to the word “water,” and to retrieve images with bodies of water.
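
A common way to train a two-branch model on matching and mismatched pairs is a margin-based ranking loss. The snippet below is a generic formulation of that idea under assumed embedding shapes, not necessarily the authors’ exact objective.

```python
import torch
import torch.nn.functional as F

def ranking_loss(image_emb, true_caption_emb, wrong_caption_emb, margin=1.0):
    """Encourage each image to score higher with its own caption than with a
    randomly mismatched one, by at least `margin` (a generic hinge loss)."""
    pos = (image_emb * true_caption_emb).sum(dim=-1)   # similarity of correct pairs
    neg = (image_emb * wrong_caption_emb).sum(dim=-1)  # similarity of mismatched pairs
    return F.relu(margin - pos + neg).mean()

# Hypothetical batch of 32 examples with 1,024-dimensional embeddings.
loss = ranking_loss(torch.randn(32, 1024), torch.randn(32, 1024), torch.randn(32, 1024))
print(loss.item())
```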

“But it didn’t provide a way to say, ‘This is the exact point in time that somebody said a specific word that refers to that specific patch of pixels,’” Harwath says.

Making a matchmap

In the new paper, the researchers modified the model to associate specific words with specific patches of pixels. The researchers trained the model on the same database, but with a new total of 400,000 image-caption pairs. They held out 1,000 random pairs for testing.

In training, the model is similarly given correct and incorrect images and captions. But this time, the image-analyzing CNN divides the image into a grid of cells consisting of patches of pixels. The audio-analyzing CNN divides the spectrogram into segments of, say, one second to capture a word or two.

With the correct image and caption pair, the model matches the first cell of the grid to the first segment of audio, then matches that same cell with the second segment of audio, and so on, all the way through each grid cell and across all time segments. For each cell and audio segment, it provides a similarity score, depending on how closely the signal corresponds to the object.
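
One straightforward way to realize this scoring is a table of dot products between every image-grid-cell embedding and every audio-segment embedding; the sizes and names below are illustrative rather than taken from the paper.

```python
import torch

def similarity_scores(image_cells: torch.Tensor, audio_segments: torch.Tensor) -> torch.Tensor:
    """image_cells: (H, W, D) embeddings, one per grid cell.
    audio_segments: (T, D) embeddings, one per time segment.
    Returns an (H, W, T) tensor: one similarity score per cell and segment."""
    return torch.einsum("hwd,td->hwt", image_cells, audio_segments)

# Hypothetical sizes: a 14x14 image grid, 128 audio segments, 512-dim embeddings.
scores = similarity_scores(torch.randn(14, 14, 512), torch.randn(128, 512))
print(scores.shape)  # torch.Size([14, 14, 128])
```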

The challenge is that, during training, the model doesn’t have access to any true alignment information between the speech and the image. “The biggest contribution of the paper,” Harwath says, “is demonstrating that these cross-modal [audio and visual] alignments can be inferred automatically by simply teaching the network which images and captions belong together and which pairs don’t.”

The authors dub this automatically learned association between a spoken caption’s waveform and the image pixels a “matchmap.” After training on thousands of image-caption pairs, the network narrows down those alignments to specific words representing specific objects in that matchmap.

“It’s kind of like the Big Bang, where matter was really dispersed, but then coalesced into planets and stars,” Harwath says. “Predictions start dispersed everywhere but, as you go through training, they converge into an alignment that represents meaningful semantic groundings between spoken words and visual objects.”

“It is exciting to see that neural methods are now also able to associate image elements with audio segments, without requiring text as an intermediary,” says Florian Metze, an associate research professor at the Language Technologies Institute at Carnegie Mellon University. “This is not human-like learning; it’s based entirely on correlations, without any feedback, but it might help us understand how shared representations might be formed from audio and visual cues.”

Pilot program helps students boost wellness

Mon, 09/17/2018 - 1:20pm

MIT’s new ENGINEERyourHEALTH PLUS program utilizes recreation to help students find practices to enhance their emotional, mental, and physical wellbeing. The program was developed by Director of MIT Recreation Stephanie Smith and her predecessor Tim Mertz, who both work for Health Fitness Corporation, which operates the Zesiger Center and the Alumni Pool and Wang Fitness Center.

The main goal of ENGINEERyourHEALTH PLUS is to help students access MIT Recreation's wellness-boosting services and activities — such as massage, yoga, and personal training — more easily. The program even employs therapists and trainers who are available as late as 10 p.m., which makes it easier for students to fit sessions into their busy schedules.

“Once you’re in this program, Steph and her team really take a consultative approach in helping the student to make decisions (about what activities would be most beneficial),” Mertz says.

The inspiration came from a review of survey data. Mertz and Smith discovered that students cited cost, time, and location as persistent barriers to seeking support.

“At the same time across campus, [there was] this intentional focus on mental health and improving the climate for different quality of life indicators,” Mertz explains. “We thought, well, MIT Recreation can do so much on the preventative, proactive side, and to be viewed and included as part of the campus care network would be excellent.”

So, Smith and Mertz reached out to Student Support Services (S3) and suggested a partnership.

“We had a discussion with them and said, ‘It would be great if students who need help and see you first could be referred into this program,’” Mertz explains, “then we can prescribe or provide access to these free programs.”

Costs are covered by a grant from Suzy Nelson, vice president and dean for student life. The three-year pilot is currently in its second year, and Mertz notes that student feedback has been very positive.

Junior Jessica Quaye, an electrical engineering and computer science major, was referred to the program in the fall of her sophomore year. After a successful first year, Quaye entered her second year with a heavy course load and lots of activities. Soon, the amount of work it took to fulfill her many commitments was wearing her down.

“Everything was just overwhelming me at the time, and so I went to S3 and I spoke to my dean and I was just crying and frustrated.” After Quaye’s S3 dean referred her to the ENGINEERyourHEALTH PLUS program, she booked a massage which left her feeling refreshed and at ease. “[Stephanie] just made the entire process easy,” she explains, referring to the process of enrolling in ENGINEERyourHEALTH PLUS and selecting wellness activities.

Quaye reflects on the warm, welcoming nature of her massage therapist, and the impact it had on her at the time. “For me, just having someone be nice to me in a really stressful moment just warmed my heart and made me feel like everything will be okay regardless of how stressed you are and how badly things are going,” she says.

The following spring, Smith reached out to Quaye with an offer to continue with the program.

“I was like, ‘Of course!’” Quaye insists that every student at MIT can benefit from ENGINEERyourHEALTH PLUS — not just those struggling with mental health. “It’s a break for [students],” Quaye says of the program. “For people who are taking on a lot of responsibilities, having the exercise or the massage just helps you to break away and just to take care of yourself and make sure you’re healthy.”

Anyone interested can read more about ENGINEERyourHEALTH PLUS at the MIT Recreation website, visit Student Support Services online, stop by their office in Room 5-104, or call (617) 253-4861 to schedule an appointment.

IEEE honors disaster response system developers

Mon, 09/17/2018 - 1:10pm

Gregory Hogan, Paul Breimyer, and Andy Vidan have been named the 2019 recipients of the IEEE's Innovation in Societal Infrastructure Award, which recognizes individuals whose work on efficient infrastructure systems demonstrates an innovative application of information technology and has potential to make a substantial impact on society.

Hogan, Breimyer, and Vidan were selected for their lead roles in Lincoln Laboratory's development and promotion of the Next-Generation Incident Command System (NICS), a distributed system that facilitates emergency responses and disaster recovery.

"The development of the Next-Generation Incident Command System is of critical importance for enabling timely sharing of information and effective coordination and command of thousands of responders from hundreds of agencies during rapidly evolving, catastrophic events," says Professor Ling Liu of Georgia Tech, chair of the award committee. "Such R&D efforts will translate effectively to saving lives, reducing loss of resources, and protecting our social, economic, and physical environments. This is exactly the type of innovation and leadership that the IEEE Innovation in Societal Infrastructure award committee is set to recognize and promote."

Hogan, the associate leader of Lincoln Laboratory's Advanced Sensors and Techniques Group, called it "unexpected and humbling" to hear that the team had been selected for the award.

"It has been a tremendous experience to work with Paul and Andy on an R&D project that has had such a significant impact on the disaster response community," Hogan says. "NICS has been adopted state-wide by California and by Victoria, Australia, as operational systems supporting many thousands of first responders, law enforcement officials, and other governmental agencies. Now, in partnership with NATO, we are adapting the system for multinational use."

Hogan, Breimyer, and Vidan began exploring how the laboratory's expertise in sensors, information extraction, and data analysis could improve responses to disasters back in 2009. As a first step, they had to understand the problems and demands encountered by first responders. Working with the California Department of Forestry and Fire Protection (CAL FIRE), which annually responds to several catastrophic, large-scale wildfires, they developed a picture of the common challenges emergency responders face. They identified two major challenges: obtaining accurate, updated, comprehensive situational awareness of the disaster, and communicating that information to a widespread, often multiagency, cadre of responders.

Hogan, Breimyer, and Vidan spent five years developing, building, field testing, and promoting a distributed, internet-based, collaborative information system that integrates data from multiple sources — for example, responders on the ground, airborne imaging sensors, weather and traffic reports, and maps — into a real-time, cohesive picture of how a disaster is unfolding. Developed through an iterative design/implementation process, NICS has undergone upgrades as communication and visualization technologies have advanced. The Department of Homeland Security's Science and Technology Directorate (DHS S&T) has sponsored the continual evolution of NICS since 2010.
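
As a loose illustration of what integrating data from multiple sources into one picture can look like in code, the sketch below normalizes reports from different feeds into a single time-ordered list. The Report fields, feed names, and merge logic are all invented for illustration and are not drawn from NICS itself.

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class Report:
    """One normalized observation in a shared situational picture.
    These fields are illustrative; NICS's actual data model is not shown here."""
    source: str        # e.g., "ground responder", "airborne sensor", "weather feed"
    lat: float
    lon: float
    timestamp: datetime
    summary: str

def merge_feeds(*feeds: list) -> list:
    """Combine reports from several sources into one time-ordered view."""
    combined = [report for feed in feeds for report in feed]
    return sorted(combined, key=lambda r: r.timestamp)

ground = [Report("ground responder", 34.05, -118.25,
                 datetime(2017, 9, 1, 12, 5, tzinfo=timezone.utc), "Spot fire near ridge")]
aerial = [Report("airborne sensor", 34.06, -118.24,
                 datetime(2017, 9, 1, 12, 1, tzinfo=timezone.utc), "Hot spot detected")]
for r in merge_feeds(ground, aerial):
    print(r.timestamp.isoformat(), r.source, r.summary)
```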

NICS has been used in national and international disaster responses, and its architecture has been made available on the internet for registered users. It has been deployed to hundreds of incidents, ranging from wildfires to mudslides to floods. More than 570 organizations in 40 U.S. states and five foreign countries have used NICS to improve their responses to natural and human-made disasters. In 2014, the Emergency Management Directorate of Australia's largest state, Victoria, began implementing NICS' open platform architecture into its emergency management system, and the directorate continues to share software updates with the NICS community.

"We were fortunate to have the support and resources from across Lincoln Laboratory, the Departments of Defense and Homeland Security, CAL FIRE, Emergency Management Victoria, and the first responder community to design, prototype, implement, field test, and, most importantly, operationalize an advanced, fault-tolerant, and scalable distributed system for humanitarian assistance and disaster response," says Vidan, a former Lincoln Laboratory associate technology officer who is now the chief executive officer of Composable Analytics, a Lincoln Laboratory spinoff software company.

In September 2017, about 1,300 disaster responders from 34 NATO countries participated in a simulated emergency response in which NICS was used to provide real-time situational awareness. This exercise, held in Bosnia and Herzegovina, showcased NICS's capabilities to a multinational disaster response community. Lincoln Laboratory and DHS S&T are already involved in a collaboration with NATO to implement NICS in southeastern European nations. During a four-year partnership, the laboratory will work with local and federal response agencies in Bosnia and Herzegovina, Croatia, Macedonia, and Montenegro to adapt NICS for their specific needs.

Breimyer says he is "honored and humbled" by the award.

"I'd like to thank IEEE, Lincoln Laboratory, DHS S&T, and the hundreds of participating organizations in the first responder community," says Breimyer, a former technical staff member at the laboratory and now the director of software engineering at Audible Inc. "I feel incredibly fortunate to have partnered with Gregg and Andy, as well as the countless visionary first responders whose dedication to improving emergency response inspires us all."

IEEE will present the awards to the three recipients at the 2019 IEEE Symposium on Security and Privacy, which will be held May 20-22 in San Francisco.

Detangling DNA replication

Mon, 09/17/2018 - 12:30pm

DNA is a lengthy molecule — approximately 1,000-fold longer than the cell in which it resides — so it can’t be jammed in haphazardly. Rather, it must be neatly organized so proteins involved in critical processes can access the information contained in its nucleotide bases. Think of the double helix like a pair of shoe laces twisted together, coiled upon themselves again and again to make the molecule even more compact.

However, when it comes time for cell division, this supercoiled nature makes it difficult for proteins involved in DNA replication to access the strands, separate them, and copy them so one DNA molecule can become two.

Replication begins at specific regions of the chromosome where specialized proteins separate the two strands, pulling apart the double helix as you would the two shoe laces. However, this local separation actually tangles the rest of the molecule further, and without intervention creates a buildup of tension, stalling replication. Enter the enzymes known as topoisomerases, which travel ahead of the strands as they are being peeled apart, snipping them, untwisting them, and then rejoining them to relieve the tension that arises from supercoiling.

These topoisomerases are generally thought to be sufficient to allow replication to proceed. However, a team of researchers from MIT and the Duke University School of Medicine suggests the enzymes may require guidance from additional proteins, which recognize the shape characteristic of overtwisted DNA.

“We’ve known for a long time that topoisomerases are necessary for replication, but it’s never been clear if they were sufficient on their own,” says Michael Laub, an MIT professor of biology, Howard Hughes Medical Institute Investigator, and senior author of the study. “This is the first paper to identify a protein in bacteria, or eukaryotes, that is required to localize topoisomerases ahead of replication forks and to help them do what they need to do there.”

Postdoc Monica Guo ’07 and former graduate student Diane Haakonsen PhD ’16 are co-first authors of the study, which appeared online in the journal Cell on Sept. 13.

Necessary but not sufficient

Although it’s well established that topoisomerases are crucial to DNA replication, it is now becoming clear that we know relatively little about the mechanisms regulating their activity, including where and when they act to relieve supercoiling.

These enzymes fall into two groups, type I and type II, depending on how many strands of DNA they cut. The researchers focused on type II topoisomerases found in a common species of freshwater bacteria, Caulobacter crescentus. Type II topoisomerases in bacteria are of particular interest because a number of antibiotics target them in order to prevent DNA replication, treating a wide variety of microbial infections, including tuberculosis. Without topoisomerases, the bacteria can’t grow. Since these bacterial enzymes are unique, poisons directed at them won’t harm human topoisomerases.

For a long time, type II topoisomerases were generally assumed adequate on their own to manage the overtwisted supercoils that arise during replication. Although researchers working in E. coli and other, higher organisms have pinpointed additional proteins that can activate or repress these enzymes, none of these proteins were required for replication.

Such findings hinted that there might be similar interactions taking place in other organisms. In order to understand the protein factors involved in compacting Caulobacter DNA — regulating topoisomerase activity specifically — the researchers screened their bacteria for proteins that bound tightly to supercoiled DNA. From there, they homed in on one protein, GapR, which they observed was essential for DNA replication. In bacteria missing GapR, the DNA became overtwisted, replication slowed, and the bacteria eventually died.

Surprisingly, the researchers found that GapR recognized the structure of overtwisted DNA rather than specific nucleotide sequences.

“The vast majority of DNA-binding proteins localize to specific locations of the genome by recognizing a specific set of bases,” Laub says. “But GapR basically pays no attention to the actual underlying sequence — just the shape of overtwisted DNA, which uniquely arises in front of replication forks and transcription machinery.”

The crystal structure of the protein bound to DNA, solved by Duke’s Maria Schumacher, revealed that GapR recognizes the backbone of DNA (rather than the bases), forming a snug clamp that encircles the overtwisted DNA. However, when the DNA is relaxed in its standard form, it no longer fits inside the clamp. This might signify that GapR sits on DNA only at positions where topoisomerase is needed.

An exciting milestone

Although GapR appears to be required for DNA replication, it’s still not clear precisely how this protein promotes topoisomerase function to relieve supercoiling.

“In the absence of any other proteins, GapR is able to help type II topoisomerases remove positive supercoils faster, but we still don’t quite know how,” Guo says. “One idea is that GapR interacts with topoisomerases, recognizing the overtwisted DNA and recruiting the topoisomerases. Another possibility is that GapR is essentially grabbing onto the DNA and limiting the movement of the positive supercoils, so topoisomerases can target and eliminate them more quickly.”

Anthony Maxwell, a professor of biological chemistry at the John Innes Centre who was not involved with the study, says the buildup of DNA supercoils is a key problem in both bacterial replication and transcription.

“Identifying GapR and its potential role in controlling supercoiling in vivo is an exciting milestone in understanding the control of DNA topology in bacteria,” he says. “Further work will be required to show how exactly these proteins cooperate to maintain bacterial genomic integrity.”

According to Guo, the study provides insight into a fundamental process — DNA replication — and the ways topoisomerases are regulated, which could extend to eukaryotes.

“This was the first demonstration that a topoisomerase activator is required for DNA replication,” she says. “Although there’s no GapR homolog in higher organisms, there could be similar proteins that recognize the shape of the DNA and aid or position topoisomerases.”

This could open up a new field of drug research, she says, targeting activators like GapR to increase the efficacy of existing topoisomerase poisons to treat conditions like respiratory and urinary tract infections. After all, many topoisomerase inhibitors have become less effective due to antibiotic resistance. But only time will tell; there is still much to learn in order to untangle the complex process of DNA replication, along with its many twists and turns.

The research was funded by NIH grants, the HHMI International Predoctoral Fellowship, and the Jane Coffin Childs Memorial Fellowship.

Abdul Latif Jameel Clinic for Machine Learning in Health at MIT aims to revolutionize disease prevention, detection, and treatment

Mon, 09/17/2018 - 10:00am

Today, MIT and Community Jameel, the social enterprise organization founded and chaired by Mohammed Abdul Latif Jameel ’78, launched the Abdul Latif Jameel Clinic for Machine Learning in Health (J-Clinic). This is the fourth major collaborative effort between MIT and Community Jameel.

J-Clinic, a key part of the MIT Quest for Intelligence, will focus on developing machine learning technologies to revolutionize the prevention, detection, and treatment of disease. It will concentrate on creating and commercializing high-precision, affordable, and scalable machine learning technologies in areas of health care ranging from diagnostics to pharmaceuticals, with three main areas of focus:

  • preventative medicine methods and technologies with the potential to change the course of noninfectious disease by stopping it in its tracks;
  • cost-effective diagnostic tests that may be able to both detect and alleviate health problems; and
  • drug discovery and development to enable faster and cheaper discovery, development, and manufacture of new pharmaceuticals, particularly those targeted for individually customized therapies.

J-Clinic’s holistic approach will utilize MIT’s strong expertise in cellular and medical biology, computer science, engineering, and the social sciences, among other areas.

“The health care system has no shortage of data,” says MIT President L. Rafael Reif. “But it has far too little access to the kinds of tools and experts who can translate population-level data into clinical insights that could make it possible to tune care precisely for individuals. Building on MIT’s deep expertise in fields from cancer to neuroscience, and our longstanding connections to Boston’s world-class medical community, J-Clinic offers an accelerated path to creating new technologies that could help make health care more effective everywhere — from villages in developing nations to major teaching hospitals.”

“We are grateful to Community Jameel for their humanitarian vision, boldness, generosity, and continued enthusiasm for collaborating with MIT on their efforts to help make a better world,” Reif adds.

J-Clinic will leverage MIT’s strong relationship with industry and Boston-area hospitals to test, integrate, and deploy new technologies. It will also seek to advance patentable research that could be commercialized and spun-out through licensing to startups and pharmaceutical companies putting these advances into real-life practice.

“The J-Clinic will positively impact the world by accelerating the creation of machine learning technologies and algorithms that will make preventing, detecting, and treating disease more precise, affordable, and personalized,” says Anantha P. Chandrakasan, dean of the MIT School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science, who will serve as J-Clinic’s chair. “It will be a truly multifaceted effort that amplifies synergies between the life sciences and the latest research in human and machine intelligence. J-Clinic will inspire innovation for the betterment of humanity.”

As part of its work, J-Clinic will support research projects, education, workshops, and other activities at the intersection of machine learning and biology.

“Channeling MIT’s machine learning expertise into health care will transform medical outcomes for people around the world,” says Fady Jameel, president of Community Jameel International. “Health care has been an important sphere of activity for Community Jameel since our earliest days, from founding the first nonprofit hospital for physical rehabilitation in Saudi Arabia, to partnering on the King Salman Center for Disability Research. J-Clinic continues our journey of supporting cutting-edge research and driving innovation in health care, in Saudi Arabia and around the whole world.”

This marriage of machine learning with clinical and biological insights aspires to spur a global transformation in the health care and medical fields with the aim to save the lives of millions of people, spawn new technologies, and improve the entire health care industry around the globe.

The Community Jameel gift to establish J-Clinic is part of MIT’s current $5 billion Campaign for a Better World and is consistent with Community Jameel’s focus on creating a better future. Earlier collaborations between MIT and Community Jameel include the Abdul Latif Jameel Poverty Action Lab (J-PAL), established in 2003, which seeks answers to poverty in a changing world; the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS), created in 2014, which addresses food and water scarcity and safety issues as the result of population rises and climate change; and the Abdul Latif Jameel World Education Lab (J-WEL), launched in 2017, which pursues innovative, scalable, and sustainable educational innovation.

Community Jameel and MIT have also collaborated in the Abdul Latif Jameel-Toyota Endowed Scholarship since 1994 and the MIT Enterprise Forum Arab Startup Competition and Saudi Startup Competition.

3Q: Sheila Widnall on sexual harassment in STEM

Mon, 09/17/2018 - 12:00am

Sheila Widnall, MIT Institute Professor and former secretary of the U.S. Air Force, was co-chair of a report commissioned by the National Academies of Sciences, Engineering, and Medicine to explore the impact of sexual harassment of women in those fields. Along with co-chair Paula Johnson, president of Wellesley College, Widnall and dozens of panel members and researchers spent two years collecting and analyzing data for the report, which was released over the summer. On Sept. 18, Widnall, Johnson, and Brandeis University Professor Anita Hill will offer their thoughts on the report’s findings and recommendations, in a discussion at MIT’s Huntington Hall, Room 10-250. Widnall spoke with MIT News about some of the report’s key takeaways.

Q: As a woman who has been working in academia for many years, did you find anything in the results of this report that surprised you, anything that was unexpected?

A: Well, not unexpected, but the National Academy reports have to be based on data, and so our committee was composed of scientists, engineers, and social scientists, who have somewhat different ways of looking at problems. One of the challenges was to bring the committee together to agree on a common result. We couldn’t just make up things; we had to get data. So, we had some fundamental data from various universities that were taken by a recognized survey platform, and that was the foundation of our data.

We had data for thousands and thousands of faculty and students. We did not look at student-on-student behavior, which we felt was not really part of our charge. We were looking at the structure of academic institutions and the environment that’s created in the university. We also looked at the relationship between faculty, who hold considerable authority over the climate, and the futures of students, which can be influenced by faculty through activities such as thesis advising, and letter writing, and helping people find the next rung in their career.

At the end of the report, after we’d accumulated all this data and our conclusions about it, we said, “OK, what’s the solution?” And the solution is leadership. There is no other way to get started in some of these very difficult climate issues than leadership. Presidents, provosts, deans, department heads, faculty — these are the leaders at a university, and they are essential for dealing with these issues. We can’t make little recommendations to do this or do that. It really boils down to leadership.

Q: What are some of the specific recommendations or programs that the report committee would like to see adopted?

A: We found many productive actions taken by universities, including climate surveys, and our committee was particularly pleased with ombudsman programs — having a way that individuals can go to people and discuss issues and get help. I think MIT has been a leader in that; I’m not sure all universities have those. And another recommendation — I hate to use the word training, because faculty hate the word training — but MIT has put in place some things that faculty have to work through in terms of training, mainly to understand the definitions of what these various terms mean, in terms of the legal structure, the climate structure. The bottom line is you want to create a civil and welcoming climate where people feel free to express any concerns that they have.

One of the things we did, since we were data-driven, was that we tried to collect examples of processes and programs that have been put in place by other societies, and put them forward as examples.

We found various professional societies that are very aware of things that can happen offsite, so they have instituted special policies or even procedures for making sure that a meeting is a safe and welcoming environment for people who come across the country to go to a professional meeting. There are several examples of that in the report, of societies that have really stepped forward and put in place procedures and principles about “this is how you should behave at a meeting.” So I think that’s very welcome.

Q: One of the interesting findings of the report was that gender harassment — stereotyping what people can or can’t do based on their gender — was especially pervasive. What are some of the impacts of that kind of behavior?

A: A hostile work environment is caused by the incivility of the climate. All the little microinsults, things like telling women they can’t solder or that women don’t belong in science or engineering. I think that’s really an important point in our report. Gender discrimination is most pervasive, and many people don’t think it’s wrong; they just don’t give it a second thought.

If you have a climate where people feel that they can get away with that kind of behavior, then it’s more likely to happen. If you have an environment where people are expected to be polite — is that an old-fashioned word? — or civil, people act respectfully.

It’s pretty clear that physical assault is unacceptable. So we didn’t deal a lot with that issue. It’s certainly a very serious kind of harassment. But we did try to focus on this less obvious form and the responsibilities of universities to create a safe and welcoming climate. I think MIT does a really good job of that.

I think the numbers have helped to improve the climate. You know, when I came to MIT women were 1 percent of the undergraduate student body. Now it’s 46 percent, so clearly, times have changed.

When I came here as a freshman, my freshman advisor said, “What are you doing here?” That wasn’t exactly welcoming. He looked at me as if I didn’t belong here. And I don’t think that’s the case anymore, not with such a high percentage of undergraduates being women. I think increasingly, people do feel that women are an inherent part of the field of engineering, in the field of science, in medicine.

Inspired by nature, reaching across disciplines

Mon, 09/17/2018 - 12:00am

Years ago, Tzu-Chieh “Zijay” Tang and his peers in his high school biology club would gather after school to go on nature hikes into the mountains of Taipei, Taiwan. Together, they’d trek eight or nine miles, often reaching their chosen summit past midnight. For Tang, that’s when the mountains truly came alive.

“That’s the prime time for frogs, snakes, stag beetles, and other insects,” Tang says. “That’s when they’re most active.” A budding biologist, Tang collected specimens from his hikes and expeditions into local forests and was inspired by the diversity of fauna he saw in natural environments.

As he delved deeper into nature, Tang developed an interest in molecular biology and pursued life science research at Academia Sinica, the national academy of Taiwan. There, he gained hands-on research experience and opted to continue studying life science as an undergraduate at National Taiwan University.

Before arriving at MIT, Tang also studied design and architecture, and materials science, which ultimately stoked his passion for biology and the structures of living things. Now a fifth-year graduate student in the Department of Biological Engineering, Tang is working on engineering living materials that can sense aspects of their environment and relay what they’ve sensed back to researchers.

Life’s architectures

After graduating, Tang joined the Taiwanese air force and worked at Hualien Airport. The mandatory military service temporarily paused his science studies; afterward, he resumed his research at Academia Sinica. In the evenings, instead of venturing into the forests, Tang explored design and architecture as he finished his undergraduate studies.

He soon learned of efforts to build sustainable cities in the United Arab Emirates, and moved to Masdar City, Abu Dhabi, to pursue a master’s in materials science and engineering at the Masdar Institute of Science and Technology (now Khalifa University of Science and Technology). In Masdar City, he focused on atomic force microscopy, a technique that helps researchers study the physics of objects’ surfaces. While his peers focused on pure materials like graphite, Tang drew from his background in biology and examined DNA molecules, dragonfly wings, shrimp shells, and fish scales. (“I was curious about what they’d look like,” Tang says.)

The fish scales helped Tang discover a new interest: biological engineering. After examining a Gulf parrotfish he found at a local market in Abu Dhabi, Tang and his colleagues decided to focus on the scales’ nanoscale water-repellent properties. The scales represented a “safe, energy-efficient” solution in biology that could potentially be applied to the problem of marine biofouling — when organisms such as barnacles and algae grow on pipes — in variable environmental conditions.

“If you have a problem, and you look into the problem in nature and see how animals or plants deal with these kinds of problems and extract those design principles, you can try to replicate [them] using engineering approaches,” Tang says. He cites the work of Neri Oxman, associate professor of media arts and sciences at the MIT Media Lab and Tang’s co-advisor, as an example of nature-inspired materials research.

Even after the project on the fish scales, Tang wasn’t quite ready to dive into the field of biological engineering. “This is a relatively new field. Sometimes there are too many options and a lot of possibilities, and you have to know more before you make decisions,” Tang says.

Part of Tang’s research into the field brought him to the Materials Research Society fall meeting in Boston. There, he learned of the synthetic biology research led at MIT by Timothy Lu, associate professor of biological engineering and electrical engineering and computer science. Encouraged by the sense of community he found among synthetic biology researchers at MIT and in Lu’s lab, Tang applied to the biological engineering graduate program. In addition to studying biological engineering in Lu’s lab, Tang is also a part of the Mediated Matter group led by Oxman.

Inspired by kombucha

Tang studies biosensing applied to water testing, and is an Abdul Latif Jameel World Water and Food Security Lab (J-WAFS) fellow in water solutions. Biosensing, Tang says, provides an advantage over traditional water testing methods: It doesn’t require electricity. But biosensing still has a long way to go before it’s viable for widespread deployment.

While researchers have engineered microbes like E. coli to sense, record, and relay information from their environments, Tang focuses on “creating an environment where you can protect those microbes, and, at the same time, don’t let them escape into the environment.” For Tang, this involves encapsulating approximately 1 billion E. coli in a hydrogel material — inspired by the popping boba in bubble tea — specifically engineered to do just that.

But wouldn’t it be more efficient if the sensing bacteria could learn to support themselves, too? To study how microbes could produce self-supporting matrices, Tang looks to SCOBY, the floating biofilm added to teas to create the popular fermented tea drink kombucha. SCOBY stands for symbiotic community of bacteria and yeast, and contains a cellulose-rich architecture that could serve as a model for self-supporting matrices that house sensing microbes.

To study and engineer sensing SCOBYs, Tang collaborates with colleagues in the Department of Bioengineering at Imperial College London through the MIT International Science and Technology Initiatives (MISTI). Through the collaboration, Tang hopes to create a living material inspired by kombucha that can not only sense contaminants in water, but also serve as a filter.

Tang envisions far-reaching impacts for these potential living filters. “You can actually dry [the kombucha-inspired filter],” he says. “Even in remote areas, people can grow it themselves. You don’t have to do anything. Just put it in a fresh culture and it will grow.”

Supporting students

During his time at MIT, Tang has also served as a teaching assistant for 6.129/20.129 (Biological Circuit Engineering Lab), a lab course that teaches students fundamental research techniques in synthetic biology.

Compared to building traditional electrical circuits, “building biological systems is actually more complicated and time consuming,” Tang says. As part of the course, students propose their own biological circuits and build them using the techniques they learn in the lab.

“I really appreciate that [the department] has the vision to let the students do this,” Tang says, citing the intense time commitment of lab work as well as the rapidly developing nature of the field. “They really know how to be the pioneers.”
