MIT Latest News

MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.

Astrophysical shock phenomena reproduced in the laboratory

Tue, 08/06/2019 - 11:59pm

Vast interstellar events where clouds of charged matter hurtle into each other and spew out high-energy particles have now been reproduced in the lab with high fidelity. The work, by MIT researchers and an international team of colleagues, should help resolve longstanding disputes over exactly what takes place in these gigantic shocks.

Many of the largest-scale events, such as the expanding bubble of matter hurtling outward from a supernova, involve a phenomenon called collisionless shock. In these interactions, the clouds of gas or plasma are so rarefied that most of the particles involved actually miss each other, but they nevertheless interact electromagnetically or in other ways to produce visible shock waves and filaments. These high-energy events have so far been difficult to reproduce under laboratory conditions that mirror those in an astrophysical setting, leading to disagreements among physicists as to the mechanisms at work in these astrophysical phenomena.

Now, the researchers have succeeded in reproducing critical conditions of these collisionless shocks in the laboratory, allowing for detailed study of the processes taking place within these giant cosmic smashups. The new findings are described in the journal Physical Review Letters, in a paper by MIT Plasma Science and Fusion Center Senior Research Scientist Chikang Li, five others at MIT, and 14 others around the world.

Virtually all visible matter in the universe is in the form of plasma, a kind of soup of subatomic particles where negatively charged electrons swim freely along with positively charged ions instead of being connected to each other in the form of atoms. The sun, the stars, and most clouds of interstellar material are made of plasma.

Most of these interstellar clouds are extremely tenuous, with such low density that true collisions between their constituent particles are rare even when one cloud slams into another at extreme velocities that can far exceed 1,000 kilometers per second. Nevertheless, the result can be a spectacularly bright shock wave, sometimes showing a great deal of structural detail including long trailing filaments.

Astronomers have found that many changes take place at these shock boundaries, where physical parameters “jump,” Li says. But deciphering the mechanisms taking place in collisionless shocks has been difficult, since the combination of extremely high velocities and low densities has been hard to match on Earth.

While collisionless shocks had been predicted earlier, the first one that was directly identified, in the 1960s, was the bow shock formed by the solar wind, a tenuous stream of particles emanating from the sun, when it hits Earth’s magnetic field. Soon, many such shocks were recognized by astronomers in interstellar space. But in the decades since, “there has been a lot of simulations and theoretical modeling, but a lack of experiments” to understand how the processes work, Li says.

Li and his colleagues found a way to mimic the phenomena in the laboratory by generating a jet of low-density plasma using a set of six powerful laser beams, at the OMEGA laser facility at the University of Rochester, and aiming it at a thin-walled polyimide plastic bag filled with low-density hydrogen gas. The results reproduced many of the detailed instabilities observed in deep space, thus confirming that the conditions match closely enough to allow for detailed, close-up study of these elusive phenomena. A quantity called the mean free path of the plasma particles was measured as being much greater than the widths of the shock waves, Li says, thus meeting the formal definition of a collisionless shock.
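
To make the “collisionless” criterion concrete, here is a toy check of that definition in Python; the density, cross-section, and shock-width values are purely illustrative placeholders, not numbers from the experiment.

```python
# Toy illustration of the collisionless-shock criterion: the particle
# mean free path must greatly exceed the shock width. All numbers below
# are illustrative placeholders, NOT values from the OMEGA experiment.

n = 1e18            # plasma number density [particles / m^3] (assumed)
sigma = 1e-22       # effective collision cross-section [m^2] (assumed)
shock_width = 1e-4  # shock-front thickness [m] (assumed)

mean_free_path = 1.0 / (n * sigma)  # lambda = 1 / (n * sigma)

print(f"mean free path = {mean_free_path:.3e} m")
print(f"shock width    = {shock_width:.3e} m")
print(f"ratio          = {mean_free_path / shock_width:.1e}")

# If the ratio is much greater than 1, most particles cross the shock
# without ever colliding, so the shock must be mediated by collective
# electromagnetic fields rather than by particle-particle collisions.
if mean_free_path / shock_width > 100:
    print("Collisionless: shock is mediated by collective fields.")
```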

At the boundary of the lab-generated collisionless shock, the density of the plasma spiked dramatically. The team was able to measure the detailed effects on both the upstream and downstream sides of the shock front, allowing them to begin to differentiate the mechanisms involved in the transfer of energy between the two clouds, something that physicists have spent years trying to figure out. The results are consistent with one set of predictions based on something called the Fermi mechanism, Li says, but further experiments will be needed to definitively rule out some other mechanisms that have been proposed.

“For the first time we were able to directly measure the structure” of important parts of the collisionless shock, Li says. “People have been pursuing this for several decades.”

The research also showed exactly how much energy is transferred to particles that pass through the shock boundary, which accelerates them to speeds that are a significant fraction of the speed of light, producing what are known as cosmic rays. A better understanding of this mechanism “was the goal of this experiment, and that’s what we measured,” Li says, noting that they captured a full spectrum of the energies of the electrons accelerated by the shock.

"This report is the latest installment in a transformative series of experiments, annually reported since 2015, to emulate an actual astrophysical shock wave for comparison with space observations," says Mark Koepke, a professor of physics at West Virginia University and chair of the Omega Laser Facility User Group, who was not involved in the study. "Computer simulations, space observations, and these experiments reinforce the physics interpretations that are advancing our understanding of the particle acceleration mechanisms in play in high-energy-density cosmic events such as gamma-ray-burst-induced outflows of relativistic plasma."

The international team included researchers at the University of Bordeaux in France, the Czech Academy of Sciences, the National Research Nuclear University in Russia, the Russian Academy of Sciences, the University of Rome, the University of Rochester, the University of Paris, Osaka University in Japan, and the University of California at San Diego. It was supported by the U.S. Department of Energy and the French National Research Agency.

New insights into bismuth’s character

Tue, 08/06/2019 - 3:40pm

The search for better materials for computers and other electronic devices has focused on a group of materials known as “topological insulators” that have a special property of conducting electricity on the edge of their surfaces like traffic lanes on a highway. This can increase energy efficiency and reduce heat output.

The first experimentally demonstrated topological insulator in 2009 was bismuth-antimony, but only recently did researchers identify pure bismuth as a new type of topological insulator. A group of researchers in Europe and the U.S. provided both experimental evidence and theoretical analysis in a 2018 Nature Physics report.

Now, researchers at MIT, along with colleagues in Boston, Singapore, and Taiwan, have conducted a theoretical analysis to reveal several more previously unidentified topological properties of bismuth. The team was led by senior authors MIT Associate Professor Liang Fu, MIT Professor Nuh Gedik, Northeastern University Distinguished Professor Arun Bansil, and Research Fellow Hsin Lin at Academia Sinica in Taiwan.

“It’s kind of a hidden topology where people did not know that it can be that way,” says MIT postdoc Su-Yang Xu, a coauthor of the paper published recently in PNAS.

Topology is a mathematical tool that physicists use to study electronic properties by analyzing electrons’ quantum wave functions. The “topological” properties give rise to a high degree of stability in the material and make its electronic structure very robust against minor imperfections in the crystal, such as impurities, or minor distortions of its shape, such as stretching or squeezing.

“Let’s say I have a crystal that has imperfections. Those imperfections, as long as they are not so dramatic, then my electrical property will not change,” Xu explains. “If there is such topology and if the electronic properties are uniquely tied to the topology rather than the shape, then it will be very robust.”

“In this particular compound, unless you somehow apply pressure or something to distort the crystal structure, otherwise this conduction will always be protected,” Xu says.

Since the electrons carrying a certain spin can only move in one direction in these topological materials, they cannot bounce backwards or scatter, which is the behavior that makes silicon- and copper-based electronic devices heat up.

While materials scientists seek to identify materials with fast electrical conduction and low heat output for advanced computers, physicists want to classify the types of topological and other properties that underlie these better-performing materials.

In the new paper, “Topology on a new facet of bismuth,” the authors calculated that bismuth should show a state known as a “Dirac surface state,” which is considered a hallmark of these topological insulators. They found that the crystal is unchanged by a half-circle rotation (180 degrees). This is called a twofold rotational symmetry. Such a twofold rotational symmetry protects the Dirac surface states. If this twofold rotation symmetry of the crystal is disrupted, these surface states lose their topological protection.

Bismuth also features a topological state along certain edges of the crystal where two vertical and horizontal faces meet, called a “hinge” state. To fully realize the desired topological effects in this material, the hinge state and other surface states must be coupled to another electronic phenomenon known as “band inversion” that the theorists’ calculations show also is present in bismuth. They predict that these topological surface states could be confirmed by using an experimental technique known as photoemission spectroscopy.

If electrons flowing through copper are like a school of fish swimming through a lake in summer, electrons flowing across a topological surface are more like ice skaters crossing the lake’s frozen surface in winter. For bismuth, however, in the hinge state, their motion would be more akin to skating on the corner edge of an ice cube.

The researchers also found that in the hinge state, as the electrons move forward, their momentum and another property, called spin — which defines a clockwise or counterclockwise rotation of the electrons — are “locked” together. “Their direction of spinning is locked with respect to their direction of motion,” Xu explains.
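
Spin-momentum locking can be illustrated with the generic two-band Dirac surface Hamiltonian H(k) = v(kx σy − ky σx) that is often used as a textbook model for topological surface states; this is a sketch of that standard model, not the specific bismuth Hamiltonian from the paper, and the velocity is set to an arbitrary unit.

```python
# Minimal sketch of spin-momentum locking for a generic Dirac surface
# state, H(k) = v * (kx * sigma_y - ky * sigma_x). Textbook toy model,
# not the bismuth-specific Hamiltonian from the PNAS paper.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
v = 1.0  # velocity in arbitrary units (illustrative)

for angle in np.linspace(0, 2 * np.pi, 8, endpoint=False):
    kx, ky = np.cos(angle), np.sin(angle)
    H = v * (kx * sy - ky * sx)
    _, vecs = np.linalg.eigh(H)
    psi = vecs[:, 1]  # upper-band eigenstate
    spin = np.real([psi.conj() @ sx @ psi, psi.conj() @ sy @ psi])
    # The spin expectation value is always perpendicular to the momentum,
    # so their dot product vanishes: that is spin-momentum locking.
    print(f"k=({kx:+.2f},{ky:+.2f})  <S>=({spin[0]:+.2f},{spin[1]:+.2f})"
          f"  k.<S>={kx * spin[0] + ky * spin[1]:+.1e}")
```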

These additional topological states might help explain why bismuth lets electrons travel through it much farther than most other materials, and why it conducts electricity efficiently with many fewer electrons than materials such as copper.

“If we really want to make these things useful and significantly improve the performance of our transistors, we need to find good topological materials — good in terms of they are easy to make, they are not toxic, and also they are relatively abundant on earth,” Xu suggests. Bismuth, which is an element that is safe for human consumption in the form of remedies to treat heartburn, for example, meets all these requirements.

“This work is a culmination of a decade and a half’s worth of advancement in our understanding of symmetry-protected topological materials,” says David Hsieh, professor of physics at Caltech, who was not involved in this research.

“I think that these theoretical results are robust, and it is simply a matter of experimentally imaging them using techniques like angle-resolved photoemission spectroscopy, which Professor Gedik is an expert in,” Hsieh adds.

Northeastern University Professor Gregory Fiete notes that “Bismuth-based compounds have long played a starring role in topological materials, though bismuth itself was originally believed to be topologically trivial.”

“Now, this team has discovered that pure bismuth is multiply topological, with a pair of surface Dirac cones untethered to any particular momentum value,” says Fiete, who also was not involved in this research. “The possibility to move the Dirac cones through external parameter control may open the way to applications that exploit this feature."

Caltech's Hsieh notes that the new findings add to the number of ways that topologically protected metallic states can be stabilized in materials. “If bismuth can be turned from semimetal into insulator, then isolation of these surface states in electrical transport can be realized, which may be useful for low-power electronics applications,” Hsieh explains.

Also contributing to the bismuth topology paper were MIT postdoc Qiong Ma; Tay-Rong Chang of the Department of Physics, National Cheng Kung University, Taiwan, and the Center for Quantum Frontiers of Research and Technology, Taiwan; Xiaoting Zhou, Department of Physics, National Cheng Kung University, Taiwan; and Chuang-Han Hsu, Centre for Advanced 2D Materials and Graphene Research Centre, National University of Singapore.

This work was partly supported by the Center for Integrated Quantum Materials and the U.S. Department of Energy, Materials Sciences and Engineering division.

Computer-aided knitting

Tue, 08/06/2019 - 11:35am

The oldest known knitting item dates back to Egypt in the Middle Ages, by way of a pair of carefully handcrafted socks. Although handmade clothes have occupied our closets for centuries, a recent influx of high-tech knitting machines has changed how we now create our favorite pieces. 

These systems, which have made anything from Prada sweaters to Nike shirts, are still far from seamless. Programming machines for designs can be a tedious and complicated ordeal: When you have to specify every single stitch, one mistake can throw off the entire garment. 

In a new pair of papers, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have come up with a new approach to streamline the process: a system and a design tool for automating knitted garments. 

In one paper, a team created a system called “InverseKnit,” which translates photos of knitted patterns into instructions that are then used with machines to make clothing. An approach like this could let casual users create designs without a memory bank of coding knowledge, and even reconcile issues of efficiency and waste in manufacturing. 

“As far as machines and knitting go, this type of system could change accessibility for people looking to be the designers of their own items,” says Alexandre Kaspar, CSAIL PhD student and lead author on a new paper about the system. “We want to let casual users get access to machines without needing programming expertise, so they can reap the benefits of customization by making use of machine learning for design and manufacturing.” 

In another paper, researchers came up with a computer-aided design tool for customizing knitted items. The tool lets non-experts use templates for adjusting patterns and shapes, like adding a triangular pattern to a beanie, or vertical stripes to a sock. You can imagine users making items customized to their own bodies, while also personalizing for preferred aesthetics.

InverseKnit 

Automation has already reshaped the fashion industry as we know it, and it has the potential to shrink our manufacturing footprint as well. 

To get InverseKnit up and running, the team first created a dataset of knitting instructions, and the matching images of those patterns. They then trained their deep neural network on that data to interpret the 2-D knitting instructions from images. 

This might look something like giving the system a photo of a glove, and then letting the model produce a set of instructions, where the machine then follows those commands to output the design. 
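
As a rough sketch of that idea, a model mapping a photo to a grid of per-stitch instruction classes might look like the following; the layer sizes, the 16-symbol instruction alphabet, and the 20-by-20 stitch grid are all invented for illustration and are not the actual InverseKnit architecture.

```python
# Schematic sketch of an image-to-knitting-instructions model in PyTorch.
# The architecture, tensor sizes, and instruction alphabet are invented
# for illustration; this is NOT the actual InverseKnit model.
import torch
import torch.nn as nn

NUM_INSTRUCTIONS = 16   # hypothetical instruction alphabet size
GRID = 20               # hypothetical stitch grid (20 x 20 instructions)

model = nn.Sequential(
    nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
    nn.AdaptiveAvgPool2d(GRID),                      # collapse to stitch grid
    nn.Conv2d(64, NUM_INSTRUCTIONS, kernel_size=1),  # per-cell class logits
)

photo = torch.randn(1, 3, 160, 160)   # stand-in for a photo of a knit
logits = model(photo)                 # (1, NUM_INSTRUCTIONS, 20, 20)
instructions = logits.argmax(dim=1)   # one instruction symbol per stitch cell
print(instructions.shape)             # torch.Size([1, 20, 20])

# Training would minimize per-cell cross-entropy against ground-truth
# instruction maps from the paired dataset of instructions and images.
target = torch.randint(0, NUM_INSTRUCTIONS, (1, GRID, GRID))
loss = nn.CrossEntropyLoss()(logits, target)
loss.backward()
```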

When testing InverseKnit, the team found that it produced accurate instructions 94% of the time. 

“Current state-of-the-art computer vision techniques are data-hungry, and they need many examples to model the world effectively,” says Jim McCann, assistant professor in the Carnegie Mellon Robotics Institute. “With InverseKnit, the team collected an immense dataset of knit samples that, for the first time, enables modern computer vision techniques to be used to recognize and parse knitting patterns.” 

While the system currently works with a small sample size, the team hopes to expand the sample pool to employ InverseKnit on a larger scale. So far the team has used only a specific type of acrylic yarn, but they hope to test different materials to make the system more flexible. 

A tool for knitting

While there have been plenty of developments in the field — such as Carnegie Mellon’s automated knitting processes for 3-D meshes — these methods can often be complex and ambiguous. The distortions inherent in 3-D shapes hamper how we understand the positions of the items, and this can be a burden on the designers. 

To address this design issue, Kaspar and his colleagues developed a tool called “CADKnit”, which uses 2-D images, CAD software, and photo editing techniques to let casual users customize templates for knitted designs.

The tool lets users design both patterns and shapes in the same interface. With other software systems, you’d likely lose some work on either end when customizing both. 

“Whether it’s for the everyday user who wants to mimic a friend’s beanie hat, or a subset of the public who might benefit from using this tool in a manufacturing setting, we’re aiming to make the process more accessible for personal customization,” says Kaspar. 

The team tested the usability of CADKnit by having non-expert users create patterns for their garments and adjust the size and shape. In post-test surveys, the users said they found it easy to manipulate and customize their socks or beanies, successfully fabricating multiple knitted samples. They noted that lace patterns were tricky to design correctly and would benefit from fast realistic simulation.

However, the system is only a first step towards full garment customization. The authors found that garments with complicated interfaces between different parts — such as sweaters — didn’t work well with the design tool. The trunks and sleeves of sweaters can be connected in various ways, and the software didn’t yet have a way of describing the whole design space for that.

Furthermore, the current system can only use one yarn for a shape, but the team hopes to improve this by introducing a stack of yarn at each stitch. To enable work with more complex patterns and larger shapes, the researchers plan to use hierarchical data structures that don’t incorporate all stitches, just the necessary ones.

“The impact of 3-D knitting has the potential to be even bigger than that of 3-D printing. Right now, design tools are holding the technology back, which is why this research is so important to the future,” says McCann. 

A paper on InverseKnit was written by Kaspar alongside MIT postdocs Tae-Hyun Oh and Petr Kellnhofer, PhD student Liane Makatura, MIT undergraduate Jacqueline Aslarus, and MIT Professor Wojciech Matusik. Kaspar presented it at the International Conference on Machine Learning this past June in Long Beach, California. 

A paper on the design tool was led by Kaspar alongside Makatura and Matusik.

How brain cells pick which connections to keep

Tue, 08/06/2019 - 11:00am

Brain cells, or neurons, constantly tinker with their circuit connections, a crucial feature that allows the brain to store and process information. While neurons frequently test out new potential partners through transient contacts, only a fraction of fledgling junctions, called synapses, are selected to become permanent.  

The major criterion for excitatory synapse selection is how well a synapse engages in response to experience-driven neural activity, but how such selection is implemented at the molecular level has been unclear. In a new study, MIT neuroscientists have identified CPG15, a gene and the protein it encodes, as what allows experience to tap a synapse as a keeper.

In a series of novel experiments described in Cell Reports, the team at MIT’s Picower Institute for Learning and Memory used multi-spectral, high-resolution two-photon microscopy to literally watch potential synapses come and go in the visual cortex of mice — both in the light, or normal visual experience, and in the darkness, where there is no visual input. By comparing observations made in normal mice and ones engineered to lack CPG15, they were able to show that the protein is required in order for visual experience to facilitate the transition of nascent excitatory synapses to permanence.

Mice engineered to lack CPG15 only exhibit one behavioral deficiency: They learn much more slowly than normal mice, says senior author Elly Nedivi, the William R. (1964) and Linda R. Young Professor of Neuroscience in the Picower Institute and a professor of brain and cognitive sciences at MIT. They need more trials and repetitions to learn associations that other mice can learn quickly. The new study suggests that’s because without CPG15, they must rely on circuits where synapses simply happened to take hold, rather than on a circuit architecture that has been refined by experience for optimal efficiency.

“Learning and memory are really specific manifestations of our brain’s ability in general to constantly adapt and change in response to our environment,” Nedivi says. “It’s not that the circuits aren’t there in mice lacking CPG15, they just don’t have that feature — which is really important — of being optimized through use.”

Watching in light and darkness

The first experiment reported in the paper, led by former MIT postdoc Jaichandar Subramanian, who is now an assistant professor at the University of Kansas, is a contribution to neuroscience in and of itself, Nedivi says. The novel labeling and imaging technologies implemented in the study, she says, allowed the researchers to track key events in synapse formation with unprecedented spatial and temporal resolution. The study resolved the emergence of “dendritic spines,” which are the structural protrusions on which excitatory synapses are formed, and the recruitment of the synaptic scaffold, PSD95, that signals that a synapse is there to stay.

The team tracked specially labeled neurons in the visual cortex of mice after normal visual experience, and after two weeks in darkness. To their surprise, they saw that spines would routinely arise and then typically disappear again at the same rate regardless of whether the mice were in light or darkness. This careful scrutiny of spines confirmed that experience doesn’t matter for spine formation, Nedivi says. That upends a common assumption in the field, which held that experience was necessary for spines to even emerge.

By keeping track of the presence of PSD95 they could confirm that the synapses that became stabilized during normal visual experience were the ones that had accumulated that protein. But the question remained: How does experience drive PSD95 to the synapse? The team hypothesized that CPG15, which is activity dependent and associated with synapse stabilization, does that job.

CPG15 represents experience

To investigate that, they repeated the same light-versus-dark experiences, but this time in mice engineered to lack CPG15. In the normal mice, there was much more PSD95 recruitment during the light phase than during the dark, but in the mice without CPG15, the experience of seeing in the light never made a difference. It was as if CPG15-less mice in the light were like normal mice in the dark.

Later they tried another experiment testing whether the low PSD95 recruitment seen when normal mice were in the dark could be rescued by exogenous expression of CPG15. Indeed, PSD95 recruitment shot up, as if the animals were exposed to visual experience. This showed that CPG15 not only carries the message of experience in the light, it can actually substitute for it in the dark, essentially “tricking” PSD95 into acting as if experience had called upon it.

“This is a very exciting result, because it shows that CPG15 is not just required for experience-dependent synapse selection, but it’s also sufficient,” says Nedivi. “That’s unique in relation to all other molecules that are involved in synaptic plasticity.”

A new model and method

In all, the paper’s data allowed Nedivi to propose a new model of experience-dependent synapse stabilization: Regardless of neural activity or experience, spines emerge with fledgling excitatory synapses and the receptors needed for further development. If activity and experience send CPG15 their way, that draws in PSD95 and the synapse stabilizes. If experience doesn’t involve the synapse, it gets no CPG15, very likely no PSD95, and the spine withers away.

The paper potentially has significance beyond the findings about experience-dependent synapse stabilization, Nedivi says. The method it describes of closely monitoring the growth or withering of spines and synapses amid a manipulation (like knocking out or modifying a gene) allows for a whole raft of studies examining how a gene, a drug, or other factors affect synapses.

“You can apply this to any disease model and use this very sensitive tool for seeing what might be wrong at the synapse,” she says.

In addition to Nedivi and Subramanian, the paper’s other authors are Katrin Michel and Marc Benoit.

The National Institutes of Health and the JPB Foundation provided support for the research.

Daniel Freedman wins Special Breakthrough Prize in Fundamental Physics

Tue, 08/06/2019 - 10:00am

Daniel Z. Freedman, professor emeritus in MIT’s departments of Mathematics and Physics, has been awarded the Special Breakthrough Prize in Fundamental Physics. He shares the $3 million prize with two colleagues, Sergio Ferrara of CERN and Peter van Nieuwenhuizen of Stony Brook University, with whom he developed the theory of supergravity.

The trio is honored for work that combines the principles of supersymmetry, which postulates that all fundamental particles have corresponding, unseen “partner” particles; and Einstein's theory of general relativity, which explains that gravity is the result of the curvature of space-time.

When the theory of supersymmetry was developed in 1973, it solved some key problems in particle physics, such as unifying three forces of nature (electromagnetism, the weak nuclear force, and the strong nuclear force), but it left out a fourth force: gravity. Freedman, Ferrara, and van Nieuwenhuizen addressed this in 1976 with their theory of supergravity, in which the gravitons of general relativity acquire superpartners called gravitinos.

Freedman’s collaboration with Ferrara and van Nieuwenhuizen began late in 1975 at the École Normale Supérieure (ENS) in Paris, which he was visiting on a minisabbatical from Stony Brook University, where he was a professor. Ferrara had also come to ENS, to work on a different project for a week. The challenge of constructing supergravity was in the air at that time, and Freedman told Ferrara that he was thinking about it. In their discussions, Ferrara suggested that progress could be made via an approach that Freedman had previously used in a related problem involving supersymmetric gauge theories.

“That turned me in the right direction,” Freedman recalls. In short order, he formulated the first step in the construction of supergravity and proved its mathematical consistency. “I returned to Stony Brook convinced that I could quickly find the rest of the theory,” he says. However, “I soon realized that it was harder than I had expected.”

At that point he asked van Nieuwenhuizen to join him on the project. “We worked very hard for several months until the theory came together. That was when our eureka moment occurred,” he says.

“Dan’s work on supergravity has changed how scientists think about physics beyond the standard model, combining principles of supersymmetry and Einstein’s theory of general relativity,” says Michael Sipser, dean of the MIT School of Science and the Donner Professor of Mathematics. “His exemplary research is central to mathematical physics and has given us new pathways to explore in quantum field theory and superstring theory. On behalf of the School of Science, I congratulate Dan and his collaborators for this prestigious award.”

Freedman joined the MIT faculty in 1980, first as professor of applied mathematics and later with a joint appointment in the Center for Theoretical Physics. He regularly taught an advanced graduate course on supersymmetry and supergravity. An unusual feature of the course was that each assigned problem set included suggestions of classical music to accompany students’ work. 

“I treasure my 36 years at MIT,” he says, noting that he worked with “outstanding” graduate students with “great resourcefulness as problem solvers.” Freedman fully retired from MIT in 2016.

He is now a visiting professor at Stanford University and lives in Palo Alto, California, with his wife, Miriam, an attorney specializing in public education law.

The son of small-business people, Freedman was the first in his family to attend college. He became interested in physics during his first year at Wesleyan University, when he enrolled in a special class that taught physics in parallel with the calculus necessary to understand its mathematical laws. It was a pivotal experience. “Learning that the laws of physics can exactly describe phenomena in nature — that totally turned me on,” he says.

Freedman learned about winning the Breakthrough Prize upon returning from a morning boxing class, when his wife told him that a Stanford colleague, who was on the Selection Committee, had been trying to reach him. “When I returned the call, I was overwhelmed with the news,” he says.

Freedman, who holds a BA from Wesleyan and an MS and PhD in physics from the University of Wisconsin, is a former Sloan Fellow and a two-time Guggenheim Fellow. The three collaborators received the Dirac Medal and Prize in 1993, and the Dannie Heineman Prize in Mathematical Physics in 2006. He is a fellow of the American Academy of Arts and Sciences.

Founded by a group of Silicon Valley entrepreneurs, the Breakthrough Prizes recognize the world’s top scientists in life sciences, fundamental physics, and mathematics. The Special Breakthrough Prize in Fundamental Physics honors profound contributions to human knowledge in physics. Earlier honorees include Jocelyn Bell Burnell; the LIGO research team, including MIT Professor Emeritus Rainer Weiss; and Stephen Hawking.  

Automating artificial intelligence for medical decision-making

Mon, 08/05/2019 - 11:59pm

MIT computer scientists are hoping to accelerate the use of artificial intelligence to improve medical decision-making, by automating a key step that’s usually done by hand — and that’s becoming more laborious as certain datasets grow ever-larger.

The field of predictive analytics holds increasing promise for helping clinicians diagnose and treat patients. Machine-learning models can be trained to find patterns in patient data to aid in sepsis care, design safer chemotherapy regimens, and predict a patient’s risk of having breast cancer or dying in the ICU, to name just a few examples.

Typically, training datasets consist of many sick and healthy subjects, but with relatively little data for each subject. Experts must then find just those aspects — or “features” — in the datasets that will be important for making predictions.

This “feature engineering” can be a laborious and expensive process. But it’s becoming even more challenging with the rise of wearable sensors, because researchers can more easily monitor patients’ biometrics over long periods, tracking sleeping patterns, gait, and voice activity, for example. After only a week’s worth of monitoring, experts could have several billion data samples for each subject.  

In a paper being presented at the Machine Learning for Healthcare conference this week, MIT researchers demonstrate a model that automatically learns features predictive of vocal cord disorders. The features come from a dataset of about 100 subjects, each with about a week’s worth of voice-monitoring data and several billion samples — in other words, a small number of subjects and a large amount of data per subject. The dataset contains signals captured from a small accelerometer sensor mounted on subjects’ necks.

In experiments, the model used features automatically extracted from these data to classify, with high accuracy, patients with and without vocal cord nodules. These are lesions that develop in the larynx, often because of patterns of voice misuse such as belting out songs or yelling. Importantly, the model accomplished this task without a large set of hand-labeled data.

“It’s becoming increasingly easy to collect long time-series datasets. But you have physicians that need to apply their knowledge to labeling the dataset,” says lead author Jose Javier Gonzalez Ortiz, a PhD student in the MIT Computer Science and Artificial Intelligence Laboratory (CSAIL). “We want to remove that manual part for the experts and offload all feature engineering to a machine-learning model.”

The model can be adapted to learn patterns of any disease or condition. But the ability to detect the daily voice-usage patterns associated with vocal cord nodules is an important step in developing improved methods to prevent, diagnose, and treat the disorder, the researchers say. That could include designing new ways to identify and alert people to potentially damaging vocal behaviors.

Joining Gonzalez Ortiz on the paper is John Guttag, the Dugald C. Jackson Professor of Computer Science and Electrical Engineering and head of CSAIL’s Data Driven Inference Group; Robert Hillman, Jarrad Van Stan, and Daryush Mehta, all of Massachusetts General Hospital’s Center for Laryngeal Surgery and Voice Rehabilitation; and Marzyeh Ghassemi, an assistant professor of computer science and medicine at the University of Toronto.

Forced feature-learning

For years, the MIT researchers have worked with the Center for Laryngeal Surgery and Voice Rehabilitation to develop and analyze data from a sensor to track subject voice usage during all waking hours. The sensor is an accelerometer with a node that sticks to the neck and is connected to a smartphone. As the person talks, the smartphone gathers data from the displacements in the accelerometer.

In their work, the researchers collected a week’s worth of this data — called “time-series” data — from 104 subjects, half of whom were diagnosed with vocal cord nodules. For each patient, there was also a matching control, meaning a healthy subject of similar age, sex, occupation, and other factors.

Traditionally, experts would need to manually identify features that may be useful for a model to detect various diseases or conditions. That helps prevent a common machine-learning problem in health care: overfitting. That’s when, in training, a model “memorizes” subject data instead of learning just the clinically relevant features. In testing, those models often fail to discern similar patterns in previously unseen subjects.

“Instead of learning features that are clinically significant, a model sees patterns and says, ‘This is Sarah, and I know Sarah is healthy, and this is Peter, who has a vocal cord nodule.’ So, it’s just memorizing patterns of subjects. Then, when it sees data from Andrew, which has a new vocal usage pattern, it can’t figure out if those patterns match a classification,” Gonzalez Ortiz says.

The main challenge, then, was preventing overfitting while automating manual feature engineering. To that end, the researchers forced the model to learn features without subject information. For their task, that meant capturing all moments when subjects speak and the intensity of their voices.

As their model crawls through a subject’s data, it’s programmed to locate voicing segments, which comprise only roughly 10 percent of the data. For each of these voicing windows, the model computes a spectrogram, a visual representation of the spectrum of frequencies varying over time, which is often used for speech processing tasks. The spectrograms are then stored as large matrices of thousands of values.
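
As a rough illustration of this step, the snippet below computes a spectrogram for one synthetic voicing segment with SciPy; the sampling rate, test tone, and window settings are invented placeholders rather than the study’s actual parameters.

```python
# Sketch: compute a spectrogram for one (synthetic) voicing segment.
# Sampling rate, tone, and window length are illustrative assumptions,
# not settings from the study.
import numpy as np
from scipy.signal import spectrogram

fs = 8000                              # assumed sampling rate [Hz]
t = np.arange(0, 1.0, 1 / fs)          # one 1-second voicing window
segment = np.sin(2 * np.pi * 220 * t)  # stand-in for accelerometer data

freqs, times, Sxx = spectrogram(segment, fs=fs, nperseg=256)
print(Sxx.shape)  # (frequency bins, time frames): the "large matrix" of values
```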

But those matrices are huge and difficult to process. So, an autoencoder — a neural network optimized to generate efficient data encodings from large amounts of data — first compresses the spectrogram into an encoding of 30 values. It then decompresses that encoding into a separate spectrogram.  

Basically, the model must ensure that the decompressed spectrogram closely resembles the original spectrogram input. In doing so, it’s forced to learn the compressed representation of every spectrogram segment input over each subject’s entire time-series data. The compressed representations are the features that help train machine-learning models to make predictions.  
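
A minimal version of such an autoencoder might look like the sketch below; only the 30-value bottleneck comes from the article, while the spectrogram dimensions and hidden-layer widths are assumptions.

```python
# Minimal autoencoder sketch: compress each spectrogram segment to 30
# values and reconstruct it. Only the 30-value bottleneck comes from the
# article; spectrogram size and hidden widths are illustrative.
import torch
import torch.nn as nn

N_FREQ, N_TIME = 129, 32  # assumed spectrogram dimensions
flat = N_FREQ * N_TIME

encoder = nn.Sequential(nn.Flatten(), nn.Linear(flat, 256), nn.ReLU(),
                        nn.Linear(256, 30))      # 30-value encoding
decoder = nn.Sequential(nn.Linear(30, 256), nn.ReLU(),
                        nn.Linear(256, flat))

spec = torch.randn(64, N_FREQ, N_TIME)  # batch of spectrogram segments
code = encoder(spec)                    # (64, 30) learned features
recon = decoder(code).view(64, N_FREQ, N_TIME)

# Forcing the reconstruction to match the input is what makes the
# 30-value code a compact, subject-agnostic summary of each segment.
loss = nn.functional.mse_loss(recon, spec)
loss.backward()
```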

Mapping normal and abnormal features

In training, the model learns to map those features to “patients” or “controls.” Patients will have more abnormal voicing patterns than will controls. In testing on previously unseen subjects, the model similarly condenses all spectrogram segments into a reduced set of features. Then, it’s majority rules: If the subject has mostly abnormal voicing segments, they’re classified as patients; if they have mostly normal ones, they’re classified as controls.
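
That majority-rules step reduces to a few lines; the per-window predictions here are made up for illustration.

```python
# Sketch of the majority-rules decision: classify a subject from
# per-window predictions (1 = abnormal voicing, 0 = normal).
# The prediction values are made up for illustration.
import numpy as np

window_preds = np.array([1, 0, 1, 1, 0, 1, 1])  # hypothetical outputs
subject = "patient" if window_preds.mean() > 0.5 else "control"
print(subject)  # -> "patient", since most windows were flagged abnormal
```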

In experiments, the model performed as accurately as state-of-the-art models that require manual feature engineering. Importantly, the researchers’ model performed accurately in both training and testing, indicating it’s learning clinically relevant patterns from the data, not subject-specific information.

Next, the researchers want to monitor how various treatments — such as surgery and vocal therapy — impact vocal behavior. If patients’ behaviors move from abnormal to normal over time, they’re most likely improving. They also hope to use a similar technique on electrocardiogram data, which is used to track muscular functions of the heart. 

Following the current: MIT examines water consumption sustainability

Mon, 08/05/2019 - 4:00pm

At the 2019 MIT Commencement address, Michael Bloomberg highlighted the climate crisis as “the challenge of our time.” Climate change is expected to worsen drought and to raise the sea level in Boston, Massachusetts, by 1.5 feet by 2050. While numerous MIT students and researchers are working to ensure access to clean and sustainable sources of drinking water well into the future, MIT is also responding to the urgency of the climate crisis with a close examination of campus sustainability practices, including a recent focus on its own water consumption.

A working group on campus water use, led by the MIT Office of Sustainability (MITOS) and Department of Facilities, is supported by the Abdul Latif Jameel Water and Food Systems Lab (J-WAFS) and includes representatives of numerous other groups, offices, students, and campus leaders. While the MITOS initiative is focusing on campus water management, MIT student clubs are raising local consciousness around drinking-water issues via research and outreach activities. Through all of these efforts, members of the community aim to help MIT change its water usage practices and become a model for sustainable water use at the university level.

The water subcommittee: providing water leadership to promote institutional change

Gathering campus stakeholders to develop sustainability recommendations is a practiced strategy for the Office of Sustainability. MITOS working groups have previously analyzed environmental issues such as energy use, storm water management, and the sustainability of MIT’s food system, another initiative in which J-WAFS has played a role. The current working group addressing campus water use practices is managed by Steven Lanou, sustainability project manager at MITOS. “Work done in the late 1990s reduced campus water use by an estimated 60 percent,” he explains. “And now, we need to look strategically again at all of our systems” to improve water management in the face of future climate uncertainty.

Beginning in fall 2018, MITOS met with local stakeholders, including the Cambridge Water Department, the MIT Department of Facilities, and the MIT Water Club, to explore how water is used and managed on campus.

The water subcommittee falls under the Sustainability Leadership Steering Committee, which was created by, and reports to, the Office of the Provost and the Office of the Executive Vice President and Treasurer. Professor John H. Lienhard, director of J-WAFS and Abdul Latif Jameel Professor of Water and Mechanical Engineering, also sits on the steering committee, which is charged by the provost and the executive vice president and treasurer of MIT to recommend strategies for campus leadership on sustainability issues. The water subcommittee will bring concrete suggestions for water usage changes to the MIT administration and work to implement them across campus. Professor Lienhard has “been key in helping us shape what a water stewardship program might look like,” according to Lanou.

Other J-WAFS staff are also involved in the subcommittee, as well as leaders from the Environmental Solutions Initiative (ESI), Department of Facilities, MIT Dining, the MIT Investment Management Company, and the Water Club. Based on a thorough review of data related to MIT’s water use, the subcommittee has started to identify the most strategic areas for intervention, and is gearing up now to get additional input this fall and begin to develop recommendations for how MIT can reduce water consumption, mitigate its overall climate impact, and adapt to an uncertain future.

Water has been a focus of discussion and planning for sustainable campus practices for several years already. A MITOS stormwater and land management working group devoted to priority-setting for campus sustainability, which convened in the 2014 academic year, identified MIT’s water footprint as one of several key areas for discussion and intervention. Following the release of the stormwater and land management working group recommendations in 2016, MITOS teamed up with the Office of Campus Planning, the Department of Facilities, and the Office of Environment, Health and Safety to explore stormwater management solutions that improve the health of Cambridge, Massachusetts, waterways and ecosystems. Among the outcomes was a draft stormwater management and landscape ecology plan that is focused on enhancing the productivity of the campus’ built and ecological systems in order to capture, absorb, reuse, and treat stormwater. This effort has informed the implementation of advanced stormwater management infrastructure on campus, including the recently completed North Corridor improvements in conjunction with the construction of the MIT.nano building.

In addition, MITOS is leading a research effort with the MIT Center for Global Change Science and Department of Facilities to understand campus flood risks during current and future climate conditions. The team is evaluating probabilities and flood depths for a range of scenarios, including intense, short-duration rainfall over campus; 24-hour rainfall over campus/Cambridge from tropical storms or nor’easters; sea-level rise and coastal storm surge of the Charles River; and up-river rainfall that raises the level of the Charles River. To understand MIT’s water consumption and key areas for intervention, this year’s water subcommittee is informed by data gathered by Lanou on the water consumption across campus — in buildings, labs, and landscaping processes — as well as the consumption of water by the MIT community.

An additional dimension of water stewardship to be considered by the subcommittee is the role and impact of bottled-water purchases on campus. The subcommittee has begun to look at data on annual bottled-water consumption to help understand the current trends. Understanding the impacts of single-use disposable bottles on campus is important. “I see so much bottled water consumption on campus,” notes John Lienhard. “It’s costly, energy-intensive, and adds plastic to the environment.” Only about 9 percent of all plastic ever manufactured has been recycled, and 12 billion metric tons of plastic will end up in landfills by 2050. Mark Hayes, director of MIT Dining and another subcommittee member, has participated in student-led bottled-water reduction efforts on two college campuses, and he hopes to help MIT better understand and address the issue here. Hayes would like to see MIT consider “expanding water refilling stations, exploring the impact and reduction [of] plastic recycling, and increasing campus education on these efforts.” Taking on the challenge of changing campus water consumption habits, and decreasing the associated waste, will hopefully position MIT as a leader in these kinds of sustainability efforts and encourage other campuses to adopt similar policies.

Students taking action

Student groups are also using education around bottled water alternatives to encourage behavior change. Andrew Bouma, a PhD student in John Lienhard’s lab, is investigating local attitudes toward bottled water. His interest in this issue began upon meeting several students who drank mostly bottled water. “It frustrated me that people had this perception that the tap water wasn’t safe,” Bouma explains, “even though Cambridge and Boston have really great water.” He became involved with the MIT Water Club and ran a blind taste test at the 2019 MIT Water Night to evaluate perceptions of tap water, bottled water, and recycled wastewater.

Bouma explains that bottled-water drinkers often cite superior flavor as a motivating factor; however, only four or five of the 70-80 participants correctly identified the different sources, suggesting that the flavor argument holds little water. Many participants also held reservations about water safety. Bouma hopes that the taste test can address these barriers more effectively than sharing statistics. “When people can hold a cup of water in their hands and see it and taste it, it makes people confront their presumptions in a different way,” he explains.

A broader impact

The MIT Water Club, including Bouma, repeated the taste test at the Cambridge River Arts Festival in June to examine public perceptions of public and bottled water. Fewer than 5 percent of the 242 respondents identified all four water sources, approximately the same outcome as would be expected from random guessing. Many participants held concerns about the safety of public water, which the Water Club tried to combat with information about water treatment and testing procedures. Bouma hopes to continue addressing water consumption issues as co-president of the Water Club.

Other student groups are encouraging behavior change around water consumption as well. The MIT Graduate Student Council (GSC) and the GSC Sustainability Subcommittee, with support from the Department of Facilities, funded five water-bottle refilling stations across campus in 2015. These efforts underscore the commitment of MIT students to promoting sustainable water consumption on campus.

A unique “MIT spin” on campus water sustainability

Lanou hopes that MIT will bring its technical strength to bear on water issues by using campus as a living laboratory to test water technologies. For example, Kripa Varanasi, professor of mechanical engineering and a J-WAFS-funded principal investigator, is piloting a water capture project at MIT’s Central Utility Plant that uses electricity to condense fog into liquid water for collection. Varanasi’s lab is able to test the technology in real-world conditions and improve the plant’s water efficiency at the same time. “It's a great example of MIT being willing to use its facilities to test campus research,” explains Lanou. These technological advancements — many of which are supported by J-WAFS — could support water resilience at MIT and elsewhere.

As the climate crisis brings water scarcity issues to the forefront, understanding and modeling water-use practices will become increasingly critical. With the water subcommittee working to bring recommendations for campus water use to the administration, and MIT students engaging with the broader Cambridge community on bottled water issues, the MIT community is poised to rise to the challenge.

A new way to block unwanted genetic transfer

Mon, 08/05/2019 - 1:30pm

We receive half of our genes from each biological parent, so there’s no avoiding inheriting a blend of characteristics from both. Yet, for single-celled organisms like bacteria that reproduce by splitting into two identical cells, injecting variety into the gene pool isn’t so easy. Random mutations add some diversity, but there’s a much faster way for bacteria to reshuffle their genes and confer evolutionary advantages like antibiotic resistance or pathogenicity.

Known as horizontal gene transfer, this process permits bacteria to pass pieces of DNA to their peers, in some cases allowing those genes to be integrated into the recipient’s genome and passed down to the next generation.

The Grossman lab in the MIT Department of Biology studies one class of mobile DNA, known as integrative and conjugative elements (ICEs). While ICEs contain genes that can be beneficial to the recipient bacterium, there’s also a catch — receiving a duplicate copy of an ICE is wasteful, and possibly lethal. The biologists recently uncovered a new system by which one particular ICE, ICEBs1, blocks a donor bacterium from delivering a second, potentially deadly copy.

“Understanding how these elements function and how they're regulated will allow us to determine what drives microbial evolution,” says Alan Grossman, department head and senior author on the study. “These findings not only provide insight into how bacteria block unwanted genetic transfer, but also how we might eventually engineer this system to our own advantage.”

Former graduate student Monika Avello PhD ’18 and current graduate student Kathleen Davis are co-first authors on the study, which appeared online in Molecular Microbiology on July 30.

Checks and balances

Although plasmids are perhaps the best-known mediators of horizontal transfer, ICEs not only outnumber plasmids in most bacterial species, they also come with their own tools to exit the donor, enter the recipient, and integrate themselves into the recipient’s chromosome. Once the donor bacterium makes contact with the recipient, the machinery encoded by the ICE can pump the ICE DNA from one cell to the other through a tiny channel.

For horizontal transfer to proceed, there are physical barriers to overcome, especially in so-called Gram-positive bacteria, which, though less widely studied than their Gram-negative counterparts, boast thicker cell walls. According to Davis, the transfer machinery essentially has to “punch a hole” through the recipient cell. “It’s a rough ride and a waste of energy for the recipient if that cell already contains an ICE with a specific set of genes,” she says.

Sure, ICEs are “selfish bits of DNA” that persist by spreading themselves as widely as possible, but in order to do so they must not interfere with their host cell’s ability to survive. As Avello explains, ICEs can’t just disseminate their DNA “without certain checks and balances.”

 “There comes a point where this transfer comes at a cost to the bacteria or doesn't make sense for the element,” she says. “This study is beginning to get at the question of when, why, and how ICEs might want to block transfer.”

The Grossman lab works in the Gram-positive Bacillus subtilis, and had previously discovered two mechanisms by which ICEBs1 could prevent redundant transfer before it becomes lethal. The first, cell-cell signaling, involves the ICE in the recipient cell releasing a chemical cue that prohibits the donor’s transfer machinery from being assembled. The second, immunity, initiates if the duplicate copy is already inside the cell, and prevents the replicate from being integrated into the chromosome.

However, when the researchers tried eliminating both fail-safes simultaneously, rather than reinstating ICE transfer as they expected, the bacteria still managed to obstruct the duplicate copy. ICEBs1 seemed to have a third blocking strategy, but what might it be?

The third tactic

In this most recent study, they’ve identified the mysterious blocking mechanism as a type of “entry exclusion,” whereby the ICE in the recipient cell encodes molecular machinery that physically prevents the second copy from breaching the cell wall. Scientists had observed other mobile genetic elements capable of exclusion, but this was the first time anyone had witnessed this phenomenon for an ICE from Gram-positive bacteria, according to Avello.

The Grossman lab determined that this exclusion mechanism comes down to two key proteins. Avello identified the first protein, YddJ, which is expressed by ICEBs1 in the recipient bacterium and forms a “protective coating” on the outside of the cell, blocking a second ICE from entering.

But the biologists still didn’t know which piece of transfer machinery YddJ was blocking, so Davis performed a screen and various genetic manipulations to pinpoint YddJ’s target. YddJ, it turned out, was obstructing another protein called ConG, which likely forms part of the transfer channel between the donor and recipient bacteria. Davis was surprised to find that, while Gram-negative ICEs encode a protein that’s quite similar to ConG, the Gram-negative YddJ equivalent is actually much different.

“This just goes to show that you can’t assume the transfer machinery in Gram-positive ICEs like ICEBs1 are the same as the well-studied Gram-negative ICEs,” she says.

The team concluded that ICEBs1 must have three different mechanisms to prevent duplicate transfer: the two they’d previously uncovered plus this new one, exclusion.

Cell-cell signaling allows a cell to spread the word to its neighbors that it already has a copy of ICEBs1, so there’s no need to bother assembling the transfer machinery. If this fails, exclusion kicks in to physically block the transfer machinery from penetrating the recipient cell. If that proves unsuccessful and the second copy enters the recipient, immunity will initiate and prevent the second copy from being integrated into the recipient’s chromosome.

“Each mechanism acts at a different step, because none of them alone are 100 percent effective,” Grossman says. “That’s why it’s helpful to have multiple mechanisms.”

They don’t know all the details of this transfer machinery just yet, he adds, but they do know that YddJ and ConG are key players.

“This initial description of the ICEBs1 exclusion system represents the first report that provides mechanistic insights into exclusion in Gram-positive bacteria, and one of only a few mechanistic studies of exclusion in any conjugation system,” says Gary Dunny, a professor of microbiology and immunology at the University of Minnesota who was not involved in the study. “This work is significant medically because ICEs can carry ‘cargo’ genes such as those conferring antibiotic resistance, and also of importance to our basic understanding of horizontal gene transfer systems and how they evolve.”

As researchers continue to probe this blocking mechanism, it might be possible to leverage ICE exclusion to design bacteria with specific functions. For instance, they could engineer the gut microbiome and introduce beneficial genes to help with digestion. Or, one day, they could perhaps block horizontal gene transfer to combat antibiotic resistance.

“We had suspected that Gram-positive ICEs might be capable of exclusion, but we didn’t have proof before this,” Avello says. Now, researchers can start to speculate about how pathogenic Gram-positive species might control the movement of ICEs throughout a bacterial population, with possible ramifications for disease research.

This work was funded by research and predoctoral training grants from the National Institute of General Medical Sciences of the National Institutes of Health.

Throwing lifelines to job seekers after incarceration

Sat, 08/03/2019 - 11:59pm

It’s Wednesday morning and Brooke Wages is standing in front of a whiteboard, bouncing ideas off her startup partner Sarika Ram, a rising junior at Boston University, and writing out a game plan for the rest of the day. It’s early, but Wages is focused and energetic about the work ahead of her. You can tell that she is, to use one of her favorite phrases, killing the game.

Wages and her team have just finished interviewing formerly incarcerated individuals who are now seeking job training and placement through the team’s startup, Surge Employment Solutions, which aims to place people in well-paid, high-skilled trade jobs after they have served time in prison. Today Wages and Ram are planning out the next few months of their pilot program, during which they will start training their selected candidates for their future jobs. By November, those candidates will be working in their new positions.

Wages is in the dual-degree program at the MIT Sloan School of Management and the Harvard Kennedy School of Government, pursuing master’s degrees in business administration and public administration. She founded Surge last year, along with Ram and rising Harvard University sophomore Amisha Kambath. The team has partnered with the Boston Mayor’s Office of Returning Citizens, the Massachusetts Parole Board, Dorchester Bay Economic Development Corporation, and Strive Boston in its outreach to formerly incarcerated citizens.

Her interest in this area began when she was an undergraduate at North Carolina State University. A mechanical engineering major, she also began to study inequality and the discrimination faced by citizens returning to the workforce after incarceration. Wages was particularly influenced by the late sociologist Devah Pager, especially her book “Marked: Race, Crime, and Finding Work in an Era of Mass Incarceration.” Pager’s research documents discrimination against ex-offenders in the job market and how this bias contributes to recidivism, particularly among black men.

Upon learning about these injustices, “I felt moved,” Wages recalls. “I felt like there was a fire inside to do this work.”

Taking action

After graduating, Wages started working as an engineer in the oil and gas industry, but she still found time to work with former inmates seeking employment. She volunteered with the National Alliance for the Empowerment of the Formerly Incarcerated (NAEFI) and attended reentry circles, which welcome a returning citizen back into a community and establish a support system. Through this work, she got to know people coming out of the prison system.

“[Discrimination against the formerly incarcerated] became more than just this appalling thing that I read about. It became someone’s life story. I really recognized how we had equal value, but I just, by the luck of the draw, happened to be born in a different place” than many of the former inmates she had been meeting through NAEFI, Wages says.

In her engineering work, Wages was finding it difficult to find contractors for highly skilled trade jobs. Meanwhile, she was getting to know people having a hard time finding employment after their release. Taking these two contrasting experiences to heart, Wages founded Surge.

Wages emphasizes that Surge should not be characterized as solely a staffing company or a workforce development company. Rather, the startup assesses a client’s staffing needs, trains returning citizens, and places them in specific roles in the client’s company. The organization does not start training people unless they have a job secured for them first.

“We talk to the client, understand their needs and then develop a unique, personalized training program for that specific position,” she says. “That’s a business model that is not currently being used for the formerly incarcerated population.”

The team currently works out of the Boston University BUild Lab IDG Capital Student Innovation Center as part of the university’s Summer Accelerator Program. Surge also recently won $10,000 in the IDEAS Global Challenge, run by MIT’s PKG Center, funding that has been crucial for the startup.

Among the classes in her Sloan program that have been particularly formative, Wages cites 15.S03 (Leading the Way: Perspectives on Advancing Equity and Inclusion), for giving her tools to create systems within her own business to promote equity and inclusion.

“The course provided me with a startup reference guide. We read and discussed the leading evidence-based diversity and inclusion research on topics such as hiring, pay, performance evaluation, identity bias, and harassment, to name a few,” she says. “Just as we acknowledge and address the bias reentering people face in the job market, we need to acknowledge our brain’s proclivity toward bias and build systems that help eliminate that.”

Forging relationships

Wages says much of her success has resulted from connections she has made through her extracurricular activities, such as The Educational Justice Institute (TEJI) at MIT, where she is a graduate fellow. TEJI has provided significant mentorship and support to Wages and her team.

Through TEJI, Wages was a teaching assistant for an “inside-out” class on nonviolent philosophy. The class, ES.114 (Non-violence as a Way of Life), taught by humanities lecturer Lee Perlman of the MIT Experimental Study Group, was based in a prison and comprised half undergraduate students and half incarcerated students. Because it was a discussion-based course, Wages says, all of the students in the class had the opportunity to share life experiences and understand different perspectives. She enjoyed facilitating that process and seeing the strong relationships it helped create among the students.

Wages also serves as the events chair for MIT’s Black Business Students Association and is a fellow at the Forté Foundation, an organization that empowers women in business. She has also gone on the FoundHers retreat for female entrepreneurs, where she connected with other women who have founded startups.

“[Brooke] is a great mentor,” Ram says. “She has lots of undergrads that she takes under her wing.”

Wages has also formed a strong bond with her team and stresses that Surge would not be possible without Ram and Kambath. The trio’s personal relationship is important to Wages, and the group often spends time together outside of work. They take art and dance classes together, for example, and they are prepping for an upcoming Indian movie marathon.

Wages can also be found at the dog park virtually every day, with her dog Grace. “She is the best. She is a chihuahua-heeler mix and all-black — all-black everything, that’s how we operate!” Wages jokes.

Above all of the personal and professional relationships that Wages has created in Boston, her connection to her Christian faith remains one of the most important things in her life. She is particularly driven by one piece of scripture, Hebrews 13:3: “Remember those in prison as if you were their fellow prisoners, and those who are mistreated as if you yourselves were suffering.”

Marcus Karel, food science pioneer and professor emeritus of chemical engineering, dies at 91

Fri, 08/02/2019 - 4:10pm

Marcus “Marc” G. Karel PhD ’60, professor emeritus of chemical engineering, died on July 25 at age 91. A member of the MIT community since 1951, Karel inspired a generation of food scientists and engineers through his work in food technology and controlled release of active ingredients in food and pharmaceuticals.

Karel was born in Lvov, Poland (now Lviv, Ukraine) to Cila and David Karel, who ran a small chain of women’s clothing stores in the town. After war arrived in Poland in 1939, the family business was lost, relatives were scattered and disappeared, and the Karels spent the last 22 months of the war in hiding. After the war, Karel and his family eventually emigrated to the United States, where they settled in Newton, Massachusetts, just outside of Boston. Karel completed his bachelor’s degree at Boston University in 1955 and earned his doctorate in 1960 at MIT.

Before Karel started his graduate studies at MIT, he was invited by the head of the former Department of Food Technology to manage the Packaging Laboratory. Here he began his interest in the external and internal factors that influence food stability. In 1961, he was appointed professor of food engineering at MIT in the former Department of Nutrition and Food Science (Course 20), eventually becoming deputy head of the department. When Course 20 (then called Applied Biological Sciences) was disbanded in 1988, Karel was invited to join the Department of Chemical Engineering. After retiring from MIT in 1989, he became the State of New Jersey Professor at Rutgers University from 1989 to 1996, and from 1996 to 2007 he consulted for various government and industrial organizations.

During his academic career at MIT and Rutgers, Karel supervised over 120 graduate students and postdocs, most of whom are now leaders in food engineering; several of his trainees serve as vice presidents of research and development in industry. Along with his engineering accomplishments, Karel was known for his ability to build and manage successful teams, nurture talent, and create a family environment among researchers.

Karel was a pioneer in several areas, including oxidative reactions in food, drying of biological materials, and the preservation, packaging, and stabilization of low-moisture foods. His fundamental work on oxidation of lipids and stabilization led to important improvements in food packaging. Also, when NASA needed expertise to design food and food systems for long-term space travel, it was Karel’s work that formed the platform for many of the enabling developments of the U.S. space program. MIT Professor Emeritus Charles Cooney relates, “When the solution to an important problem required improved analytical techniques, he pioneered the development of the techniques. When the solution required deeper insight into the physical chemistry of foods, he formulated the theoretical framework for the solution. When the solution required identification of new materials and new processes, he was on the front line with innovative technologies. No one has had the impact on the field of food science and engineering as Marc.”

Karel earned many recognitions for his work, including a Life Achievement Award from the International Association for Engineering and Food, election to the American Institute of Medical and Biological Engineering, the Institute of Food Technologists (IFT) Nicholas Appert Medal (the highest honor in food technology), election to the Food Engineering Hall of Fame, several honorary doctorates, and the one of which he was most proud: the first William V. Cruess Award for Excellence in Teaching from the IFT. The first edition of his co-authored book, "The Physical Principles of Food Preservation," is considered by many to be the "bible" of the field of food stability.

Karel is survived by his wife of almost 61 years, Carolyn Frances (Weeks) Karel; son Steven Karel and daughters Karen Karel and Debra Karel Nardone; grandchildren Amanda Nardone, Kristen Nardone, Emma Griffith, and Bennet Karel; sister Rena Carmel, niece Julia Carmel, and great-nephew David Carmel; Leslie Griffith (mother of Emma and Ben); nephew James Weeks Jr., and niece Sharon Weeks Mancini.

Funeral arrangements were private. A celebration of Karel’s life will take place later this year. Memorial contributions may be made to the American Red Cross.

Model predicts cognitive decline due to Alzheimer’s, up to two years out

Thu, 08/01/2019 - 11:59pm

A new model developed at MIT can help predict if patients at risk for Alzheimer’s disease will experience clinically significant cognitive decline due to the disease, by predicting their cognition test scores up to two years in the future.

The model could be used to improve the selection of candidate drugs and participant cohorts for clinical trials, which have been notoriously unsuccessful thus far. It would also let patients know they may experience rapid cognitive decline in the coming months and years, so they and their loved ones can prepare.  

Pharmaceutical firms over the past two decades have injected hundreds of billions of dollars into Alzheimer’s research. Yet the field has been plagued with failure: Between 1998 and 2017, there were 146 unsuccessful attempts to develop drugs to treat or prevent the disease, according to a 2018 report from the Pharmaceutical Research and Manufacturers of America. In that time, only four new medicines were approved, and only to treat symptoms. More than 90 drug candidates are currently in development.

Studies suggest greater success in bringing drugs to market could come down to recruiting candidates who are in the disease’s early stages, before symptoms are evident, which is when treatment is most effective. In a paper to be presented next week at the Machine Learning for Health Care conference, MIT Media Lab researchers describe a machine-learning model that can help clinicians zero in on that specific cohort of participants.

They first trained a “population” model on an entire dataset that included clinically significant cognitive test scores and other biometric data from Alzheimer’s patients, as well as healthy individuals, collected during semiannual doctor’s visits. From the data, the model learns patterns that can help predict how patients will score on future cognitive tests. In new participants, a second model, personalized for each patient, continuously updates score predictions based on newly recorded data, such as information collected during the most recent visits.

Experiments indicate accurate predictions can be made looking ahead six, 12, 18, and 24 months. Clinicians could thus use the model to help select at-risk participants for clinical trials, who are likely to demonstrate rapid cognitive decline, possibly even before other clinical symptoms emerge. Treating such patients early on may help clinicians better track which antidementia medicines are and aren’t working.

“Accurate prediction of cognitive decline from six to 24 months is critical to designing clinical trials,” says Oggi Rudovic, a Media Lab researcher. “Being able to accurately predict future cognitive changes can reduce the number of visits the participant has to make, which can be expensive and time-consuming. Apart from helping develop a useful drug, the goal is to help reduce the costs of clinical trials to make them more affordable and done on larger scales.”

Joining Rudovic on the paper are: Yuria Utsumi, an undergraduate student, and Kelly Peterson, a graduate student, both in the Department of Electrical Engineering and Computer Science; Ricardo Guerrero and Daniel Rueckert, both of Imperial College London; and Rosalind Picard, a professor of media arts and sciences and director of affective computing research in the Media Lab.

Population to personalization

For their work, the researchers leveraged the world’s largest Alzheimer’s disease clinical trial dataset, the Alzheimer's Disease Neuroimaging Initiative (ADNI). The dataset contains data from around 1,700 participants, with and without Alzheimer’s, recorded during semiannual doctor’s visits over 10 years.

The data include participants’ Alzheimer’s Disease Assessment Scale-cognitive subscale (ADAS-Cog13) scores, the most widely used cognitive metric for clinical trials of Alzheimer’s disease drugs. The test assesses memory, language, and orientation on a scale of increasing severity up to 85 points. The dataset also includes MRI scans, demographic and genetic information, and cerebrospinal fluid measurements.

In all, the researchers trained and tested their model on a sub-cohort of 100 participants, who made more than 10 visits and had less than 85 percent missing data, each with more than 600 computable features. Of those participants, 48 were diagnosed with Alzheimer’s disease. But data are sparse, with different combinations of features missing for most of the participants.  

To tackle that, the researchers used the data to train a population model powered by a “nonparametric” probability framework, called Gaussian Processes (GPs), which has flexible parameters to fit various probability distributions and to process uncertainties in data. This technique measures similarities between variables, such as patient data points, to predict a value for an unseen data point — such as a cognitive score. The output also contains an estimate for how certain it is about the prediction. The model works robustly even when analyzing datasets with missing values or lots of noise from different data-collecting formats.
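To make the core idea concrete, here is a minimal sketch of Gaussian-process regression producing a prediction together with an uncertainty estimate, as the population model does for cognitive scores. This is not the study's code: the features and targets below are synthetic stand-ins, and scikit-learn's off-the-shelf GP regressor is assumed.

```python
# A minimal sketch (not the study's code) of Gaussian-process regression that
# returns both a predicted score and an uncertainty estimate. Features and
# targets here are synthetic stand-ins for visit data and ADAS-Cog13 scores.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)

# Hypothetical training data: rows are patient visits, columns are features.
X_train = rng.normal(size=(200, 3))
y_train = 20 + 5 * X_train[:, 0] + rng.normal(scale=2.0, size=200)

# The RBF kernel measures similarity between visits; WhiteKernel models noise.
kernel = 1.0 * RBF(length_scale=1.0) + WhiteKernel(noise_level=1.0)
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
gp.fit(X_train, y_train)

# Predict a score for an unseen visit, with a confidence estimate.
mean, std = gp.predict(rng.normal(size=(1, 3)), return_std=True)
print(f"predicted score: {mean[0]:.1f} +/- {std[0]:.1f}")
```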

But, in evaluating the model on new patients from a held-out portion of participants, the researchers found the model’s predictions weren’t as accurate as they could be. So, they personalized the population model for each new patient. The system would then progressively fill in data gaps with each new patient visit and update the ADAS-Cog13 score prediction accordingly, by continuously updating the previously unknown distributions of the GPs. After about four visits, the personalized models significantly reduced the error rate in predictions. It also outperformed various traditional machine-learning approaches used for clinical data.

Learning how to learn

But the researchers found the personalized models’ results were still suboptimal. To fix that, they invented a novel “metalearning” scheme that learns to automatically choose which type of model, population or personalized, works best for any given participant at any given time, depending on the data being analyzed. Metalearning has been used before for computer vision and machine translation tasks to learn new skills or adapt to new environments rapidly with a few training examples. But this is the first time it’s been applied to tracking cognitive decline of Alzheimer’s patients, where limited data is a main challenge, Rudovic says.

The scheme essentially simulates how the different models perform on a given task — such as predicting an ADAS-Cog13 score — and learns the best fit. During each visit of a new patient, the scheme assigns the appropriate model, based on the previous data. For patients with noisy, sparse data during early visits, for instance, population models make more accurate predictions. When patients start with more data or collect more through subsequent visits, however, personalized models perform better.
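One way to picture the selector, as a hedged sketch rather than the paper's actual scheme: at each visit, score both models on the patient's accumulated history and deploy whichever has erred less so far. The names and the mean-absolute-error criterion below are illustrative assumptions.

```python
# Illustrative sketch of a model selector in the spirit of the metalearning
# scheme; the MAE criterion and all names are assumptions, not the paper's.
from typing import Callable, List, Sequence, Tuple

Model = Callable[[Sequence[float]], float]  # features -> predicted score

def select_model(history: List[Tuple[Sequence[float], float]],
                 population: Model, personalized: Model) -> Model:
    """Pick the model with lower mean absolute error on this patient's past
    (features, observed score) pairs; default to the population model."""
    if not history:  # no per-patient data yet
        return population
    def mae(model: Model) -> float:
        return sum(abs(model(x) - y) for x, y in history) / len(history)
    return population if mae(population) <= mae(personalized) else personalized
```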

This helped reduce the error rate for predictions by a further 50 percent. “We couldn’t find a single model or fixed combination of models that could give us the best prediction,” Rudovic says. “So, we wanted to learn how to learn with this metalearning scheme. It’s like a model on top of a model that acts as a selector, trained using metaknowledge to decide which model is better to deploy.”

Next, the researchers are hoping to partner with pharmaceutical firms to implement the model into real-world Alzheimer’s clinical trials. Rudovic says the model can also be generalized to predict various metrics for Alzheimer’s and other diseases.

Finding novel materials for practical devices

Thu, 08/01/2019 - 12:55pm

In recent years, machine learning has proven a valuable tool for identifying new materials with properties optimized for specific applications. Working with large, well-defined data sets, computers learn to perform an analytical task to generate a correct answer and then use the same technique on an unknown data set.

While that approach has guided the development of valuable new materials, they’ve primarily been organic compounds, notes Heather Kulik PhD ’09, an assistant professor of chemical engineering. Kulik focuses instead on inorganic compounds — in particular, those based on transition metals, a family of elements (including iron and copper) that have unique and useful properties. In those compounds — known as transition metal complexes — the metal atom occurs at the center with chemically bound arms, or ligands, made of carbon, hydrogen, nitrogen, or oxygen atoms radiating outward. 

Transition metal complexes already play important roles in areas ranging from energy storage to catalysis for manufacturing fine chemicals — for example, for pharmaceuticals. But Kulik thinks that machine learning could further expand their use. Indeed, her group has been working not only to apply machine learning to inorganics — a novel and challenging undertaking — but also to use the technique to explore new territory. “We were interested in understanding how far we could push our models to do discovery — to make predictions on compounds that haven’t been seen before,” says Kulik. 

Sensors and computers 

For the past four years, Kulik and Jon Paul Janet, a graduate student in chemical engineering, have been focusing on transition metal complexes with “spin” — a quantum mechanical property of electrons. Usually, electrons occur in pairs, one with spin up and the other with spin down, so they cancel each other out and there’s no net spin. But in a transition metal, electrons can be unpaired, and the resulting net spin is the property that makes inorganic complexes of interest, says Kulik. “Tailoring how unpaired the electrons are gives us a unique knob for tailoring properties.” 

A given complex has a preferred spin state. But add some energy — say, from light or heat — and it can flip to the other state. In the process, it can exhibit changes in macroscale properties such as size or color. When the energy needed to cause the flip — called the spin-splitting energy — is near zero, the complex is a good candidate for use as a sensor, or perhaps as a fundamental component in a quantum computer. 

Chemists know of many metal-ligand combinations with spin-splitting energies near zero, making them potential “spin-crossover” (SCO) complexes for such practical applications. But the full set of possibilities is vast. The spin-splitting energy of a transition metal complex is determined by what ligands are combined with a given metal, and there are almost endless ligands from which to choose. The challenge is to find novel combinations with the desired property to become SCOs — without resorting to millions of trial-and-error tests in a lab. 

Translating molecules into numbers 

The standard way to analyze the electronic structure of molecules is using a computational modeling method called density functional theory, or DFT. The results of a DFT calculation are fairly accurate — especially for organic systems — but performing a calculation for a single compound can take hours, or even days. In contrast, a machine learning tool called an artificial neural network (ANN) can be trained to perform the same analysis and then do it in just seconds. As a result, ANNs are much more practical for looking for possible SCOs in the huge space of feasible complexes. 

Because an ANN requires a numerical input to operate, the researchers’ first challenge was to find a way to represent a given transition metal complex as a series of numbers, each describing a selected property. There are rules for defining representations for organic molecules, where the physical structure of a molecule tells a lot about its properties and behavior. But when the researchers followed those rules for transition metal complexes, it didn’t work. “The metal-organic bond is very tricky to get right,” says Kulik. “There are unique properties of the bonding that are more variable. There are many more ways the electrons can choose to form a bond.” So the researchers needed to make up new rules for defining a representation that would be predictive in inorganic chemistry. 

Using machine learning, they explored various ways of representing a transition metal complex for analyzing spin-splitting energy. The results were best when the representation gave the most emphasis to the properties of the metal center and the metal-ligand connection and less emphasis to the properties of ligands farther out. Interestingly, their studies showed that representations that gave more equal emphasis overall worked best when the goal was to predict other properties, such as the ligand-metal bond length or the tendency to accept electrons. 
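As a hedged illustration of that idea (a hypothetical encoding, not the group's actual descriptors), one can weight each atom's properties by its bond distance from the metal center, so the metal and its immediate ligand atoms dominate the feature vector:

```python
# Hypothetical featurization: atomic properties are down-weighted the farther
# an atom sits from the metal center, emphasizing the metal-ligand connection.
import numpy as np

# Each atom: (electronegativity, covalent radius in angstroms, bonds from metal)
atoms = [
    (1.83, 1.32, 0),  # iron center
    (3.04, 0.71, 1),  # nitrogen bound directly to the metal
    (2.55, 0.76, 2),  # carbon one bond farther out
    (2.20, 0.31, 3),  # peripheral hydrogen
]

def featurize(atoms, decay=0.5):
    """Sum each property over atoms, scaled by decay**(distance from metal)."""
    feats = np.zeros(2)
    for eneg, radius, dist in atoms:
        feats += (decay ** dist) * np.array([eneg, radius])
    return feats

print(featurize(atoms))  # a compact numeric representation an ANN can consume
```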

Testing the ANN 

As a test of their approach, Kulik and Janet — assisted by Lydia Chan, a summer intern from Troy High School in Fullerton, California — defined a set of transition metal complexes based on four transition metals — chromium, manganese, iron, and cobalt — in two oxidation states with 16 ligands (each molecule can have up to two). By combining those building blocks, they created a “search space” of 5,600 complexes — some of them familiar and well-studied, and some of them totally unknown. 

In previous work, the researchers had trained an ANN on thousands of compounds that were well-known in transition metal chemistry. To test the trained ANN’s ability to explore a new chemical space to find compounds with the targeted properties, they tried applying it to the pool of 5,600 complexes, 113 of which it had seen in the previous study. 

The result was a plot that sorts the complexes onto a surface as determined by the ANN. The white regions indicate complexes with spin-splitting energies within 5 kilo-calories per mole of zero, meaning that they are potentially good SCO candidates. The red and blue regions represent complexes with spin-splitting energies too large to be useful. The green diamonds that appear in the inset show complexes that have iron centers and similar ligands — in other words, related compounds whose spin-crossover energies should be similar. Their appearance in the same region of the plot is evidence of the good correspondence between the researchers’ representation and key properties of the complex.

But there’s one catch: Not all of the spin-splitting predictions are accurate. If a complex is very different from those on which the network was trained, the ANN analysis may not be reliable — a standard problem when applying machine learning models to discovery in materials science or chemistry, notes Kulik. Using an approach that looked successful in their previous work, the researchers compared the numeric representations for the training and test complexes and ruled out all the test complexes where the difference was too great. 

Focusing on the best options 

Performing the ANN analysis of all 5,600 complexes took just an hour. But in the real world, the number of complexes to be explored could be thousands of times larger — and any promising candidates would require a full DFT calculation. The researchers therefore needed a method of evaluating a big data set to identify any unacceptable candidates even before the ANN analysis. To that end, they developed a genetic algorithm — an approach inspired by natural selection — to score individual complexes and discard those deemed to be unfit. 

To prescreen a data set, the genetic algorithm first randomly selects 20 samples from the full set of complexes. It then assigns a “fitness” score to each sample based on three measures. First, is its spin-crossover energy low enough for it to be a good SCO? To find out, the neural network evaluates each of the 20 complexes. Second, is the complex too far away from the training data? If so, the spin-crossover energy from the ANN may be inaccurate. And finally, is the complex too close to the training data? If so, the researchers have already run a DFT calculation on a similar molecule, so the candidate is not of interest in the quest for new options. 

Based on its three-part evaluation of the first 20 candidates, the genetic algorithm throws out unfit options and saves the fittest for the next round. To ensure the diversity of the saved compounds, the algorithm calls for some of them to mutate a bit. One complex may be assigned a new, randomly selected ligand, or two promising complexes may swap ligands. After all, if a complex looks good, then something very similar could be even better — and the goal here is to find novel candidates. The genetic algorithm then adds some new, randomly chosen complexes to fill out the second group of 20 and performs its next analysis. By repeating this process a total of 21 times, it produces 21 generations of options. It thus proceeds through the search space, allowing the fittest candidates to survive and reproduce, and the unfit to die out. 
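A compact sketch of that loop, under stated assumptions: complexes are encoded as (metal, ligand, ligand) tuples, and a hypothetical `predict` function stands in for the trained ANN, returning a spin-splitting energy and a distance to the training data.

```python
# Sketch of the genetic-algorithm loop described above; names are illustrative.
import random

def fitness(complex_, predict, d_near=0.1, d_far=1.0):
    """Three-part score: a small predicted spin-splitting energy is good, but
    complexes too far from the training data (unreliable ANN prediction) or
    too close to it (already studied) are scored as unfit."""
    energy, dist = predict(complex_)
    if dist > d_far or dist < d_near:
        return 0.0
    return 1.0 / (1.0 + abs(energy))

def mutate(complex_, pool):
    """Swap in a randomly chosen ligand borrowed from another complex."""
    metal, _, ligand2 = complex_
    _, new_ligand, _ = random.choice(pool)
    return (metal, new_ligand, ligand2)

def evolve(pool, predict, generations=21, size=20):
    population = random.sample(pool, size)
    for _ in range(generations):
        ranked = sorted(population, key=lambda c: fitness(c, predict), reverse=True)
        survivors = ranked[: size // 2]           # the fittest half survives
        mutants = [mutate(c, pool) for c in survivors[: size // 4]]
        fresh = random.sample(pool, size - len(survivors) - len(mutants))
        population = survivors + mutants + fresh  # next generation
    return population
```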

Performing the 21-generation analysis on the full 5,600-complex data set required just over five minutes on a standard desktop computer, and it yielded 372 leads with a good combination of high diversity and acceptable confidence. The researchers then used DFT to examine 56 complexes randomly chosen from among those leads, and the results confirmed that two-thirds of them could be good SCOs. 

While a success rate of two-thirds may not sound great, the researchers make two points. First, their definition of what might make a good SCO was very restrictive: For a complex to survive, its spin-splitting energy had to be extremely small. And second, given a space of 5,600 complexes and nothing to go on, how many DFT analyses would be required to find 37 leads? As Janet notes, “It doesn’t matter how many we evaluated with the neural network because it’s so cheap. It’s the DFT calculations that take time.” 

Best of all, using their approach enabled the researchers to find some unconventional SCO candidates that wouldn’t have been thought of based on what’s been studied in the past. “There are rules that people have — heuristics in their heads — for how they would build a spin-crossover complex,” says Kulik. “We showed that you can find unexpected combinations of metals and ligands that aren’t normally studied but can be promising as spin-crossover candidates.” 

Sharing the new tools 

To support the worldwide search for new materials, the researchers have incorporated the genetic algorithm and ANN into "molSimplify," the group’s online, open-source software toolkit that anyone can download and use to build and simulate transition metal complexes. To help potential users, the site provides tutorials that demonstrate how to use key features of the open-source software codes. Development of molSimplify began with funding from the MIT Energy Initiative in 2014, and all the students in Kulik’s group have contributed to it since then. 

The researchers continue to improve their neural network for investigating potential SCOs and to post updated versions of molSimplify. Meanwhile, others in Kulik’s lab are developing tools that can identify promising compounds for other applications. For example, one important area of focus is catalyst design. Aditya Nandy, a graduate student in chemistry, is focusing on finding a better catalyst for converting methane gas to an easier-to-handle liquid fuel such as methanol — a particularly challenging problem. “Now we have an outside molecule coming in, and our complex — the catalyst — has to act on that molecule to perform a chemical transformation that takes place in a whole series of steps,” says Nandy. “Machine learning will be super-useful in figuring out the important design parameters for a transition metal complex that will make each step in that process energetically favorable.”

This research was supported by the U.S. Department of the Navy’s Office of Naval Research, the U.S. Department of Energy, the National Science Foundation, and the MIT Energy Initiative Seed Fund Program. Jon Paul Janet was supported in part by an MIT-Singapore University of Technology and Design Graduate Fellowship. Heather Kulik has received a National Science Foundation CAREER Award (2019) and an Office of Naval Research Young Investigator Award (2018), among others.

This article appears in the Spring 2019 issue of Energy Futures, the magazine of the MIT Energy Initiative. 

Software to empower workers on the factory floor

Wed, 07/31/2019 - 11:59pm

Manufacturers are constantly tweaking their processes to get rid of waste and improve productivity. As such, the software they use should be as nimble and responsive as the operations on their factory floors.

Instead, much of the software in today’s factories is static. In many cases, it’s developed by an outside company to work in a broad range of factories, and implemented from the top down by executives who know software can help but don’t know how best to adopt it.

That’s where MIT spinout Tulip comes in. The company has developed a customizable manufacturing app platform that connects people, machines, and sensors to help optimize processes on a shop floor. Tulip’s apps provide workers with interactive instructions, quality checks, and a way to easily communicate with managers if something is wrong.

Managers, in turn, can make changes or additions to the apps in real time and use Tulip’s analytics dashboard to pinpoint problems with machines and assembly processes.

“With this notion of agile manufacturing [in which changes are constant], you need your software to match the philosophical process you’re using to improve your organization,” says Tulip co-founder and CTO Rony Kubat ’01, SM ’08, PhD ’12. “With our platform, we’re empowering the manufacturing engineers on the line to make changes themselves. That’s in contrast to the traditional way of making manufacturing software. It’s a bottom-up kind of thing.”

Tulip, founded by Kubat and CEO Natan Linder SM ’11, PhD ’17, is currently working with multiple Fortune 100 and Fortune 500 companies operating in 13 different countries, including Bosch, Jabil, and Kohler. Tulip’s customers make everything from shoes to jewelry, medical devices, and consumer electronics.

With the platform’s scalable design, Kubat says it can help factories of any size, as long as they employ people on the shop floor.

In that way, Tulip’s tools are empowering workers in an industry that has historically trended toward automation. As the company continues building out its platform — including adding machine vision and machine learning capabilities — it hopes to continue encouraging manufacturers to see people as an indispensable resource.

A new approach to manufacturing software

In 2012, Kubat was pursuing his PhD in the MIT Media Lab’s Fluid Interfaces group when he met Linder, then a graduate student. During their research, several Media Lab member companies gave the founders tours of their factory floors and introduced them to some of the production challenges they were grappling with.

“The Media Lab is such a special place,” Kubat says. “You have this contrast of an antidisciplinary mentality, where you’re putting faculty from completely different walks of life in the same building, giving it this creative wildness that is really invigorating, plus this grounding in the real world that comes from the member organizations that are part of the Media Lab.”

During those factory tours, the founders noticed similar problems across industries.

“The typical way manufacturing software is deployed is in these multiyear cycles,” Kubat says. “You sign a multimillion dollar contract that’s going to overhaul everything, and you get three years to deploy it all, and you get your screens in the end that everyone isn’t really happy with because they solve yesterday’s problems. We’re bringing a more modern approach to software development for manufacturing.”

In 2014, just as Linder completed his PhD research, the founders decided to start Tulip. (Linder would later return to MIT to defend his thesis.) Relying on their personal savings for funding, they recruited a team of students from MIT’s Undergraduate Research Opportunities Program and began building a prototype for New Balance, a Media Lab member company that has factories in New England.

“We worked really closely with the first customers to do super fast iterations to make these proofs of concept that we’d try to deploy as quickly as possible,” Kubat says. “That approach isn’t new from a software perspective — deploy fast and iterate — but it is new for the manufacturing software world.”

An engine for manufacturing

The app-based platform the founders eventually built out has little in common with the sweeping software implementations that traditionally upend factory operations for better or worse. Tulip’s apps can be installed in just one workstation and then scaled up as needed.

The apps can also be designed by managers with no coding experience, over the course of an afternoon. Typically, they use Tulip’s app templates, which can be customized for common tasks like guiding a worker through an assembly process or completing a checklist.

Workers using the apps on the shop floor can submit comments on their interactive screens to do things like point out defects. Those comments are sent directly to the manager, who can make changes to the apps remotely.

“It’s a data-driven opportunity to engage the operators on the line, to gain some ownership over the process,” Kubat says.

The apps are integrated with machines and tools on the factory floor through Tulip’s router-like gateways. Those gateways also sync with sensors and cameras to give managers data from both humans and machines. All that information helps managers find bottlenecks and other factors holding back productivity.

Workers, meanwhile, are given real-time feedback on their actions from the cameras, which are usually trained on the part as it’s being assembled or on the bins the workers are reaching into. If a worker assembles a part improperly, for example, Tulip’s camera can detect the mistake, and its app can alert the worker to the error, presenting instructions on fixing it.

[Caption: A demonstration of a worker assembling a part incorrectly, Tulip's sensors detecting the error, and Tulip's app providing instructions for correcting the mistake.]
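The flow just described lends itself to a simple event model. Below is a hypothetical sketch (not Tulip's actual API) in which a completed step triggers a vision check, a failure pushes rework instructions to the worker's screen, and the result is logged for the manager's dashboard:

```python
# Hypothetical event flow (not Tulip's API): a finished assembly step triggers
# a camera check; failures alert the worker and notify the manager.
from dataclasses import dataclass

@dataclass
class CheckResult:
    passed: bool
    detail: str

class WorkerScreen:
    def show(self, message: str) -> None:
        print("WORKER:", message)

class ManagerFeed:
    def log(self, result: CheckResult) -> None:
        print("MANAGER:", result)  # would feed the analytics dashboard

def inspect_assembly(frame) -> CheckResult:
    # Stand-in for a machine-vision model scoring the assembled part.
    return CheckResult(passed=False, detail="bracket mounted backwards")

def on_step_complete(frame, screen: WorkerScreen, feed: ManagerFeed) -> None:
    result = inspect_assembly(frame)
    if not result.passed:
        screen.show(f"Check failed: {result.detail}. Follow the rework steps.")
        feed.log(result)

on_step_complete(frame=None, screen=WorkerScreen(), feed=ManagerFeed())
```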

Such quality checks can be sprinkled throughout a production line. That’s a big upgrade over traditional methods for data collection in factories, which often include a stopwatch and a clipboard, the founders say.

“That process is expensive,” Kubat says of traditional data collection methods. “It’s also biased, because when you’re being observed you might behave differently. It’s also a sampling of things, not the true picture. Our take is that all of that execution data should be something you get for free from a system that gives you additional value.”

The data Tulip collects are channeled into its analytics dashboard, which can be used to make customized tables displaying certain metrics to managers and shop floor workers.

In April, the company launched its first machine vision feature, which further helps workers minimize mistakes and improve productivity. Those objectives are in line with Tulip’s broader goal of empowering workers in factories rather than replacing them.

“We’re helping companies launch products faster and improve efficiency,” Kubat says. “That means, because you can reduce the cost of making products with people, you push back the [pressure of] automation. You don’t need automation to give you quality at scale. This has the potential to really change the dynamics of how products are delivered to the public.”

Speeding up drug discovery for brain diseases

Wed, 07/31/2019 - 2:25pm

A research team led by Whitehead Institute scientists has identified 30 distinct chemical compounds — 20 of which are drugs that are undergoing clinical trials or have already been approved by the FDA — that boost the protein production activity of a critical gene in the brain and improve symptoms of Rett syndrome, a rare neurodevelopmental condition that often provokes autism-like behaviors in patients. The new study, conducted in human cells and mice, helps illuminate the biology of an important gene, called KCC2, which is implicated in a variety of brain diseases, including autism, epilepsy, schizophrenia, and depression. The researchers’ findings, published in the July 31 online issue of Science Translational Medicine, could help spur the development of new treatments for a host of devastating brain disorders.

“There’s increasing evidence that KCC2 plays important roles in several different disorders of the brain, suggesting that it may act as a common driver of neurological dysfunction,” says senior author Rudolf Jaenisch, a founding member of Whitehead Institute and professor of biology at MIT. “These drugs we’ve identified may help speed up the development of much-needed treatments.”

KCC2 works exclusively in the brain and spinal cord, carrying ions in and out of specialized cells known as neurons. This shuttling of electrically charged molecules helps maintain the cells’ electrochemical makeup, enabling neurons to fire when they need to and to remain idle when they don’t. If this delicate balance is upset, brain function and development go awry.

Disruptions in KCC2 function have been linked to several human brain disorders, including Rett syndrome (RTT), a progressive and often debilitating disorder that typically emerges early in life in girls and can involve disordered movement, seizures, and communication difficulties. Currently, there is no effective treatment for RTT.

Jaenisch and his colleagues, led by first author Xin Tang, devised a high-throughput screening assay to uncover drugs that increase KCC2 gene activity. Using CRISPR/Cas9 genome editing and stem cell technologies, they engineered human neurons to provide rapid readouts of the amount of KCC2 protein produced. The researchers created these so-called reporter cells from both healthy human neurons and RTT neurons that carry disease-causing mutations in the MECP2 gene. These reporter neurons were then fed into a drug-screening pipeline to find chemical compounds that can enhance KCC2 gene activity.

Tang and his colleagues screened over 900 chemical compounds, focusing on those that have been FDA-approved for use in other conditions, such as cancer, or have undergone at least some level of clinical testing. “The beauty of this approach is that many of these drugs have been studied in the context of non-brain diseases, so the mechanisms of action are known,” says Tang. “Such molecular insights enable us to learn how the KCC2 gene is regulated in neurons, while also identifying compounds with potential therapeutic value.”

The Whitehead Institute team identified a total of 30 drugs with KCC2-enhancing activity. These compounds, referred to as KEECs (short for KCC2 expression-enhancing compounds), work in a variety of ways. Some block a molecular pathway, called FLT3, which is found to be overactive in some forms of leukemia. Others inhibit the GSK3b pathway that has been implicated in several brain diseases. Another KEEC acts on SIRT1, which plays a key role in a variety of biological processes, including aging.

In follow-up experiments, the researchers exposed RTT neurons and mouse models to KEEC treatment and found that some compounds can reverse certain defects associated with the disease, including abnormalities in neuronal signaling, breathing, and movement. These efforts were made possible by a collaboration with Mriganka Sur’s group at the Picower Institute for Learning and Memory, in which Keji Li and colleagues led the behavioral experiments in mice that were essential for revealing the drugs’ potency.

“Our findings illustrate the power of an unbiased approach for discovering drugs that could significantly improve the treatment of neurological disease,” says Jaenisch. “And because we are starting with known drugs, the path to clinical translation is likely to be much shorter.”

In addition to speeding up drug development for Rett syndrome, the researchers’ unique drug-screening strategy, which harnesses an engineered gene-specific reporter to unearth promising drugs, can also be applied to other important disease-related genes in the brain. “Many seemingly distinct brain diseases share common root causes of abnormal gene expression or disrupted signaling pathways,” says Tang. “We believe our method has broad applicability and could help catalyze therapeutic discovery for a wide range of neurological conditions.”

Support for this work was provided by the National Institutes of Health, the Simons Foundation Autism Research Initiative, the Simons Center for the Social Brain at MIT, the Rett Syndrome Research Trust, the International Rett Syndrome Foundation, the Damon Runyon Cancer Foundation, and the National Cancer Institute.

Lowering emissions without breaking the bank

Wed, 07/31/2019 - 2:20pm

India’s economy is booming, driving up electric power consumption to unprecedented levels. The nation’s installed electricity capacity, which increased fivefold in the past three decades, is expected to triple over the next 20 years. At the same time, India has committed to limiting its carbon dioxide emissions growth; its Paris Agreement climate pledge is to decrease its carbon dioxide emissions intensity of GDP (CO2 emissions per unit of GDP) by 33 to 35 percent by 2030 from 2005 levels, and to boost carbon-free power to about 40 percent of installed capacity in 2030.
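To see why an intensity target differs from an absolute cap, a back-of-envelope calculation helps: intensity is emissions divided by GDP, so emissions can still rise substantially while intensity falls, provided GDP grows fast enough. The numbers below are hypothetical, chosen only to illustrate the arithmetic.

```python
# Hypothetical illustration: a 34% cut in emissions intensity (CO2 per unit of
# GDP) from 2005 to 2030 still allows emissions to grow if GDP grows ~7%/year.
base_emissions, base_gdp = 100.0, 100.0        # index values for 2005
intensity_2005 = base_emissions / base_gdp     # = 1.0

gdp_2030 = base_gdp * 1.07 ** 25               # 25 years of ~7% annual growth
intensity_2030 = intensity_2005 * (1 - 0.34)   # midpoint of the 33-35% pledge
emissions_2030 = intensity_2030 * gdp_2030

print(f"2030 emissions index: {emissions_2030:.0f} (vs. 100 in 2005)")
```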

Can India reach its climate targets without adversely impacting its rate of economic growth — now estimated at 7 percent annually — and what policy strategy would be most effective in achieving that goal?

To address these questions, researchers from the MIT Joint Program on the Science and Policy of Global Change developed an economy-wide model of India with energy-sector detail, and applied it to simulate the achievement of each component of the nation’s Paris pledge. Representing the emissions intensity target with an economy-wide carbon price and the installed capacity target with a Renewable Portfolio Standard (RPS), they assessed the economic implications of three policy scenarios — carbon pricing, an RPS, and a combination of carbon pricing with an RPS. Their findings appear in the journal Climate Change Economics.

As a starting point, the researchers determined that imposing an economy-wide emissions reduction policy alone to meet the target emissions intensity, simulated through a carbon price, would result in the lowest cost to India’s economy. This approach would lead to emissions reductions not only in the electric power sector but throughout the economy. By contrast, they found that an RPS, which would enforce a minimum level of currently more expensive carbon-free electricity, would have the highest per-ton cost — more than 10 times higher than the economy-wide CO2 intensity policy.

“In our modeling framework, allowing emissions reduction across all sectors of the economy through an economy-wide carbon price ensures that the least-cost pathways for reducing emissions are observed,” says Arun Singh, lead author of the study. “This is constrained when electricity sector-specific targets are introduced. If renewable electricity costs are higher than the average cost of electricity, a higher share of renewables in the electricity mix makes electricity costlier, and the impacts of higher electricity prices reverberate across the economy.” A former research assistant at the MIT joint program and graduate student at the MIT Institute for Data, Systems and Society’s Technology and Policy Program, Singh now serves as an energy specialist consultant at the World Bank.

Combining an economy-wide carbon price with an RPS would, however, bring the price per ton of CO2 down from $23.38/tCO2 (in 2011 U.S. dollars) under a standalone carbon-pricing policy to a far more politically viable $6.17/tCO2 when an RPS is added. If wind and solar costs decline significantly, the cost to the economy would decrease considerably; at the lowest wind and solar cost levels simulated, the model projects that economic losses under a carbon price with RPS would be only slightly higher than those under a standalone carbon price. Thus, declining wind and solar costs could enable India to set more ambitious climate policies in future years without significantly impeding economic growth.

“Globally, it has been politically impossible to introduce CO2 prices high enough to mitigate climate change in line with the Paris Agreement goals,” says Valerie Karplus, co-author and assistant professor at the MIT Sloan School of Management. “Combining pricing approaches with technology-specific policies may be important in India, as they have elsewhere, for the politics to work.”

Developed by Singh in collaboration with his master’s thesis advisors at MIT (Karplus and MIT Joint Program Principal Research Scientist Niven Winchester, who also co-authored the study), the economy-wide model of India enables researchers to gauge the cost-effectiveness and efficiency of different technology and policy choices designed to transition the country to a low-carbon energy system.

“The study provides important insights about the costs of different policies, which are relevant to nations that have pledged emission targets under the Paris Agreement but have not yet developed policies to meet those targets,” says Winchester, who is also a senior fellow at Motu Economic and Public Policy Research.

The study was supported by the MIT Tata Center for Technology and Design, the Energy Information Administration of the U.S. Department of Energy, and the MIT Joint Program.

Why did my classifier just mistake a turtle for a rifle?

Wed, 07/31/2019 - 2:00pm

A few years ago, the idea of tricking a computer vision system by subtly altering pixels in an image or hacking a street sign seemed like more of a hypothetical threat than anything to seriously worry about. After all, a self-driving car in the real world would perceive a manipulated object from multiple viewpoints, cancelling out any misleading information. At least, that’s what one study claimed.

“We thought, there’s no way that’s true!” says MIT PhD student Andrew Ilyas, then a sophomore at MIT. He and his friends — Anish Athalye, Logan Engstrom, and Jessy Lin — holed up at the MIT Student Center and came up with an experiment to refute the study. They would 3-D print a set of turtles and show that a computer vision classifier could mistake them for rifles.

The results of their experiments, published at last year’s International Conference on Machine Learning (ICML), were widely covered in the media, and served as a reminder of just how vulnerable the artificial intelligence systems behind self-driving cars and face-recognition software could be. “Even if you don’t think a mean attacker is going to perturb your stop sign, it’s troubling that it’s a possibility,” says Ilyas. “Adversarial example research is about optimizing for the worst case instead of the average case.”

With no faculty co-authors to vouch for them, Ilyas and his friends published their study under the pseudonym “Lab 6,” a play on Course 6, their Department of Electrical Engineering and Computer Science (EECS) major. Ilyas and Engstrom, now an MIT graduate student, would go on to publish five more papers together, with a half-dozen more in the pipeline.

At the time, the risk posed by adversarial examples was still poorly understood. Yann LeCun, the head of Facebook AI, famously downplayed the problem on Twitter. “Here’s one of the pioneers of deep learning saying, this is how it is, and they say, nah!” says EECS Professor Aleksander Madry. “It just didn’t sound right to them and they were determined to prove why. Their audacity is very MIT.” 

The extent of the problem has grown clearer. In 2017, IBM researcher Pin-Yu Chen showed that a computer vision model could be compromised in a so-called black-box attack by simply feeding it progressively altered images until one caused the system to fail. Expanding on Chen’s work at ICML last year, the Lab 6 team highlighted multiple cases in which classifiers could be duped into mistaking cats and skiers for guacamole and dogs, respectively.
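In its simplest form, such an attack needs only the model's predicted labels. The toy sketch below shows the query-until-failure idea; it is a crude random-walk version, not Chen's method or the framework discussed next, and `classify` is a hypothetical stand-in for any label-returning classifier.

```python
# Toy black-box attack: accumulate small random pixel perturbations and query
# the classifier until the predicted label flips. No model internals are used.
import numpy as np

def black_box_attack(image: np.ndarray, classify, step=0.01, max_queries=1000):
    original_label = classify(image)
    perturbed = image.copy()
    rng = np.random.default_rng(0)
    for _ in range(max_queries):
        # Random walk in pixel space, clipped to keep a valid image.
        perturbed = np.clip(perturbed + step * rng.normal(size=image.shape), 0, 1)
        if classify(perturbed) != original_label:
            return perturbed  # an adversarial example was found
    return None               # no success within the query budget
```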

This spring, Ilyas, Engstrom, and Madry presented a framework at ICML for making black-box attacks several times faster by exploiting information gained from each spoofing attempt. The ability to mount more efficient black-box attacks allows engineers to redesign their models to be that much more resilient.

“When I met Andrew and Logan as undergraduates, they already seemed like experienced researchers,” says Chen, who now works with them via the MIT-IBM Watson AI Lab. “They’re also great collaborators. If one is talking, the other jumps in and finishes his thought.”

That dynamic was on display recently as Ilyas and Engstrom sat down in Stata to discuss their work. Ilyas seemed introspective and cautious; Engstrom, outgoing and at times brash.

“In research, we argue a lot,” says Ilyas. “If you’re too similar you reinforce each other’s bad ideas.” Engstrom nodded. “It can get very tense.”

When it comes time to write papers, they take turns at the keyboard. “If it’s me, I add words,” says Ilyas. “If it’s me, I cut words,” says Engstrom.

Engstrom joined Madry’s lab for a SuperUROP project as a junior; Ilyas joined last fall as a first-year PhD student after finishing his undergraduate and MEng degrees early. Faced with offers from other top graduate schools, Ilyas opted to stay at MIT. A year later, Engstrom followed.

This spring the pair was back in the news again, with a new way of looking at adversarial examples: not as bugs, but as features corresponding to patterns too subtle for humans to perceive that are still useful to learning algorithms. We know instinctively that people and machines see the world differently, but the paper showed that the difference could be isolated and measured.

They trained a model to identify cats based on “robust” features recognizable to humans, and “non-robust” features that humans typically overlook, and found that visual classifiers could just as easily identify a cat from non-robust features as from robust ones. If anything, the model seemed to rely more on the non-robust features, suggesting that as accuracy improves, the model may become more susceptible to adversarial examples.

“The only thing that makes these features special is that we as humans are not sensitive to them,” Ilyas told Wired.

Their eureka moment came late one night in Madry’s lab, as such moments often do, following hours of talking. “Conversation is the most powerful tool for scientific discovery,” Madry likes to say. The team quickly sketched out experiments to test their idea.

“There are many beautiful theories proposed in deep learning,” says Madry. “But no hypothesis can be accepted until you come up with a way of verifying it.”

“This is a new field,” he adds. “We don’t know the answers to the questions, and I would argue we don’t even know the right questions. Andrew and Logan have the brilliance and drive to help lead the way.”

Jack Kerrebrock, professor emeritus of aeronautics and astronautics, dies at 91

Wed, 07/31/2019 - 10:48am

Jack L. Kerrebrock, professor emeritus of aeronautics and astronautics at MIT, died at home on July 19. He was 91.

Born in Los Angeles in 1928, Kerrebrock received his BS in 1950 from Oregon State University, his MS in 1951 from Yale University, and his PhD in 1956 from Caltech. With a passion for aerospace, he held positions with the National Advisory Committee for Aeronautics, Caltech, and Oak Ridge National Laboratory before joining the faculty of MIT as an assistant professor in 1960.

Promoted to associate professor in 1962 and to full professor in 1965, Kerrebrock founded and directed the Department of Aeronautics and Astronautics’ Space Propulsion Laboratory from 1962 until 1976, when it merged with the department’s Gas Turbine Laboratory, of which he had become director in 1968. In 1978, he accepted the role of head of the Department of Aeronautics and Astronautics (AeroAstro).

Kerrebrock enjoyed an international reputation as an expert in the development of propulsion systems for aircraft and spacecraft. Over the years, he served as chair or member of multiple advisory committees — both government and professional — and as NASA associate administrator of aeronautics and space technology.

As associate director of engineering, Kerrebrock was the faculty leader of the Daedalus Project in AeroAstro. Daedalus was a human-powered aircraft that, on April 23, 1988, flew a distance of 71.5 miles (115.11 kilometers) in three hours, 54 minutes, from Heraklion on the island of Crete to the island of Santorini. Daedalus still holds the world record for human-powered flight. This flight was the culmination of a decade of work by MIT students and alumni and made a major contribution to the understanding of the science and engineering of human-powered flight.

Elected to the National Academy of Engineering in 1978, Kerrebrock was the recipient of numerous accolades, including election as an honorary fellow of the American Institute of Aeronautics and Astronautics, as well as election to the Explorers Club and the American Academy of Arts and Sciences. A member of the American Association for the Advancement of Science, Sigma Xi, Tau Beta Pi, and Phi Kappa Phi, he received NASA’s Distinguished Service Medal in 1983. He was also a contributor to the Intergovernmental Panel on Climate Change, which, along with Al Gore, won the Nobel Peace Prize in 2007.

Although a luminary in his field, Kerrebrock — an enthusiastic outdoorsman — was perhaps never happier than when climbing a mountain, hiking a wilderness trail, or leading a group of young people through ice and snow to teach them independence and survival skills. He ran his first Boston Marathon in his early 50s on a whim, with no training, following that with several more marathons, including the Marine Corps Marathon in Washington.

Kerrebrock and his wife Crickett traveled widely, to destinations including South Africa, Scotland, Tuscany, and Paris, and took a very special trip to Cape Canaveral for one of the last Space Shuttle launches, where he was able to introduce his wife to his friend Neil Armstrong, one of her heroes.

Kerrebrock was married to Rosemary “Crickett” Redmond (Keough) Kerrebrock for the last 12 years of his life. He was previously married for 50 years to the late Bernice “Vickie” (Veverka) Kerrebrock, who died in 2003. In addition to his wife, Kerrebrock leaves behind two children, Nancy Kerrebrock (Clint Cummins) of Palo Alto, California, and Peter Kerrebrock (Anne) of Hingham, Massachusetts; and five grandchildren, Lewis Kerrebrock, Gale Kerrebrock, Renata Cummins, Skyler Cummins, and Lance Cummins. He was preceded in death by his son Christopher Kerrebrock, brother Glenn, and sister Ann. He is also remembered fondly by the Redmond children, Paul J. Redmond Jr. and his partner Joe Palombo, Kelly Redmond and her husband Philip Davis, Maura Redmond, Meaghan Winokur and James Winokur and their children, Laine and Alicia.

A public memorial service is being planned at MIT and will be announced soon. In lieu of flowers, contributions in his memory may be made to the Jack and Vickie Kerrebrock Fellowship Fund, Massachusetts Institute of Technology, 600 Memorial Drive, Cambridge MA 02139.

Professor Emeritus Samuel Bowring, pioneering geologist and expert in geochronology, dies at 65

Tue, 07/30/2019 - 3:50pm

Professor Emeritus Samuel A. Bowring, a longtime MIT professor of geology, died on July 17 at age 65.

Known for his exceptional skill as a field geologist and for his innovations in uranium-lead isotopic geochronology, Bowring worked to achieve unprecedented analytical precision and accuracy in calibrating the geologic record and reconstructing the co-evolution of life and the solid Earth.

No dates, no rates

A favorite aphorism, “No dates, no rates,” appeared in many of Bowring’s lectures and talks — meaning that to fully understand the past events preserved in the rock record, you have to understand their timing. One of his earliest major contributions, which transformed what geologists know about the early evolution of the Earth, was his work in the 1980s on the Acasta gneiss complex, a rock body in northwestern Canada, pushing back the date of the oldest-known rocks to 4.03 billion years. The granitic samples he collected from an outcrop on an island in the remote Acasta River basin turned out to be rare remnants of the Earth’s earliest crust.
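
The dates themselves rest on the radioactive decay law. As a back-of-the-envelope illustration (using textbook values, not Bowring’s actual measurement protocol): a parent isotope P decays to a daughter isotope D at a known rate λ, so a mineral that has remained a closed system since it crystallized satisfies

    \[
    D = P\,(e^{\lambda t} - 1)
    \qquad\Rightarrow\qquad
    t = \frac{1}{\lambda}\ln\!\left(1 + \frac{D}{P}\right)
    \]

For the decay of uranium-238 to lead-206, λ is about 1.55 × 10^-10 per year, so a measured lead-206 to uranium-238 ratio of about 0.87 corresponds to an age of roughly 4.03 billion years.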

“What is more important about the Acasta gneiss complex than its 4.03 billion year age alone is its character, which Sam recognized and documented,” said Paul Hoffman, Harvard University Sturgis Hooper Professor Emeritus of Geology and a career-long Bowring collaborator and friend. Hoffman explains that the Acasta rocks, paired with Bowring’s advocacy, fundamentally changed geologists’ understanding of continental formation. Prior to Bowring’s work, the prevailing view was that the continents had grown steadily over geologic time. But with these ancient gneiss samples, Bowring was able to characterize a complex history predating the moment of their crystallization, pointing instead to a process of ongoing crustal “recycling,” in which rock near the Earth’s surface is subsumed and transformed by the mantle’s convective currents through the mechanisms of plate tectonics. According to Hoffman, “Sam’s fascination with the creation and preservation of continental crust never left him, whether he was at Great Bear Lake, the Grand Canyon, or the High Cascades in Washington State.”

Beyond studying the physical processes that shape the lithosphere, Bowring also sought to understand those that shape the biosphere. His work on sedimentary layers spanning the Precambrian-Cambrian boundary determined the timing and rate of the pivotal biological event known as the Cambrian Explosion, which began nearly 540 million years ago. He was able to establish that the Early Cambrian interval that saw the most dramatic burst of evolutionary activity and animal diversity ever known — including the first emergence of chordates, brachiopods, and arthropods — spanned not 10 to 50 million years, as was previously believed, but a mere 5 to 6 million years.

Longtime friend and colleague Tim Grove, the Robert R. Shrock Professor of Earth and Planetary Sciences at MIT, wrote of the achievement in a citation for the American Geophysical Union when Bowring was awarded the Walter H. Bucher Medal in 2016: “Sam showed that during this brief time interval more phyla than have ever since existed on Earth came into existence. This represents a truly profound and astonishing new discovery about how life evolved on Earth.”

Bowring also established the timing and duration of what has come to be known as “The Great Dying”: the largest of Earth’s five major mass extinctions, which marked the end of the Permian period and saw the elimination of over 96% of marine species and about 70% of species on land. Rocks collected by Bowring and collaborators from sites across China spanning the Permian-Triassic boundary revealed that the ecological collapse happened at breakneck speed — occurring in less than 30,000 years at a rate many times faster than previous estimates — and with little-to-no warning in geological terms.

A world expert in uranium-lead isotopic dating, Bowring began by 2002 to see what he later termed “the double-edged sword of high-precision geochronology.” As the field experienced rapid advances in precision, resolution, and quantitative stratigraphic analysis, many new techniques were developing in parallel. He recognized that without calibration and intercalibration of radioisotopic dating methods and quantitative chronostratigraphy, their accuracy and capacity as individual tools for understanding deep time were diminished. In response, he and colleague Doug Erwin conceived the EARTHTIME Initiative, a community-based effort to foster collaboration across the disciplines and eliminate inter-laboratory and inter-technique biases. Bowring’s common refrain to members to “check our egos at the door” reflected his unwavering goal of pushing the accuracy of geochronology to new levels, and it helped the initiative build consensus and develop best practices and protocols. EARTHTIME continues to lead international workshops, expanding beyond topics of calibration and standardization to engage the broader geoscience community in understanding the rock record in ever more refined and nuanced ways.

“If the art of geochronology is the rendering of dates in their proper geologic context, Sam is our Michelangelo,” said Tom Jordan, former head of the MIT Department of Earth, Atmospheric and Planetary Sciences (EAPS) and a close friend and colleague. “He has always insisted that knowing what you are dating and why are as important as fixing the date itself; that the precision of absolute dating is most powerful when samples can be placed precisely in section.”

Bowring’s interest in applying tracer isotopes to examine Earth systems also extended to their utility in tracking environmental contaminants. His lab developed methods not only for tracing naturally occurring sources and establishing natural regional baselines, but also for documenting variations that correlate with anthropogenic inputs associated with urbanization and industrialization.

A dedicated teacher and mentor

Bowring joined the faculty of EAPS at MIT in 1991, where, in addition to fostering the careers of over two dozen graduate students and postdocs, he demonstrated a career-long commitment to advancing undergraduate education. For more than 20 years Bowring served as a first-year and undergraduate advisor, and he was named a Margaret MacVicar Faculty Fellow in 2006 by the Institute program that recognizes faculty for “exemplary and sustained contributions to the teaching and education of undergraduates at MIT,” later earning the MIT Everett Moore Baker Memorial Award for Excellence in Undergraduate Teaching in 2007. He was also deeply involved in shaping curricula, serving on the MIT Committee on Curriculum from 2007 to 2010, as chair of the EAPS Program in Geology and Geochemistry from 1999 until 2002, and then as chair of the EAPS Undergraduate Committee until 2015. As a field geologist, he brought his keen interest in engaging students to off-campus venues, leading annual field trips that were fixtures in the department’s calendar — from western Massachusetts to Yellowstone to the Las Vegas desert.

“Sam was an exceptionally effective and dedicated undergraduate educator, having gone well ‘above and beyond’ for EAPS and our students,” recalls Grove. “He took on more undergraduate teaching than any other member of our department in the last 25 years and was deeply committed to the importance of training undergraduates in the field — providing students with hands-on experience and using real-world geology to inspire and teach fundamentals.”

Bowring was also instrumental in guiding Terrascope, a first-year learning community created jointly by EAPS and the Department of Civil and Environmental Engineering in 2002. Bowring became associate director of the program in 2006 and served as its director from 2008 to 2015. The nationally recognized program, which has been the subject of several academic papers and has grown into one of MIT’s largest first-year communities, asks students with diverse research interests to tackle complex, global problems involving sustainability, climate, and the Earth system in a series of team-oriented, student-driven classes. In 2013, Bowring and his coauthors described the innovative curriculum by saying, “Our emphasis is on using a multidisciplinary approach to show that understanding the geosciences … is important to the students' world view, whether they know it or not. We believe it is our responsibility to teach as many students as we can about the Earth system, and in our experience, Terrascope students have a greatly expanded consciousness about the Earth and humans’ effect on it.”

Born in Portsmouth, New Hampshire, on Sept. 27, 1953, Bowring was raised in Durham, New Hampshire, where he later attended the University of New Hampshire. After graduating in 1976 with a bachelor’s degree in geology, he went on to study at the New Mexico Institute of Mining and Technology, where he earned a master’s in 1980.

At the University of Kansas, Bowring had the opportunity early on to work with PhD advisor Randall Van Schmus on a project in the Northwest Territories of Canada (NWT) — where he was first introduced to collaborator Hoffman — which laid the foundation for both his PhD and his continuing studies of the NWT’s Proterozoic Wopmay orogen after he joined the faculty of Washington University in St. Louis (WU) in 1984. It was as an assistant professor at WU that Bowring made his seminal analysis of the Acasta gneiss from the region, along with Ian Williams of the Australian National University.

In addition to being named a member of the National Academy of Sciences and the American Academy of Arts and Sciences, Bowring, the Robert R. Shrock Professor Emeritus of Geology, was a fellow of the American Geophysical Union and was recognized by the organization with both the Norman L. Bowen Award and the Walter H. Bucher Medal. He was also a fellow of both the Geochemical Society and the Geological Society of America.

He is survived by his wife of 30 years, Kristine M. (Fox) Bowring, two stepdaughters, Kelley Kintner and Sara Henrick, as well as his siblings, James Bowring, Joseph Bowring, and Margaret Ann Bowring-Price. At the family’s request, there will be no formal services.

School of Engineering second quarter 2019 awards

Tue, 07/30/2019 - 1:40pm

Members of the MIT engineering faculty receive many awards in recognition of their scholarship, service, and overall excellence. Every quarter, the School of Engineering publicly recognizes their achievements by highlighting the honors, prizes, and medals won by faculty working in their academic departments, labs, and centers.

Antoine Allanore, of the Department of Materials Science and Engineering, won the Elsevier Atlas Award on May 15; he also won third place for best conference proceedings manuscript at the TMS Annual Meeting and Exhibition on March 14.

Dimitri Antoniadis, of the Department of Electrical Engineering and Computer Science, was elected to the American Academy of Arts and Sciences on April 18.

Martin Bazant, of the Department of Chemical Engineering, was named a fellow of the American Physical Society on Oct. 17, 2018.

Sangeeta Bhatia, of the Department of Electrical Engineering and Computer Science, was awarded an honorary degree of doctor of science from the University of London on July 4; she was also awarded the Othmer Gold Medal from the Science History Institute on March 8.

Richard Braatz, of the Department of Chemical Engineering, was elected to the National Academy of Engineering on Feb. 11.

Tamara Broderick, of the Department of Electrical Engineering and Computer Science, won the Notable Paper Award at the International Conference on Artificial Intelligence and Statistics on April 18.

Fikile Brushett, of the Department of Chemical Engineering, won the Electrochemical Society’s 2019 Supramaniam Srinivasan Young Investigator Award on Oct. 9, 2018; he was also named to the annual Talented Twelve list by Chemical & Engineering News on Aug. 22, 2017.

Vincent W.S. Chan, of the Department of Electrical Engineering and Computer Science, received the Best Paper Award at the IEEE International Conference on Communications on May 10.

Arup Chakraborty, of the Department of Chemical Engineering, won a Guggenheim Fellowship on March 4, 2018.

Anantha Chandrakasan, of the Department of Electrical Engineering and Computer Science, was elected to the American Academy of Arts and Sciences on April 18.

Kwanghun Chung, of the Department of Chemical Engineering, was awarded a Presidential Early Career Award for Scientists and Engineers on July 10.

Constantinos Daskalakis, of the Department of Electrical Engineering and Computer Science, won the Grace Murray Hopper Award for Outstanding Computer Scientist from the Association for Computing Machinery on May 8.

Jesús del Alamo, of the Department of Electrical Engineering and Computer Science, was named a fellow of the Materials Research Society on May 2.

Elazer R. Edelman, of the Institute for Medical Engineering and Science, won the Excellence in Mentoring Award from the Corrigan Minehan Heart Center at the Massachusetts General Hospital on June 18.

Karen K. Gleason, of the Department of Chemical Engineering, was honored with the John M. Prausnitz Institute AIChE Lecturer Award by the American Institute of Chemical Engineers on April 3.

Bill Green, of the Department of Chemical Engineering, won the R.H. Wilhelm Award in Chemical Reaction Engineering from the American Institute of Chemical Engineers on July 19.

Paula Hammond, of the Department of Chemical Engineering, was honored with the Margaret H. Rousseau Pioneer Award for Lifetime Achievement by a Woman Chemical Engineer from the American Institute of Chemical Engineers on June 1; she also received the American Chemical Society Award in Applied Polymer Science on Jan. 8, 2018.

Ruonan Han, of the Department of Electrical Engineering and Computer Science, won the Outstanding Researcher Award from Intel Corporation on April 1.

Song Han, of the Department of Electrical Engineering and Computer Science, was named to the annual list of Innovators Under 35 by MIT Technology Review on June 25.

Klavs Jensen, of the Department of Chemical Engineering, was honored with the John M. Prausnitz Institute AIChE Lecturer Award by the American Institute of Chemical Engineers on Aug. 21, 2018; he was also recognized with the Corning International Prize for Outstanding Work in Continuous Flow Reactors on May 1, 2018.

David R. Karger, of the Department of Electrical Engineering and Computer Science, was elected to the American Academy of Arts and Sciences on April 18.

Dina Katabi, of the Department of Electrical Engineering and Computer Science, was named a Great Immigrant by the Carnegie Corporation of New York on June 27.

Manolis Kellis, of the Department of Electrical Engineering and Computer Science, was honored as a speaker by the Mendel Lectures Committee on May 2.

Jeehwan Kim, of the Department of Mechanical Engineering, was awarded the Young Faculty Award from the Defense Advanced Research Projects Agency on May 28.

Heather Kulik, of the Department of Chemical Engineering, was awarded a CAREER award from the National Science Foundation on Feb. 7; she won the Journal of Physical Chemistry and PHYS Division Lectureship Award from the Journal of Physical Chemistry and the Physical Chemistry Division of the American Chemical Society on July 1; she was honored with the Marion Milligan Mason Award on Oct. 26, 2018; she earned the DARPA Young Faculty Award on June 20, 2018; she also won the Young Investigator Award from the Office of Naval Research on Feb. 21, 2018.

Robert Langer, of the Department of Chemical Engineering, won the Dreyfus Prize for Chemistry in Support of Human Health from the Camille and Henry Dreyfus Foundation on May 14; he was also named to the 2018 Medicine Maker Power List on May 8, 2018; he was also named a U.S. Science Envoy on June 18, 2018.

John Lienhard, of the Department of Mechanical Engineering, received the Edward F. Obert Award from the American Society of Mechanical Engineers on May 28.

Nancy Lynch, of the Department of Electrical Engineering and Computer Science, won the TDCP Outstanding Technical Achievement Award from the Institute of Electrical and Electronics Engineers on April 18.

Karthish Manthiram, of the Department of Chemical Engineering, received a Petroleum Research Fund grant from the American Chemical Society on June 28.

Benedetto Marelli, of the Department of Civil and Environmental Engineering, won a Presidential Early Career Award for Scientists and Engineers on July 10.

Robert T. Morris, of the Department of Electrical Engineering and Computer Science, was elected to the National Academy of Engineering on Feb. 11.

Heidi Nepf, of the Department of Civil and Environmental Engineering, won the Hunter Rouse Hydraulic Engineering Award from the American Society of Civil Engineers on May 20.

Dava Newman, of the Department of Aeronautics and Astronautics, was named co-chair of the Committee on Biological and Physical Sciences in Space by the National Academies of Sciences, Engineering, and Medicine on April 8.

Kristala Prather, of the Department of Chemical Engineering, was elected a fellow of the American Association for the Advancement of Science on Nov. 27, 2018.

Ellen Roche, of the Department of Mechanical Engineering, won the Child Health Research Award from the Charles H. Hood Foundation on June 13; she was also awarded a CAREER award from the National Science Foundation on Feb. 20.

Yuriy Román, of the Department of Chemical Engineering, received the Early Career in Catalysis Award from the American Chemical Society Catalysis Science and Technology Division on Feb. 28; he also received the Rutherford Aris Award from the North American Symposium on Chemical Reaction Engineering on March 10.

Julian Shun, of the Department of Electrical Engineering and Computer Science, was awarded a CAREER award from the National Science Foundation on Feb. 26.

Hadley Sikes, of the Department of Chemical Engineering, was honored with the Best of BIOT award from the ACS Division of Biochemical Technology on Sept. 9, 2018.

Zachary Smith, of the Department of Chemical Engineering, was awarded the Doctoral New Investigator Grant from the American Chemical Society, on May 22.

Michael Strano, of the Department of Chemical Engineering, won the Andreas Acrivos Award for Professional Progress in Chemical Engineering from the American Institute of Chemical Engineers on July 1.

Greg Stephanopoulos, of the Department of Chemical Engineering, was honored with the Gaden Award for Biotechnology and Bioengineering on March 31.

Harry Tuller, of the Department of Materials Science and Engineering, received the Thomas Egleston Medal for Distinguished Engineering Achievement from Columbia University on May 3.

Caroline Uhler, of the Department of Electrical Engineering and Computer Science, won a Simons Investigator Award in the Mathematical Modeling of Living Systems from the Simons Foundation on June 19.

University of Regensburg and MIT-Germany expand partnership

Tue, 07/30/2019 - 12:10pm

“MISTI brought me beyond the tourism level of being in Germany,” says MIT junior Tatsuya Daniel. “Through my Global Teaching Labs experience with the University of Regensburg, I was able to be directly immersed in the German education style.” Daniel is a student in the MIT-Germany Program and is one of many to benefit from the growing partnership between the program and the University of Regensburg (UR). MIT International Science and Technology Initiatives (MISTI) creates relationships with universities and other organizations around the world, providing students and faculty with opportunities to broaden their research and education. UR was the first university to create an official collaboration with MIT-Germany, helping the program create a model that has now been adopted by other German university partners.

The original agreement was built on a solid foundation of student experiences, and the renewal continues and expands UR’s unique versions of MISTI’s Global Teaching Labs (GTL) and Global Startup Labs (GSL), as well as opportunities for research.

“The renewal of the partnership with the University of Regensburg is an exciting milestone for the MIT-Germany Program,” says faculty director Markus Buehler. “It will allow MIT students to gain valuable teaching and research experiences and participate in cutting-edge research. For example, one of our students has joined their theoretical physics department this summer to work on conducting lattice quantum chromodynamics calculations of hadronic observables. We anticipate that many other MIT students will have the opportunity to live, learn, and work in Bavaria through this partnership.”

GTL has proven to be one of the most popular pieces of the collaboration, giving MIT students the opportunity to learn through teaching. GTL challenges MIT students to synthesize and present what they know, work in a team, and communicate with peers of a different cultural background, all while sharing MIT's unique approach to science and engineering education with high school students around the world.

Daniel speaks highly of his experience as a GTL instructor from both an educational and cultural perspective. “By working with UR professors and students, we were able to identify differences in how students are taught in the U.S. and Germany. This helped our preparation by ensuring that we were able to clarify any points of confusion among the high school students.”

Regensburg’s Entrepreneurship Boot Camp is modeled after MISTI’s successful GSL programs. A small team of MIT graduate and undergraduate students works with UR Professor Christian Wolff to create and deliver a six-week entrepreneurship seminar for students in the UR Media Informatics MSc program.

“This was a wonderful experience for me,” said MIT doctoral candidate Madhav Kumar, who visited UR as a GSL instructor last summer. “Teaching entrepreneurship to students with advanced technical degrees was both challenging and extremely enriching. Our one-on-one brainstorming sessions with UR student groups helped us learn each other’s perspectives much better in this shared entrepreneurial journey."  

UR students benefit from participating in an intensive curriculum that ranges from hands-on exercises to guest speakers. “I learned a lot in the GSL,” said UR participant Andrea Fischer. “We not only learned about business, we also trained to speak in front of people and give presentations. And the guest speakers were great — entrepreneurs talking not only about their success, but about their failures as well.”

Another unique feature of the UR partnership is the development of short annual workshops or roundtables on a variety of topics, held alternately at MIT and UR. Past workshops have presented the latest pedagogical techniques in STEM to select groups of faculty and students. This cultural exchange has proven valuable so far, as participants are able to compare and contrast their experiences and best practices.

"We were very surprised to see how diverse and with which original methodical approaches university teaching is done at MIT,” said 2017 workshop attendee Oliver Tempner, professor of chemistry didactics. “I hope that more and more university teachers in Germany will take these student-centered learning approaches into account in their seminars."

This commitment to a student-focused educational experience was also highlighted by participant Arne Dittmer, professor of biology didactics. "I was very impressed by all the activities to improve academic teaching. All the people we met were highly motivated to enhance the culture of teaching and learning at MIT."

New workshop topics are selected each year, and the next session may focus on Regensburg’s deep expertise in physics research. This expansion and strengthening of faculty programs was a critical goal of the renewal.

Another exciting faculty-facing component of the new agreement is the integration of the University of Regensburg/MIT-Germany partnership into the MISTI Global Seed Fund (GSF) program. The inaugural year will provide one award to support the international exchange of faculty and students to jump-start new collaborative projects. The 2019-20 GSF call for proposals is now open for this and the rest of the MISTI funds.

“The creation of a dedicated seed fund is an exciting new piece of our partnership,” says Justin Leahey, MIT-Germany program manager. “It will be a great complement to our workshops and will further strengthen MIT’s ties with the University of Regensburg.” 

MIT International Science and Technology Initiatives (MISTI) creates experiential learning opportunities across the globe for MIT students that increase their ability to understand and address real-world problems. MISTI’s Global Seed Funds grant program promotes collaboration between MIT faculty members and their counterparts abroad. A nucleus of international activity at MIT, MISTI is made possible through partnerships with corporations, governments, universities, foundations, and individuals. MISTI is located in the Center for International Studies within the School of Humanities, Arts, and Social Sciences.
