Feed aggregator

Author Correction: Sea-level driven land conversion and the formation of ghost forests

Nature Climate Change - Fri, 08/02/2019 - 12:00am

Nature Climate Change, Published online: 02 August 2019; doi:10.1038/s41558-019-0568-8

Model predicts cognitive decline due to Alzheimer’s, up to two years out

MIT Latest News - Thu, 08/01/2019 - 11:59pm

A new model developed at MIT can help predict if patients at risk for Alzheimer’s disease will experience clinically significant cognitive decline due to the disease, by predicting their cognition test scores up to two years in the future.

The model could be used to improve the selection of candidate drugs and participant cohorts for clinical trials, which have been notoriously unsuccessful thus far. It would also let patients know they may experience rapid cognitive decline in the coming months and years, so they and their loved ones can prepare.  

Pharmaceutical firms over the past two decades have injected hundreds of billions of dollars into Alzheimer’s research. Yet the field has been plagued with failure: Between 1998 and 2017, there were 146 unsuccessful attempts to develop drugs to treat or prevent the disease, according to a 2018 report from the Pharmaceutical Research and Manufacturers of America. In that time, only four new medicines were approved, and only to treat symptoms. More than 90 drug candidates are currently in development.

Studies suggest greater success in bringing drugs to market could come down to recruiting candidates who are in the disease’s early stages, before symptoms are evident, which is when treatment is most effective. In a paper to be presented next week at the Machine Learning for Health Care conference, MIT Media Lab researchers describe a machine-learning model that can help clinicians zero in on that specific cohort of participants.

They first trained a “population” model on an entire dataset of clinically significant cognitive test scores and other biometric data, collected during semiannual doctor’s visits from both Alzheimer’s patients and healthy individuals. From those data, the model learns patterns that help predict how patients will score on cognitive tests taken between visits. For new participants, a second model, personalized for each patient, continuously updates score predictions based on newly recorded data, such as information collected during the most recent visits.

Experiments indicate accurate predictions can be made looking ahead six, 12, 18, and 24 months. Clinicians could thus use the model to help select at-risk clinical trial participants who are likely to demonstrate rapid cognitive decline, possibly even before other clinical symptoms emerge. Treating such patients early on may help clinicians better track which antidementia medicines are and aren’t working.

“Accurate prediction of cognitive decline from six to 24 months is critical to designing clinical trials,” says Oggi Rudovic, a Media Lab researcher. “Being able to accurately predict future cognitive changes can reduce the number of visits the participant has to make, which can be expensive and time-consuming. Apart from helping develop a useful drug, the goal is to help reduce the costs of clinical trials to make them more affordable and done on larger scales.”

Joining Rudovic on the paper are: Yuria Utsumi, an undergraduate student, and Kelly Peterson, a graduate student, both in the Department of Electrical Engineering and Computer Science; Ricardo Guerrero and Daniel Rueckert, both of Imperial College London; and Rosalind Picard, a professor of media arts and sciences and director of affective computing research in the Media Lab.

Population to personalization

For their work, the researchers leveraged the world’s largest Alzheimer’s disease clinical trial dataset, the Alzheimer’s Disease Neuroimaging Initiative (ADNI). The dataset contains data from around 1,700 participants, with and without Alzheimer’s, recorded during semiannual doctor’s visits over 10 years.

The data include each participant’s AD Assessment Scale-cognition sub-scale (ADAS-Cog13) scores, the most widely used cognitive metric in clinical trials of Alzheimer’s disease drugs. The test assesses memory, language, and orientation on a scale of increasing severity, up to 85 points. The dataset also includes MRI scans, demographic and genetic information, and cerebrospinal fluid measurements.

In all, the researchers trained and tested their model on a sub-cohort of 100 participants, each of whom made more than 10 visits, had less than 85 percent missing data, and had more than 600 computable features. Of those participants, 48 were diagnosed with Alzheimer’s disease. But the data are sparse, with different combinations of features missing for most of the participants.

To tackle that, the researchers used the data to train a population model powered by a “nonparametric” probability framework, called Gaussian Processes (GPs), which has flexible parameters to fit various probability distributions and to process uncertainties in data. This technique measures similarities between variables, such as patient data points, to predict a value for an unseen data point — such as a cognitive score. The output also contains an estimate for how certain it is about the prediction. The model works robustly even when analyzing datasets with missing values or lots of noise from different data-collecting formats.
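
To make the idea concrete, here is a minimal sketch of such a population model using scikit-learn’s Gaussian process regressor. The feature arrays, kernel choice, and score ranges are placeholders standing in for the ADNI data; the paper’s actual features and GP setup are not detailed in the article.

    # Hedged sketch of a GP "population" model: placeholder arrays stand in
    # for ADNI biometric features (X_pop) and ADAS-Cog13 scores (y_pop).
    import numpy as np
    from sklearn.gaussian_process import GaussianProcessRegressor
    from sklearn.gaussian_process.kernels import RBF, WhiteKernel

    rng = np.random.default_rng(0)
    X_pop = rng.normal(size=(500, 20))                # one feature row per visit
    y_pop = rng.normal(loc=20.0, scale=8.0, size=500) # cognitive test scores

    # The RBF kernel encodes similarity between patient data points; the
    # WhiteKernel term absorbs noise from heterogeneous data collection.
    kernel = RBF(length_scale=1.0) + WhiteKernel(noise_level=1.0)
    gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
    gp.fit(X_pop, y_pop)

    # Predict an unseen visit's score, plus the model's uncertainty about it.
    x_new = rng.normal(size=(1, 20))
    score_mean, score_std = gp.predict(x_new, return_std=True)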

But in evaluating the model on new patients from a held-out portion of participants, the researchers found its predictions weren’t as accurate as they could be. So they personalized the population model for each new patient. The system would then progressively fill in data gaps with each new patient visit and update the ADAS-Cog13 score prediction accordingly, by continuously updating the previously unknown distributions of the GPs. After about four visits, the personalized models significantly reduced the error rate in predictions. They also outperformed various traditional machine-learning approaches used for clinical data.

Learning how to learn

But the researchers found the personalized models’ results were still suboptimal. To fix that, they devised a “metalearning” scheme that learns to automatically choose which type of model, population or personalized, works best for any given participant at any given time, depending on the data being analyzed. Metalearning has been used before in computer vision and machine translation to learn new skills or adapt to new environments rapidly from a few training examples. But this is the first time it’s been applied to tracking the cognitive decline of Alzheimer’s patients, where limited data is a main challenge, Rudovic says.

The scheme essentially simulates how the different models perform on a given task — such as predicting an ADAS-Cog13 score — and learns the best fit. At each visit of a new patient, the scheme assigns the appropriate model based on the previous data. For patients with noisy, sparse data during early visits, for instance, population models make more accurate predictions. When patients start with more data or collect more through subsequent visits, however, personalized models perform better.
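
The toy selector below illustrates the flavor of that decision rule. Note that the paper’s metalearner is itself learned rather than hand-coded; the three-visit error window and the models’ interfaces here are assumptions for illustration only.

    # Hypothetical stand-in for the metalearning selector: pick whichever
    # model (population or personalized) has the lower error on this
    # patient's most recent visits. The real scheme learns this choice;
    # this greedy rule only sketches the idea.
    import numpy as np

    def choose_model(gp_population, gp_personal, history):
        """history: list of (features, true_score) pairs from past visits."""
        if gp_personal is None or len(history) < 2:
            return gp_population          # early visits: sparse, noisy data
        X = np.array([x for x, _ in history[-3:]])
        y = np.array([s for _, s in history[-3:]])
        def recent_error(model):
            return np.mean(np.abs(model.predict(X) - y))
        return min((gp_population, gp_personal), key=recent_error)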

This helped reduce the error rate for predictions by a further 50 percent. “We couldn’t find a single model or fixed combination of models that could give us the best prediction,” Rudovic says. “So, we wanted to learn how to learn with this metalearning scheme. It’s like a model on top of a model that acts as a selector, trained using metaknowledge to decide which model is better to deploy.”

Next, the researchers are hoping to partner with pharmaceutical firms to implement the model into real-world Alzheimer’s clinical trials. Rudovic says the model can also be generalized to predict various metrics for Alzheimer’s and other diseases.

Finding novel materials for practical devices

MIT Latest News - Thu, 08/01/2019 - 12:55pm

In recent years, machine learning has proved a valuable tool for identifying new materials with properties optimized for specific applications. Working with large, well-defined data sets, computers learn to perform an analytical task to generate a correct answer and then apply the same technique to an unknown data set.

While that approach has guided the development of valuable new materials, they’ve primarily been organic compounds, notes Heather Kulik PhD ’09, an assistant professor of chemical engineering. Kulik focuses instead on inorganic compounds — in particular, those based on transition metals, a family of elements (including iron and copper) that have unique and useful properties. In those compounds — known as transition metal complexes — the metal atom occurs at the center with chemically bound arms, or ligands, made of carbon, hydrogen, nitrogen, or oxygen atoms radiating outward. 

Transition metal complexes already play important roles in areas ranging from energy storage to catalysis for manufacturing fine chemicals — for example, for pharmaceuticals. But Kulik thinks that machine learning could further expand their use. Indeed, her group has been working not only to apply machine learning to inorganics — a novel and challenging undertaking — but also to use the technique to explore new territory. “We were interested in understanding how far we could push our models to do discovery — to make predictions on compounds that haven’t been seen before,” says Kulik. 

Sensors and computers 

For the past four years, Kulik and Jon Paul Janet, a graduate student in chemical engineering, have been focusing on transition metal complexes with “spin” — a quantum mechanical property of electrons. Usually, electrons occur in pairs, one with spin up and the other with spin down, so they cancel each other out and there’s no net spin. But in a transition metal, electrons can be unpaired, and the resulting net spin is the property that makes inorganic complexes of interest, says Kulik. “Tailoring how unpaired the electrons are gives us a unique knob for tailoring properties.” 

A given complex has a preferred spin state. But add some energy — say, from light or heat — and it can flip to the other state. In the process, it can exhibit changes in macroscale properties such as size or color. When the energy needed to cause the flip — called the spin-splitting energy — is near zero, the complex is a good candidate for use as a sensor, or perhaps as a fundamental component in a quantum computer. 

Chemists know of many metal-ligand combinations with spin-splitting energies near zero, making them potential “spin-crossover” (SCO) complexes for such practical applications. But the full set of possibilities is vast. The spin-splitting energy of a transition metal complex is determined by what ligands are combined with a given metal, and there are almost endless ligands from which to choose. The challenge is to find novel combinations with the desired property to become SCOs — without resorting to millions of trial-and-error tests in a lab. 

Translating molecules into numbers 

The standard way to analyze the electronic structure of molecules is to use a computational modeling method called density functional theory, or DFT. The results of a DFT calculation are fairly accurate — especially for organic systems — but performing a calculation for a single compound can take hours, or even days. In contrast, a machine-learning tool called an artificial neural network (ANN) can be trained to perform the same analysis and then do it in just seconds. As a result, ANNs are much more practical for searching for possible SCOs in the huge space of feasible complexes.
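
As an illustration of why the ANN route is attractive, the sketch below trains a small feed-forward network on placeholder descriptor vectors. The group’s actual descriptors and architecture are not reproduced here, and random arrays stand in for DFT-computed training labels.

    # Hedged sketch of an ANN surrogate for DFT: placeholder descriptors and
    # spin-splitting labels; the real model uses purpose-built representations.
    import numpy as np
    from sklearn.neural_network import MLPRegressor

    rng = np.random.default_rng(1)
    X_train = rng.normal(size=(2000, 150))   # descriptor vector per complex
    y_train = rng.normal(size=2000)          # DFT spin-splitting (kcal/mol)

    ann = MLPRegressor(hidden_layer_sizes=(200, 200), max_iter=500)
    ann.fit(X_train, y_train)

    # Once trained, screening thousands of candidates takes seconds, versus
    # hours or days per compound for DFT.
    candidates = rng.normal(size=(5600, 150))
    predicted_splitting = ann.predict(candidates)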

Because an ANN requires a numerical input to operate, the researchers’ first challenge was to find a way to represent a given transition metal complex as a series of numbers, each describing a selected property. There are rules for defining representations for organic molecules, where the physical structure of a molecule tells a lot about its properties and behavior. But when the researchers followed those rules for transition metal complexes, it didn’t work. “The metal-organic bond is very tricky to get right,” says Kulik. “There are unique properties of the bonding that are more variable. There are many more ways the electrons can choose to form a bond.” So the researchers needed to make up new rules for defining a representation that would be predictive in inorganic chemistry. 

Using machine learning, they explored various ways of representing a transition metal complex for analyzing spin-splitting energy. The results were best when the representation gave the most emphasis to the properties of the metal center and the metal-ligand connection and less emphasis to the properties of ligands farther out. Interestingly, their studies showed that representations that gave more equal emphasis overall worked best when the goal was to predict other properties, such as the ligand-metal bond length or the tendency to accept electrons. 

Testing the ANN 

As a test of their approach, Kulik and Janet — assisted by Lydia Chan, a summer intern from Troy High School in Fullerton, California — defined a set of transition metal complexes based on four transition metals — chromium, manganese, iron, and cobalt — in two oxidation states with 16 ligands (each molecule can have up to two). By combining those building blocks, they created a “search space” of 5,600 complexes — some of them familiar and well-studied, and some of them totally unknown. 

In previous work, the researchers had trained an ANN on thousands of compounds that were well-known in transition metal chemistry. To test the trained ANN’s ability to explore a new chemical space to find compounds with the targeted properties, they tried applying it to the pool of 5,600 complexes, 113 of which it had seen in the previous study. 

The result was a plot, generated by the ANN, that sorts the complexes onto a surface. The white regions indicate complexes with spin-splitting energies within 5 kilocalories per mole of zero, meaning that they are potentially good SCO candidates. The red and blue regions represent complexes with spin-splitting energies too large to be useful. Complexes that have iron centers and similar ligands — in other words, related compounds whose spin-crossover energies should be similar — cluster together on the plot, evidence of the good correspondence between the researchers’ representation and key properties of the complex.

But there’s one catch: Not all of the spin-splitting predictions are accurate. If a complex is very different from those on which the network was trained, the ANN analysis may not be reliable — a standard problem when applying machine-learning models to discovery in materials science or chemistry, notes Kulik. Using an approach that had proved successful in their previous work, the researchers compared the numeric representations of the training and test complexes and ruled out all the test complexes where the difference was too great.
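
A simple version of that check can be written as a nearest-neighbor distance cutoff in representation space. The arrays and threshold below are placeholders, not the values used in the study.

    # Sketch of the distance-based reliability filter: discard test complexes
    # whose representation sits too far from everything in the training set.
    import numpy as np
    from sklearn.neighbors import NearestNeighbors

    rng = np.random.default_rng(2)
    X_train = rng.normal(size=(2000, 150))     # training-set representations
    X_test = rng.normal(size=(5600, 150))      # search-space representations

    nn = NearestNeighbors(n_neighbors=1).fit(X_train)
    dist, _ = nn.kneighbors(X_test)
    CUTOFF = 3.0                               # placeholder threshold
    in_domain = X_test[dist[:, 0] < CUTOFF]    # only these predictions trusted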

Focusing on the best options 

Performing the ANN analysis of all 5,600 complexes took just an hour. But in the real world, the number of complexes to be explored could be thousands of times larger — and any promising candidates would require a full DFT calculation. The researchers therefore needed a method of evaluating a big data set to identify any unacceptable candidates even before the ANN analysis. To that end, they developed a genetic algorithm — an approach inspired by natural selection — to score individual complexes and discard those deemed to be unfit. 

To prescreen a data set, the genetic algorithm first randomly selects 20 samples from the full set of complexes. It then assigns a “fitness” score to each sample based on three measures. First, is its spin-crossover energy low enough for it to be a good SCO? To find out, the neural network evaluates each of the 20 complexes. Second, is the complex too far away from the training data? If so, the spin-crossover energy from the ANN may be inaccurate. And finally, is the complex too close to the training data? If so, the researchers have already run a DFT calculation on a similar molecule, so the candidate is not of interest in the quest for new options. 

Based on its three-part evaluation of the first 20 candidates, the genetic algorithm throws out unfit options and saves the fittest for the next round. To ensure the diversity of the saved compounds, the algorithm calls for some of them to mutate a bit. One complex may be assigned a new, randomly selected ligand, or two promising complexes may swap ligands. After all, if a complex looks good, then something very similar could be even better — and the goal here is to find novel candidates. The genetic algorithm then adds some new, randomly chosen complexes to fill out the second group of 20 and performs its next analysis. By repeating this process a total of 21 times, it produces 21 generations of options. It thus proceeds through the search space, allowing the fittest candidates to survive and reproduce, and the unfit to die out. 
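
In code, that loop might look like the toy sketch below, where tuples of metal and ligand indices stand in for complexes, and random stubs replace the trained ANN and the distance measure. The population size and generation count come from the article; everything else is illustrative.

    # Toy version of the three-part genetic algorithm described above.
    import random

    POP_SIZE, GENERATIONS, N_LIGANDS = 20, 21, 16
    FAR_CUTOFF, NEAR_CUTOFF = 4.0, 0.5        # placeholder thresholds

    def random_complex():
        # (metal, ligand1, ligand2) indices; purely illustrative encoding
        return (random.randrange(4),
                random.randrange(N_LIGANDS), random.randrange(N_LIGANDS))

    def ann_splitting(c):        # stub for the trained ANN's prediction
        return random.uniform(-30.0, 30.0)

    def training_distance(c):    # stub for distance to the training data
        return random.uniform(0.0, 6.0)

    def fitness(c):
        score = -abs(ann_splitting(c))        # near-zero splitting is best
        d = training_distance(c)
        if d > FAR_CUTOFF:  score -= 100.0    # too far: ANN unreliable
        if d < NEAR_CUTOFF: score -= 100.0    # too close: already studied
        return score

    def mutate(c):
        metal, l1, l2 = c
        l1 = random.randrange(N_LIGANDS)      # assign a new random ligand
        return (metal, l1, l2)

    def crossover(c1, c2):
        # two promising complexes swap ligands
        return (c1[0], c1[1], c2[2]), (c2[0], c2[1], c1[2])

    population = [random_complex() for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        survivors = sorted(population, key=fitness, reverse=True)[:10]
        a, b = crossover(survivors[0], survivors[1])
        children = [mutate(c) for c in survivors[:3]] + [a, b]
        fresh = [random_complex()
                 for _ in range(POP_SIZE - len(survivors) - len(children))]
        population = survivors + children + fresh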

Performing the 21-generation analysis on the full 5,600-complex data set required just over five minutes on a standard desktop computer, and it yielded 372 leads with a good combination of high diversity and acceptable confidence. The researchers then used DFT to examine 56 complexes randomly chosen from among those leads, and the results confirmed that two-thirds of them could be good SCOs. 

While a success rate of two-thirds may not sound great, the researchers make two points. First, their definition of what might make a good SCO was very restrictive: For a complex to survive, its spin-splitting energy had to be extremely small. And second, given a space of 5,600 complexes and nothing to go on, how many DFT analyses would be required to find 37 leads? As Janet notes, “It doesn’t matter how many we evaluated with the neural network because it’s so cheap. It’s the DFT calculations that take time.” 

Best of all, using their approach enabled the researchers to find some unconventional SCO candidates that wouldn’t have been thought of based on what’s been studied in the past. “There are rules that people have — heuristics in their heads — for how they would build a spin-crossover complex,” says Kulik. “We showed that you can find unexpected combinations of metals and ligands that aren’t normally studied but can be promising as spin-crossover candidates.” 

Sharing the new tools 

To support the worldwide search for new materials, the researchers have incorporated the genetic algorithm and ANN into "molSimplify," the group’s online, open-source software toolkit that anyone can download and use to build and simulate transition metal complexes. To help potential users, the site provides tutorials that demonstrate how to use key features of the open-source software codes. Development of molSimplify began with funding from the MIT Energy Initiative in 2014, and all the students in Kulik’s group have contributed to it since then. 

The researchers continue to improve their neural network for investigating potential SCOs and to post updated versions of molSimplify. Meanwhile, others in Kulik’s lab are developing tools that can identify promising compounds for other applications. For example, one important area of focus is catalyst design. Chemistry graduate student Aditya Nandy is focusing on finding a better catalyst for converting methane gas to an easier-to-handle liquid fuel such as methanol — a particularly challenging problem. “Now we have an outside molecule coming in, and our complex — the catalyst — has to act on that molecule to perform a chemical transformation that takes place in a whole series of steps,” says Nandy. “Machine learning will be super-useful in figuring out the important design parameters for a transition metal complex that will make each step in that process energetically favorable.”

This research was supported by the U.S. Department of the Navy’s Office of Naval Research, the U.S. Department of Energy, the National Science Foundation, and the MIT Energy Initiative Seed Fund Program. Jon Paul Janet was supported in part by an MIT-Singapore University of Technology and Design Graduate Fellowship. Heather Kulik has received a National Science Foundation CAREER Award (2019) and an Office of Naval Research Young Investigator Award (2018), among others.

This article appears in the Spring 2019 issue of Energy Futures, the magazine of the MIT Energy Initiative. 

SWEDEN: Big cities face power shortage after fuel-tax hike

ClimateWire News - Thu, 08/01/2019 - 6:58am
Sweden's introduction today of a tax aimed at phasing out the nation's last remaining coal and gas plants to curb global warming comes with an unintended consequence for some of its biggest cities.

WILDFIRES: Putin sends military to fight blazes raging in Siberia

ClimateWire News - Thu, 08/01/2019 - 6:58am
President Vladimir Putin ordered the Russian military to help battle wildfires burning across a territory the size of Belgium after record high temperatures turned huge patches of forest into a tinderbox.

TEMPERATURES: Scientists: 10 warmest U.K. years have all been since 2002

ClimateWire News - Thu, 08/01/2019 - 6:58am
Britain's weather service says the country's 10 hottest years since the 19th century have all occurred since 2002, as climate change makes the U.K. warmer and wetter.

CALIFORNIA: Newsom adds 400 firefighters as chaparral turns to tinder

ClimateWire News - Thu, 08/01/2019 - 6:58am
California is hiring almost 400 firefighters as the wildfire season approaches.

EXTREME WEATHER: Businesses learn hard lessons when not prepared for disaster

ClimateWire News - Thu, 08/01/2019 - 6:58am
When Hurricane Irma hit Puerto Rico in September 2017, Carlos Melendez couldn't contact the staffers or customers of his San Juan-based technology firm, Wovenware.

FINANCE: BlackRock shareholders lose billions on fossil fuel — study

ClimateWire News - Thu, 08/01/2019 - 6:58am
Shareholders in BlackRock Inc., the world's largest asset manager, have lost billions of dollars on the firm's investments in oil, gas and coal companies, according to a new report.

COURTS: Judge won't indulge 'revolutionary' wilderness case

ClimateWire News - Thu, 08/01/2019 - 6:58am
A federal court has scrapped for good an unconventional climate case asserting a "right to wilderness."

COURTS: Greens sue over climate threats to penguins

ClimateWire News - Thu, 08/01/2019 - 6:58am
An environmental group is suing on behalf of the world's tallest penguin species, claiming that global warming is rapidly depleting emperor penguin populations that should be protected by the Fish and Wildlife Service.

TRANSPORTATION: Climate policy for cars could hurt the poor, advocates say

ClimateWire News - Thu, 08/01/2019 - 6:58am
Environmental justice advocates worry a proposed cap-and-invest program for cars won't help the low-income and minority communities that have historically borne the brunt of air pollution from vehicles.

EXTREME WEATHER: Immigrants might face storm dangers over risk of deportation

ClimateWire News - Thu, 08/01/2019 - 6:58am
The day before President Trump's dramatic announcement on July 12 that immigration authorities would make mass arrests in the coming days, the Department of Homeland Security sent a different signal.

CAMPAIGN 2020: 'That is kindergarten.' Dems attack Biden's climate plan

ClimateWire News - Thu, 08/01/2019 - 6:58am
DETROIT — Democrats running for president expanded their attacks last night about insufficient climate action beyond the usual targets of President Trump and oil companies. They took aim at former Vice President Joe Biden.

ENDANGERED SPECIES: Trump set to weaken wildlife rules during 'mass extinction'

ClimateWire News - Thu, 08/01/2019 - 6:58am
Scientists say plant and animal species are disappearing so fast that it amounts to the sixth mass extinction in Earth's history. That's the backdrop to proposed changes under the Trump administration that could make it harder to protect wildlife.

Software to empower workers on the factory floor

MIT Latest News - Wed, 07/31/2019 - 11:59pm

Manufacturers are constantly tweaking their processes to get rid of waste and improve productivity. As such, the software they use should be as nimble and responsive as the operations on their factory floors.

Instead, much of the software in today’s factories is static. In many cases, it’s developed by an outside company to work in a broad range of factories, and implemented from the top down by executives who know software can help but don’t know how best to adopt it.

That’s where MIT spinout Tulip comes in. The company has developed a customizable manufacturing app platform that connects people, machines, and sensors to help optimize processes on a shop floor. Tulip’s apps provide workers with interactive instructions, quality checks, and a way to easily communicate with managers if something is wrong.

Managers, in turn, can make changes or additions to the apps in real-time and use Tulip’s analytics dashboard to pinpoint problems with machines and assembly processes.

“With this notion of agile manufacturing [in which changes are constant], you need your software to match the philosophical process you’re using to improve your organization,” says Tulip co-founder and CTO Rony Kubat ’01, SM ’08, PhD ’12. “With our platform, we’re empowering the manufacturing engineers on the line to make changes themselves. That’s in contrast to the traditional way of making manufacturing software. It’s a bottom-up kind of thing.”

Tulip, founded by Kubat and CEO Natan Linder SM ’11, PhD ’17, is currently working with multiple Fortune 100 and Fortune 500 companies operating in 13 different countries, including Bosch, Jabil, and Kohler. Tulip’s customers make everything from shoes to jewelry, medical devices, and consumer electronics.

With the platform’s scalable design, Kubat says it can help factories of any size, as long as they employ people on the shop floor.

In that way, Tulip’s tools are empowering workers in an industry that has historically trended toward automation. As the company continues building out its platform — including adding machine vision and machine learning capabilities — it hopes to continue encouraging manufacturers to see people as an indispensable resource.

A new approach to manufacturing software

In 2012, Kubat was pursuing his PhD in the MIT Media Lab’s Fluid Interfaces group when he met Linder, then a graduate student. During their research, several Media Lab member companies gave the founders tours of their factory floors and introduced them to some of the production challenges they were grappling with.

“The Media Lab is such a special place,” Kubat says. “You have this contrast of an antidisciplinary mentality, where you’re putting faculty from completely different walks of life in the same building, giving it this creative wildness that is really invigorating, plus this grounding in the real world that comes from the member organizations that are part of the Media Lab.”

During those factory tours, the founders noticed similar problems across industries.

“The typical way manufacturing software is deployed is in these multiyear cycles,” Kubat says. “You sign a multimillion dollar contract that’s going to overhaul everything, and you get three years to deploy it all, and you get your screens in the end that everyone isn’t really happy with because they solve yesterday’s problems. We’re bringing a more modern approach to software development for manufacturing.”

In 2014, just as Linder completed his PhD research, the founders decided to start Tulip. (Linder would later return to MIT to defend his thesis.) Relying on their personal savings for funding, they recruited a team of students from MIT’s Undergraduate Research Opportunities Program and began building a prototype for New Balance, a Media Lab member company that has factories in New England.

“We worked really closely with the first customers to do super fast iterations to make these proofs of concept that we’d try to deploy as quickly as possible,” Kubat says. “That approach isn’t new from a software perspective — deploy fast and iterate — but it is new for the manufacturing software world.”

An engine for manufacturing

The app-based platform the founders eventually built out has little in common with the sweeping software implementations that traditionally upend factory operations for better or worse. Tulip’s apps can be installed at just one workstation and then scaled up as needed.

The apps can also be designed by managers with no coding experience over the course of an afternoon. Typically, they start from Tulip’s app templates, which can be customized for common tasks, such as guiding a worker through an assembly process or completing a checklist.

Workers using the apps on the shop floor can submit comments on their interactive screens to do things like point out defects. Those comments are sent directly to the manager, who can make changes to the apps remotely.

“It’s a data-driven opportunity to engage the operators on the line, to gain some ownership over the process,” Kubat says.

The apps are integrated with machines and tools on the factory floor through Tulip’s router-like gateways. Those gateways also sync with sensors and cameras to give managers data from both humans and machines. All that information helps managers find bottlenecks and other factors holding back productivity.

Workers, meanwhile, are given real-time feedback on their actions from the cameras, which are usually trained on the part as it’s being assembled or on the bins the workers are reaching into. If a worker assembles a part improperly, for example, Tulip’s camera can detect the mistake, and its app can alert the worker to the error, presenting instructions on fixing it.

A demonstration of a worker assembling a part wrong, Tulip's sensors detecting the error, and then Tulip's app providing instructions for correcting the mistake.

Such quality checks can be sprinkled throughout a production line. That’s a big upgrade over traditional methods for data collection in factories, which often include a stopwatch and a clipboard, the founders say.

“That process is expensive,” Kubat says of traditional data collection methods. “It’s also biased, because when you’re being observed you might behave differently. It’s also a sampling of things, not the true picture. Our take is that all of that execution data should be something you get for free from a system that gives you additional value.”

The data Tulip collects are channeled into its analytics dashboard, which can be used to make customized tables displaying certain metrics to managers and shop floor workers.

In April, the company launched its first machine vision feature, which further helps workers minimize mistakes and improve productivity. Those objectives are in line with Tulip’s broader goal of empowering workers in factories rather than replacing them.

“We’re helping companies launch products faster and improve efficiency,” Kubat says. “That means, because you can reduce the cost of making products with people, you push back the [pressure of] automation. You don’t need automation to give you quality at scale. This has the potential to really change the dynamics of how products are delivered to the public.”

Speeding up drug discovery for brain diseases

MIT Latest News - Wed, 07/31/2019 - 2:25pm

A research team led by Whitehead Institute scientists has identified 30 distinct chemical compounds — 20 of which are drugs that are in clinical trials or have already been approved by the FDA — that boost the protein production activity of a critical gene in the brain and improve symptoms of Rett syndrome, a rare neurodevelopmental condition that often provokes autism-like behaviors in patients. The new study, conducted in human cells and mice, helps illuminate the biology of an important gene, called KCC2, which is implicated in a variety of brain diseases, including autism, epilepsy, schizophrenia, and depression. The researchers’ findings, published in the July 31 online issue of Science Translational Medicine, could help spur the development of new treatments for a host of devastating brain disorders.

“There’s increasing evidence that KCC2 plays important roles in several different disorders of the brain, suggesting that it may act as a common driver of neurological dysfunction,” says senior author Rudolf Jaenisch, a founding member of Whitehead Institute and professor of biology at MIT. “These drugs we’ve identified may help speed up the development of much-needed treatments.”

KCC2 works exclusively in the brain and spinal cord, carrying ions in and out of specialized cells known as neurons. This shuttling of electrically charged molecules helps maintain the cells’ electrochemical makeup, enabling neurons to fire when they need to and to remain idle when they don’t. If this delicate balance is upset, brain function and development go awry.

Disruptions in KCC2 function have been linked to several human brain disorders, including Rett syndrome (RTT), a progressive and often debilitating disorder that typically emerges early in life in girls and can involve disordered movement, seizures, and communication difficulties. Currently, there is no effective treatment for RTT.

Jaenisch and his colleagues, led by first author Xin Tang, devised a high-throughput screening assay to uncover drugs that increase KCC2 gene activity. Using CRISPR/Cas9 genome editing and stem cell technologies, they engineered human neurons to provide rapid readouts of the amount of KCC2 protein produced. The researchers created these so-called reporter cells from both healthy human neurons and RTT neurons that carry disease-causing mutations in the MECP2 gene. These reporter neurons were then fed into a drug-screening pipeline to find chemical compounds that can enhance KCC2 gene activity.

Tang and his colleagues screened over 900 chemical compounds, focusing on those that have been FDA-approved for use in other conditions, such as cancer, or have undergone at least some level of clinical testing. “The beauty of this approach is that many of these drugs have been studied in the context of non-brain diseases, so the mechanisms of action are known,” says Tang. “Such molecular insights enable us to learn how the KCC2 gene is regulated in neurons, while also identifying compounds with potential therapeutic value.”

The Whitehead Institute team identified a total of 30 drugs with KCC2-enhancing activity. These compounds, referred to as KEECs (short for KCC2 expression-enhancing compounds), work in a variety of ways. Some block a molecular pathway, called FLT3, which is found to be overactive in some forms of leukemia. Others inhibit the GSK3b pathway that has been implicated in several brain diseases. Another KEEC acts on SIRT1, which plays a key role in a variety of biological processes, including aging.

In follow-up experiments, the researchers exposed RTT neurons and mouse models to KEEC treatment and found that some compounds can reverse certain defects associated with the disease, including abnormalities in neuronal signaling, breathing, and movement. These efforts were made possible by a collaboration with Mriganka Sur’s group at the Picower Institute for Learning and Memory, in which Keji Li and colleagues led the behavioral experiments in mice that were essential for revealing the drugs’ potency.

“Our findings illustrate the power of an unbiased approach for discovering drugs that could significantly improve the treatment of neurological disease,” says Jaenisch. “And because we are starting with known drugs, the path to clinical translation is likely to be much shorter.”

In addition to speeding up drug development for Rett syndrome, the researchers’ unique drug-screening strategy, which harnesses an engineered gene-specific reporter to unearth promising drugs, can also be applied to other important disease-related genes in the brain. “Many seemingly distinct brain diseases share common root causes of abnormal gene expression or disrupted signaling pathways,” says Tang. “We believe our method has broad applicability and could help catalyze therapeutic discovery for a wide range of neurological conditions.”

Support for this work was provided by the National Institutes of Health, the Simons Foundation Autism Research Initiative, the Simons Center for the Social Brain at MIT, the Rett Syndrome Research Trust, the International Rett Syndrome Foundation, the Damon Runyon Cancer Foundation, and the National Cancer Institute.

Lowering emissions without breaking the bank

MIT Latest News - Wed, 07/31/2019 - 2:20pm

India’s economy is booming, driving up electric power consumption to unprecedented levels. The nation’s installed electricity capacity, which increased fivefold in the past three decades, is expected to triple over the next 20 years. At the same time, India has committed to limiting its carbon dioxide emissions growth; its Paris Agreement climate pledge is to decrease its carbon dioxide emissions intensity of GDP (CO2 emissions per unit of GDP) by 33 to 35 percent by 2030 from 2005 levels, and to boost carbon-free power to about 40 percent of installed capacity in 2030.
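
For a rough sense of what an intensity target implies, the back-of-envelope sketch below applies a 33 percent intensity cut against 25 years of 7 percent annual GDP growth. Extending the article’s current growth estimate over the whole 2005-2030 period is purely an illustrative assumption.

    # Hypothetical arithmetic: an emissions-intensity cut can coexist with
    # rising absolute emissions when the economy grows quickly.
    gdp_growth, years, intensity_cut = 0.07, 25, 0.33   # assumed inputs
    gdp_multiplier = (1 + gdp_growth) ** years          # ~5.4x larger economy
    emissions_multiplier = gdp_multiplier * (1 - intensity_cut)
    print(f"absolute emissions could still grow ~{emissions_multiplier:.1f}x")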

Can India reach its climate targets without adversely impacting its rate of economic growth — now estimated at 7 percent annually — and what policy strategy would be most effective in achieving that goal?

To address these questions, researchers from the MIT Joint Program on the Science and Policy of Global Change developed an economy-wide model of India with energy-sector detail, and applied it to simulate the achievement of each component of the nation’s Paris pledge. Representing the emissions intensity target with an economy-wide carbon price and the installed capacity target with a Renewable Portfolio Standard (RPS), they assessed the economic implications of three policy scenarios — carbon pricing, an RPS, and a combination of carbon pricing with an RPS. Their findings appear in the journal Climate Change Economics.

As a starting point, the researchers determined that imposing an economy-wide emissions reduction policy alone to meet the target emissions intensity, simulated through a carbon price, would result in the lowest cost to India’s economy. This approach would lead to emissions reductions not only in the electric power sector but throughout the economy. By contrast, they found that an RPS, which would enforce a minimum level of currently more expensive carbon-free electricity, would have the highest per-ton cost — more than 10 times higher than the economy-wide CO2 intensity policy.

“In our modeling framework, allowing emissions reduction across all sectors of the economy through an economy-wide carbon price ensures that the least-cost pathways for reducing emissions are observed,” says Arun Singh, lead author of the study. “This is constrained when electricity sector-specific targets are introduced. If renewable electricity costs are higher than the average cost of electricity, a higher share of renewables in the electricity mix makes electricity costlier, and the impacts of higher electricity prices reverberate across the economy.” A former research assistant at the MIT joint program and graduate student at the MIT Institute for Data, Systems and Society’s Technology and Policy Program, Singh now serves as an energy specialist consultant at the World Bank.

Combining an economy-wide carbon price with an RPS would, however, bring the price per ton of CO2 down from $23.38/tCO2 (in 2011 U.S. dollars) under a standalone carbon-pricing policy to a far more politically viable $6.17/tCO2 when an RPS is added. If wind and solar costs decline significantly, the cost to the economy would decrease considerably; at the lowest wind and solar cost levels simulated, the model projects that economic losses under a carbon price with RPS would be only slightly higher than those under a standalone carbon price. Thus, declining wind and solar costs could enable India to set more ambitious climate policies in future years without significantly impeding economic growth.

“Globally, it has been politically impossible to introduce CO2 prices high enough to mitigate climate change in line with the Paris Agreement goals,” says Valerie Karplus, co-author and assistant professor at the MIT Sloan School of Management. “Combining pricing approaches with technology-specific policies may be important in India, as they have elsewhere, for the politics to work.”

Developed by Singh in collaboration with his master’s thesis advisors at MIT (Karplus and MIT Joint Program Principal Research Scientist Niven Winchester, who also co-authored the study), the economy-wide model of India enables researchers to gauge the cost-effectiveness and efficiency of different technology and policy choices designed to transition the country to a low-carbon energy system.

“The study provides important insights about the costs of different policies, which are relevant to nations that have pledged emission targets under the Paris Agreement but have not yet developed polices to meet those targets,” says Winchester, who is also a senior fellow at Motu Economic and Public Policy Research.

The study was supported by the MIT Tata Center for Technology and Design, the Energy Information Administration of the U.S. Department of Energy, and the MIT Joint Program.

Why did my classifier just mistake a turtle for a rifle?

MIT Latest News - Wed, 07/31/2019 - 2:00pm

A few years ago, the idea of tricking a computer vision system by subtly altering pixels in an image or hacking a street sign seemed like more of a hypothetical threat than anything to seriously worry about. After all, a self-driving car in the real world would perceive a manipulated object from multiple viewpoints, cancelling out any misleading information. At least, that’s what one study claimed.

“We thought, there’s no way that’s true!” says MIT PhD student Andrew Ilyas, then a sophomore at MIT. He and his friends — Anish Athalye, Logan Engstrom, and Jessy Lin — holed up at the MIT Student Center and came up with an experiment to refute the study. They would print a set of three-dimensional turtles and show that a computer vision classifier could mistake them for rifles.

The results of their experiments, published at last year’s International Conference on Machine Learning (ICML), were widely covered in the media, and served as a reminder of just how vulnerable the artificial intelligence systems behind self-driving cars and face-recognition software could be. “Even if you don’t think a mean attacker is going to perturb your stop sign, it’s troubling that it’s a possibility,” says Ilyas. “Adversarial example research is about optimizing for the worst case instead of the average case.”

With no faculty co-authors to vouch for them, Ilyas and his friends published their study under the pseudonym “Lab 6,” a play on Course 6, their Department of Electrical Engineering and Computer Science (EECS) major. Ilyas and Engstrom, now an MIT graduate student, would go on to publish five more papers together, with a half-dozen more in the pipeline.

At the time, the risk posed by adversarial examples was still poorly understood. Yann LeCun, the head of Facebook AI, famously downplayed the problem on Twitter. “Here’s one of the pioneers of deep learning saying, this is how it is, and they say, nah!” says EECS Professor Aleksander Madry. “It just didn’t sound right to them and they were determined to prove why. Their audacity is very MIT.” 

The extent of the problem has grown clearer. In 2017, IBM researcher Pin-Yu Chen showed that a computer vision model could be compromised in a so-called black-box attack by simply feeding it progressively altered images until one caused the system to fail. Expanding on Chen’s work at ICML last year, the Lab 6 team highlighted multiple cases in which classifiers could be duped into mistaking cats for guacamole and skiers for dogs.

This spring, Ilyas, Engstrom, and Madry presented a framework at ICML for making black-box attacks several times faster by exploiting information gained from each spoofing attempt. The ability to mount more efficient black-box attacks allows engineers to redesign their models to be that much more resilient.
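
The mechanics of a query-based black-box attack can be sketched as below: a gradient is estimated purely from loss queries on randomly perturbed copies of an image, then used for sign-based gradient steps. This NES-style estimator is in the spirit of this line of work, not the authors’ exact method, and all parameter values are placeholders.

    # Hedged sketch of a black-box attack via query-based gradient estimation.
    import numpy as np

    def estimate_gradient(query_loss, image, sigma=0.01, n_samples=50):
        """query_loss: black-box function returning the model's loss."""
        grad = np.zeros_like(image)
        for _ in range(n_samples):
            noise = np.random.normal(size=image.shape)
            # antithetic pair: query the loss at +/- the same perturbation
            grad += noise * (query_loss(image + sigma * noise)
                             - query_loss(image - sigma * noise))
        return grad / (2 * sigma * n_samples)

    def black_box_attack(query_loss, image, steps=100, lr=0.01, eps=0.05):
        adv = image.copy()
        for _ in range(steps):
            g = estimate_gradient(query_loss, adv)
            adv = adv + lr * np.sign(g)                   # ascend the loss
            adv = np.clip(adv, image - eps, image + eps)  # stay imperceptible
        return adv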

“When I met Andrew and Logan as undergraduates, they already seemed like experienced researchers,” says Chen, who now works with them via the MIT-IBM Watson AI Lab. “They’re also great collaborators. If one is talking, the other jumps in and finishes his thought.”

That dynamic was on display recently as Ilyas and Engstrom sat down in Stata to discuss their work. Ilyas seemed introspective and cautious; Engstrom, outgoing and, at times, brash.

“In research, we argue a lot,” says Ilyas. “If you’re too similar you reinforce each other’s bad ideas.” Engstrom nodded. “It can get very tense.”

When it comes time to write papers, they take turns at the keyboard. “If it’s me, I add words,” says Ilyas. “If it’s me, I cut words,” says Engstrom.

Engstrom joined Madry’s lab for a SuperUROP project as a junior; Ilyas joined last fall as a first-year PhD student after finishing his undergraduate and MEng degrees early. Faced with offers from other top graduate schools, Ilyas opted to stay at MIT. A year later, Engstrom followed.

This spring the pair was back in the news again, with a new way of looking at adversarial examples: not as bugs, but as features corresponding to patterns too subtle for humans to perceive that are still useful to learning algorithms. We know instinctively that people and machines see the world differently, but the paper showed that the difference could be isolated and measured.

They trained a model to identify cats based on “robust” features recognizable to humans and “non-robust” features that humans typically overlook, and found that visual classifiers could just as easily identify a cat from the non-robust features as from the robust ones. If anything, the model seemed to rely more on the non-robust features, suggesting that as accuracy improves, the model may become more susceptible to adversarial examples.

“The only thing that makes these features special is that we as humans are not sensitive to them,” Ilyas told Wired.

Their eureka moment came late one night in Madry’s lab, as such moments often do, following hours of talking. “Conversation is the most powerful tool for scientific discovery,” Madry likes to say. The team quickly sketched out experiments to test their idea.

“There are many beautiful theories proposed in deep learning,” says Madry. “But no hypothesis can be accepted until you come up with a way of verifying it.”

“This is a new field,” he adds. “We don’t know the answers to the questions, and I would argue we don’t even know the right questions. Andrew and Logan have the brilliance and drive to help lead the way.”

Jack Kerrebrock, professor emeritus of aeronautics and astronautics, dies at 91

MIT Latest News - Wed, 07/31/2019 - 10:48am

Jack L. Kerrebrock, professor emeritus of aeronautics and astronautics at MIT, died at home on July 19. He was 91.

Born in Los Angeles in 1928, Kerrebrock received his BS in 1950 from Oregon State University, his MS in 1951 from Yale University, and his PhD in 1956 from Caltech. With a passion for aerospace, he held positions with the National Advisory Committee for Aeronautics, Caltech, and Oak Ridge National Laboratory before joining the faculty of MIT as an assistant professor in 1960.

Promoted to associate professor in 1962 and to full professor in 1965, Kerrebrock founded and directed the Department of Aeronautics and Astronautics’ Space Propulsion Laboratory from 1962 until 1976, when it merged with the department’s Gas Turbine Laboratory, of which he had become director in 1968. In 1978, he accepted the role of head of the Department of Aeronautics and Astronautics (AeroAstro).

Kerrebrock enjoyed an international reputation as an expert in the development of propulsion systems for aircraft and spacecraft. Over the years, he served as chair or member of multiple advisory committees — both government and professional — and as NASA associate administrator of aeronautics and space technology.

As associate dean of engineering, Kerrebrock was the faculty leader of the Daedalus Project in AeroAstro. Daedalus was a human-powered aircraft that, on April 23, 1988, flew a distance of 72.4 miles (115.11 kilometers) in three hours, 54 minutes, from Heraklion on the island of Crete to the island of Santorini. Daedalus still holds the world record for human-powered flight. This flight was the culmination of a decade of work by MIT students and alumni, and it made a major contribution to the understanding of the science and engineering of human-powered flight.

Elected to the National Academy of Engineering in 1978, Kerrebrock was the recipient of numerous accolades, including election as an honorary fellow of the American Institute of Aeronautics and Astronautics and election to the Explorers Club and the American Academy of Arts and Sciences. A member of the American Association for the Advancement of Science, Sigma Xi, Tau Beta Pi, and Phi Kappa Phi, he received NASA’s Distinguished Service Medal in 1983. He was also a contributor to the Intergovernmental Panel on Climate Change, which, along with Al Gore, won the Nobel Peace Prize in 2007.

Although a luminary in his field, Kerrebrock — an enthusiastic outdoorsman — was perhaps never happier than when climbing a mountain, hiking a wilderness trail, or leading a group of young people through ice and snow to teach them independence and survival skills. He ran his first Boston Marathon in his early 50s on a whim, with no training, following that with several more marathons, including the Marine Corps Marathon in Washington.

Kerrebrock and his wife Crickett traveled widely, to destinations including South Africa, Scotland, Tuscany, Paris, and a very special trip to Canaveral for one of the last Space Shuttle launches, where he was able to introduce his wife to his friend Neil Armstrong, who was one of her heroes.

Kerrebrock was married to Rosemary “Crickett” Redmond (Keough) Kerrebrock for the last 12 years of his life. He was previously married for 50 years to the late Bernice “Vickie” (Veverka) Kerrebrock, who died in 2003. In addition to his wife, Kerrebrock leaves behind two children, Nancy Kerrebrock (Clint Cummins) of Palo Alto, California, and Peter Kerrebrock (Anne) of Hingham, Massachusetts; and five grandchildren, Lewis Kerrebrock, Gale Kerrebrock, Renata Cummins, Skyler Cummins, and Lance Cummins. He was preceded in death by his son Christopher Kerrebrock, brother Glenn, and sister Ann. He also is remembered fondly by the Redmond children, Paul J. Redmond Jr. and his partner Joe Palombo, Kelly Redmond and her husband Philip Davis, Maura Redmond, and Meaghan Winokur and James Winokur and their children, Laine and Alicia.

A public memorial service is being planned at MIT and will be announced soon. In lieu of flowers, contributions in his memory may be made to the Jack and Vickie Kerrebrock Fellowship Fund, Massachusetts Institute of Technology, 600 Memorial Drive, Cambridge MA 02139.
