MIT Latest News

MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.

3Q: A bold mission to touch the sun

Mon, 08/13/2018 - 12:00am

On Sunday, NASA launched a bold mission to fly directly into the sun’s atmosphere, with a spacecraft named the Parker Solar Probe, after solar astrophysicist Eugene Parker. The incredibly resilient vessel, vaguely shaped like a lightbulb the size of a small car, was launched early in the morning from Cape Canaveral Air Force Station in Florida. Its trajectory will aim straight for the sun, where the probe will come closer to the solar surface than any other spacecraft in history.

The probe will orbit the blistering corona, withstanding unprecedented levels of radiation and heat, in order to beam back to Earth data on the sun’s activity. Scientists hope such data will illuminate the physics of stellar behavior. The data will also help to answer questions about how the sun’s winds, eruptions, and flares shape weather in space, and how that activity may affect life on Earth, along with astronauts and satellites in space.

Several researchers from MIT are collaborating on the mission, including co-investigators John Belcher, the Class of 1922 Professor of Physics, and John Richardson, a principal research scientist in MIT’s Kavli Institute for Astrophysics and Space Research. MIT News spoke with Belcher about the historic mission and its roots at the Institute.

Q: This has to be one extreme vehicle to withstand the sun’s radiation at such close range. What kind of effects will the probe experience as it orbits the sun, and what about the spacecraft will help it stay on course?

A: The spacecraft will come as close as 3.9 million miles to the sun, well within the orbit of Mercury and more than seven times closer than any spacecraft has come before. This distance is about 8.5 solar radii, very close to the region where the solar wind is accelerated. At these distances the sun will be over 500 times brighter than it appears to Earth, and particle radiation from solar activity will be harsh.
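
As a rough check on that brightness figure: sunlight falls off with the square of distance from the sun, so comparing Earth's distance (215 solar radii) with the probe's closest approach (about 8.5 solar radii) gives a ratio of roughly 640, consistent with "over 500 times brighter." A minimal sketch of the arithmetic, assuming pure inverse-square falloff:

```python
# Back-of-the-envelope check of the brightness figure quoted above,
# assuming sunlight intensity falls off as 1/distance^2 (an idealization).

earth_distance = 215.0  # Earth's distance from the sun, in solar radii
probe_distance = 8.5    # Parker Solar Probe's closest approach, in solar radii

brightness_ratio = (earth_distance / probe_distance) ** 2
print(f"Sunlight at closest approach is ~{brightness_ratio:.0f}x "
      "brighter than at Earth")  # roughly 640x, i.e. "over 500 times"
```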

In order to survive, the spacecraft folds its solar panels into the shadows of its protective solar shade, leaving just enough of the specially angled panels in sunlight to provide power closer to the sun. To perform these unprecedented investigations, the spacecraft and instruments will be protected from the sun’s heat by a 4.5-inch-thick carbon-composite shield, which will need to withstand temperatures outside the spacecraft that reach nearly 2,500 degrees Fahrenheit. 

Q: What data will the probe be collecting, and what insights are scientists ultimately hoping to gain from these data?

A: There will be a variety of instruments to measure solar particles and fields near the sun, including a low-energy plasma instrument, a magnetometer, and a suite of energetic particle instruments. These will help determine the structure and dynamics of the magnetic fields at the sources of solar wind, trace the flow of energy that heats the corona and accelerates the solar wind, and determine what mechanisms accelerate and transport energetic particles. 

The acceleration of the solar wind is still an outstanding question, mostly because all of the acceleration is over by [the time the wind has traveled] 25 solar radii. The Earth sits at 215 solar radii, so we have never made the most crucial observations close to the sun. It is only by getting this close to the sun that we have a chance of answering definitively what accelerates the wind. The major question is whether thermal processes or wave acceleration processes are most important, or both.

Q: What is MIT’s role in this endeavor?  

A: John Richardson and I are co-investigators on the Solar Wind Electrons Alphas and Protons (SWEAP) Investigation for the mission. The principal investigator, Professor Justin Kasper of the University of Michigan, is an MIT graduate and was trained by Alan Lazarus, working on the Faraday cup launched on the DSCOVR satellite in 2015.

The SWEAP Investigation is the set of instruments on the spacecraft that will directly measure the properties of the plasma in the solar atmosphere during these encounters. A special component of SWEAP is a small instrument that will look around the protective heat shield of the spacecraft directly at the sun, the only instrument on the spacecraft to do so. This will allow SWEAP to sweep up a sample of the atmosphere of the sun, our star, for the first time at these distances.

This small instrument looking around the heat shield is a Faraday cup, and is a direct descendant of the first instrument to measure the existence of the supersonic solar wind expansion.  That measurement was carried out by Professor Herb Bridge, Dr. Al Lazarus, and Professor Bruno Rossi, [all of MIT], on Explorer 10 in 1961.

At the same time the solar probe Faraday cup is measuring the properties of the solar wind close to the sun at 8 solar radii, a sister Faraday cup on Voyager 2 (launched in 1977) will probably be measuring plasma in local interstellar space, totally outside the solar atmosphere, beyond 100 astronomical units, or more than 20,000 solar radii. The Voyager 2 instrument has been in space for more than 40 years, consistently returning data to Earth. Thus two probes that trace their lineage to MIT Professor Herb Bridge will be making measurements at opposite ends of the solar system, from as close as you can get to the sun to as far away as the local interstellar medium.

A summer tune-up for industry professionals

Sat, 08/11/2018 - 11:59pm

Kristala Jones Prather is speaking in a packed MIT lecture hall. Many of her students wear reading glasses, some have a little less hair than they used to, and most of them are well dressed and groomed. But all of these engineers, biologists, chemists, microbiologists, and biochemists take furious notes in thick course binders and lean forward to study the equations she jots on the chalkboard.

As Prather delves into Fermentation Technology, a short program offered by MIT Professional Education, she engages and challenges her students. “Do we have a few biochemists? Does this model remind you of anything?” she asks. “It may have been a dark time, but think back to your undergraduate biochemistry class,” she jokes before diving back into her lecture, one of 16 lectures the students will absorb. The course is the oldest in the MIT Professional Education catalog.

Since 1962, this intensive program has attracted industry professionals to campus for five days that promise a review of the fundamentals in the application of biological and engineering principles to problems involving microbial, mammalian, and other biological and biochemical systems.

Fermentation Technology gathers a diverse array of professionals to glean the latest insights on terrain they navigate every day at work. It is an opportunity for them to gain knowledge of what might be coming next in biological and biochemical technology, with an emphasis on connecting biological systems with industrial practice. Prather, the Arthur D. Little Professor of Chemical Engineering at MIT, oversees the course with Daniel I. C. Wang, an Institute Professor in the Department of Chemical Engineering.

In addition to Prather and Wang, Fermentation Technology features a mix of guest lecturers that include other MIT faculty and industry professionals, such as Neal Connors from Phoenix BioConsulting in New Jersey, Kara Calhoun from the California biotech company Genentech, and Morris Z. Rosenberg, a biotech consultant in Washington.

As she wraps up her first of two 90-minute lectures of the day, Prather deadpans: “Marinate on that over the break. I’m happy to answer questions when we come back if it’s still not making sense to you.”

As the room empties for lunch, several of the visiting professionals make quick calls into the office or to check on family back home. Bill Morrison, a facilities engineer at BioMarin Pharmaceuticals in San Rafael, California, explains why he’s flown into Boston for hours of difficult lectures. He is moving into a process engineering role at his company and the course material is helpful for the most part. “I’m weak on the theory, but the other part about the mechanism of production is more up my alley,” he says.

Katherine Wyndham from Novavax Inc., a clinical-stage vaccine company headquartered in Gaithersburg, Maryland, says she is a member of the manufacturing, science, and technology group at her company. “This course is really giving me a technical base for what I do,” she says. “I’d say 50 percent is directly applicable to stuff I use every day, and the other 50 percent provides me with new insight into what the process development group does.”

Making additional notes at her lecture seat, Soniya Parulekar of Merck and Company, a global pharmaceutical company, has arrived from Philadelphia for the program. She works in fermentation research and development. “A lot of the things I’m seeing discussed in this course are giving me a better sense of what I’m working on — a deeper knowledge,” she says.

Soon enough Prather is back from lunch. She begins to animatedly discuss modeling and bioprocess monitoring as industry professionals from across the country settle into their chairs to absorb as much information as they can.

There are 2.5 days left of the course, or, to be exact, seven more lectures: perfusion reactors, medium design and high cell-density cultivation, power requirement in bioreactors, oxygen transfer and shear in bioreactors, design of experiments, analytics in biomanufacturing, and bioprocess simulation and economics. Attention in the room is still running high.

For Prather, teaching a room full of professionals offers interesting opportunities as a teacher. “I teach the same material in my biochemical engineering class for undergraduates,” she says. “The short-course students bring a much richer perspective based on their own professional experiences. Sometimes,” she adds, “they teach me things that I can then offer to our own students.”

3Q: Muriel Médard on the world-altering rise of 5G

Fri, 08/10/2018 - 4:35pm

The rise of 5G, or fifth generation, mobile technologies is refashioning the wireless communications and networking industry. The School of Engineering recently asked Muriel Médard, the Cecil H. Green Professor in the Electrical Engineering and Computer Science Department at MIT, to explain what that means and why it matters.

Médard, the co-founder of three companies formed to commercialize network coding — CodeOn, Steinwurf, and Chocolate Cloud — is considered a global technology leader. Her work on network coding, its hardware implementation, and her original algorithms has received widespread recognition and awards. At MIT, Médard leads the Network Coding and Reliable Communications Group at the Research Laboratory of Electronics.

Q. People are hearing that 5G will transform industries across the world and bring advances in smart transportation, health care, wearables, augmented reality, and the internet of things. The media report that strategic players in the U.S. and internationally are developing these technologies for market by 2020 or earlier. What sets this generation apart from its predecessors?

A. The reason 5G is so different is that what exactly it will look like is still up in the air. Everyone agrees the phrase is a bit of a catch-all. I’ll give you some big brush strokes on 5G and what people are looking at actively in the area.

In second, third, and fourth generations, people got a phone service that by 4G really became a system of phone plus data. It was all fairly traditional. For instance, people are used to switching manually from their cellular provider to available Wi-Fi at their local coffee shop or wherever.

One of the main ideas behind 5G is that you’ll have a single network that allows a blended offering. People are looking at using a multi-path approach, which means drawing on Wi-Fi and non-Wi-Fi 5G (or sometimes 4G) seamlessly. This poses some difficult coordination problems. It requires network coding, by using algebraic combinations, across different paths to create a single, smooth experience.
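
To make "algebraic combinations" across paths concrete, here is a toy sketch of the idea: the sender transmits random linear combinations of its packets over the different paths, and the receiver solves the resulting linear system to recover the originals, regardless of which mix of paths delivered them. The sketch uses arithmetic modulo a small prime for simplicity; production network codes typically work over GF(2^8), and none of this code is drawn from Médard's systems.

```python
# Toy sketch of random linear network coding: coded packets are random
# linear combinations of the originals, and any n independent coded
# packets (from any mix of paths) can be decoded by Gaussian elimination.
# Illustrative only; arithmetic is modulo a small prime rather than the
# GF(2^8) used in practice.
import random

P = 257  # small prime field; each payload symbol is an int in [0, P)

def encode(packets):
    """Return (coefficients, payload): one random linear combination."""
    coeffs = [random.randrange(P) for _ in packets]
    payload = [sum(c * pkt[i] for c, pkt in zip(coeffs, packets)) % P
               for i in range(len(packets[0]))]
    return coeffs, payload

def decode(coded, n):
    """Recover n originals from n coded packets (assumed independent,
    which random coefficients are with overwhelming probability)."""
    rows = [list(c) + list(v) for c, v in coded[:n]]  # [coeffs | payload]
    for col in range(n):
        pivot = next(r for r in range(col, n) if rows[r][col])
        rows[col], rows[pivot] = rows[pivot], rows[col]
        inv = pow(rows[col][col], P - 2, P)            # modular inverse
        rows[col] = [x * inv % P for x in rows[col]]
        for r in range(n):
            if r != col and rows[r][col]:
                f = rows[r][col]
                rows[r] = [(a - f * b) % P for a, b in zip(rows[r], rows[col])]
    return [row[n:] for row in rows]

original = [[72, 105], [33, 10]]              # two tiny "packets"
coded = [encode(original) for _ in range(2)]  # e.g., one per path
print(decode(coded, 2))                       # [[72, 105], [33, 10]]
```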

Another important part of 5G is that people are looking at using millimeter waves, which occupy frequencies that are high enough to avoid interference among multiple senders that are transmitting simultaneously in fairly close proximity relative to what is possible now. These high frequencies, with wide open spectrum regions, may be well-suited for very large amounts of data that need to be transmitted over fairly short distances.

There is also what people call “the fog,” which is something more than just how people feel in the morning before coffee. Fog computing, in effect, involves extending cloud capabilities, such as compute, storage and networking services, through various nodes and IoT gateways. It involves being able to draw on the presence of different users nearby in order to establish small, lightweight, rapidly set-up, rapidly torn-down, peer-to-peer type networks. Again, the right coding is extremely important so that we don't have difficult problems of coordination. You must be able to code across the different users and the different portions of the network.

Q. You’ve described 5G as actively looking at incorporating services and modes of communications that have not been part of traditional offerings. What else sets it apart?

A. Let’s talk about global reach. With 5G, people are looking at incorporating features, such as satellite service, that are seamlessly integrated with terrestrial service. For this, we also really need reliance on coding. You can imagine how there is no way you can rely on traditional coordination and scheduling across satellites and nodes on the ground on a large scale.

Another thing that makes 5G so different from other evolutions is the sheer volume of players. If you were talking about 3G or 4G, it was pretty straightforward. Your key players were doing equipment provisioning to service providers.

Now it’s a very busy and more varied set of players. The different aspects that I’ve talked about are often not all considered by the same player. Some people are looking at worldwide coverage via satellite networking. Other people are looking at blending new channels, such as the millimeter wave ones I referred to earlier, with Wi-Fi, which basically requires marrying existing infrastructure with new infrastructure.

I think finding a coherent and central source of information is a big challenge. You have the organization that governs cellular standards, 3GPP, but the whole industry is transforming as we watch in the area of 5G. It’s not clear whether it’s going to be 3GPP still calling the shots. You have so many new entrants that are not necessarily part of the old guard.

Q. What do you believe people will notice on a daily level with the rise of 5G?

A. I’ll give you my vision for the future of 5G, with the caveat that we’re now moving into an area that is more a matter of opinion. I see heterogeneity as part of the design. You're going to have a network that is talking to a large and disparate set of nodes with very different purposes for very different applications. You’re going to see a view that emphasizes integration of existing and new resources over just the deployment of new resources.

And I think the people who are going to win in 5G may not be the same players as before. It will be the company that figures out how to provide people with a seamless experience using the different substrates in a way that is highly opportunistic. It has to be a system that integrates everything naturally because you cannot preplan the satellite beam you're going to be in, the fog network you're going to be in, and the IoT devices that are going to be around you. There is no way even to maintain or manage so much information. Everything is becoming too complex and, in effect, organic. And my view on how to do that? Network coding. That’s an opinion but it’s a strongly held one.

Study suggests glaucoma may be an autoimmune disease

Fri, 08/10/2018 - 5:00am

Glaucoma, a disease that afflicts nearly 70 million people worldwide, is something of a mystery despite its prevalence. Little is known about the origins of the disease, which damages the retina and optic nerve and can lead to blindness.

A new study from MIT and Massachusetts Eye and Ear has found that glaucoma may in fact be an autoimmune disorder. In a study of mice, the researchers showed that the body’s own T cells are responsible for the progressive retinal degeneration seen in glaucoma. Furthermore, these T cells appear to be primed to attack retinal neurons as the result of previous interactions with bacteria that normally live in our body.

The discovery suggests that it could be possible to develop new treatments for glaucoma by blocking this autoimmune activity, the researchers say.

“This opens a new approach to prevent and treat glaucoma,” says Jianzhu Chen, an MIT professor of biology, a member of MIT’s Koch Institute for Integrative Cancer Research, and one of the senior authors of the study, which appears in Nature Communications on Aug. 10. 

Dong Feng Chen, an associate professor of ophthalmology at Harvard Medical School and the Schepens Eye Research Institute of Massachusetts Eye and Ear, is also a senior author of the study. The paper’s lead authors are Massachusetts Eye and Ear researchers Huihui Chen, Kin-Sang Cho, and T.H. Khanh Vu.

Genesis of glaucoma

One of the biggest risk factors for glaucoma is elevated pressure in the eye, which often occurs as people age and the ducts that allow fluid to drain from the eye become blocked. The disease often goes undetected at first; patients may not realize they have the disease until half of their retinal ganglion cells have been lost.

Most treatments focus on lowering pressure in the eye (also known as intraocular pressure). However, in many patients, the disease worsens even after intraocular pressure returns to normal. In studies in mice, Dong Feng Chen found the same effect.

“That led us to the thought that this pressure change must be triggering something progressive, and the first thing that came to mind is that it has to be an immune response,” she says.

To test that hypothesis, the researchers looked for immune cells in the retinas of these mice and found that indeed, T cells were there. This is unusual because T cells are normally blocked from entering the retina, by a tight layer of cells called the blood-retina barrier, to suppress inflammation of the eye. The researchers found that when intraocular pressure goes up, T cells are somehow able to get through this barrier and into the retina.

The Mass Eye and Ear team then enlisted Jianzhu Chen, an immunologist, to further investigate what role these T cells might be playing in glaucoma. The researchers generated high intraocular pressure in mice that lack T cells and found that while this pressure induced only a small amount of damage to the retina, the disease did not progress any further after eye pressure returned to normal.

Further studies revealed that the glaucoma-linked T cells target proteins called heat shock proteins, which help cells respond to stress or injury. Normally, T cells should not target proteins produced by the host, but the researchers suspected that these T cells had been previously exposed to bacterial heat shock proteins. Because heat shock proteins from different species are very similar, the resulting T cells can cross-react with mouse and human heat shock proteins.

To test this hypothesis, the team brought in James Fox, a professor in MIT’s Department of Biological Engineering and Division of Comparative Medicine, whose team maintains mice with no bacteria. The researchers found that when they tried to induce glaucoma in these germ-free mice, the mice did not develop the disease.

Human connection

The researchers then turned to human patients with glaucoma and found that these patients had five times the normal level of T cells specific to heat shock proteins, suggesting that the same phenomenon may also contribute to the disease in humans. The researchers’ studies thus far suggest that the effect is not specific to a particular strain of bacteria; rather, exposure to a combination of bacteria can generate T cells that target heat shock proteins.

One question the researchers plan to study further is whether other components of the immune system may be involved in the autoimmune process that gives rise to glaucoma. They are also investigating the possibility that this phenomenon may underlie other neurodegenerative disorders, and looking for ways to treat such disorders by blocking the autoimmune response.

“What we learn from the eye can be applied to brain diseases, and may eventually help develop new methods of treatment and diagnosis,” Dong Feng Chen says.

The research was funded by the National Institutes of Health, the Lion’s Foundation, the Miriam and Sheldon Adelson Medical Research Foundation, the National Nature Science Foundation of China, the Ivan R. Cottrell Professorship and Research Fund, the Koch Institute Support (core) Grant from the National Cancer Institute, and the National Eye Institute Core Grant for Vision Research.

Artificial intelligence model “learns” from patient data to make cancer treatment less toxic

Thu, 08/09/2018 - 11:59pm

MIT researchers are employing novel machine-learning techniques to improve the quality of life for patients by reducing toxic chemotherapy and radiotherapy dosing for glioblastoma, the most aggressive form of brain cancer.

Glioblastoma is a malignant tumor that appears in the brain or spinal cord, and prognosis for adults is no more than five years. Patients must endure a combination of radiation therapy and multiple drugs taken every month. Medical professionals generally administer maximum safe drug doses to shrink the tumor as much as possible. But these strong pharmaceuticals still cause debilitating side effects in patients.

In a paper being presented next week at the 2018 Machine Learning for Healthcare conference at Stanford University, MIT Media Lab researchers detail a model that could make dosing regimens less toxic but still effective. Powered by a “self-learning” machine-learning technique, the model looks at treatment regimens currently in use, and iteratively adjusts the doses. Eventually, it finds an optimal treatment plan, with the lowest possible potency and frequency of doses that should still reduce tumor sizes to a degree comparable to that of traditional regimens.

In simulated trials of 50 patients, the machine-learning model designed treatment cycles that reduced the potency of nearly all the doses to a quarter or a half while maintaining the same tumor-shrinking potential. Many times, it skipped doses altogether, scheduling administrations only twice a year instead of monthly.

“We kept the goal, where we have to help patients by reducing tumor sizes but, at the same time, we want to make sure the quality of life — the dosing toxicity — doesn’t lead to overwhelming sickness and harmful side effects,” says Pratik Shah, a principal investigator at the Media Lab who supervised this research.

The paper’s first author is Media Lab researcher Gregory Yauney.

Rewarding good choices

The researchers’ model uses a technique called reinforcement learning (RL), a method inspired by behavioral psychology, in which a model learns to favor certain behavior that leads to a desired outcome.

The technique comprises artificially intelligent “agents” that complete “actions” in an unpredictable, complex environment to reach a desired “outcome.” Whenever it completes an action, the agent receives a “reward” or “penalty,” depending on whether the action works toward the outcome. Then, the agent adjusts its actions accordingly to achieve that outcome.

Rewards and penalties are basically positive and negative numbers, say +1 or -1. Their values vary by the action taken, calculated by probability of succeeding or failing at the outcome, among other factors. The agent is essentially trying to numerically optimize all actions, based on reward and penalty values, to get to a maximum outcome score for a given task.

The approach was used to train DeepMind’s program AlphaGo, which in 2016 made headlines for beating one of the world’s best human players in the game of Go. It’s also used to train driverless cars in maneuvers, such as merging into traffic or parking, where the vehicle will practice over and over, adjusting its course, until it gets it right.

The researchers adapted an RL model for glioblastoma treatments that use a combination of the drugs temozolomide (TMZ) and procarbazine, lomustine, and vincristine (PVC), administered over weeks or months.

The model’s agent combs through traditionally administered regimens. These regimens are based on protocols that have been used clinically for decades and are grounded in animal testing and various clinical trials. Oncologists use these established protocols to predict what doses to give patients based on their weight.

As the model explores the regimen, at each planned dosing interval — say, once a month — it decides on one of several actions. It can, first, either initiate or withhold a dose. If it does administer, it then decides if the entire dose, or only a portion, is necessary. At each action, it pings another clinical model — often used to predict a tumor’s change in size in response to treatments — to see if the action shrinks the mean tumor diameter. If it does, the model receives a reward.

However, the researchers also had to make sure the model doesn’t just dish out a maximum number and potency of doses. Whenever the model chooses to administer all full doses, therefore, it gets penalized, so it instead chooses fewer, smaller doses. “If all we want to do is reduce the mean tumor diameter, and let it take whatever actions it wants, it will administer drugs irresponsibly,” Shah says. “Instead, we said, ‘We need to reduce the harmful actions it takes to get to that outcome.’”

This represents an “unorthodox RL model, described in the paper for the first time,” Shah says, that weighs potential negative consequences of actions (doses) against an outcome (tumor reduction). Traditional RL models work toward a single outcome, such as winning a game, and take any and all actions that maximize that outcome. On the other hand, the researchers’ model, at each action, has flexibility to find a dose that doesn’t necessarily solely maximize tumor reduction, but that strikes a perfect balance between maximum tumor reduction and low toxicity. This technique, he adds, has various medical and clinical trial applications, where actions for treating patients must be regulated to prevent harmful side effects.
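
As a rough illustration of those mechanics (not the Media Lab's actual model), a minimal Q-learning agent can be set up so that its reward is tumor shrinkage minus a toxicity penalty proportional to the dose administered. Everything here, the tumor dynamics, dose levels, and penalty weight, is invented for the sketch:

```python
# Minimal Q-learning sketch of the dose/penalty trade-off described above.
# All dynamics and constants are invented for illustration; the paper's
# model uses clinically grounded regimens and a validated tumor model.
import random

ACTIONS = [0.0, 0.25, 0.5, 1.0]     # fraction of a full dose
LAMBDA = 0.4                        # toxicity penalty weight (assumed)
ALPHA, GAMMA, EPS = 0.1, 0.9, 0.1   # learning rate, discount, exploration

def step(size, dose):
    """Made-up dynamics: tumor grows 10% per interval, dose kills some."""
    new_size = max(0.0, size * 1.1 - 2.0 * dose)
    reward = (size - new_size) - LAMBDA * dose  # shrinkage minus toxicity
    return new_size, reward

def bucket(size):
    """Discretize tumor size into one of 11 states."""
    return min(int(size), 10)

Q = {(s, a): 0.0 for s in range(11) for a in range(len(ACTIONS))}

for _ in range(5000):               # trial-and-error runs, as in the paper
    size = 8.0
    for _ in range(12):             # twelve dosing intervals
        s = bucket(size)
        a = (random.randrange(len(ACTIONS)) if random.random() < EPS
             else max(range(len(ACTIONS)), key=lambda i: Q[(s, i)]))
        size, r = step(size, ACTIONS[a])
        s2 = bucket(size)
        best = max(Q[(s2, i)] for i in range(len(ACTIONS)))
        Q[(s, a)] += ALPHA * (r + GAMMA * best - Q[(s, a)])

# Learned policy: preferred dose fraction at each discretized tumor size.
print({s: ACTIONS[max(range(len(ACTIONS)), key=lambda i: Q[(s, i)])]
       for s in range(11)})
```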

Optimal regimens

The researchers trained the model on 50 simulated patients, randomly selected from a large database of glioblastoma patients who had previously undergone traditional treatments. For each patient, the model conducted about 20,000 trial-and-error test runs. Once training was complete, the model learned parameters for optimal regimens. When given new patients, the model used those parameters to formulate new regimens based on various constraints the researchers provided.

The researchers then tested the model on 50 new simulated patients and compared the results to those of a conventional regimen using both TMZ and PVC. When given no dosage penalty, the model designed nearly identical regimens to human experts. Given small and large dosing penalties, however, it substantially cut the doses’ frequency and potency, while reducing tumor sizes.

The researchers also designed the model to treat each patient individually, as well as in a single cohort, and achieved similar results (medical data for each patient was available to the researchers). Traditionally, the same dosing regimen is applied to groups of patients, but differences in tumor size, medical histories, genetic profiles, and biomarkers can all change how a patient is treated. These variables are not considered during traditional clinical trial designs and other treatments, often leading to poor responses to therapy in large populations, Shah says.

“We said [to the model], ‘Do you have to administer the same dose for all the patients?’ And it said, ‘No. I can give a quarter dose to this person, half to this person, and maybe we skip a dose for this person.’ That was the most exciting part of this work, where we are able to generate precision medicine-based treatments by conducting one-person trials using unorthodox machine-learning architectures,” Shah says.

Risk, failure, and living your life: The economics of being an early-career scientist

Thu, 08/09/2018 - 11:59pm

At the end of each weekday, doctoral student Ryan Hill travels from his office in the MIT Department of Economics to his residence at the west end of campus. There, he joins his wife, Sarah, and their two-year-old daughter, Norah, for dinner. After dinner, Hill spends an hour or two playing with Norah before putting her to bed.

“I think most people would agree that research can consume your mind. Even when you walk out of the lab, you’re still thinking about it,” Hill says. “It’s been really nice to go home and forget all of that for a while and play with my daughter. … It’s a nice change of pace.”

Often, once Norah’s in bed, Hill returns to the department building located on MIT’s East Campus to resume his research, on the role science plays in innovation. His research and studies are supported by a National Science Foundation Graduate Research Fellowship grant he received after studying economics and mathematics at Brigham Young University in Provo, Utah.

“Science is an important part of innovation — it’s very early in the process,” Hill says. “My research is empirical, so I use data to try to learn how science works.” Part of that learning process places Hill at his laptop, analyzing “outputs” of research such as publications or citations. But outputs alone don’t tell the whole story: He also focuses on “inputs” of research, like how projects are chosen for research in the first place.

Hill has also forged connections with other MIT graduate students who have families, and he has spoken out about issues — arising at MIT and in Washington — that impact this community. “I’m one of many that are involved in these efforts, but when I get the chance I try to represent and communicate the needs of student families at MIT,” he says.

The role of risk

Most recently, Hill has been analyzing risk — specifically, the risk of failed research projects. “Scientists work on a lot of different projects all the time. Once in a while, they turn into a useful innovation and a paper. But there are a lot of projects that fail,” Hill says. “And that’s a fundamental part of innovation.”

Hill cites the law of large numbers, a theorem in probability theory that describes the results of experiments repeated over and over. “If enough people are working on science, we’ll get a steady stream of useful innovations. But at an individual level, research involves a lot of risk.” Hill points to graduate students as examples of scientists who take on risks in research. At MIT, graduate students often focus on a handful of projects. “And so the question is, at the project level, whether they’re facing a risk of failure that could affect the outcomes in their career. … It might affect the projects they choose to work on and whether they should be doing something that’s very risky or not.”
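
A quick simulation makes the contrast concrete. Assuming, purely for illustration, a 10 percent chance that any one project succeeds, most researchers with a handful of projects see no successes at all, while the field-wide total barely moves from year to year:

```python
# Illustration of the law-of-large-numbers point: individual project
# outcomes are risky, but aggregate output is steady. The 10% success
# rate and project counts are invented for the example.
import random

random.seed(0)
P_SUCCESS = 0.10                     # assumed per-project success rate

def successes(n_projects):
    return sum(random.random() < P_SUCCESS for _ in range(n_projects))

# Individual level: a researcher with 5 projects often gets zero hits.
researchers = [successes(5) for _ in range(10_000)]
print("researchers with no success:",
      sum(c == 0 for c in researchers) / len(researchers))  # ~0.59

# Aggregate level: total hits across 10,000 projects is stable each year.
print("field-wide successes per year:",
      [successes(10_000) for _ in range(5)])  # each close to 1,000
```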

For one of his thesis chapters, Hill worked with data from Chile’s Very Large Telescope. Performing research at the telescope is an exercise in risk. Hill says scientists apply to reserve time on the telescope six months in advance for one or two nights. If they’re lucky and get accepted, scientists fly to Chile prepared to look through the scope. But there’s a catch.

“There’s a lot of variation,” Hill says. “Some people fly all the way down and it’s cloudy for the whole night.”

The telescope’s public databases allow Hill to see who worked at the telescopes, what the weather was like at the time, and whether that affected the researcher’s ability to publish, and further, how that affected the researcher’s career.

“It does affect people, at least for a while,” Hill says. “In most cases, at least in astronomy, people do have chances to catch up [professionally]. If you have a project failure, it doesn’t seem like you’re hurt forever. Many researchers are able to come back and do other projects, but it might take a few years.”

Studying scoops

During Hill’s third year, he excitedly began to gather research for a different study of innovation. He recalls that even his advisor, Heidi Williams, associate professor of economics, encouraged him in this direction.

About a month into Hill’s dive into his research, Williams sent him an email with a subject line along the lines of “Was this the paper you wanted to write?”

“It was literally a paper on the same topic, same area,” Hill says with a laugh. “Even table-for-table, it was exactly what I had planned to do.” Once he was scooped, Hill did what most researchers do: “I abandoned it.”

Now, Hill studies the “scooped” phenomenon. “It’s fundamental to science. Whenever I talk to people about it, it’s often the same reaction — ‘Oh! This happens to me!’”

Hill hopes that by looking at the data behind scoops, he’ll be able to determine whether getting scooped is “as big of a deal as we think.”

Advocating for families

In 2017, MIT’s Graduate Student Council invited Hill to visit Washington to discuss with members of Congress the impacts of a proposed tax bill that would make tuition waivers taxable.

“I was happy to volunteer my efforts and talk about why that’s bad — especially toward families,” Hill says. “I have a young daughter, and [the proposed legislation] affects families in particular. There are already a lot of financial barriers for students.”

To discuss the potential impacts, Hill wrote an op-ed for the Deseret News of Salt Lake City. In the op-ed, Hill argued that the taxation of tuition waivers would stifle scientific research by placing undue burdens on students. “Students are the engine that makes science happen at MIT,” Hill says.

After the bill passed without the provision to tax tuition waivers, Hill had a chance to reflect on his work.  “You don’t often get a chance to take your research and talk to policy makers,” Hill says. “Most people don’t get Science or Nature or other academic journals in their mailboxes. So we need to communicate our research to the world.”

Hill has also advocated for the student family community at MIT, participating in meetings, focus groups, and other efforts related to health care coverage, family housing, and affordable parking. “I think the Institute is slowly improving their support of student families, such as recently standardizing the parental leave policies, but there is more work to be done,” he says.

In the future, Hill hopes to become an economics professor and to continue to pursue research: “There’s a lot of different ways in economics that we can try to help improve policies and institutions around the world.”

To Hill, his time at MIT “feels like a big blessing.”

"One thing I've learned is that there's always going to be a better time to start — you know, to get married or start having kids or to do whatever your other personal life goals are,” he says. “PhDs are so long, you have to learn how to live your life. It may seem a little inconvenient [to raise a family during graduate school], but it's been awesome for us.”

The debate over how working memory works

Thu, 08/09/2018 - 2:30pm

In a debate where the stakes are nothing short of understanding how the brain maintains its so-called “sketchpad of conscious thought,” researchers discuss exactly what makes working memory work in dueling papers in the Aug. 8 edition of the Journal of Neuroscience.

Working memory is how you hold things in mind like the directions to a new restaurant and the list of specials the waiter rattles off after you sit down. Given that working memory capacity is a strong correlate of intelligence and that its dysfunction is a major symptom in common psychiatric disorders such as schizophrenia and autism, Mikael Lundqvist, a postdoc at MIT’s Picower Institute for Learning and Memory and lead author of one of the papers, says it’s important that the field achieve a true understanding of how it works.

Earl Miller, the Picower Professor of Neuroscience and the paper’s corresponding author, adds that if scientists can figure out how working memory works, “we can figure out how to fix it.”

“Working memory is the sketchpad of consciousness,” Miller says. “Doesn’t everyone want to know how our conscious mind works?”

The opposing paper in the “Dual Perspectives” section of the journal is led by Christos Constantinidis of the Wake Forest School of Medicine.

The central issue of the debate is what happens after you hear or see what you need to remember and must then hold or control it in mind to apply it later. During that interim, or delay period, the central question is whether neurons in your brain’s prefrontal cortex maintain it by persistently firing away, like an idling car engine, or whether they spike in brief but coordinated bursts to store and retrieve information via the patterns of their connections, which is akin to how longer-term memory works.

In their essay, Lundqvist, Miller, and Pawel Herman of the KTH Royal Institute of Technology in Stockholm, Sweden, take the latter position. They argue that brief, coordinated bursts are clearly evident in the observations of the most recent experiments and that such activity can more satisfactorily produce key attributes of working memory, including efficient, independent control of multiple items with precise timing.

Importantly, the idea that spiking during the delay period drives changes in neural connections, or synapses, reinforces the classic idea that spiking has a crucial role in working memory, Miller says. The disagreement, he says, is merely that the spiking activity is not as persistent as it looks in older experiments.

“We’re showing additional mechanisms by which spiking maintains working memory and gives volitional control,” Miller says. “Our work doesn’t argue against the idea that delay activity spiking plays a role in working memory, it adds further support. We are just saying that at a more granular level, there are some additional things going on.”

For example, much of the disagreement arises from how different researchers have collected and analyzed their data. The data supporting the persistence interpretation arise mostly from analyses in which researchers averaged the firing patterns of small numbers of neurons over many working memory trials, the MIT authors say.

Averages, however, tend to smooth data out over the long term. Instead, in newer experiments, scientists have analyzed the spiking of many neurons in each individual trial. There, it’s clear that as animals perform working memory tasks, populations of neurons fire in brief, coordinated bursts, Miller and Lundqvist say.
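
A small simulation shows why the analysis choice matters. If neurons fire in brief bursts at random times on each trial, any single trial is mostly silence punctuated by bursts, yet averaging across trials smears those bursts into what looks like smooth, persistent activity (all numbers here are invented for illustration):

```python
# Simulation of the averaging artifact described above: bursty activity
# on single trials looks "persistent" once averaged over many trials.
# Toy numbers, not data from either lab.
import numpy as np

rng = np.random.default_rng(0)
n_trials, n_bins = 200, 100              # delay period as 100 time bins

trials = np.zeros((n_trials, n_bins))
for trial in trials:
    # Each trial: three 5-bin bursts at random times, silence elsewhere.
    for start in rng.integers(0, n_bins - 5, size=3):
        trial[start:start + 5] = 1.0

print("one trial (bursty):", trials[0][:20])
print("trial average (looks sustained):", trials.mean(axis=0)[:5].round(2))
# The average hovers near 0.15 in every bin even though no single trial
# shows anything like constant firing.
```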

In their research, members of the Miller lab have also shown how groups of neurons are coordinated, demonstrating how a large-scale, precisely timed interplay of brain rhythms correlates with goal-directed control of working memory functions, such as storing information in mind or releasing it.

Some of the disagreement also arises from models of working memory function. Miller and Lundqvist argue that it makes functional sense that neurons fire in short, cohesive bursts in accord with circuit-wide oscillations. That uses less energy than keeping neurons firing all the time, for example, and readily explains how multiple items can be held in mind simultaneously (distinct bursts representing different pieces of information can occur at different times). Moreover, storing information in patterns of synaptic connections makes the information more resilient to distraction than if neurons are constantly trying to maintain it through activity.

“Storing information with a mixture of spiking and synapses gives the brain more flexibility,” Lundqvist says. “It can juggle the activation of different memories, allowing the brain to hold multiple memories without them interfering with each other. Plus, synapses can temporarily store memories while the spiking processes other thoughts.

“This could explain how our working memory is not erased by things that temporarily distract us,” he says.

With a lot of new research activity and data coming in, Lundqvist adds, it’s a debate whose time has come.

“This is a good time to see what the evidence is and to determine what are the experiments that will settle this,” he says. “We need more experiments to settle this. They will give us not only more insight into this question of persistence but also about working memory function.”

The MIT paper recommends four major principles to help research continue to move forward: measuring the activity of whole populations of individual neurons; analyzing every trial separately; making the tasks animals do complex enough to require controlling multiple pieces of information; and measuring neural rhythms instead of just spiking.

The Miller lab’s research on working memory is funded by the National Institutes of Health, the Office of Naval Research, and the MIT Picower Institute Innovation Fund.

Krithika Ramchander and Andrea Beck awarded J-WAFS fellowships for water solutions

Thu, 08/09/2018 - 2:20pm

The Abdul Latif Jameel World Water and Food Security Lab (J-WAFS) has announced that two MIT PhD students, Krithika Ramchander and Andrea Beck, have been awarded fellowships to pursue water resource solutions for the 2018-2019 academic year. A third student, Julia Sokol, was chosen to receive an honorable mention. 

This fall will mark the second year that J-WAFS has awarded fellowships to outstanding PhD candidates pursuing water sector research. The Rasikbhai L. Meswani Fellowship for Water Solutions and the J-WAFS Graduate Student Fellowship Program both give fellows one semester of funding as well as networking, mentorship, and opportunities to showcase their research. 

The students were selected based on the quality and relevance of their research, as well as their demonstrated commitment to global as well as local challenges of water safety and water supply. The doctoral research topics the students are pursuing exemplify the wide range of approaches that J-WAFS supports across its various funding mechanisms. From the development of a novel, environmentally sustainable, and accessible water filter for rural communities in India, to a qualitative analysis of how to best strengthen a region’s public water and sanitation utilities, to engineering an innovative drip irrigation system designed to improve efficiency and reduce energy use, these research areas apply knowledge to the development of practical solutions that could be transformational for the communities that need them.

Krithika Ramchander, who has been awarded the 2018-2019 Rasikbhai L. Meswani Fellowship for Water Solutions, is a PhD candidate in the Department of Mechanical Engineering and a past co-president of the MIT Water Club. The focus of Ramchander’s research is to develop a low-cost, point-of-use water filter using sapwood xylem from coniferous trees to facilitate safe access to drinking water for rural communities in India that lack access to safe water supplies. 

Through research and field studies, she and others in Professor Rohit Karnik’s lab have shown how sapwood xylem could be repurposed into a water filter capable of meeting the drinking water requirement of an average household for nearly a week. The widespread availability of conifers in particular regions in India could allow for the manufacture of inexpensive, xylem-based filtration devices. If scaled up, this technology could support local economies across the globe as well as facilitate access to safe drinking water in regions that lack centralized water distribution systems. 

The project, in collaboration with MIT D-Lab, has also been supported by two J-WAFS Solutions Grants in 2016 and 2017.

The winner of the 2018-2019 J-WAFS Graduate Student Fellowship, Andrea Karin Beck, is a PhD candidate in the Department of Urban Studies and Planning. Beck is examining how transnational water operators’ partnerships (WOPs) could provide an alternative approach for strengthening public water and sanitation utilities in developing countries. 

In contrast to public-private partnerships that are commonly used by water and sanitation utilities, WOPs are aimed at peer-to-peer capacity-building on a not-for-profit and solidarity basis.  To date, more than 200 WOPs have been formed around the world, predominantly between operators in the Global South. 

Working with Professor Lawrence Susskind, Beck seeks to understand how different WOP constellations affect the everyday practices of water utility workers, and how these practices in turn mediate access to water and sanitation services among urban populations.

J-WAFS has also awarded an honorable mention to Julia Sokol, a PhD candidate in the Department of Mechanical Engineering, who researches novel designs for drip irrigation emitters that operate at lower pressures and are more clog-resistant than currently-available products.

Sokol is a student in Global Engineering and Research (GEAR) Lab run by Professor Amos Winter. Her research there involves experimentally validating and refining models of the drip emitters used for drip irrigation. A new clog-resistant design, if made commercially available, could help lower the capital, operating, and labor costs for farmers that use drip irrigation systems. She is currently collaborating with a manufacturing partner in India as well as field trial partners in the Middle East and North Africa to produce emitters according to this new design, test them on working farms, quantify their impact, and collect user feedback. 

At MIT, Thomas Piketty calls for policies and collaborations to reduce income inequality

Thu, 08/09/2018 - 2:00pm

Globalization and the expanding ranks of the educational elite have contributed to the rise in inequality worldwide, but political policy changes can impact these trends, French economist Thomas Piketty told a packed house at MIT’s Kresge Auditorium on Tuesday, July 31.

The author of the international best-seller "Capital in the Twenty-First Century," Piketty was addressing the 18th World Economic History Congress, an event chaired by MIT professor of history Anne McCants, who organized the congress, along with local colleagues, under the auspices of the International Economic History Association.

A professor at the School for Advanced Studies in the Social Sciences (EHESS) and at the Paris School of Economics, Piketty began his talk by noting that the World Economic History Congress “is one of the few places in the world where economists and historians talk to each other, and we truly need this interdisciplinary approach.”

In a keynote address titled “Rising Inequality and the Changing Structure of Political Conflict,” Piketty noted that inequality has been on the rise worldwide since 1980. He shared findings from the World Inequality Report showing that while inequality dropped dramatically during the World Wars and the Great Depression, the share of income going to the top 10 percent of earners has since risen steeply almost everywhere. He also pointed out that the rise has been more dramatic in the United States than in Europe.

The period of lower inequality in the United States is correlated with the “rise of the welfare state,” Piketty said — a period during which top income tax rates hovered around 80 percent. “Clearly this did not destroy the capitalist system,” he noted wryly. “One rationale for why this didn’t lead to disaster – and growth was actually higher in the ’50s … is that paying top managers $1 million etc. is not that useful. But the point is that a change in policy mattered a lot for a change in inequality.”

Interestingly, rising inequality has not led to rising demand for the redistribution of wealth, Piketty said. Why is that? One possibility Piketty suggested is that globalization, which enables entities to skirt internal redistribution efforts — such as by utilizing tax-free havens — has made the vertical redistribution of wealth within a country more difficult to organize. This trend has meant that one of the few things the modern nation-state can control is its borders, leading to more political conflicts centered on border controls and immigration, he said.

Piketty said that this indicates that unequal globalization is a choice. Free-trade treaties could be accompanied by redistributive taxation, but that hasn’t been happening. The conclusion Piketty draws is: “Some ruling groups must believe the system is working fine.”

He identified two seemingly disparate groups — the “Brahmin left,” comprising highly educated voters who tend to vote for liberals; and the “merchant right,” comprising wealthy individuals who tend to vote for conservatives — and suggested that both see advantages in globalization and rising inequality. This seeming paradox underscores the complexity of the problem of inequality and illustrates the need for additional study of the changing, multi-dimensional structure of political-ideological antecedents of inequality, Piketty argued.

Piketty went on to say that to begin this additional research, he recently investigated who votes for which parties in various countries based on wealth, income, education, and other factors. This work led to his working paper, “Brahmin Left vs Merchant Right: Rising Inequality and the Changing Structure of Political Conflict.”

At the event, Piketty outlined several key findings from this research — including that wealth affects voting patterns more than income. The income profile of left-vs.-right voting has been relatively flat over time. The wealthy, however, are much more likely to vote for conservatives than the poor, he said.

Piketty also pointed out that from 1956 to 2017, the educated vote moved dramatically from right to left. Today, the more educated you are, the more likely you are to vote for a liberal political party — in France, Britain, or the United States.

He said he has also found that rising inequality is highly correlated to unequal access to education. In the United States, 95 percent of those with parents at the highest income levels attend college, while only about 20 percent of the poorest do, research shows.

“The rise of higher education is creating a new form of political cleavage,” Piketty said. “The successful look down on those who did not do as well as undeserving.” While the “Brahmin left” want more taxes than do the “merchant right,” they want taxes for “universities and operas,” not wealth redistribution, Piketty argued.

These findings ultimately call for more study and discussion, Piketty said, noting that he has already begun to extend his work to more countries, including those with emerging economies.

Piketty concluded his 1.5-hour talk by noting that his goal is “to convince more people we need a more international and comparative approach to inequality.”

Neuroscientists get at the roots of pessimism

Thu, 08/09/2018 - 11:00am

Many patients with neuropsychiatric disorders such as anxiety or depression experience negative moods that lead them to focus on the possible downside of a given situation more than the potential benefit.

MIT neuroscientists have now pinpointed a brain region that can generate this type of pessimistic mood. In tests in animals, they showed that stimulating this region, known as the caudate nucleus, induced animals to make more negative decisions: They gave far more weight to the anticipated drawback of a situation than its benefit, compared to when the region was not stimulated. This pessimistic decision-making could continue through the day after the original stimulation.

The findings could help scientists better understand how some of the crippling effects of depression and anxiety arise, and guide them in developing new treatments.

“We feel we were seeing a proxy for anxiety, or depression, or some mix of the two,” says Ann Graybiel, an MIT Institute Professor, a member of MIT’s McGovern Institute for Brain Research, and the senior author of the study, which appears in the Aug. 9 issue of Neuron. “These psychiatric problems are still so very difficult to treat for many individuals suffering from them.”

The paper’s lead authors are McGovern Institute research affiliates Ken-ichi Amemori and Satoko Amemori, who perfected the tasks and have been studying emotion and how it is controlled by the brain. McGovern Institute researcher Daniel Gibson, an expert in data analysis, is also an author of the paper.

MIT neuroscientists have found that stimulating part of the striatum can induce feelings of pessimism. (Anatomography/Life Science Databases)

Emotional decisions

Graybiel’s laboratory has previously identified a neural circuit that underlies a specific kind of decision-making known as approach-avoidance conflict. These types of decisions, which require weighing options with both positive and negative elements, tend to provoke a great deal of anxiety. Her lab has also shown that chronic stress dramatically affects this kind of decision-making: More stress usually leads animals to choose high-risk, high-payoff options.

In the new study, the researchers wanted to see if they could reproduce an effect that is often seen in people with depression, anxiety, or obsessive-compulsive disorder. These patients tend to engage in ritualistic behaviors designed to combat negative thoughts, and to place more weight on the potential negative outcome of a given situation. This kind of negative thinking, the researchers suspected, could influence approach-avoidance decision-making.

To test this hypothesis, the researchers stimulated the caudate nucleus, a brain region linked to emotional decision-making, with a small electrical current as animals were offered a reward (juice) paired with an unpleasant stimulus (a puff of air to the face). In each trial, the ratio of reward to aversive stimuli was different, and the animals could choose whether to accept or not.

This kind of decision-making requires cost-benefit analysis. If the reward is high enough to balance out the puff of air, the animals will choose to accept it, but when that ratio is too low, they reject it. When the researchers stimulated the caudate nucleus, the cost-benefit calculation became skewed, and the animals began to avoid combinations that they previously would have accepted. This continued even after the stimulation ended, and could also be seen the following day, after which point it gradually disappeared.

This result suggests that the animals began to devalue the reward that they previously wanted, and focused more on the cost of the aversive stimulus. “This state we’ve mimicked has an overestimation of cost relative to benefit,” Graybiel says.
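
The shift can be summarized with a simple cost-benefit rule: accept an offer when the weighted benefit exceeds the weighted cost. In the sketch below, raising the cost weight (a stand-in for the caudate stimulation, not the study's actual model) makes previously acceptable offers get rejected:

```python
# Toy cost-benefit rule illustrating "overestimation of cost relative to
# benefit." The weights and offer values are invented for the example.
def accepts(reward, airpuff, cost_weight=1.0):
    """Accept when weighted benefit exceeds weighted cost."""
    return reward > cost_weight * airpuff

offers = [(reward, 5) for reward in range(1, 11)]  # fixed puff, rising juice

baseline = sum(accepts(r, p) for r, p in offers)                 # 5 of 10
skewed = sum(accepts(r, p, cost_weight=1.6) for r, p in offers)  # 2 of 10

print(f"accepted at baseline: {baseline}/10, with inflated cost: {skewed}/10")
```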

The study provides valuable insight into the role of the basal ganglia (a region that includes the caudate nucleus) in this type of decision-making, says Scott Grafton, a professor of neuroscience at the University of California at Santa Barbara, who was not involved in the research.

“We know that the frontal cortex and the basal ganglia are involved, but the relative contributions of the basal ganglia have not been well understood,” Grafton says. “This is a nice paper because it puts some of the decision-making process in the basal ganglia as well.”

A delicate balance

The researchers also found that brainwave activity in the caudate nucleus was altered when decision-making patterns changed. This change, discovered by Amemori, is in the beta frequency and might serve as a biomarker to monitor whether animals or patients respond to drug treatment, Graybiel says.

Graybiel is now working with psychiatrists at McLean Hospital to study patients who suffer from depression and anxiety, to see if their brains show abnormal activity in the neocortex and caudate nucleus during approach-avoidance decision-making. Magnetic resonance imaging (MRI) studies have shown abnormal activity in two regions of the medial prefrontal cortex that connect with the caudate nucleus.

The caudate nucleus has within it regions that are connected with the limbic system, which regulates mood, and it sends input to motor areas of the brain as well as dopamine-producing regions. Graybiel and Amemori believe that the abnormal activity seen in the caudate nucleus in this study could be somehow disrupting dopamine activity.

“There must be many circuits involved,” she says. “But apparently we are so delicately balanced that just throwing the system off a little bit can rapidly change behavior.”

The research was funded by the National Institutes of Health, the CHDI Foundation, the U.S. Office of Naval Research, the U.S. Army Research Office, MEXT KAKENHI, the Simons Center for the Social Brain, the Naito Foundation, the Uehara Memorial Foundation, Robert Buxton, Amy Sommer, and Judy Goldberg.

Introducing the latest in textiles: Soft hardware

Wed, 08/08/2018 - 12:59pm

The latest development in textiles and fibers is a kind of soft hardware that you can wear: cloth that has electronic devices built right into it.

Researchers at MIT have now embedded high-speed optoelectronic semiconductor devices, including light-emitting diodes (LEDs) and diode photodetectors, within fibers that were then woven at Inman Mills, in South Carolina, into soft, washable fabrics and made into communication systems. This marks the achievement of a long-sought goal of creating “smart” fabrics by incorporating semiconductor devices — the key ingredient of modern electronics — which until now was the missing piece for making fabrics with sophisticated functionality.

This discovery, the researchers say, could unleash a new “Moore’s Law” for fibers — in other words, a rapid progression in which the capabilities of fibers would grow exponentially over time, just as the capabilities of microchips have grown over decades.

The findings are described this week in the journal Nature in a paper by former MIT graduate student Michael Rein; his research advisor Yoel Fink, MIT professor of materials science and electrical engineering and CEO of AFFOA (Advanced Functional Fabrics of America); along with a team from MIT, AFFOA, Inman Mills, EPFL in Lausanne, Switzerland, and Lincoln Laboratory.

A spool of fine, soft fiber made using the new process shows the embedded LEDs turning on and off to demonstrate their functionality. The team has used similar fibers to transmit music to detector fibers, which work even when underwater. (Courtesy of the researchers)

Optical fibers are traditionally produced by making a cylindrical object called a “preform,” which is essentially a scaled-up model of the fiber, and then heating it. The softened material is drawn downward under tension, and the resulting fiber is collected on a spool.

The key breakthrough for producing these new fibers was to add to the preform light-emitting semiconductor diodes the size of a grain of sand, along with a pair of copper wires a fraction of a hair’s width. When heated in a furnace during the fiber-drawing process, the polymer preform partially liquefied, forming a long fiber with the diodes lined up along its center and connected by the copper wires.

In this case, the solid components were two types of electrical diodes made using standard microchip technology: light-emitting diodes (LEDs) and photosensing diodes. “Both the devices and the wires maintain their dimensions while everything shrinks around them” in the drawing process, Rein says. The resulting fibers were then woven into fabrics, which were laundered 10 times to demonstrate their practicality as a possible material for clothing.

“This approach adds a new insight into the process of making fibers,” says Rein, who was the paper’s lead author and developed the concept that led to the new process. “Instead of drawing the material all together in a liquid state, we mixed in devices in particulate form, together with thin metal wires.”

One of the advantages of incorporating function into the fiber material itself is that the resulting fiber is inherently waterproof. To demonstrate this, the team placed some of the photodetecting fibers inside a fish tank. A lamp outside the aquarium transmitted music (appropriately, Handel’s “Water Music”) through the water to the fibers in the form of rapid optical signals. The fibers in the tank converted the light pulses — so rapid that the light appears steady to the naked eye — to electrical signals, which were then converted into music. The fibers survived in the water for weeks.
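
The principle behind that demonstration can be sketched in a few lines of Python (a toy model only; the team’s actual modulation scheme is not described here, and the sample rate and mapping below are assumptions):

    # Toy intensity-modulation sketch: audio samples are mapped to light
    # levels pulsed far faster than the eye can resolve, and a photodetector
    # maps the received levels back to audio samples.
    import math

    SAMPLE_RATE = 8_000  # samples per second (assumed)

    def transmit(samples):
        # Map each audio sample in [-1, 1] to an LED drive level in [0, 1].
        return [(s + 1.0) / 2.0 for s in samples]

    def receive(light_levels):
        # Invert the mapping on the photodetector side.
        return [2.0 * v - 1.0 for v in light_levels]

    tone = [math.sin(2 * math.pi * 440 * n / SAMPLE_RATE) for n in range(32)]
    recovered = receive(transmit(tone))
    assert all(abs(a - b) < 1e-9 for a, b in zip(tone, recovered))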

Though the principle sounds simple, making it work consistently, and making sure that the fibers could be manufactured reliably and in quantity, has been a long and difficult process. Staff at AFFOA, led by Jason Cox and Chia-Chun Chung, developed the pathways to increasing yield, throughput, and overall reliability, making these fibers ready for transitioning to industry. At the same time, Marty Ellis from Inman Mills developed techniques for weaving these fibers into fabrics using a conventional industrial manufacturing-scale loom.

“This paper describes a scalable path for incorporating semiconductor devices into fibers. We are anticipating the emergence of a ‘Moore’s law’ analog in fibers in the years ahead,” Fink says. “It is already allowing us to expand the fundamental capabilities of fabrics to encompass communications, lighting, physiological monitoring, and more. In the years ahead fabrics will deliver value-added services and will no longer just be selected for aesthetics and comfort.”

He says that the first commercial products incorporating this technology will be reaching the marketplace as early as next year — an extraordinarily short progression from laboratory research to commercialization. Such rapid lab-to-market development was a key part of the reason for creating an academic-industry-government collaborative such as AFFOA in the first place, he says. These initial applications will be specialized products involving communications and safety. “It's going to be the first fabric communication system. We are right now in the process of transitioning the technology to domestic manufacturers and industry at an unprecedented speed and scale,” he says.

In addition to commercial applications, Fink says the U.S. Department of Defense — one of AFFOA’s major supporters — “is exploring applications of these ideas to our women and men in uniform.”

Beyond communications, the fibers could potentially have significant applications in the biomedical field, the researchers say. For example, devices using such fibers might be used to make a wristband that could measure pulse or blood oxygen levels, or be woven into a bandage to continuously monitor the healing process.

The research was supported in part by the MIT Materials Research Science and Engineering Center (MRSEC) through the MRSEC Program of the National Science Foundation, by the U.S. Army Research Laboratory and the U.S. Army Research Office through the Institute for Soldier Nanotechnologies. This work was also supported by the Assistant Secretary of Defense for Research and Engineering.

President Reif urges “farsighted national strategy” to address China competition

Wed, 08/08/2018 - 12:00pm

In an op-ed piece published today in The New York Times, MIT President L. Rafael Reif urges a more farsighted response to address China’s attempts to dominate cutting-edge technologies, which have included tactics such as industrial espionage and theft of intellectual property.

While strong and decisive action against such practices is essential, Reif writes, it is not enough. “[I]t would be a mistake to think that an aggressive defense alone will somehow prevent China’s technological success — or ensure America’s own,” he says.

Rather, the most important action the U.S. can take to protect its global leadership role is to redouble its core strength in innovation, starting with ground-breaking federally funded research.

China has begun to do just that, in a concerted national effort, including a project called “Made in China 2025” that aims to achieve global dominance in several key areas of technology and manufacturing. Because of these ambitious initiatives by the Chinese government, Reif writes, “stopping intellectual property theft and unfair trade practices — even if fully effective — would not allow the United States to relax back into a position of unquestioned innovation leadership.”

Reif adds that “Unless America responds urgently and deliberately to the scale and intensity of this challenge, we should expect that, in fields from personal communications to business, health, and security, China is likely to become the world’s most advanced technological nation and the source of the most advanced technological products in not much more than a decade.”

However, he emphasizes that this outcome is far from inevitable. The most effective countermeasure is to harness the power of federally funded research at American universities, “rooted in a national culture of opportunity and entrepreneurship, inspired by an atmosphere of intellectual freedom, supported by the rule of law and, crucially, pushed to new creative heights by uniting brilliant talent from every sector of our society and every corner of the world.”

Reif concludes that “As a nation, the United States needs to change its focus from merely reacting to China’s actions to building a farsighted national strategy for sustaining American leadership in science and innovation.”

Holding law-enforcement accountable for electronic surveillance

Wed, 08/08/2018 - 10:00am

When the FBI filed a court order in 2016 commanding Apple to unlock the iPhone of one of the shooters in a terrorist attack in San Bernardino, California, the news made headlines across the globe. Yet every day there are tens of thousands of court orders asking tech companies to turn over Americans’ private data. Many of these orders never see the light of day, leaving a whole privacy-sensitive aspect of government power immune to judicial oversight and lacking in public accountability.

To protect the integrity of ongoing investigations, these requests require some secrecy: Companies usually aren’t allowed to inform individual users that they’re being investigated, and the court orders themselves are also temporarily hidden from the public.

In many cases, though, charges never actually materialize, and the sealed orders usually end up forgotten by the courts that issue them, resulting in a severe accountability deficit.

To address this issue, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Internet Policy Research Initiative (IPRI) have proposed a new cryptographic system to improve the accountability of government surveillance while still maintaining enough confidentiality for the police to do their jobs.

“While certain information may need to stay secret for an investigation to be done properly, some details have to be revealed for accountability to even be possible,” says CSAIL graduate student Jonathan Frankle, one of the lead authors of a new paper about the system, which they’ve dubbed “AUDIT” ("Accountability of Unreleased Data for Improved Transparency"). “This work is about using modern cryptography to develop creative ways to balance these conflicting issues.”

Many of AUDIT’s technical methods were developed by one of its co-authors, MIT Professor Shafi Goldwasser. AUDIT is designed around a public ledger on which government officials share information about data requests. When a judge issues a secret court order or a law enforcement agency secretly requests data from a company, they have to make an iron-clad promise to make the data request public later in the form of what’s known as a “cryptographic commitment.” If the courts ultimately decide to release the data, the public can rest assured that the correct documents were released in full. If the courts decide not to, then that refusal itself will be made known.
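
A hash-based commitment is one standard way to realize such a promise; the sketch below (in Python, with hypothetical contents, and not necessarily the construction used in AUDIT) shows how an agency could publish a binding fingerprint of a sealed request now and let anyone verify the full document when it is later released:

    # Minimal hash-based commitment sketch (one standard construction;
    # AUDIT's actual scheme may differ). The digest is published now;
    # the nonce and document are revealed only when the order is unsealed.
    import hashlib, os

    def commit(document: bytes):
        nonce = os.urandom(32)  # randomness keeps the commitment hiding
        digest = hashlib.sha256(nonce + document).hexdigest()
        return digest, nonce

    def verify(digest: str, nonce: bytes, document: bytes) -> bool:
        return hashlib.sha256(nonce + document).hexdigest() == digest

    request = b"Sealed court order: records for one account"  # hypothetical
    public_digest, secret_nonce = commit(request)  # digest goes on the ledger

    # Later, at unsealing, the public checks the released document:
    assert verify(public_digest, secret_nonce, request)

Because the digest binds the agency to one specific document, the public can be confident that the eventual release matches what was originally sealed.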

AUDIT can also be used to demonstrate that actions by law-enforcement agencies are consistent with what a court order actually allows. For example, if a court order leads to the FBI going to Amazon to get records about a specific customer, AUDIT can prove that the FBI’s request is above board using a cryptographic method called “zero-knowledge proofs.” First developed in the 1980s by Goldwasser and other researchers, these proofs counterintuitively make it possible to prove that surveillance is being conducted properly without revealing any specific information about the surveillance.

The team's approach builds on privacy research in accountable systems led by co-author Daniel J. Weitzner, a principal research scientist at CSAIL and director of IPRI.

“As the volume of personal information expands, better accountability for how that information is used is essential for maintaining public trust,” says Weitzner. “We know that the public is worried about losing control over their personal data, so building technology that can improve actual accountability will help increase trust in the internet environment overall.”

Another element of AUDIT is that statistical information can be aggregated so that the extent of surveillance can be studied at a larger scale. This enables the public to ask all sorts of tough questions about how their data are being shared. What kinds of cases are most likely to prompt court orders? How many judges issued more than 100 orders in the past year, or more than 10 requests to Facebook this month? Frankle says the team’s goal is to establish a set of reliable, court-issued transparency reports, to supplement the voluntary reports that companies put out.

“We know that the legal system struggles to keep up with the complexity of increasing[ly] sophisticated users of personal data,” says Weitzner. “Systems like AUDIT can help courts keep track of how the police conduct surveillance and assure that they are acting within the scope of the law, without impeding legitimate investigative activity.”

Importantly, the team developed its aggregation system using an approach called multi-party computation (MPC), which allows courts to disclose relevant information without actually revealing their internal workings or data to one another. The current state-of-the-art MPC would normally be too slow to run on the data of hundreds of federal judges across the entire court system, so the team took advantage of the court system’s natural hierarchy of lower and higher courts to design a particular variant of MPC that would scale efficiently for the federal judiciary.
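
The flavor of such aggregation can be illustrated with additive secret sharing, a basic building block of many MPC protocols (a toy sketch only; AUDIT’s hierarchical variant for the federal judiciary is more involved):

    # Toy additive secret sharing: each court splits its private count into
    # random shares that sum to the true value mod a large prime. Parties
    # combine shares so only the aggregate total is ever reconstructed.
    import random

    PRIME = 2**61 - 1

    def share(value: int, n_parties: int):
        shares = [random.randrange(PRIME) for _ in range(n_parties - 1)]
        shares.append((value - sum(shares)) % PRIME)
        return shares

    court_counts = [12, 0, 87, 5]  # hypothetical per-court order counts
    all_shares = [share(c, 4) for c in court_counts]

    # Each party sums one share from every court; combining the partial
    # sums yields the total without exposing any individual count.
    partial = [sum(col) % PRIME for col in zip(*all_shares)]
    total = sum(partial) % PRIME
    print(total)  # 104, the aggregate across all courts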

According to Frankle, AUDIT could be applied to any process in which data must be both kept secret but also subject to public scrutiny. For example, clinical trials of new drugs often involve private information, but also require enough transparency to assure regulators and the public that proper testing protocols are being observed.

“It’s completely reasonable for government officials to want some level of secrecy, so that they can perform their duties without fear of interference from those who are under investigation,” Frankle says. “But that secrecy can’t be permanent. People have a right to know if their personal data has been accessed, and at a higher level, we as a public have the right to know how much surveillance is going on.”

Next, the team plans to refine AUDIT’s design so that it can handle even more complex data requests, specifically by tweaking the design via software engineering. The researchers are also exploring the possibility of partnering with specific federal judges to develop a prototype for real-world use.

“My hope is that, once this proof of concept becomes reality, court administrators will embrace the possibility of enhancing public oversight while preserving necessary secrecy,” says Stephen William Smith, a federal magistrate judge who has written extensively about government accountability. “Lessons learned here will undoubtedly smooth the way towards greater accountability for a broader class of secret information processes, which are a hallmark of our digital age.”

Frankle co-wrote the paper with Goldwasser, Weitzner, CSAIL PhD graduate Sunoo Park and undergraduate Daniel Shaar. The paper will be presented at this week’s USENIX Security conference in Baltimore. IPRI team members will also discuss related surveillance issues in more detail at upcoming workshops for both USENIX and this week’s International Cryptology Conference (Crypto 2018) in Santa Barbara.

The research was supported by IPRI, the National Science Foundation, the Defense Advanced Research Projects Agency, and the Simons Foundation.

New partnership between MIT-Germany and the Friedrich Alexander University of Erlangen-Nürnberg

Tue, 08/07/2018 - 1:00pm

The MIT-Germany Program, a part of the MIT Science and Technology Initiatives (MISTI), which connects students and faculty members with research and industry partners abroad, recently began a new partnership with the Friedrich Alexander University of Erlangen-Nürnberg (FAU). FAU, currently celebrating its 275th year, is one of the largest universities in Germany, with strong research programs in engineering and technology.

The new partnership is multifaceted. For students, it enables new student internship placements and expands the Global Teaching Labs program to Erlangen and Nürnberg, with both MIT and FAU participants. For faculty, the partnership creates a new MIT-FAU Seed Fund, which will finance collaborative early stage research projects as part of the MISTI Global Seed Funds program. Additional activities, including an annual workshop, will help to cement further faculty and student collaboration between the two universities.

Founded in 1997, the MIT-Germany Program is one of the largest MISTI programs, sending an average of 80 to 100 students each year. Justin Leahey, MIT-Germany Program manager, says student interest in internships in Germany continues to rise, given the high quality of tailored projects with top German partners. “Our students have already had great research internships at FAU itself,” Leahey says. “But FAU’s strong ties to local industry, including the innovative ‘Medical Valley’ cluster, afford new opportunities.”

Bjoern Eskofier, professor of computer science at FAU and current visiting professor at the MIT Media Lab, agrees. “In Germany, we say we have a lot of ‘hidden champions.’” He explains that this refers to companies of fewer than 500 employees that form the backbone of the German economy. “A lot of them are world market leaders in a really small niche, but nobody can surpass them because of their high quality,” he says. “We call them the ‘Deutscher Mittelstand’ [roughly equivalent to ‘small and medium-sized enterprises’]. I would say that my university is also one of those hidden champions.” While some outside Europe may not immediately be able to place his home university, he points out that FAU has been ranked by Reuters this year as the most innovative university in Germany and fifth in all of Europe.

“MIT is one of the worldwide leading institutions in engineering sciences and other disciplines, so it’s just logical for us to try to work with the best out there,” Eskofier says. FAU is especially strong in medical and engineering research, with its engineering faculty ranking as one of the three strongest in Germany. Pointing to FAU’s scientific core areas, including medical engineering, life sciences and health, new materials and processes, and information technology, he adds, “This is just a perfect match for things that are also in focus [at MIT].” As the head of the Machine Learning and Data Analytics Lab at FAU, Eskofier is currently collaborating with the MIT Media Lab in machine learning, wearable computing, and human-computer interaction groups.

Bringing researchers from different countries together with varying but overlapping sets of expertise is fruitful in both directions, Eskofier says. The research he is involved in at the Media Lab revolves around social network badges, a project driven by Oren Lederman of the Human Dynamics research group. The group is working on a wearable computing device that measures social interactions, closeness and audio pitch of people in work environments. “It brings up questions like, [do] men more often interrupt women in a meeting scenario?” says Eskofier, explaining the potential applications of this research collaboration back in Germany for The World of Work, another one of FAU’s scientific core areas.

Eskofier hopes to leverage the many complementary projects at FAU and MIT. One familiar challenge that most German university hospitals face is making health care data accessible for research without putting personal data protection and privacy at risk. Researchers at MIT work on a system called Open Algorithms (OPAL), and Eskofier is exploring strategies to make this useful in Germany. This is the type of knowledge sharing and exchange made possible when two universities collaborate through partnerships like the MIT-Germany-FAU cooperation.

In addition to the excellent researchers and facilities at FAU, Eskofier notes the strong embedding in German local and national industry that allows MIT-Germany students to receive exposure to industry projects, including within the automotive and medical technology fields. The German emphasis on having a balanced work-life culture also lends itself to a positive professional experience for students. “That’s a magic secret we only show to visitors that experience it [first-hand],” Eskofier jokes about the high productivity in the German workplace, despite often working fewer hours than Americans.

Last summer, MIT-Germany student Tyndale Hannan worked on a physics project with the Department of Computer Science at FAU. “My mentor would occasionally take a break from work with me to sit down and discuss future prospects in the field of laser physics,” says Hannan. “These talks were invaluable guidance. Overall, working at FAU was a fun and formative experience that opened doors to more opportunities in research.” This summer, the rising junior is working as a Physics Research Fellow at the University of California at San Diego.

“There is nothing better than to send students away and to hire them again later because they were just immersed in a different scientific culture,” Eskofier says of his FAU students who have brought new skills back from international research experiences. He notes that such opportunities are good for personal development, and that they return from their experience more enriched and full of ideas.

As a visiting professor himself at MIT, Eskofier cites the obvious appeal for FAU students and researchers to come to MIT, and is excited about the bridge that this new partnership has opened up to receive MIT students at FAU. “They will benefit because it’s always good to have exposure to different ideas, skillsets, and mindsets. We have a different way of working and it’s also very competitive and motivating,” he says.

The framework for the new cooperation outlines the current global engagement at FAU focusing on “promoting an increasingly diverse culture of internationalization through complementary strategic institutional measures” with the main goal of expanding and promoting international research collaborations, giving way to greater visibility worldwide. This aligns well with the mission of the MIT-Germany Program and MISTI, which is a part of the Center for International Studies within the School of Humanities, Arts, and Social Sciences. Leahey says, “We’re looking forward to a productive partnership that will benefit students and faculty at both universities.”

Sensor could help doctors select effective cancer therapy

Tue, 08/07/2018 - 4:59am

MIT chemical engineers have developed a new sensor that lets them see inside cancer cells and determine whether the cells are responding to a particular type of chemotherapy drug.

The sensors, which detect hydrogen peroxide inside human cells, could help researchers identify new cancer drugs that boost levels of hydrogen peroxide, which induces programmed cell death. The sensors could also be adapted to screen individual patients’ tumors to predict whether such drugs would be effective against them.

“The same therapy isn’t going to work against all tumors,” says Hadley Sikes, an associate professor of chemical engineering at MIT. “Currently there’s a real dearth of quantitative, chemically specific tools to be able to measure the changes that occur in tumor cells versus normal cells in response to drug treatment.”

Sikes is the senior author of the study, which appears in the Aug. 7 issue of Nature Communications. The paper’s first author is graduate student Troy Langford; other authors are former graduate students Beijing Huang and Joseph Lim and graduate student Sun Jin Moon.

Tracking hydrogen peroxide

Cancer cells often have mutations that cause their metabolism to go awry and produce abnormally high fluxes of hydrogen peroxide. When too much of the molecule is produced, it can damage cells, so cancer cells become highly dependent on antioxidant systems that remove hydrogen peroxide from cells.

Drugs that target this vulnerability, which are known as “redox drugs,” can work by either disabling the antioxidant systems or further boosting production of hydrogen peroxide. Many such drugs have entered clinical trials, with mixed results.

“One of the problems is that the clinical trials usually find that they work for some patients and they don’t work for other patients,” Sikes says. “We really need tools to be able to do more well-designed trials where we figure out which patients are going to respond to this approach and which aren’t, so more of these drugs can be approved.”

To help move toward that goal, Sikes set out to design a sensor that could sensitively detect hydrogen peroxide inside human cells, allowing scientists to measure a cell’s response to such drugs.

Existing hydrogen peroxide sensors are based on proteins called transcription factors, taken from microbes and engineered to fluoresce when they react with hydrogen peroxide. Sikes and her colleagues tried to use these in human cells but found that they were not sensitive in the range of hydrogen peroxide they were trying to detect, which led them to seek human proteins that could perform the task.

Through studies of the network of human proteins that become oxidized with increasing hydrogen peroxide, the researchers identified an enzyme called peroxiredoxin that dominates most human cells’ reactions with the molecule. One of this enzyme’s many functions is sensing changes in hydrogen peroxide levels.

Langford then modified the protein by adding two fluorescent molecules to it — a green fluorescent protein at one end and a red fluorescent protein at the other end. When the sensor reacts with hydrogen peroxide, its shape changes, bringing the two fluorescent proteins closer together. The researchers can detect whether this shift has occurred by shining green light onto the cells: If no hydrogen peroxide has been detected, the glow remains green; if hydrogen peroxide is present, the sensor glows red instead.
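
In effect, the readout is a ratio of red to green emission. The sketch below shows how such a measurement might be thresholded (hypothetical intensities and threshold; not the authors’ analysis pipeline):

    # Illustrative classification by red/green emission ratio. When the
    # sensor reacts with hydrogen peroxide, red emission rises relative
    # to green, so a higher ratio flags H2O2-positive cells.
    def h2o2_detected(green: float, red: float, threshold: float = 1.0) -> bool:
        return red / green > threshold

    cells = {"cell A": (950.0, 210.0), "cell B": (430.0, 780.0)}  # (green, red)
    for label, (green, red) in cells.items():
        status = "H2O2 elevated" if h2o2_detected(green, red) else "unchanged"
        print(label, "->", status)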

Predicting success

The researchers tested their new sensor in two types of human cancer cells: one set that they knew was susceptible to a redox drug called piperlongumine, and another that they knew was not susceptible. The sensor revealed that hydrogen peroxide levels were unchanged in the resistant cells but went up in the susceptible cells, as the researchers expected.

Sikes envisions two major uses for this sensor. One is to screen libraries of existing drugs, or compounds that could potentially be used as drugs, to determine if they have the desired effect of increasing hydrogen peroxide concentration in cancer cells. Another potential use is to screen patients before they receive such drugs, to see if the drugs will be successful against each patient’s tumor. Sikes is now pursuing both of these approaches.

“You have to know which cancer drugs work in this way, and then which tumors are going to respond,” she says. “Those are two separate but related problems that both need to be solved for this approach to have practical impact in the clinic.”

The research was funded by the Haas Family Fellowship in Chemical Engineering, the National Science Foundation, a Samsung Fellowship, and a Burroughs Wellcome Fund Career Award at the Scientific Interface.

Mass timber: Thinking big about sustainable construction

Mon, 08/06/2018 - 11:59pm

The construction and operation of all kinds of buildings uses vast amounts of energy and natural resources. Researchers around the world have therefore been seeking ways to make buildings more efficient and less dependent on emissions-intensive materials.

Now, a project developed through an MIT class has come up with a highly energy-efficient design for a large community building that uses one of the world’s oldest construction materials. For this structure, called “the Longhouse,” massive timbers made of conventional lumber would be laminated together like a kind of supersized plywood.

The design will be presented this October at the Maine Mass Timber Conference, which is dedicated to exploring new uses of this material; where building codes permit, mass timber can be used to build safe, sound high-rise buildings.

John Klein, a research scientist in MIT’s architecture department who taught a workshop called Mass Timber Design that came up with the new design, explains that “in North America, we have an abundance of forest resources, and a lot of it is overgrown. There’s an effort to find ways to use forest products sustainably, and the forests are actively undergoing thinning processes to prevent forest fires and beetle infestations.”

People tend to think of wood as a suitable material for structures just a few stories high, but not for larger structures, Klein says. But already some builders are beginning to use mass timber products (a term that basically applies to any wood products much larger than conventional lumber) for bigger structures, including medium-rise buildings of up to 20 stories. Even taller buildings should ultimately be practical with this technology, he says. One of the largest mass timber buildings in the U.S. is the new 82,000-square-foot John W. Olver Design Building at the University of Massachusetts at Amherst.

One of the first questions people raise when they hear of such construction has to do with fire. Can such tall wooden structures really be safe? In fact, Klein says, tests have demonstrated that mass timber structures can resist fire as well as or better than steel. That’s because wood exposed to fire naturally produces a layer of char, which is highly insulating and can protect the bulk of the wood for more than two hours. Steel, in contrast, can fail suddenly when heat softens it and causes it to buckle.

Klein explains that this natural fire resistance makes sense when you think about dropping a lit match onto a pile of wood shavings, versus dropping it onto a log. The shavings will burst into flames, but on the log a match will simply sputter out. The greater the bulk of the wood, the better it resists ignition.

The structure designed by the class uses massive beams made from layers of wood veneers laminated together, a process known as laminated veneer lumber (LVL), formed into panels 50 feet long, 10 feet wide, and more than 6 inches thick. These are cut to size and used to make a series of large arches, 40 feet tall at the central peak and spanning 50 feet across, built from sections with a triangular cross-section for added structural strength. A series of these arches is assembled to create a large enclosed space with no need for internal structural supports. The pleated roof is designed to accommodate solar panels and windows for natural lighting and passive solar heating.

“The structural depth achieved by building up the triangular section helps us achieve the clear span desired for the communal space, all while lending a visual language on both the interior and the exterior of the structure,” says Demi Fang, an MIT architecture graduate student who was part of the design team. “Each arch tapers and widens along its length, because not every point along the arch will be subject to the same magnitude of forces, and this varying cross-section depth both expresses structural performance while encouraging materials savings,” she says.

The arches would be factory-built in sections, and then bolted together on site to make the complete building. Because the building would be largely prefabricated, the actual on-site construction process would be greatly streamlined, Klein says.

“The Longhouse is a multifunctional building, designed to accommodate a range of event scenarios from co-working, exercise classes, social mixers, exhibitions, dinner gatherings and lectures,” Klein says, adding that it builds on a long tradition of such communal structures in cultures around the world.  

Whereas the production of concrete, used in most of the world’s large buildings, involves large releases of greenhouse gases from the baking of limestone, construction using mass timber has the opposite effect, Klein says. While concrete adds to the world’s burden of greenhouse gases, timber actually lessens it, because the carbon removed from the air while trees grow is essentially sequestered for as long as the building lasts. “The building is a carbon sink,” he says.

One obstacle to greater use of mass timber for large structures is in current U.S. building codes, Klein says, which limit the use of structural wood to residential buildings up to five stories, or commercial buildings up to six stories. But recent construction of much taller timber buildings in Europe, Australia, and Canada — including an 18-story timber building in British Columbia — should help to establish such buildings’ safety and lead to the needed code changes, he says.

Steve Marshall, an assistant director of cooperative forestry with the U.S. Forest Service, who was not involved in this project, says “Longhouse is a wonderfully creative and beautifully executed example of the design potential for mass timber.” He adds that “mass timber is poised to become a significant part of how America builds. The sustainability implications for the places we live, work, and play are huge. In addition to the well-known ramifications such as the sequestration of carbon within the buildings, there are also community benefits such as dramatically reduced truck traffic during the construction process.”

The Longhouse design was developed by a cross-disciplinary team in 4.S13 (Mass Timber Design), a design workshop in MIT’s architecture department that explores the future of sustainable buildings. The team included John Fechtel, Paul Short, Demi Fang, Andrew Brose, Hyerin Lee, and Alexandre Beaudouin-Mackay. It was supported by the Department of Architecture, BuroHappold Engineering and Nova Concepts.

A targeted approach to treating glioma

Mon, 08/06/2018 - 2:59pm

Glioma, a type of brain cancer, is normally treated by removing as much of the tumor as possible, followed by radiation or chemotherapy. With this treatment, patients survive an average of about 10 years, but the tumors inevitably grow back.

A team of researchers from MIT, Brigham and Women’s Hospital, and Massachusetts General Hospital hopes to extend patients’ lifespan by delivering directly to the brain a drug that targets a mutation found in 20 to 25 percent of all gliomas. (This mutation is usually seen in gliomas that strike adults under the age of 45.) The researchers have devised a way to rapidly check for the mutation during brain surgery, and if the mutation is present, they can implant microparticles that gradually release the drug over several days or weeks.

“To provide really effective therapy, we need to diagnose very quickly, and ideally have a mutation diagnosis that can help guide genotype-specific treatment,” says Giovanni Traverso, an assistant professor at Brigham and Women’s Hospital, Harvard Medical School, a research affiliate at MIT’s Koch Institute for Integrative Cancer Research, and one of the senior authors of the paper.

The researchers are also working on ways to identify and target other mutations found in gliomas and other types of brain tumors.

“This paradigm allows us to modify our current intraoperative resection strategy by applying molecular therapeutics that target residual tumor cells based on their specific vulnerabilities,” says Ganesh Shankar, who is currently completing a spine surgery fellowship at Cleveland Clinic prior to returning as a neurosurgeon at Massachusetts General Hospital, where he performed this study.

Shankar and Koch Institute postdoc Ameya Kirtane are the lead authors of the paper, which appears in the Proceedings of the National Academy of Sciences the week of Aug. 6. Daniel Cahill, a neurosurgeon at MGH and associate professor at Harvard Medical School, is a senior author of the paper, and Robert Langer, the David H. Koch Institute Professor at MIT, is also an author.

Targeting tumors

The tumors that the researchers targeted in this study, historically known as low-grade gliomas, usually occur in patients between the ages of 20 and 40. During surgery, doctors try to remove as much of the tumor as possible, but they can’t be too aggressive if tumors invade the areas of the brain responsible for key functions such as speech or movement. The research team wanted to find a way to locally treat those cancer cells with a targeted drug that could delay tumor regrowth.

To achieve that, the researchers decided to target a mutation called IDH1/2. Cancer cells with this mutation shut off a metabolic pathway that cells normally use to create a molecule called NAD, making them highly dependent on an alternative pathway that requires an enzyme called NAMPT. Researchers have been working to develop NAMPT inhibitors to treat cancer.

So far, these drugs have not been used for glioma, in part because of the difficulty in getting them across the blood-brain barrier, which separates the brain from circulating blood and prevents large molecules from entering the brain. NAMPT inhibitors can also produce serious side effects in the retina, bone marrow, liver, and blood platelets when they are given orally or intravenously.

To deliver the drugs locally, the researchers developed microparticles in which the NAMPT inhibitor is embedded in PLGA, a polymer that has been shown to be safe for use in humans. Another desirable feature of PLGA is that the rate at which the drug is released can be controlled by altering the ratio of the two polymers that make up PLGA — lactic acid and glycolic acid.
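
As a rough illustration of how that tuning works (a hypothetical first-order release model, not the formulation reported in the paper), a faster-degrading polymer blend can be abstracted as a larger rate constant:

    # Hypothetical first-order release kinetics: the fraction of drug
    # released by day t is 1 - exp(-k*t), with k standing in for how
    # quickly a given lactic/glycolic ratio degrades. Numbers illustrative.
    import math

    def fraction_released(t_days: float, k_per_day: float) -> float:
        return 1.0 - math.exp(-k_per_day * t_days)

    for label, k in [("slower-degrading blend", 0.05), ("faster-degrading blend", 0.3)]:
        print(label, [round(fraction_released(t, k), 2) for t in (1, 7, 28)])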

To determine which patients would benefit from treatment with the NAMPT inhibitor, the researchers devised a genetic test that can reveal the presence of the IDH mutation in approximately 30 minutes. This allows the procedure to be done on biopsied tissue during the surgery, which takes about four hours. If the test is positive, the microparticles can be placed in the brain, where they gradually release the drug, killing cells left behind during the surgery.

In tests in mice, the researchers found that treatment with the drug-carrying particles extended the survival of mice with IDH mutant-positive gliomas. As they expected, the treatment did not work against tumors without the IDH mutation. In mice treated with the particles, the team also found none of the harmful side effects seen when NAMPT inhibitors are given throughout the body.

“When you dose these drugs locally, none of those side effects are seen,” Traverso says. “So not only can you have a positive impact on the tumor, but you can also address the side effects which sometimes limit the use of a drug that is otherwise effective against tumors.”

The new approach builds on similar work from Langer’s lab that led to the first FDA-approved controlled drug-release system for brain cancer — a tiny wafer that can be implanted in the brain following surgery.

“I am very excited about this new paper, which complements very nicely the earlier work we did with Henry Brem of Johns Hopkins that led to Gliadel, which has now been approved in over 30 countries and has been used clinically for the past 22 years,” Langer says.

An array of options

The researchers are now developing tests for other common mutations found in brain tumors, with the goal of devising an array of potential treatments for surgeons to choose from based on the test results. This approach could also be used for tumors in other parts of the body, the researchers say.

“There’s no reason this has to be restricted to just gliomas,” Shankar says. “It should be able to be used anywhere where there’s a well-defined hotspot mutation.”

They also plan to do some tests of the IDH-targeted treatment in larger animals, to help determine the right dosages, before planning for clinical trials in patients.

“We feel its best use would be in the early stages, to improve local control and prevent regrowth at the site,” Cahill says. “Ideally it would be integrated early in the standard-of-care treatment for patients, and we would try to put off the recurrence of the disease for many years or decades. That’s what we’re hoping.”

The research was funded by the American Brain Tumor Association, a SPORE grant from the National Cancer Institute, the Burroughs Wellcome Career Award in the Medical Sciences, the National Institutes of Health, and the Division of Gastroenterology at Brigham and Women’s Hospital.

Study: Hole in ionosphere is caused by sudden stratospheric warming

Mon, 08/06/2018 - 12:30pm

Forecasting space weather is even more challenging than regular meteorology. The ionosphere — the upper atmospheric layer containing particles charged by solar radiation — affects many of today’s vital navigation and communication systems, including GPS mapping apps and airplane navigation tools. Being able to predict activity of the charged electrons in the ionosphere is important to ensure the integrity of satellite-based technologies.

Geospace research has long established that certain changes in the atmosphere are caused by the sun’s radiation, through mechanisms including solar wind, geomagnetic storms, and solar flares. Coupling effects — or changes in one atmospheric layer that affect other layers — are more controversial. Debates include the extent of connections between the layers, as well as how far such coupling effects extend, and the details of processes involved with these effects.

One of the more scientifically interesting large-scale atmospheric events is called a sudden stratospheric warming (SSW), in which enormous waves in the troposphere — the lowermost layer of the atmosphere in which we live — propagate upward into the stratosphere. These planetary waves are generated by air moving over geological structures such as large mountain ranges; once in the stratosphere, they interact with the polar jet streams. During a major SSW, temperatures in the stratosphere rise dramatically over the course of a few days.

SSW-induced changes in the ionosphere were once thought to be daytime events. A recent study led by Larisa Goncharenko of MIT Haystack Observatory, available online and in the forthcoming issue of the Journal of Geophysical Research: Space Physics, examined a major SSW from January 2013 and its effect on the nighttime ionosphere. Decades of data from the MIT Millstone Hill geospace facility in Westford, Massachusetts; Arecibo Observatory in Puerto Rico; and the Global Navigation Satellite System (GNSS) were used to measure various parameters in the ionosphere and to separate the effect of the SSW from other, known effects.

The study found that electron density in the nighttime ionosphere was dramatically reduced by the effects of the SSW for several days: A significant hole formed, stretching across hemispheres from latitude 55 degrees south to 45 degrees north. The researchers also measured a strong downward plasma motion and a decrease in ion temperature after the SSW.
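
One simple way to express such a depletion (with synthetic numbers; not the study’s actual data or method) is as a percentage deviation of observed nighttime electron density from a quiet-time baseline:

    # Illustrative anomaly calculation with synthetic values: nightly mean
    # electron densities are compared against a quiet-time baseline to
    # isolate the SSW-driven depletion.
    baseline = [5.2, 5.0, 4.9, 5.1, 5.3]  # quiet-time nightly means (arb. units)
    observed = [5.1, 4.2, 3.1, 3.4, 4.6]  # nights spanning the SSW

    for night, (o, b) in enumerate(zip(observed, baseline), start=1):
        deviation = 100.0 * (o - b) / b
        print(f"night {night}: {deviation:+.1f}% vs quiet-time baseline")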

“Goncharenko et al. show clearly that lower atmospheric forcing associated to the large meteorological event called an SSW can also influence the low- and mid-latitude ionosphere,” says Jorge L. Chau, head of the Radar Remote Sensing Department at the Leibniz Institute of Atmospheric Physics. “In a way the connection was expected, given the strong connectivity between regions; however, due to other competing factors, lack of proper data, and — more important — lack of perseverance to search for such nighttime connections, previous studies have not shown such connections — at least not as clear. The new findings open new challenges as well [as] opportunities to improve the understanding of lower atmospheric forcing in the ionosphere.”

These significant results from Goncharenko and colleagues are also featured as an AGU research highlight in EOS.

Understanding how events far away and in other layers of the atmosphere affect the ionosphere is an important component of space weather forecasting; additional work is needed to pin down the precise mechanisms by which SSWs affect the nighttime ionosphere and other coupling effects.

“The large depletions in the nighttime ionosphere shown in this study are potentially important for near-Earth space weather as they may impact how the upper atmosphere responds to geomagnetic storms and influence the occurrence of ionosphere irregularities,” says Nick Pedatella, scientist at the High Altitude Observatory of the National Center for Atmospheric Research. “The observed depletions in the nighttime ionosphere provide another point of reference for testing the fidelity of model simulations of the impact of SSWs on the ionosphere.”

Encouraging the next generation of fusion innovators

Mon, 08/06/2018 - 12:00pm

In memory of MIT alumnus Samuel Ing '53, MS '54, ScD '59, his family has established a memorial fund to support graduate students at MIT’s Plasma Science and Fusion Center (PSFC) who are taking part in the center’s push to create a smaller, faster, and less expensive path to fusion energy.

Samuel Ing was born in Shanghai, China, in 1932. Mentored by Professor Thomas Sherwood at MIT, he received BS, MS, and ScD degrees in chemical engineering in 1953, 1954, and 1959, respectively. Joining the Xerox Corporation after graduation, he rose from senior scientist, to principal scientist, to senior vice president of the Xerographic Technology Laboratory at the Webster Research Center in Webster, New York. He spent most of his career in western New York State with his wife Mabel, whom he met at an MIT dance. They raised four daughters: Julie, Bonnie, Mimi, and Polly.

An innovator and advocate for new technologies, including desktop publishing, Samuel Ing became intrigued with MIT’s approach to creating fusion energy after attending a talk by PSFC Director Dennis Whyte at the MIT Club in Palo Alto in early 2016. His daughter Emilie “Mimi” Slaughter ’87, SM ’88, who majored in electrical engineering, later expressed her own enthusiasm to her father when, as a member of the School of Engineering Dean’s Advisory Council, she heard Whyte speak in the fall of 2017.

In pursuit of a clean and virtually endless source of energy to fulfill the growing demands around the world, MIT has championed fusion research since the 1970s, designing compact tokamaks that use high magnetic fields to heat and contain the plasma fuel in a donut-shaped vacuum chamber. The PSFC is now working on SPARC, a new high-field, net fusion energy experiment. Researchers are using a thin superconducting tape to create compact electromagnets with fields significantly higher than those available to any other current fusion experiment. These magnets would make it possible to build a smaller, high-field tokamak at less cost, while speeding the quest for fusion energy.

Mimi Slaughter remembers her father’s passion for innovation and entrepreneurship.

“It’s the MIT culture,” she says. “I see that in the fusion lab — the idea of just doing it; figuring out a way to try to make it happen, not necessarily through the traditional channels. I know my Dad agrees. He did that at Xerox. He had his own lab, creating his own desktop copiers. That grew out of what he experienced at MIT.”

The Ing family is celebrating that creative spirit with the Samuel W. Ing Memorial Fund for MIT graduate students who will be driving the research and discovery forward on SPARC. It was a class of PSFC graduate students that proposed the original concept for this experiment, and it will be the young minds with new ideas that, with the support of the fund, will advance fusion research at MIT.

Or as Sam Ing once said: “Very interesting technology. It has a tremendous future, and if anyone can do it, it’s MIT.”

Transforming the U.S. Naval Air Systems Command, with thanks to MIT

Mon, 08/06/2018 - 9:00am

For the past three years, the Department of Defense’s Naval Air Systems Command (NAVAIR) organization has committed to a different kind of mission than any it has pursued before: transforming its engineering acquisition capabilities to a model-based design. The goal is to shorten the timeline from concept to delivery without sacrificing quality or precision.

Since early in 2017, an essential part of implementing that transformation has been NAVAIR’s participation in the MIT program “Architecture and Systems Engineering: Models and Methods to Manage Complex Systems,” a four-course online program on model-based systems engineering.

“It is taking way too long to develop and deliver the next generation of war fighting capability to our war fighters,” says David Cohen, director of the Air Platform Systems Engineering Department at NAVAIR, referring to the current design and development processes based on systems engineering practices and processes from the 1970s. “We need to shorten that timeline dramatically. We have a national security imperative to be delivering the next level of technology to our warfighter to continue to try to maintain our advantage over our adversaries.”

NAVAIR views the shift to model-based systems engineering as an essential step in shortening and modernizing its abilities to deliver high-quality, state-of-the-art programs. They enrolled their first cohort of 60 engineers and managers into the MIT program in March 2017. The third group will soon complete the four-month program, which has become a key piece of the NAVAIR transformation by building the awareness and skills needed to successfully implement model-based systems engineering.

Procuring naval aviation assets

NAVAIR procures and helps sustain all of the Navy and Marine Corps aviation assets — helicopters, jets, transport aircraft, bombs, avionics, missiles, virtually any kind of weapon used by U.S. sailors and Marines. Their responsibilities include research, design, development, and systems engineering of these assets internally and with contractors; acquisition, testing and evaluation of these assets, as well as training, repair, modification, and in-service engineering and logistics support.

“We are the organization that receives requirements from the Pentagon for a new program, puts them out on contract, does the acquisition of that project and also provides the technical oversight and programmatic oversight during the development of that project to be sure it is maturing as expected and delivering what is needed,” says David Meiser, Advanced Systems Engineering Department head, who is helping to lead the systems transformation effort at NAVAIR.

NAVAIR employs more than 10,000 engineers, plus logisticians, testers, and specialists in areas ranging from software to engines to structures.

“We are kind of like the FAA for naval aircraft,” says Meiser, referring to the Federal Aviation Administration. “We go through the whole test and certification process and also provide the air-worthiness authority. Once the system is tested and does what it needs to do, we also provide the support mechanism to have ongoing logistics and engineering support needed to maintain these aircraft for 20-50 years.”

Design changes needed

It takes approximately 15 years to build a new weapons system, such as a fighter jet, from idea to fruition. A key reason is increasing system complexity. In the 1960s, the technology of a jet was based largely on the air vehicle itself. Today, everything is integrated with the aircraft, from how it flies to its targeting system, its weapons capabilities, its visual system, and more.

“They are so much more complex in functionality and capabilities and it’s harder to develop and manage all of the requirements and interfaces,” says Systems Transformation Director Jaime Guerrero of NAVAIR’s Systems Engineering Development and Implementation Center. “You need a model-based approach to do that as opposed to a document-centric approach which has been how NAVAIR has operated for decades.” 

Adding to the pressure, NAVAIR leadership mandated collapsing that cycle time from 15 years to less than half, David Cohen says.

“That’s where we need to be,” Cohen adds. “The threats we are trying to address with these weapons systems are evolving in a faster pace. We have to be a lot more agile in terms of getting a product to the fleet much faster.”

In 2013, NAVAIR participated in a research effort with the DOD’s Systems Engineering Research Center (SERC) to find better and faster ways of doing systems engineering. After collaborating with industry partners, academia, and other government agencies, SERC determined that it was technically feasible to pursue modeling methods as the way forward. Between 2014 and early 2016, NAVAIR engineering leadership researched modeling methods with key industry partners such as Boeing, Lockheed Martin, Raytheon, and 30 other companies to see how they were executing model-based methods, as well as those practiced in the auto industry, where short design timelines are the norm. They also enlisted input from other government agencies that were already moving their processes to a model-centric method.

“We absorbed a lot of information from these industries to see that we could use a different methodology to collapse cycle time,” Guerrero says.

In those two years, NAVAIR researched 40 to 50 companies, universities, and government agencies and concluded that it was technically feasible to transform, in about 10 years, into a different organization with different skills, tools, methods, and processes. NAVAIR committed to model-based systems engineering to incorporate this paradigm shift into its organization.

Implementing model-based systems engineering

Leadership, however, was not supportive of a 10-year transformational window. They wanted to aggressively compress the timeline.

“When we realized leadership wanted to compress the timeline to about a three-year timeline for transforming the organization, we decided to go out and search experts and the best training we could get, the best tools in the market,” Guerrero recalls.

They started searching for the resources needed to do that and attended workshops and symposiums. One of them was sponsored by NASA’s Jet Propulsion Laboratory, which was a few steps ahead in initiating a model-based systems engineering (MBSE) perspective. There, Meiser and Guerrero learned of the MIT program from Bruce Cameron, director of the Systems Architecture Lab at MIT, who developed the coursework in 2016 and was also in attendance.

“Some of our partners, especially Boeing, were already involved with the MIT coursework and they recommended it,” says Guerrero. It had also become a command initiative at NAVAIR to push a fast transformation program. “So we had the command initiative and the resources to go out and train as many people as possible,” he says.

NAVAIR committed to the courses as a way to establish a common language, introducing its workforce to the concepts, tools, and terminology that will foster the deeper conversations necessary to adopt MBSE and advance the level of training.

The entire four-course program, which runs on the edX online learning platform, takes about 20 weeks to complete. Each course is organized into weekly lessons, each requiring about four to five hours of work and combining videos, reading material, assessments, and coursework. At the end of each week, students complete a project that is reviewed by peers.

When Guerrero and Meiser completed the program in the spring of 2017, they realized it could help align NAVAIR’s leadership by educating command leaders on why modeling is part of the solution to becoming a more agile organization.

“The four-course series provides a high-level explanation of how to do systems engineering and architecture in a model-based environment,” Meiser says. “At the end of these courses you may not be a total practitioner of model-based engineering, but you have an appreciation of the value of model-based methods.”

Management commitment from top leadership 

“We came out of that and realized we needed to require it of a lot of our senior leaders here and some of our chief engineers, because it is not about making them modelers or making them experts in the process,” adds Guerrero. “It’s about informing them of how this model-centric method is going to help us as an organization. Leaders have to be in agreement and push in the same direction to make this quick transformation happen.”

Fortunately, NAVAIR’s top leadership was immediately on board.

“What we have going for us at NAVAIR is that they’ve embraced MBSE and faster cycle times as a command initiative and they’ve committed to doing this comprehensively across NAVAIR,” says Meiser, adding that they have been given both the budget and the top-level support to pursue MBSE.

Vice Admiral Paul Grosklags, NAVAIR commander, even prepared a video discussing the path to going digital with acquisition, sustainment, and business processes and how it has the potential to increase readiness and speed to the fleet. Encouraged by that, Guerrero and Meiser produced their own YouTube video to help get the message out about the systems engineering transformation at NAVAIR.

As a result, NAVAIR targets the MIT program toward management and command leaders across all of its engineering disciplines as well as logistics and testing, the people who have to facilitate the change. Though they are not the individuals responsible for doing the modeling, they are required to understand the capabilities of model-based systems engineering.

Now that nearly 150 NAVAIR personnel have completed the program, the feedback has been very encouraging. Some with more experience believe it was a great reinforcement of what they knew or should have known. Others say it helped them understand certain MBSE aspects they were not previously familiar with.

“We’ve given it to a fairly diverse group of people,” says Meiser. “One thing I’ve heard regularly is that once people have been through it, they look at the problem differently. That has been the effect we’ve wanted to have. They start to think more about how to approach problems in a model-based way.”

Participants have also realized the value of pursuing this type of education together in the MIT program.

“We have learned from others NOT to try to do this transformational work in isolation,” adds Meiser. “This discipline is fairly new and having access to others pursuing the same thing has been very helpful for us.”

The leadership perspective

Cohen appreciated the non-intrusive delivery method as well as the content, feeling that the online training provided a good balance of depth and instruction time. “It has been an integral first step, especially for bringing the broad workforce at large into the discussion of what MBSE is,” he says.

Cohen knows NAVAIR is embarking on a monumental challenge. After completing the program himself, he realized he had to adjust his expectations.

“It helped alert me to some of those cautionary areas where I might have been overly optimistic in my expectations,” he says. “Throughout the course, there was more emphasis on quality of the product, not just on rapid cycle time.”

He was particularly impressed by the level of respect, knowledge, and professional experience demonstrated by others involved in the course.

“I had to take on board and value the experience of people who have been working in this field a lot longer than we have,” he says. He admits the coursework tempered his aggressive expectations, but it simultaneously highlighted where NAVAIR needed to invest more research and resources in certain program areas to achieve the faster results expected by top leadership.

Cohen credits the program with shaping the transformational process at NAVAIR by pointing out where they need to pursue deeper dives for the next level of depth in workforce training.

“The course gives you the understanding that MBSE has layers to it,” he says. “So depending on where you are in the organization, you will need to get more in-depth training in your area. We found the course introduced everyone to the depth and breadth of what model-based engineering is, its applications and how it’s used.”

At NAVAIR, the program has worked because it intentionally involves a broad diversity of people across the organization rather than a few silos, such as an entire group or department. NAVAIR recommends that the program be taken by those at higher levels of an organization who are facilitating the engineering change; those with more job-specific responsibilities should receive training targeted at the precise areas they will be implementing.

“The courses have helped everyone understand the overarching goal and establish a common language,” says Cohen. “Although the transition to model-based systems engineering is complicated, we have expanded our skills and contacts tremendously in the process and crystallized where we need to focus to get results.”
