MIT Latest News

MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.

Angela Belcher named head of the Department of Biological Engineering

Mon, 02/25/2019 - 1:10pm

Angela Belcher, the James Mason Crafts Professor of Biological Engineering and Materials Science and Engineering at MIT, has been named the new head of the Department of Biological Engineering, effective July 1. 

“Professor Belcher is a brilliant researcher who has done remarkable work in biomedical engineering and energy research. She has demonstrated exceptional commitment and vision as an educator. I am thrilled that she will be serving as the new head of our biological engineering department. I know she will serve as an excellent leader, and I very much look forward to working with her,” says Anantha Chandrakasan, dean of the MIT School of Engineering and the Vannevar Bush Professor of Electrical Engineering and Computer Science.

When Belcher takes on her new position, the MIT School of Engineering will have a record high number of women on Engineering Council. They include Paula Hammond, head of the Department of Chemical Engineering; Anette “Peko” Hosoi, associate dean of the School of Engineering; Asu Ozdaglar, head of the Department of Electrical Engineering and Computer Science; and Evelyn Wang, head of the Department of Mechanical Engineering. Four of the School of Engineering’s eight departments will be led by women.

“We are privileged to have remarkable women in leadership roles within the School of Engineering to help advance their departments and engineering at large,” Chandrakasan says.

A faculty member in the departments of Biological Engineering and Materials Science and Engineering, as well as at the Koch Institute for Integrative Cancer Research, Belcher is a biological and materials engineer with expertise in the fields of biomaterials, biomolecular materials, organic-inorganic interfaces, and solid-state chemistry and devices.

She succeeds Doug Lauffenburger, a professor of bioengineering who led biological engineering from its start as the Division of Bioengineering and Environmental Health in 1998.

“Professor Belcher has demonstrated remarkable success in teaching our students. Her high teaching scores and positive student evaluations are a credit to her passion for imparting knowledge and commitment in a rigorous while engaging style,” Lauffenburger says.

Belcher’s research focuses on harnessing nature’s processes to design technologically important materials and devices for energy, the environment, and medicine. She and members of her research group study how ancient organisms evolved to make exquisite nanostructures such as shells and glassy diatoms. Using only non-toxic materials, they employ directed evolution to engineer organisms that grow and assemble novel hybrid organic-inorganic electronic, magnetic, and catalytic materials, yielding technologies as varied as solar cells, batteries, medical diagnostics, and probes of basic single-molecule interactions related to disease. Their work in the past year has focused on the development of a real-time, surgically guided imaging system for ovarian cancer, which has demonstrated a 40 percent increase in life span in mouse models. Their focus now is on developing real-time, non-invasive, deep imaging systems to find, remove, and treat sub-millimeter tumors and further extend life span for patients with many types of cancer.

Belcher was awarded a Presidential Early Career Award for Scientists and Engineers (PECASE) in 2000, a Packard Fellowship for Science and Engineering and a Sloan Research Fellowship in 2001, and was named a MacArthur Fellow in 2004. She was named a fellow of the American Academy of Arts and Sciences in 2012, won the Lemelson-MIT Prize in 2013, was named a fellow of the National Academy of Inventors in 2015, and was elected to the National Academy of Engineering in 2018. In 2009 Rolling Stone magazine listed Belcher as one of the top 100 people changing the country, and in 2007 TIME magazine named her a “hero” for her research related to climate change. She has also received the Four Star General Recognition Award (U.S. Army) for significant contribution to army transformation and is a past winner of the Eni Prize for Renewable and Non-Conventional Energy.

Belcher has 40 patents; in 2002 she cofounded Cambrios Technologies, and in 2007 she founded Siluria Technologies. She obtained her bachelor’s degree in Creative Studies in 1991 and her PhD in inorganic chemistry in 1997 at the University of California at Santa Barbara.  

Scott Manalis, the Andrew and Erna Viterbi Professor in the MIT departments of Biological Engineering and Mechanical Engineering and a member of MIT’s Koch Institute for Integrative Cancer Research, will support Belcher as associate department head.

An easier way to engineer plants

Mon, 02/25/2019 - 10:59am

MIT researchers have developed a new genetic tool that could make it easier to engineer plants that can survive drought or resist fungal infections. Their technique, which uses nanoparticles to deliver genes into the chloroplasts of plant cells, works with many different plant species, including spinach and other vegetables.

This new strategy could help plant biologists to overcome the difficulties involved in genetically modifying plants, which is now a complex, time-consuming process that has to be customized to the specific plant species that is being altered.

“This is a universal mechanism that works across plant species,” says Michael Strano, the Carbon P. Dubbs Professor of Chemical Engineering at MIT, about the new method.

Strano and Nam-Hai Chua, the deputy chair of the Temasek Life Sciences Laboratory at the National University of Singapore and a professor emeritus at Rockefeller University, are the senior authors of the study, which appears in the Feb. 25 issue of Nature Nanotechnology.

“This is an important first step toward chloroplast transformation,” Chua says. “This technique can be used for rapid screening of candidate genes for chloroplast expression in a wide variety of crop plants.”

This study is the first to emerge from the recently launched Singapore-MIT Alliance for Research and Technology (SMART) program in Disruptive and Sustainable Technologies for Agricultural Precision (DiSTAP), which is headed by Strano and Chua. The lead authors of the study are former MIT postdoc Seon-Yeong Kwak, who is now the scientific director of the DiSTAP program, and MIT graduate student Tedrick Thomas Salim Lew. The research team included scientists from Yield10 Bioscience.

Targeting chloroplasts

A few years ago, Strano and his colleagues discovered that by tuning the size and electrical charge of nanoparticles, they could design the nanoparticles to penetrate plant cell membranes. This mechanism, called lipid exchange envelope penetration (LEEP), allowed them to create plants that glow, by embedding nanoparticles carrying luciferase, a light-emitting protein, into their leaves.

As soon as the MIT team reported using LEEP to get nanoparticles into plants, plant biologists began asking if it could be used to genetically engineer plants, and more specifically, to get genes into chloroplasts. Plant cells have dozens of chloroplasts, so inducing the chloroplasts (instead of just the nucleus) to express genes could be a way to generate much greater quantities of a desired protein.

“Bringing genetic tools to different parts of the plant is something that plant biologists are very interested in,” Strano says. “Every time I give a talk to a plant biology community, they ask if you could use this technique to deliver genes to the chloroplast.”

The chloroplast, best known as the site of photosynthesis, contains about 80 genes, which code for proteins required to perform photosynthesis. The chloroplast also has its own ribosomes, allowing it to assemble proteins within the chloroplast. Until now, it has been very difficult for scientists to get genes into the chloroplast: The only existing technique requires using a high-pressure “gene gun” to force genes into the cells, which can damage the plant and is not very efficient.

Using their new strategy, the MIT team created nanoparticles consisting of carbon nanotubes wrapped in chitosan, a naturally occurring sugar. DNA, which is negatively charged, binds loosely to the positively charged carbon nanotubes. To get the nanoparticles into plant leaves, the researchers apply a needleless syringe filled with the particle solution to the lower side of the leaf surface. Particles enter the leaf through tiny pores called stomata, which normally control water evaporation.

Once inside the leaf, the nanoparticles pass through the plant cell wall, cell membranes, and then the double membranes of the chloroplast. After the particles get inside the chloroplast, its slightly less acidic environment causes the DNA to be released from the nanoparticles. Once freed, the DNA can be expressed, producing proteins.

In this study, the researchers delivered a gene for yellow fluorescent protein, allowing them to easily visualize which plant cells expressed the protein. They found that about 47 percent of the plant cells produced the protein, but they believe that could be increased if they could deliver more particles.

“The approach reported here certainly opens new research avenues in chloroplast-selective gene delivery for transgene expression in plants, as shown here for several mature non-model species,” says Sanjay Swarup, an associate professor of biological sciences at the National University of Singapore, who was not involved in the research.

More resilient plants

A major advantage of this approach is that it can be used across many plant species. In this study, the researchers tested it in spinach, watercress, tobacco, arugula, and Arabidopsis thaliana, a type of plant commonly used in research. They also showed that the technique is not limited to carbon nanotubes and can potentially be extended to other types of nanomaterials.

The researchers hope that this new tool will allow plant biologists to more easily engineer a variety of desirable traits into vegetables and crops. For example, agricultural researchers in Singapore and elsewhere are interested in creating leafy vegetables and crops that can grow at higher densities, for urban farming. Other possibilities include creating drought-resistant crops; engineering crops such as bananas, citrus, and coffee to be resistant to fungal infections that threaten to wipe them out; and modifying rice so that it doesn’t take up arsenic from groundwater.

Because the engineered genes are carried only in the chloroplasts, which are inherited maternally, they can be passed to offspring but can’t be transferred to other plant species.

“That’s a big advantage, because if the pollen has a genetic modification, it can spread to weeds and you can make weeds that are resistant to herbicides and pesticides. Because the chloroplast is passed on maternally, it’s not passed through the pollen and there’s a higher level of gene containment,” Lew says.

The research was funded by the National Research Foundation of Singapore and the Singapore-MIT Alliance for Research and Technology Center.

Committed to service and science

Sun, 02/24/2019 - 11:59pm

Julia Ginder has to avoid a lot of foods due to allergies. From a young age, she got used to bringing her own snacks to birthday parties and group outings. But she didn’t really know the science behind her allergies until high school, when she read a chapter for class on immunology.

“I read it, and then I read it again, and I went running downstairs to tell my mom, ‘This is what’s wrong with me!’” she recalls.

From then on, Ginder was driven to learn about what made her body react so severely to certain stimuli. Now a biology major, she does research in the lab of Christopher Love, in the Koch Institute for Integrative Cancer Research, where she studies peanut allergies — one of the few food allergies she actually doesn’t have.

“I really enjoy figuring out, what’s the perspective from the biology side? What is the contributing chemistry? And how do those fit together?” she says. “And then, when you take a step back, how do you use that knowledge and perhaps the technology that comes out of it, and actually apply that in the real world?”

Nuts about research

In the Love lab, researchers look at how individual immune cells from people with peanut allergies react when stimulated with peanut extracts. More recently, they’ve been analyzing how the stimulated cells change over the course of treatment, evolving from one state to the next.

“You can watch the activation signals change over time in individual cells from peanut-allergic patients compared to healthy ones,” Ginder explains. “You can then dig deeper and look at distinct populations of cells at a single time point. With all of this information, you can start to get a sense of what critical cell types and signals are making the allergic person maintain a reaction.”

The researchers aim to figure out which cell types are associated with the development of tolerance so that more effective treatments can be developed. For instance, allergic people are sometimes given peanuts in small doses as a sort of biological exposure therapy, but perhaps if more key cell states are identified, targeted drug treatments can be added on top of that to induce those cell states.

Further pursuing her interest in health, Ginder spent the Independent Activities Period of her sophomore year volunteering for Boston Medical Center. The program she worked for helped families learn how to be advocates for their children with autism. For instance, it provided guidance on how to negotiate an appropriate accommodations agreement with their child’s school for their individual needs.

“It [the BMC experience] made it clear to me that for a child to succeed, they need to have support from both the educational side and the health side,” she says. “And it might seem obvious, but, especially for a child who might be coming from a less privileged background, those are two really important angles for ensuring that they are given the opportunity to reach their full potential.”

“The most helpful thing you can do is simply be there.”

Ginder became a swim coach and tutor for Amphibious Achievement in the fall of her first year, almost immediately after arriving at MIT. It’s a program that aims to help high schoolers reach both their athletic and academic goals. The high schoolers, often known as Achievers, are assigned a mentor like Ginder who helps with the academic and the athletic activities.

Local students come to MIT early Sunday morning to practice swimming or rowing, head to the Maseeh dining hall for lunch, participate in an afternoon academics lesson, reflect on their goals, and then spend half an hour one-on-one with their mentor. It’s a big commitment for both the Achievers and mentors to spend almost six hours every Sunday with the program, but Ginder, who completed her two-year term as one of the co-executive directors this fall, has seen the importance of showing up week after week.

“The most helpful thing you can do is simply be there. Listen if they want to tell you anything, but really just being consistent — every single Sunday, being there.”

Ginder played on the field hockey team during her first year. However, when a practice during her sophomore year left her with a concussion and unable to play, she used the newfound spare time to start volunteering for Camp Kesem (CK). Having really enjoyed her experience at Amphibious Achievement, she was eager to be a counselor for the camp, which serves children whose parents are affected by cancer.

“Being there for someone, whether they are having a tough time or a great day, is really important to me. I felt that CK really aligned with that value I hold, and I hoped to meet even more people at MIT who felt that way. And so I joined, and I’ve loved it,” she says.

Management and moving west

Eventually, Ginder would like to become a physician, possibly in the fields of pediatrics and allergies. However, with a minor in public policy, she’s interested in developing areas outside of science as well. So, for the next couple of years, she’ll be moving westward to work as an associate consultant for Bain and Company in San Francisco.

“The reason I’m most interested in consulting is that there is this strong culture of learning and feedback. I want to improve my ability to be a strong team member, leader, and persuader. I think these are areas where I can continue to grow a lot,” she says. “It may sound silly, but I think for me, as someone who is 5’2” and hoping to become a pediatrician, it’s important to cultivate those professional skills early. I want to also serve as a leader and advocate outside of the clinic.”

As Ginder admits, the move is quite the geographic leap. Right now, her entire family lives between a 20-minute and a two-hour drive away. Moving to the opposite side of the country will be difficult, but she isn’t one to shy away from a challenge.

“I think it’ll be a bit sad because I’m not going to be as close to my family, but I think that it’ll really push me to be as independent as possible. I’ll need to look for my own opportunities, meet new people, build my network, and be my own person,” she says. “I’m really excited about that.”

Helping women in chemical engineering navigate academic careers

Fri, 02/22/2019 - 1:40pm

On a blustery day at MIT, 22 female graduate students and postdocs from around the country converged to gain insight into the world of chemical engineering academia. Nominated by department heads and professors in leading chemical engineering departments around the country, they represented the top early-career women in their field.

The Rising Stars in Chemical Engineering program was based on other successful Rising Stars programs in the School of Engineering and the School of Science, and for two days, participants networked, presented research, and learned best practices to become successful professors of chemical engineering. 

“The ChemE Rising Stars program was very helpful for me as someone looking to become a successful professor in the field of chemical engineering,” said attendee Molly Kozminsky, currently a postdoc at the University of California at Berkeley. “The program addressed the multiple components of the interview process and was particularly helpful in demystifying the chalk talk.” 

“The program also addressed steps we can take as young faculty members to set ourselves up for success,” Kozminsky said, adding that it was “incredibly positive and supportive, and it increased my confidence in my ability to succeed on the academic job market.”

Karen Gleason, the Alexander and I. Michael Kasser (1960) Professor at MIT and head of the workshop’s steering committee, said the goal of the Rising Stars in Chemical Engineering program is to “bring together the next generation of leaders in the field and help prepare them for careers in academia.”

“We aim to help strengthen the academic pipeline for women in chemical engineering, and provide opportunities for them to develop their own network of peers as they decide their own next steps,” Gleason said.

During the two-day event, participants attended workshops, met individually with MIT faculty, presented their own research with feedback, and learned strategies for job searching, building a career, balancing family and research, and thriving as a chemical engineering professor.

Professor Malancha Gupta SM ’05 PhD ’07, one of the speakers during the workshop, was impressed by the caliber of the cohort.

“The attendees were very talented and ambitious,” she recalled. “The networking lunches and dinners were full of fantastic conversations about ways to make a more inclusive chemical engineering community. I am confident that the attendees will become successful leaders in academia, industry, and national labs. I look forward to crossing paths with them in the future.”

Maggie Qi, another attendee and current postdoc at Harvard University, said the personal stories the female professors shared were a condensed version of “what you could get from a female mentor over one or two years. How to keep work-life balance, how to plan maternity leave, etc., specifically as a chemical engineering professor, are things that you can rarely learn in many other schools’ departments when very few people know the answer.”

An important point that many speakers touched on was overcoming self-doubt.

“The advice that Karen Gleason and Paula Hammond gave, that ‘women tend to preselect and stop themselves from applying. Do not preselect yourself,’ is something that I will always remember,” Qi said. “This offers me a lot of confidence to apply to schools that I’m interested in in the future.”

Fellow attendee Amber Hubbard, a graduate student at North Carolina State University, agreed that it was enlightening for speakers to emphasize “that no candidate should limit their own opportunities.”

“In other words, as you begin the search for faculty positions, the universities to which you apply are going to critique you and your research, but there is no need to do some of this work for them, ” Hubbard said. “Include yourself in the conversation and don't limit which positions you apply for based on your own perceptions and insecurities.”

During the workshop, the group was broken down into smaller units, where each attendee presented a short lecture on her own research, called a “chalk talk,” to MIT faculty members. Each presentation was followed by immediate feedback and recommendations.

“Being able to give a brief presentation about my research to faculty and other Rising Stars helped me think about my research in the context of presenting to a broader academic audience and about how I plan to develop projects for the future,” Kozminsky said.

Attendees said the tone of the first Rising Stars in Chemical Engineering workshop was not only educational, but also hopeful.

“As was mentioned throughout the program, and particularly by Dr. Karen Gleason, I do believe that the academic opportunities for women in engineering have greatly improved over the years,” Hubbard recounted. “It was wonderful having the chance to talk to such inspiring professors not only about the opportunities for women in engineering but also about the state of engineering and STEM fields in our society as a whole. These professors provided so much wisdom, advice, and encouragement about the future of our field and the potential each one of us has to make a lasting impression wherever we end up in our careers. I certainly walked away from this experience excited and inspired about both chemical engineering and academia.”

In addition to Gleason, the event’s steering committee included Klavs F. Jensen, the Warren K. Lewis Professor of Chemical Engineering and former department head, and Paula T. Hammond, the David H. Koch (1962) Professor of Engineering and head of the MIT Department of Chemical Engineering.

3 Questions: Josh Moss on tackling urban pollution

Fri, 02/22/2019 - 1:20pm

Josh Moss is a PhD student in the lab of Professor Jesse Kroll, where he studies atmospheric chemistry, examining how the gases and particles that humans release interact with particles already in the atmosphere. He focuses on the organic chemical reactions in the atmosphere that contribute significantly to smog formation. In the laboratory, he uses a controlled atmospheric chamber to conduct physical experiments on the gas-phase reactions that give rise to smog particles. Moss also works on computer models that generate and predict chemical reactions. His research concerns chemicals commonly found in large cities such as Los Angeles, Houston, and Mexico City, and he is interested in the implications these small particles have for human health and climate change. 

Q: What are the real-world implications of your research?

A: Primarily, much of what I study is related to urban pollution. My work is currently centered around understanding the impacts that gasoline, car emissions, and power-plant emissions have on smog formation, and what impacts they may have on the future of the environment. Due to the complexity of the atmosphere, it is difficult to break down all of the chemical reactions leading to smog formation, which is why this has become the focus of our research now. 

I deal mostly with urban chemicals because they have generally been researched less than biologically emitted chemicals, and urban smog has adverse effects on human health in densely populated cities. In terms of human health, small smog particles are generally harmful to inhale because they can lead to various diseases such as heart failure, stroke, lung disease, and certain types of cancer. The largest source of uncertainty in global climate models is in these small particles. Right now, we are unsure exactly to what degree the particles are affecting the Earth’s temperature and climate. What we do know is that some particles can scatter sunlight, which cools the Earth. On the other hand, darker particles absorb sunlight and can actually warm the Earth. In addition to this, many particles lead to cloud formation, which contributes to both the cooling and warming of the Earth as well. 

Humans are increasing the concentration of particles in the atmosphere on a regular basis. For instance, tiny particles can originate directly from burning, or they can form from chemicals that have reacted in the atmosphere, a source known as secondary organic aerosol. Over the course of their reactions, these chemicals tend to stick together with other chemicals, so even though they are not emitted as particles, the reactions they undergo cause particles to form. Understanding secondary organic aerosol is really the core of my research. For example, if you look at a photo of LA, the smog formation over the city is extremely visible because the city has an abundance of people in a concentrated area with countless cars. The combination of gas emissions together with the warm, sunny weather creates the perfect conditions to form a great deal of smog particles. This is what I am really interested in when it comes to my research. 

Q: What opportunities have you had to delve deeper into your research?

A: I was offered the opportunity to go to Paris last summer, which has led into the next exciting phase of my research, computer modeling. We are collaborating with a lab in Paris that has developed a unique software tool called GECKO-A, which can predict chemical reactions in the atmosphere, giving me a new avenue to pursue in my research. Professor Kroll wrote a grant with the lab in Paris, funded by MIT International Science and Technology Initiatives, that enabled me to travel to France for almost a month in order to learn how to use the software. The software is very complex, relying on quantum chemistry knowledge to predict reactions. Jesse and I are excited for what this can tell us about the atmosphere that experiments cannot. 

The atmosphere is arguably the most complex chemical system on Earth, which makes it incredibly difficult to study. After several hours of reaction, a single chemical species can transform into millions of different chemicals. Even though we perform experiments in a controlled atmospheric chamber in our lab, it is impossible to measure and quantify every chemical that is generated during a reaction sequence. In order to dive further into my research, Jesse and I think the best course of action is to compare our experimental results to the model simulation results to improve both data sets. The models can give us detailed insight into the different chemical pathways related to smog formation, and the experimental data can serve to ground the model results in our observable reality.

Q: What is the next step for you?

A: I am still working to finish my thesis; however, my long-term goals include becoming a professor. I love teaching and conducting research, so pursuing a career as a professor is a perfect fit for me. I have had several opportunities to TA classes here at MIT, including the Traveling Research Environmental Experience class (TREX), where I went to Hawaii with undergraduates to study volcanic emissions. TREX was one of the most fulfilling teaching experiences, and I hope to carry the excitement and joy I felt from TREX into all of my future teaching endeavors. 

More recently, I have been mulling over a few other potential career paths. I am interested in environmental law and public policy because it would allow me to apply my research and knowledge to help shape the policies that protect our environment. I am very passionate about politics, and I have been concerned by the United States’ decreasing leadership role on the global stage, specifically in issues regarding climate change. I believe that scientists should take a more direct role in shaping critical policies, and I would be happy to contribute in any way that I can. My main passion lies in educating and informing people about the difficult and often highly nuanced environmental challenges that we face. I’ve given several public talks in the Boston area and hosted a variety of classes for middle and high school students, work that I think is of vital importance for our collective future. I believe that the path to improving our environment, and more broadly our world, lies in education. If I can communicate my enthusiasm for environmental science and chemistry to others, I will consider it a job well done.

Workshop explores national security repercussions of climate change

Fri, 02/22/2019 - 9:00am

Scientists can, to varying degrees of accuracy, model the climate. They can predict the rate at which greenhouse gas emissions grow, sea levels rise, and ocean temperatures warm. It is also possible to predict the direct impacts climate change will have on infrastructure, including U.S. military bases. But modeling how societies will react to these climate-driven changes is arguably much harder, and was the central problem at a recent Climate Change and National Security Workshop.

The workshop, jointly organized by MIT Lincoln Laboratory and the MIT Department of Civil and Environmental Engineering and hosted at MIT, brought together science and policy experts from campus, the U.S. Geological Survey, the World Bank, the U.S. Agency for International Development (USAID), and several other organizations to discuss how to predict social and political conflicts that may be caused or exacerbated by the impacts of climate change.

"What do we think those impacts of climate change will be, where and who will be affected, and will those impacts cause conflicts and problems? These questions prompt the kind of analysis that we think is needed," said Edward Wack, who led the workshop and is the assistant head of Lincoln Laboratory's Homeland Protection and Air Traffic Control divisions.

The workshop resulted from a climate change study that Lincoln Laboratory conducted last year. The study's purpose was to develop the laboratory’s strategy to understand, predict, mitigate, and adapt to climate change in ways that connect with the laboratory's national security role. The federally funded research and development center's mission is to develop advanced technology that meets national security needs.

The U.S. Department of Defense (DoD) has weighed the national security implications of climate change for decades. In its 2014 Climate Change Adaptation Roadmap, the DoD declared that "climate change will affect the Department of Defense's ability to defend the nation and poses immediate risks to U.S. national security." Among the reasons that climate change is a risk is its role as a "threat multiplier." That is, it can significantly add to problems of instability — food and water shortages, diseases, economic insecurity, mass migration — that could boil over into conflict. In its most recent climate report, the DoD also assessed the direct impacts climate change poses to its installations over the next 20 years, and revealed that many of its installations are already facing climate change-related risks, including recurrent flooding at 15 bases, drought exposure at 43 bases, and wildfire risk to 36 bases.

"In general, climate change doesn't, on its own, cause conflict, but it makes bad situations worse, especially where local institutions and governments are fragile or lack the capacity to meet existing challenges," said John Conger, who is the director of the nonpartisan Center for Climate and Security and formerly the principal deputy under secretary of defense (comptroller) at the DoD. He was one of several presenters at the workshop.

"The DoD monitors the security situation globally and has often deployed to regions that are embroiled in conflict or have endured natural disasters. Broadly, the DoD will better be able to anticipate where those requirements will emerge if it incorporates climate change into its global calculus," he said.

Anticipating these requirements will depend partly on regional climate models. MIT Professor Elfatih Eltahir of the Department of Civil and Environmental Engineering is developing such models. His research seeks to simulate the climate of different regions around the world in order to understand how climate change and human activity may impact water availability, extreme weather, and the spread of diseases in those areas. He and his students have developed sophisticated numerical models, such as the MIT Regional Climate Model, that are used for predicting such impacts at regional scales.

"Regional climate models are designed with the skill of simulating climate processes at regional and local scales — 100 kilometers to 10 kilometers," Eltahir said. "With such a high resolution, these sophisticated models are capable of resolving the impact of climate change on variables that are important for society."

At the workshop, Eltahir shared his modeling results, such as the high likelihood of severe heat stress impacting regions of South Asia — in particular, heat waves with wet bulb temperatures (the lowest temperature to which air can be cooled through evaporation of water into it) predicted to be higher than any recorded in history and in the deadly range. These modeled conditions may be predictors of mass migration as populated areas become uninhabitable.

Adam Schlosser, a senior research scientist in the MIT Center for Global Change Science and deputy director of the MIT Joint Program on the Science and Policy of Global Change, discussed the Joint Program's analysis of water stress in India and China, showing predictions of serious problems by mid-century. He then compared a variety of possible interventions. One intervention would be reducing greenhouse gas emissions, which their analysis shows makes a modest but noticeable reduction in water stress. Other actions directly aimed at conserving water, such as lining irrigation canals with concrete and irrigating more efficiently, show a bigger impact.

These types of intervention analyses may help policy makers confront future climate-related decisions. Schlosser also presented the Joint Program's work coupling economic models with global climate models. This research aims to delineate the economic and climatic impacts of different policy decisions.

"Dr. Schlosser's talk highlighted that climate change is coupled to a variety of other environmental issues, and it isn't always clear how to compare climate mitigation, or reducing the emission of greenhouse gases, with direct attacks on or adaptations to specific problems," said Deborah Campbell, an associate technology officer at Lincoln Laboratory. "Mitigation may cover more problems and be better in the long run, but adaption to a particular environment problem may be more efficient at solving a short-term problem."

When it comes to adaptation, technology can have an immediate impact. Dave Harden, who previously worked for the USAID Bureau for Democracy, Conflict, and Humanitarian Assistance under the administration of President Barack Obama, pointed to the use of desalination technology to solve low water-supply problems in the West Bank and Gaza and its impact on easing potential conflict over the issue between Israelis and Palestinians. 

For Lincoln Laboratory, modeling climate-related scenarios is the first step in figuring out what technology will be needed for adapting to new challenges. Last year, researchers from the laboratory and the Joint Program published a study that found that the expected lifetime of power transformers will be reduced by up to 40 percent as a result of an increasing number of hot days in the United States. Putting monitoring systems in place today, for example, could help energy providers plan for these replacements before widespread grid disruptions occur.

Eltahir also added that technological solutions in the form of new sensors or satellite technologies, such as those developed at Lincoln Laboratory, could provide new, high-resolution data about land surface conditions and atmospheric conditions. Such data would help in developing more accurate regional climate models.

Wack saw the laboratory and MIT continuing to partner on research that investigates how climate change will impact our lives and what role technology can play in avoiding bad outcomes at home and globally. "Climate change poses a real threat to our national security and will require our nation’s best expertise to get out ahead of, and solve, these challenges," he said. "We’re excited to join with MIT campus to develop the advanced technologies needed to protect the nation."

New MRI sensor can image activity deep within the brain

Fri, 02/22/2019 - 5:00am

Calcium is a critical signaling molecule for most cells, and it is especially important in neurons. Imaging calcium in brain cells can reveal how neurons communicate with each other; however, current imaging techniques can only penetrate a few millimeters into the brain.

MIT researchers have now devised a new way to image calcium activity that is based on magnetic resonance imaging (MRI) and allows them to peer much deeper into the brain. Using this technique, they can track signaling processes inside the neurons of living animals, enabling them to link neural activity with specific behaviors.

“This paper describes the first MRI-based detection of intracellular calcium signaling, which is directly analogous to powerful optical approaches used widely in neuroscience but now enables such measurements to be performed in vivo in deep tissue,” says Alan Jasanoff, an MIT professor of biological engineering, brain and cognitive sciences, and nuclear science and engineering, and an associate member of MIT’s McGovern Institute for Brain Research.

Jasanoff is the senior author of the paper, which appears in the Feb. 22 issue of Nature Communications. MIT postdocs Ali Barandov and Benjamin Bartelle are the paper’s lead authors. MIT senior Catherine Williamson, recent MIT graduate Emily Loucks, and Arthur Amos Noyes Professor Emeritus of Chemistry Stephen Lippard are also authors of the study.

Getting into cells

In their resting state, neurons have very low calcium levels. However, when they fire an electrical impulse, calcium floods into the cell. Over the past several decades, scientists have devised ways to image this activity by labeling calcium with fluorescent molecules. This can be done in cells grown in a lab dish, or in the brains of living animals, but this kind of microscopy imaging can only penetrate a few tenths of a millimeter into the tissue, limiting most studies to the surface of the brain.

“There are amazing things being done with these tools, but we wanted something that would allow ourselves and others to look deeper at cellular-level signaling,” Jasanoff says.

To achieve that, the MIT team turned to MRI, a noninvasive technique that works by detecting magnetic interactions between an injected contrast agent and water molecules inside cells. 

Many scientists have been working on MRI-based calcium sensors, but the major obstacle has been developing a contrast agent that can get inside brain cells. Last year, Jasanoff’s lab developed an MRI sensor that can measure extracellular calcium concentrations, but it was based on nanoparticles too large to enter cells.

To create their new intracellular calcium sensors, the researchers used building blocks that can pass through the cell membrane. The contrast agent contains manganese, a metal that interacts weakly with magnetic fields, bound to an organic compound that can penetrate cell membranes. This complex also contains a calcium-binding arm called a chelator.

Once inside the cell, if calcium levels are low, the calcium chelator binds weakly to the manganese atom, shielding the manganese from MRI detection. When calcium flows into the cell, the chelator binds to the calcium and releases the manganese, which makes the contrast agent appear brighter in an MRI image.

“When neurons, or other brain cells called glia, become stimulated, they often experience more than tenfold increases in calcium concentration. Our sensor can detect those changes,” Jasanoff says.

Precise measurements

The researchers tested their sensor in rats by injecting it into the striatum, a region deep within the brain that is involved in planning movement and learning new behaviors. They then used potassium ions to stimulate electrical activity in neurons of the striatum, and were able to measure the calcium response in those cells.

Jasanoff hopes to use this technique to identify small clusters of neurons that are involved in specific behaviors or actions. Because this method directly measures signaling within cells, it can offer much more precise information about the location and timing of neuron activity than traditional functional MRI (fMRI), which measures blood flow in the brain.

“This could be useful for figuring out how different structures in the brain work together to process stimuli or coordinate behavior,” he says.

In addition, this technique could be used to image calcium as it performs many other roles, such as facilitating the activation of immune cells. With further modification, it could also one day be used to perform diagnostic imaging of the brain or other organs whose functions rely on calcium, such as the heart.

The research was funded by the National Institutes of Health and the MIT Simons Center for the Social Brain.

Physicists calculate proton’s pressure distribution for first time

Fri, 02/22/2019 - 12:00am

Neutron stars are among the densest known objects in the universe, withstanding pressures so great that one teaspoon of a star’s material would equal about 15 times the weight of the moon. Yet as it turns out, protons — the fundamental particles that make up most of the visible matter in the universe — contain even higher pressures.

For the first time, MIT physicists have calculated a proton’s pressure distribution, and found that the particle contains a highly pressurized core that, at its most intense point, is generating greater pressures than are found inside a neutron star.

This core pushes out from the proton’s center, while the surrounding region pushes inward. (Imagine a baseball attempting to expand inside a soccer ball that is collapsing.) The competing pressures act to stabilize the proton’s overall structure.
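One compact way this balance is often expressed in the hadron-structure literature (an illustrative aside based on the standard stability, or von Laue, condition; it is not quoted from the MIT paper) is that the radial pressure profile must integrate to zero, so the outward push near the center exactly cancels the inward pull of the confining region:

```latex
% Stability (von Laue) condition for a mechanically bound particle such as the
% proton (standard textbook form, shown here only for illustration).
% p(r) is the pressure at distance r from the proton's center.
\int_0^{\infty} r^{2}\, p(r)\, \mathrm{d}r \;=\; 0
```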

The physicists’ results, published today in Physical Review Letters, represent the first time that scientists have calculated a proton’s pressure distribution by taking into account the contributions of both quarks and gluons, the proton’s fundamental, subatomic constituents.

“Pressure is a fundamental aspect of the proton that we know very little about at the moment,” says lead author Phiala Shanahan, assistant professor of physics at MIT. “Now we’ve found that quarks and gluons in the center of the proton are generating significant outward pressure, and further to the edges, there’s a confining pressure. With this result, we’re driving toward  a complete picture of the proton’s structure.”

Shanahan carried out the study with co-author William Detmold, associate professor of physics at MIT. Both are researchers in the Laboratory for Nuclear Science.

Remarkable quarks

In May 2018, physicists at the U.S. Department of Energy’s Thomas Jefferson National Accelerator Facility announced that they had measured the proton’s pressure distribution for the first time, using a beam of electrons that they fired at a target made of hydrogen. The electrons interacted with quarks inside the protons in the target. The physicists then determined the pressure distribution throughout the proton, based on the way in which the electrons scattered from the target. Their results showed a high-pressure center in the proton that at its point of highest pressure measured about 10^35 pascals, or 10 times the pressure inside a neutron star.

However, Shanahan says their picture of the proton’s pressure was incomplete.

“They found a pretty remarkable result,” Shanahan says. “But that result was subject to a number of important assumptions that were necessary because of our incomplete understanding.”

Specifically, the researchers based their pressure estimates on the interactions of a proton’s quarks, but not its gluons. Protons consist of both quarks and gluons, which continuously interact in a dynamic and fluctuating way inside the proton. The Jefferson Lab team was only able to determine the contributions of quarks with its detector, which Shanahan says leaves out a large part of a proton’s pressure contribution.

“Over the last 60 years, we’ve built up quite a good understanding of the role of quarks in the structure of the proton,” she says. “But gluon structure is far, far harder to understand since it is notoriously difficult to measure or calculate.”

A gluon shift

Instead of measuring a proton’s pressure using particle accelerators, Shanahan and Detmold looked to include gluons’ role by using supercomputers to calculate the interactions between quarks and gluons that contribute to a proton’s pressure.

“Inside a proton, there’s a bubbling quantum vacuum of pairs of quarks and antiquarks, as well as gluons, appearing and disappearing,” Shanahan says. “Our calculations include all of these dynamical fluctuations.”

To do this, the team employed a technique in physics known as lattice QCD, for quantum chromodynamics, which is a set of equations that describes the strong force, one of the three fundamental forces of the Standard Model of particle physics. (The other two are the weak and electromagnetic force.) The strong force is what binds quarks and gluons to ultimately make a proton.

Lattice QCD calculations use a four-dimensional grid, or lattice, of points to represent the three dimensions of space and one of time. The researchers calculated the pressure inside the proton using the equations of quantum chromodynamics defined on the lattice.
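As a rough sketch of what “defined on the lattice” means (the standard textbook setup, not the specific discretization or parameters used in this study), spacetime is replaced by a finite four-dimensional grid of points with spacing a, and physical results are recovered by extrapolating to vanishing spacing and infinite volume:

```latex
% Illustrative lattice discretization (generic textbook form, not the paper's
% exact scheme): each spacetime point is restricted to a 4D grid of spacing a,
x_{\mu} = a\, n_{\mu}, \qquad n_{\mu} = 0, 1, \dots, N_{\mu} - 1, \qquad \mu = 1, \dots, 4,
% and continuum physics is recovered in the limits a \to 0 and N_{\mu} \to \infty.
```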

“It’s hugely computationally demanding, so we use the most powerful supercomputers in the world to do these calculations,” Shanahan explains.

The team spent about 18 months running various configurations of quarks and gluons through several different supercomputers, then determined the average pressure at each point from the center of the proton, out to its edge.

Compared with the Jefferson Lab results, Shanahan and Detmold found that, by including the contribution of gluons, the distribution of pressure in the proton shifted significantly.

“We’ve looked at the gluon contribution to the pressure distribution for the first time, and we can really see that relative to the previous results the peak has become stronger, and the pressure distribution extends further from the center of the proton,” Shanahan says.

In other words, it appears that the highest pressure in the proton is around 10^35 pascals, or 10 times that of a neutron star, similar to what researchers at Jefferson Lab reported. The surrounding low-pressure region extends farther than previously estimated.

Confirming these new calculations will require much more powerful detectors, such as the Electron-Ion Collider, a proposed particle accelerator that physicists aim to use to probe the inner structures of protons and neutrons, including gluons, in more detail than ever before.

“We’re in the early days of understanding quantitatively the role of gluons in a proton,” Shanahan says. “By combining the experimentally measured quark contribution, with our new calculation of the gluon piece, we have the first complete picture of the proton’s pressure, which is a prediction that can be tested at the new collider in the next 10 years.”

This research was supported, in part, by the National Science Foundation and the U.S. Department of Energy.

Quantum dots can spit out clone-like photons

Thu, 02/21/2019 - 2:05pm

In the global quest to develop practical computing and communications devices based on the principles of quantum physics, one potentially useful component has proved elusive: a source of individual particles of light with perfectly constant, predictable, and steady characteristics. Now, researchers at MIT and in Switzerland say they have made major steps toward such a single photon source.

The study, which involves using a family of materials known as perovskites to make light-emitting particles called quantum dots, appears today in the journal Science. The paper is by MIT chemistry graduate student Hendrik Utzat, professor of chemistry Moungi Bawendi, and nine others at MIT and at ETH Zurich in Switzerland.

The ability to produce individual photons with precisely known and persistent properties, including a wavelength, or color, that does not fluctuate at all, could be useful for many kinds of proposed quantum devices. Because each photon would be indistinguishable from the others in terms of its quantum-mechanical properties, it could be possible, for example, to delay one of them and then get the pair to interact with each other, in a phenomenon called interference.

“This quantum interference between different indistinguishable single photons is the basis of many optical quantum information technologies using single photons as information carriers,” Utzat explains. “But it only works if the photons are coherent, meaning they preserve their quantum states for a sufficiently long time.”
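A standard way this kind of two-photon interference is quantified (a textbook relation included here for illustration, not a result from the Science paper) is the coincidence rate when two single photons meet at a balanced beamsplitter: perfectly indistinguishable, coherent photons never exit through different output ports, while fully distinguishable ones do so half the time.

```latex
% Hong-Ou-Mandel-type coincidence probability at a 50:50 beamsplitter
% (textbook relation, for illustration only). I denotes the indistinguishability
% (wave-packet overlap) of the two photons: I = 1 for perfect "clones,"
% I = 0 for fully distinguishable photons.
P_{\mathrm{coincidence}} \;=\; \tfrac{1}{2}\,\bigl(1 - I\bigr), \qquad 0 \le I \le 1
```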

Many researchers have tried to produce sources that could emit such coherent single photons, but all have had limitations. Random fluctuations in the materials surrounding these emitters tend to change the properties of the photons in unpredictable ways, destroying their coherence. Finding emitter materials that maintain coherence and are also bright and stable is “fundamentally challenging,” Utzat says. That’s because not only the surroundings but even the materials themselves “essentially provide a fluctuating bath that randomly interacts with the electronically excited quantum state and washes out the coherence,” he says.

“Without having a source of coherent single photons, you can’t use any of these quantum effects that are the foundation of optical quantum information manipulation,” says Bawendi, who is the Lester Wolfe Professor of Chemistry. Another important quantum effect that can be harnessed by having coherent photons, he says, is entanglement, in which two photons essentially behave as if they were one, sharing all their properties.

Previous chemically-made colloidal quantum dot materials had impractically short coherence times, but this team found that making the quantum dots from perovskites, a family of materials defined by their crystal structure, produced coherence levels that were more than a thousand times better than previous versions. The coherence properties of these colloidal perovskite quantum dots are now approaching the levels of established emitters, such as atom-like defects in diamond or quantum dots grown by physicists using gas-phase beam epitaxy.

One of the big advantages of perovskites, they found, was that they emit photons very quickly after being stimulated by a laser beam. This high speed could be a crucial characteristic for potential quantum computing applications. They also have very little interaction with their surroundings, greatly improving their coherence properties and stability.

Such coherent photons could also be used for quantum-encrypted communications applications, Bawendi says. A particular kind of entanglement, called polarization entanglement, can be the basis for secure quantum communications that defies attempts at interception.

Now that the team has found these promising properties, the next step is to work on optimizing and improving their performance in order to make them scalable and practical. For one thing, they need to achieve 100 percent indistinguishability in the photons produced. So far, they have reached 20 percent, “which is already very remarkable,” Utzat says, comparable to the coherences reached by other materials, such as atom-like fluorescent defects in diamond, which are established systems that have been worked on much longer.

“Perovskite quantum dots still have a long way to go until they become applicable in real applications,” he says, “but this is a new materials system available for quantum photonics that can now be optimized and potentially integrated with devices.”

It’s a new phenomenon and will require much work to develop to a practical level, the researchers say. “Our study is very fundamental,” Bawendi notes. “However, it’s a big step toward developing a new material platform that is promising.”

The work was supported by the U.S. Department of Energy, the National Science Foundation, and the Swiss Federal Commission for Technology and Innovation.

Achieving greater efficiency for fast data center operations

Thu, 02/21/2019 - 12:41pm

Today’s data centers eat up and waste a good amount of energy responding to user requests as fast as possible, with only a few microseconds delay. A new system by MIT researchers improves the efficiency of high-speed operations by better assigning time-sensitive data processing across central processing unit (CPU) cores and ensuring hardware runs productively.

Data centers operate as distributed networks, with numerous web and mobile applications implemented on a single server. When users send requests to an app, bits of stored data are pulled from hundreds or thousands of services across as many servers. Before sending a response, the app must wait for the slowest service to process the data. This lag time is known as tail latency.
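As a concrete illustration of why operators track the tail rather than the average (a toy sketch with made-up numbers, not taken from the paper), the latency of a fanned-out request is set by its slowest backend call, so the 99th percentile of response times is the figure that matters:

```python
# Toy illustration of tail latency (hypothetical numbers): a request fans out to
# many services, and its response time is gated by the slowest one.
import random

def request_latency_us(num_services: int = 1000) -> float:
    """Latency of one fan-out request = the slowest of its service calls (microseconds)."""
    per_service = [random.gauss(100, 20) for _ in range(num_services)]
    return max(per_service)

def percentile(values, p):
    """Return the p-th percentile of a list of values."""
    ordered = sorted(values)
    index = min(len(ordered) - 1, int(p / 100 * len(ordered)))
    return ordered[index]

latencies = [request_latency_us() for _ in range(10_000)]
print("median (p50):", round(percentile(latencies, 50), 1), "us")
print("tail   (p99):", round(percentile(latencies, 99), 1), "us")
```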

Current methods to reduce tail latencies leave tons of CPU cores in a server open to quickly handle incoming requests. But this means that cores sit idly for much of the time, while servers continue using energy just to stay powered on. Data centers can contain hundreds of thousands of servers, so even small improvements in each server’s efficiency can save millions of dollars.

Alternatively, some systems reallocate cores across apps based on workload. But this occurs over milliseconds — around one-thousandth the desired speed for today’s fast-paced requests. Waiting too long can also degrade an app’s performance, because any information that’s not processed before an allotted time doesn’t get sent to the user.

In a paper being presented at the USENIX Networked Systems Design and Implementation conference next week, the researchers developed a faster core-allocating system, called Shenango, that reduces tail latencies, while achieving high efficiencies. First, a novel algorithm detects which apps are struggling to process data. Then, a software component allocates idle cores to handle the app’s workload.

“In data centers, there’s a tradeoff between efficiency and latency, and you really need to reallocate cores at much finer granularity than every millisecond,” says first author Amy Ousterhout, a PhD student in the Computer Science and Artificial Intelligence Laboratory (CSAIL). Shenango lets servers “manage operations that occur at really short time scales and do so efficiently.”

Energy and cost savings will vary by data center, depending on size and workloads. But the overall aim is to improve data center CPU utilization, so that every core is put to good use. The best CPU utilization rates today sit at about 60 percent, but the researchers say their system could potentially boost that figure to 100 percent.

“Data center utilization today is quite low,” says co-author Adam Belay, an assistant professor of electrical engineering and computer science and a CSAIL researcher. “This is a very serious problem [that can’t] be solved in a single place in the data center. But this system is one critical piece in driving utilization up higher.”

Joining Ousterhout and Belay on the paper are Hari Balakrishnan, the Fujitsu Chair Professor in the Department of Electrical Engineering and Computer Science, and CSAIL PhD students Jonathan Behrens and Joshua Fried.

Efficient congestion detection

In a real-world data center, Shenango — algorithm and software — would run on every server, and all the servers would be able to communicate with each other.

The system’s first innovation is a novel congestion-detection algorithm. Every five microseconds, the algorithm checks the data packets queued for processing for each app. If a packet is still waiting from the last observation, the algorithm notes that there’s at least a 5-microsecond delay. It also checks whether any computation processes, called threads, are waiting to be executed. If so, the system considers that app “congested.”

It seems simple enough, but the queue’s structure is important to achieving microsecond-scale congestion detection. Traditionally, the software would have to check the timestamp of each queued-up data packet, which would take too much time.

The researchers implement the queues in efficient structures known as “ring buffers.” These structures can be visualized as different slots around a ring. The first inputted data packet goes into a starting slot. As new data arrive, they’re dropped into subsequent slots around the ring. Usually, these structures are used for first-in-first-out data processing, pulling data from the starting slot and working toward the ending slot.

The researchers’ system, however, only stores data packets briefly in the structures, until an app can process them. In the meantime, the stored packets can be used for congestion checks. The algorithm need only compare two points in the queue — the location of the first packet and where the last packet was five microseconds ago — to determine if packets are encountering a delay.

“You can look at these two points, and track their progress every five microseconds, to see how much data has been processed,” Fried says. Because the structures are simple, “you only have to do this once per core. If you’re looking at 24 cores, you do 24 checks in five microseconds, which scales nicely.”
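To make that two-pointer check concrete, here is a minimal Python sketch of a ring buffer and the congestion test it enables. It is illustrative only (Shenango itself is implemented as low-level systems code), and the class and function names are invented for this example.

    class PacketRing:
        """Toy ring buffer holding an app's incoming packets."""
        def __init__(self, size=1024):
            self.slots = [None] * size
            self.head = 0  # index of the oldest packet the app has not yet processed
            self.tail = 0  # index where the next arriving packet will be written

        def push(self, packet):
            self.slots[self.tail % len(self.slots)] = packet
            self.tail += 1

        def pop(self):
            packet = self.slots[self.head % len(self.slots)]
            self.head += 1
            return packet

    def congested(ring, tail_five_us_ago):
        # A packet enqueued before the previous check (index < tail_five_us_ago)
        # that the app still has not processed has now waited at least 5 microseconds.
        return ring.head < tail_five_us_ago

    # Every 5 microseconds the monitor would record ring.tail, and on the next
    # pass call congested(ring, recorded_tail) for each app's queue.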

Smart allocation

The second innovation is called the IOKernel, the central software hub that steers data packets to appropriate apps. The IOKernel also uses the congestion-detection algorithm to allocate cores to congested apps orders of magnitude more quickly than traditional approaches.

For instance, the IOKernel may see an incoming data packet for a certain app that requires microsecond processing speeds. If the app is congested due to a lack of cores, the IOKernel immediately devotes an idle core to the app. If it also sees another app running cores with less time-sensitive data, it will grab some of those cores and reallocate them to the congested app. The apps themselves also help out: If an app isn’t processing data, it alerts the IOKernel that its cores can be reallocated. Processed data goes back to the IOKernel to send the response.
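A rough sketch of that decision logic might look like the following Python. The App class, its fields, and the donor rule are invented here for illustration; the real IOKernel makes these decisions in native code at microsecond granularity.

    from dataclasses import dataclass, field

    @dataclass
    class App:
        name: str
        time_sensitive: bool
        congested: bool = False
        cores: list = field(default_factory=list)

        def grant(self, core):
            self.cores.append(core)

        def revoke_one(self):
            return self.cores.pop()

    def reallocate_cores(apps, idle_cores):
        """One hypothetical allocation pass, run after each congestion check."""
        for app in apps:
            if not app.congested:
                continue
            if idle_cores:
                app.grant(idle_cores.pop())  # prefer a core that is sitting idle
                continue
            # Otherwise borrow a core from an app doing less time-sensitive work.
            donors = [a for a in apps if a is not app and a.cores and not a.time_sensitive]
            if donors:
                app.grant(donors[0].revoke_one())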

“The IOKernel is concentrating on which apps need cores that don’t have them,” Behrens says. “It’s trying to figure out who’s overloaded and needs more cores, and gives them cores as quickly as possible, so they don’t fall behind and have huge latencies.”

The tight communication between the IOKernel, algorithm, apps, and server hardware is “unique in data centers” and allows Shenango to function seamlessly, Belay says: “The system has global visibility into what’s happening in each server. It sees the hardware providing the packets, what’s running where in each core, and how busy each of the apps are. And it does that at the microsecond scale.”

Next, the researchers are refining Shenango for real-world data center implementation. To do so, they’re ensuring the software can handle a very high data throughput and has appropriate security features.

“Providing low-latency network services is critical to many internet applications. Unfortunately, reducing the latency is very challenging especially when multiple applications compete for shared compute resources,” says KyoungSoo Park, an associate professor of electrical engineering at the Korea Advanced Institute of Science and Technology. “Shenango breaks the conventional wisdom that it is impossible to sustain low latency at a very high request load with a variable response time, and it opens a new system design space that realizes microsecond-scale tail latency with practical network applications.”

Exploring the nature of intelligence

Thu, 02/21/2019 - 12:35pm

Algorithms modeled loosely on the brain have helped artificial intelligence take a giant leap forward in recent years. Those algorithms, in turn, have advanced our understanding of human intelligence while fueling discoveries in a range of other fields. 

MIT founded the Quest for Intelligence to apply new breakthroughs in human intelligence to AI, and use advances in AI to push human intelligence research even further. This fall, nearly 50 undergraduates joined MIT’s human-machine intelligence quest under the Undergraduate Research Opportunities Program (UROP). Students worked on a mix of projects focused on the brain, computing, and connecting computing to disciplines across MIT.

Picking the right word with a click

Nicholas Bonaker, a sophomore, is working on a software program called Nomon to help people with nearly complete paralysis to communicate by pressing a button. Nomon was created more than a decade ago by Tamara Broderick, as a master’s thesis, and soon found a following on the web. Now a computer science professor at MIT’s Computer Science and Artificial Intelligence Lab (CSAIL), Broderick handed Nomon off to Bonaker last summer for an update at a user’s request. 

The program allows the user to select from more than 80 words or characters on a screen; the user presses a button when a clock corresponding to the desired word or character reaches noon. The hands of each clock move slightly out of phase, helping Nomon to figure out which word or character to choose. The program automatically adapts to a user’s clicking style, giving those with less precise motor control more time to pick their word. 
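The selection step can be pictured with a small Python sketch. The numbers and the nearest-to-noon rule below are simplifications invented for illustration; the real Nomon uses a statistical model of each user's click timing.

    import math

    N_CLOCKS = 80     # assumed number of on-screen options
    PERIOD = 2.0      # assumed seconds per full revolution of each clock hand

    def hand_angle(clock_index, t):
        """Angle of a clock's hand at time t; each clock is slightly out of phase."""
        offset = clock_index * PERIOD / N_CLOCKS
        return ((t + offset) % PERIOD) / PERIOD * 2 * math.pi

    def likely_choice(t_click):
        """Pick the clock whose hand is closest to 'noon' (angle 0) at the click."""
        def distance_to_noon(i):
            a = hand_angle(i, t_click)
            return min(a, 2 * math.pi - a)
        return min(range(N_CLOCKS), key=distance_to_noon)

    print(likely_choice(0.31))  # index of the word or character the program would favor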

“Nick has made Nomon much easier for a user to install and run, including directly from Windows,” Broderick says. “He has dramatically improved the user interface, and refactored the code to make it easier to incorporate future improvements.”

Bonaker’s next step is to test Nomon on able-bodied and motor-impaired users to see how it compares to traditional row-column scanner software. “It’s been fun knowing this could have a big impact on someone’s life,” he says.

Predicting how materials respond to 3-D printing

3-D printers are now mainstream, but industrial molds are still better at turning out items like high-quality car parts, or replacement hips and knees. Senior Alexander Denmark chose a project in the lab of Elsa Olivetti, a professor in the Department of Materials Science and Engineering, to understand how 3-D printing methods can be made more consistent.

Working with graduate students in Olivetti’s lab, Denmark used machine-learning algorithms to explore how the printer’s laser speed, and the layering of different types of materials, influence the properties of the finished product. He helped build a framework for comparing 3-D printing parameters to the final product’s mechanical properties. 

“We hope to use it as a guide in printing experiments,” he says. “Say I want our final product to be really strong, or relatively lightweight, this approach could help tell us at what power to set the laser or how thick each layer of material should be.” 
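A toy version of such a framework might look like the sketch below, which fits a regression model from printing parameters to a measured property. The column names and numbers are invented placeholders, not data from the Olivetti lab.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor

    # Hypothetical settings: laser power (W), scan speed (mm/s), layer thickness (um)
    X = np.array([[200, 800, 30],
                  [250, 900, 30],
                  [300, 1000, 40],
                  [350, 700, 50]])
    y = np.array([950.0, 1010.0, 980.0, 890.0])   # made-up tensile strengths (MPa)

    model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
    print(model.predict([[275, 850, 35]]))        # predicted strength for new settings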

Denmark says the project helped bring his coding skills to the next level. He also appreciated the mentoring he received from his graduate student colleagues. “They gave me a lot of advice on improving my approach,” he says.

A faster way to find new drugs

Developing new drugs is expensive because of the vast number of chemical combinations possible. Second-year student Alexandra Dima chose to work on a project in the lab of Rafael Gomez-Bombarelli, a professor of materials science and engineering. Gomez-Bombarelli is using machine-learning tools to narrow the search for promising drug candidates by predicting which molecules are most likely to bind with a target protein in the body. 

So far, Dima has helped to build a database of hundreds of thousands of small molecules and proteins, detailing their chemical structures and binding properties. She has also worked on the deep learning framework aimed at predicting which molecule-protein pairs have the strongest binding affinity, and thus, represent the most promising drug candidates. Specifically, she helped to optimize the parameters of a message-passing neural network in the framework. 
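The core message-passing update can be written compactly. The sketch below is a generic, single-step illustration in NumPy; the actual network architecture, featurization, and parameters used in the lab's framework are not shown here.

    import numpy as np

    def message_pass(node_feats, adjacency, w_msg, w_update):
        """One round of message passing: each atom sums messages from bonded neighbors."""
        messages = adjacency @ (node_feats @ w_msg)          # aggregate neighbor messages
        return np.tanh(node_feats @ w_update + messages)     # update each atom's state

    rng = np.random.default_rng(0)
    n_atoms, dim = 6, 8
    features = rng.normal(size=(n_atoms, dim))               # placeholder atom features
    bonds = np.zeros((n_atoms, n_atoms))
    bonds[[0, 1, 2, 3, 4], [1, 2, 3, 4, 5]] = 1              # a toy chain of bonds
    bonds = bonds + bonds.T                                   # make the bond matrix symmetric

    updated = message_pass(features, bonds,
                           rng.normal(size=(dim, dim)), rng.normal(size=(dim, dim)))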

Among the challenges she overcame, she says, was learning to extract massive amounts of data from the web and standardize it. She also enjoyed the deep dive into bioinformatics, and as a computer science and biology major, being able to work on a real-world application. “I feel so lucky that I got to start using my coding skills to build tools that have a real life-sciences application,” she says.

Improving face-recognition models

Neeraj Prasad, a sophomore, is using machine learning tools to test ideas about how the brain organizes visual information. His project in the lab of Pawan Sinha, a neuroscience professor in the Department of Brain and Cognitive Sciences (BCS), started with a puzzle: Why are children who are treated for cataracts unable to later recognize faces? The retina matures faster in newborns with cataracts, leading researchers to hypothesize that the newborns, by missing out on seeing faces through blurry eyes, failed to learn to identify faces by their overall configuration.

With researchers in Sinha’s lab, Prasad tested the idea on computer models based on convolutional neural networks, a form of deep learning that mimics the human visual system. When the researchers trained the neural nets on pictures of blurred, filtered, or discolored faces, the networks were able to generalize what they had learned to new faces, suggesting that the blurry vision we have as babies helps us learn to recognize faces. The results offer insight into how the visual system develops, and suggest a new method for improving face-recognition software. 
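One common way to set up such a training regime is to degrade the images on the way into the network. The snippet below is a generic torchvision-style sketch under that assumption; it is not the Sinha lab's actual pipeline, and the dataset path is hypothetical.

    from torchvision import transforms
    from torchvision.datasets import ImageFolder

    degrade = transforms.Compose([
        transforms.Resize((224, 224)),
        transforms.GaussianBlur(kernel_size=21, sigma=(2.0, 6.0)),  # blurry "newborn" vision
        transforms.ColorJitter(saturation=0.8, hue=0.1),            # discolored faces
        transforms.ToTensor(),
    ])

    # train_set = ImageFolder("faces/train", transform=degrade)  # hypothetical dataset path
    # The network is then trained on these degraded images and tested on clean ones.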

Prasad says he learned new computational techniques and how to use the symbolic math library, TensorFlow. Patience was also required. “It took a lot of time to train the neural nets — the networks are so large that we often had to wait several days, even on a supercomputer, for results,” he says. 

Tracking language comprehension in real time

Language underlies much of what we think of as intelligence: It lets us represent concepts and ideas, think and reason about the world, and communicate and coordinate with others. To understand how the brain pulls it all off, psychologists have developed methods for tracking how quickly people grasp what they read and hear, in so-called sentence-processing experiments. Longer reading times can indicate that a word, in a given context, is harder to comprehend, thus helping researchers fill out a general model of how language comprehension works. 
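In computational psycholinguistics, a standard way to connect such reading times to a model is word-by-word surprisal from a language model, since higher surprisal generally predicts longer reading times. The sketch below shows that calculation with an off-the-shelf model, as a generic illustration rather than the specific method used in Levy's lab.

    import math
    import torch
    from transformers import GPT2LMHeadModel, GPT2TokenizerFast

    tok = GPT2TokenizerFast.from_pretrained("gpt2")
    lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

    def surprisals(sentence):
        """Per-token surprisal (in bits) under the language model."""
        ids = tok(sentence, return_tensors="pt").input_ids
        with torch.no_grad():
            logits = lm(ids).logits
        log_probs = torch.log_softmax(logits[0, :-1], dim=-1)
        targets = ids[0, 1:]
        idx = torch.arange(targets.size(0))
        bits = -log_probs[idx, targets] / math.log(2)
        return list(zip(tok.convert_ids_to_tokens(targets.tolist()), bits.tolist()))

    print(surprisals("The horse raced past the barn fell."))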

Veronica Boyce, a senior majoring in brain and cognitive sciences, has been working in the lab of BCS computational psycholinguistics professor Roger Levy to adapt a sentence-processing experimental method for the web, where more participants can be recruited. The method is powerful but requires labor-intensive hand-crafting of experimental materials. This fall, she showed that deep-learning language models could automatically generate experimental materials and, remarkably, produce higher-quality experiments than manually crafted materials do.

Boyce presents her results next month at the CUNY Conference on Sentence Processing, and will try to improve on her method by building in grammatical structures as part of a related project under the MIT-IBM Watson AI Lab. Current deep-learning language models have no explicit representation of grammar; the patterns they learn in text and speech are based on statistical calculations rather than a set of symbolic rules governing nouns, verbs and other parts of speech. 

“Our work is showing that these hybrid symbolic-deep learning models often do better than traditional models in capturing grammar in language,” says Levy. “This is exciting for Veronica’s project, and future sentence-processing work. It has the potential to advance research in both human and machine intelligence.”

A conversational calorie counter

A computer science major and a triple jumper on the MIT Track and Field team, third-year student Elizabeth Weeks had the chance this fall to combine her interests in technology and healthy eating by working on a voice-controlled nutrition app in the lab of James Glass, a senior research scientist at CSAIL.

Coco Nutritionist lets users log their meals by talking into their phone rather than manually typing in the information. A collaboration between computer scientists at MIT and nutritionists at Tufts University, the app is meant to make it easier for people to track what they eat, and thus avoid empty calories and mindless eating. 

Weeks helped develop the user interface and, on the back end, built a new feature for adding recipes and homemade meals, making meal data in the cloud accessible through a call to the server. “Lots of users had requested that we add this feature, and Elizabeth really pulled it off,” says Mandy Korpusik, a graduate student in CSAIL who led the project. Coco Nutritionist made its debut in Apple’s App Store last month and has already racked up nearly 900 downloads. 

The Quest for Intelligence UROP projects were funded by former Alphabet executive chairman Eric Schmidt and his wife, Wendy; the MIT-IBM Watson AI Lab; and the MIT-SenseTime Alliance on Artificial Intelligence.

Four from MIT named 2019 Sloan Research Fellows

Thu, 02/21/2019 - 12:30pm

Four members of the MIT faculty representing the departments of Economics, Mathematics, and Physics were recently named recipients of the 2019 Sloan Research Fellowships from the Alfred P. Sloan Foundation. The recipients, all early-career scholars in their fields, will each receive a two-year, $70,000 fellowship to further their research.

This year’s MIT recipients are among 126 scientists who represent 57 institutions of higher education in the United States and Canada. The 2019 cohort brings MIT’s total to nearly 300 fellows — more than any other institution has accumulated in the history of the fellowships since their inception in 1955.

Sloan Fellows are nominated by their fellow researchers and selected by an independent panel of senior scholars on “the basis of a candidate’s research accomplishments, creativity, and potential to become a leader in his or her field.”

2019 Sloan Fellow Nikhil Agarwal, the Castle Krob Career Development Assistant Professor of Economics in the School of Humanities, Arts, and Social Sciences, studies the empirics of matching markets. 

“In these marketplaces, agents cannot simply choose their most preferred option from a menu with posted prices, because goods may be rationed or agents on the other side of the market must agree to a match,” Agarwal says of markets that include medical residency programs, kidney donation, and public school choice. “My research interests lie in how the market structure, market rules, and government policies affect economic outcomes in these settings. To this end, my research involves both developing new empirical techniques and answering applied questions,” he says.

Nancy Rose, department head and Charles P. Kindleberger Professor of Applied Economics, nominated Agarwal. “Nikhil [Agarwal] has made fundamental contributions to the empirical analysis of matching markets, advancing both economic science and public policy objectives,” says Rose. 

Andrew Lawrie, an assistant professor in the Department of Mathematics, is an analyst studying geometric partial differential equations. He investigates the behavior of waves as they interact with each other and with their surrounding medium. 

Lawrie's research focuses on solitons — coherent solitary waves that describe nonlinear dynamics as varied as rogue waves in the ocean, black holes, and short-pulse lasers. Together with Jacek Jendrej, a researcher at Le Centre National de la Recherche Scientifique and Université Paris 13, Lawrie recently gave the first mathematically rigorous example of a completely inelastic two-soliton collision. 

“Dr. Lawrie's mathematical versatility and knowledge recently has been put on great display,” says one of Lawrie’s nominators of his paper in the research journal Inventiones Mathematicae. “This is one of those papers that completely describe mathematically an important phenomenon.”

“He has amassed an astonishingly broad and deep body of work for somebody who is only on his second year of a tenure track,” says his nominator, who requested anonymity. 

Lawrie’s colleague Yufei Zhao was also named a 2019 Sloan Fellow recipient. Zhao, the Class of 1956 Career Development Assistant Professor in the Department of Mathematics, is a researcher in discrete mathematics who has made significant contributions in combinatorics with applications to computer science. 

In major research accomplishments, Zhao contributed to a better understanding of the celebrated Green-Tao theorem, which states that prime numbers contain arbitrarily long arithmetic progressions. Zhao’s proof, co-authored with Jacob Fox, Zhao’s advisor and a former professor in the mathematics department, and David Conlon at the University of Oxford, simplifies a central part of the proof, allowing a more direct route to the Green-Tao theorem. Their work improves the understanding of pseudorandom structures — non-random objects with random-like properties — and has other applications in mathematics and computer science.

“The resulting proof is clean and fits in 25 pages, well under half the length of the original proof,” says Larry Guth, Zhao’s nominator and a professor of mathematics at MIT. “His expository work on the Green-Tao theorem is a real service to the community.”

The final 2019 Sloan Research Fellow recipient is Daniel Harlow, an assistant professor in the Department of Physics. Harlow researches cosmological events, viewed through the lens of quantum gravity and quantum field theory.

“My research is focused on understanding the most extreme events in our universe: black holes and the Big Bang. Each year brings more observational evidence for these events, but without a theory of quantum gravity, we are not able to explain them in a satisfying way,” says Harlow, whose work has helped clarify many aspects of symmetries in quantum field theory and quantum gravity.

Harlow, who is a researcher in the Laboratory for Nuclear Science, has been working with Hirosi Ooguri, Fred Kavli Professor and director of the Walter Burke Institute for Theoretical Physics at Caltech, to give improved explanations of several well-known phenomena in the standard model of particle physics.

“We are very proud of Dan’s work with Ooguri on foundational aspects of symmetries in quantum field theory,” says Peter Fisher, department head and professor of physics. 

“Sloan Research Fellows are the best young scientists working today,” says Adam F. Falk, president of the Alfred P. Sloan Foundation. “Sloan Fellows stand out for their creativity, for their hard work, for the importance of the issues they tackle, and the energy and innovation with which they tackle them. To be a Sloan Fellow is to be in the vanguard of 21st century science.”

Dan Huttenlocher named inaugural dean of MIT Schwarzman College of Computing

Thu, 02/21/2019 - 6:44am

Dan Huttenlocher SM ’84, PhD ’88, a seasoned builder and leader of new academic entities at Cornell University, has been named as the first dean of the MIT Stephen A. Schwarzman College of Computing. He will assume his new post this summer.

A member of Cornell’s computer science faculty since 1988, Huttenlocher has served since 2012 as the founding dean of Cornell Tech, a graduate school in New York City that focuses on digital technology and its economic and societal impacts. Previously, he helped create and then led Cornell’s Faculty of Computing and Information Science.

Huttenlocher returns to MIT with widely published scholarship in computer science, as well as a strongly interdisciplinary approach to computing. He also brings extensive background in industry: Huttenlocher served for 12 years as a scientist at Xerox’s Palo Alto Research Center (PARC) before leaving to co-found a financial technology company in 2000. He currently chairs the board of the John D. and Catherine T. MacArthur Foundation, and sits on the boards of directors of Amazon and Corning.

Huttenlocher’s appointment was announced this morning by Provost Martin Schmidt in a letter to the MIT community.

“The College of Computing represents a pivotal new chapter in MIT’s perpetual effort to reinvent itself in order to live up to its mission,” Schmidt wrote. “As we take on this Institute-wide challenge, I believe we are very fortunate to have, in Dan, such an inspiring and collaborative leader — someone equipped to ensure a strong beginning for the College and to lay a foundation for its lasting success. I very much look forward to working with Dan as he takes on this new challenge.”

The MIT Schwarzman College of Computing was announced last October as a $1 billion commitment to addressing the opportunities and challenges presented by the prevalence of computing and the rise of artificial intelligence. The initiative aims to reorient MIT to bring the power of computing and artificial intelligence to all fields of study, and to educate students in every discipline to responsibly use and develop these technologies. The college is slated to open in September 2019.

“MIT Schwarzman College is an ambitious experiment in educating the leaders society needs to navigate the algorithmic future,” says MIT President L. Rafael Reif. “For its founding dean, we looked for someone who combined educational creativity, instinctive collegiality, intellectual depth and breadth, institutional savvy, and industry experience. In Dan Huttenlocher, we found all these qualities — along with the signature MIT combination of boldness, enthusiasm, and humility. I'm eager to work with Dan, and I look forward to seeing how he leads our community through the college’s ongoing evolution.”

Huttenlocher holds two degrees from the Institute: In 1984, he earned a master’s degree in electrical engineering and computer science, and in 1988, he earned his PhD in computer science from the Artificial Intelligence Laboratory — a precursor to today’s Computer Science and Artificial Intelligence Laboratory (CSAIL). Huttenlocher sees it as an honor to return to the Institute in a leadership position, especially at a time when computing is evolving rapidly, producing new opportunities and challenges.

“MIT has a bold vision for rethinking computing-related research and education to respond to today’s and tomorrow’s challenges and opportunities,” Huttenlocher says. “The Institute plays a unique role in American and global higher education, and it’s exciting to be coming back to the place where I did my formative research work. I’ve learned a lot from MIT, and the world has changed a lot. If I can help contribute to the ways MIT wants to change in this new world, that is an amazing honor.”

Huttenlocher earned a bachelor’s degree from the University of Michigan in 1980, double-majoring in computer and communication sciences and experimental psychology. In recent years, he has served on two MIT visiting committees, one for Undergraduate and Graduate Programs and the other for the MIT Media Lab.

Since 2018, Huttenlocher has chaired the board of the John D. and Catherine T. MacArthur Foundation, a Chicago-based global foundation with assets of more than $6 billion.

“In the eight years that I have known Dan, as board member and now chair, he has proven himself to be intellectually curious, always respectful, and deeply supportive, especially of work that is both innovative and rigorous,” says Julia Stasch, president of the John D. and Catherine T. MacArthur Foundation. “He challenges us to be pragmatic and to be bold, and to move with the urgency that these times require. He is engaged in all that we do, from criminal justice and journalism to climate solutions, but we have especially benefitted from his wise counsel as we consider, across every area of work, ethical and social considerations in the inevitable and increasing use of technology.”

Thinking multidisciplinary and enterprising

Huttenlocher has taught computer science, business and management, and information science courses at Cornell. In 1998, he chaired the task force that created Cornell’s interdisciplinary Faculty of Computing and Information Science (CIS) in Ithaca. In 2009, he was named dean of CIS; as dean, he helped bring statistics and computing closer together at Cornell, and led the development of Gates Hall as the new home for CIS.

In 2012, Huttenlocher was named founding dean and vice provost of Cornell Tech, a new graduate school focused on technology, business, law, and design, located on Roosevelt Island in the East River between Manhattan and Queens.

Cornell Tech arose from a competition, launched in 2010 by then-New York Mayor Michael Bloomberg, that aimed to build a new applied sciences campus on one of several sites in the city. In 2011, a joint proposal by Cornell and Technion-Israel Institute of Technology was selected from among seven formal proposals. Construction began on the 12-acre campus in 2014, and the first three buildings opened in 2017.

As dean and vice provost of Cornell Tech, Huttenlocher has honed his interdisciplinary approach to computing, fostering education at the intersection of computer science and areas including financial technology, media studies, health care, and urban planning. He built a multidisciplinary faculty — in computer science and engineering, as well as law, management, and design — to shape the school’s curriculum, and has established close ties with industry, nonprofits, government agencies, and investors. Under his leadership, the Cornell Tech curriculum has married computer science with practical application to the workforce and tech startups, creating several new degrees along the way.

Huttenlocher’s own research is broad, spanning algorithms, social media, and computer vision. He’s earned the Longuet-Higgins Award for Fundamental Advances in Computer Vision (2010), and various fellowships and awards from the National Science Foundation, the Association for Computing Machinery, IEEE, and Phi Beta Kappa, the oldest academic honor society in the United States. Throughout his teaching career, Huttenlocher has earned numerous teaching awards, including both the Stephen H. Weiss Fellowship and a Faculty of the Year Award at Cornell, and the New York State Professor of the Year award in 1993.

The path ahead

Huttenlocher believes it is important to help train students to be enterprising, which he sees as “thinking and executing bigger and better,” whether in an entrepreneurial, corporate, government, nonprofit, or academic setting. “It’s about making sure students are prepared to take advantage of the digital age — where you can build things that have a big impact in relatively short order with relatively small groups,” he says.

Huttenlocher himself co-founded a financial technology company, Intelligent Markets, in 2000, remaining involved for six years; the company was later sold to SunGard Financial Systems. After that experience, and helping to launch Cornell Tech, Huttenlocher says he doesn’t subscribe to inflexible visions: Many times, he says, ideas change as an entity evolves.

As dean of the MIT Schwarzman College of Computing, Huttenlocher will aim to educate students on the new societal challenges posed by artificial intelligence and automated decision-making, focusing on ethical questions pertaining to data, algorithms, and computing in general.

“These aren’t issues that can be viewed solely through a technology lens, nor solely through a humanistic lens, nor solely through a social science lens,” he says. “Addressing those questions means bringing those disparate parts of academia together. But it also can’t be answered just by academia. It means engaging people outside academia in understanding what they’re afraid of and what excites them about those technologies.”

With the MIT Schwarzman College of Computing poised to begin hiring 50 new faculty members whose research links computer science with other academic fields, Huttenlocher says, “That’s one of the most exciting parts of this new role: looking at people who can think about research from a computing perspective, and from other perspectives, to understand how computing influences work in other disciplines and how work in other disciplines influences computing.”

“These fields are evolving,” he adds. “MIT needs to not only lead in those areas, but also in their evolution — much as engineering itself evolved a lot 150 years ago, in the early years of the Institute.”

As dean, Huttenlocher will build upon the efforts of five working groups, announced this month by Provost Schmidt. These working groups will begin to give shape to the new college’s organizational structure, faculty appointments, curriculum and degrees, focus on social implications and responsibilities, and infrastructure. The working groups — guided by a steering committee that includes the provost, Dean of Engineering Anantha Chandrakasan, and MIT Faculty Chair Susan Silbey — will aim to produce a report describing their thoughts on these important issues by May.

Leventhal City Prize seeks to spark transformative urban design and planning approaches

Wed, 02/20/2019 - 2:20pm

Vibrant, innovative cities most often result from powerful collaborations among diverse constituencies.

To support this ideal, the MIT Norman B. Leventhal Center for Advanced Urbanism (LCAU) has announced the creation of a new interdisciplinary prize aimed at catalyzing innovative urban design and planning approaches worldwide, with a goal of improving the quality of life and environment for residents. 

The prize has been established in honor of the late Norman B. Leventhal, the visionary developer and philanthropist whose contributions transformed Boston’s urban landscape. His civic leadership drove Boston’s urban revival, through projects such as Rowes Wharf, Center Plaza, South Station, and One Post Office Square.

A prize of $100,000 will be awarded on a three-year cycle to an interdisciplinary team of MIT faculty to work together with either a government agency, nonprofit organization, or civic leadership group anywhere in the world. The winning team must demonstrate the potential to improve the quality of life in cities through an innovative urban design and/or a planning project. The winners must also be able to incorporate the collaborative project in future teaching and research at MIT.

The prize has a number of goals: to develop real-world urban design solutions that advance social and environmental change; to foster new pathways for unconventional projects to get realized; to create innovative solutions using the most advanced knowledge available; and to promote collaboration among MIT faculty, students, and civic entities.

“What makes this prize really unique is that it is offered to a city and an MIT team to work together,” says Hashim Sarkis, dean of the School of Architecture and Planning. “True to the mission of the LCAU and the legacy of Norman Leventhal, it fosters collaboration, imagination, and implementation at the same time.”

For its first cycle, the Norman B. Leventhal City Prize will solicit novel responses related to LCAU’s triennial theme, equitable resilience. Equitable resilience foregrounds concerns for equity when planning, designing, and retrofitting cities with consideration of climate change and other environmental shocks or stresses. Making equity a central goal for resilience efforts, LCAU seeks proposals from all geographies that aim to develop physical design solutions that do not reinforce existing inequalities or create new ones.

“Many cities worldwide are connecting resilience adaptation goals with general development needs and strategic planning efforts,” says Alan M. Berger, the Norman B. Leventhal Professor of Advanced Urbanism. “Although these efforts provide a good starting point, it is imperative to make explicit the differential vulnerability of various socioeconomic groups in the face of increasing severity and frequency of climate change-related risks and natural disasters.”

Since its establishment in 2013 within the School of Architecture and Planning, the LCAU has sought to define the field of advanced urbanism, integrating research on urban design with processes of urbanization and urban culture, to meet the contemporary challenges facing the world’s cities.

Drawing on MIT’s deep history in urban design and planning, architecture, and transportation, the LCAU coordinates multidisciplinary, multifaceted approaches to advance the understanding of cities and propose new forms and systems for urban communities. Support for this program was provided by the Muriel and Norman B. Leventhal Family Foundation and the Sherry and Alan Leventhal Family Foundation.

For more details on the prize, see leventhalcityprize.mit.edu.

Study of quark speeds finds a solution for a 35-year physics mystery

Wed, 02/20/2019 - 1:00pm

MIT physicists now have an answer to a question in nuclear physics that has puzzled scientists for 35 years: Why do quarks move more slowly inside larger atoms?

Quarks, along with gluons, are the fundamental building blocks of the universe. These subatomic particles — the smallest particles we know of — are far smaller, and operate at much higher energy levels, than the protons and neutrons in which they are found. Physicists have therefore assumed that a quark should be blithely indifferent to the characteristics of the protons and neutrons, and the overall atom, in which it resides.

But in 1983, physicists at CERN, as part of the European Muon Collaboration (EMC), observed for the first time what would become known as the EMC effect: In the nucleus of an iron atom containing many protons and neutrons, quarks move significantly more slowly than quarks in deuterium, which contains a single proton and neutron. Since then, physicists have found more evidence that the larger an atom’s nucleus, the slower the quarks that move within.

“People have been wracking their brains for 35 years, trying to explain why this effect happens,” says Or Hen, assistant professor of physics at MIT.

Now Hen, Barak Schmookler, and Axel Schmidt, a graduate student and postdoc in MIT’s Laboratory for Nuclear Science, have led an international team of physicists in identifying an explanation for the EMC effect. They have found that a quark’s speed depends on the number of protons and neutrons forming short-ranged correlated pairs in an atom’s nucleus. The more such pairs there are in a nucleus, the more slowly the quarks move within the atom’s protons and neutrons.

Schmidt says an atom’s protons and neutrons can pair up constantly, but only momentarily, before splitting apart and going their separate ways. During this brief, high-energy interaction, he believes that quarks in their respective particles may have a “larger space to play.”

“In quantum mechanics, anytime you increase the volume over which an object is confined, it slows down,” Schmidt says. “If you tighten up the space, it speeds up. That’s a known fact.”
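A textbook particle-in-a-box estimate makes that statement concrete (an illustrative back-of-the-envelope relation, not the collaboration's calculation): for a particle of mass m confined to a region of length L, the allowed momenta and energies are

    p_n = \frac{n \pi \hbar}{L}, \qquad E_n = \frac{p_n^2}{2m} = \frac{n^2 \pi^2 \hbar^2}{2 m L^2},

so enlarging the confinement length L pushes the allowed momenta down, which is the sense in which a quark with a “larger space to play” moves more slowly.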

As atoms with larger nuclei intrinsically have more protons and neutrons, they also are more likely to have a higher number of proton-neutron pairs, also known as “short-range correlated” or SRC pairs. Therefore, the team concludes that the larger the atom, the more pairs it is likely to contain, resulting in slower-moving quarks in that particular atom.

Schmookler, Schmidt, and Hen, as members of the CLAS Collaboration at the Thomas Jefferson National Accelerator Facility, have published their results today in the journal Nature.

From a suggestion to a full picture

In 2011, Hen and collaborators, who have focused much of their research on SRC pairs, wondered whether this ephemeral coupling had anything to do with the EMC effect and the speed of quarks in atomic nuclei.

They gathered data from various particle accelerator experiments, some of which measured the behavior of quarks in certain atomic nuclei, while others detected SRC pairs in other nuclei. When they plotted the data on a graph, a clear trend appeared: The larger an atom’s nucleus, the more SRC pairs there were, and the slower the quarks that were measured. The largest nucleus in the data — gold — contained quarks that moved 20 percent more slowly than those in the smallest measured nucleus, helium.

“This was the first time this connection was concretely suggested,” Hen says. “But we had to do a more detailed study to build a whole physical picture.”

So he and his colleagues analyzed data from an experiment that compared atoms of different sizes and allowed them to measure both the quarks’ speed and the number of SRC pairs in each atom’s nucleus. The experiment was carried out at the CEBAF Large Acceptance Spectrometer, or CLAS detector, an enormous, four-story spherical particle detector at the Thomas Jefferson National Laboratory in Newport News, Virginia.

Within the detector, Hen describes the team’s target setup as a “kind of a Frankenstein-ish thing,” with mechanical arms, each holding a thin foil made from a different material, such as carbon, aluminum, iron, and lead, each made from atoms containing 12, 27, 56, and 208 protons and neutrons, respectively. An adjacent vessel held liquid deuterium, with atoms containing the lowest number of protons and neutrons of the group.

When they wanted to study a particular foil, they sent a command to the relevant arm to lower the foil of interest, placing it just behind the deuterium cell and directly in the path of the electron beam. This beam shot electrons at the deuterium cell and solid foil, at the rate of several billion electrons per second. While the vast majority of electrons miss the targets, some do hit either the protons or neutrons inside the nucleus, or the much tinier quarks themselves. When they hit, the electrons scatter widely, and the angles and energies at which they scatter vary depending on what they hit — information that the detector captures.

Electron tuning

The experiment ran for several months and in the end amassed billions of interactions between electrons and quarks. The researchers calculated the speed of the quark in each interaction, based on the electron’s energy after it scattered, then compared the average quark speed between the various atoms.

By looking at much smaller scattering angles, corresponding to lower momentum transfers and longer probing wavelengths, the team was able to “zoom out” so that the electrons would scatter off the larger protons and neutrons, rather than off the quarks inside them. SRC pairs are typically extremely energetic and would therefore scatter electrons at higher energies than unpaired protons and neutrons would, a distinction the researchers used to detect SRC pairs in each material they studied.
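That “zoom” follows from the textbook relation between a probe's momentum transfer Q and the length scale it can resolve (a standard order-of-magnitude estimate, not the experiment's full kinematic analysis):

    \lambda \sim \frac{\hbar}{Q},

so smaller scattering angles mean lower Q, a longer effective wavelength, and sensitivity to whole protons and neutrons rather than to the quarks inside them.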

“We see that these high-momentum pairs are the reason for these slow-moving quarks,” Hen says.

In particular, they found that the quarks in foils with larger atomic nuclei (and more proton-neutron pairs) moved up to 20 percent more slowly than quarks in deuterium, the material with the fewest pairs.

“These pairs of protons and neutrons have this crazy high-energy interaction, very quickly, and then dissipate,” Schmidt says. “In that time, the interaction is much stronger than normal and the nucleons have significant spatial overlap. So we think quarks in this state slow down a lot.”

Their data show for the first time that how much a quark’s speed is slowed depends on the number of SRC pairs in an atomic nucleus. Quarks in lead, for instance, were far slower than those in aluminum, which themselves were slower than those in iron, and so on.

The team is now designing an experiment in which they hope to detect the speed of quarks, specifically in SRC pairs.

“We want to isolate and measure correlated pairs, and we expect that will yield this same universal function, in that the way quarks change their velocity inside pairs is the same in carbon and lead, and should be universal across nuclei,” Schmidt says.

Ultimately, the team’s new explanation can help to illuminate subtle yet important differences in the behavior of quarks, the most basic building blocks of the visible world. Scientists have an incomplete understanding of how these tiny particles come to build the protons and neutrons that then come together to form the individual atoms that make up all the material we see in the universe.

“Understanding how quarks interact is really the essence of understanding the visible matter in the universe,” Hen says. “This EMC effect, even though 10 to 20 percent, is something so fundamental that we want to understand it.”

This research was funded, in part, by the U.S. Department of Energy and the National Science Foundation.

Hiroshi Ishii wins Association for Computing Machinery SIGCHI Lifetime Research Award

Wed, 02/20/2019 - 12:00pm

Hiroshi Ishii, the Jerome B. Wiesner Professor of Media Arts and Sciences at MIT, has been awarded the 2019 Association for Computing Machinery (ACM) SIGCHI Lifetime Research Award. He will accept the award and deliver a keynote presentation at the 2019 CHI Conference on Human Factors in Computing Systems in Glasgow, Scotland, this May.

The Lifetime Research Award is given to individuals whose research in human-computer interaction (HCI) is considered both fundamental and influential to the field. As head of the Tangible Media Group at the MIT Media Lab since 1995, Ishii has pushed the boundaries of digital technology by giving physical form to digital information. He is recognized as a founder of Tangible User Interfaces, a research genre based on the CHI ’97 “Tangible Bits” paper presented with Brygg Ullmer. The paper led to the spinoff ACM International Conference on Tangible, Embedded, and Embodied Interaction, starting in 2007.

“It is an incredible honor for me as an HCI researcher, and I’m extremely excited for this recognition of the Tangible Media Group’s quarter-century battle against the pixel empire of graphical user interfaces,” says Ishii.

Ishii’s work focuses on hypothetical generation of materials that can change form and properties dynamically and computationally, becoming as reconfigurable as the pixels on a graphical user interface screen. His team’s projects, which Ishii describes under the banner of “radical atoms and tangible bits,” have contributed to forming the new stream of “shape-changing user interface” research in the HCI community.

Ishii and his team have presented their work at a variety of academic, design, and artistic venues (including ACM SIGCHI, ACM SIGGRAPH, Industrial Design Society of America, AIGA, Ars Electronica, ICC, Centre Pompidou, Victoria and Albert Museum, Cooper Hewitt Design Museum, and Milan Design Week), emphasizing that the design of engaging and inspiring tangible interactions requires the rigor of both scientific and artistic review, encapsulated by his motto, “Be artistic and analytic. Be poetic and pragmatic.”

“Hiroshi is an inspiration to all of us at the Media Lab,” says lab director Joi Ito. “It’s been exciting to see how his vision has influenced and motivated so many people around the world, as well as here at MIT. This honor is truly deserved.”

Ishii’s keynote presentation at the CHI conference in Glasgow will outline his ongoing vision for “Radical Atoms and Tangible Bits”: seeking to realize seamless interfaces between humans, digital information, and the physical environment.

“Our goal is to invent new design media for artistic expression as well as for scientific analysis, taking advantage of the richness of human senses and skills we develop throughout our lifetime interacting with the physical world, as well as the computational reflection enabled by real-time sensing and digital feedback,” he says. 

Putting data privacy in the hands of users

Wed, 02/20/2019 - 9:48am

A new platform developed by MIT and Harvard University researchers ensures that web services adhere to users’ preferences on how their data are stored and shared in the cloud.

In today’s world of cloud computing, users of mobile apps and web services store personal data on remote data center servers. These data may include photos, social media profiles, email addresses, and even fitness data from wearable devices. Services often aggregate multiple users’ data across servers to gain insights on, say, consumer shopping patterns to help recommend new items to specific users, or may share data with advertisers. Traditionally, however, users haven’t had the power to restrict how their data are processed and shared.

In a paper being presented at this week’s USENIX Networked Systems Design and Implementation conference, the researchers describe a platform, called Riverbed, that forces data center servers to only use data in ways that users explicitly approve.  

In Riverbed, a user’s web browser or smartphone app does not communicate with the cloud directly. Instead, a Riverbed proxy runs on a user’s device to mediate communication. When the app tries to upload user data to a remote service, the proxy tags the data with a set of permissible uses, called a “policy.”

Users can select any number of predefined restrictions — such as, “do not store my data on persistent storage” or “my data may only be shared with the external service x.com.” The proxy tags all the data with the selected policy.
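A minimal sketch of what the proxy-side tagging might look like is below. The field names, the Policy class, and the example payload are all invented for illustration; Riverbed's actual policy language and wire format are not shown here.

    from dataclasses import dataclass

    @dataclass(frozen=True)
    class Policy:
        persistent_storage: bool = False        # "do not store my data on persistent storage"
        share_with: frozenset = frozenset()     # e.g. {"x.com"}: whitelisted external services

    @dataclass
    class TaggedUpload:
        payload: bytes
        policy: Policy

    def tag(payload: bytes, policy: Policy) -> TaggedUpload:
        """The proxy wraps every outgoing payload with the user's chosen policy."""
        return TaggedUpload(payload, policy)

    upload = tag(b'{"steps": 9200}', Policy(share_with=frozenset({"x.com"})))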

In the data center, Riverbed assigns the uploaded data to an isolated cluster of software components, with each cluster processing only data tagged with the same policies. For example, one cluster may contain data that can’t be shared with other services, while another may hold data that can’t be written to disk. Riverbed monitors the server-side code to ensure it adheres to a user’s policies. If it doesn’t, Riverbed terminates the service.

Riverbed aims to enforce user data preferences, while maintaining advantages of cloud computing, such as performing large-scale computations on outsourced servers. “Users give a lot of data to web apps for services, but lose control of how the data is used or where it’s going,” says first author Frank Wang SM ’16, PhD ’18, a recent graduate of the Department of Electrical Engineering and Computer Science and the Computer Science and Artificial Intelligence Laboratory. “We give users control to tell web apps, ‘This is exactly how you can use my data.’”

On that thread, an additional perk for app developers, Wang adds, is establishing more trust with users. “That’s a big thing now,” Wang says. “A selling point for your app would be saying, ‘My app’s goal is to protect user data.’”

Joining Wang on the paper are PhD student Ronny Ko and associate professor of computer science James Mickens, both of Harvard.

Creating “universes”

In 2016, the European Union passed the General Data Protection Regulation (GDPR), which states that users must consent to their data being accessed, that they have the right to request their data be deleted, and that companies must implement appropriate security measures. For web developers, however, these laws provide little technical guidance for writing sophisticated apps that need to leverage user data.

In the past, computer scientists have designed “information flow control” (IFC) systems that allow programmers to label program variables with data policies. But with so many variables and many possible interactions between variables, these systems are difficult to program. Thus, no large-scale web services use IFC techniques.

Primarily, Riverbed leverages the fact that the server-side code of an app can run atop a special “monitor” program — one that tracks, regulates, and verifies how other programs manipulate data. The monitor creates a separate copy of the app’s code for each unique policy assigned to data. Each copy is called a “universe.” The monitor ensures that users who share the same policy have their data uploaded to, and manipulated by, the same universe. This method enables the monitor to terminate a universe’s code if that code attempts to violate the universe’s data policy.
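In rough Python terms, the universe bookkeeping can be pictured like this. It is an illustrative sketch that reuses the hypothetical Policy and TaggedUpload classes from the earlier snippet; it is not Riverbed's implementation.

    class Universe:
        """One sandboxed copy of the app's server-side code, tied to a single policy."""
        def __init__(self, policy):
            self.policy = policy
            self.data = []

        def ingest(self, upload):
            if upload.policy != self.policy:
                # The monitor terminates a universe whose code violates its policy.
                raise RuntimeError("policy violation: terminating this universe")
            self.data.append(upload.payload)

    universes = {}

    def route(upload):
        """The monitor sends each upload to the universe matching its policy."""
        universe = universes.setdefault(upload.policy, Universe(upload.policy))
        universe.ingest(upload)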

This process relies on a custom interpreter, a program that translates source code into instructions a computer can execute as the program runs. The researchers modified a traditional interpreter to extract the policies attached to incoming user data and to label certain variables with the corresponding policy directions. Labels will, for instance, denote whitelisted web services for data sharing or restrict persistent storage — meaning the data can’t be stored when the user stops using the web service.

“Say I want my data to be aggregated with other users. That data is put into its own universe with other user data with the same policy,” Wang says. “If a user doesn’t want to share any data with anyone, then that user has their own whole universe. This way, you don’t have any cross-pollination of data.”

For developers, this makes it much easier to comply with GDPR and other privacy laws, Wang says, because users have given explicit consent for data access. “All users in each universe have the same policies, so you can do all your operations and not worry about what data is put into an algorithm, because everyone has the same policy on data in that universe,” Wang says.

Efficient copying

In the worst-case scenario, Wang says, each user of each service would have a separate universe. Generally, this could cause significant computation overhead and slow down the service. But the researchers leveraged a relatively new technique, called “container-based virtualization,” which allows the Riverbed monitor to more efficiently create multiple universes of the same program. As a result, universe management is fast, even if a service has hundreds or thousands of universes.

In their paper, the researchers evaluated Riverbed on several apps, demonstrating that the platform keeps data secure with little overhead. Results show that more than 1,000 universes can squeeze onto a single server, with added computation that slows down the service by about 10 percent. That’s fast and efficient enough for real-world use, Wang says.

The researchers envision the policies as being written by advocacy groups, such as Electronic Frontier Foundation (EFF), an international nonprofit digital rights group.

New policies can be “dropped in” to a Riverbed-run service at any time, meaning developers don’t need to rewrite code.

Q&A: Why cities aren’t working for the working class

Wed, 02/20/2019 - 9:41am

MIT economist David Autor made news in January, when he delivered the prestigious Richard T. Ely Lecture at the annual meeting of the American Economic Association and presented an attention-grabbing finding about the U.S. economy. Cities no longer provide an abundance of middle-skill jobs for workers without college degrees, he announced, based on his own careful analysis of decades of federal jobs data, which he scrutinized by occupation, location, and more. MIT News talked to Autor, the Ford Professor of Economics at MIT, about how this sea change is responsible for much of the “hollowing out” of the middle-class work force, and overall inequality, in the U.S. This interview has been edited for length.

Q: Your new research says that changes in the jobs available in cities have played a big role in the growth of inequality and polarization in the U.S. But what exactly is your new finding?

A: There’s a lot of economic literature that says, “Cities are where all the action is.” Wages are higher in cities; people flock to cities. I’ve been writing about the polarization of occupations for a long time, and the hollowing out of the middle class and the geography of that, but I always imagined that polarization reflected a time when the middle-skill jobs had been robustly present in lots of places, both urban and rural, and then hollowed out.

What I didn’t realize was the degree to which [overall U.S.] job polarization is an unwinding of a distinctive feature of high-density metro areas, which was highly present in the postwar period and is now entirely gone. There were these occupations where noncollege workers did skilled work in metro areas: production work and clerical, administrative work. These were middle-skill jobs and they were much more prevalent in cities, urban areas, and metro areas than they were in suburbs and rural areas. But that began to decline in the 1970s and is now extinct. There’s nothing remaining of that.

There’s just less and less mixing of high-skill and low-skill workers, because their jobs are no longer complementary. They’re no longer producing stuff jointly together.

Q: Why were these jobs in cities in the first place?

A:  Two reasons. One is, historically, manufacturing was an urban phenomenon. It had to be, because you needed access to transportation, good infrastructure, and so on. And then for clerical and administrative work, it’s because those occupations are inputs into the world of professionals, so they had to be where the high-skill professionals were.

Q: To what extent are we talking about strictly a loss of these two types of jobs, pillars of the middle class as they once were?

A: Those are the two biggest examples, but there’s been a big decline of all routine activity. Anything that follows a set of procedures. That doesn’t describe construction work, cleaning, creative work. But you see the decline of routine work even within jobs. There are still office clerical workers, but they’re more educated than they used to be; they do harder jobs. They do a different job — coordinating events, proofing papers, dealing with those godforsaken travel receipts. There are still production workers, but their work is more abstract — they manage complex machines rather than performing repetitive motions. Those are the two leading examples and they cover a lot of the terrain, but they don’t cover all of it.

Q: What are some other implications of this research?

A: There’s this big puzzle in the United States about declining geographic mobility. People have read that as a very negative sign — the labor market is so sclerotic, nobody can afford to move — but my conjecture is that it reflects highly educated people moving to major metro areas to attend college and then staying there. Meanwhile, non-college workers have less and less of an incentive to move to expensive cities where the wage premium for urban non-professionals has fallen steeply.

I think it [job polarization] speaks to our politics. Cities are different places than they used to be. They’re much more educated now, they’re much higher-wage, and they’re younger. They were distinctive before in the set of occupations they might have, and now they’re distinctive in extreme levels of education, high levels of wages, being relatively youthful, and being extremely diverse. You can see how this creates a growing urban, non-urban divide in voting patterns.

Q: Where does your research go from here, and where do policy discussions go?

A: I think there’s a big agenda that flows from this: what it looks like across other industrialized countries, how it relates to the age structure, what the new [jobs] look like. And it raises two other sets of questions. What should you do if you’re not highly educated, what’s the right opportunity, and what’s the right place? And, two, it’s crazy how concentrated, expensive, and highly educated some cities are. So, is this just a black hole phenomenon? Will cities get denser and denser, or will the countervailing forces prevail? There’s a lot of opportunity for smaller and midsize cities, like Pittsburgh, Nashville, Greensboro-Spartanburg, the places saying, “Hey, we have enough skilled people and restaurants and infrastructure, why don’t you come here?”

Q: Immigration is a prominent political issue right now, and your research here examines it. What did you find?

A: Immigrants are important to this story because the education distribution of immigrants is quite bimodal: They are disproportionately likely to have a postcollege degree but even more likely not to have a college degree at all. And they are concentrated in urban areas. I initially conjectured that part of this [overall wage and polarization trend] was just an artifact of immigration. But you find it equally among immigrants and nonimmigrants, so that’s not it. Immigrants are extremely important to both the skill supply in cities and the services supply in cities, but the hollowing out is not a reflection of immigration in any direct sense.

Q: So as you’ve quantified here, there was an exceptional period in U.S. postwar history that we took as the “normal” state of things.

A: Yes. We’re talking about something that was distinctive about the urban labor market for non-college workers in the postwar era, and now it’s gone.

Festival of Learning highlights innovation

Tue, 02/19/2019 - 3:10pm

The third annual Festival of Learning, organized by MIT Open Learning and the Office of the Vice Chancellor, highlighted educational innovation, including how digital technologies and shared best practices are enabling educators to drive better learning outcomes for MIT students and global learners via online courses. “As a community, we are energized by all the transformation and innovation happening within the education space right now,” said Krishna Rajagopal, dean for digital learning at MIT Open Learning, as he kicked off the festival.

The educator’s role: to engage and inspire learners

Keynote speaker Po-Shen Loh, Carnegie Mellon University associate professor, founder of online education platform Expii, and coach of the U.S. International Math Olympiad Team, surprised a morning audience of about 400 people in Room 10-250 when he held up a small red die and asked why opposite sides of the die always add up to seven. Loh then began a lively, Socratic interaction with the audience that blended math and physics with engaging humor. What Loh’s inquiry consciously didn’t include was digital technology.

“If we’re all here in this room together,” explained Loh, “we should be taking advantage of this unique opportunity to interact dynamically with each other.” Loh rejected the idea that an educator is someone who simply transmits content to learners. “The teacher’s role is not just to convey information, but to be a cheerleader and a coach inspiring learners to pursue knowledge on their own initiative.”

Loh then held up his smartphone. “Today, every person has an enormous amount of power to do good if they leverage technology.” He described how he founded Expii as a student-directed online learning platform in math and science that would allow users to tailor educational content to however they preferred to learn. Loh mentioned that teenagers love YouTube because it allows them to decide for themselves how they’ll pursue their own interests, pointing to the viral Baby Shark phenomenon. Expii followed a similar “personalized engagement” model: “Expii is built in such a way that anyone can contribute and anyone can learn in the ways they want to learn,” said Loh. The takeaway for educators was clear: Making space for personalization can drive engagement.

Loh concluded his hourlong talk by explaining that the accelerating pace of technological change, and the way that change impacts learning and work, have made the capacity to keep learning both urgent and essential: “You need to learn constantly today, no matter who you are and where you are in life,” he said.

Virtual reality in education

Next, D. Fox Harrell, professor of digital media and artificial intelligence and director of the MIT Center for Advanced Virtuality, kicked off the panel “Virtual Experience, Real Liberation: Technologies for Education and the Arts.” He moderated the panel and presented research on how extended-reality technologies such as virtual reality (VR) can be used to help people understand systemic social phenomena, such as dehumanizing the other in war, racial and ethnic socialization, and sexism in the workplace. Harrell argued that technologies of virtuality can serve the social good by reducing bias and helping people critically reflect upon society.

Harrell highlighted research projects on “how to use computer science to impact social issues” such as police brutality and global conflict resolution. VR, for example, is being used to allow people to engage with those on opposite sides of global conflicts virtually, providing them with insights into aspects of their shared humanity and fostering empathy.

Panelist Tabitha Peck, professor of mathematics and computer science at Davidson College, shared her research on using VR to combat implicit bias and stereotype threat, a situation in which individuals are at risk of conforming to a negative stereotype about a group to which they belong. By enabling users to inhabit another person’s body virtually, noted Peck, “a person is offered different perspectives that can impact behavior.” In one example, a domestic abuser was subjected to verbal abuse in a virtual world. “He broke down and cried after,” Peck said, and the experience became an important part of his treatment and recovery efforts.

Eran Egozy, professor of the practice in music technology and co-founder of Harmonix, next described how he has spent his career tackling a single question: “Can we create a musical instrument which shortens the learning curve for music-making, enabling learners to get to a point of enjoyment faster?” The extremely popular culmination of Egozy’s efforts at Harmonix was “Guitar Hero,” and he detailed the development of the blockbuster game. Egozy ended his talk on a high note, asking everyone in Room 10-250 to pull out their smartphones, connect to the internet, and use their phones to perform as an orchestra in an audience-participation experience called “Tutti.” With Egozy waving a baton at the front, and each section of the auditorium assigned a different, smartphone-enabled instrument, the audience played a three-minute musical composition called “Engineered Engineers.”

Finally, in the panel’s Q&A session that ended the morning festivities, Harrell prompted Peck and Egozy to explore how each of their systems plays a part in broader ecologies of users, designers, collaborators, caregivers, artists, and more. Technologies of virtuality, he asserted, are not panaceas on their own, but can act within networks of people and systems to serve the greater good.

Afternoon expo and workshops

The festival continued in the afternoon with a learning expo that included 26 exhibitors. One exhibitor was Residential Education, which uses digital tools to drive improved educational outcomes for on-campus courses. Meredith Davies, senior education technologist, explained, “We’re here at the festival to educate MIT faculty on the various ways they can use innovation to improve learning. We advise MIT faculty on how they can leverage research-based teaching practices and tailor digital tools to the needs of their learners.”

The festival concluded with three afternoon workshops focused on applying innovative tools and practices. “Applying Learning Sciences to Instruction” was led by MIT senior learning scientist Aaron Kessler. The workshop explored the origins of learning science as a bridge between cognitive psychology and other fields such as sociology, political science, computer science, and economics. This interdisciplinary approach allows researchers to gain applied insights into learning and teaching. Kessler encouraged workshop participants to discuss how learning science principles like the testing effect, interleaved practice, and spaced practice might be applied in the ways they teach and learn.

“It’s been great being here today at the Festival of Learning and seeing so many engaged people with so many different ideas about how to improve education,” Po-Shen Loh said. “What’s really struck me is the high level of enthusiasm everyone has shown for doing things better.”

Serving up brunch for the MIT graduate community

Tue, 02/19/2019 - 10:50am

On Feb. 3, residents of the graduate residence hall Sidney-Pacific hosted a Sunday brunch for all members of the MIT community. MIT President L. Rafael Reif and Senior Associate Dean for Graduate Education Blanche Staton were invited as special guests to the event.

A team of more than 30 volunteers got up in the early hours of the morning to prepare, cook, and serve brunch. Food trays were full of breakfast classics — eggs, sausages, bacon, pancakes, and fruit — in addition to options for guests with dietary restrictions. As an incentive, guests who went eco-friendly by bringing their own reusable plates and utensils were able to enter the brunch early.

Before brunch officially started, Reif and Staton were led by two Sidney-Pacific Executive Council (SPEC) members on a tour of the graduate residence hall. Then, Reif and Staton were presented with an official red Sidney-Pacific Brunch apron, signifying their welcome to the brunch team. Together with the brunch volunteers, Reif and Staton helped serve food to over 200 attendees.

“Everyone had a great time, and we are very glad that we could do something to benefit the MIT community,” said Sami Yamani, SPEC president and a PhD student in mechanical engineering. “We were all honored to have President Reif and Dean Staton helping us serve brunch to the MIT graduate community and then hang out with students while enjoying their meal.”

The brunches started in 2002 during Sidney-Pacific’s inaugural year as a graduate residence hall. As a way to build a graduate community within the new space, Roger and Dottie Mark, heads of house for Sidney-Pacific from 2002 to 2013, thought it would be a great idea to have a brunch chair as an officer position.

“We thought having a Sunday brunch would enable the folks to spend some good conversation time with others,” said Dottie Mark.

Added Roger: “Over time, the brunches got better and better as brunch chairs began to compete and get more creative. Everyone seemed to enjoy them.”

Sidney-Pacific brunches are held on a monthly basis during the academic year, with two additional brunches during the summer months. They are open to the entire MIT community and are funded by the Office of Graduate Education (OGE) and Division of Student Life (DSL).

Geeticka Chauhan, resident of Sidney-Pacific and a PhD student in electrical engineering and computer science, said the brunches are an important part of a graduate student’s residence life experience.

“They bring the whole graduate student community together,” Chauhan said. “We have students visiting from all other grad dorms, and sometimes off-campus grad students as well, so being able to bring them all together in a room is a very humbling experience.”
