MIT Latest News

MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.

Explosions of universe’s first stars spewed powerful jets

Wed, 05/08/2019 - 12:00am

Several hundred million years after the Big Bang, the very first stars flared into the universe as massively bright accumulations of hydrogen and helium gas. Within the cores of these first stars, extreme, thermonuclear reactions forged the first heavier elements, including carbon, iron, and zinc.

These first stars were likely immense, short-lived fireballs, and scientists have assumed that they exploded as similarly spherical supernovae.

But now astronomers at MIT and elsewhere have found that these first stars may have blown apart in a more powerful, asymmetric fashion, spewing forth jets that were violent enough to eject heavy elements into neighboring galaxies. These elements ultimately served as seeds for the second generation of stars, some of which can still be observed today.

In a paper published today in the Astrophysical Journal, the researchers report a strong abundance of zinc in HE 1327-2326, an ancient, surviving star that is among the universe’s second generation of stars. They believe the star could only have acquired such a large amount of zinc after an asymmetric explosion of one of the very first stars had enriched its birth gas cloud.

“When a star explodes, some proportion of that star gets sucked into a black hole like a vacuum cleaner,” says Anna Frebel, an associate professor of physics at MIT and a member of MIT’s Kavli Institute for Astrophysics and Space Research. “Only when you have some kind of mechanism, like a jet that can yank out material, can you observe that material later in a next-generation star. And we believe that’s exactly what could have happened here.”

“This is the first observational evidence that such an asymmetric supernova took place in the early universe,” adds MIT postdoc Rana Ezzeddine, the study’s lead author. “This changes our understanding of how the first stars exploded.”

“A sprinkle of elements”

HE 1327-2326 was discovered by Frebel in 2005. At the time, the star was the most metal-poor star ever observed, meaning that it had extremely low concentrations of elements heavier than hydrogen and helium — an indication that it formed as part of the second generation of stars, at a time when most of the universe’s heavy element content had yet to be forged.

“The first stars were so massive that they had to explode almost immediately,” Frebel says. “The smaller stars that formed as the second generation are still available today, and they preserve the early material left behind by these first stars. Our star has just a sprinkle of elements heavier than hydrogen and helium, so we know it must have formed as part of the second generation of stars.”

In May of 2016, the team was able to observe the star, which orbits relatively close to Earth, just 5,000 light years away. The researchers won two weeks of time on NASA’s Hubble Space Telescope and recorded the starlight over multiple orbits. They used an instrument aboard the telescope, the Cosmic Origins Spectrograph, to measure the minute abundances of various elements within the star.

The spectrograph is designed with high precision to pick up faint ultraviolet light. Some of those wavelengths are absorbed by certain elements, such as zinc. The researchers made a list of heavy elements that they suspected might be present in such an ancient star and that they planned to look for in the UV data, including silicon, iron, phosphorus, and zinc.

“I remember getting the data, and seeing this zinc line pop out, and we couldn’t believe it, so we redid the analysis again and again,” Ezzeddine recalls. “We found that, no matter how we measured it, we got this really strong abundance of zinc.”

A star channel opens

Frebel and Ezzeddine then contacted their collaborators in Japan, who specialize in developing simulations of supernovae and the secondary stars that form in their aftermath. The researchers ran over 10,000 simulations of supernovae, each with different explosion energies, configurations, and other parameters. They found that while most of the spherical supernova simulations were able to produce a secondary star with the elemental compositions the researchers observed in HE 1327-2326, none of them reproduced the zinc signal.
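
As a rough illustration of that kind of search, the sketch below sweeps explosion parameters and keeps only the combinations whose predicted abundances match the observed star within a tolerance. This is hypothetical code, not the collaborators' supernova models: the yield function and all numbers are placeholders.

```python
import itertools

# Illustrative placeholder values standing in for measured abundance ratios
# in HE 1327-2326; these are NOT the published measurements.
observed = {"Zn": 0.8, "Fe": -5.0, "C": 4.0}
tolerance = 0.3

def predicted_yields(energy, asymmetry, mixing):
    """Stand-in for a real supernova nucleosynthesis model.

    Returns the abundance ratios predicted for a second-generation star formed
    from gas enriched by a first-star supernova with these parameters.
    """
    return {
        "Zn": 0.1 + 0.5 * asymmetry + 0.02 * energy,
        "Fe": -5.5 + 0.3 * mixing,
        "C": 3.5 + 0.4 * mixing,
    }

matches = []
# Sweep explosion energy (arbitrary units), jet asymmetry, and mixing strength.
for energy, asymmetry, mixing in itertools.product(
    [0.5, 1, 5, 10, 30], [0.0, 0.5, 1.0], [0.1, 0.5, 1.0]
):
    pred = predicted_yields(energy, asymmetry, mixing)
    if all(abs(pred[el] - observed[el]) <= tolerance for el in observed):
        matches.append((energy, asymmetry, mixing))

print(f"{len(matches)} parameter sets reproduce the observed abundances")
```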

As it turns out, the only simulation that could explain the star’s makeup, including its high abundance of zinc, was one of an aspherical, jet-ejecting supernova of a first star. Such a supernova would have been extremely explosive, with a power equivalent to about a nonillion times (that’s a 1 followed by 30 zeroes) that of a hydrogen bomb.

“We found this first supernova was much more energetic than people have thought before, about five to 10 times more,” Ezzeddine says. “In fact, the previous idea of the existence of a dimmer supernova to explain the second-generation stars may soon need to be retired.”

The team’s results may shift scientists’ understanding of reionization, a pivotal period during which the gas in the universe morphed from being completely neutral, to ionized — a state that made it possible for galaxies to take shape.

“People thought from early observations that the first stars were not so bright or energetic, and so when they exploded, they wouldn’t participate much in reionizing the universe,” Frebel says. “We’re in some sense rectifying this picture and showing, maybe the first stars had enough oomph when they exploded, and maybe now they are strong contenders for contributing to reionization, and for wreaking havoc in their own little dwarf galaxies.”

These first supernovae could have also been powerful enough to shoot heavy elements into neighboring “virgin galaxies” that had yet to form any stars of their own.

“Once you have some heavy elements in a hydrogen and helium gas, you have a much easier time forming stars, especially little ones,” Frebel says. “The working hypothesis is, maybe second generation stars of this kind formed in these polluted virgin systems, and not in the same system as the supernova explosion itself, which is always what we had assumed, without thinking in any other way. So this is opening up a new channel for early star formation.”

This research was funded, in part, by the National Science Foundation.

Wireless movement-tracking system could collect health and behavioral data

Wed, 05/08/2019 - 12:00am

We live in a world of wireless signals flowing around us and bouncing off our bodies. MIT researchers are now leveraging those signal reflections to provide scientists and caregivers with valuable insights into people’s behavior and health.

The system, called Marko, transmits a low-power radio-frequency (RF) signal into an environment. The signal will return to the system with certain changes if it has bounced off a moving human. Novel algorithms then analyze those changed reflections and associate them with specific individuals.

The system then traces each individual’s movement around a digital floor plan. Matching these movement patterns with other data can provide insights about how people interact with each other and the environment.

In a paper being presented at the Conference on Human Factors in Computing Systems this week, the researchers describe the system and its real-world use in six locations: two assisted living facilities, three apartments inhabited by couples, and one townhouse with four residents. The case studies demonstrated the system’s ability to distinguish individuals based solely on wireless signals — and revealed some useful behavioral patterns.

In one assisted living facility, with permission from the patient’s family and caregivers, the researchers monitored a patient with dementia who would often become agitated for unknown reasons. Over a month, they measured the patient’s increased pacing between areas of their unit — a known sign of agitation. By matching increased pacing with the visitor log, they determined the patient was agitated more during the days following family visits. This shows Marko can provide a new, passive way to track functional health profiles of patients at home, the researchers say.

“These are interesting bits we discovered through data,” says first author Chen-Yu Hsu, a PhD student in the Computer Science and Artificial Intelligence Laboratory (CSAIL). “We live in a sea of wireless signals, and the way we move and walk around changes these reflections. We developed the system that listens to those reflections … to better understand people’s behavior and health.”

The research is led by Dina Katabi, the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science and director of the MIT Center for Wireless Networks and Mobile Computing (Wireless@MIT). Joining Katabi and Hsu on the paper are CSAIL graduate students Mingmin Zhao and Guang-He Lee and alumnus Rumen Hristov SM ’16.

Predicting “tracklets” and identities

When deployed in a home, Marko shoots out an RF signal. When the signal rebounds, it creates a type of heat map cut into vertical and horizontal “frames,” which indicates where people are in a three-dimensional space. People appear as bright blobs on the map. Vertical frames capture the person’s height and build, while horizontal frames determine their general location. As individuals walk, the system analyzes the RF frames — about 30 per second — to generate short trajectories, called tracklets.
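
The tracklet step can be pictured with a short sketch. The Python below is a hypothetical illustration that assumes each RF frame has already been reduced to blob centroids on the floor plan; the frame rate, distance threshold, and greedy nearest-neighbor linking are illustrative choices, not the paper's actual algorithm.

```python
import numpy as np

FRAME_RATE_HZ = 30     # roughly the per-second RF frame rate described above
MAX_STEP_M = 0.5       # assumed largest plausible per-frame displacement

def link_tracklets(frames):
    """Greedily link per-frame blob centroids into short trajectories.

    frames: list of arrays, one per RF frame, each of shape (num_blobs, 2)
            holding (x, y) floor-plan positions in meters.
    Returns a list of tracklets, each a list of (frame_index, x, y) tuples.
    """
    tracklets = []
    active = []  # tracklets that may still be extended
    for t, blobs in enumerate(frames):
        unmatched = list(range(len(blobs)))
        for trk in active:
            if not unmatched:
                break
            _, x_prev, y_prev = trk[-1]
            # Pick the closest unclaimed blob in this frame.
            dists = [np.hypot(blobs[i][0] - x_prev, blobs[i][1] - y_prev)
                     for i in unmatched]
            j = int(np.argmin(dists))
            if dists[j] <= MAX_STEP_M:
                i = unmatched.pop(j)
                trk.append((t, blobs[i][0], blobs[i][1]))
        # Blobs left unclaimed start new tracklets.
        for i in unmatched:
            new_trk = [(t, blobs[i][0], blobs[i][1])]
            active.append(new_trk)
            tracklets.append(new_trk)
    return tracklets
```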

A convolutional neural network — a machine-learning model commonly used for image processing — uses those tracklets to separate reflections by certain individuals. For each individual it senses, the system creates two “filtering masks,” which are small circles around the individual. These masks basically filter out all signals outside the circle, which locks in the individual’s trajectory and height as they move. Combining all this information — height, build, and movement — the network associates specific RF reflections with specific individuals.
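
One way to picture the filtering-mask idea, assuming the horizontal frame is a 2-D heat map over the floor plan: zero out every cell farther than a small radius from a tracked person before attributing the remaining reflections to that individual. The grid resolution and radius below are invented for illustration.

```python
import numpy as np

def circular_mask(frame, center_xy, radius_m, cell_size_m=0.1):
    """Zero out all heat-map cells farther than radius_m from center_xy.

    frame: 2-D array of RF reflection power over the floor plan.
    center_xy: (x, y) position of the tracked person, in meters.
    """
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Convert grid indices to floor-plan coordinates before measuring distance.
    dist = np.hypot(xs * cell_size_m - center_xy[0],
                    ys * cell_size_m - center_xy[1])
    return np.where(dist <= radius_m, frame, 0.0)

# Example: keep only reflections within 0.6 m of a person near (2.0 m, 3.5 m).
frame = np.random.rand(80, 60)  # stand-in horizontal RF frame
masked = circular_mask(frame, (2.0, 3.5), radius_m=0.6)
```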

But to tag identities to those anonymous blobs, the system must first be “trained.” For a few days, individuals wear low-powered accelerometer sensors, which can be used to label the reflected radio signals with their respective identities. When deployed in training, Marko first generates users’ tracklets, as it does in practice. Then, an algorithm correlates certain acceleration features with motion features. When users walk, for instance, the acceleration oscillates with steps, but becomes a flat line when they stop. The algorithm finds the best match between the acceleration data and tracklet, and labels that tracklet with the user’s identity. In doing so, Marko learns which reflected signals correlate to specific identities.
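
A rough sketch of how this training-time labeling could work, under the behavior described above (acceleration oscillates while a wearer walks and flattens when they stop): derive a binary moving/still signal from each accelerometer and from each tracklet, then assign each tracklet the identity whose signal correlates best. The features and thresholds here are illustrative assumptions, not the authors' exact method.

```python
import numpy as np

def is_moving_from_accel(accel, window=30, threshold=0.05):
    """Binary moving/still signal from the variance of acceleration magnitude."""
    mag = np.linalg.norm(accel, axis=1)
    var = np.array([mag[max(0, i - window):i + 1].var() for i in range(len(mag))])
    return (var > threshold).astype(float)

def is_moving_from_tracklet(positions, threshold=0.02):
    """Binary moving/still signal from per-frame displacement of a tracklet."""
    step = np.linalg.norm(np.diff(positions, axis=0), axis=1)
    step = np.concatenate([[0.0], step])
    return (step > threshold).astype(float)

def label_tracklets(accel_by_user, tracklets):
    """Assign each tracklet the identity whose motion pattern matches it best."""
    labels = {}
    for trk_id, positions in tracklets.items():
        trk_signal = is_moving_from_tracklet(positions)
        best_user, best_score = None, -np.inf
        for user, accel in accel_by_user.items():
            acc_signal = is_moving_from_accel(accel)
            n = min(len(acc_signal), len(trk_signal))  # assume roughly aligned clocks
            score = np.corrcoef(acc_signal[:n], trk_signal[:n])[0, 1]
            if score > best_score:
                best_user, best_score = user, score
        labels[trk_id] = best_user
    return labels
```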

The sensors never have to be charged, and, after training, the individuals don’t need to wear them again. In home deployments, Marko was able to tag the identities of individuals in new homes with between 85 and 95 percent accuracy.

Striking a good (data-collection) balance

The researchers hope health care facilities will use Marko to passively monitor, say, how patients interact with family and caregivers, and whether patients receive medications on time. In an assisted living facility, for instance, the researchers noted specific times a nurse would walk to a medicine cabinet in a patient’s room and then to the patient’s bed. That indicated that the nurse had, at those specific times, administered the patient’s medication.

The system may also replace questionnaires and diaries currently used by psychologists or behavioral scientists to capture data on their study subjects’ family dynamics, daily schedules, or sleeping patterns, among other behaviors. Those traditional recording methods can be inaccurate, contain bias, and aren’t well-suited for long-term studies, where people may have to recall what they did days or weeks ago. Some researchers have started equipping people with wearable sensors to monitor movement and biometrics. But elderly patients, especially, often forget to wear or charge them. “The motivation here is to design better tools for researchers,” Hsu says.

Why not just install cameras? For starters, this would require someone watching and manually recording all necessary information. Marko, on the other hand, automatically tags behavioral patterns — such as motion, sleep, and interaction — to specific areas, days, and times.

Also, video is just more invasive, Hsu adds: “Most people aren’t that comfortable with being filmed all the time, especially in their own home. Using radio signals to do all this work strikes a good balance between getting some level of helpful information, but not making people feel uncomfortable.”

Katabi and her students also plan to combine Marko with their prior work on inferring breathing and heart rate from the surrounding radio signals. Marko will then be used to associate those biometrics with the corresponding individuals. It could also track people’s walking speeds, which is a good indicator of functional health in elderly patients.

“The potential here is immense,” says Cecilia Mascolo, a professor of mobile systems in the Department of Computer Science and Technology at Cambridge University. “With respect to imaging through cameras, it offers a less data-rich and more targeted model of collecting information, which is very welcome from the user privacy perspective. The data collected, however, is still very rich, and the paper evaluation shows accuracy which can enable a number of very useful applications, for example in elderly care, medical adherence monitoring, or even hospital care.”

“Yet, as a community, we need to be aware of the privacy risks that this type of technology brings,” Mascolo adds. Certain computation techniques, she says, should be considered to ensure the data remains private.

Cultural curator

Tue, 05/07/2019 - 11:59pm

Rekha Malhotra joined MIT’s Comparative Media Studies program as a master’s student after 20 years as a flourishing New York City DJ. She has also accrued major accolades for other artistic endeavors: She was the sound designer for a Tony Award-winning Broadway show and a New York University artist in residence, and she has been inducted into People’s Hall of Fame in New York City.

All of these laurels arose from Basement Bhangra, a wildly popular monthly club night that Malhotra began in 1997. The show mixed traditional Punjabi dance music, called bhangra, with old-school hip hop, a fusion Malhotra helped to bring from the U.K. to the U.S. in the 1990s.

At the time, she says, many club owners discouraged or outright banned the genres because South Asian producers didn’t want to hear black music or the “lower-class” bhangra. But Malhotra was undeterred. “I love these two styles of music, and I didn’t want to water it down. I didn’t want anybody to tell me what I can and can’t play,” she explains. Her perseverance paid off: Since then, Malhotra has DJ’ed everywhere from celebrity weddings to the Obama White House to the historic Women’s March on Washington in 2017.

“You always open”

Not only a musical artist, Malhotra is also an activist at her core. She was a founding member of the South Asian Youth Organization in 1996. In college, she was part of a South Asian political rights organization that was formed in response to racial violence in Jersey City, in which one person was killed and one left for dead at a fire station. None of the accused were convicted. The experience politicized her.

For Malhotra, blending bhangra and hip hop was always about more than just producing innovative mixes. In creating her club nights, she also intended to create a space for her audience — and by extension, to support a community of South Asians, dancers, and community activists. She was particularly galvanized by the terrorist attacks on Sept. 11, 2001.

“9/11 was a very significant moment in New York,” Malhotra says. “People of Middle Eastern and South Asian descent were real targets. Nine days after 9/11 we had a party on the books and I really had to think about DJ’ing when there was a collective mourning in the city. And the venue was only a mile from [the World Trade Center]. This neighborhood had finally opened again, and the question was ‘Do we open?’ And the answer is: ‘Yes.’ You always open. … Fundamental to my work is not just playing music, but creating a space to play the music.” Malhotra sees herself as not just a DJ, but as a cultural curator, because of how her activism intersects with her performances.

Ultimately, it was this community work that brought Malhotra to MIT. She had heard about the CMS program from professors and graduates whom she met in New York. At the same time, Malhotra found that she was craving more intellectual engagement with her work.

“Definitely in the pace and the hustle of New York there wasn’t always time to think,” she recalls. “I wanted to reflect on the work I was doing, to gain the qualifications to eventually teach more, to gain an opportunity to write critically and reflect, and also to be in a community of other people who are also thinking and writing and engaged.”

Innovation over tradition

At MIT, Malhotra works with Associate Professor Vivek Bald in the Open Documentary Lab on the Lost Histories Project. The first year of the CMS program is highly structured with coursework, colloquia, and lectures, but in the second year, students are encouraged to sample many of the intellectual resources at MIT and Harvard University. Malhotra relishes the flexibility of the program and has taken full advantage of the broad array of available coursework, including 21M.361 (Electronic Music Composition), 11.S948 (Writing About the Modern City), MAS.S62 (Principles of Awareness), and Harvard’s WOMGEN 1212 (Beyoncé Feminism and Rihanna Womanism).

She also appreciates the diversity of the students and faculty in CMS. “I feel like I’ve been able to be myself here,” she says. “And I think that the uniqueness of our program is that there are so many different kinds of people. … We’ve got filmmakers and gamers and scholars and anthropologists. And our professors have so many different interests and backgrounds. They’re in the world and in their academic space too. It’s such a rich community of people.”

As her June graduation nears, Malhotra is working on her thesis, which examines the mythologies around DJ’ing as a cultural practice. She’s weaving in ideas about the physical practices of DJ’ing, gender in DJ’ing, and the concept of authenticity and tradition in club music. “There’s a certain sense of ubiquity around DJ’ing, but what do we really understand about it and how is it actually practiced?” she says. “Is it about cutting and scratching? Is that really how people perform or consume music? It’s one technique and it’s one small part of the spectrum of DJ’ing, and that’s turntablism, which is very specific. A scratch interrupts the flow, but it’s demonstrative. I’m interested in that.”

As an artist who melds the strong cultural touchstones of bhangra and hip hop music, Malhotra also contends with traditionalists. “Once you introduce recording, how does the medium change the art — or does it?” she says. “For any style of culture, there’s often someone saying that it’s being morphed into something that’s not original. But the nature of culture is to keep changing — according to me.” She pauses, adding, “I try to go from a more aesthetic place: Does it sound good? Will it make people dance? That’s my guiding principle. I don’t have any hang-ups around what’s traditional.”

New York, New York

Though a die-hard New Yorker, Malhotra has a fond appreciation for her temporary home in Cambridge. “I try to get immersed in the state of mind and where I’m living. I try to follow local happenings and newsletters. I feel like it’s important to know about the community you’re in. Cambridge really cares about itself.” She smiles. “So much so, that you can’t park a car here! Yes, I’m a grumpy New Yorker and I’d like things to stay open later, but it’s been manageable.”

Luckily, Malhotra can commiserate with several friends from her New York South Asian activist and artist community who are also pursuing work at Harvard and MIT. She also attends open mic nights organized by SubDrift, a community of Boston-based South Asians. Although she has focused deeply on her academic work while at MIT, she’s made some time to DJ in Boston at the Museum of Fine Arts Late Night, MIT Sloan, and an all-ages party called Local Beats in Somerville.

After graduation, Malhotra plans to continue the bhangra music podcast she began in 2011, as well as her DJ gigs. She will also attend a Feet in 2 Worlds audio workshop called “Telling Immigrant Food Stories,” for which she was awarded a scholarship. She looks forward to returning to her beloved Jackson Heights neighborhood in Queens, where she has a place on the board of Chhaya CDC, a community organization supporting New Yorkers of South Asian origin. But she is also embracing any new opportunities that come her way.

“The world is open in some ways, but I want to be more intentional and think about what I want to do in the world,” Malhotra says. “I’m in a great space of privilege in having an art career and now having this educational experience. Coming [to MIT] has definitely opened doors in opportunities and in my way of thinking.”

2019 Summer Scholars look forward to MRL lab experience

Tue, 05/07/2019 - 12:45pm

A diverse group with a broad range of personal and scientific interests and experiences, this year’s 10 MIT Materials Research Laboratory Summer Scholars include a former U.S. Navy SEAL, an accomplished classical pianist, and a voice actor. Each was selected for a strong undergraduate record in science and technology.

The Summer Scholars, as MRL calls its National Science Foundation-funded Research Experience for Undergraduates interns, will be on the MIT campus from June 16-Aug. 10. They were chosen from among 286 applicants.

“I was a Navy SEAL for nine years, in which time I was deployed to Iraq and Afghanistan, as well as serving as a mountaineering instructor in Kodiak, Alaska,” says University of Washington junior Chris Moore. While in Alaska, Moore and two fellow SEAL instructors planned and executed an expedition to the summit of Denali (formerly Mount McKinley). 

Clement N. Ekaputra, a Case Western Reserve University junior, plays classical piano and recently performed a concerto as a soloist with the University of Pittsburgh Symphony Orchestra.

When she isn’t pursuing her scientific education, Hunter College physics major Ariane Marchese is a voice actress who volunteers to narrate audiobooks for schools.

Eager learners

While seeking a sharper focus for graduate school research is a common theme for Summer Scholars, this year’s participants are eager learners willing to stretch into new topics and experimental techniques. “I’m really excited to learn from MIT Materials Research Lab faculty and the other talented and diverse interns I’ll be working with,” Marchese says.

University of Puerto Rico at Mayaguez mechanical engineering major Marcos A. Logrono Lopez hopes to pursue research at MIT in the area of microfluidics. “My goal is to understand the behaviors that dominate fluids at the micro scale and implement them into new innovative technologies, such as micropropulsion and microelectromechanical systems,” he says.

“I’m certain that no matter the project I’m assigned to in this internship, I will work passionately and be motivated with the goal of pushing forward the research that takes place at MIT,” Logrono says. “Positivism, humbleness, hard work, respectfulness, and passion for helping others are the fundamental bases of who I am as a person,” he adds.

University of California at Los Angeles junior materials science and engineering major Isabel Albelo hopes the REU experience “will provide me with further clarity as to what I would like to study in graduate school and the field in which I would like to work.” She is currently interested in sustainability, either in the areas of agriculture and food science or renewable energy generation and storage. During the first half of 2018, Albelo studied abroad in Chile despite the difficulty of fitting that experience into an engineering curriculum.

Case Western Reserve University junior Nathan Ewell is most interested in electrochemical engineering and polymer physics. “I am excited to get a feel for what my life will be like as a graduate student in a few years,” he says.

Also interested in polymers and nanomaterials, University of Massachusetts at Amherst chemical engineering major Jared Bowden hopes to work with bioinspired materials. “I am very interested in emulating extremely specialized natural polymers perfected by millions of years of natural selection and applying the benefits of their properties to modern problems,” Bowden says. “I hope to learn new things that I can bring back with me to UMass that will help me in my nanofiber research for my senior thesis.”

Moore, a physics and astronomy major, hopes to conduct optical experimental research in condensed matter, specifically topological defects. “I find the field fascinating, both conceptually and experimentally,” Moore says. “Much of what appeals to me about the research at MIT is how often it creates and broadens new fields of research. This is reflective of the clear experimental direction that I hope to pick up during this experience.”

Melvin Núñez Santiago is majoring in electrical technology with renewable energy at the University Ana G. Mendez at Gurabo in Puerto Rico. Núñez hopes to channel his passion for research and technology development into a summer project related to electronics, power, communications, or energy storage. Marchese, a junior at Hunter College, also expresses interest in energy production and storage, but is interested in all aspects of materials science.

Improving their research and analytical skills is a common goal of this year’s cohort. “By working full-time on a research project with them, I know I will learn a lot about conducting research — about discovering interesting questions and designing methods to solve them,” says Ekaputra, a Case Western Reserve materials science and engineering major.

Dartmouth College junior Carly Tymm says, “I would like to take on a multidisciplinary project at MIT with perspectives from synthetic chemistry, surface science, and bioengineering in the design, synthesis, and analysis of biomaterials. There are many macromolecular solutions to challenges in medicinal materials science that I would like to investigate deeper.” Tymm is a double major in chemistry and biomedical engineering sciences.

Regional explorations

Northwestern University junior materials science and engineering major Leah Borgsmiller will be experiencing Massachusetts for the first time, “so I am excited to spend evenings and weekends exploring the Cambridge/Boston area,” she says. She hopes the intensive eight-week program will help her form long-lasting connections to her peers, as well as MIT faculty.

“In this modern world, we are increasingly more dependent on electronics and energy consumption to power our lives, and so being able to contribute to research to make these processes more efficient and environmentally-friendly would be a rewarding experience,” Borgsmiller says.

Tissue chip headed to International Space Station for osteoarthritis study

Tue, 05/07/2019 - 12:00pm

On May 4, a National Center for Advancing Translational Sciences (NCATS)-supported tissue-chip system with direct clinical applications to health conditions here on Earth was launched on the SpaceX CRS 17/Falcon 9 rocket.

Hundreds of millions of people worldwide suffer from osteoarthritis (OA), and there are currently no disease-modifying drugs that can halt or reverse the progression of OA — only painkillers for short-term symptomatic relief. Millions of healthy young to middle-aged individuals develop post-traumatic osteoarthritis (PTOA) as a result of a traumatic joint injury, like a tear of the anterior cruciate ligament or meniscus, especially in young women playing sports. Exercise-related injuries are also said to be frequent sources of joint injury for crew members living aboard the International Space Station (ISS), and pre-existing joint injuries may also affect astronaut performance in space. These conditions are compounded and worsened by exposure of crew members to weightlessness and radiation on the ISS.

After a traumatic joint injury, there is an immediate upregulation of inflammatory proteins called cytokines in the joint synovial fluid, proteins which are secreted mainly by cells in the joint’s synovial lining. When mechanical trauma to cartilage caused by the initial injury is accompanied by cytokine penetration into cartilage, degradation of cartilage and subchondral bone over weeks and months often progresses to full-blown, painful PTOA in 10-15 years.

To study PTOA on Earth and in space, investigators at MIT have developed a cartilage-bone-synovium micro-physiological system in which primary human cartilage, bone, and synovium tissues (obtained from donor banks) are co-cultured for several weeks. During culture, investigators can monitor intracellular and extracellular biomarkers of disease using quantitative experimental and computational metabolomics and proteomics analyses, along with detection of disease-specific fragments of tissue matrix molecules. In addition, this co-culture system allows investigators to test the effects of potential disease-modifying drugs to prevent cartilage and bone loss on Earth and in space.

Experiments aboard the ISS utilize a Multi-purpose Variable-G Platform, made by Techshot Inc., to study the effects of microgravity and ionizing radiation on a knee tissue chip prepared using cartilage-bone-synovium tissues secured on a biocompatible material. The platform enables automated nutrient media transfer and collection for test conditions with and without disease-modifying drugs, including tests using a one-gravity control system.

These investigations on Earth and in the ISS have the potential to lead to the discovery of treatments and treatment regimens that, if administered immediately after a joint injury, could halt the progression of OA disease before it becomes irreversible. The goal is to treat the root cause of PTOA and prevent permanent joint damage, rather than mask the symptoms with painkillers later in life, as is currently done. These studies are funded by the NIH National Center for Advancing Translational Sciences and the ISS-National Lab.

Why visual stimulation may work against Alzheimer’s

Tue, 05/07/2019 - 11:00am

Several years ago, MIT neuroscientists showed that they could dramatically reduce the amyloid plaques seen in mice with Alzheimer’s disease simply by exposing the animals to light flickering at a specific frequency.

In a new study, the researchers have found that this treatment has widespread effects at the cellular level, and it helps not just neurons but also immune cells called microglia. Overall, these effects reduce inflammation, enhance synaptic function, and protect against cell death, in mice that are genetically programmed to develop Alzheimer’s disease.

“It seems that neurodegeneration is largely prevented,” says Li-Huei Tsai, the director of MIT’s Picower Institute for Learning and Memory and the senior author of the study.

The researchers also found that the flickering light boosted cognitive function in the mice, which performed much better on tests of spatial memory than untreated mice did. The treatment also produced beneficial effects on spatial memory in older, healthy mice.

Chinnakkaruppan Adaikkan, an MIT postdoc, is the lead author of the study, which appears online in Neuron on May 7.

Beneficial brain waves

Tsai’s original study on the effects of flickering light showed that visual stimulation at a frequency of 40 hertz (cycles per second) induces brain waves known as gamma oscillations in the visual cortex. These brain waves are believed to contribute to normal brain functions such as attention and memory, and previous studies have suggested that they are impaired in Alzheimer’s patients.

Tsai and her colleagues later found that combining the flickering light with sound stimuli — 40-hertz tones — reduced plaques even further and also had farther-reaching effects, extending to the hippocampus and parts of the prefrontal cortex. The researchers have also found cognitive benefits from both the light- and sound-induced gamma oscillations. 

In their new study, the researchers wanted to delve deeper into how these beneficial effects arise. They focused on two different strains of mice that are genetically programmed to develop Alzheimer’s symptoms. One, known as Tau P301S, has a mutated version of the Tau protein, which forms neurofibrillary tangles like those seen in Alzheimer’s patients. The other, known as CK-p25, can be induced to produce a protein called p25, which causes severe neurodegeneration. Both of these models show much greater neuron loss than the model they used for the original light flickering study, Tsai says.

The researchers found that visual stimulation, given one hour a day for three to six weeks, had dramatic effects on neuron degeneration. They started the treatments shortly before degeneration would have been expected to begin, in both types of Alzheimer’s models. After three weeks of treatment, Tau P301S mice showed no neuronal degeneration, while the untreated Tau P301S mice had lost 15 to 20 percent of their neurons. Neurodegeneration was also prevented in the CK-p25 mice, which were treated for six weeks.

“I have been working with p25 protein for over 20 years, and I know this is a very neurotoxic protein. We found that the p25 transgene expression levels are exactly the same in treated and untreated mice, but there is no neurodegeneration in the treated mice,” Tsai says. “I haven’t seen anything like that. It’s very shocking.”

The researchers also found that the treated mice performed better in a test of spatial memory called the Morris water maze. Intriguingly, they also found that the treatment improved performance in older mice that did not have a predisposition for Alzheimer’s disease, but not young, healthy mice.

Genetic changes

To try to figure out what was happening at a cellular level, the researchers analyzed the changes in gene expression that occurred in treated and untreated mice, in both neurons and microglia — immune cells that are responsible for clearing debris from the brain.

In the neurons of untreated mice, the researchers saw a drop in the expression of genes associated with DNA repair, synaptic function, and a cellular process called vesicle trafficking, which is important for synapses to function correctly. However, the treated mice showed much higher expression of those genes than the untreated mice. The researchers also found higher numbers of synapses in the treated mice, as well as a greater degree of coherence (a measure of brain wave synchrony between different parts of the brain).

In their analysis of microglia, the researchers found that cells in untreated mice turned up their expression of inflammation-promoting genes, but the treated mice showed a striking decrease in those genes, along with a boost of genes associated with motility. This suggests that in the treated mice, microglia may be doing a better job of fighting off inflammation  and clearing out molecules that could lead to the formation of amyloid plaques and neurofibrillary tangles, the researchers say. They also found lower levels of the version of the Tau protein that tends to form tangles.

A key unanswered question, which the researchers are now investigating, is how gamma oscillations trigger all of these protective measures, Tsai says.

“A lot of people have been asking me whether the microglia are the most important cell type in this beneficial effect, but to be honest, we really don’t know,” she says. “After all, oscillations are initiated by neurons, and I still like to think that they are the master regulators. I think the oscillation itself must trigger some intracellular events, right inside neurons, and somehow they are protected.”

The researchers also plan to test the treatment in mice with more advanced symptoms, to see if neuronal degeneration can be reversed after it begins. They have also begun phase 1 clinical trials of light and sound stimulation in human patients.

The research was funded by the National Institutes of Health, the Halis Family Foundation, the JPB Foundation, and the Robert A. and Renee E. Belfer Family Foundation.

Using AI to predict breast cancer and personalize care

Tue, 05/07/2019 - 10:00am

Despite major advances in genetics and modern imaging, the diagnosis catches most breast cancer patients by surprise. For some, it comes too late. Later diagnosis means aggressive treatments, uncertain outcomes, and more medical expenses. As a result, identifying patients at risk early has been a central pillar of breast cancer research and effective early detection.

With that in mind, a team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and Massachusetts General Hospital (MGH) has created a new deep-learning model that can predict from a mammogram if a patient is likely to develop breast cancer as much as five years in the future. Trained on mammograms and known outcomes from over 60,000 MGH patients, the model learned the subtle patterns in breast tissue that are precursors to malignant tumors.

MIT Professor Regina Barzilay, herself a breast cancer survivor, says that the hope is for systems like these to enable doctors to customize screening and prevention programs at the individual level, making late diagnosis a relic of the past.

Although mammography has been shown to reduce breast cancer mortality, there is continued debate on how often to screen and when to start. While the American Cancer Society recommends annual screening starting at age 45, the U.S. Preventive Services Task Force recommends screening every two years starting at age 50.

“Rather than taking a one-size-fits-all approach, we can personalize screening around a woman’s risk of developing cancer,” says Barzilay, senior author of a new paper about the project out today in Radiology. “For example, a doctor might recommend that one group of women get a mammogram every other year, while another higher-risk group might get supplemental MRI screening.” Barzilay is the Delta Electronics Professor at CSAIL and the Department of Electrical Engineering and Computer Science at MIT and a member of the Koch Institute for Integrative Cancer Research at MIT.

The team’s model was significantly better at predicting risk than existing approaches: It accurately placed 31 percent of all cancer patients in its highest-risk category, compared to only 18 percent for traditional models.

Harvard Professor Constance Lehman says that there’s previously been minimal support in the medical community for screening strategies that are risk-based rather than age-based.

“This is because before we did not have accurate risk assessment tools that worked for individual women,” says Lehman, a professor of radiology at Harvard Medical School and division chief of breast imaging at MGH. “Our work is the first to show that it’s possible.”  

Barzilay and Lehman co-wrote the paper with lead author Adam Yala, a CSAIL PhD student. Other MIT co-authors include PhD student Tal Schuster and former master’s student Tally Portnoi.

How it works

Since the first breast-cancer risk model from 1989, development has largely been driven by human knowledge and intuition of what major risk factors might be, such as age, family history of breast and ovarian cancer, hormonal and reproductive factors, and breast density.

However, most of these markers are only weakly correlated with breast cancer. As a result, such models still aren’t very accurate at the individual level, and many organizations continue to feel risk-based screening programs are not possible, given those limitations.

Rather than manually identifying the patterns in a mammogram that drive future cancer, the MIT/MGH team trained a deep-learning model to deduce the patterns directly from the data. Using information from more than 90,000 mammograms, the model detected patterns too subtle for the human eye to detect.
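
To make the general setup concrete, here is a minimal, hypothetical sketch (in PyTorch) of an image-to-risk model of this kind: a small convolutional network maps a mammogram to a probability of developing cancer within five years and is trained against known outcomes with a binary cross-entropy loss. The MIT/MGH model's actual architecture, preprocessing, and training regimen are not reproduced here.

```python
import torch
import torch.nn as nn

class RiskCNN(nn.Module):
    """Toy image-to-risk model: mammogram in, five-year cancer probability out."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, 1)

    def forward(self, x):
        h = self.features(x).flatten(1)
        return torch.sigmoid(self.classifier(h))  # predicted five-year risk

model = RiskCNN()
loss_fn = nn.BCELoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on fake data: a batch of 8 single-channel
# images with binary labels (1 = developed cancer within five years).
images = torch.randn(8, 1, 256, 256)
labels = torch.randint(0, 2, (8, 1)).float()

optimizer.zero_grad()
loss = loss_fn(model(images), labels)
loss.backward()
optimizer.step()
```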

“Since the 1960s radiologists have noticed that women have unique and widely variable patterns of breast tissue visible on the mammogram,” says Lehman. “These patterns can represent the influence of genetics, hormones, pregnancy, lactation, diet, weight loss, and weight gain. We can now leverage this detailed information to be more precise in our risk assessment at the individual level.”  

Making cancer detection more equitable

The project also aims to make risk assessment more accurate for racial minorities, in particular. Many early models were developed on white populations, and were much less accurate for other races. The MIT/MGH model, meanwhile, is equally accurate for white and black women. This is especially important given that black women have been shown to be 42 percent more likely to die from breast cancer due to a wide range of factors that may include differences in detection and access to health care.

“It’s particularly striking that the model performs equally as well for white and black people, which has not been the case with prior tools,” says Allison Kurian, an associate professor of medicine and health research/policy at Stanford University School of Medicine. “If validated and made available for widespread use, this could really improve on our current strategies to estimate risk.”

Barzilay says their system could also one day enable doctors to use mammograms to see if patients are at a greater risk for other health problems, like cardiovascular disease or other cancers. The researchers are eager to apply the models to other diseases and ailments, and especially those with less effective risk models, like pancreatic cancer.

“Our goal is to make these advancements a part of the standard of care,” says Yala. “By predicting who will develop cancer in the future, we can hopefully save lives and catch cancer before symptoms ever arise.”

3Q: Susan Hockfield on a new age of living machines

Tue, 05/07/2019 - 10:00am

What if viruses could build batteries with almost no toxic waste? What if a protein common to almost every organism on Earth could purify drinking water at a large scale? What if a nanoparticle-based urine test could detect the early signals of cancer? What if machine learning and other advanced computing methods could engineer higher crop yields? Such biotechnologies may sound like the province of science fiction, but are in fact just over the scientific horizon.

In "The Age of Living Machines," a book published this week by W.W. Norton and Co., MIT President Emerita Susan Hockfield offers a glimpse into a possible future driven by a new convergence of biology and engineering. She describes how researchers from many disciplines, at MIT and elsewhere, are transforming elements of the natural world, such as proteins, viruses, and biological signaling pathways, into “living” solutions for some of the most important — and challenging — needs of the 21st century.

Q: What are living machines?

A: Thanks to the emergence and expansion of the fields of molecular biology and genetics, we are amassing an ever-growing understanding of nature’s genius — the exquisitely adapted molecular and genetic machinery cells use to accomplish a multitude of purposes. I believe we are on the brink of a convergence revolution, where engineers and physical scientists are recognizing how we can use this biological “parts list” to adapt these natural machines to our own uses.

We can already see this revolution at work. In the late 1980s, Peter Agre, a physician-scientist at the Johns Hopkins University Medical Center, found an unknown protein that contaminated his every attempt to isolate the Rh protein from red blood cells. Intrigued by this mysterious interloper, he persevered until he revealed its function and structure. The protein, which he named “aquaporin,” turned out to be an essential piece of the cell’s apparatus for maintaining the right balance of water inside and outside of the cell. Its structure is superbly adapted to let water molecules — and only water molecules — pass through in large numbers with remarkable efficiency and speed.

The discovery of aquaporin transformed our understanding of the fundamental biology of cells, and thanks to the insight of Agre’s biophysicist colleagues, it may also transform our ability to purify drinking water at a large scale. With the launch of the company Aquaporin A/S in 2005, engineers, chemists, and biologists are translating this molecular machine into working water purification systems, now in people’s sinks and even, in 2015, in space, recycling drinking water for Danish astronauts.

Q: Why do we need living machines?

A: We are facing an existential crisis. The anticipated global population of more than 9.7 billion by 2050 poses daunting challenges for providing sufficient energy, food, and water, as well as health care that is more accurate and less costly. These challenges are enormous in scale and complexity, and we will need to take equally enormous leaps in our imagination to meet them successfully.

But I am optimistic. Innovations like those inspired by the structure of aquaporin or the viruses that MIT materials scientist and biological engineer Angela Belcher is adapting to build more powerful, smaller batteries with cleaner, more efficient energy storage, demonstrate just how bold we can be. And yet I think the true promise of living machines lies in what we haven’t imagined yet.

In 1937, MIT President Karl Taylor Compton wrote a delightful essay called “The Electron: Its Intellectual and Social Significance” to celebrate the 40th anniversary of the discovery of the electron. Compton wrote that the electron was “the most versatile tool ever utilized,” having already resulted in seemingly magical technologies, such as radio, long-distance telephone calls, and soundtracks for movies. But Compton also recognized — accurately — that we had not even begun to realize the impact of its discovery.

In the decades that followed, the atomic parts list discovered by physicists sparked a first convergence revolution, bringing us radar, television, computers, and the internet, just to start. Neither Compton nor anyone else could fully imagine the breadth of innovations to come or how radically our conception of what is possible would be altered. We can’t predict the transformations that “Convergence 2.0” will bring any more than Compton could predict the internet in 1937. But we can see clearly from the first convergence revolution that if we’re willing to throw open the doors of innovation, world-changing ideas will walk through.

Q: How do we ensure that these doors remain open?

A: The convergence revolution is happening all around us, but its success is not inevitable. For it to succeed at the maximum pace with maximum impact, biologists and engineers, along with clinicians, physicists, computational scientists, and others, need to be able to move across disciplines with shared ambition. This will require us to reorganize our thinking and our funding.

The organization of universities into departments serves us well in a number of ways, but it sometimes leads to disciplinary boundaries that can be quite difficult to cross. Interdisciplinary labs and centers can serve as reaction vessels that catalyze new approaches to research. Models for this abound at MIT. For example, soon after chemical engineer Paula Hammond joined MIT’s Koch Institute for Integrative Cancer Research, she found a new use for the layer-by-layer fabrication of nanomaterials she pioneered for energy storage devices. With the expertise of physician and molecular biologist Michael Yaffe, Hammond used that same layering method to produce nanoparticles that deliver a one-two punch of different anti-cancer drugs carefully timed to increase their effectiveness.

Our biggest sources of funding likewise constrain cross-disciplinary efforts, with the National Institutes of Health, the National Science Foundation, and the departments of Energy and Defense all investing in research along disciplinary lines. Increased experimentation with cross-disciplinary and cross-agency funding initiatives could help break down those barriers. We have already seen what such funding models can do. The Human Genome Project — which brought together biologists, computer scientists, chemists, and technologists with funding primarily from U.S.- and U.K.-based agencies — did not just give us the first map of the human genome, but paved the way for tools that allow us to study cells and diseases at entirely new scales of depth and breadth.

But ultimately, we need to renew a shared national commitment to developing new ideas. This July, we will celebrate the 50th anniversary of the Apollo 11 lunar landing. While some might argue that it offered no real benefit, it produced enormous technological gains. We should recall that the technological feat of putting men on the moon and returning them to Earth was accomplished during a time of profound social disruption. Besides providing a focus for our shared ambitions and hopes, the drive to put astronauts on the moon also led to an amazing acceleration of technology in numerous areas including computing, nanotechnology, transportation, aeronautics, and health care. History shows us we need to be willing to make these great leaps, without necessarily knowing where they will take us. Convergence 2.0, the convergence of biology with engineering and the physical sciences, offers a new model for invention, for collaboration, and for shared ambition to solve some of the most pressing problems of this century.

Ocean activity is key controller of summer monsoons

Tue, 05/07/2019 - 12:00am

Each summer, a climatic shift brings persistent wind and rain to much of Southeast Asia, in the form of a seasonal monsoon. The general cause of the monsoon is understood to be an increasing temperature difference between the warming land and the comparatively cool ocean. But for the most part, the strength and timing of the monsoon, on which millions of farmers depend each year, are incredibly difficult to predict.

Now MIT scientists have found that an interplay between atmospheric winds and the ocean waters south of India has a major influence over the strength and timing of the South Asian monsoon.

Their results, published today in the Journal of Climate, show that as the summertime sun heats up the Indian subcontinent, it also kicks up strong winds that sweep across the Indian Ocean and up over the South Asian land mass. As these winds drive northward, they also push ocean waters southward, much like a runner pushing against a treadmill’s belt. The researchers found these south-flowing waters act to transport heat along with them, cooling the ocean and in effect increasing the temperature gradient between the land and sea.

They say this ocean heat transporting mechanism may be a new knob in controlling the seasonal South Asian monsoon, as well as other monsoon systems around the world.

“What we find is, the ocean’s response plays a huge role in modulating the intensity of the monsoon,” says John Marshall, the Cecil and Ida Green Professor of Oceanography at MIT. “Understanding the ocean’s response is critical to predicting the monsoon.”

Marshall’s co-authors on the paper are lead author Nicholas Lutsko, a postdoc in MIT’s Department of Earth, Atmospheric, and Planetary Sciences, and Brian Green, a former graduate student in Marshall’s group who is now at the University of Washington.

Damps and shifts

Scientists have traditionally focused on the Himalayas as a key influencer of the South Asian monsoon. It’s thought that the massive mountain ridge acts as a barrier against cold winds blowing in from the north, insulating the Indian subcontinent in a warm cocoon and enhancing the summertime temperature difference between the land and the ocean.

“Before, people thought the Himalayas were necessary to have a monsoon system,” Lutsko says. “When people got rid of them in simulations, there was no monsoon. But these models were run without an ocean.”

Lutsko and Marshall suspected that if they were to develop a model of the monsoon that included the ocean’s dynamics, these effects would lessen the monsoon’s intensity. Their hunch was based on previous work in which Marshall and his colleagues found that wind-driven ocean circulation minimized shifts in the Intertropical Convergence Zone, or ITCZ, an atmospheric belt near the equator that typically produces dramatic thunderstorms over large areas. This wide zone of atmospheric turbulence is known to shift seasonally between the northern and southern hemispheres, and Marshall found the ocean plays a role in corralling these shifts.

“Based on the idea of the ocean damping the ITCZ shifts, we thought that the ocean would also damp the monsoon,” Marshall says. “But it turns out it actually strengthens the monsoon.”

Seeing past a mountain

The researchers came to this unexpected conclusion after drawing up a simple simulation of a monsoon system, starting with a numerical model that simulates the basic physics of the atmosphere over an “aqua planet” — a world covered entirely in an ocean. The team added a solid, rectangular mass to the ocean to represent a simple land mass. They then varied the amount of sunlight across the simulated planet, to mimic the seasonal cycles of insolation, or sunlight, and also simulated the winds and rains that result from these seasonal shifts in temperature.

They carried out these simulations under different scenarios, including one in which the ocean was static and unmoving, and another in which the ocean was allowed to circulate and respond to atmospheric winds. They observed that winds blowing toward the land prompted ocean waters to flow in the opposite direction, carrying heat away from waters closest to the land. This wind/ocean interaction had a significant effect on any monsoon that formed over the land: the stronger this interplay, or coupling between winds and ocean, the wider the difference in land and sea temperature, and the stronger the intensity of the ensuing monsoon.
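
The qualitative effect can be captured in a toy two-box energy balance (not the authors' aqua-planet model, with every coefficient invented for illustration): land warms quickly under summer forcing, the wind scales with the land-sea contrast, and in the dynamic-ocean case that wind exports heat from the near-shore ocean, cooling it and widening the gradient.

```python
def land_sea_contrast(ocean_dynamic, days=120, dt=86400.0):
    """Integrate a crude two-box land/ocean model over one summer season."""
    c_land, c_ocean = 1.0e7, 2.0e8   # heat capacities per area (J/m^2/K); land << ocean
    forcing = 150.0                  # extra summer insolation absorbed (W/m^2)
    damping = 10.0                   # linear radiative damping (W/m^2/K)
    export_coeff = 15.0              # wind-driven ocean heat export (W/m^2 per K of contrast)

    t_land = t_ocean = 0.0           # temperature anomalies (K)
    for _ in range(days):
        contrast = t_land - t_ocean
        # Winds scale with the land-sea contrast; only the dynamic ocean exports heat.
        export = export_coeff * max(contrast, 0.0) if ocean_dynamic else 0.0
        t_land += dt * (forcing - damping * t_land) / c_land
        t_ocean += dt * (forcing - damping * t_ocean - export) / c_ocean
    return t_land - t_ocean

print("land-sea contrast, static ocean:  %.1f K" % land_sea_contrast(False))
print("land-sea contrast, dynamic ocean: %.1f K" % land_sea_contrast(True))
```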

Interestingly, their model did not include any sort of Himalayan structure; nevertheless, they were still able to produce a monsoon simply from the effect of the ocean and winds.

“We initially had a picture that we couldn’t make a monsoon without the Himalayas, which was the established wisdom,” Lutsko says. “But in our model, we had no such barrier, and we were still able to generate a monsoon, and we were excited about that.”

Ultimately, their work may help to explain why the South Asian monsoon is one of the strongest monsoon systems in the world. The combination of the Himalayas to the north, which act to warm up the land, and the ocean to the south, which takes heat away from nearby waters, sets up an extreme temperature gradient for one of the most intense, persistent monsoons on the planet.

“One reason the South Asian monsoon is so strong is there’s this big barrier to the north keeping the land warm, and there’s an ocean to the south that’s cooling, so it’s perfectly situated to be really strong,” Lutsko says.

In future work, the researchers plan to apply their newfound observations of the ocean’s role to help interpret variations in monsoons much farther back in time.

“What’s interesting to me is, during times when the northern hemisphere was much colder, you see a collapse of the monsoon system,” Lutsko says. “People don’t know why that happens. But we feel we can explain this, using our minimal model.”

The researchers also believe their new, ocean-based explanation for generating monsoons may help climate modelers to predict how, for example, the monsoon cycle may change in response to ocean warming due to climate change.

“We’re saying you have to understand how the ocean is responding if you want to predict the monsoon,” Lutsko says. “You can’t just focus on the land and the atmosphere. The ocean is key.”

This research is supported in part by the National Science Foundation and the National Oceanic and Atmospheric Administration.

Smarter training of neural networks

Mon, 05/06/2019 - 4:00pm

These days, nearly all the artificial intelligence-based products in our lives rely on “deep neural networks” that automatically learn to process labeled data.

For most organizations and individuals, though, deep learning is tough to break into. To learn well, neural networks normally have to be quite large and need massive datasets. Training them usually takes multiple days and expensive graphics processing units (GPUs) — and sometimes even custom-designed hardware.

But what if they don’t actually have to be all that big, after all?

In a new paper, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have shown that neural networks contain subnetworks that are up to one-tenth the size yet capable of being trained to make equally accurate predictions — and sometimes can learn to do so even faster than the originals.

The team’s approach isn’t particularly efficient now — they must train and “prune” the full network several times before finding the successful subnetwork. However, MIT Assistant Professor Michael Carbin says that his team’s findings suggest that, if we can determine precisely which part of the original network is relevant to the final prediction, scientists might one day be able to skip this expensive process altogether. Such a revelation has the potential to save hours of work and make it easier for meaningful models to be created by individual programmers, and not just huge tech companies.

“If the initial network didn’t have to be that big in the first place, why can’t you just create one that’s the right size at the beginning?” says PhD student Jonathan Frankle, who presented his new paper co-authored with Carbin at the International Conference on Learning Representations (ICLR) in New Orleans. The project was named one of ICLR’s two best papers, out of roughly 1,600 submissions.
 
The team likens traditional deep learning methods to a lottery. Training large neural networks is kind of like trying to guarantee you will win the lottery by blindly buying every possible ticket. But what if we could select the winning numbers at the very start?

“With a traditional neural network you randomly initialize this large structure, and after training it on a huge amount of data it magically works,” Carbin says. “This large structure is like buying a big bag of tickets, even though there’s only a small number of tickets that will actually make you rich. The remaining science is to figure out how to identify the winning tickets without seeing the winning numbers first.”

The team’s work may also have implications for so-called “transfer learning,” where networks trained for a task like image recognition are built upon to then help with a completely different task.

Traditional transfer learning involves training a network and then adding one more layer on top that’s trained for another task. In many cases, a network trained for one purpose is able to then extract some sort of general knowledge that can later be used for another purpose.

For all the hype that neural networks have received, little is often said about how hard they are to train. Because they can be prohibitively expensive to train, data scientists have to make many concessions, weighing a series of trade-offs with respect to the size of the model, the amount of time it takes to train, and its final performance.

To test their so-called "lottery ticket hypothesis" and demonstrate the existence of these smaller subnetworks, the team needed a way to find them. They began by using a common approach for eliminating unnecessary connections from trained networks to make them fit on low-power devices like smartphones: They "pruned" connections with the lowest "weights" (how much the network prioritizes that connection).

Their key innovation was the idea that connections that were pruned after the network was trained might never have been necessary at all. To test this hypothesis, they tried training the exact same network again, but without the pruned connections. Importantly, they "reset" each connection to the weight it was assigned at the beginning of training. These initial weights are vital for helping a lottery ticket win: Without them, the pruned networks wouldn't learn. By pruning more and more connections, they determined how much could be removed without harming the network's ability to learn.
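As a rough illustration of this procedure, the sketch below applies iterative magnitude pruning with rewinding to a toy linear model. The tiny model, fake data, learning rate, and pruning fraction are illustrative assumptions, not the setup used in the paper, which prunes full deep networks trained on image data.

```python
# Minimal sketch of iterative magnitude pruning with rewinding to the original
# initialization, in the spirit of the procedure described above. The toy
# one-layer model, data, and pruning schedule are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data and a single linear layer standing in for a network.
X = rng.normal(size=(200, 20))
true_w = rng.normal(size=20) * (rng.random(20) < 0.3)   # mostly-sparse target
y = X @ true_w + 0.01 * rng.normal(size=200)

w_init = rng.normal(scale=0.1, size=20)   # record the original initialization
mask = np.ones_like(w_init)               # 1 = connection kept, 0 = pruned


def train(w, mask, steps=500, lr=0.05):
    """Plain gradient descent; pruned weights stay frozen at zero."""
    w = w * mask
    for _ in range(steps):
        grad = X.T @ (X @ w - y) / len(y)
        w -= lr * grad * mask
    return w


prune_fraction = 0.2      # remove 20% of surviving weights per round
for round_ in range(5):
    w_trained = train(w_init, mask)

    # Prune the lowest-magnitude surviving weights...
    surviving = np.flatnonzero(mask)
    k = max(1, int(prune_fraction * surviving.size))
    to_prune = surviving[np.argsort(np.abs(w_trained[surviving]))[:k]]
    mask[to_prune] = 0.0

    # ...then "rewind": the next round retrains from the ORIGINAL w_init
    # (not a fresh random init) with the updated mask -- the step the paper
    # found essential for the pruned subnetwork to learn well.
    loss = np.mean((X @ (w_trained * mask) - y) ** 2)
    print(f"round {round_}: {int(mask.sum())}/{mask.size} weights kept, "
          f"loss after pruning = {loss:.4f}")
```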

To validate this hypothesis, they repeated this process tens of thousands of times on many different networks in a wide range of conditions.

“It was surprising to see that resetting a well-performing network would often result in something better,” says Carbin. “This suggests that whatever we were doing the first time around wasn’t exactly optimal, and that there’s room for improving how these models learn to improve themselves.”

As a next step, the team plans to explore why certain subnetworks are particularly adept at learning, and ways to efficiently find these subnetworks.

“Understanding the ‘lottery ticket hypothesis’ is likely to keep researchers busy for years to come,” says Daniel Roy, an assistant professor of statistics at the University of Toronto, who was not involved in the paper. “The work may also have applications to network compression and optimization. Can we identify this subnetwork early in training, thus speeding up training? Whether these techniques can be used to build effective compression schemes deserves study.”

Knight Science Journalism Program at MIT announces 2019-20 fellowship class

Mon, 05/06/2019 - 3:00pm

The Knight Science Journalism program at MIT, an internationally renowned mid-career fellowship program, is proud to announce that 10 elite science journalists representing seven countries and four continents will make up its class of 2019-20.

The fellows, selected from more than 120 applicants, are an award-winning and diverse group. They include accomplished reporters from the Des Moines Register and Milwaukee Journal Sentinel, veteran editors of international outlets like the BBC and New Scientist, and a freelance journalist who was recently named the European Science Writer of the Year.

The fellows will come to Cambridge for a 10-month fellowship that allows them to explore science, technology, and the craft of journalism in depth, to concentrate on a specialty in science, and to learn at some of the top research universities in the world.

“This is a tremendous group of journalists doing work that has real impact,” says Deborah Blum, the program's director. “I think they’ll find that Cambridge is really a unique and inspiring place to learn and grow as a science journalist.”

The Knight Science Journalism Program at MIT (KSJ), supported by a generous endowment from the John S. and James L. Knight Foundation, is recognized around the world as the premier mid-career fellowship program for science writers, editors, and multimedia journalists. With support from the program, fellows pursue an academic year of independent study, augmented by twice-weekly seminars taught by some of the world’s leading scientists and storytellers, as well as a variety of rotating, skills-focused master classes and workshops. The goal: fostering professional growth among the world’s small but essential community of journalists covering science and technology, and encouraging them to pursue that mission, first and foremost, in the public interest.

Since its founding in 1983, the program has hosted more than 350 fellows representing media outlets from The New York Times to Le Monde, from CNN to the Australian Broadcasting Corporation, and more.

In addition to the fellowship program, KSJ publishes the award-winning digital magazine Undark and administers a national journalism prize, the Victor K. McElheny Award. KSJ’s academic home at MIT is the Program in Science, Technology and Society, which is part of the School of Humanities, Arts, and Social Sciences.

The 2019-20 KSJ fellows are:

Anil Ananthaswamy is a freelance journalist and former staff writer and deputy news editor for New Scientist. He also writes for Nature, Scientific American, Quanta, and PNAS’s Front Matter, among others. In 2013, he won the Association of British Science Writers’ Best Investigative Journalism award. He has authored three books: “The Edge of Physics,” “The Man Who Wasn’t There,” which was longlisted for the 2016 Pen/E. O. Wilson Literary Science Writing Award, and most recently, “Through Two Doors at Once.” He teaches an annual science journalism workshop at the National Centre for Biological Sciences in Bangalore, India.

Bethany Brookshire is a staff writer for Science News for Students, a digital magazine that covers the latest in scientific research for children ages 9-14. She is also a contributor to Science News magazine, and a host of the independent podcast Science for the People. She edited “Science Blogging: The Essential Guide,” published in 2016, and has contributed freelance work to Scientific American, Slate, The Guardian, and many other leading publications. She has a BS in biology, a BA in philosophy, and a PhD in physiology and pharmacology.

John Fauber is an investigative medical reporter with the Milwaukee Journal Sentinel and the USA Today Network. His stories also appear in MedPage Today. Since 2009, Fauber’s work has focused on conflicts of interest in medicine. He has won more than 25 national journalism awards, leading to a special commendation for his consistent excellence from the Columbia Journalism Review. Fauber also was a major contributor to a series of stories on prion diseases in humans and animals that was selected as a finalist for the Pulitzer Prize for Explanatory Reporting in 1993.

Andrada Fiscutean is a science and technology journalist based in Romania. She has written about Eastern European hackers, journalists attacked with malware, and North Korean scientists. Her work has been featured in Nature, Ars Technica, Wired, Vice Motherboard, and ZDNet. She’s also editor-in-chief of ProFM radio, where she assembled a team of journalists who cover local news. In 2017, she won Best Feature Story at SuperScrieri, the highest award in Romanian journalism. Passionate about the history of technology, Fiscutean owns several home computers made in Eastern Europe during the 1980s.

Richard Fisher is managing editor of BBC.com features and editor of BBC Future, a science, health and technology features website aimed at international audiences. Through evidence-based analysis, original ideas, and human stories, BBC Future is dedicated to exploring how our world is changing. The site won a 2019 Webby award for “best writing (editorial).” Fisher also oversees the teams behind BBC Culture, the BBC’s global arts site, and BBC Reel, which features short-form factual video stories. Before that, he was a senior news editor and feature editor at New Scientist in London.

Tony Leys has worked at the Des Moines Register as an editor and reporter since 1988. He has been the newspaper’s main health care reporter since 2000, with a strong focus on mental health and health care policy. He also helps cover politics, including Iowa’s presidential caucus campaigns. Leys grew up in the Milwaukee area and graduated from the University of Wisconsin-Madison. He is a national board member of the Association of Health Care Journalists.

Thiago Medaglia is an independent reporter for National Geographic Brazil, where he was previously an editor. Medaglia is also the founder of Ambiental Media, a Brazilian startup that transforms scientific content into accessible, compelling, and innovative journalism products. An award-winning reporter and writer, he has published stories in several media outlets, such as ESPN Brazil, Mother Jones, Estadão, Folha de São Paulo, and others. He is co-author of six books on environmental topics and was a 2015 fellow at the International Center for Journalists.

Sonali Prasad has degrees in both computer science and journalism. In 2016, she was a Google News Lab fellow and won a grant from the Brown Institute for Media Innovation to study coral reef health. She has reported on science and environment issues for publications such as The Guardian, The Washington Post, Quartz, Mongabay, and Hakai Magazine. She was hired as an investigative reporter at the Columbia Journalism School’s Energy and Environment Project, and her team’s work on the U.S. Export-Import Bank’s dirty fossil fuel investments won an honorable mention at the Society of Environmental Journalists awards.

Molly Segal is an independent radio journalist based in Canada’s Rocky Mountains. Her documentaries and reports on environment and science air on the Canadian Broadcasting Corporation’s national radio programs — including Quirks & Quarks, Ideas, Tapestry, and The World This Weekend — as well as WHYY’s The Pulse and WBUR/NPR’s Here and Now. Molly has worked for CBC Radio/TV, stationed across Canada. Her work takes her to remote mountains looking for grizzlies, counting minuscule snails in ancient hot springs, and observing paleontologists searching for 500-million-year-old fossils. Molly is the host and producer of The Narwhal’s upcoming inaugural podcast, Undercurrent: Bear 148.

Eva Wolfangel is a German science journalist, focusing on future technologies such as artificial intelligence and virtual reality, computer science, data journalism, interaction between digital and real worlds, and space travel. She writes for major magazines and newspapers in Germany and Switzerland — including ZEIT, Geo, Spiegel, and NZZ — and produces radio features. After several years as an editor, she became a freelance journalist in 2008. Eva’s specialty is to combine creative writing and technical topics in order to reach a broad audience. In 2018 she was named European Science Writer of the Year by the Association of British Science Writers.

Exploring the effects of moisture and drying on concrete

Mon, 05/06/2019 - 3:00pm

Although it is used to construct some of the world’s largest structures, it turns out that cement actually has something in common with a sponge.

A highly porous material, cement tends to absorb water from precipitation and even ambient humidity. And just as the shape of a sponge changes depending on water saturation, so too does that of cement, according to recent work conducted at MIT.

In a paper published in the Proceedings of the National Academy of Sciences, researchers at the MIT Concrete Sustainability Hub (CSHub), the French National Center for Scientific Research (CNRS), and Aix-Marseille University discuss just how the material’s porous network absorbs water and propose how drying permanently rearranges the material and leads to potential structural damage.

But to understand how water can change cement’s pore structure, one must first look at how it contributes to the formation of this very same structure.

Cement begins as a dry powder composed of carefully blended ingredients including calcium, iron, aluminum, and silicon. This powder is then mixed with a certain proportion of water to form cement paste. This is where the pore network begins to form.

Once the water and the powder mix, they react together and produce compounds known as calcium silicate hydrate (CSH), also known as cement hydrates.

“Cement hydrates are small, at the nanoscale,” says Tingtao Zhou, a PhD student in the Department of Physics and the lead author of the paper. “These are the building blocks of cement.”

During cement hydration, the cement hydrate's nanograins aggregate with each other, forming a network that glues all constituents together. While this gives cement its strength, the spaces between the cement hydrates create an extensive pore network in the cement paste.

“You have numerous pores of variable sizes that are interconnected,” Zhou says. “It becomes very complex. And since they are so small, you don’t even need rain to fill them with water. Even ambient humidity can fill these pores.”

This poses a problem when trying to study the drying of a pore network.

“Let’s say you only have two grains of calcium silicate hydrate; you can imagine there is some water condensation between them,” explains Zhou. “In this case, it is easy to measure the water in the pore space and the pressure of this condensation, which we call capillary pressure. But when you have a massive number of grains the water distribution becomes really complicated — the geometry becomes a mess.”
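For a feel for the numbers in that “two grains” picture, the sketch below uses the textbook Kelvin equation to estimate the capillary pressure of water condensed from ambient humidity. This back-of-the-envelope relation is offered only as an illustration; it is not the mesoscale simulation method used in the study.

```python
# Back-of-the-envelope sketch of capillary pressure in a tiny pore using the
# Kelvin equation, offered only to illustrate the "two grains" picture above.
# It is not the mesoscale model used in the paper.
import math

R = 8.314          # gas constant, J/(mol K)
T = 298.0          # temperature, K
V_m = 1.8e-5       # molar volume of liquid water, m^3/mol


def capillary_pressure(relative_humidity):
    """Capillary (suction) pressure, in Pa, of water condensed at a given RH."""
    return -(R * T / V_m) * math.log(relative_humidity)


for rh in (0.9, 0.7, 0.5):
    p_mpa = capillary_pressure(rh) / 1e6
    print(f"relative humidity {rh:.0%}: capillary pressure ~ {p_mpa:.0f} MPa")
```

Even at moderate humidity the estimated pressures are tens of megapascals, which is why condensation in nanoscale pores can exert meaningful forces on the surrounding grains.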

To deal with water in cement’s messy pore network, Zhou and Katerina Ioannidou, a research scientist with CNRS and the MIT Energy Initiative and a corresponding author of the paper, first wrestled with two issues.

The first was partial saturation. Since the pore network is so complex, water is distributed unevenly within it, which makes that distribution difficult to calculate.

The second issue is that of multiple scales.

“In the past, researchers would study the movement of water in pores at either the scale of the atom or on the continuum, or visible, scale,” Zhou reports. “This means they lost a lot of information on the mesoscale — which is between the atomistic and continuum scales.”

Over the past decade, Ioannidou, along with researchers Roland Pellenq, Franz-Josef Ulm, Sidney Yip, and Emanuela Del Gado of Georgetown University, has worked to advance the modeling of cement at multiple scales. This recent paper drew upon their work to approach these issues.

Using computational modeling techniques, Zhou and Ioannidou calculated how water distributes within a pore and then determined the force that the water exerted on the pore wall. Once complete, they grouped pores together and simulated the effect of drying on the mesoscale.

After examining the simulations, Zhou and Ioannidou found that the grains had “irreversibly rearranged under mild drying.”

Even though these changes appeared small, they were not necessarily insignificant. “We found irreversible structural changes on the mesoscale,” Zhou notes. “It’s not propagating to a larger scale yet. But what happens when we have many of these drying cycles over many years?”

Though it is too early to know how exactly this kind of structural change affects concrete structures, Zhou hopes to develop a new model to study the long-term consequences of drying.

“In this paper, we have dealt with different spatial scales. But we have yet to deal with different time scales. These changes occur in a period of nanoseconds and we would like to see their influence over the typical lifetime of concrete structures,” he explains.

Still, this computational approach represents a new way to better understand the effects of drying in cement. “In past physical experiments, it is very hard to observe damage on this scale. But computation enables us to simulate this kind of damage,” explains Zhou. “This is the power of computing.”

The MIT Concrete Sustainability Hub (CSHub) is a team of researchers from several departments across MIT working on concrete and infrastructure science, engineering, and economics. Its research is supported by the Portland Cement Association and the Ready Mixed Concrete Research and Education Foundation.

A new approach to targeting tumors and tracking their spread

Mon, 05/06/2019 - 2:59pm

The spread of malignant cells from an original tumor to other parts of the body, known as metastasis, is the main cause of cancer deaths worldwide.

Early detection of tumors and metastases could significantly improve cancer survival rates. However, predicting exactly when cancer cells will break away from the original tumor, and where in the body they will form new lesions, is extremely challenging.

There is therefore an urgent need to develop new methods to image, diagnose, and treat tumors, particularly early lesions and metastases.

In a paper published today in the Proceedings of the National Academy of Sciences, researchers at the Koch Institute for Integrative Cancer Research at MIT describe a new approach to targeting tumors and metastases.

Previous attempts to target the tumor cells themselves have typically proven unsuccessful, as the tendency of cancerous cells to mutate makes them unreliable targets.

Instead, the researchers decided to target structures surrounding the cells known as the extracellular matrix (ECM), according to Richard Hynes, the Daniel K. Ludwig Professor for Cancer Research at MIT. The research team also included lead author Noor Jailkhani, a postdoc in the Hynes Lab at the Koch Institute for Integrative Cancer Research.

The extracellular matrix, a meshwork of proteins surrounding both normal and cancer cells, is an important part of the microenvironment of tumor cells. By providing signals for their growth and survival, the matrix plays a significant role in tumor growth and progression.

When the researchers studied this microenvironment, they found certain proteins that are abundant in regions surrounding tumors and other disease sites, but absent from healthy tissues.

What’s more, unlike the tumor cells themselves, these ECM proteins do not mutate as the cancer progresses, Hynes says. “Targeting the ECM offers a better way to attack metastases than trying to prevent the tumor cells themselves from spreading in the first place, because they have usually already done that by the time the patient comes into the clinic,” Hynes says.

The researchers began developing a library of immune reagents designed to specifically target these ECM proteins, based on relatively tiny antibodies, or “nanobodies,” derived from alpacas. The idea was that if these nanobodies could be deployed in a cancer patient, they could potentially be imaged to reveal tumor cells’ locations, or even deliver payloads of drugs.

The researchers used nanobodies from alpacas because they are smaller than conventional antibodies. Specifically, unlike the antibodies produced by the immune systems of humans and other animals, which consist of two “heavy protein chains” and two “light chains,” antibodies from camelids such as alpacas contain just two copies of a single heavy chain.

Nanobodies derived from these heavy-chain-only antibodies comprise a single binding domain much smaller than conventional antibodies, Hynes says.

In this way nanobodies are able to penetrate more deeply into human tissue than conventional antibodies, and can be much more quickly cleared from the circulation following treatment.

To develop the nanobodies, the team first immunized alpacas with either a cocktail of ECM proteins, or ECM-enriched preparations from human patient samples of colorectal or breast cancer metastases.

They then extracted RNA from the alpacas’ blood cells, amplified the coding sequences of the nanobodies, and generated libraries from which they isolated specific anti-ECM nanobodies.

They demonstrated the effectiveness of the technique using a nanobody that targets a protein fragment called EIIIB, which is prevalent in many tumor ECMs.

When they injected nanobodies attached to radioisotopes into mice with cancer, and scanned the mice using noninvasive PET/CT imaging, a standard technique used clinically, they found that the tumors and metastases were clearly visible. In this way the nanobodies could be used to help image both tumors and metastases.

But the same technique could also be used to deliver therapeutic treatments to the tumor or metastasis, Hynes says. “We can couple almost anything we want to the nanobodies, including drugs, toxins or higher energy isotopes,” he says. “So, imaging is a proof of concept, and it is very useful, but more important is what it leads to, which is the ability to target tumors with therapeutics.”

The ECM also undergoes similar protein changes as a result of other diseases, including cardiovascular, inflammatory, and fibrotic disorders. As a result, the same technique could also be used to treat people with these diseases.

In a recent collaborative paper, also published in Proceedings of the National Academy of Sciences, the researchers demonstrated the effectiveness of the technique by using it to develop nanobody-based chimeric antigen receptor (CAR) T cells, designed to target solid tumors.

CAR T cell therapy has already proven successful in treating cancers of the blood, but it has been less effective in treating solid tumors.

By targeting the ECM of tumor cells, nanobody-based CAR T cells became concentrated in the microenvironment of tumors and successfully reduced their growth.

The ECM has been recognized to play crucial roles in cancer progression, but few diagnostic or therapeutic methods have been developed based on the special characteristics of cancer ECM, says Yibin Kang, a professor of molecular biology at Princeton University, who was not involved in the research.

“The work by Hynes and colleagues has broken new ground in this area and elegantly demonstrates the high sensitivity and specificity of a nanobody targeting a particular isoform of an ECM protein in cancer,” Kang says. “This discovery opens up the possibility for early detection of cancer and metastasis, sensitive monitoring of therapeutic response, and specific delivery of anticancer drugs to tumors.”

This work was supported by a Mazumdar-Shaw International Oncology Fellowship, fellowships from the Ludwig Center for Molecular Oncology Research at MIT and the Howard Hughes Medical Institute, and a grant from the Department of Defense Breast Cancer Research Program. Imaging was performed on instrumentation purchased with a gift from John S. ’61 and Cindy Reed.

The researchers are now planning to carry out further work to develop the nanobody technique for treating tumors and metastases.

A sustainability rating for space debris

Mon, 05/06/2019 - 2:00pm

Space is becoming increasingly congested, even as our societal dependence on space technology is greater than ever before.

With over 20,000 pieces of debris larger than 10 centimeters, including inactive satellites and discarded rocket parts hurtling around in Earth’s orbit, the risk of damaging collisions increases every year.

In a bid to address this issue, and to foster global standards in waste mitigation, the World Economic Forum has chosen a team led by the Space Enabled Research Group at the MIT Media Lab, together with a team from the European Space Agency (ESA), to launch the Space Sustainability Rating (SSR), a concept developed by the Forum’s Global Future Council on Space Technologies.

Similar to rating systems such as the LEED certification used by the construction industry, the SSR is designed to ensure long-term sustainability by encouraging more responsible behavior among countries and companies participating in space.

The team, announced on May 6 at the Satellite 2019 conference in Washington, also includes collaborators from Bryce Space and Technology, and the University of Texas at Austin.

The MIT portion of the team will be led by Danielle Wood, the Benesse Corporation Career Development Assistant Professor of Research in Education within MIT’s Program in Media Arts and Sciences, and jointly appointed in the Department of Aeronautics and Astronautics. She will be working alongside Minoo Rathnasabapathy, a research engineer within the Space Enabled group. Professor Moriba Jah and Adjunct Professor Diane Howard contribute from the University of Texas at Austin, building on Professor Jah’s in-depth research on tracking and visualizing space objects and Professor Howard’s legal knowledge, while Mike French and Aschley Schiller bring expertise about space industry dynamics from Bryce. The MIT-led team joins the efforts of Nikolai Khlystov and Maksim Soshkin in the World Economic Forum Aerospace Industry Team as well as Stijn Lemmens and Francesca Letzia in the Space Debris Office of the European Space Agency.

Working with the World Economic Forum and the other collaborators to create the SSR is directly in line with the mission of the Media Lab's Space Enabled research group, of which Wood is also the founder and head, to advance justice in Earth's complex systems using designs enabled by space.

“One element of justice is ensuring that every country has the opportunity to participate in using space technology as a form of infrastructure to provide vital services in our society such as communication, navigation, and environmental monitoring,” Wood says.

Many aspects of modern society depend on satellite services. Weather reports, for example, depend on a global network of weather satellites operated primarily by governments.

In addition, car drivers, trains, ships and airplanes routinely use satellite positioning services. These same positioning satellites also offer a highly accurate timing signal used by the global banking system to precisely time financial transactions.

“Our global economy depends on our ability to operate satellites safely in order to fly in planes, prepare for severe weather, broadcast television and study our changing climate,” Wood says. “To continue using satellites in orbit around Earth for years to come, we need to ensure that the environment around Earth is as free as possible from trash left over from previous missions.”

When satellites are retired from useful service, many will remain in orbit for decades longer, adding to the problem of space debris.

In the best case scenario, satellites will gradually drift down to lower orbits and burn up in Earth's atmosphere. However, the higher the orbit a satellite is operating in, the longer it takes to move down and burn up.

When satellite operators design their satellites, they are able to choose which altitude to use, and for how long their spacecraft will operate. They therefore have a responsibility to design their satellites to produce as little waste as possible in Earth's orbit.

“The Space Sustainability Rating will create an incentive for companies and governments operating satellites to take all the steps they can to reduce the creation of space debris,” Wood says. “This will create a more equitable opportunity for new countries to participate in space with less risk of collision with older satellites.”

Many governments already provide guidelines to companies operating within their borders, to help reduce the amount of space debris produced. The space community is also engaged in an ongoing discussion about new ways to reduce the creation of debris.

But in the meantime, multiple companies are planning to launch large constellations of satellites that will quickly increase the number of spacecraft in orbit. These satellite constellations will eventually be decommissioned, adding to the growing space junk problem.

To address this issue, the World Economic Forum Global Future Council on Space Technologies, which is composed of leaders from government, academia and industry, has developed the concept of a voluntary system, the SSR, to encourage those who operate satellites to create as little debris as possible.

The newly announced team will draw up the rules and processes by which the SSR will operate, including determining what information should be collected from satellite operators to assess their impact on space sustainability.

“Countries in every region are starting new space programs to participate in applying space to their national development,” Wood says. “Creating the Space Sustainability Rating with our collaborators is one key step to ensure that all countries continue to increase the benefits we receive from space technology," she says.

With a lack of diversity in existing strategies to tackle the orbital debris challenge, the Global Future Council felt it important to develop an industry-wide approach, according to Nikolai Khlystov, lead for aerospace industry at the World Economic Forum.

“We are very glad to partner with leading industry entities such as the European Space Agency, MIT's Space Enabled research group, the University of Texas at Austin and Bryce Space and Technology to build and launch the Space Sustainability Rating,” Khlystov says.

The envisaged SSR has a clear goal: to promote mission designs and operational concepts that avoid unhampered growth in space debris and the resulting detrimental effects, says Stijn Lemmens, senior space debris mitigation analyst in the Space Debris Office at ESA.

“Together with our collaborators, we aim to put in place a system that has the flexibility to stimulate and drive innovative sustainable design solutions, and spotlight those missions that contribute positively to the space environment,” Lemmens says.

North Atlantic Ocean productivity has dropped 10 percent during Industrial era

Mon, 05/06/2019 - 12:04pm

Virtually all marine life depends on the productivity of phytoplankton — microscopic organisms that work tirelessly at the ocean’s surface to absorb the carbon dioxide that gets dissolved into the upper ocean from the atmosphere.

Through photosynthesis, these microbes break down carbon dioxide into oxygen, some of which ultimately gets released back to the atmosphere, and organic carbon, which they store until they themselves are consumed. This plankton-derived carbon fuels the rest of the marine food web, from the tiniest shrimp to giant sea turtles and humpback whales.

Now, scientists at MIT, Woods Hole Oceanographic Institution (WHOI), and elsewhere have found evidence that phytoplankton’s productivity is declining steadily in the North Atlantic, one of the world’s most productive marine basins.

In a paper appearing today in Nature, the researchers report that phytoplankton’s productivity in this important region has gone down around 10 percent since the mid-19th century and the start of the Industrial era. This decline coincides with steadily rising surface temperatures over the same period of time.

Matthew Osman, the paper’s lead author and a graduate student in MIT’s Department of Earth, Atmospheric, and Planetary Sciences, says there are indications that phytoplankton’s productivity may decline further as temperatures continue to rise as a result of human-induced climate change.

“It’s a significant enough decline that we should be concerned,” Osman says. “The amount of productivity in the oceans roughly scales with how much phytoplankton you have. So this translates to 10 percent of the marine food base in this region that’s been lost over the industrial era. If we have a growing population but a decreasing food base, at some point we’re likely going to feel the effects of that decline.”

Drilling through “pancakes” of ice

Osman and his colleagues looked for trends in phytoplankton’s productivity using the molecular compound methanesulfonic acid, or MSA. When phytoplankton expand into large blooms, certain microbes emit dimethylsulfide, or DMS, a gas that is lofted into the atmosphere and eventually breaks down into either sulfate aerosol or MSA, which is then deposited on sea or land surfaces by winds.

“Unlike sulfate, which can have many sources in the atmosphere, it was recognized about 30 years ago that MSA had a very unique aspect to it, which is that it’s only derived from DMS, which in turn is only derived from these phytoplankton blooms,” Osman says. “So any MSA you measure, you can be confident has only one unique source — phytoplankton.”

In the North Atlantic, phytoplankton likely produced MSA that was deposited to the north, including across Greenland. The researchers measured MSA in Greenland ice cores — in this case using 100- to 200-meter-long columns of snow and ice that represent layers of past snowfall events preserved over hundreds of years.

“They’re basically sedimentary layers of ice that have been stacked on top of each other over centuries, like pancakes,” Osman says.

The team analyzed 12 ice cores in all, each collected from a different location on the Greenland ice sheet by various groups from the 1980s to the present. Osman and his advisor Sarah Das, an associate scientist at WHOI and co-author on the paper, collected one of the cores during an expedition in April 2015.

“The conditions can be really harsh,” Osman says. “It’s minus 30 degrees Celsius, windy, and there are often whiteout conditions in a snowstorm, where it’s difficult to differentiate the sky from the ice sheet itself.”

The team was nevertheless able to extract, meter by meter, a 100-meter-long core, using a giant drill that was delivered to the team’s location via a small ski-equipped airplane. They immediately archived each ice core segment in a heavily insulated cold storage box, then flew the boxes on “cold deck flights” — aircraft with ambient conditions of around minus 20 degrees Celsius. Once the planes touched down, freezer trucks transported the ice cores to the scientists’ ice core laboratories.

“The whole process of how one safely transports a 100-meter section of ice from Greenland, kept at minus-20-degree conditions,  back to the United States is a massive undertaking,” Osman says.

Cascading effects

The team incorporated the expertise of researchers at various labs around the world in analyzing each of the 12 ice cores for MSA. Across all 12 records, they observed a conspicuous decline in MSA concentrations, beginning in the mid-19th century, around the start of the Industrial era when the widescale production of greenhouse gases began. This decline in MSA is directly related to a decline in phytoplankton productivity in the North Atlantic.

“This is the first time we’ve collectively used these ice core MSA records from all across Greenland,  and they show this coherent signal. We see a long-term decline that originates around the same time as when we started perturbing the climate system with industrial-scale greenhouse-gas emissions,” Osman says. “The North Atlantic is such a productive area, and there’s a huge multinational fisheries economy related to this productivity. Any changes at the base of this food chain will have cascading effects that we’ll ultimately feel at our dinner tables.”

The multicentury decline in phytoplankton productivity appears to coincide not only with concurrent long-term warming temperatures; it also shows synchronous variations on decadal time-scales with the large-scale ocean circulation pattern known as the Atlantic Meridional Overturning Circulation, or AMOC. This circulation pattern typically acts to mix layers of the deep ocean with the surface, allowing the exchange of much-needed nutrients on which phytoplankton feed.

In recent years, scientists have found evidence that the AMOC is weakening, a process that is still not well understood but may be due in part to warming temperatures increasing the melting of Greenland’s ice. This ice melt has added an influx of less-dense freshwater to the North Atlantic, which acts to stratify the ocean, separating its layers much as oil and water separate and preventing nutrients in the deep from upwelling to the surface. This warming-induced weakening of the ocean circulation could be what is driving phytoplankton’s decline. As the atmosphere warms the upper ocean in general, this could further increase the ocean’s stratification, reducing phytoplankton’s productivity even more.

“It’s a one-two punch,” Osman says. “It’s not good news, but the upshot to this is that we can no longer claim ignorance. We have evidence that this is happening, and that’s the first step you inherently have to take toward fixing the problem, however we do that.”

This research was supported in part by the National Science Foundation (NSF), the National Aeronautics and Space Administration (NASA), as well as graduate fellowship support from the US Department of Defense Office of Naval Research.

Merging cell datasets, panorama style

Mon, 05/06/2019 - 10:59am

A new algorithm developed by MIT researchers takes cues from panoramic photography to merge massive, diverse cell datasets into a single source that can be used for medical and biological studies.

Single-cell datasets profile the gene expression of human cells — such as neurons, muscle cells, and immune cells — to gain insight into human health and how to treat disease. Datasets are produced by a range of labs and technologies, and contain extremely diverse cell types. Combining these datasets into a single data pool could open up new research possibilities, but that’s difficult to do effectively and efficiently.

Traditional methods tend to cluster cells together based on nonbiological patterns — such as by lab or technologies used — or accidentally merge dissimilar cells that appear the same. Methods that correct these mistakes don’t scale well to large datasets, and they require that all merged datasets share at least one common cell type.

In a paper published today in Nature Biotechnology, the MIT researchers describe an algorithm that can efficiently merge more than 20 datasets of vastly differing cell types into a larger “panorama.” The algorithm, called “Scanorama,” automatically finds and stitches together shared cell types between two datasets — like combining overlapping pixels in images to generate a panoramic photo.

As long as any other dataset shares one cell type with any one dataset in the final panorama, it can also be merged. But not all of the datasets need to have a cell type in common. The algorithm preserves all cell types specific to every dataset.

“Traditional methods force cells to align, regardless of what the cell types are. They create a blob with no structure, and you lose all interesting biological differences,” says Brian Hie, a PhD student in the Computer Science and Artificial Intelligence Laboratory (CSAIL) and a researcher in the Computation and Biology group. “You can give Scanorama datasets that shouldn’t align together, and the algorithm will separate the datasets according to biological differences.”

In their paper, the researchers successfully merged more than 100,000 cells from 26 different datasets containing a wide range of human cells, creating a single, diverse source of data. With traditional methods, that would take roughly a day’s worth of computation, but Scanorama completed the task in about 30 minutes. The researchers say the work represents the highest number of datasets ever merged together.

Joining Hie on the paper are: Bonnie Berger, the Simons Professor of Mathematics at MIT, a professor of electrical engineering and computer science, and head of the Computation and Biology group; and Bryan Bryson, an MIT assistant professor of biological engineering.

Linking “mutual neighbors”

Humans have hundreds of categories and subcategories of cells, and each cell expresses a diverse set of genes. Techniques such as RNA sequencing capture that information in sprawling multidimensional space. Cells are points scattered around the space, and each dimension corresponds to the expression of a different gene.

Scanorama runs a modified computer-vision algorithm, called “mutual nearest neighbors matching,” which finds the closest (most similar) points in two computational spaces. Developed at CSAIL, the algorithm was initially used to find pixels with matching features — such as color levels — in dissimilar photos. That could help computers match a patch of pixels representing an object in one image to the same patch of pixels in another image where the object’s position has been drastically altered. It could also be used for stitching vastly different images together in a panorama.

The researchers repurposed the algorithm to find cells with overlapping gene expression — instead of overlapping pixel features — and in multiple datasets instead of two. The level of gene expression in a cell determines its function and, in turn, its location in the computational space. If stacked on top of one another, cells with similar gene expression, even if they’re from different datasets, will be roughly in the same locations.

For each dataset, Scanorama first links each cell in one dataset to its closest neighbor among all datasets, meaning they’ll most likely share similar locations. But the algorithm only retains links where cells in both datasets are each other’s nearest neighbor — a mutual link. For instance, if Cell A’s nearest neighbor is Cell B, and Cell B’s is Cell A, it’s a keeper. If, however, Cell B’s nearest neighbor is a separate Cell C, then the link between Cell A and B will be discarded.

Keeping mutual links increases the likelihood that the cells are, in fact, the same cell types. Breaking the nonmutual links, on the other hand, prevents cell types specific to each dataset from merging with incorrect cell types. Once all mutual links are found, the algorithm stitches all dataset sequences together. In doing so, it combines the same cell types but keeps cell types unique to any datasets separated from the merged cells. “The mutual links form anchors that enable [correct] cell alignment across datasets,” Berger says.
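A minimal sketch of that mutual-link filtering, applied to two toy datasets of simulated cells, might look like the following. The random data, single nearest neighbor, and plain Euclidean distance are simplifying assumptions; the actual Scanorama implementation works across many datasets at once and layers on the optimizations described below.

```python
# Minimal sketch of mutual-nearest-neighbor matching between two toy "datasets"
# of cells embedded in the same gene-expression space. The simulated data and
# plain Euclidean search are simplifying assumptions, not Scanorama's code.
import numpy as np

rng = np.random.default_rng(1)

# Two datasets that share one simulated "cell type" (cluster near +2), while
# dataset B also contains a cell type absent from A (cluster near -2).
A = rng.normal(loc=2.0, scale=0.3, size=(50, 10))
B = np.vstack([
    rng.normal(loc=2.0, scale=0.3, size=(40, 10)),
    rng.normal(loc=-2.0, scale=0.3, size=(40, 10)),
])


def nearest_in(query, reference):
    """Index of the nearest reference point for every query point."""
    d = np.linalg.norm(query[:, None, :] - reference[None, :, :], axis=-1)
    return d.argmin(axis=1)


a_to_b = nearest_in(A, B)   # each A cell's nearest neighbor in B
b_to_a = nearest_in(B, A)   # each B cell's nearest neighbor in A

# Keep only mutual links: A[i] and B[j] are each other's nearest neighbor.
mutual = [(i, j) for i, j in enumerate(a_to_b) if b_to_a[j] == i]

matched_b = {j for _, j in mutual}
print(f"{len(mutual)} mutual links found")
print("all matched B cells come from the shared cluster:",
      all(j < 40 for j in matched_b))
```

Because the dataset-specific cluster in B has no counterpart in A, its cells never form mutual links, so in this toy setup only the shared cell type would be stitched together.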

Shrinking data, scaling up

To ensure Scanorama scales to large datasets, the researchers incorporated two optimization techniques. The first reduces the dataset dimensionality. Each cell in a dataset could potentially have up to 20,000 gene expression measurements and as many dimensions. The researchers leveraged a mathematical technique that summarizes high-dimensional data matrices with a small number of features while retaining vital information. Basically, this led to a 100-fold reduction in the dimensions.
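The article does not name the exact summarization technique, but a truncated SVD is one standard way to achieve this kind of reduction. The sketch below, with arbitrary cell and gene counts, is an illustrative stand-in rather than Scanorama’s actual implementation.

```python
# Illustrative stand-in for the dimensionality-reduction step described above:
# summarize a gene-expression matrix with a small number of features via a
# truncated SVD. Cell counts, gene counts, and target dimension are arbitrary.
import numpy as np

rng = np.random.default_rng(2)

n_cells, n_genes, n_dims = 300, 2000, 100

# Fake expression matrix with some low-rank structure plus noise.
latent = rng.normal(size=(n_cells, 20))
loadings = rng.normal(size=(20, n_genes))
expression = latent @ loadings + 0.1 * rng.normal(size=(n_cells, n_genes))

# Truncated SVD: keep only the top n_dims components.
U, s, Vt = np.linalg.svd(expression, full_matrices=False)
reduced = U[:, :n_dims] * s[:n_dims]          # cells x 100 summary features

explained = (s[:n_dims] ** 2).sum() / (s ** 2).sum()
print(f"shape: {expression.shape} -> {reduced.shape}")
print(f"variance retained by {n_dims} components: {explained:.1%}")
```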

They also used a popular hashing technique to find nearest mutual neighbors more quickly. Traditionally, computing on even the reduced samples would take hours. But the hashing technique basically creates buckets of nearest neighbors by their highest probabilities. The algorithm need only search the highest probability buckets to find mutual links, which reduces the search space and makes the process far less computationally intensive.    
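The specific hashing scheme is likewise not spelled out in the article; a simple random-hyperplane locality-sensitive hash, sketched below under that assumption, shows how bucketing can shrink the neighbor search.

```python
# Sketch of a bucketing scheme in the spirit of the hashing step described
# above, using random-hyperplane locality-sensitive hashing as an illustrative
# assumption: nearby points tend to land in the same bucket, so a nearest-
# neighbor query only has to scan its own bucket rather than every point.
from collections import defaultdict
import numpy as np

rng = np.random.default_rng(3)

points = rng.normal(size=(1000, 50))        # e.g. cells in the reduced space
hyperplanes = rng.normal(size=(8, 50))      # 8 random hyperplanes -> 8-bit keys


def bucket_key(x):
    """Hash a point to a tuple of sign bits, one per hyperplane."""
    return tuple((hyperplanes @ x > 0).astype(int))


buckets = defaultdict(list)
for idx, p in enumerate(points):
    buckets[bucket_key(p)].append(idx)

# Approximate nearest-neighbor query: search only the query's bucket.
query = points[0]
candidates = buckets[bucket_key(query)]
dists = [np.linalg.norm(points[c] - query) for c in candidates if c != 0]
print(f"{len(candidates)} candidates searched instead of {len(points)}")
```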

In separate work, the researchers combined Scanorama with another technique they developed that generates comprehensive samples — or “sketches” — of massive cell datasets that reduced the time of combining more than 500,000 cells from two hours down to eight minutes. To do so, they generated the “geometric sketches,” ran Scanorama on them, and extrapolated what they learned about merging the geometric sketches to the larger datasets. This technique itself derives from compressive genomics, which was developed by Berger’s group.

“Even if you need to sketch, integrate, and reapply that information to the full datasets, it was still an order of magnitude faster than combining entire datasets,” Hie says.

Scaling solutions for the developing world

Sun, 05/05/2019 - 12:00am

In 2016, Tanzania passed a bill to cover medical expenses for expectant mothers. But pregnant women in rural parts of the country face a huge obstacle in getting the care they need: reliable transportation. Women in villages that can’t be reached by traditional ambulances have to resort to walking for hours to the nearest hospital, often while already in labor, putting their health and safety in danger.

That same year, students and instructors in the MIT D-Lab class 2.729 (Design for Scale) collaborated with community partner Olive Branch for Children to develop a solution called the Okoa ambulance. “Okoa produces a trailer that can attach to any motorcycle, providing safe transportation from rural areas to hospitals,” explains Toria Yan, a senior studying mechanical engineering at MIT.

Seven thousand miles away, Yan and her fellow students in 2.729 worked on optimizing the design of the Okoa ambulance to minimize production and shipping costs and increase manufacturability.

Throughout the fall 2018 semester, Okoa was one of four real-world projects students in 2.729 worked on — others included a floating water pump for agricultural irrigation in Nepal, an air quality detector for kitchens in India, and a plastic toilet that provides safe sanitation in densely populated areas of Guatemala.

“This class is unique because all the projects already have working prototypes,” explains Maria Yang, class co-instructor and professor of mechanical engineering. “We are asking students to design a way to manufacture the product that’s more cost-efficient and effective.”

The idea for the class first came from staff and instructors in MIT D-Lab. “We were working with people who were trying to solve some of the biggest problems in the developing world, but we realized that just coming up with a proof-of-technology prototype wasn’t enough,” explains Harald Quintus-Bosz, lecturer at MIT D-Lab and chief technology officer at Cooper Perkins, Inc. “We have to scale the solution so it can reach as many people as possible.”

Scaling solutions for problems in the developing world turned out to be a challenge MIT students were uniquely poised to tackle. The main goal of 2.729 is to teach MIT students who already have analytic engineering skills how to design for manufacturability, come up with assembly methods for products, design in the context of emerging economies, and understand entrepreneurship in the developing world.

For Suji Balfe, a junior studying mechanical engineering, figuring out how to increase manufacturing output in developing countries resonated personally. “I was always interested in engineering for the developing world because my mom comes from a foreign country,” she says. “I thought Design for Scale provided an interesting perspective because you’re taking products that already exist in some form and making them more practical for a given audience.”

Balfe’s team worked on a product developed by the company Sensen, which uses data loggers and sensors that provide information on air quality in kitchens and help researchers determine which cookstoves are safest.

“The devices are all Bluetooth-connected, so researchers working in India can upload data to their phones and that is sent to Sensen via the cloud,” explains Danielle Gleason, also a junior mechanical engineering student. “Sensen then analyzes huge amounts of air quality data to help evaluate different cookstoves and cooking methods.”

Both the Okoa and Sensen teams were tasked with finding ways to make each product easier to manufacture and use. But as far as the location where these devices are produced, the two teams took different approaches.

“One of the first questions you have to answer when designing products for the developing world is where are you going to manufacture your device?” says Quintus-Bosz. Companies and startups have to determine whether to manufacture products globally or locally, which is partially a function of the impact objectives of the company.

For Okoa, the team focused on local manufacturing in Tanzania to create ambulance trailers. Their challenge was to find ways to optimize the design so that large parts and subassemblies could be manufactured with capable suppliers within Tanzania and then shipped to rural areas where they would be assembled locally at distribution sites. The team did this by ensuring the trailers could be flat packed and stacked on top of one another. “We optimized the design and changed the geometry of the roof so everything could be quickly assembled on site in Tanzania,” adds Yan.

Meanwhile, Sensen utilized manufacturing methods available in the United States — like thermoforming and injection molding — to redesign the enclosure for the device. “We were able to reduce costs and create a box that required minimal screws and attachments using an injection-molded bottom piece and a thermoformed top piece,” explains Gleason.

From helping people in need of medical attention in Tanzania to improving air quality in kitchens around India, students walk away from the class with a deeper understanding of the unique challenges manufacturing in the developing world poses.

“It’s clear that the students who take this class all want to make a social impact,” adds Yang. By learning how to scale solutions and increase manufacturability, students can give that social impact a far greater reach in the developing world.

3Q: Robert Stoner discusses clean energy for India

Fri, 05/03/2019 - 3:40pm

India has made great strides in electrification in recent years, but further investment is still needed, especially in rural areas. Here, Robert Stoner, deputy director of the MIT Energy Initiative and director of the Tata Center for Technology and Design, comments on energy and development opportunities and challenges in India and how MIT is supporting the country’s transition to a low-carbon future.

Q: What is the biggest energy opportunity in India right now?

A: What’s really exciting and almost hard to believe about India is that this giant country just recently electrified its last unconnected village. In other words, 100 percent of Indian villages have been connected to the electricity grid. It doesn’t mean that every household has necessarily been connected, nor that every household that is connected receives electricity 24 hours a day, but they’ve laid out the wires. It’s a very strong declaration on the part of the government of its intention to achieve universal grid access.

India has a power system with roughly 350 gigawatts (GW) of generation, or about a third as much as the United States. India has four times as many people, so the per capita supply is about one-twelfth of what we experience. They continue to add capacity in India, including the world’s largest solar plant, which is capable of producing nearly a gigawatt. In spite of this, there is considerable slack generating capacity, with overall utilization at 60 percent or less. So there’s spare capacity, and at the same time low consumption of electricity — although not necessarily in the same place. The Indian government would like to change that in part by expanding rural access, and, of course, the people who live in the countryside want electricity.
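That per capita figure follows directly from the two ratios Stoner cites. Writing installed capacity as $P$ and population as $N$ (symbols introduced here only for illustration):

$$
\frac{P_{\text{India}}/N_{\text{India}}}{P_{\text{US}}/N_{\text{US}}}
= \frac{P_{\text{India}}}{P_{\text{US}}} \times \frac{N_{\text{US}}}{N_{\text{India}}}
\approx \frac{1}{3} \times \frac{1}{4} = \frac{1}{12}
$$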

Q: What challenges does India face with energy and the environment?

A: One of the difficulties is that the rural population is widely dispersed. It costs a lot of money to provide electricity to them, because the electrical wires have to go a long way from where the generating plants are. Many use only minute quantities of electricity, and it costs a lot to collect from them, so rural electricity users are mostly a losing proposition for the electric utilities, which is why they’ve been slow to invest in connecting them. But now the government has pushed them to make this huge investment in wires, and it has to pay it off, so the status quo isn’t an option — it would bankrupt the utilities. It’s in everybody’s interest to have more electricity flow through those wires, and it will be interesting to see how the utilities handle their new customers.

India is currently implementing an expanded and improved subsidy regime that will allow transfers of government funds to rural consumers and, hopefully, encourage consumption. They have to do this.

Q: What are MIT’s most promising contributions to India’s energy situation right now?

A: Vladimir Bulović’s work on thin films deposited on flexible plastic substrates is a very prominent example. He’s faculty director of the Tata-MIT GridEdge Solar program, which is funded by the Tata Trusts. The GridEdge team is now celebrating a milestone: For the first time, they’ve been able to make thin-film perovskite solar cells on thin plastic in a roll-to-roll process. That’s very different and less costly than the conventional and far more complex silicon wafer process. It’s an exciting development not only because it moves us closer to a new low-cost regime, but also because the product is flexible and much, much lighter. That makes it easy to transport to rural areas and also to form freely into interesting shapes like awnings and car tops.

Vladimir has been quite focused on India thanks to the GridEdge program. The Tata Trusts have committed $15 million over five years to fund the program, making it by far the largest solar project at MIT and a large fraction of all solar activity at MIT. In a very direct way, much of MIT’s current solar technology development is aimed at the developing world.

School of Engineering first quarter 2019 awards

Fri, 05/03/2019 - 3:30pm

Members of the MIT engineering faculty receive many awards in recognition of their scholarship, service, and overall excellence. Every quarter, the School of Engineering publicly recognizes their achievements by highlighting the honors, prizes, and medals won by faculty working in our academic departments, labs, and centers.

Regina Barzilay, of the Department of Electrical Engineering and Computer Science, was named among the “Top 100 AI Leaders in Drug Discovery and Advanced Healthcare” by Deep Knowledge Analytics on Feb. 1.

Sir Tim Berners-Lee, of the Department of Electrical Engineering and Computer Science, was named Person of the Year by the Financial Times on Mar. 14.

Ed Boyden, of the Department of Biological Engineering and the MIT Media Lab, was awarded the Rumford Prize on Jan. 30.

Emery N. Brown, of the Department of Brain and Cognitive Sciences and the Institute for Medical Engineering and Science, was awarded an honorary degree from the University of Southern California on April 9.

Areg Danagoulian, of the Department of Nuclear Science and Engineering, was named to the Consortium for Monitoring, Technology, and Verification by the Department of Energy’s National Nuclear Security Administration on Jan. 17.

Luca Daniel, of the Department of Electrical Engineering and Computer Science, was awarded a Thornton Family Faculty Research Innovation Fellowship on Feb. 8.

Constantinos Daskalakis, of the Department of Electrical Engineering and Computer Science, was awarded a Frank Quick Faculty Research Innovation Fellowship on Feb. 8.

Srini Devadas, of the Department of Electrical Engineering and Computer Science, won the Distinguished Alumnus Award from the Indian Institute of Technology Madras on Feb. 1.

Carmen Guerra-Garcia, of the Department of Aeronautics and Astronautics, was named an AIAA Senior Member on April 5.

Thomas Heldt, of the Department of Electrical Engineering and Computer Science and the Institute for Medical Engineering and Science, was named Distinguished Lecturer by the IEEE Engineering in Medicine and Biology Society on Dec. 20, 2018.

Tommi Jaakkola, of the Department of Electrical Engineering and Computer Science, was named among “Top 100 AI Leaders in Drug Discovery and Advanced Healthcare” by Deep Knowledge Analytics on Feb. 1.

Manolis Kellis, of the Department of Electrical Engineering and Computer Science, was named among “Top 100 AI Leaders in Drug Discovery and Advanced Healthcare” by Deep Knowledge Analytics on Feb. 1.

Sangbae Kim, of the Department of Mechanical Engineering, was named a Defense Science Study Group member on Mar. 20.

Angela Koehler, of the Department of Biological Engineering, won the Junior Bose Award for Teaching Excellence on Mar. 11.

Jing Kong, of the Department of Electrical Engineering and Computer Science, was awarded a Thornton Family Faculty Research Innovation Fellowship on Feb. 8.

Luqiao Liu, of the Department of Electrical Engineering and Computer Science, received a Young Investigator Research Program grant from the U.S. Air Force Office of Scientific Research on Sept. 26, 2018.

Gareth McKinley, of the Department of Mechanical Engineering, was elected to the National Academy of Engineering on Feb. 2.

Muriel Médard, of the Department of Electrical Engineering and Computer Science, was named a fellow by the National Academy of Inventors on Dec. 11, 2018.

Stefanie Mueller, of the Department of Electrical Engineering and Computer Science, won an NSF CAREER award on Feb. 22.

Julia Ortony, of the Department of Materials Science and Engineering, was awarded a Professor Amar G. Bose Research Grant on Feb. 14.

Ellen Roche, of the Department of Mechanical Engineering and the Institute for Medical Engineering and Science, won an NSF CAREER award on Feb. 20.

Christopher Schuh, of the Department of Materials Science and Engineering, was elected to the National Academy of Engineering on Feb. 7.

Suvrit Sra, of the Department of Electrical Engineering and Computer Science, won an NSF CAREER award on Mar. 11.

Leia Stirling, of the Department of Aeronautics and Astronautics, was named an Alan I. Leshner Leadership Institute fellow on Feb. 11.

Peter Szolovits, of the Department of Electrical Engineering and Computer Science, was named among “Top 100 AI Leaders in Drug Discovery and Advanced Healthcare” by Deep Knowledge Analytics on Feb. 1.

Six suborbital research payloads from MIT fly to space and back

Fri, 05/03/2019 - 2:50pm

Blast off! MIT made its latest foray into research in space on May 2 via six payloads from the Media Lab Space Exploration Initiative, tucked into Blue Origin’s New Shepard reusable space vehicle that took off from a launchpad in West Texas.

It was also the first time in the history of the Media Lab that in-house research projects were launched into space, for several minutes of sustained microgravity. The results of that research may have big implications for semiconductor manufacturing, art and telepresence, architecture and farming, among other things.

“The projects we’re testing operate fundamentally differently in Earth’s gravity compared to how they would operate in microgravity,” explained Ariel Ekblaw, the founder and lead of the Media Lab’s Space Exploration Initiative.

Previously, the Media Lab sent projects into microgravity aboard the plane used by NASA to train astronauts, lovingly nicknamed “the vomit comet.” These parabolic flights provide repeated 15- to 30-second intervals of near weightlessness. The New Shepard experiment capsule will coast in microgravity for significantly longer and cross the Kármán line (the formal boundary of “space”) in the process. While that may not seem like much time, it’s enough to get a lot accomplished.

“The capsule where the research takes place arcs through space for three minutes, which gives us precious moments of sustained, high quality microgravity,” Ekblaw said. “This provides an opportunity to expand our experiments from prior parabolic flight protocols, and test entirely new research as well.”

Depending on the results of the experiments done during New Shepard’s flight, some of the projects will undergo further, long-term research aboard the International Space Station, Ekblaw said.

On this trip, she sent Tessellated Electromagnetic Space Structures for the Exploration of Reconfigurable, Adaptive Environments, otherwise known as TESSERAE, into space. The ultimate goal for these sensor-augmented hexagonal and pentagonal “tiles” is to autonomously self-assemble into space structures. These flexible, reconfigurable modules can then be used for habitat construction, in-space assembly of satellites, or even as infrastructure for parabolic mirrors. Ekblaw hopes TESSERAE will one day support in-orbit staging bases for human exploration of the surface of the moon or Mars, or enable low Earth orbit space tourism.

An earlier prototype, flown on a parabolic flight in November 2017, validated the concept’s mechanical structure, the polarity arrangement of its bonding magnets, and the physical self-assembly protocol. On the Blue Origin flight, Ekblaw is testing a new embedded sensor network in the tiles, as well as the communication architecture and guidance-control aspects of their self-assembly capabilities. “We’re testing whether they’ll autonomously circulate, find correct neighbors, and bond together magnetically in microgravity for robust self-assembly,” Ekblaw said.

Another experiment aboard New Shepard combined art with the test of a tool for future space exploration — traversing microgravity with augmented mobility. Living Distance, an artwork conceived by the Space Exploration Initiative’s art curator, Xin Liu, explores freedom of movement via a wisdom tooth — yes, you read that correctly!

The tooth traveled to space carried by a robotic device named EBIFA and encased in a crystalline container. Once New Shepard entered space, the container burst open and EBIFA swung into action, shooting out magnetic-tipped cords to latch onto a metal surface. The tooth then floated through space with minimal interference in the virtually zero-gravity environment.

“In this journey, the tooth became a newborn entity in space, its crystalline, sculptural body and life supported by an electromechanical system,” Xin Liu wrote. “Each of its weightless movements was carefully calculated on paper and modeled in simulation software, as there can never be a true test like this on Earth.”

The piece builds on a performance art work called Orbit Weaver that Liu performed last year during a parabolic flight, where she was physically tethered to a nylon cord that floated freely and attached to nearby surfaces. Orbit Weaver and Living Distance may offer insights to future human space explorers about how best to navigate weightlessness.

A piece of charcoal also made the trip to space inside a chamber lined with drawing paper, part of a project designed by Ani Liu, a Media Lab alumna. In microgravity, the charcoal will chart its own course inside the chamber, marking the paper as it floats through an arc far above the Earth.

When the chamber returns to the Media Lab, the charcoal will join forces with a KUKA robot that will mimic the charcoal’s trajectory during the three-ish minutes of coasting in microgravity. Together, the charcoal and the robot will become a museum exhibit that provides a demonstration of motion in microgravity to a broad audience and illustrates the Space Exploration Initiative’s aim to democratize access to space and invite the public to engage in space exploration.

Harpreet Sareen, another Media Lab alum, tested how crystals form in microgravity, research that may eventually lead to manufacturing semiconductors in space.

Semiconductors used in today’s technology require crystals with extremely high levels of purity and perfect shapes, but gravity interferes with crystal growth on Earth, resulting in faults, contact stresses, and other flaws. Sareen and his collaborator, Anna Garbier, created a nano lab in a box a little smaller than a half-gallon milk carton. Onboard rocket commands from Blue Origin triggered the electric current that kicked off crystal growth during the three minutes the New Shepard capsule was suborbital.

The crystals will be evaluated for potential industrial applications, and they also have a future as an art installation: Floral Cosmonauts.

And then there are the 40 or so bees (one might say “apionauts”) that made the trip into space on behalf of the Mediated Matter group at the Media Lab, which is interested in seeing the impact space travel has on a queen bee and her retinue. Two queen bees that were inseminated at a U.S. Department of Agriculture facility in Louisiana went to space, each with roughly 20 attendant bees whose job it was to feed her and help control her body temperature.

The bees traveled via two small containers — metabolic support capsules — into which they previously built honeycomb structures. This unique design gives them a familiar environment for their trip. A modified GoPro camera, pointed into the specially designed container housing the bees, was fitted into the top of the case to film the insects and create a record of their behavior during flight.

Everything inside the case was designed to make the journey as comfortable as possible for the bees, right down to a tiny golden heating pad that was to kick into action if the temperature dropped too low for a queen bee’s comfort.

Researchers in the Mediated Matter group will study the behavior of the bees when they return to Earth and are reintroduced to a colony at the Media Lab. Will the queens lay their eggs? Will those eggs hatch? And can bees that have been to space continue collecting pollen and making honey once they’ve returned to Earth? Those are among the many questions the team will be asking.

“We currently have no robotic alternative to bees for pollination of many crops,” Ekblaw said. “If we want to grow crops on Mars, we may need to bring bees with us. Knowing if they can survive a mission, reintegrate into the hive, and thrive afterwards is critical.”

As these projects show, the Space Exploration Initiative unites engineers, scientists, artists, and designers across a multifaceted research portfolio. The team looks forward to a regular launch cadence and progressing through microgravity research milestones — from parabolic flights, to further launch opportunities with Blue Origin, to the International Space Station and even lunar landings.
