MIT Latest News

MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.

A novel way to advance a better battery design

Wed, 11/07/2018 - 12:00pm

Cadenza Innovation has developed a new design that improves the performance, cost, and safety of large lithium ion batteries. Now, with an unusual strategy for disseminating that technology, the company is poised to have an impact in industries including energy grid storage, industrial machines, and electric vehicles.

Rather than produce the batteries itself, Cadenza licenses its technology to manufacturers producing batteries for diverse applications. The company also works with licensees to both optimize their manufacturing processes and sell the new batteries to end users. The strategy ensures that the four-year-old company’s technology is deployed more quickly and widely than would otherwise be possible.

For Cadenza founder Christina Lampe-Onnerud, a former MIT postdoc and a battery industry veteran of more than 20 years, the goal is to help advance the industry just as the global demand for batteries reaches an inflection point.

“The crazy idea at the time [of the company’s founding] was to see if there was a different way to engage with the industry and help it accept a new technology in existing applications like cars or computers,” Lampe-Onnerud says. “Our thought was, if we really want to have an impact, we could inspire the industry to use existing capital deployed to get a better technology into the market globally and be a positive part of the climate change arena.”

With that lofty goal in mind, the Connecticut-based company has secured partnerships with organizations at every level of the battery supply chain, including suppliers of industrial minerals, original equipment manufacturers, and end users. Cadenza has demonstrated its proprietary “supercell” battery architecture in Fiat’s 500e car model and is in the process of completing a demonstration energy storage system to be used by the New York Power Authority, the largest state public utility company in the U.S., when energy demand is at its peak.

The company’s most significant partnership to date, however, was announced in September with Shenzhen BAK Battery Company, one of the world’s largest lithium ion battery manufacturers. The companies announced that BAK would begin mass-producing batteries based on Cadenza’s supercell architecture in the first half of 2019.

The supercell architecture

Lampe-Onnerud’s extensive contacts in the lithium ion battery space and world-renowned technical team have quickened the pace of Cadenza’s rise, but the underlying driver of the company’s success is simple economics: Its technology has been shown to offer manufacturers increased energy density in battery cells while reducing production costs.

The majority of rechargeable lithium ion batteries are powered by cylindrical sheets of metal known as “jelly rolls.” For use in big batteries, jelly rolls can be made either large, to limit the total cost of battery assembly, or small, to leverage a more efficient cell design that brings higher energy density. Many electric vehicle (EV) companies use large jelly rolls to avoid the durability and safety concerns that come with tightly packing small jelly rolls into a battery, which can lead to the failure of the entire battery if one jelly roll overheats.

Tesla famously achieves longer vehicle ranges by using small jelly rolls in its batteries, addressing safety issues with cooling tubes, intricate circuitry, and spacing between the rolls. But Cadenza has patented a simpler battery system it calls the “supercell,” which allows small jelly rolls to be tightly packed together into one module.

The key to the supercell is a noncombustible ceramic fiber material that each jelly roll sits in like an egg in a carton. The material helps to control temperature throughout the cell and isolate damage caused by an overheated jelly roll. A metal shunt wrapped around each jelly roll and a flame retardant layer of the supercell wall that relieves pressure in the case of a thermal event add to its safety advantages.

The enhanced safety allows Cadenza to package the jelly rolls tightly for greater energy density, and the supercell’s straightforward design, which leverages many parts that are currently manufactured at low costs and high volumes, keeps production costs down. Finally, each supercell module is designed to click together like LEGO blocks, making it possible for manufacturers to easily scale their battery sizes to fit customer needs.

Cadenza’s safety, cost, and performance features were validated during a grant program with the Advanced Research Projects Agency-Energy (ARPA-E), which gave the company nearly $4 million to test the architecture beginning in 2013.

When the supercell architecture was publicly unveiled in 2016, Lampe-Onnerud made headlines by saying it could be used to boost the range of Tesla’s cars by 70 percent. Now the goal is to get manufacturers to adopt the architecture.

“There will be many winners using this technology,” Lampe-Onnerud says. “We know we can deliver on the [safety, performance, and cost] claims. It’s going to be up to the licensee to decide how they leverage these advantages.”

At MIT, where “data gets to speak”

Lampe-Onnerud and her husband, Per Onnerud, who serves as Cadenza’s chief technology officer, held postdoctoral appointments at MIT after earning their PhDs at Uppsala University in their home country of Sweden. Lampe-Onnerud did lab work in inorganic chemistry in close collaboration with MIT materials science and mathematics professors, while Onnerud did research in the Department of Materials Science and Engineering. The experience left a strong impression on Lampe-Onnerud.

“MIT was a very formative experience,” she says. “You learn how to argue a point so that the data gets to speak. You just enable the data; there’s no spin. MIT has a special place in my heart.”

Lampe-Onnerud has maintained a strong connection with the Institute ever since, participating in alumni groups, giving guest lectures on campus, and serving as a member of the MIT Corporation visiting committee for the chemistry department — all while finding remarkable success in her career.

In 2004, Lampe-Onnerud founded Boston-Power, which she grew into an internationally recognized manufacturer of batteries for consumer electronics, vehicles, and industrial applications, serving as CEO until the company moved operations to China in 2012. In the early stages of the company, more than seven years after she had finished her postdoc work, Lampe-Onnerud discovered the enduring nature of support from the MIT community.

“We started looking for some angel investors, and one of the first groups that responded were the angels affiliated with MIT,” Lampe-Onnerud says. “We support each other because we tend to be attracted to intractable problems. It’s very much in the MIT spirit: We know, if we’re trying to solve big problems, it’s going to be difficult. So we like to collaborate.”

The high-profile experience at Boston-Power earned her distinctions including the Technology Pioneer Award from the World Economic Forum and Swedish Woman of the Year from the Swedish Women’s Educational Association. It also led some to deem her the “Queen of Batteries.”

Immediately after leaving Boston-Power, Lampe-Onnerud and her husband went to work on what would be Cadenza’s supercell architecture in their garage. They wanted to create a solution that would help lower the world’s carbon footprint, but they estimated that, at most, they’d be able to build one gigafactory every 18 months if they were to manufacture the batteries themselves. So they decided to license the technology instead.

The strategy has tradeoffs from a business perspective: Cadenza has needed to raise much less capital than Boston-Power did, but its licensees will capture the top-line and bottom-line growth while Cadenza receives a percentage of sales. Lampe-Onnerud is clearly happy to leverage her global network and share the upside to maximize Cadenza’s impact.

“My hope is that we are able to bring people together around this technology to do things that are really important, like taking down our carbon footprint, eliminating NOx [nitrogen oxide] emissions, or improving grid efficiency,” Lampe-Onnerud says. “It’s a different way to work together, so when an element of this ecosystem wins, we all win. It has been an inspiring process.”

Highlighting new research opportunities in civil and environmental engineering

Wed, 11/07/2018 - 10:50am

On October 18, alumni, postdocs, graduate students, and professors gathered at the Department of Civil and Environmental Engineering’s annual New Research Alumni Reception. The event invites the broader MIT CEE community back to campus to network with peers and learn about the innovative research projects that CEE faculty are working on.

Topics ranged from machine learning and blockchain to the regional impacts of global climate change and creative computing for high-performance design in structural engineering.

McAfee Professor of Engineering and department head Markus Buehler began the night by discussing new additions to CEE and MIT such as the grand opening of MIT.nano, the new MIT Schwarzman College of Computing to support the rise of artificial intelligence and data science, and a revamped student lounge named in honor of late professor and former department head Joseph Sussman. 

Buehler also highlighted the fieldwork offered in CEE, such as ONE-MA3 (Materials in Art, Archeology and Architecture) in Italy; TREX (Traveling Research Environmental Experiences) in Hawaii; and the Agricultural Microbial Ecology subject led by Professor Otto Cordero during January Independent Activities Period in Israel. Furthermore, Buehler explained that CEE provides students with resources such as the new Internship program, alumni panels and receptions, and career fairs, which all offer students a competitive edge post-graduation. 

“The future presents vast, global challenges. MIT is rising to meet them with a wide lens and broad scale work,” Buehler said.

Changing faces of computation

With MIT’s recent $1 billion investment going toward the new MIT Schwarzman College of Computing, Professor John Williams appropriately presented on “New Faces of Computation” — machine learning and blockchain — that are already impacting various aspects of our lives. 

“We can now build machines that can learn to do things that we as humans can’t do,” Williams said. In other words, as MIT President L. Rafael Reif has noted, “in order to partner with these machines, we will all need to be bilingual.”

Williams also discussed the importance of MIT’s Geospatial Data Center (GDC), which focuses on computation research in data science, cloud computing, cybersecurity, augmented reality, the internet of things, blockchain, and educational technology. Williams and Abel Sanchez, a CEE research scientist at the GDC, are introducing a new two-month online course called Digital Transformation, which focuses on understanding the technologies driving radical changes in industry. Upon satisfactory completion, the course provides an MIT certificate issued by the Professional Education Program.

The world has a vast amount of data that is constantly growing due to our devices, which are able to track everything and everyone. “The amount of data that we are generating has given rise to new innovations in machine learning,” Williams said. “We have big data that is now being leveraged by companies such as Facebook and Google.”

Williams explained that we are capable of creating smart cities by utilizing tools that can be applied to tracking people and transportation, producing detailed maps, conducting digital and high precision farming, and more. He added that the speed of our world is going to increase considerably because machines are capable of learning and outperforming humans.

Regional impacts of global climate change

While machine learning and big data are on the rise, so are the carbon dioxide emissions produced by human activity. Breene M. Kerr Professor Elfatih Eltahir discussed misconceptions of climate change, his research on the rise in global temperatures, and the relationship between climate and infectious diseases. 

He explained that while many people in America are aware that global warming exists, they do not believe it will directly influence their lives. Eltahir urged a shift in mindset, noting that we could experience very significant effects of global warming.

“What my group and I look at is how to translate the impacts of climate change into areas such as health, agriculture, water resources; focusing on what people care about at local and regional scales,” Eltahir explained.

Eltahir emphasized that alarming temperatures will be seen around the Persian Gulf, South Asia, and Eastern China, with some areas being uninhabitable during the summers. Eltahir and various CEE students collaborated in the creation of the MIT regional climate model (MRCM), as well as the Hydrology, Entomology and Malaria Transmission Simulator (HYDREMATS) in order to predict regional impacts of climate change. 

Eltahir’s research also examines the relationship between climate and the spread of malaria, showing that mosquitoes can survive only within a narrow temperature window. Although climate change presents many challenges, it could also allow for a decrease in malaria transmission in parts of Africa.

Creative computing for high performance design in structural engineering 

Integrating computation and climate with architecture, Caitlin Mueller, an associate professor in CEE and architecture, explained how using machine learning and optimization tools early in the design process can improve performance and merge structural engineering and design.

“I think performance, engineering, and efficiency can lead us to some of the most exciting design outcomes out there,” Mueller said. 

Building off of Eltahir’s presentation, Mueller explained the impacts that construction has on the environment. According to Mueller, buildings account for roughly 40 percent of worldwide energy use and produce a similar proportion of global emissions. She stressed that the environment has not been part of the conversation enough in terms of design, but she is hopeful that advancements in computation can help identify these impacts early on.

Mueller’s research lab, Digital Structures, focuses on linking architectural design and structural engineering through early-stage computational design tools, in order to help improve performance, energy efficiency, design, and cost.

In one example, Mueller developed a software tool called structureFIT, which helps designers create shapes for trusses and allows for human and computer collaboration. The software includes an interactive evolutionary algorithm in which the computer suggests diverse and high-performing designs to the user.
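The exact algorithm and scoring inside structureFIT are not described here, but the core interactive-evolutionary idea can be sketched in a few lines of Python. Everything below — the design representation, the toy objective, and the “designer picks” hook — is illustrative only, not the tool’s actual code.

```python
import random

def mutate(design, sigma=0.1):
    # A toy truss "design" is just a list of node-height parameters.
    return [max(0.1, h + random.gauss(0, sigma)) for h in design]

def toy_structural_score(design):
    # Placeholder objective standing in for a real structural analysis:
    # lower is better (think of it as a rough material-volume proxy).
    return sum(h ** 1.5 for h in design)

def evolve(seed, generations=10, pop_size=20, n_parents=5, designer_picks=None):
    """One loop of an interactive evolutionary search. The computer ranks
    designs by the objective; an interactive tool would also let the human
    designer promote designs they find interesting (designer_picks)."""
    population = [mutate(seed) for _ in range(pop_size)]
    for _ in range(generations):
        ranked = sorted(population, key=toy_structural_score)
        parents = ranked[:n_parents] + (designer_picks or [])
        population = [mutate(random.choice(parents)) for _ in range(pop_size)]
    return sorted(population, key=toy_structural_score)[:n_parents]

best_options = evolve(seed=[1.0] * 8)
```

The key point, mirrored in Mueller’s description, is that the human stays in the loop: the algorithm proposes a diverse set of high-performing options rather than silently converging on a single answer.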

With structureFIT and subsequent tools from Mueller’s group, the user can analyze the performance of each design much more efficiently. Designers sort through various considerations, such as how much material would be necessary, the impact on the environment, and which designs are the most intriguing to them. The challenge, however, is that the computer generates an overwhelming number of options.

Said Mueller: “There is huge opportunity to use advancements in data science and computation to start addressing this challenge we’ve created for ourselves.”

The machine clusters large data sets and organizes them into families of designs, or typologies. Sorting designs into typologies, a process that has historically been manual, can now be done by the computer, providing the designer with near-instantaneous feedback. Another machine learning application is surrogate modeling, which uses real-time approximations of slow simulations to give designers real-time performance information during the design process.
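Both ideas mentioned here — clustering designs into families and surrogate modeling of slow simulations — can be illustrated with standard off-the-shelf tools. The feature vectors and the “slow simulation” below are invented for illustration; they are not the group’s actual data or models.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Pretend each candidate design is described by 6 numeric parameters.
designs = rng.uniform(0, 1, size=(500, 6))

def slow_simulation(x):
    # Stand-in for an expensive structural or energy analysis.
    return np.sin(3 * x[0]) + x[1] ** 2 + 0.5 * x[2] * x[3]

# 1) Cluster the design space into "families" (typologies).
families = KMeans(n_clusters=5, n_init=10, random_state=0).fit_predict(designs)
print(np.bincount(families))  # size of each design family

# 2) Fit a surrogate on a small set of simulated designs, then use it
#    for instant approximate feedback on any new design.
simulated = designs[:100]
targets = np.array([slow_simulation(x) for x in simulated])
surrogate = RandomForestRegressor(n_estimators=200, random_state=0).fit(simulated, targets)

new_design = rng.uniform(0, 1, size=(1, 6))
print(surrogate.predict(new_design))  # near-real-time performance estimate
```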

With the help of robots that 3-D print extremely efficient and complex structures, Mueller is able to materialize these designs, while maintaining creativity and performance. 

“Using this combination of state-of-the-art path planning algorithms with topology optimization, we can finally make these structures a reality,” Mueller said. 

“We are investigating a lot of different topics, and collaborating with students in computer science,” she said. “What really unites us is that performance can absolutely play a critical role in creative design when we use these digital tools in creative ways.”  

The alumni and community members present expressed enthusiasm about the new research projects coming from the department — from computation to climate change and architecture, the innovative research was described as interrelated and highly impactful. After questions and discussion from the attendees, the evening concluded.

“The research our faculty conduct on a daily basis is inspiring,” said Buehler, reflecting on the night. “It is always gratifying when we have the opportunity to show alumni the innovative direction the department is moving in.”

Ekene Ijeoma joins MIT Media Lab

Wed, 11/07/2018 - 10:10am

Artist Ekene Ijeoma will join the MIT Media Lab, founding and directing the Poetic Justice research group, in January 2019. Ijeoma, who will be an assistant professor, works at the intersections of design, architecture, music, performance, and technology, creating multisensory artworks from personal experiences, social issues, and data studies.

Ijeoma's work explores topics and issues ranging from refugee migration to mass incarceration. At its most basic level, the work aspires to embody human conditions, expand people's thoughts, and engage them in imagining change and acting on it. At the lab, Ijeoma will continue this work in developing new forms of justice through artistic representation and intervention.

“New forms of justice can emerge through art that engages with social, cultural and political issues — ones that aren’t tied to codified laws and biased systems,” he says.

When asked to define “poetic justice,” Ijeoma explained that, for him, the phrase is about using code-switching content, form, context, and function to create artwork with rhythm and harmony that extends our perceptions and exposes the social-political systems affecting us as individuals. Examples include “Deconstructed Anthems,” an ongoing series of music performances and light installations that explores the inequalities of the American Dream and the realities of mass incarceration through “The Star-Spangled Banner,” and “Pan-African AIDS,” a sculpture examining the hypervisibility of the HIV/AIDS epidemic in Africa and the hidden one in Black America. “Pan-African AIDS” is on display through April 2019 at the Museum of the City of New York as part of the exhibit Germ City: Microbes and the Metropolis.

Ijeoma’s art practice has been primarily project-based and commission-driven. His recent large works, both deeply conceptual and highly technical projects, required research and development to happen concurrently with the production of the work. At the Media Lab, with more room for trial, error, and failure, he will have the resources and facilities to stay reflective and proactive, to create work outside of commissions, and to expand more artworks into series. He will also have more opportunities to listen to and meditate on issues.

“Like many artists,” Ijeoma said, “a lot of my work comes from vibing and forward thinking — channeling my environment and signaling out the noise.” This aspect of his practice is reflected in works such as “The Refugee Project” (2014), released a few months before the European refugee crisis; “Look Up” (2016), released a few days before Pokemon Go; and, more recently, “Pan-African AIDS,” which was presented as news was breaking on the underreported AIDS epidemic among black populations in areas including the American South.

Ijeoma’s work has been commissioned and presented by venues and events including the Museum of Modern Art, The Kennedy Center, the Design Museum, the Istanbul Design Biennial, Fondation EDF, the Annenberg Space for Photography, the Neuberger Museum of Art at the State University of New York at Purchase, and Storefront for Art and Architecture.

“We are thrilled that Ekene Ijeoma will be joining the Media Lab and MAS program,” said Tod Machover, head of the Program in Media Arts and Sciences, the Media Lab’s academic program. “Ekene’s work is brilliant, bold, and beautiful, and the way he combines expression, reflection, innovation, and activism will place him at the absolute center of Media Lab culture, hopefully for many years to come.”

Ekene Ijeoma graduated with a BS in information technology from Rochester Institute of Technology, and an MA in interaction design from Domus Academy. He has lectured and critiqued at Yale University, Harvard Law School, Columbia University, New York University, the School of Visual Arts, and The New School.

Study: There’s real skill in fantasy sports

Wed, 11/07/2018 - 12:00am

If you’ve ever taken part in the armchair sport of fantasy football and found yourself at the top of your league’s standings at the end of the season, a new MIT study suggests your performance — however far removed from any actual playing field — was likely based on skill rather than luck.

Those looking for ways to improve their fantasy game will have to look elsewhere: The study doesn’t identify any specific qualities that make one fantasy player more skilled than another. Instead, the researchers found, based on the win/loss records of thousands of fantasy players over multiple seasons, that fantasy football is inherently a contest that rewards skill.

“Some [fantasy] players may know more about statistics, rules of the game, which players are injured, effects of weather, and a host of other factors that make them better at picking players — that’s the skill in fantasy sports,” says Anette “Peko” Hosoi, associate dean of engineering at MIT. “We ask, does that skill have an impact on the outcome of the [fantasy] game? In our analysis, the signal for skill in the data is very clear.”

Other fantasy sports such as baseball, basketball, and hockey also appear to be games of skill — considerably more so than activities based on pure chance, such as coin-flipping. What ultimately do these results mean for the average fantasy player?

“They probably can’t use our study to assemble better sports teams,” says Hosoi, who is also the Neil and Jane Pappalardo Professor of Mechanical Engineering. “But they can use it to talk better smack when they’re at the top of their standings.”

The team’s findings appear this week in the Society for Industrial and Applied Mathematics Review. Hosoi’s co-authors are first author Daniel Getty, a graduate student in MIT’s Department of Aeronautics and Astronautics; graduate student Hao Li; former graduate student Charles Gao; and Masayuki Yano of the University of Toronto.

A fantasy gamble

Hosoi and her colleagues began looking into the roles of skill and chance in fantasy sports several years ago, when they were approached by FanDuel, the second-largest company in the daily fantasy sports industry. FanDuel provides online platforms for more than 6 million registered users, who use the site to create and manage fantasy teams — virtual teams made up of real professional athletes, whom fantasy players pick and draft to their rosters. Players can pit their team against other virtual teams, and whether a team wins or loses depends on how the real players perform in actual games in a given day or week.

In recent years, the question has arisen as to whether fantasy sports are a potential form of online gambling. Under a federal law known as the Unlawful Internet Gambling Enforcement Act, or UIGEA, online players of games such as poker are prohibited from transmitting across state lines funds won through gambling activities using the internet. The law exempts fantasy sports, stating that the game is not a form of betting or wagering.

However, the UIGEA was not drafted to alter the legality of internet wagering, which is, for the most part, determined by individual states. As fantasy sports — and fantasy football in particular — have grown more popular, with prominent ads on commercial and cable television, a handful of states have questioned the legality of fantasy sports and the companies that enable them.

Gambling, of course, is defined as any money-exchanging activity that depends mostly on chance. Fantasy sports would not be considered a form of gambling if they were proven to be more a contest of skill than of chance.

“That is the question that FanDuel wanted us to investigate: Have they designed the contest such that skill is rewarded? If so, then these contests should be classified as games of skill, and are not gambling,” Hosoi says. “They gave us all of their data, and asked whether we could determine the relative role of skill and luck in the outcomes.”

Tests of skill and chance

The team analyzed daily fantasy competitions played on FanDuel during the 2013 and 2014 seasons, in baseball, basketball, hockey, and football. In their analysis, the researchers followed guidelines laid out originally by economist and “Freakonomics” author Steven Levitt, along with Thomas Miles and Andrew Rosenfield. In a research paper they wrote in 2012, the economists sought to determine whether a game — in this case, poker — was based more on skill than on chance.

They reasoned that if a game were more skill-based, then a player’s performance should be persistent. It might be good or bad, but it would remain relatively constant over multiple rounds.

To test this in the context of fantasy sports, Hosoi’s team looked at the win/loss record of every fantasy player in FanDuel’s dataset, over one season. For each (anonymized) player, the researchers calculated the fraction of wins the player experienced over the first half of the season versus the second half. They then represented each player’s performance over an entire season as a single dot on a graph whose vertical and horizontal axes represented the win fraction for the first and second halves of the season, respectively.

If a given fantasy sport were based more on skill, then an individual player’s win fraction should be approximately the same — be it 90 percent or 10 percent — for the first and second halves of the season. When every player’s performance is plotted on the same graph, it should roughly resemble a line, indicating a prevalence of skill. On the other hand, if the game were one of chance, every player should have around a 50 percent win fraction, which on the graph would look more like a circular cloud.
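A minimal version of this persistence check is straightforward to write down. The per-game table below is hypothetical and the paper’s own analysis is more careful, but the mechanics are the same: compute each player’s win fraction in each half of the season and see whether the two line up.

```python
import pandas as pd

# Hypothetical per-game results: one row per (player_id, game_date, won).
games = pd.read_csv("fantasy_games.csv", parse_dates=["game_date"])

# Split the season at its midpoint date.
midpoint = games["game_date"].sort_values().iloc[len(games) // 2]
games["half"] = (games["game_date"] > midpoint).map({False: "first", True: "second"})

# Win fraction for each player in each half of the season.
win_frac = games.groupby(["player_id", "half"])["won"].mean().unstack("half").dropna()

# Skill shows up as persistence: first-half and second-half win fractions
# should correlate (a "line") rather than scatter around 0.5 (a "cloud").
print(win_frac[["first", "second"]].corr())
```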

For every fantasy sport, the researchers found the graph skewed more linear than circular, indicating games of skill rather than chance.

The researchers tested a second hypothesis proposed by Levitt: If a game is based on chance, then every player should have the same expected outcome, just as flipping a coin has the same probability of landing heads or tails. To test this idea, the team split the fantasy player population into two groups: those who played a large number of games and those who participated in only a few.

“Even when you correct for biases, like people who quit after losing a lot of games in a row, you find there’s a statistically higher win fraction for people who play a lot versus a little, regardless of the [type of] fantasy sport, which is indicative of skill,” Hosoi says.
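As a rough illustration of this second test — the study’s actual procedure, including the bias corrections Hosoi mentions, is more involved — one could compare the win fractions of high-volume and low-volume players directly. The data file, column names, and 50-game threshold below are hypothetical.

```python
import pandas as pd
from scipy import stats

# Hypothetical per-player summary: player_id, games_played, games_won.
players = pd.read_csv("fantasy_players.csv")
players["win_frac"] = players["games_won"] / players["games_played"]

heavy = players.loc[players["games_played"] >= 50, "win_frac"]
light = players.loc[players["games_played"] < 50, "win_frac"]

# Under pure chance, both groups should hover around the same mean win
# fraction; a persistent gap in favor of heavy players points toward skill.
t_stat, p_value = stats.ttest_ind(heavy, light, equal_var=False)
print(heavy.mean(), light.mean(), p_value)
```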

The last test, again proposed by Levitt, was to see whether a player’s actions had any impact on the game’s outcome. If the answer is yes, then the game must be one of skill.

“So we looked at how the actual playing population on FanDuel performed, versus a random algorithm,” Hosoi says.

The researchers devised an algorithm that created randomly generated fantasy teams from the same pool of players that were available to the FanDuel users. The algorithm was designed to follow the rules of the game and to be relatively smart in how it generated each team.

“We ran hundreds of thousands of games, and looked at the scores of actual fantasy players, versus scores of computer-generated fantasy players,” Hosoi says. “And you see again that the fantasy players beat the computer-generated ones, indicating that there must be some skill involved.”
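The researchers’ random-lineup algorithm is not published in detail here, so the sketch below only conveys the flavor of such a baseline. It assumes a salary-cap, fixed-roster format typical of daily fantasy contests — an assumption, not FanDuel’s actual rules — draws random rule-respecting lineups, and scores them with the points the real athletes earned.

```python
import random

def random_lineup(player_pool, roster_slots, salary_cap, max_tries=10_000):
    """player_pool maps a position (e.g. 'QB') to a list of player dicts with
    'name', 'salary', and 'points' keys; all of these names are hypothetical."""
    for _ in range(max_tries):
        lineup = [random.choice(player_pool[slot]) for slot in roster_slots]
        names = {p["name"] for p in lineup}
        if len(names) == len(lineup) and sum(p["salary"] for p in lineup) <= salary_cap:
            return lineup
    raise RuntimeError("no valid lineup found within max_tries")

def lineup_score(lineup):
    # Score the lineup by the fantasy points the real players actually produced.
    return sum(p["points"] for p in lineup)
```

Comparing the score distribution of many such random lineups against the scores of real entries is the essence of this third test: if humans reliably beat the random baseline, their choices matter.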

Sports on a spectrum

To put their findings in perspective, the researchers plotted the results of each fantasy sport on a spectrum of luck versus skill. Along this spectrum, they also included each fantasy sport’s real counterpart, along with other activities, such as coin flipping, based entirely on chance, and cyclocross racing, which hinges almost entirely on skill.

For the most part, success while playing both fantasy sports and real sports skewed more toward skill, with baseball and basketball, both real and virtual, being more skill-based compared to hockey and football.

Hosoi reasons that skill may play a relatively large role in basketball because the sport encompasses more than 80 games in a season.

“That’s a lot of games, and there are a lot of scoring opportunities in each game,” Hosoi says. “If you get a lucky basket, it doesn’t matter too much. Whereas in hockey, there are so few scoring opportunities that if you get a lucky goal it makes a difference, and luck can play a much larger role.”
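A back-of-envelope model, not from the study, makes that intuition concrete: if a game offers n roughly independent scoring opportunities, each converted with probability p, the fraction converted has standard deviation

$$\sigma = \sqrt{\frac{p(1-p)}{n}},$$

which shrinks as n grows. One lucky basket among dozens of possessions barely moves a basketball result, while one lucky goal in a low-scoring hockey game can decide it.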

Hosoi says the team’s results will ultimately be useful in characterizing fantasy sports, both in and out of the legal system.

“This is one piece of evidence [courts] have to weigh,” Hosoi says. “What I can give them is a quantitative analysis of where [fantasy sports] sit on the skill/luck spectrum. It’s mostly skill, but there’s always a little bit of luck.”

This research was supported, in part, by FanDuel.

Inside the world of livestreaming as entertainment

Wed, 11/07/2018 - 12:00am

Several years ago, a couple thousand people filed into Le Grand Rex, a Paris auditorium, to watch a performance. It was not a concert, however. Instead, a group of professional computer-game players competed to see who could win at “StarCraft 2,” a science fiction game where human exiles from Earth battle aliens.

Beyond the audience watching in person was another audience streaming an online broadcast of the competition — including T.L. Taylor, a professor in the Comparative Media Studies/Writing program at MIT.

For years, Taylor has been chronicling the rise of esports: competitive computer games watched by audiences like the one at Le Grand Rex. But, as she details in a new book, esports showcases are part of a larger cultural trend toward livestreaming as a distinctive mode of entertainment. That trend also encompasses a scrappier outsider culture of do-it-yourself gaming broadcasts and other uses of streaming, a genre as popular as it is overlooked in the mainstream media.

“We’re at a fascinating moment right now,” says Taylor, about the growth of the livestreaming movement.

And now, in her book, “Watch Me Play: Twitch and the Rise of Game Livestreaming,” Taylor examines the ascendance of livestreaming in its many forms, while analyzing the commercialization of streaming and some of the social tensions that come with the subculture.

As Taylor emphasizes in the book, the rise of livestreaming is very much tied to Twitch, the San Francisco-based streaming website where people broadcast their contests, and their lives. Twitch has about 10 million active daily users and was purchased by Amazon in 2014.

“Formalized competitive computer gaming has been around for decades,” Taylor notes. “But it also used to be a lot of work to be a fan. You had to know what specialist websites to visit. You had to download replay files or seek out recorded videos. Livestreaming changed everything.”

Originally, livestreaming was not necessarily meant to focus on gaming. Instead, it was partly conceived as a “new form of reality TV,” according to Justin Kan, who in 2007 founded Justin.tv, a site broadcasting events from his own life. After seeing how popular livestreaming of gaming was, however, Kan and some partners founded Twitch as a separate platform. It has since grown to encompass people who stream cooking and “social eating” content, music shows, and more.

Still, computer gaming remains a principal driver of livestreaming. One branch of this has become organized esports, complete with teams, sponsors, and corporate investment. Another branch consists of individuals building their own audiences and brands, one gaming session at a time, broadcasting on camera while playing and interacting with their audiences.

This can be a grueling occupation. In the book, Taylor visits the home of a suburban Florida gaming entrepreneur while he broadcasts a playing session that begins at 3:30 a.m., to draw a global audience. After several hours, the session netted this independent livestreaming player about 50 new subscribers, 800 new followers, and $500 in donations, all while his children slept.

“Eventually these livestreamers become not only content producers but also brand and community managers,” Taylor writes in the book. Some of them are also unlikely broadcast personalities, by their own admission. “I guess a part of me is that talkative person on the screen, but as soon as it goes off … I’m kind of a quiet person offstream,” says gaming star J.P. McDaniel, as recounted in Taylor’s book.

Meanwhile, livestreaming is a heavily male-dominated field. As Taylor documents in the book, women, people of color, and participants from the LGBTQ community can face serious levels of harassment, which limits their participation in the culture.

“Women also continue to face stereotypes and pushback when they focus on competitive games and have professional aspirations,” Taylor writes in the book. Indeed, a central theme of “Watch Me Play” is that all forms of livestreaming, including professional esports, have much to tell us about larger social trends, instead of existing as a kind of cultural cul-de-sac.

“Far too often we imagine what happens in play and games as being separate from ‘real life,’” says Taylor. “But our leisure is infused with not only our identities and social worlds, but broader cultural issues. This is probably most obvious when we think about how gender plays a powerful role in our leisure, shaping who is seen as legitimately allowed to play, what they can play, and in what ways.”

For this reason, Taylor adds, “those very moments when people are engaging in play remain some of the most politically infused spaces” in society. Thus, for all the novelty, Taylor hopes her study of livestreaming will appeal to those who have never watched competitive computer games, alone or at Le Grand Rex.   

“My hope is that it [the book] gets picked up by not only those who are interested in livestreaming, but readers who might want to finally understand how to think about gaming” as it expands in society, and as entertainment becomes diversified across media platforms, Taylor says.

“Digital games have become a part of many people’s everyday lives,” she adds. “My hope is that the work helps make clear what is at stake in that.”

Machine-learning system could aid critical decisions in sepsis care

Wed, 11/07/2018 - 12:00am

Researchers from MIT and Massachusetts General Hospital (MGH) have developed a predictive model that could guide clinicians in deciding when to give potentially life-saving drugs to patients being treated for sepsis in the emergency room.

Sepsis is one of the most frequent causes of admission, and one of the most common causes of death, in the intensive care unit. But the vast majority of these patients first come in through the ER. Treatment usually begins with antibiotics and intravenous fluids, a couple liters at a time. If patients don’t respond well, they may go into septic shock, where their blood pressure drops dangerously low and organs fail. Then it’s often off to the ICU, where clinicians may reduce or stop the fluids and begin vasopressor medications such as norepinephrine and dopamine, to raise and maintain the patient’s blood pressure.

That’s where things can get tricky. Administering fluids for too long may not be useful and could even cause organ damage, so early vasopressor intervention may be beneficial. In fact, early vasopressor administration has been linked to improved mortality in septic shock. On the other hand, administering vasopressors too early, or when not needed, carries its own negative health consequences, such as heart arrhythmias and cell damage. But there’s no clear-cut answer on when to make this transition; clinicians typically must closely monitor the patient’s blood pressure and other symptoms, and then make a judgment call.

In a paper being presented this week at the American Medical Informatics Association’s Annual Symposium, the MIT and MGH researchers describe a model that “learns” from health data on emergency-care sepsis patients and predicts whether a patient will need vasopressors within the next few hours. For the study, the researchers compiled the first-ever dataset of its kind for ER sepsis patients. In testing, the model could predict a need for a vasopressor more than 80 percent of the time.

Early prediction could, among other things, prevent an unnecessary ICU stay for a patient who doesn’t need vasopressors, or start early ICU preparation for a patient who does, the researchers say.

“It’s important to have good discriminating ability between who needs vasopressors and who doesn’t [in the ER],” says first author Varesh Prasad, a PhD student in the Harvard-MIT Program in Health Sciences and Technology. “We can predict within a couple of hours if a patient needs vasopressors. If, in that time, patients got three liters of IV fluid, that might be excessive. If we knew in advance those liters weren’t going to help anyway, they could have started on vasopressors earlier.”

In a clinical setting, the model could be implemented in a bedside monitor, for example, that tracks patients and sends alerts to clinicians in the often-hectic ER about when to start vasopressors and reduce fluids. “This model would be a vigilance or surveillance system working in the background,” says co-author Thomas Heldt, the W. M. Keck Career Development Professor in the MIT Institute of Medical Engineering and Science. “There are many cases of sepsis that [clinicians] clearly understand, or don’t need any support with. The patients might be so sick at initial presentation that the physicians know exactly what to do. But there’s also a ‘gray zone,’ where these kinds of tools become very important.”

Co-authors on the paper are James C. Lynch, an MIT graduate student; and Trent D. Gillingham, Saurav Nepal, Michael R. Filbin, and Andrew T. Reisner, all of MGH. Heldt is also an assistant professor of electrical and biomedical engineering in MIT’s Department of Electrical Engineering and Computer Science and a principal investigator in the Research Laboratory of Electronics.

Other models have been built to predict which patients are at risk for sepsis, or when to administer vasopressors, in ICUs. But this is the first model trained on the task for the ER, Heldt says. “[The ICU] is a later stage for most sepsis patients. The ER is the first point of patient contact, where you can make important decisions that can make a difference in outcome,” Heldt says.

The primary challenge has been a lack of an ER database. The researchers worked with MGH clinicians over several years to compile medical records of nearly 186,000 patients who were treated in the MGH emergency room from 2014 to 2016. Some patients in the dataset had received vasopressors within the first 48 hours of their hospital visit, while others hadn’t. Two researchers manually reviewed all records of patients with likely septic shock to include the exact time of vasopressor administration, and other annotations. (The average time from presentation of sepsis symptoms to vasopressor initiation was around six hours.)

The records were randomly split, with 70 percent used for training the model and 30 percent for testing it. In training, the model extracted up to 28 of 58 possible features from patients who needed or didn’t need vasopressors. Features included blood pressure, elapsed time from initial ER admission, total fluid volume administered, respiratory rate, mental status, oxygen saturation, and changes in cardiac stroke volume — how much blood the heart pumps in each beat.

In testing, the model analyzes many or all of those features in a new patient at set time intervals and looks for patterns indicative of a patient who ultimately needed vasopressors or one who didn’t. Based on that information, it makes a prediction, at each interval, about whether the patient will need a vasopressor. In predicting whether patients needed vasopressors in the next two or more hours, the model was correct 80 to 90 percent of the time, which on average could prevent the administration of an excess half-liter or more of fluids.
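The paper’s exact model and feature engineering are not reproduced here; the sketch below only shows the general shape of such a predictor, using a handful of the features named above with hypothetical column names and an off-the-shelf classifier. The 70/30 split mirrors the one described in the article.

```python
import pandas as pd
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Hypothetical table: one row per patient per time interval, labeled with
# whether vasopressors were started within the next few hours.
df = pd.read_csv("er_sepsis_intervals.csv")
features = ["mean_arterial_pressure", "hours_since_admission", "fluids_given_ml",
            "respiratory_rate", "oxygen_saturation", "stroke_volume_change"]

X_train, X_test, y_train, y_test = train_test_split(
    df[features], df["needs_vasopressor"], test_size=0.3, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))
```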

“The model basically takes a set of current vital signs, and a little bit of what the trajectory looks like, and determines that this current observation suggests this patient might need vasopressors, or this set of variables suggests this patient would not need them,” Prasad says.

Next, the researchers aim to expand the work to produce more tools that predict, in real-time, if ER patients may initially be at risk for sepsis or septic shock. “The idea is to integrate all these tools into one pipeline that will help manage care from when they first come into the ER,” Prasad says.

The idea is to help clinicians at emergency departments in major hospitals such as MGH, which sees about 110,000 patients annually, focus on the most at-risk populations for sepsis. “The problem with sepsis is the presentation of the patient often belies the seriousness of the underlying disease process,” Heldt says. “If someone comes in with weakness and doesn’t feel right, a little bit of fluids may often do the trick. But, in some cases, they have underlying sepsis and can deteriorate very quickly. We want to be able to tell which patients have become better and which are on a critical path if left untreated.”

The work was supported, in part, by a National Defense Science and Engineering Graduate Fellowship, the MIT-MGH Strategic Partnership, and by CRICO Risk Management Foundation and Nihon Kohden Corporation.

Why some Wikipedia disputes go unresolved

Tue, 11/06/2018 - 12:17pm

Wikipedia has enabled large-scale, open collaboration on the internet’s largest general-reference resource. But, as with many collaborative writing projects, crafting the content can be a contentious subject.

Often, multiple Wikipedia editors will disagree on certain changes to articles or policies. One of the main ways to officially resolve such disputes is the Requests for Comment (RfC) process. Quarreling editors will publicize their deliberation on a forum, where other Wikipedia editors will chime in and a neutral editor will make a final decision.

Ideally, this should solve all issues. But a novel study by MIT researchers finds that debilitating factors — such as excessive bickering and poorly worded arguments — have left about one-third of RfCs unresolved.

For the study, the researchers compiled and analyzed the first-ever comprehensive dataset of RfC conversations, captured over an eight-year period, and conducted interviews with editors who frequently close RfCs, to understand why they don’t find a resolution. They also developed a machine-learning model that leverages that dataset to predict when RfCs may go stale. And, they recommend digital tools that could make deliberation and resolution more effective.

“It was surprising to see a full third of the discussions were not closed,” says Amy X. Zhang, a PhD candidate in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and co-author on the paper, which is being presented at this week’s ACM Conference on Computer-Supported Cooperative Work and Social Computing. “On Wikipedia, everyone’s a volunteer. People are putting in the work, and they have interest … and editors may be waiting on someone to close so they can get back to editing. We know, looking through the discussions, the job of reading through and resolving a big deliberation is hard, especially with back and forth and contentiousness. [We hope to] help that person do that work.”

The paper’s co-authors are: first author Jane Im, a graduate student at the University of Michigan’s School of Information; Christopher J. Schilling of the Wikimedia Foundation; and David Karger, a professor of computer science and CSAIL researcher.

(Not) finding closure

Wikipedia offers several channels to solve editorial disputes, which involve two editors hashing out their problems, putting ideas to a simple majority vote from the community, or bringing the debate to a panel of moderators. Some previous Wikipedia research has delved into those channels and back-and-forth “edit wars” between contributors. “But RfCs are interesting, because there’s much less of a voting mentality,” Zhang says. “With other processes, at the end of day you’ll vote and see what happens. [RfC participants] do vote sometimes, but it’s more about finding a consensus. What’s important is what’s actually happening in a discussion.”

To file an RfC, an editor drafts a template proposal, based on a content dispute that wasn’t resolved in an article’s basic “talk” page, and invites comment from the broader community. Proposals run the gamut from minor disagreements about a celebrity’s background information to changes in Wikipedia’s policies. Any editor can initiate an RfC, and any editor — usually a more experienced one — who didn’t participate in the discussion and is considered neutral may close it. After 30 days, a bot automatically removes the RfC template, with or without resolution. RfCs can close formally with a summary statement by the closer, close informally due to overwhelming agreement among participants, or be left stale, meaning removed without resolution.

For their study, the researchers compiled a database consisting of about 7,000 RfC conversations from the English-language Wikipedia from 2011 to 2017, which included closing statements, author account information, and general reply structure. They also conducted interviews with 10 of Wikipedia’s most frequent closers to better understand their motivations and considerations when resolving a dispute.

Analyzing the dataset, the researchers found that about 57 percent of RfCs were formally closed. Of the remaining 43 percent, 78 percent (or around 2,300) were left stale without informal resolution — or, about 33 percent of all the RfCs studied. Combining dataset analysis with the interviews, the researchers then fleshed out the major causes of resolution failure. Major issues include poorly articulated initial arguments, where the initiator is unclear about the issue or writes a deliberately biased proposal; excessive bickering during discussions that lead to more complicated, longer, argumentative threads that are difficult to fully examine; and simple lack of interest from third-party editors because topics may be too esoteric, among other factors.

Helpful tools

The team then developed a machine-learning model to predict whether a given RfC would close (formally or informally) or go stale, by analyzing more than 60 features of the text, the Wikipedia page, and editor account information. The model achieved 75 percent accuracy in predicting failure or success within one week after a discussion started. Some of the more informative features for prediction, they found, include the length of the discussion, the number of participants and replies, the number of revisions to the article, the popularity of and interest in the topic, the experience of the discussion participants, and the level of vulgarity, negativity, and general aggression in the comments.
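The study’s full 60-plus feature set and model are not reproduced here; the sketch below just shows how a few of the named features could feed a simple stale-versus-closed classifier. The data file and column names are hypothetical.

```python
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Hypothetical table: one row per RfC, labeled 1 if it eventually closed
# (formally or informally) and 0 if it went stale.
rfcs = pd.read_csv("rfc_discussions.csv")
features = ["discussion_length_words", "num_participants", "num_replies",
            "article_revisions", "initiator_edit_count", "toxicity_score"]

clf = RandomForestClassifier(n_estimators=300, random_state=0)
scores = cross_val_score(clf, rfcs[features], rfcs["closed"], cv=5)
print("cross-validated accuracy:", scores.mean())
```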

The model could one day be used by RfC initiators to monitor a discussion as it’s unfolding. “We think it could be useful for editors to know how to target their interventions,” Zhang says. “They could post [the RfC] to more [Wikipedia forums] or invite more people, if it looks like it’s in danger of not being resolved.”

The researchers suggest Wikipedia could develop tools to help closers organize lengthy discussions, flag persuasive arguments and opinion changes within a thread, and encourage collaborative closing of RfCs.

In the future, the model and proposed tools could potentially be used for other community platforms that involve large-scale discussions and deliberations. Zhang points to online city- and community-planning forums, where citizens weigh in on proposals. “People are discussing [the proposals] and voting on them, so the tools can help communities better understand the discussions … and would [also] be useful for the implementers of the proposals.”

Zhang, Im, and other researchers have now built an external website for editors of all levels of expertise to come together to learn from one another, and more easily monitor and close discussions. “The work of closer is pretty tough,” Zhang says, “so there’s a shortage of people looking to close these discussions, especially difficult, longer, and more consequential ones. This could help reduce the barrier to entry [for editors to become closers] and help them collaborate to close RfCs.”

“While it is surprising that a third of these discussions were never resolved, [what’s more] important are the reasons why discussions fail to come to closure, and the most interesting conclusions here come from the qualitative analyses,” says Robert Kraut, a professor emeritus of human-computer interaction at Carnegie Mellon University. “Some [of the study’s] findings transcend Wikipedia and can apply to many discussions in other settings.” More work, he adds, could be done to improve the accuracy of the machine-learning model in order to provide more actionable insights to Wikipedia.

The study sheds light on how some RfC processes “deviate from established norms, leading to inefficiencies and biases,” says Dario Taraborelli, director of research at the Wikimedia Foundation. “The results indicate that the experience of participants and the length of a discussion are strongly predictive of the timely closure of an RfC. This brings new empirical evidence to the question of how to make governance-related discussions more accessible to newcomers and members of underrepresented groups.”

Heat-seeking studies

Tue, 11/06/2018 - 12:00pm

Months of painstaking setup and delicate experimentation paid off recently for Artyom Kossolapov with a thrilling moment of discovery. It came while the nuclear science and engineering student was conducting research for his master's degree, observing a metallic device immersed in water.

“As we increased power, bubbles emerged that formed a film coating the metal surface, trapping heat and creating a hotspot — a big blob — and I realized I was watching the phenomenon of critical heat flux in real time,” recalls Kossolapov. “It was very exciting seeing this happen live; it felt like a big breakthrough.”

Critical heat flux (CHF) poses a central problem for the field of nuclear engineering. The conditions underlying CHF occur in boiling water systems such as nuclear reactors, Kossolapov explains, when a dangerous combination of heat and pressure “creates so many bubbles that they clump together, forming a vapor film, which insulates the heated surface from cooling liquids and potentially creates dangerous instability.”

CHF is unlikely to occur in nuclear reactors in large part because plant operators build in very wide safety margins, running their reactors at power levels well below those that might kick off the formation of vaporous films. This means that reactors do not generate as much energy as they might. To solve this problem, scientists have long tried to elucidate with confidence and specificity the conditions that give rise to CHF.
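For context — this is a standard textbook estimate, not the models under development here — the simpler case of pool boiling on a large horizontal surface has a classic hydrodynamic correlation due to Zuber:

$$q''_{\mathrm{CHF}} \approx 0.131\, h_{fg}\, \rho_v^{1/2}\, \bigl[\sigma g\, (\rho_l - \rho_v)\bigr]^{1/4},$$

where h_fg is the latent heat of vaporization, ρ_l and ρ_v are the liquid and vapor densities, σ is the surface tension, and g is gravitational acceleration. Subcooled flow boiling at reactor pressures, the regime Kossolapov studies, is far less well characterized — which is precisely the gap his experiments target.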

“There hasn't been any consensus explanation for the physics of the phenomenon,” says Kossolapov. “This is because there have been limitations on the kinds of measurements scientists could make.”

But with the efforts of Kossolapov and others at the Department of Nuclear Science and Engineering, those limitations are diminishing. Working under the direction of Matteo Bucci, the Norman C. Rasmussen Assistant Professor of Nuclear Science and Engineering, Kossolapov has devised a novel experimental approach to deploying advanced diagnostics to investigate the mechanics of boiling heat transfer in nuclear energy systems. 

Using synchronized rapid video imaging and high-speed infrared thermometry — a camera measuring temperatures on minute surface areas at microsecond intervals — Kossolapov’s team has been able to map the infinitesimal changes taking place on heated surfaces, like those inside pressurized water reactors, that can produce the CHF phenomenon in milliseconds. “We found a lot of interesting physics never revealed before,” he says.

The results of this successful series of experiments were published in the journal Experimental Thermal and Fluid Science, in an article titled “Investigation of subcooled flow boiling and CHF using high-resolution diagnostics.”

With these new tools, Kossolapov says he hopes “to help improve models of this phenomenon, so that reactors might eventually be operated at higher power.”

“Even a several percent increase in power over the entire reactor fleet around the world would deliver a big boost to total electric production — all of it in the form of clean energy,” he says.

For Kossolapov, the CHF work represents the payoff of years of research, both at MIT and in Russia, where he began his academic career. Born and raised in Almaty, Kazakhstan’s largest city, Kossolapov found he had a predilection for physics and math. A middle school science teacher familiar with nuclear power communicated her excitement about the field, and this stuck with him.

By the end of high school, when he says he found his interests “constrained by geography,” Kossolapov headed to Peter the Great St. Petersburg Polytechnic University. Seeking a degree in a STEM field, especially one that emphasized physics, he landed on the university's nuclear program.

“It clicked for me,” he says. “I wanted to go into the energy sector, and nuclear seemed most interesting, and most likely to open up a very broad and important career path.”

Although the curriculum for nuclear engineering focused on classroom instruction, Kossolapov chanced into a research opportunity with a professor from another department. “That was kind of a turnaround moment for me,” he says. He was plunged into a series of projects working with heat flux sensors and laser diagnostics for measuring and improving heat flow through surfaces.

“We manipulated different models in a two-story wind tunnel, modifying surfaces of heat exchangers, or wings of planes, or turbines,” Kossolapov recalls. “We wanted to make small modifications that might help tailor fluid flow around surfaces to enhance heat transfer, and improve the function of these items.”

Kossolapov researched evenings, weekends, and summers, writing up the results for his thesis. “We were able to measure something not easy to measure with other techniques, and found entirely new effects,” he says.

Eager to apply experimental heat transfer work more directly to the nuclear field, Kossolapov headed to MIT for graduate work. “I saw that it was the best place to conduct impactful research on these problems,” he says.

Today, as he launches into doctoral studies, Kossolapov continues to study CHF, but with a shift. While his previous research focused on low-pressure applications, he is now absorbed in reproducing the phenomenon under conditions that mirror the intense pressures and high temperatures inside commercial nuclear reactors.

“The new setup I'm building must withstand these conditions, which means specialized materials for optics and infrared thermometry,” he says. “It's going to be a challenge; we don't want things melting.”

His doctoral investigations will require months of tweaking, an arduous journey that Kossolapov knows well. For stress relief, he has developed some productive distractions: hunkering down in his home studio writing and recording instrumental music.

“It's a nice change of pace for me,” he says. “I get to disengage one part of my brain.”

While Kossolapov knows his work is unlikely to make immediate impacts, he believes that his years of MIT research could ultimately help strengthen the nuclear power industry.

“By better understanding boiling heat transfer at high pressures, we could improve the performance of components inside these reactors,” he says. “This kind of work could eventually help increase power output at plants, and possibly even prevent them from being decommissioned.”

Oceanographers produce first-ever images of entire cod shoals

Tue, 11/06/2018 - 11:19am

For the most part, the mature Atlantic cod is a solitary creature that spends most of its time far below the ocean’s surface, grazing on bony fish, squid, crab, shrimp, and lobster — unless it’s spawning season, when the fish flock to each other by the millions, forming enormous shoals that resemble frenzied, teeming islands in the sea.

These massive spawning shoals may give clues to the health of the entire cod population — an essential indicator for tracking the species’ recovery, particularly in regions such as New England and Canada, where cod has been severely depleted by decades of overfishing.

But the ocean is a murky place, and fish are highly mobile by nature, making them difficult to map and count. Now a team of oceanographers at MIT has journeyed to Norway — one of the last remaining regions of the world where cod still thrive — and used a synoptic acoustic system to, for the first time, illuminate entire shoals of cod almost instantaneously, during the height of the spawning season.

The team, led by Nicholas Makris, professor of mechanical engineering and director of the Center for Ocean Engineering, and Olav Rune Godø of the Norwegian Institute of Marine Research, was able to image multiple cod shoals, the largest spanning 50 kilometers, or about 30 miles. From the images they produced, the researchers estimate that the average cod shoal consists of about 10 million individual fish.

They also found that when the total population of cod dropped below the average shoal size, the species remained in decline for decades.

“This average shoal size is almost like a lower bound,” Makris says. “And the sad thing is, it seems to have been crossed almost everywhere for cod.”

Makris and his colleagues have published their results today in the journal Fish and Fisheries.

Echoes in the deep

For years, researchers have attempted to image cod and herring shoals using high-frequency, hull-mounted sonar instruments, which direct narrow beams below moving research vessels. These ships traverse a patch of the sea in a lawnmower-like pattern, imaging slices of a shoal by emitting high-frequency sound waves and measuring the time it takes for the signals to bounce off a fish and back to the ship. But this method requires a vessel to move slowly through the water to get counts; one survey can take many weeks to complete and typically samples only a small portion of any particular expansive shoal, often completely missing shoals between survey tracks and never capturing shoal dynamics.

The team made use of the Ocean Acoustic Waveguide Remote Sensing (OAWRS) system, an imaging technique developed at MIT by Makris and co-author Purnima Ratilal, which emits low-frequency sound waves that can travel over a much wider range than high-frequency sonar. The sound waves are essentially tuned to bounce off fish, in particular off their swim bladder — a gas-filled organ that reflects sound waves — like echoes off a tiny drum. As these echoes return to the ship, researchers can aggregate them to produce an instant picture of millions of fish over vast areas.
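For readers who want a feel for the underlying arithmetic, the basic echo-ranging relationship (range equals sound speed times round-trip delay, divided by two) can be sketched in a few lines. The snippet below is purely illustrative and is not the team's OAWRS processing code; the sound speed and example delay are assumed nominal values.

```python
# Illustrative sketch only: convert an echo's round-trip delay into a range.
# Assumes a nominal sound speed in seawater; the actual OAWRS processing
# involves much more (low-frequency waveguide propagation, a towed receiver
# array, and beamforming), none of which is modeled here.

SOUND_SPEED_SEAWATER = 1500.0  # meters per second, a typical nominal value


def echo_range(round_trip_time_s: float,
               sound_speed: float = SOUND_SPEED_SEAWATER) -> float:
    """Distance to a scatterer (such as a fish's swim bladder) from echo delay."""
    return sound_speed * round_trip_time_s / 2.0  # halved: the pulse travels out and back


if __name__ == "__main__":
    # A 0.4-second round trip corresponds to a scatterer roughly 300 meters away.
    print(f"{echo_range(0.4):.0f} m")
```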

Making passage

In February and March of 2014, Makris and a team of students and researchers headed to Norway to count cod, herring, and capelin during the height of their spawning seasons. They deployed OAWRS from the Knorr, a U.S. Navy research vessel that is operated by the Woods Hole Oceanographic Institution and is best known as the ship from which researchers discovered the wreck of the Titanic.

The ship left Woods Hole and crossed the Atlantic over two weeks, during which time the crew continuously battled storms and choppy winter seas. When they finally arrived at the southern coast of Norway, they spent the next three weeks imaging herring, cod, and capelin along the entire Norwegian coast, from the town of Alesund, north to the Russian border.

“The underwater terrain was as treacherous as the land, with submerged seamounts, ridges, and fjord channels,” Makris recalls. “Billions of herring actually would hide in one of these submerged fjords near Alesund during the daytime, about 300 meters down, and come up at night to shelves about 100 meters deep. Our mission there was to instantaneously image entire shoals of them, stretching for kilometers, and sort out their behavior.”

A window through a hurricane

As they moved up the Norwegian coast, the researchers towed a 0.5-kilometer-long array of passive underwater microphones and a device that emitted low-frequency sound waves. After imaging herring shoals in southern Norway, the team moved north to Lofoten, a dramatic archipelago of sheer cliffs and mountains, depicted most famously in Edgar Allan Poe’s “A Descent into the Maelström,” in which the author made note of the region’s abundance of cod.

To this day, Lofoten remains a primary spawning ground for cod, and there, Makris’ team was able to produce the first-ever images of an entire cod shoal, spanning 50 kilometers.

Toward the end of their journey, the researchers planned to image one last cod region, just as a hurricane was projected to hit. The team realized there would be only two windows of relatively calm winds in which to operate their imaging equipment.

“So we went, got good data, and fled to a nearby fjord as the eye wall struck,” Makris recalls. “We ended with 30-foot seas at dawn and the Norwegian coast guard, in a strangely soothing young voice, urging us to evacuate the area.” The team was able to image a slightly smaller shoal there, spanning about 10 kilometers, before completing the expedition.

On the brink

Back on dry land, the researchers analyzed their images and estimated that an average shoal consists of about 10 million fish. They also looked at historical tallies of cod in Norway, New England, the North Sea, and Canada, and discovered an interesting trend: Regions like New England that experienced long-lasting declines in cod stocks did so when the total cod population dropped below roughly 10 million — the same number as an average shoal. When cod dropped below this threshold, the population took decades to recover, if it recovered at all.

In Norway, the cod population always stayed above 10 million and was able to recover, climbing back to preindustrial levels over the years, even after significant declines in the mid-20th century. The team also imaged shoals of herring and found a similar trend through history: When the total population dropped below the level of an average herring spawning shoal, it took decades for the fish to recover.

Makris and Godø hope that the team’s results will serve as a measuring stick of sorts, to help researchers keep track of fish stocks and recognize when a species is on the brink.

“The ocean is a dark place, you look out there and can’t see what’s going on,” Makris says. “It’s a free-for-all out there, until you start shining a light on it and seeing what’s happening. Then you can properly appreciate and understand and manage.” He adds, “Even if field work is difficult, time consuming, and expensive, it is essential to confirm and inspire theories, models, and simulations.”

This research was supported, in part, by the Norwegian Institute of Marine Research, the Office of Naval Research, and the National Science Foundation.

Chemical synthesis could produce more potent antibiotics

Mon, 11/05/2018 - 10:59am

Using a novel type of chemical reaction, MIT researchers have shown that they can modify antibiotics in a way that could potentially make them more effective against drug-resistant infections.

By chemically linking the antibiotic vancomycin to an antimicrobial peptide, the researchers were able to dramatically enhance the drug’s effectiveness against two strains of drug-resistant bacteria. This kind of modification is simple to perform and could be used to create additional combinations of antibiotics and peptides, the researchers say.

“Typically, a lot of steps would be needed to get vancomycin in a form that would allow you to attach it to something else, but we don’t have to do anything to the drug,” says Brad Pentelute, an MIT associate professor of chemistry and the study’s senior author. “We just mix them together and we get a conjugation reaction.”

This strategy could also be used to modify other types of drugs, including cancer drugs, Pentelute says. Attaching such drugs to an antibody or another targeting protein could make it easier for the drugs to reach their intended destinations.

Pentelute’s lab worked with Stephen Buchwald, the Camille Dreyfus Professor of Chemistry at MIT; Scott Miller, a professor of chemistry at Yale University; and researchers at Visterra, a local biotech company, on the paper, which appears in the Nov. 5 issue of Nature Chemistry. The paper’s lead authors are former MIT postdoc Daniel Cohen, MIT postdoc Chi Zhang, and MIT graduate student Colin Fadzen.

A simple reaction

Several years ago, Cohen made the serendipitous discovery that an amino acid called selenocysteine can spontaneously react with complex natural compounds without the need for a metal catalyst. Cohen found that when he mixed electron-deficient selenocysteine with the antibiotic vancomycin, the selenocysteine attached itself to a particular spot — an electron-rich ring of carbon atoms within the vancomycin molecule.

This led the researchers to try using selenocysteine as a “handle” that could be used to link peptides and small-molecule drugs. They incorporated selenocysteine into naturally occurring antimicrobial peptides — small proteins that most organisms produce as part of their immune defenses. Selenocysteine, a naturally occurring amino acid that includes an atom of selenium, is not as common as the other 20 amino acids but is found in a handful of enzymes in humans and other organisms.

The researchers found that not only were these peptides able to link up with vancomycin, but the chemical bonds consistently occurred at the same location, so all of the resulting molecules were identical. Creating such a pure product is difficult with existing methods for linking complex molecules. Furthermore, doing this kind of reaction with previously existing methods would likely require 10 to 15 steps just to chemically modify vancomycin in a way that would allow it to react with a peptide, the researchers say.

“That’s the beauty of this method,” Zhang says. “These complex molecules intrinsically possess regions that can be harnessed to conjugate to our protein, if the protein possesses the selenocysteine handle that we developed. It can greatly simplify the process.”

Dan Mandell, CEO of GRO Biosciences, says the new approach also overcomes another obstacle to this type of reaction, which is that when drugs are chemically modified to enable attachment to selenocysteine, it can weaken them.

“This paper provides an important advance on this technology by allowing attachment of unmodified drugs to targeting proteins,” says Mandell, who was not involved in the research. “This approach can help usher in a new wave of selenocysteine-mediated drug conjugates, where targeting proteins deliver potent drugs to the site of disease in a predictable fashion.”

The researchers tested conjugates of vancomycin and a variety of antimicrobial peptides (AMPs). They found that one of these molecules, a combination of vancomycin and the AMP dermaseptin, was five times more powerful than vancomycin alone against a strain of bacteria called E. faecalis. Vancomycin linked to an AMP called RP-1 was able to kill the bacterium A. baumannii, even though vancomycin alone has no effect on this strain. Both of these strains have high levels of drug resistance and often cause infections acquired in hospitals.

Modified drugs

This approach should work for linking peptides to any complex organic molecule that has the right kind of electron-rich ring, the researchers say. They have tested their method with about 30 other molecules, including serotonin and resveratrol, and found that they could be easily joined to peptides containing selenocysteine. The researchers have not yet explored how these modifications might affect the drugs’ activity.

In addition to modifying antibiotics, as they did in this study, the researchers believe they could use this technique for creating targeted cancer drugs. Scientists could use this approach to attach antibodies or other proteins to cancer drugs, helping the drugs to reach their destination without causing side effects in healthy tissue.

Adding selenocysteine to small peptides is a fairly straightforward process, the researchers say, but they are now working on adapting the method so that it can be used for larger proteins. They are also experimenting with the possibility of performing this type of conjugation reaction using the more common amino acid cysteine as a handle instead of selenocysteine.

The research was funded by the National Institutes of Health, a Damon Runyon Cancer Research Foundation Award, and a Sontag Distinguished Scientist Award.

E.T., we’re home

Sun, 11/04/2018 - 11:59pm

If extraterrestrial intelligence exists somewhere in our galaxy, a new MIT study proposes that laser technology on Earth could, in principle, be fashioned into something of a planetary porch light — a beacon strong enough to attract attention from as far as 20,000 light years away.

The research, which author James Clark calls a “feasibility study,” appears today in The Astrophysical Journal. The findings suggest that if a high-powered 1- to 2-megawatt laser were focused through a massive 30- to 45-meter telescope and aimed out into space, the combination would produce a beam of infrared radiation strong enough to stand out from the sun’s energy.

Such a signal could be detectable by alien astronomers performing a cursory survey of our section of the Milky Way — especially if those astronomers live in nearby systems, such as around Proxima Centauri, the nearest star to Earth, or TRAPPIST-1, a star about 40 light-years away that hosts seven exoplanets, three of which are potentially habitable. If the signal is spotted from either of these nearby systems, the study finds, the same megawatt laser could be used to send a brief message in the form of pulses similar to Morse code.

“If we were to successfully close a handshake and start to communicate, we could flash a message, at a data rate of about a few hundred bits per second, which would get there in just a few years,” says Clark, a graduate student in MIT’s Department of Aeronautics and Astronautics and author of the study.

The notion of such an alien-attracting beacon may seem far-fetched, but Clark says the feat can be realized with a combination of technologies that exist now and that could be developed in the near term.

“This would be a challenging project but not an impossible one,” Clark says. “The kinds of lasers and telescopes that are being built today can produce a detectable signal, so that an astronomer could take one look at our star and immediately see something unusual about its spectrum. I don’t know if intelligent creatures around the sun would be their first guess, but it would certainly attract further attention.”

Standing up to the sun

Clark started looking into the possibility of a planetary beacon as part of a final project for 16.343 (Spacecraft and Aircraft Sensors and Instrumentation), a course taught by Clark’s advisor, Associate Professor Kerri Cahoy.

“I wanted to see if I could take the kinds of telescopes and lasers that we’re building today, and make a detectable beacon out of them,” Clark says. 

He started with a simple conceptual design involving a large infrared laser and a telescope through which to further focus the laser’s intensity. His aim was to produce an infrared signal that was at least 10 times greater than the sun’s natural variation of infrared emissions. Such an intense signal, he reasoned, would be enough to stand out against the sun’s own infrared signal, in any “cursory survey by an extraterrestrial intelligence.”

He analyzed combinations of lasers and telescopes of various wattage and size, and found that a 2-megawatt laser, pointed through a 30-meter telescope, could produce a signal strong enough to be easily detectable by astronomers in Proxima Centauri b, a planet that orbits our closest star, 4 light-years away. Similarly, a 1-megawatt laser, directed through a 45-meter telescope, would generate a clear signal in any survey conducted by astronomers within the TRAPPIST-1 planetary system, about 40 light-years away. Either setup, he estimated, could produce a generally detectable signal from up to 20,000 light-years away.

Both scenarios would require laser and telescope technology that has either already been developed, or is within practical reach. For instance, Clark calculated that the required laser power of 1 to 2 megawatts is equivalent to that of the U.S. Air Force’s Airborne Laser, a now-defunct megawatt laser that was meant to fly aboard a military jet for the purpose of shooting ballistic missiles out of the sky. He also found that while a 30-meter telescope considerably dwarfs any existing observatory on Earth today, there are plans to build such massive telescopes in the near future, including the 24-meter Giant Magellan Telescope and the 39-meter European Extremely Large Telescope, both of which are currently under construction in Chile.

Clark envisions that, like these massive observatories, a laser beacon should be built atop a mountain, to minimize the amount of atmosphere the laser would have to penetrate before beaming out into space.

He acknowledges that a megawatt laser would come with some safety issues. Such a beam would produce a flux density of about 800 watts per square meter, approaching the roughly 1,300 watts per square meter of solar irradiance at Earth. While the beam wouldn’t be visible, it could still damage people’s vision if they were to look directly at it. The beam could also potentially scramble any cameras aboard spacecraft that happen to pass through it.
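As a rough sanity check on those figures, the near-aperture flux can be approximated by spreading the laser power over the telescope's collecting area. The sketch below is a back-of-the-envelope estimate under that simplifying assumption, not the beam model used in the study, which accounts for the actual beam profile.

```python
import math


def aperture_flux(power_w: float, aperture_diameter_m: float) -> float:
    """Rough near-aperture flux: laser power spread uniformly over the aperture area."""
    area_m2 = math.pi * (aperture_diameter_m / 2.0) ** 2
    return power_w / area_m2


if __name__ == "__main__":
    # 1 MW through a 45-meter aperture works out to several hundred watts per
    # square meter, the same order of magnitude as the figures quoted above.
    print(f"{aperture_flux(1e6, 45.0):.0f} W/m^2")
```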

“If you wanted to build this thing on the far side of the moon where no one’s living or orbiting much, then that could be a safer place for it,” Clark says. “In general, this was a feasibility study. Whether or not this is a good idea, that’s a discussion for future work.”

Taking E.T.’s call

Having established that a planetary beacon is technically feasible, Clark then flipped the problem and looked at whether today’s imaging techniques would be able to detect such an infrared beacon if it were produced by astronomers elsewhere in the galaxy. He found that, while a telescope 1 meter or larger would be capable of spotting such a beacon, it would have to point in the signal’s exact direction to see it.

“It is vanishingly unlikely that a telescope survey would actually observe an extraterrestrial laser, unless we restrict our survey to the very nearest stars,” Clark says.

He hopes the study will encourage the development of infrared imaging techniques, not only to spot any laser beacons that might be produced by alien astronomers, but also to identify gases in a distant planet’s atmosphere that might be indications of life.

“With current survey methods and instruments, it is unlikely that we would actually be lucky enough to image a beacon flash, assuming that extraterrestrials exist and are making them,” Clark says. “However, as the infrared spectra of exoplanets are studied for traces of gases that indicate the viability of life, and as full-sky surveys attain greater coverage and become more rapid, we can be more certain that, if E.T. is phoning, we will detect it.”

Why private firms’ accounting disclosures help the economy grow

Sat, 11/03/2018 - 11:59pm

The U.S. has some distinctive rules about the financial disclosures of corporations. For instance: Private companies, which make up more than half of the economy, do not have to disclose much information at all. But public companies — those with shares trading on exchanges — have extensive disclosure rules. Around the world, only a few other countries have similar systems. 

This creates an asymmetrical situation within some industries. If some companies have to disclose a lot of information about themselves and others do not, does it create an advantage for the firms that get to examine their peers and rivals without being subject to the same scrutiny?

Nemit Shroff’s research shows that it does. Shroff, the Class of 1958 Career Development Associate Professor at the MIT Sloan School of Management, is an accounting expert who studies the effect of information on executives; he looks at how well or poorly firms allocate capital, depending on the information they have available to them.

“I focus on investment decisions because it’s one of the fundamental drivers of value,” Shroff says. Through a series of carefully designed studies, he has arrived at some answers.

“What we find is that private companies make better decisions when they operate in industries that have public firms,” Shroff notes. Referring to the disclosures that publicly traded firms make, he adds, “That information not only informs the investors in that company, it also informs other stakeholders in other companies what’s going on in an industry, what’s going on at the economy level.”

Shroff’s research thus opens up an array of other questions, including: What should accounting policies be, if having more information tends to lead to better growth outcomes? For his work in the field, Shroff was recently awarded tenure at MIT.

A family of entrepreneurs

Shroff grew up in India, where becoming a professor was something he never considered until he reached college.

“Pretty much my entire family, we are entrepreneurs,” Shroff says. “They have their own businesses, my grandfather, all my cousins, my father, my uncle, pretty much everyone is an entrepreneur. The notion of academia was completely foreign to me growing up.”

That was true even as Shroff received his undergraduate degree in accounting from Sydenham College at Mumbai University and then moved on to get his MBA at Amrita School of Business in Coimbatore.

As Shroff puts it, he “started getting interested in public companies” and asking a lot of questions in class about finance and markets — to the point where a professor encouraged him to think about pursuing his queries as formal research topics.

“He told me, ‘The kind of things you’re interested in, they’re things you do if you’re an academic,’” Shroff says. “He encouraged me a lot.” Soon Shroff was applying to PhD programs in the U.S. — “I had little to lose, so I thought I would give it a shot” — and wound up at the University of Michigan, where he got his PhD in business administration, focusing on accounting, in 2011.

Shroff joined the MIT faculty straight from Ann Arbor and has remained at the Institute ever since. He was promoted to associate professor in 2015 and then tenured this spring. And Shroff has found a home at MIT not just as a researcher, but in the classroom; he raves about MIT Sloan students.

“They’re very respectful, very smart, and it’s a joy to get to know them,” Shroff says. “It’s remarkable how accomplished they are.”

Pros and cons of going public

As a researcher, Shroff has helped show not only that more information helps managers, but more specifically how it helps. One of the crucial mechanisms at work, he notes, is that it helps executives judge the size of the markets they are in.

“You can come up with more precise estimates of the demand for a product or service,” Shroff says. “It can reduce uncertainty … [and] will increase confidence in whatever decision [you] make.”

For public firms, conversely, he notes that “those disclosures could make you lose your competitive edge.” That said, this information imbalance is only one of the factors that weigh in corporate decisions to go public or stay private, including access to capital, corporate control, and more.

Shroff has found at least one previously underrated benefit for publicly held firms that disclose more information: The more rigorous work that goes into public disclosure practices can help firms learn about themselves and operate more effectively, a conclusion he makes in a recently completed paper.

“The compliance process actually gives them information that’s relevant for their decisions,” Shroff says. “When you comply with new rules, it forces you to collect new information.”

Inevitably, the question of how the U.S. should structure its disclosure rules remains a political question in many regards. Still, Shroff says, he hopes his work, and that of others in his field, can inform the discussion, especially by demonstrating the overall utility of data and its effects on economic efficiency.

“There is social value to this information,” Shroff says.

Collaborating for MIT’s Future poster session offers community insights and inspiration

Fri, 11/02/2018 - 3:35pm

The annual poster session, “Collaborating for MIT’s Future,” provided a high-energy setting to learn about MIT projects that serve the community and the wider world. Held on the top floor of the Media Lab, the poster session featured 59 posters and 133 presenters from more than 40 MIT departments, offices, and groups. Along with the collegial atmosphere and tasty food, the poster session gave attendees an insider’s view of MIT beyond their own department, lab or center.

Some posters were about new services, others offered “how-to” or “for your awareness” insights, still others highlighted partnerships and forward-looking collaboration. Here’s a small sample that shows the variety and vibrancy of the 2018 posters.

3-D printing

Did you know that MIT Copytech offers 3-D printing in collaboration with MIT’s Project Manus? Project Manus was looking for a place where 3-D printers could operate around the clock and where fees ($5 per hour) could be collected. Copytech now has two 3-D printers in its Room 11-004 location; the printed objects are made of a plastic filament derived from renewable resources, such as corn starch or sugarcane.

Steve Dimond, the manager of Copytech, notes that the 3-D printers are very popular with students, who often use them to make prototypes related to their lab work. But the printers can also be used to make gifts. One memorable print job featured 12 skulls, based on the true story of Phineas Gage, a railroad foreman who survived an iron bar passing through his head.

The 3-D printers have cameras in them, so Copytech staff can check on them during off hours to make sure they’re running as expected. They’re viewable during Copytech’s regular business hours.

To find out more about 3-D printing through Project Manus and Copytech, see the 3-D printing FAQ.

New human resources website

MIT Human Resources has launched hr.mit.edu, which integrates the former HR, careers, and new employee websites into one comprehensive resource. The MIT HR Communications team, in collaboration with design consultants and partners across MIT, held interviews and focus groups and conducted multiple rounds of usability testing to ensure that the new site meets the needs of current employees, prospective employees, and retirees. The new website is easier to navigate, more intuitive, and centered on what users need and how they search.

The discovery process revealed that managers at MIT felt they had to go to multiple areas of the HR website to get the information they needed to do their jobs (e.g., one place for onboarding, another for compensation, and yet another for staffing services). This resulted in a new design that brings all of that content together in a new, role-based section for managers. There are similar sections for new employees, current employees, and retirees.

The redesigned website also includes an HR Knowledge Base, which helps users find answers to their HR “How do I?” questions.

PubPub

PubPub is a platform for open access publishing by research communities, ranging from independent researchers and labs to conference organizers and book publishers. It’s sponsored by the Knowledge Futures Group, a joint initiative of the MIT Press and the Media Lab.

The focus of PubPub is user engagement and building community around publications, along with the different types of learning this engagement can spread. Its goal is to create a publishing tool for any open access community. Anyone can now create their own PubPub account and build a publishing community. 

One compelling example of PubPub’s use has been to document the Celebrating Millie conference, held in honor of late Institute Professor Mildred Dresselhaus last November. Dresselhaus’s granddaughter Shoshi Cooper transcribed and edited all of the speeches and uploaded stories and multimedia images from the conference posters. The resulting website has allowed people who couldn’t be at the conference to share in the experience. It’s also enabled conversations with people Dresselhaus inspired and discussions about her work. 

Parking made easy

MIT’s Parking and Transportation Office launched a new parking application in Atlas that mirrors a new parking program. The application, accessible with a Kerberos account, provides MIT parking account holders with their own dashboard, including account, billing, and ticket details.

The parking application features ease of use and automation. Parking stickers are a thing of the past: Your parking account now renews automatically until you choose to close it. When you need to make a change, such as entering information about a new car, the new data will be active in the system within a day of entry.

There’s also a version of the application for parking coordinators on campus, allowing them to view the status of parkers in their area and manage departmental vehicles.

The corresponding parking program expands the flexibility of daily-rate parking to ungated lots through the use of license plate recognition technology: A vehicle drives around campus scanning plates to confirm that each car has a parking account for that area, and drivers are charged the same daily rate as in assigned gated lots. Paying a daily rate rather than a set amount each month gives commuters the flexibility to choose each day what mode of transport they would like to use. For benefits-eligible employees, this unlocks the benefits of the Access MIT program, including free and unrestricted use of the MBTA subway and local buses.

The energy at this year’s poster session was palpable, and the glowing feedback confirmed it. Photos of the event are available on Flickr.

Giving early-career women in mechanical engineering the tools to succeed in academia

Fri, 11/02/2018 - 2:10pm

For two days in late October, 34 of the brightest minds in mechanical engineering convened on MIT’s campus. They all come from different backgrounds — one person studies human-robot interaction at Stanford University while another conducts research in thermal equipment design at the University of Illinois at Urbana-Champaign. But they all have one thing in common: They are all female graduate students and postdocs considering a career in academia.

These women attended the inaugural Rising Stars in Mechanical Engineering Workshop, hosted by MIT’s Department of Mechanical Engineering. The program, which is modeled after the successful Rising Stars Workshops in biomedical engineering, physics, civil and environmental engineering, and electrical engineering and computer science, aims to prepare women for the challenges associated with a career in academia. Topics ranged from leadership skills to establishing a lab as a junior faculty member and communicating a research vision.

“Our goal throughout the workshop was for them to develop professional skills as they envision a career in academia,” said workshop co-chair Evelyn Wang, the Gail E. Kendall Professor and department head for mechanical engineering. “Providing these talented young women with more mentorship and career skills can help pave the way for gender parity in mechanical engineering departments around the world.”

Wang kicked off the workshop by welcoming the researchers, who had been selected for the workshop based on their many achievements. She then introduced Deborah Burstein, a researcher at Beth Israel Deaconess Medical Center who also works for the MIT IMPACT Program. IMPACT helps researchers better articulate their work and identify ways they can make a lasting impact in their fields.

“Many of the participants commented that they wish they had learned the skills discussed in the IMPACT sessions in graduate school as it would have made their grant proposals much more effective,” said Theresa Werth, the program manager for Rising Stars in Mechanical Engineering.

The first day concluded with a series of panels where faculty from MIT’s Department of Mechanical Engineering reflected on their own experience as early career researchers. The first panel focused on the journey from student life to faculty life. When asked about the most important thing to do in the first year of a faculty job, Amos Winter, associate professor of mechanical engineering, extolled the virtues of patience.

“It’s helpful to recognize that there is a gestation period for a new faculty member. It takes a few years to get up and running, and that’s okay,” he said.

In a second panel, faculty discussed the various choices and serendipitous events that have altered their career paths. Yang Shao-Horn, the W.M. Keck Professor of Energy, emphasized the importance of reflection when deciding what projects to focus on.

“When we look forward we don’t know the risks or benefits, it’s only when we reflect that we can see clearly,” Shao-Horn said. “It’s a journey about knowing yourself.”

Akanksha Menon, a postdoc at Lawrence Berkeley National Laboratory, found hearing personal stories from young faculty useful. “Just to know that they were in our same shoes and felt the same insecurities or faced the same challenges – that’s been really great,” said Menon.

The second day of the workshop focused on the most pressing question on the minds of most PhD students and postdocs: how to get a job. A team from HFP Consulting introduced participants to the leadership and management skills needed to build a successful career in research.

The final lecture was given by Maria Yang, MIT associate professor of mechanical engineering and workshop co-chair.

“We assembled a truly inspiring group of young women,” said Yang. “They were incredibly engaged and enthusiastic throughout the workshop. By the end of the two days, they had even more confidence than when they first walked in.”

The women said they left the event with more than confidence and career-building skills; they were now part of a new community. For Kelilah Wolkowicz, a postdoc at Harvard University who recently completed her PhD at Penn State, where she focused on wheelchair design, the camaraderie she felt with her fellow attendees was a highlight of the workshop.

“As you bounce from one university to another, it can be hard to establish a community of peers,” Wolkowicz explains. “This workshop has really helped with that because we've been able to meet so many women in our field who will be following similar career trajectories.”

In the coming years, Rising Stars workshops will be hosted by mechanical engineering departments at both Stanford University and the University of California at Berkeley.

Office of Sustainability names 2018 grant winners

Fri, 11/02/2018 - 1:50pm

This October, the MIT Office of Sustainability (MITOS) announced the winners of the 2018 Campus Sustainability Incubator Fund grants. With the Incubator Fund, MITOS supports research that utilizes MIT’s campus and its facilities as a test bed for new, sustainable solutions.

Now in its second year of awards, the Incubator Fund is supporting two new projects, led by Professor Jessika Trancik in the Institute for Data, Systems, and Society and Professor Douglas Hart in the Department of Mechanical Engineering. The Trancik team will study on-site renewable energy storage systems, while the Hart team will run a two-semester class to prototype carbon-neutral cooling systems. In both cases, the research will be managed in collaboration with operational staff in the Department of Facilities and the Central Utilities Plant (CUP).

The Incubator Fund was established in summer 2017 thanks to a donation from Malcolm Strandberg to support projects that bring students, faculty, and staff together to apply sustainability research and innovation on campus.

“The MIT campus provides a unique opportunity for researchers to work with staff and students to prove the feasibility of sustainable solutions on the individual and campus scale, with an eye on how they can be scaled up to cities and beyond,” says Julie Newman, director of MITOS and convener of the fund’s Advisory Committee. "It has been exciting to watch the first cohort's progress, and we are thrilled to support these two projects this year."

Calm, cool, and carbon-neutral

This year, undergraduate students taking 2.013 (Engineering Systems Design) and 2.014 (Engineering Systems Development) with Hart have been tasked with creating a high-efficiency, carbon-neutral cooling system that can be tested directly with MIT’s existing infrastructure.

The challenge is urgent: As climate change continues to raise average temperatures, particularly in highly populated parts of the globe, systems for cooling buildings will be necessary for human health. But with current technologies, those cooling systems feed heat and carbon back into the environment and exacerbate the problem.

As a first step, the classes will design carbon-neutral cooling systems that could be integrated into MIT’s buildings. To ensure their designs will be compatible with campus facilities, the class is working directly with staff at the MIT Central Utilities Plant (CUP), the on-campus power plant. Given the plant’s proximity to campus, engineers from the CUP can visit the class to work with students on a regular basis.

“2.013 and 2.014 immerse students in a real-world design environment in which they are accountable to a sponsor, where their work has significant impact, and success or failure means more than a grade,” says Hart. “Working closely with the staff of CUP inherently raises the level of professionalism in the class while providing students with knowledgeable mentors that can guide them through the transition from student to professional engineer.”

The fall semester will be spent in design, and in the spring, students will develop their carbon-neutral solutions. Finally, a smaller group of students may stay on for the summer to test and apply the project on campus.

“Working with researchers and students is refreshing,” says Seth Kinderman, plant engineering manager at CUP. “Typically, we support the campus and students by making steam, electricity, and chilled water. Other times, we can support students and research directly.”

Saving (energy) for a rainy day

A look at Boston’s weather forecast reveals the necessity of Trancik’s research. Solar energy is not available all the time, and that is a challenge for renewable energy installations in regions like the Northeast, where overcast skies and precipitation are frequent and can last for days.

One potential solution is to improve energy storage systems, but limited data are available about such systems, and much of what has been collected is proprietary. Trancik’s team plans to work with MIT Facilities to explore the installation of lithium-ion batteries.

"We are trying to understand how to optimize energy storage systems used in conjunction with sources of renewable energy,” says Micah Ziegler, a postdoc in the Trancik Lab. “The opportunity to collaborate with the MIT Department of Facilities to collect relevant data will be invaluable for our research and for the design and operation of these energy systems."

Once the batteries are in place, the researchers can test different strategies for redistributing the electricity they store. Innovations may come from facilities management, electrical engineering, or chemical engineering of the battery systems. And the data they gather could identify, for example, which strategies are most effective for reducing emissions or optimizing energy efficiency.

As the first project of its kind to plan to make the data available through the Sustainability DataPool, Trancik’s work will be a lasting contribution to energy storage system technology.

From 2017 seed funds, projects continue to grow

Work continues on the diverse projects that were awarded the inaugural Campus Sustainability Incubator Fund grants last year. This October, the team led by Professor Kripa Varanasi from the Department of Mechanical Engineering installed an electrified, water-catching dome structure over the steam plumes of MIT’s CUP, which could drastically reduce the water lost as steam.

Another project, initially led by Lisa Anderson in the Department of Chemical Engineering and now overseen by MIT research scientist Jeremy Gregory, found last year that MIT orders three million lab gloves annually, and as of right now, it’s not clear where they all go once they are used. That’s the next question, which this research team will begin tackling this year by conducting waste assessments in laboratories.

Meanwhile, Randy Kirchain in the Materials Research Laboratory has been leading an incubator project to support sustainable building design on campus. Kirchain’s team has been developing quantitative tools that factor sustainability into the design process while also consulting designers to ensure the tools’ usefulness.

To build new college, MIT seeks campus and alumni input

Fri, 11/02/2018 - 12:59pm

Since announcing the MIT Stephen A. Schwarzman College of Computing, Institute leaders have reached out to the campus and alumni communities in a series of forums, seeking ideas about the transformative new entity that will radically integrate computing with disciplines throughout MIT.

MIT Provost Martin A. Schmidt and Dean of the School of Engineering Anantha P. Chandrakasan engaged with students, faculty, and staff at forums on campus, where they presented outlines of the project and received dozens of public comments and questions. Additionally, Chandrakasan and Executive Vice President and Treasurer Israel Ruiz engaged with alumni in two webcast sessions that featured Q&A about the college.

“Creating this new college requires us to think deeply and carefully about its structure,” Schmidt said at a forum for faculty on Oct. 18. That process should be firmly connected to the ideas and experiences of the MIT community as a whole, he said further at a student forum on Oct. 25, adding that the goal was to “engage you in the process of building the college together.”

Community perspectives

The discussions at the forums each had a slightly different flavor, generally reflecting the perspectives of the participants. The faculty forum, for instance, included professors from several fields concerned about maintaining a solid balance of disciplinary research at MIT.

The responsibilities of professors at the new college have yet to be fully defined. Many faculty will have joint appointments between the MIT Schwarzman College of Computing and existing MIT departments, an approach that both Schmidt and Chandrakasan acknowledged has had varying results in the past. As participants noted, some MIT faculty with joint appointments have thrived, but others have floundered, being pulled in different scholarly and administrative directions. 

“We need to figure out how to make dual appointments work,” Chandrakasan said. Still, he noted that the “cross-cutting” structure of the college had enormous potential to integrate computing into the full range of disciplines at the Institute.

At a standing-room-only forum for MIT staff members on Oct. 25, with people lining the walls of Room 4-270, audience members offered comments and questions about the college’s proposed main building, MIT’s computing infrastructure, teaching, advising, the admissions process, and the need to hire motivated staff in the college’s most formative stages.

“It’s an opportunity to really do a whole-of-Institute solution to this challenge,” Schmidt said. “It’s going to test us.”

Multiple people at the student forum on Oct. 25 called for diversity among the college’s new faculty — a view Schmidt and Chandrakasan readily agreed with. The Institute leaders also emphasized the expansion of opportunities the college will provide for students, including more joint programs and degrees, and more student support.

“There will be more UROP opportunities, more resources, more faculty,” Chandrakasan said. Also, he noted, “We’re not going to change the undergraduate admissions process.” MIT Chancellor Cynthia Barnhart also spoke at the student forum.

At all three on-campus forums, audience participants commented on the value of having Institute supporters share MIT’s goal of creating a “better world.” At the staff forum, one audience member advocated that MIT accept funding only from backers who were fully committed to democracy, and questioned the Institute’s connections with Saudi Arabia. Schmidt noted that MIT — as it has publicly announced — is currently reassessing its Institute-level engagements with entities of the Kingdom of Saudi Arabia.

At the student forum, audience members also raised queries about MIT’s mission and its relationships with donors; the issues cited included the precedent of naming the college after an individual, and the extent of MIT’s due diligence process during the creation of the college. Schmidt said the Institute had performed its due diligence well and developed the idea of the named college after extensive discussions; he also noted that faculty and students of the college would be able to develop a full range of intellectual and academic projects freely.

Audience members also stressed the broader need to think critically about the impact of technology on society at a moment of social, political, and ecological uncertainty — and expressed a preference for the college to integrate ethics into its curriculum.

“This presents a real opportunity to get at that,” Chandrakasan responded.

On Oct. 30, the Alumni Association hosted two webcasts that featured Q&A with Chandrakasan and Ruiz. Over 1,000 alumni from around the world registered for the virtual conversations, which were moderated by Vice President for Communications Nate Nickerson. Questions centered on how the cross-disciplinary aspirations of the college would find life, and on how ethics will be made to infuse the college and shape its graduates. In both sessions, alumni asked how they can participate in the pursuit of the college’s mission. “The alumni will be critical to our efforts,” said Ruiz. “They offer us great wisdom as we form the college, and they will serve as important points of connection for our faculty and students as they seek to understand all the ways that computing is shaping our world.”

Helping every department

The college is being developed thanks to a $350 million foundational gift from Mr. Schwarzman, the chairman, CEO, and co-founder of Blackstone, a leading global asset manager. It will serve as an interdisciplinary hub for research and teaching across all aspects of computing, while strengthening links between computing and other scholarly pursuits.

“The college has two goals,” said Chandrakasan at the forum for MIT staff members. “One is to advance computing, and one is to link computing to other [fields]. … This allows us to optimize, unbundle, and rebundle, to make computing much more integrated across all disciplines.”

It also presents new organizational challenges. For decades, MIT has been largely organized around its five schools, which focus on engineering; science; architecture and planning; humanities, arts, and social sciences; and management. But as Chandrakasan emphasized in all three campus forums, the MIT Schwarzman College of Computing is intended to develop connections with all of those schools as well as other stand-alone institutes and programs on campus.

“This is about helping advance every department,” said Chandrakasan, who frequently referred to the importance of the college’s “bridge” function, meaning it can span the width of MIT to link students, faculty, and resources together.

For his part, Schmidt emphasized at the events that the college will accelerate the current trend in disciplinary transformation. He noted that the fields of economics and urban studies at MIT have both recently created joint degrees with computer science as a natural response to the ways data and computing power have enabled new modes of academic research.

The foundational gift is part of a $1 billion commitment MIT has made to the new college, which will be centered in a new campus building, include 50 new faculty positions, and allow the Institute to create a new series of collaborative, interdisciplinary enterprises in research and education. The college is meant to address all aspects of computing, including the policy and ethical issues surrounding new technologies.

“Across the Institute there is great enthusiasm for this,” Chandrakasan added.

“A unique opportunity to evolve”

The MIT Schwarzman College of Computing is intended to open in the fall of 2019 and will be housed partly — but not entirely — in its new building. The timeline, Chandrakasan acknowledged, is “super-aggressive.”

Schmidt and Chandrakasan noted that many important issues were yet to be resolved. As part of the process of developing the college, the Institute is creating a task force and working groups to assess some of the critical issues MIT faces.

Some audience members at the forums also questioned why MIT would announce the creation of its new college at a time when some of the entity’s institutional features are unresolved. In response, Schmidt noted that the Institute benefits by being on the leading edge of computing, and that the creation of the college will only enhance that position. Community engagement, he noted, would help the Institute finalize its vision for the college.

“We’re not going to be able to answer all of [your] questions,” Schmidt said at the staff forum. To gain traction on unresolved matters, he added, “We think the task force model is an appropriate one.”

MIT intends to hire a dean for the college and begin the search process for new faculty during the current academic year. There are a few campus sites being considered as the location for the college’s main building, but not all elements of the college will be located in that building.

Overall, Schmidt concluded, the creation of the college has presented MIT with a unique opportunity to evolve in response to the prevalence of computing and its influence in so many spheres of life.

“Every campus in the country today has been grappling with the need,” Schmidt said. “We feel that MIT has come forward with a really compelling solution.”

Fleets of drones could aid searches for lost hikers

Thu, 11/01/2018 - 11:59pm

Finding lost hikers in forests can be a difficult and lengthy process, as helicopters and drones can’t get a glimpse through the thick tree canopy. Recently, it’s been proposed that autonomous drones, which can bob and weave through trees, could aid these searches. But the GPS signals used to guide the aircraft can be unreliable or nonexistent in forest environments.

In a paper being presented at the International Symposium on Experimental Robotics conference next week, MIT researchers describe an autonomous system for a fleet of drones to collaboratively search under dense forest canopies. The drones use only onboard computation and wireless communication — no GPS required.

Each autonomous quadrotor drone is equipped with laser rangefinders for position estimation, localization, and path planning. As the drone flies around, it creates an individual 3-D map of the terrain. Algorithms help it recognize unexplored and already-searched spots, so it knows when it has fully mapped an area. An off-board ground station fuses individual maps from multiple drones into a global 3-D map that can be monitored by human rescuers.

In a real-world implementation, though not in the current system, the drones would come equipped with object detection to identify a missing hiker. When located, the drone would tag the hiker’s location on the global map. Humans could then use this information to plan a rescue mission.

“Essentially, we’re replacing humans with a fleet of drones to make the search part of the search-and-rescue process more efficient,” says first author Yulun Tian, a graduate student in the Department of Aeronautics and Astronautics (AeroAstro).

The researchers tested multiple drones in simulations of randomly generated forests, and tested two drones in a forested area within NASA’s Langley Research Center. In both experiments, each drone mapped a roughly 20-square-meter area in about two to five minutes and collaboratively fused their maps together in real-time. The drones also performed well across several metrics, including overall speed and time to complete the mission, detection of forest features, and accurate merging of maps.

Co-authors on the paper are: Katherine Liu, a PhD student in MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and AeroAstro; Kyel Ok, a PhD student in CSAIL and the Department of Electrical Engineering and Computer Science; Loc Tran and Danette Allen of the NASA Langley Research Center; Nicholas Roy, an AeroAstro professor and CSAIL researcher; and Jonathan P. How, the Richard Cockburn Maclaurin Professor of Aeronautics and Astronautics.

Exploring and mapping

On each drone, the researchers mounted a LIDAR system, which creates a 2-D scan of the surrounding obstacles by shooting laser beams and measuring the reflected pulses. This can be used to detect trees; however, to drones, individual trees appear remarkably similar. If a drone can’t recognize a given tree, it can’t determine whether it has already explored an area.

The researchers programmed their drones to instead identify multiple trees’ orientations, which is far more distinctive. With this method, when the LIDAR signal returns a cluster of trees, an algorithm calculates the angles and distances between trees to identify that cluster. “Drones can use that as a unique signature to tell if they’ve visited this area before or if it’s a new area,” Tian says.
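One way to make that idea concrete is to reduce a detected cluster to a signature that does not depend on the drone's viewpoint, such as the sorted pairwise distances between tree centers. The sketch below is a simplified stand-in for the team's actual feature descriptor, which the article does not spell out; the matching tolerance is an arbitrary illustrative value.

```python
import itertools
import math


def cluster_signature(tree_positions):
    """Viewpoint-independent summary of a cluster of detected trees.

    tree_positions: list of (x, y) tree centers in the drone's local frame.
    Sorted pairwise distances are unchanged by rotation and translation, so
    two visits to the same cluster should produce nearly the same signature.
    """
    pairs = itertools.combinations(tree_positions, 2)
    return sorted(math.dist(a, b) for a, b in pairs)


def same_cluster(sig_a, sig_b, tolerance_m=0.5):
    """Crude matching: clusters agree if every distance matches within tolerance."""
    return len(sig_a) == len(sig_b) and all(
        abs(a - b) < tolerance_m for a, b in zip(sig_a, sig_b)
    )
```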

This feature-detection technique helps the ground station accurately merge maps. The drones generally explore an area in loops, producing scans as they go. The ground station continuously monitors the scans. When two drones loop around to the same cluster of trees, the ground station merges the maps by calculating the relative transformation between the drones, and then fusing the individual maps to maintain consistent orientations.

“Calculating that relative transformation tells you how you should align the two maps so it corresponds to exactly how the forest looks,” Tian says.

In the ground station, robotic navigation software called “simultaneous localization and mapping” (SLAM) — which both maps an unknown area and keeps track of an agent inside the area — uses the LIDAR input to localize and capture the position of the drones. This helps it fuse the maps accurately.
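Once a shared cluster pins down the relative pose between two drones, merging their maps amounts to applying a rigid transformation (a rotation plus a translation) to one drone's points so they land in the other's frame. The sketch below shows that step in 2-D, assuming the rotation angle and offset have already been estimated; it illustrates the geometry rather than the system's SLAM code.

```python
import math


def apply_rigid_transform(points, theta_rad, tx, ty):
    """Express local-frame (x, y) points in the global map frame.

    Rotate each point by theta_rad, then translate by (tx, ty). In the real
    system this transform would be estimated from matched tree clusters;
    here it is simply given.
    """
    cos_t, sin_t = math.cos(theta_rad), math.sin(theta_rad)
    return [
        (cos_t * x - sin_t * y + tx, sin_t * x + cos_t * y + ty)
        for x, y in points
    ]


# Example: if drone B's frame is rotated 90 degrees and offset by (10, 0)
# relative to the global frame, its local point (1, 0) lands at (10, 1).
merged_points = apply_rigid_transform([(1.0, 0.0)], math.pi / 2, 10.0, 0.0)
```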

The end result is a map with 3-D terrain features. Trees appear as blocks of colored shades of blue to green, depending on height. Unexplored areas are dark but turn gray as they’re mapped by a drone. On-board path-planning software tells a drone to always explore these dark unexplored areas as it flies around. Producing a 3-D map is more reliable than simply attaching a camera to a drone and monitoring the video feed, Tian says. Transmitting video to a central station, for instance, requires a lot of bandwidth that may not be available in forested areas.

More efficient searching

A key innovation is a novel search strategy that lets the drones explore an area more efficiently. Under a more traditional approach, a drone would always search the closest possible unknown area. However, that area could be in any number of directions from the drone’s current position. The drone usually flies a short distance, and then stops to select a new direction.

“That doesn’t respect dynamics of drone [movement],” Tian says. “It has to stop and turn, so that means it’s very inefficient in terms of time and energy, and you can’t really pick up speed.”

Instead, the researchers’ drones explore the closest possible area while taking their current direction into account. The researchers believe this helps the drones maintain a more consistent velocity. This strategy — where the drone tends to travel in a spiral pattern — covers a search area much faster. “In search and rescue missions, time is very important,” Tian says.
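One simple way to express that trade-off, as an illustration rather than the planner described in the paper, is to score each candidate unexplored point by its distance plus a penalty for the turn it would force the drone to make; the weighting below is an arbitrary assumption.

    # Illustrative scoring rule, not the published planner: prefer targets that
    # are close and roughly in line with the drone's current heading.
    import numpy as np

    def pick_target(drone_xy, heading_rad, candidates, turn_weight=2.0):
        # candidates: (N, 2) array of unexplored points; the lowest score wins.
        c = np.asarray(candidates, float)
        offsets = c - np.asarray(drone_xy, float)
        dists = np.linalg.norm(offsets, axis=1)
        bearings = np.arctan2(offsets[:, 1], offsets[:, 0])
        # Smallest absolute angle between the current heading and each bearing.
        turn = np.abs((bearings - heading_rad + np.pi) % (2 * np.pi) - np.pi)
        return c[np.argmin(dists + turn_weight * turn)]

    # A nearby point behind the drone loses to a slightly farther point ahead.
    candidates = np.array([[-1.5, 0.0],    # close, but requires a 180-degree turn
                           [ 3.0, 0.5]])   # farther, but almost straight ahead
    print(pick_target(drone_xy=(0.0, 0.0), heading_rad=0.0, candidates=candidates))

Favoring targets that keep the drone moving roughly forward is consistent with the spiral-like coverage pattern described above.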

In the paper, the researchers compared their new search strategy with a traditional method. Compared to that baseline, the researchers’ strategy helped the drones cover significantly more area, several minutes faster and with higher average speeds.

One limitation for practical use is that the drones still must communicate with an off-board ground station for map merging. In their outdoor experiment, the researchers had to set up a wireless router that connected each drone and the ground station. In the future, they hope to design the drones to communicate wirelessly when approaching one another, fuse their maps, and then cut communication when they separate. The ground station, in that case, would only be used to monitor the updated global map.

Novelist Min Jin Lee makes the case for understanding through fiction

Thu, 11/01/2018 - 1:23pm

Renowned author Min Jin Lee made a vigorous case for literature as an essential means for understanding complex cultures around the globe, during a public event at MIT on Tuesday.

Lee, a childhood immigrant to the U.S. from Korea whose celebrated 2017 novel “Pachinko” details four generations of a Korean family in times of great upheaval, centered her remarks on the value of writing as a way of teaching people about their counterparts in unfamiliar cultures.

“Perhaps the job of the writer is to ask, ‘Could they be us?’” Lee said, speaking to a large and appreciative audience of over 300 people in MIT’s room 10-250.

The experience of reading fiction, she noted, brings a unique depth and commitment to the process of learning.

“What I’m asking is for you to hang out with me for 16 hours,” Lee said, referring to the amount of time “Pachinko” might take to read. “That’s a pretty big deal.” But one reward, she added, is, “If you could be Korean, only for those 16 hours … then you’ll realize you have the capacity to cross that ocean of unfamiliarity, where the unfamiliar becomes intimately your experience. And that is my goal, absolutely.”

Starr turn

Lee’s remarks were part of the Starr Forum series held by MIT’s Center for International Studies (CIS). The events feature public discussions about world politics, global trends, and international relations.

After delivering her remarks, Lee answered questions from the audience as well as from discussant Amy Carleton, a lecturer in MIT’s Comparative Media Studies/Writing program. Lee was introduced at the event by Richard Samuels, the Ford International Professor of Political Science at MIT and director of CIS.

“Pachinko,” Lee’s second novel and best-known work, was named one of the 10 best books of 2017 by The New York Times and a finalist for the National Book Award for fiction. The novel is a sprawling historical narrative that includes sections set in Japanese-occupied Korea during the early 20th century, and in a later period when some of the book’s key characters live in Japan, where Koreans — even those born in Japan — are denied certain rights and live in a distinct minority culture.

Lee’s writerly interest in life as an often-excluded member of society derives to a significant extent from her own family’s experience. In 1976, at age 7, Lee immigrated with her parents to the U.S. from Korea, an experience she referred to throughout the event.

“As a child I remember thinking it was so difficult,” Lee said about her parents, who left a middle-class life in Korea when they came to America and settled in Queens, New York. “They had to deal with poverty, and being mistreated, and so many inequities and indignities, from being a person on the outside.” Over time, she added, “You know what it feels like to watch your parents being insulted.”

For several reasons, Lee added, “My childhood was in many ways complicated and dark.” Still, she thrived as a student, graduated from Yale University and Georgetown University’s law school, and was a lawyer before deciding to become a full-time writer.

In so doing, Lee said, she was fulfilling a longstanding need to describe her own kind of social experience to others.

“In terms of writing ‘Pachinko,’ I wrote this as an adult, but I started it as a child,” Lee said. “I got the idea when I was 19.”

Validation for readers

Responding to an audience question about anything she might have done differently in her life and career, Lee said, “I wish I thought [earlier] that my story mattered. I wouldn’t have taken so long” to write it.

And in response to one student’s comment that he felt “validated” after reading a novel about Koreans marginalized in a larger society, Lee had some sharp criticisms about the lack of representation for people of Asian heritage in our culture.

“Asian-Americans in this country are systematically and routinely erased in the media,” Lee said. “It’s intentional.” As a result, she added, Asian-Americans can easily doubt that their presence and experiences should matter to others.

As Lee made clear, her family’s identity when they lived in Korea was a bit complicated, too. Lee’s grandfather was a Presbyterian minister, a position she said was associated with reformist politics in Korea and the tensions that come with such a position.

Lee also joked about the personal intricacies of maintaining a belief in both predestination on the one hand, and free will on the other. As she quipped, drawing laughs, “How do those two things work together? The way I think about it also, I can say this at MIT, is: Light is both a particle and a wave.”

Near the end of the question-and-answer session, one student asked Lee if she had thought about turning “Pachinko” into a longer work or continuing series in some form.

“It was an insane amount of work,” Lee replied. “I’m really glad I did it, but I would not continue it.”

Neuroscientists gain new insights through innovation

Thu, 11/01/2018 - 12:30pm

Hundreds of people who attended the Picower Institute for Learning and Memory fall symposium “Frontiers in Neurotechnology” on Oct. 23 got an inside look at emerging techniques and methods — ingenious new ways to look inside the brain, to understand its healthy anatomy and function, and to better understand disease.

Presented by 10 leading researchers, the improvements in microscopy, advances in culturing brain tissues, and novel ways to detect and control brain activity illustrated the rapid pace of innovation in neuroscience today, said Li-Huei Tsai, Picower Professor and director of the Picower Institute. In her opening remarks, she reflected on how many new technologies her lab has adopted in the last decade and invited the crowd that packed Building 46’s Singleton Auditorium — and the overflow seating outside — to do the same.

“We are very fortunate to be working in neuroscience at a time of such extraordinary ingenuity,” she told the attendees.

Improving insight

Some of the new techniques that Tsai and hundreds of colleagues worldwide are adopting include methods of preserving, optically clearing, labeling, and enlarging brain tissue that were invented by the symposium’s host, Kwanghun Chung, assistant professor in the Picower Institute, the Institute for Medical Engineering and Science and the Department of Chemical Engineering.

At the symposium, Chung revealed that he is leading a new five-year project funded by the National Institutes of Health to map the entire human brain at unprecedented scales of detail, ranging from the circuits spanning distant regions down to individual synapses where neurons connect. The collaboration will take advantage of the rich suite of technologies his lab has developed. In recent work, Chung said, he’s also been applying the techniques to trace the circuits connecting brain regions that are key to deep-brain stimulation (a Parkinson’s disease treatment) and to illuminate differences between models of the autism-like condition Rett syndrome and healthy controls.

Advanced tissue processing, though, is just one category of ways that neuroscientists are getting a better look at the brain. Several speakers discussed major leaps in microscopy that have enabled instruments to image deeper, more clearly and faster, keeping pace with neural activity.

Elizabeth Hillman of Columbia University, for example, discussed her ongoing development of “SCAPE,” a version of light-sheet two-photon microscopy in which scopes image a broad plane of tissue rather than just a narrow spot. She showed how SCAPE is fast and sharp enough to allow real-time imaging of neural activity and blood flow in entire brains of behaving animals such as zebrafish and fruit flies, entire bodies of nematode worms, and large brain volumes in mice.

Na Ji of the University of California at Berkeley, meanwhile, described a different way of imaging activity within large brain volumes in live animals. She’s used the “Bessel beam” system to simultaneously image all the dendrites and synapses of a neuron (in 3-D) in the visual cortex to watch how it responds to specific sensory inputs. What would take 10 hours with a conventional scope takes 20 minutes using the technique, she said. 

Ji and fellow speaker David Kleinfeld both pointed to another technology advancing microscopy: adaptive optics. This technology, already familiar to makers of advanced telescopes, dynamically adjusts the mirrors of a scope to mitigate distortions from light bending within complex tissues. By working to optimize parameters of two-photon microscopy, including the use of adaptive optics, Kleinfeld’s lab has been able to image 800 microns (0.8 mm) deep into the brain, which is enough to reach down to dendritic spines of neurons in layer 5 of the cortex, an important area of scientific interest because cells there receive input to those spines from the thalamus.

Beyond innovating upon and optimizing optics, another way speakers showed progress in seeing into the brain was in developing new kinds of “reporters,” or molecules that will fluoresce when they encounter a target chemical or protein. Kleinfeld, for example, has developed reporters called CNiFERs that show the concentration of neurotransmitters and neuromodulators. He’s used CNiFERs to show that mice can volitionally increase their levels of noradrenaline and dopamine when incentivized with rewarding feedback.

Boaz Levi of the Allen Institute discussed his work to develop reporters for an entirely different purpose: distinctly marking different cell classes and types in human and mouse cortex using engineered viruses. Even a rather specific region of the brain, after all, can have scores of different types of cells, each playing a diversity of roles. His goal is to help neuroscientists better sort through that complex landscape.

Yet another way to get a readout of brain activity is to monitor its large-scale oscillations, or brain waves. Though known for decades, brain waves still have huge untapped potential to inform researchers and clinicians alike, said Emery N. Brown, an anesthesiologist-neuroscientist-statistician at the Picower Institute, Massachusetts General Hospital, and Harvard University.

By advancing rigorous statistical methods to analyze EEG readings of patients under general anesthesia, Brown has been able to create systems that allow for monitoring a patient’s brain state in real time. The work has shed light on how anesthesia works, how people respond at different ages, and has led to innovations in medical practice that allow anesthesiologists to better control drug doses, often allowing for dramatic reductions in the amount of medicine administered, said Brown, the Edward Hood Taplin Professor at MIT. That, in turn, can help patients wake up faster and clearer-headed, yet with better-managed pain.

The next step in Brown’s work, he said, is developing a real-time, closed loop system that can regulate dosing in response to EEG readings.

Making "mini-brains"

For ethical and logistical reasons, particularly when studying developmental, neurological and psychiatric disorders, scientists need to experiment with lab-grown neural tissue cultures rather than actual brains. Several speakers at the symposium shared their latest innovations in the burgeoning field of culturing 3-D brain organoids, or mini-brains, which can provide otherwise unattainable insights. Organoids are considered valuable because they can be grown from reprogrammed cells taken from human patients, or other organisms, and then developed into models that reflect brain development with those same genes. Moreover, the genes can be precisely manipulated in the lab for experiments.

Sergiu Pasca of Stanford University, who published a review of how organoids can model various diseases the day after the symposium, offered several examples in his talk. For instance, in a new study his lab is examining the effects of oxygen deprivation on cells in the developing brain, a problem that sometimes affects fetuses.

Paola Arlotta of Harvard University also studies essential aspects of human brain development, such as the molecular rules that build neuronal diversity in the cerebral cortex, using organoid models. In her talk she emphasized the need to ensure that organoids provide reproducible experimental testbeds. After all, it’s one thing to grow tissues, but it’s another to make reliable experimental comparisons using them. She discussed work her lab has done to improve organoid culture to ensure greater reproducibility.

In a similar vein, Orly Reiner of the Weizmann Institute in Israel described how her desire to study a neurodevelopmental disorder using organoids motivated her to innovate to overcome challenges. Her lab is interested in understanding lissencephaly, a disease in which the brain’s surface lacks its characteristic folds. But as organoids grow, if they don’t have vasculature, cells embedded deep inside can’t get the nutrients they need and die. Moreover, without some of the kinds of innovations described above, microscopes can’t image those inner cells clearly. So Reiner’s lab decided to grow organoids on a chip, reshaping them into more of a thin, pita bread-like shape that could be imaged and sustained. The models have indeed provided important insights into the disorder.

Arnold Kriegstein of the University of California at San Francisco has also used organoids to study lissencephaly and other disorders. His lab has a more general interest in a particular class of progenitor cells that give rise to neurons during development. He found that these progenitor cells behave abnormally in the lissencephaly model. Meanwhile, in a very different project, members of his lab are using organoids in comparative evolutionary biology studies that show key differences in brain development among humans, chimpanzees and macaque monkeys.

After a day filled with neuroscientists describing new findings made possible by new technologies, Matthew Wilson, the Sherman Fairchild Professor in Neurobiology at MIT and associate director of the Picower Institute, summed up the optimism of the field.

“This is the frontier,” Wilson said. “Thinking about these technologies that are giving us access to describe and understand the complexity of the brain at the level of molecules, cells and systems and then using that to turn around and understand how we can control brain function and understand cognition, it really is a golden age.”

Study: Impact of mercury-controlling policies shrinks with every five-year delay

Thu, 11/01/2018 - 9:57am

Mercury is an incredibly stubborn toxin. Once it is emitted from the smokestacks of coal-fired power plants, among other sources, the gas can drift through the atmosphere for up to a year before settling into oceans and lakes. It can then accumulate in fish as toxic methylmercury, and eventually harm the people who consume the fish.

What’s more, mercury that was previously emitted can actually re-enter the atmosphere through evaporation. These “legacy emissions” can drift and be deposited elsewhere, setting off a cycle in which a growing pool of toxic mercury can circulate and contaminate the environment for decades or even centuries.

A new MIT study finds that the longer countries wait to reduce mercury emissions, the more legacy emissions will accumulate in the environment, and the less effective any emissions-reducing policies will be when they are eventually implemented.

In a paper published today in the journal Environmental Science and Technology, researchers have found that, for every five years that countries delay in cutting mercury emissions, the impact of any policy measures will be reduced by 14 percent on average. In other words, for every five years that countries wait to reduce mercury emissions, they will have to implement policies that are 14 percent more stringent in order to meet the same reduction goals.

The researchers also found that remote regions are likely to suffer most from any delay in mercury controls. Mercury contamination in these regions will only increase, mostly from the buildup of legacy emissions that have traveled there and continue to cycle through and contaminate their environments.

“The overall message is that we need to take action quickly,” says study author Noelle Selin, associate professor in MIT’s Institute for Data Systems and Society and Department of Earth, Atmospheric, and Planetary Sciences. “We will be dealing with mercury for a long time, but we could be dealing with a lot more of it the longer we delay controls.”

Global delay

The Minamata Convention, an international treaty with 101 parties including the United States, went into effect in August 2017. The treaty represents a global commitment to protect human health and the environment by reducing emissions of mercury from anthropogenic sources. The treaty requires that countries control emissions from specific sources, such as coal-fired power plants, which account for about a quarter of the world’s mercury emissions. Other sources addressed by the treaty include mercury used in artisanal and small-scale gold mining, nonferrous metals production, and cement production.

In drafting and evaluating their emissions-reducing plans, policymakers typically use models to simulate the amount of mercury that would remain in the atmosphere if certain measures were taken to reduce emissions at their source. But Selin says many of these models either do not account for legacy emissions or they assume that these emissions are constant from year to year. These measures also do not take effect immediately — the treaty urges that countries take action as soon as possible, but its requirements for controlling existing sources such as coal-fired power plants allow for up to a 10-year delay.  

“What many models usually don’t take into account is that anthropogenic emissions are feeding future legacy emissions,” Selin says. “So today’s anthropogenic emissions are tomorrow’s legacy emissions.”

The researchers suspected that, if countries hold off on implementing their emissions control plans, this could result in the growth of not just primary emissions from smokestacks, but also legacy emissions that make it back into the atmosphere a second time.

“In real life, when countries say, ‘we want to reduce emissions,’ it usually takes many years before they actually do,” says Hélène Angot, the study’s first author and a former postdoc at MIT. “We wanted to ask, what are the consequences of delaying action when you take legacy emissions into account.”

The legacy of waiting

The group used a combination of two models: GEOS-Chem, a global atmospheric model that simulates the transport of chemicals in the atmosphere around the world; and a biogeochemical cycle model that simulates the way mercury circulates in compartments representing the global atmosphere, soil, and water.
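As a rough illustration of how such a compartment model works, the sketch below steps a three-box mercury cycle forward in time; the boxes, rate constants, and starting values are placeholders for illustration, not numbers from the MIT models.

    # Toy three-box mercury cycle (atmosphere, soil, surface ocean). All rate
    # constants and burdens are made-up placeholders, not values from the study.
    def step(state, emissions, dt=1.0):
        # state = [atmosphere, soil, ocean] mercury burdens (arbitrary units);
        # emissions = primary anthropogenic emissions to air this year.
        atm, soil, ocean = state
        k_dep_soil, k_dep_ocean = 0.30, 0.30   # deposition out of the air, per year
        k_re_soil, k_re_ocean = 0.02, 0.04     # legacy re-emission, per year
        d_atm = (emissions - (k_dep_soil + k_dep_ocean) * atm
                 + k_re_soil * soil + k_re_ocean * ocean)
        d_soil = k_dep_soil * atm - k_re_soil * soil
        d_ocean = k_dep_ocean * atm - k_re_ocean * ocean
        return [atm + dt * d_atm, soil + dt * d_soil, ocean + dt * d_ocean]

    # Compare cutting primary emissions in half now versus after a 10-year delay.
    for delay in (0, 10):
        state = [1.0, 10.0, 5.0]               # arbitrary starting burdens
        for year in range(30):
            e = 1.0 if year < delay else 0.5
            state = step(state, e)
        print(f"delay = {delay:>2} yr: atmospheric burden after 30 yr = {state[0]:.2f}")

Because every year of primary emissions also tops up the soil and ocean pools that later re-emit, the delayed scenario ends up with a larger burden long after the cut finally happens, which is the dynamic the study quantifies with far more realistic models.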

With this modeling combination, the researchers estimated the amount of legacy emissions that would be produced in any region of the world, given various emissions-reducing policy timelines. They assumed a scenario in which countries would adopt a policy to reduce global mercury emissions by 50 percent compared to 2010 levels. They then simulated the amount of mercury that would be deposited in lakes and oceans, both from primary and legacy emissions, if such a policy were delayed every five years, from 2020 to 2050.

In sum, they found that if countries were to delay by five, 10, or 15 years, any policy they would implement would have 14, 28, or 42 percent less of an impact, respectively, than if that same policy were put in place immediately.
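Read at face value, those figures scale linearly with the length of the delay. A back-of-the-envelope version of that relationship, which only restates the reported numbers rather than reproducing the models, looks like this:

    # Restating the reported scaling: roughly 14 percent of a policy's impact
    # is lost for every five years of delay.
    def impact_loss(delay_years, loss_per_five_years=0.14):
        return loss_per_five_years * (delay_years / 5)

    for delay in (5, 10, 15):
        print(f"{delay}-year delay: about {impact_loss(delay):.0%} less impact")
    # 5-year delay: about 14% less impact
    # 10-year delay: about 28% less impact
    # 15-year delay: about 42% less impact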

“The longer we wait, the longer it will take to get to safe levels of contamination,” Angot says.

Remote consequences

Based on their simulations, the researchers compared four regions located at various distances from anthropogenic sources: remote areas of eastern Maine; Ahmedabad, one of the largest cities in India, located near two coal-fired power plants; Shanghai, China’s biggest city, which has elevated atmospheric mercury concentrations; and an area of the Southern Pacific known for its tuna fisheries.

They found that, proportionally, delays in mercury action had higher consequences in the regions that were farthest away from any anthropogenic source of mercury, such as eastern Maine — an area that is home to several Native American tribes whose livelihoods and culture depend in part on the local fish catches.

Selin and Angot have been collaborating with members of these tribes, in a partnership that was established by MIT’s Center for Environmental Health Sciences.

“These communities are trying to go back to a more traditional way of life, and they want to eat more fish, but they’re contaminated,” Angot says. “So they asked us, ‘When can we safely eat as much fish as we want? When can we assume that mercury concentrations will be low enough so we can eat fish regularly?’”

To answer these questions, the team modeled the amount of fish contamination in eastern Maine that could arise from a buildup of legacy emissions if mercury-reducing policies are delayed. The researchers used a simple lake model, adapted and applied at MIT in collaboration with colleagues at Michigan Technological University, that simulates the way mercury circulates through a column that represents layers of the atmosphere, a lake, and the sediment beneath. The model also simulates the way mercury converts into methylmercury, its more toxic form that can bioaccumulate in fish.
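In spirit, that calculation links atmospheric deposition to fish concentrations through a chain of simple factors. The sketch below is a deliberately crude, steady-state stand-in with made-up parameter values, not the lake model used in the study:

    # Crude steady-state illustration; every parameter is an assumed placeholder.
    def fish_methylmercury(deposition_ug_m2_yr, lake_depth_m=10.0,
                           loss_rate_per_yr=0.5, methyl_fraction=0.05,
                           bioaccumulation_factor=1.0e6):
        # Steady-state total mercury in the water column, in micrograms per liter:
        # deposition spread over the water depth, balanced by first-order losses.
        water_hg_ug_per_l = deposition_ug_m2_yr / (lake_depth_m * 1000.0 * loss_rate_per_yr)
        # Only a small fraction is converted to methylmercury, which then
        # concentrates up the food web by the bioaccumulation factor (L/kg).
        return bioaccumulation_factor * methyl_fraction * water_hg_ug_per_l

    # Halving deposition halves the steady-state fish concentration here; the
    # real model also captures how slowly lakes and sediments respond.
    for deposition in (10.0, 5.0):
        print(deposition, "->", round(fish_methylmercury(deposition), 1), "ug of MeHg per kg of fish")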

“In general, we found that the longer we wait to decrease global emissions, the longer it will take to get to safe methylmercury concentrations in fish,” Angot says. “Basically, if you are far away [from any anthropogenic source of mercury], you rely on everyone else. All countries have to decrease emissions if you want to see a decrease in contamination in a very remote place. So that’s why we need global action.”

This research was supported, in part, by the National Institute of Environmental Health Sciences, through a core grant to MIT’s Center for Environmental Health Sciences, and by the NIEHS Superfund Basic Research Program.
