Feed aggregator

SCIENCE: 'Butterfly effect' hampers weather forecasts

ClimateWire News - Thu, 08/08/2019 - 6:42am
BOULDER, Colo. — When it comes to using very elaborate high-speed computer models to forecast the weather, there is good and bad news.

PEOPLE: Key architect of car rules rollback to depart — sources

ClimateWire News - Thu, 08/08/2019 - 6:42am
Regulators and other observers anticipate that Heidi King, a chief architect of President Trump's rollback of clean car standards, will soon leave the administration.

UNITED NATIONS: Major report urges overhaul of global land use, agriculture

ClimateWire News - Thu, 08/08/2019 - 6:42am
Halting climate change and feeding the world's rapidly growing population both require major overhauls to the way that humans manage the land they live on, according to a much-anticipated report from the Intergovernmental Panel on Climate Change.

Does cable news shape your views?

MIT Latest News - Wed, 08/07/2019 - 11:59pm

It’s a classic question in contemporary politics: Does partisan news media coverage shape people’s ideologies? Or do people decide to consume political media that is already aligned with their beliefs?

A new study led by MIT political scientists tackles this issue head-on and arrives at a nuanced conclusion: While partisan media does indeed have “a strong persuasive impact” on political attitudes, as the researchers write in a newly published paper, news media exposure has a bigger impact on people without strongly held preferences for partisan media than on people who seek out partisan media outlets.

In short, certain kinds of political media affect a cross-section of viewers in varying manners, and to varying degrees — so while the influence of partisan news is real, it also has its limits.

“Different populations are going to respond to partisan media in different ways,” says Adam Berinsky, the Mitsui Professor of Political Science and director of the Political Experiments Research Lab (PERL) at MIT, and a co-author of the study.

“Political persuasion is hard,” Berinsky adds. “If it were easy, the world would already look a lot different.”

The paper, “Persuading the Enemy: Estimating the Persuasive Effects of Partisan Media with the Preference-Incorporating Choice and Assignment Design,” is now available in advance online form from the American Political Science Review.

In addition to Berinsky, the authors are Justin de Benedictis-Kessner PhD ’17, an assistant professor of political science at Boston University; Mathew A. Baum, a professor at the Harvard Kennedy School; and Teppei Yamamoto, an associate professor in MIT’s Department of Political Science.

Breaking down the problem

A substantial political science literature has debated the question of media influence; some scholars have contended that partisan media significantly shapes public opinion, but others have argued that “selective exposure,” in which people watch what they already agree with, is predominant. 

“It’s a really tricky problem,” Berinsky says. “How do you disentangle these things?”

The new research aims to do that, in part, by disaggregating the viewing public. The study consists of a series of experiments and surveys analyzing the responses of smaller subgroups, which were divided according to media consumption preferences, ideology, and more.

That allows the researchers to tease apart the cause-and-effect issues surrounding media consumption by looking more specifically at the impact of media on people with different ideologies and different levels of willingness to view media. The researchers call this approach the Preference-Incorporating Choice and Assignment design, or PICA.

For instance, one experiment within the study gave participants the option of reading web posts from the conservative Fox News channel, from MSNBC, whose programming leans significantly more liberal, or from the Food Network. Other participants were assigned to one of the three.

By examining viewer responses to the content, the scholars found that people who elected to read material from partisan news channels were less influenced by it. By contrast, participants who gravitated to the Food Network but were assigned to watch cable news were more influenced by the content.

How big is the effect? Quantitatively, the researchers found, a single exposure to partisan media can change the views of relatively nonpolitical citizens by an amount equal to one-third of the average ideological gap that exists between partisans on the right and left sides of the political spectrum.

Thus, the influence of cable news depends on who it is reaching. “People do respond differently based on their preferences,” Berinsky says.

And while the impact of partisan cable news on people who elect to watch it is smaller, it does exist, the researchers found. For instance, in another of the study’s experiments, the researchers tested cable news’ effects on viewers’ beliefs about marijuana legislation. Even among regular cable-news viewers, partisan content influenced people’s views.

Overall, Yamamoto states, the PICA method is novel because it “allows us to make inferences about what is never [otherwise] directly observable,” that is, the impact of partisan media on people who would normally choose not to consume it.  

“Most people just don’t want news”

To put the findings in the context of daily news viewership in the U.S., consider the recent congressional hearings in which special counsel Robert Mueller testified about his presidential investigation. Fox News led the cable ratings with an average of 3 million viewers during most of the day, while MSNBC had an average of 2.4 million viewers. Overall, 13 million people watched. But the Super Bowl, for example, regularly pulls in around 100 million viewers.

“Most people just don’t want to be exposed to political news,” Berinsky notes. “These are not bad people or bad citizens. In theory, a democracy is working well when you can ignore politics.”

One implication of this broad lack of interest in politics is that any audience gains partisan media outlets experience can produce relatively greater influence, since that growth would come from formerly irregular consumers of news, who may be more easily influenced. Again, though, such audience gains are likely to be limited, given the reluctance of most Americans to consume partisan media.

“We only learned those people are persuadable because we made them watch the news,” Berinsky says.

Other scholars in the field say the paper is a valuable addition to the literature on media influence. Kevin Arceneaux, the Thomas J. Freaney, Jr. Professor of Political Science and director of the Behavioral Foundations Lab at Temple University, says the study “represents an important methodological leap forward in the study of media effects.”

Arceneaux says the researchers “convincingly demonstrate that partisan news media have the largest effects among individuals who tend to avoid consuming news,” and suggests some possible implications pertaining to the larger media landscape.

For people who do follow politics, he suggests, having many news options available may “blunt the persuasive and polarizing effects of partisan news media”; at the same time, social media could be “an important source of polarization” by introducing some people to news. Arceneaux also notes that further research on the effects of “counterattitudinal” partisan news — content that argues against the beliefs of consumers — would shed more light on the dynamics of media influence.

The study was supported by a National Science Foundation grant and the Political Experiments Research Lab at MIT; Berinsky’s contribution was partly supported by a Joan Shorenstein Fellowship.

Air travel in academia

MIT Latest News - Wed, 08/07/2019 - 12:45pm

Our planet’s warming climate presents an imminent and catastrophic challenge that will have far-reaching economic, social, and political ramifications. As residents of a wealthy, developed nation, we contribute more to climate change than the average global citizen. At MIT, as globally connected citizens with many opportunities for work- and research-related air travel, many community members contribute more to climate change than the average American.

For many individuals at the Media Lab, who travel around the world to collaborate on research projects, present at conferences, and lead workshops, research-related air travel represents a huge proportion of their annual greenhouse-gas emissions. For example, a single economy-class seat on a flight from Boston, Massachusetts, to Los Angeles, California, is responsible for the same carbon emissions as 110 days of driving a car. Several labbers wanted to do more to educate the Media Lab community about the impact of our collective air travel and improve the lab’s sustainability.

While the best way to reduce our carbon footprint would be to take fewer airplane flights, this solution isn’t always possible or desirable given the research opportunities that require air travel. Instead, research assistants Juliana Cherston, Natasha Jaques, and Caroline Jaffe decided to start a pilot program through which the Media Lab will buy high-quality carbon offsets to reduce the climate impact of the lab’s collective air travel. The program's website was designed and engineered by Craig Ferguson.

Though carbon-offset programs have been criticized in the past for giving people an excuse for irresponsible climate behavior, carbon-offset verification has improved drastically in the past decade. When it is infeasible to reduce overall air travel mileage, the purchase of high-quality, verified carbon offsets will fund projects that produce renewable energy and avoid future carbon emissions. As part of a pilot program, the lab plans to buy carbon offsets through Gold Standard, a certified offset provider that verifies that their offset projects, like distributing clean cooking stoves, investing in wind power plants, and regenerating forests, both reduce carbon emissions and meet the United Nations' Sustainable Development Goals.

During the six-month pilot program, the project leaders are asking members of the Media Lab community to log their lab-related air miles through a simple web interface. At the end of each month they will tally the air miles traveled by the community, calculate the carbon emissions associated with those flights, and purchase offsets through Gold Standard to offset the impact of those flights. It is hoped that the program will spark a discussion about climate behavior while contributing to a global model of sustainability.
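The monthly routine the article describes — tally logged miles, convert them to emissions, and price the offsets — can be sketched in a few lines. The emission factor and offset price below are illustrative assumptions for the sketch, not figures from the Media Lab program or Gold Standard:

```python
# Hypothetical sketch of the monthly tally-and-offset step described above.
# Both constants are assumed round numbers, not the program's actual figures.
KG_CO2E_PER_PASSENGER_MILE = 0.2   # assumed average for an economy seat
USD_PER_TONNE_OFFSET = 12.0        # assumed offset price per tonne CO2e

def monthly_offset_cost(logged_miles):
    """Sum the community's logged air miles, estimate emissions in
    tonnes of CO2-equivalent, and return the offset purchase cost."""
    total_miles = sum(logged_miles)
    tonnes_co2e = total_miles * KG_CO2E_PER_PASSENGER_MILE / 1000.0
    return total_miles, tonnes_co2e, tonnes_co2e * USD_PER_TONNE_OFFSET

# Three hypothetical trips logged in one month
miles, tonnes, cost = monthly_offset_cost([5200, 2600, 680])
```

In practice a per-flight calculation would also account for cabin class, aircraft type, and radiative forcing at altitude, which is part of why the team relies on a certified provider's methodology rather than a flat factor.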

While putting together the pilot program, the organizing team members ran into a few surprising data and design issues. First, they learned that gathering data — and knowing which data to collect — was trickier than expected. What exactly counts as “lab-related” travel, and is there some centralized system that tracks the lab’s air mileage? It turns out that no such system exists. While MIT maintains careful financial accounting, there hasn’t been a reason to specifically track mileage before, and the ability to do so is not built into the Institute’s accounting systems.

The team also wrestled with interesting questions around user participation. While they wanted to encourage as many people as possible to participate in order to collect the most accurate travel data, they also didn’t want to incentivize people to travel more than they already do. And they didn’t want people to abdicate a sense of responsibility by knowing their travel was being offset. In the process of putting together this pilot, the team learned of other groups at MIT and at other universities who are developing carbon-offset programs. In other cases, offset programs are top-down: Offsets are automatically purchased through finance or logistics channels. These programs don’t have to deal with user-participation challenges and likely have more accurate data totals, but they also miss the opportunity to engage the community in a substantive conversation around air travel emissions.

After thinking carefully about goals for the project, the team decided that soliciting travel data from the community would do the most to raise awareness about the issue — and it was also a cheap and easy way to kick off a pilot. After launching the pilot several weeks ago, the team has received a few dozen messages communicating enthusiasm, asking questions, and raising concerns. They are planning to send monthly update emails to the Media Lab community, and host several discussion groups at the end of the pilot to evaluate the program and figure out what to do next. Through this pilot, the team hopes to learn about what makes an effective carbon-offsets program and pass this knowledge on to groups at MIT and other schools who are trying to implement university-wide offset programs.

Read more at offset.media.mit.edu (and log your air miles if you’re at the Media Lab). When the pilot is complete, the team will publish a followup to share its findings.

A version of this article was previously published by the MIT Media Lab.

3Q: Jeremy Gregory on measuring the benefits of hazard resilience

MIT Latest News - Wed, 08/07/2019 - 12:30pm

According to the National Oceanic and Atmospheric Administration (NOAA), the combined cost of natural disasters in the United States was $91 billion in 2018. The year before, natural disasters inflicted even greater damage — $306.2 billion. Traditionally, investment in mitigating these damages has gone toward disaster response. While important, disaster response is only one part of disaster mitigation. By putting more resources into disaster readiness, communities can reduce the time it takes to recover from a disaster while decreasing loss of life and damage costs. Experts refer to this preemptive approach as resilience.

Resilience entails a variety of actions. In the case of individual buildings, it can be as straightforward as increasing the nail size in roof panels, using thicker windows, and increasing the resistance of roof shingles. On a broader scale, it involves predicting vulnerabilities in a community and preparing for surge pricing and other economic consequences associated with disasters.

MIT Concrete Sustainability Hub Executive Director Jeremy Gregory weighs in on why resilience hasn’t been widely adopted in the United States and what can be done to change that.

Q: What is resilience in the context of disaster mitigation?

A: Resilience is how one responds to a change, usually that is in the context of some type of disaster — whether it’s natural or manmade. There are three components of resilience: How significant is the damage due to the disaster? How long does it take to recover? What is the level of recovery after a certain amount of time?

It’s important to invest in resilience since we can mitigate significant expenses and loss of life due to disasters before they occur. So, if we build more resiliently in the first place, then we don’t end up spending as much on the response to a disaster, and communities can become operational again more quickly.

Generally, building construction is not particularly resilient. That’s primarily because the incentives aren’t aligned for creating resilient construction. For example, the Federal Emergency Management Agency, which handles disaster response, invests significantly more in post-disaster mitigation efforts than it does in pre-disaster mitigation efforts — the funds are an order of magnitude greater for the former. Part of that could be that we’re relying on an agency that’s primarily focused on emergency response to help us prepare for avoiding an emergency response. But primarily, that’s because when buildings are purchased, we don’t have information on the resiliency of the building.

Q: What is needed to make resilience more widely adopted?

A: Essentially, we need a robust approach for quantifying the benefits of resilience for a diverse range of contexts. For a lot of buildings, the construction decisions are not made in consultation with the ultimate owner of the building. A developer has to make decisions based on what they think the owner will value. And right now, owners don’t communicate that they value resilience. I think a big part of that is that they don’t have enough quantitative information about why one building is more resilient than another.

So, for example, when it comes to the fuel economy of our automobiles, we now have a consistent way to measure that fuel economy and communicate fuel consumption costs over the life cycle of the vehicle. Or similarly, we have a way of measuring the energy consumption of appliances that we buy and quantifying those costs throughout the product life. We currently don’t have a robust system for quantifying the resilience of a building and how that will translate into costs associated with repairs due to hazards over the lifetime of the building.

Q: Is building resilient expensive?

A: Building resilient does not have to be significantly more expensive than conventional construction. Our research has shown that more resilient construction can cost less than 10 percent more than conventional construction. But those increased initial costs are offset by lower expenses associated with hazard repairs over the lifetime of the building. So, in some of the cases we looked at in residential construction, the payback periods for the more hazard-resistant construction were five years or less in areas prone to hurricane damage. Our other research on the break-even mitigation percentage has shown that, in some of the most hurricane-prone areas, you can spend up to nearly 20 percent more on the initial investment of the building and break even on your expenses over a 30-year period, including from the damages due to hazards, compared to a conventional building that will sustain more damage.
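The break-even logic in that answer is simple enough to sketch: the extra up-front spending pays off once it equals the hazard-repair costs it avoids over the analysis period. The dollar figures and damage rates below are made-up assumptions for illustration, not numbers from the Concrete Sustainability Hub's studies:

```python
# Illustrative break-even check for hazard-resistant construction.
# All inputs are hypothetical; discounting is omitted for simplicity.

def breakeven_premium(base_cost, annual_damage_conventional,
                      annual_damage_resilient, years=30):
    """Largest fraction extra one could spend up front and still break
    even, given the hazard-repair costs avoided over `years`."""
    avoided = (annual_damage_conventional - annual_damage_resilient) * years
    return avoided / base_cost

# e.g. a $300k home in a hurricane-prone area: assume conventional
# construction averages $2,500/yr in hazard repairs, resilient
# construction $600/yr, over a 30-year horizon.
premium = breakeven_premium(300_000, 2_500, 600)
print(f"break-even premium: {premium:.1%}")
```

With these assumed inputs the break-even premium comes out near 20 percent, the same order as the figure cited in the interview; a real analysis would discount future repair costs and model damage probabilistically.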

It’s important for owners to know how significant these costs are and what the life-cycle benefits are for more hazard-resistant construction. Once developers know that homeowners value that information, that will create more market demand for hazard-resistant construction and ultimately lead to the development of safer and more resilient communities.

A similar shift has occurred in the demand for green buildings, and that’s primarily due to rating systems like LEED [Leadership in Energy and Environmental Design]: developers now construct buildings with green rating systems because they know there is a market premium for those buildings, since owners value them. We need to create a similar kind of demand for resilient construction.

There are several resilient rating systems already in place. The Insurance Institute for Business and Home Safety, for example, has developed the Fortified rating system, which informs homeowners and builders about hazard risks and ranks building designs according to certain levels of protection. The U.S. Resiliency Council’s Building Rating System is another model that offers four rating levels and currently focuses primarily on earthquakes. Additionally, there is the RELi rating by the U.S. Green Building Council — the same organization that runs the LEED ratings. These are all good efforts to communicate resilient construction, but there are also opportunities to incorporate more quantitative estimates of resilience into the rating systems.

The rise of these kinds of resilience rating systems is particularly timely since the annual cost of hazard-induced damage is expected to increase over the next century due to climate change and development in hazard-prone areas. But with new standards for quantifying resilience, we can motivate hazard-resistant construction that protects communities and mitigates the consequences of climate change.

Study measures how fast humans react to road hazards

MIT Latest News - Wed, 08/07/2019 - 11:28am

Imagine you’re sitting in the driver’s seat of an autonomous car, cruising along a highway and staring down at your smartphone. Suddenly, the car detects a moose charging out of the woods and alerts you to take the wheel. Once you look back at the road, how much time will you need to safely avoid the collision?

MIT researchers have found an answer in a new study that shows humans need about 390 to 600 milliseconds to detect and react to road hazards, given only a single glance at the road — with younger drivers detecting hazards nearly twice as fast as older drivers. The findings could help developers of autonomous cars ensure they are allowing people enough time to safely take the controls and steer clear of unexpected hazards.

Previous studies have examined hazard response times while people kept their eyes on the road and actively searched for hazards in videos. In this new study, recently published in the Journal of Experimental Psychology: General, the researchers examined how quickly drivers can recognize a road hazard if they’ve just looked back at the road. That’s a more realistic scenario for the coming age of semiautonomous cars that require human intervention and may unexpectedly hand over control to human drivers when facing an imminent hazard.

“You’re looking away from the road, and when you look back, you have no idea what’s going on around you at first glance,” says lead author Benjamin Wolfe, a postdoc in the Computer Science and Artificial Intelligence Laboratory (CSAIL). “We wanted to know how long it takes you to say, ‘A moose is walking into the road over there, and if I don’t do something about it, I’m going to take a moose to the face.’”

For their study, the researchers built a unique dataset that includes YouTube dashcam videos of drivers responding to road hazards — such as objects falling off truck beds, moose running into the road, 18-wheelers toppling over, and sheets of ice flying off car roofs — and other videos without road hazards. Participants were shown split-second snippets of the videos, in between blank screens. In one test, they indicated if they detected hazards in the videos. In another test, they indicated if they would react by turning left or right to avoid a hazard.

The results indicate that younger drivers are quicker at both tasks: Older drivers (55 to 69 years old) required 403 milliseconds to detect hazards in videos, and 605 milliseconds to choose how they would avoid the hazard. Younger drivers (20 to 25 years old) only needed 220 milliseconds to detect and 388 milliseconds to choose.

Those age results are important, Wolfe says. When autonomous vehicles are ready to hit the road, they’ll most likely be expensive. “And who is more likely to buy expensive vehicles? Older drivers,” he says. “If you build an autonomous vehicle system around the presumed capabilities of reaction times of young drivers, that doesn’t reflect the time older drivers need. In that case, you’ve made a system that’s unsafe for older drivers.”

Joining Wolfe on the paper are: Bobbie Seppelt, Bruce Mehler, Bryan Reimer, of the MIT AgeLab, and Ruth Rosenholtz of the Department of Brain and Cognitive Sciences and CSAIL.

Playing “the worst video game ever”

In the study, 49 participants sat in front of a large screen that closely matched the visual angle and viewing distance for a driver, and watched 200 videos from the Road Hazard Stimuli dataset for each test. They were given a toy wheel, brake, and gas pedals to indicate their responses. “Think of it as the worst video game ever,” Wolfe says.

The dataset includes about 500 eight-second dashcam videos of a variety of road conditions and environments. About half of the videos contain events leading to collisions or near collisions. The other half try to closely match each of those driving conditions, but without any hazards. Each video is annotated at two critical points: the frame when a hazard becomes apparent, and the first frame of the driver’s response, such as braking or swerving.

Before each video, participants were shown a split-second white noise mask. When that mask disappeared, participants saw a snippet of a random video that did or did not contain an imminent hazard. After the video, another mask appeared. Directly following that, participants stepped on the brake if they saw a hazard or the gas if they didn’t. There was then another split-second pause on a black screen before the next mask popped up.

When participants started the experiment, the first video they saw was shown for 750 milliseconds. But the duration changed during each test, depending on the participants’ responses. If a participant responded incorrectly to one video, the next video’s duration would extend slightly. If they responded correctly, it would shorten. In the end, durations ranged from a single frame (33 milliseconds) up to one second. “If they got it wrong, we assumed they didn’t have enough information, so we made the next video longer. If they got it right, we assumed they could do with less information, so we made it shorter,” Wolfe says.
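The adaptive rule described above — lengthen the next clip after an error, shorten it after a correct response — can be sketched directly. The 750 ms starting duration and the 33 ms–1 s bounds come from the article; the assumption that each adjustment is a single 33 ms frame is ours, since the article doesn't state the step size:

```python
# A minimal sketch of the study's adaptive-duration rule, under the
# assumption (not stated in the article) that each step is one frame.

FRAME_MS = 33          # one video frame at ~30 fps
MIN_MS, MAX_MS = 33, 1000
START_MS = 750         # duration of the first clip, per the article

def next_duration(current_ms, answered_correctly):
    """Shorten the next clip after a correct response, lengthen it
    after an error, clamped to one frame .. one second."""
    step = -FRAME_MS if answered_correctly else FRAME_MS
    return max(MIN_MS, min(MAX_MS, current_ms + step))

# Two correct answers, then a miss: 750 -> 717 -> 684 -> 717 ms
d = START_MS
for correct in [True, True, False]:
    d = next_duration(d, correct)
```

Procedures like this converge on the shortest exposure at which a participant can still respond reliably, which is how the study arrives at per-group detection thresholds rather than a single fixed viewing time.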

The second task used the same setup to record how quickly participants could choose a response to a hazard. For that, the researchers used a subset of videos where they knew the response was to turn left or right. Each of these videos stopped, and the mask appeared, at the first frame in which the driver began to react. Participants then turned the wheel either left or right to indicate where they’d steer.

“It’s not enough to say, ‘I know something fell into the road in my lane.’ You need to understand that there’s a shoulder to the right and a car in the next lane that I can’t accelerate into, because I’ll have a collision,” Wolfe says.

More time needed

The MIT study didn’t record how long it actually takes people to, say, physically look up from their phones or turn a wheel. Instead, it showed people need up to 600 milliseconds to just detect and react to a hazard, while having no context about the environment.

Wolfe thinks that’s concerning for autonomous vehicles, since they may not give humans adequate time to respond, especially under panic conditions. Other studies, for instance, have found that it takes people who are driving normally, with their eyes on the road, about 1.5 seconds to physically avoid road hazards, starting from initial detection.

Driverless cars will already require a couple hundred milliseconds to alert a driver to a hazard, Wolfe says. “That already bites into the 1.5 seconds,” he says. “If you look up from your phone, it may take an additional few hundred milliseconds to move your eyes and head. That doesn’t even get into the time it’ll take to reassert control and brake or steer. Then, it starts to get really worrying.”

Next, the researchers are studying how well peripheral vision helps in detecting hazards. Participants will be asked to stare at a blank part of the screen — indicating where a smartphone may be mounted on a windshield — and similarly pump the brakes when they notice a road hazard.

The work is sponsored, in part, by the Toyota Research Institute.  

NORWAY: Wealth fund's bid to dump Big Oil is now but a whimper

ClimateWire News - Wed, 08/07/2019 - 6:43am
After revealing it wants to dump all oil stocks in a market-shattering bang in 2017, Norway's $1.1 trillion wealth fund's actual divestment could now be so small it hardly matters.

BRAZIL: Latest deforestation data shows significant surge

ClimateWire News - Wed, 08/07/2019 - 6:43am
New data from the Brazilian space research institute indicates a surge in deforestation in the Amazon in the last quarter.

PEOPLE: Hillary and Chelsea Clinton writing book on 'Gutsy Women'

ClimateWire News - Wed, 08/07/2019 - 6:43am
Hillary Clinton and Chelsea Clinton have teamed up for "The Book of Gutsy Women," honoring everyone from scientist Marie Curie to climate activist Greta Thunberg.

TRANSPORTATION: Battery-powered ships next up in battle to tackle emissions

ClimateWire News - Wed, 08/07/2019 - 6:43am
The electric battery boom has a new target: ships.

EXTREME WEATHER: Children affected by Harvey feel new stresses

ClimateWire News - Wed, 08/07/2019 - 6:43am
The state of Texas and its communities have taken major steps to defend themselves from natural disasters since Hurricane Harvey devastated large swaths of the region two years ago, a new report indicates.

EMISSIONS: Energy boom in Asia may undermine global CO2 goals

ClimateWire News - Wed, 08/07/2019 - 6:43am
Southeast Asia's meteoric economic rise threatens global climate goals, and the region itself, international organizations and financial institutions are warning with rising frequency.

EPA: Chamber of Commerce to defend ACE rule in court

ClimateWire News - Wed, 08/07/2019 - 6:43am
Another player has thrown its weight behind the Trump administration's new Affordable Clean Energy rule in court.

Q&A: An author of N.Y. climate bill on its success and pitfalls

ClimateWire News - Wed, 08/07/2019 - 6:43am
New York passed one of the most ambitious pieces of climate legislation in American history this summer. The Climate Leadership and Community Protection Act requires the state to cut emissions 40% from 1990 levels by 2030 and 85% by 2050. The remaining 15% would be offset.

EMISSIONS: Why are EPA methane estimates so low? It's guesswork

ClimateWire News - Wed, 08/07/2019 - 6:43am
Studies continue to pour in showing that EPA's decades-old method for estimating methane emissions from oil and gas facilities doesn't stand up to empirical data.

ENERGY TRANSITIONS: The future of offshore wind may depend on Bernhardt

ClimateWire News - Wed, 08/07/2019 - 6:43am
The nation's first large project for offshore wind is scuffling, and it's unclear how supportive Interior Secretary David Bernhardt is of the proposed facility in Massachusetts.

Astrophysical shock phenomena reproduced in the laboratory

MIT Latest News - Tue, 08/06/2019 - 11:59pm

Vast interstellar events where clouds of charged matter hurtle into each other and spew out high-energy particles have now been reproduced in the lab with high fidelity. The work, by MIT researchers and an international team of colleagues, should help resolve longstanding disputes over exactly what takes place in these gigantic shocks.

Many of the largest-scale events, such as the expanding bubble of matter hurtling outward from a supernova, involve a phenomenon called collisionless shock. In these interactions, the clouds of gas or plasma are so rarefied that most of the particles involved actually miss each other, but they nevertheless interact electromagnetically or in other ways to produce visible shock waves and filaments. These high-energy events have so far been difficult to reproduce under laboratory conditions that mirror those in an astrophysical setting, leading to disagreements among physicists as to the mechanisms at work in these astrophysical phenomena.

Now, the researchers have succeeded in reproducing critical conditions of these collisionless shocks in the laboratory, allowing for detailed study of the processes taking place within these giant cosmic smashups. The new findings are described in the journal Physical Review Letters, in a paper by MIT Plasma Science and Fusion Center Senior Research Scientist Chikang Li, five others at MIT, and 14 others around the world.

Virtually all visible matter in the universe is in the form of plasma, a kind of soup of subatomic particles where negatively charged electrons swim freely along with positively charged ions instead of being connected to each other in the form of atoms. The sun, the stars, and most clouds of interstellar material are made of plasma.

Most of these interstellar clouds are extremely tenuous, with such low density that true collisions between their constituent particles are rare even when one cloud slams into another at extreme velocities that can be much faster than 1,000 kilometers per second. Nevertheless, the result can be a spectacularly bright shock wave, sometimes showing a great deal of structural detail including long trailing filaments.

Astronomers have found that many changes take place at these shock boundaries, where physical parameters “jump,” Li says. But deciphering the mechanisms taking place in collisionless shocks has been difficult, since the combination of extremely high velocities and low densities has been hard to match on Earth.

While collisionless shocks had been predicted earlier, the first one that was directly identified, in the 1960s, was the bow shock formed by the solar wind, a tenuous stream of particles emanating from the sun, when it hits Earth’s magnetic field. Soon, many such shocks were recognized by astronomers in interstellar space. But in the decades since, “there has been a lot of simulations and theoretical modeling, but a lack of experiments” to understand how the processes work, Li says.

Li and his colleagues found a way to mimic the phenomena in the laboratory by generating a jet of low-density plasma using a set of six powerful laser beams, at the OMEGA laser facility at the University of Rochester, and aiming it at a thin-walled polyimide plastic bag filled with low-density hydrogen gas. The results reproduced many of the detailed instabilities observed in deep space, thus confirming that the conditions match closely enough to allow for detailed, close-up study of these elusive phenomena. A quantity called the mean free path of the plasma particles was measured as being much greater than the widths of the shock waves, Li says, thus meeting the formal definition of a collisionless shock.
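
The collisionless criterion mentioned above — a mean free path much larger than the shock width — can be illustrated with a back-of-the-envelope estimate. The sketch below uses the standard order-of-magnitude Coulomb-collision formula; the plasma parameters are illustrative assumptions, not values from the experiment.

```python
import math

def coulomb_mean_free_path(n_m3, T_eV, ln_lambda=10.0):
    """Order-of-magnitude Coulomb mean free path in meters.

    Uses the classical distance of closest approach b0 = e^2/(4*pi*eps0*kT)
    and an effective cross-section sigma ~ pi * b0^2 * ln(Lambda).
    """
    e = 1.602e-19      # elementary charge, C
    eps0 = 8.854e-12   # vacuum permittivity, F/m
    kT = T_eV * e      # thermal energy, J
    b0 = e**2 / (4 * math.pi * eps0 * kT)  # closest-approach distance, m
    sigma = math.pi * b0**2 * ln_lambda    # effective Coulomb cross-section, m^2
    return 1.0 / (n_m3 * sigma)            # mean free path, m

# Illustrative low-density plasma: n = 1e18 m^-3, T = 100 eV (assumed values)
mfp = coulomb_mean_free_path(1e18, 100.0)
shock_width = 1e-3  # assume a millimeter-scale shock front
print(mfp > shock_width)  # True: mean free path far exceeds the shock width
```

With these assumed numbers the mean free path comes out at hundreds of meters, vastly larger than any laboratory shock front, which is the sense in which such a shock is "collisionless."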

At the boundary of the lab-generated collisionless shock, the density of the plasma spiked dramatically. The team was able to measure the detailed effects on both the upstream and downstream sides of the shock front, allowing them to begin to differentiate the mechanisms involved in the transfer of energy between the two clouds, something that physicists have spent years trying to figure out. The results are consistent with one set of predictions based on something called the Fermi mechanism, Li says, but further experiments will be needed to definitively rule out some other mechanisms that have been proposed.

“For the first time we were able to directly measure the structure” of important parts of the collisionless shock, Li says. “People have been pursuing this for several decades.”

The research also showed exactly how much energy is transferred to particles that pass through the shock boundary, which accelerates them to speeds that are a significant fraction of the speed of light, producing what are known as cosmic rays. A better understanding of this mechanism “was the goal of this experiment, and that’s what we measured,” Li says, noting that they captured a full spectrum of the energies of the electrons accelerated by the shock.

"This report is the latest installment in a transformative series of experiments, annually reported since 2015, to emulate an actual astrophysical shock wave for comparison with space observations," says Mark Koepke, a professor of physics at West Virginia University and chair of the Omega Laser Facility User Group, who was not involved in the study. "Computer simulations, space observations, and these experiments reinforce the physics interpretations that are advancing our understanding of the particle acceleration mechanisms in play in high-energy-density cosmic events such as gamma-ray-burst-induced outflows of relativistic plasma."

The international team included researchers at the University of Bordeaux in France, the Czech Academy of Sciences, the National Research Nuclear University in Russia, the Russian Academy of Sciences, the University of Rome, the University of Rochester, the University of Paris, Osaka University in Japan, and the University of California at San Diego. It was supported by the U.S. Department of Energy and the French National Research Agency.

New insights into bismuth’s character

MIT Latest News - Tue, 08/06/2019 - 3:40pm

The search for better materials for computers and other electronic devices has focused on a group of materials known as “topological insulators” that have a special property of conducting electricity on the edge of their surfaces like traffic lanes on a highway. This can increase energy efficiency and reduce heat output.

The first experimentally demonstrated topological insulator in 2009 was bismuth-antimony, but only recently did researchers identify pure bismuth as a new type of topological insulator. A group of researchers in Europe and the U.S. provided both experimental evidence and theoretical analysis in a 2018 Nature Physics report.

Now, researchers at MIT along with colleagues in Boston, Singapore, and Taiwan have conducted a theoretical analysis to reveal several more previously unidentified topological properties of bismuth. The team was led by senior authors MIT Associate Professor Liang Fu, MIT Professor Nuh Gedik, Northeastern University Distinguished Professor Arun Bansil, and Research Fellow Hsin Lin at Academia Sinica in Taiwan.

“It’s kind of a hidden topology where people did not know that it can be that way,” says MIT postdoc Su-Yang Xu, a coauthor of the paper published recently in PNAS.

Topology is a mathematical tool that physicists use to study electronic properties by analyzing electrons’ quantum wave functions. The “topological” properties give rise to a high degree of stability in the material and make its electronic structure very robust against minor imperfections in the crystal, such as impurities, or minor distortions of its shape, such as stretching or squeezing.

“Let’s say I have a crystal that has imperfections. Those imperfections, as long as they are not so dramatic, then my electrical property will not change,” Xu explains. “If there is such topology and if the electronic properties are uniquely tied to the topology rather than the shape, then it will be very robust.”

“In this particular compound, unless you somehow apply pressure or something to distort the crystal structure, otherwise this conduction will always be protected,” Xu says.

Since the electrons carrying a certain spin can only move in one direction in these topological materials, they cannot bounce backwards or scatter, which is the behavior that makes silicon- and copper-based electronic devices heat up.

While materials scientists seek to identify materials with fast electrical conduction and low heat output for advanced computers, physicists want to classify the types of topological and other properties that underlie these better-performing materials.

In the new paper, “Topology on a new facet of bismuth,” the authors calculated that bismuth should show a state known as a “Dirac surface state,” which is considered a hallmark of these topological insulators. They found that the crystal is unchanged by a half-circle rotation (180 degrees). This is called a twofold rotational symmetry. Such a twofold rotational symmetry protects the Dirac surface states. If this twofold rotation symmetry of the crystal is disrupted, these surface states lose their topological protection.

Bismuth also features a topological state along certain edges of the crystal where two vertical and horizontal faces meet, called a “hinge” state. To fully realize the desired topological effects in this material, the hinge state and other surface states must be coupled to another electronic phenomenon known as “band inversion” that the theorists’ calculations show also is present in bismuth. They predict that these topological surface states could be confirmed by using an experimental technique known as photoemission spectroscopy.

If electrons flowing through copper are like a school of fish swimming through a lake in summer, electrons flowing across a topological surface are more like ice skaters crossing the lake’s frozen surface in winter. For bismuth, however, in the hinge state, their motion would be more akin to skating on the corner edge of an ice cube.

The researchers also found that in the hinge state, as the electrons move forward, their momentum and another property, called spin — which defines a clockwise or counterclockwise rotation of the electrons — are “locked.” “Their direction of spinning is locked with respect to their direction of motion,” Xu explains.
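
Spin-momentum locking of this kind is often modeled with a standard textbook surface Hamiltonian; the form below is the conventional one used for topological surface states, not an expression taken from the bismuth paper itself.

```latex
% Standard surface Hamiltonian commonly used to model spin-momentum locking
% on topological surfaces (a textbook form, given here for illustration):
H(\mathbf{k}) = \hbar v_F \,(\sigma_x k_y - \sigma_y k_x)
% Its eigenstates have energies E_{\pm} = \pm \hbar v_F |\mathbf{k}|,
% with the spin expectation value lying in-plane and perpendicular to the
% momentum \mathbf{k}. Reversing the direction of motion would force the
% spin to flip, which is why backscattering is suppressed.
```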

These additional topological states might help explain why bismuth lets electrons travel through it much farther than most other materials, and why it conducts electricity efficiently with many fewer electrons than materials such as copper.

“If we really want to make these things useful and significantly improve the performance of our transistors, we need to find good topological materials — good in terms of they are easy to make, they are not toxic, and also they are relatively abundant on earth,” Xu suggests. Bismuth, which is an element that is safe for human consumption in the form of remedies to treat heartburn, for example, meets all these requirements.

“This work is a culmination of a decade and a half’s worth of advancement in our understanding of symmetry-protected topological materials,” says David Hsieh, professor of physics at Caltech, who was not involved in this research.

“I think that these theoretical results are robust, and it is simply a matter of experimentally imaging them using techniques like angle-resolved photoemission spectroscopy, which Professor Gedik is an expert in,” Hsieh adds.

Northeastern University Professor Gregory Fiete notes that “Bismuth-based compounds have long played a starring role in topological materials, though bismuth itself was originally believed to be topologically trivial.”

“Now, this team has discovered that pure bismuth is multiply topological, with a pair of surface Dirac cones untethered to any particular momentum value,” says Fiete, who also was not involved in this research. “The possibility to move the Dirac cones through external parameter control may open the way to applications that exploit this feature."

Caltech's Hsieh notes that the new findings add to the number of ways that topologically protected metallic states can be stabilized in materials. “If bismuth can be turned from semimetal into insulator, then isolation of these surface states in electrical transport can be realized, which may be useful for low-power electronics applications,” Hsieh explains.

Also contributing to the bismuth topology paper were MIT postdoc Qiong Ma; Tay-Rong Chang of the Department of Physics, National Cheng Kung University, Taiwan, and the Center for Quantum Frontiers of Research and Technology, Taiwan; Xiaoting Zhou, Department of Physics, National Cheng Kung University, Taiwan; and Chuang-Han Hsu, Centre for Advanced 2D Materials and Graphene Research Centre, National University of Singapore.

This work was partly supported by the Center for Integrated Quantum Materials and the U.S. Department of Energy, Materials Sciences and Engineering division.

Computer-aided knitting

MIT Latest News - Tue, 08/06/2019 - 11:35am

The oldest known knitted items date back to medieval Egypt, by way of a pair of carefully handcrafted socks. Although handmade clothes have occupied our closets for centuries, a recent influx of high-tech knitting machines has changed how we now create our favorite pieces. 

These systems, which have made anything from Prada sweaters to Nike shirts, are still far from seamless. Programming machines for designs can be a tedious and complicated ordeal: When you have to specify every single stitch, one mistake can throw off the entire garment. 

In a new pair of papers, researchers from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) have come up with an approach to streamline the process: a system and design tool for automating knitted garments. 

In one paper, a team created a system called “InverseKnit” that translates photos of knitted patterns into instructions that are then used with machines to make clothing. An approach like this could let casual users create designs without a memory bank of coding knowledge, and even reconcile issues of efficiency and waste in manufacturing. 

“As far as machines and knitting go, this type of system could change accessibility for people looking to be the designers of their own items,'' says Alexandre Kaspar, CSAIL PhD student and lead author on a new paper about the system. “We want to let casual users get access to machines without needing programming expertise, so they can reap the benefits of customization by making use of machine learning for design and manufacturing.” 

In another paper, researchers came up with a computer-aided design tool for customizing knitted items. The tool lets non-experts use templates for adjusting patterns and shapes, like adding a triangular pattern to a beanie, or vertical stripes to a sock. You can imagine users making items customized to their own bodies, while also personalizing for preferred aesthetics.

Automation has already reshaped the fashion industry as we know it, with the added potential of shrinking our manufacturing footprint as well. 

To get InverseKnit up and running, the team first created a dataset of knitting instructions, and the matching images of those patterns. They then trained their deep neural network on that data to interpret the 2-D knitting instructions from images. 

This might look something like giving the system a photo of a glove, and then letting the model produce a set of instructions, where the machine then follows those commands to output the design. 

When testing InverseKnit, the team found that it produced accurate instructions 94% of the time. 
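
A figure like that 94% is typically a per-stitch accuracy computed over held-out test patterns. A minimal sketch of how such a metric could be computed over predicted instruction grids (the function name and the two-letter stitch codes are hypothetical simplifications, not InverseKnit's actual instruction set):

```python
def instruction_accuracy(predicted, ground_truth):
    """Fraction of stitch instructions that match, cell by cell.

    Both arguments are 2-D grids (lists of rows) of instruction codes,
    e.g. "K" for knit and "P" for purl -- a simplified stand-in for a
    real machine-knitting instruction alphabet.
    """
    total = 0
    correct = 0
    for pred_row, true_row in zip(predicted, ground_truth):
        for p, t in zip(pred_row, true_row):
            total += 1
            correct += (p == t)
    return correct / total

# Toy example: one wrong stitch out of six
pred = [["K", "K", "P"],
        ["K", "P", "P"]]
true = [["K", "K", "P"],
        ["K", "K", "P"]]
print(instruction_accuracy(pred, true))  # -> 0.8333...
```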

“Current state-of-the-art computer vision techniques are data-hungry, and they need many examples to model the world effectively,” says Jim McCann, assistant professor in the Carnegie Mellon Robotics Institute. “With InverseKnit, the team collected an immense dataset of knit samples that, for the first time, enables modern computer vision techniques to be used to recognize and parse knitting patterns.” 

While the system currently works with a small sample size, the team hopes to expand the sample pool to employ InverseKnit on a larger scale. So far, the team has only used a specific type of acrylic yarn, but they hope to test different materials to make the system more flexible. 

A tool for knitting

While there have been plenty of developments in the field — such as Carnegie Mellon’s automated knitting processes for 3-D meshes — these methods can often be complex and ambiguous. The distortions inherent in 3-D shapes hamper how we understand the positions of the items, and this can be a burden on the designers. 

To address this design issue, Kaspar and his colleagues developed a tool called “CADKnit”, which uses 2-D images, CAD software, and photo editing techniques to let casual users customize templates for knitted designs.

The tool lets users design both patterns and shapes in the same interface. With other software systems, you’d likely lose some work on either end when customizing both. 
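
As a sketch of the kind of template customization described above — adjusting a shape's size, then overlaying a surface pattern on its stitch grid. All names and the two-code stitch alphabet are hypothetical simplifications, not CADKnit's actual representation.

```python
def make_tube(rows, cols, fill="K"):
    """A plain stitch grid standing in for a tubular template (e.g. a sock leg)."""
    return [[fill] * cols for _ in range(rows)]

def add_vertical_stripes(grid, stripe_width=2, stitch="P"):
    """Overlay vertical stripes by swapping the stitch in alternating column bands."""
    for row in grid:
        for c in range(len(row)):
            if (c // stripe_width) % 2 == 1:
                row[c] = stitch
    return grid

# Customize the template's size, then its pattern, in one pipeline
sock = add_vertical_stripes(make_tube(rows=4, cols=8))
for row in sock:
    print("".join(row))  # prints "KKPPKKPP" on every row
```

Keeping shape (grid dimensions) and pattern (cell contents) as separate, composable steps mirrors the paper's goal of letting users edit both in the same interface without redoing work on either side.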

“Whether it’s for the everyday user who wants to mimic a friend’s beanie hat, or a subset of the public who might benefit from using this tool in a manufacturing setting, we’re aiming to make the process more accessible for personal customization,'' says Kaspar. 

The team tested the usability of CADKnit by having non-expert users create patterns for their garments and adjust the size and shape. In post-test surveys, the users said they found it easy to manipulate and customize their socks or beanies, successfully fabricating multiple knitted samples. They noted that lace patterns were tricky to design correctly and would benefit from fast realistic simulation.

However, the system is only a first step towards full garment customization. The authors found that garments with complicated interfaces between different parts — such as sweaters — didn’t work well with the design tool. The trunk of sweaters and sleeves can be connected in various ways, and the software didn’t yet have a way of describing the whole design space for that.

Furthermore, the current system can only use one yarn for a shape, but the team hopes to improve this by introducing a stack of yarn at each stitch. To enable work with more complex patterns and larger shapes, the researchers plan to use hierarchical data structures that don’t incorporate all stitches, just the necessary ones.

“The impact of 3-D knitting has the potential to be even bigger than that of 3-D printing. Right now, design tools are holding the technology back, which is why this research is so important to the future,” says McCann. 

A paper on InverseKnit was written by Kaspar alongside MIT postdocs Tae-Hyun Oh and Petr Kellnhofer, PhD student Liane Makatura, MIT undergraduate Jacqueline Aslarus, and MIT Professor Wojciech Matusik. It was presented at the International Conference on Machine Learning this past June in Long Beach, California. 

A paper on the design tool was led by Kaspar alongside Makatura and Matusik.