MIT Latest News

MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.

A novel data-compression technique for faster computer programs

Tue, 04/16/2019 - 12:00am

A novel technique developed by MIT researchers rethinks hardware data compression to free up more memory used by computers and mobile devices, allowing them to run faster and perform more tasks simultaneously.

Data compression leverages redundant data to free up storage capacity, boost computing speeds, and provide other perks. In current computer systems, accessing main memory is very expensive compared to actual computation. Because of this, using data compression in the memory helps improve performance, as it reduces the frequency and amount of data programs need to fetch from main memory.

Memory in modern computers manages and transfers data in fixed-size chunks, on which traditional compression techniques must operate. Software, however, doesn’t naturally store its data in fixed-size chunks. Instead, it uses “objects,” data structures that contain various types of data and have variable sizes. Therefore, traditional hardware compression techniques handle objects poorly.

In a paper being presented at the ACM International Conference on Architectural Support for Programming Languages and Operating Systems this week, the MIT researchers describe the first approach to compress objects across the memory hierarchy. This reduces memory usage while improving performance and efficiency.

Programmers could benefit from this technique when programming in any modern programming language — such as Java, Python, and Go — that stores and manages data in objects, without changing their code. On their end, consumers would see computers that can run much faster or can run many more apps at the same speeds. Because each application consumes less memory, it runs faster, so a device can support more applications within its allotted memory.

In experiments using a modified Java virtual machine, the technique compressed twice as much data and reduced memory usage by half over traditional cache-based methods.

“The motivation was trying to come up with a new memory hierarchy that could do object-based compression, instead of cache-line compression, because that’s how most modern programming languages manage data,” says first author Po-An Tsai, a graduate student in the Computer Science and Artificial Intelligence Laboratory (CSAIL).

“All computer systems would benefit from this,” adds co-author Daniel Sanchez, a professor of computer science and electrical engineering, and a researcher at CSAIL. “Programs become faster because they stop being bottlenecked by memory bandwidth.”

The researchers built on their prior work that restructures the memory architecture to directly manipulate objects. Traditional architectures store data in blocks in a hierarchy of progressively larger and slower memories, called “caches.” Recently accessed blocks rise to the smaller, faster caches, while older blocks are moved to slower and larger caches, eventually ending back in main memory. While this organization is flexible, it is costly: To access memory, each cache needs to search for the address among its contents.

“Because the natural unit of data management in modern programming languages is objects, why not just make a memory hierarchy that deals with objects?” Sanchez says.

In a paper published last October, the researchers detailed a system called Hotpads that stores entire objects, tightly packed into hierarchical levels, or “pads.” These levels reside entirely in efficient, on-chip, directly addressed memories — with no sophisticated searches required.

Programs then directly reference the location of all objects across the hierarchy of pads. Newly allocated and recently referenced objects, and the objects they point to, stay in the faster level. When the faster level fills, it runs an “eviction” process that keeps recently referenced objects but kicks down older objects to slower levels and recycles objects that are no longer useful, to free up space. Pointers are then updated in each object to point to the new locations of all moved objects. In this way, programs can access objects much more cheaply than searching through cache levels.
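The flow described above can be pictured with a small software model. The sketch below is an illustration only, written in Python rather than hardware: the two-level structure, the fast-pad capacity, the recency policy, and names such as PadHierarchy and Obj are assumptions made for exposition, not the actual Hotpads design.

```python
# Toy software model of a two-level "pad" hierarchy, loosely following the
# eviction flow described in the article. Illustrative only -- not Hotpads.

class Obj:
    def __init__(self, oid, payload, refs=()):
        self.oid = oid            # object identity
        self.payload = payload    # object data
        self.refs = list(refs)    # oids of objects this one points to

class PadHierarchy:
    def __init__(self, fast_capacity=4):
        self.fast_capacity = fast_capacity
        self.fast = {}        # small, fast pad: oid -> Obj
        self.slow = {}        # larger, slower pad: oid -> Obj
        self.location = {}    # oid -> "fast"/"slow", stand-in for pointer updates
        self.roots = set()    # oids the program still references directly
        self.clock = 0
        self.last_use = {}    # oid -> time of last reference

    def allocate(self, obj):
        """New objects always start in the fast pad."""
        self.fast[obj.oid] = obj
        self.roots.add(obj.oid)
        self._touch(obj.oid)
        if len(self.fast) > self.fast_capacity:
            self._evict()

    def reference(self, oid):
        """Accessing an object pulls it (back) into the fast pad."""
        obj = self.fast.get(oid) or self.slow.pop(oid, None)
        if obj is None:
            raise KeyError(oid)
        self.fast[oid] = obj
        self._touch(oid)
        if len(self.fast) > self.fast_capacity:
            self._evict()
        return obj

    def release(self, oid):
        """The program drops its last direct reference to an object."""
        self.roots.discard(oid)

    def _touch(self, oid):
        self.clock += 1
        self.last_use[oid] = self.clock
        self.location[oid] = "fast"

    def _live(self):
        """Objects still reachable from the roots, via their refs."""
        live, stack = set(), list(self.roots)
        while stack:
            oid = stack.pop()
            if oid in live:
                continue
            live.add(oid)
            obj = self.fast.get(oid) or self.slow.get(oid)
            if obj:
                stack.extend(obj.refs)
        return live

    def _evict(self):
        """Demote the oldest live objects; recycle objects no longer useful."""
        live = self._live()
        for oid in sorted(self.fast, key=self.last_use.get):
            if len(self.fast) <= self.fast_capacity:
                break
            obj = self.fast.pop(oid)
            if oid in live:
                self.slow[oid] = obj           # kick down to the slower level
                self.location[oid] = "slow"    # "pointer update"
            else:
                self.location.pop(oid, None)   # free the space entirely
```

In the real design, updating the location table would correspond to rewriting the pointers stored inside the objects themselves, as described above.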

For their new work, the researchers designed a technique, called “Zippads,” that leverages the Hotpads architecture to compress objects. When objects first start at the faster level, they’re uncompressed. But when they’re evicted to slower levels, they’re all compressed. Pointers in all objects across levels then point to those compressed objects, which makes them easy to bring back to the faster levels and allows them to be stored more compactly than with prior techniques.

A compression algorithm then exploits redundancy across objects. This uncovers more compression opportunities than previous techniques, which were limited to finding redundancy within each fixed-size block. The algorithm first picks a few representative objects as “base” objects. For each new object, it then stores only the data that differs from the corresponding base object.
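As a concrete, heavily simplified illustration of that base-plus-difference idea, the sketch below compresses a list of similar objects (represented here as Python dictionaries) against a single base object, storing only the fields that differ. The field-by-field diff and the choice of the first object as the base are assumptions made for clarity, not the researchers' hardware algorithm.

```python
# Toy base-object compression: keep one representative object in full and
# encode the others as the set of fields that differ from it.

def compress(objects):
    """objects: list of dicts with the same keys. Returns (base, deltas)."""
    base = objects[0]                      # simplistic choice of base object
    deltas = []
    for obj in objects[1:]:
        deltas.append({k: v for k, v in obj.items() if base.get(k) != v})
    return base, deltas

def decompress(base, deltas):
    out = [dict(base)]
    for delta in deltas:
        obj = dict(base)
        obj.update(delta)                  # re-apply only the differing fields
        out.append(obj)
    return out

points = [
    {"x": 10, "y": 20, "color": "red"},
    {"x": 11, "y": 20, "color": "red"},    # differs from the base only in x
    {"x": 10, "y": 25, "color": "red"},    # differs from the base only in y
]
base, deltas = compress(points)
assert decompress(base, deltas) == points
print(deltas)   # [{'x': 11}, {'y': 25}] -- only the differing fields are stored
```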

Brandon Lucia, an assistant professor of electrical and computer engineering at Carnegie Mellon University, praises the work for leveraging features of object-oriented programming languages to better compress memory. “Abstractions like object-oriented programming are added to a system to make programming simpler, but often introduce a cost in the performance or efficiency of the system,” he says. “The interesting thing about this work is that it uses the existing object abstraction as a way of making memory compression more effective, in turn making the system faster and more efficient with novel computer architecture features.”

Smaller, faster, better: Nanoscale batteries may power future technology

Fri, 04/12/2019 - 10:25am

Inside modern cell phones are billions of nanoscale switches that flip on and off, allowing the phone to function. These switches, called transistors, are controlled by an electrical signal that is delivered via a single battery. This configuration of one battery to power multiple components works well for today's technologies, but there is room for improvement. Each time a signal is piped from the battery to a component, some power is lost on the journey. Coupling each component with its own battery would be a much better setup, minimizing energy loss and maximizing battery life. However, in the current tech world, batteries are not small enough to permit this arrangement — at least not yet.

Now, MIT Lincoln Laboratory and the MIT Department of Materials Science and Engineering have made headway in developing nanoscale hydrogen batteries that use water-splitting technology. With these batteries, the researchers aim to deliver a faster charge, longer life, and less wasted energy. In addition, the batteries are relatively easy to fabricate at room temperature and adapt physically to unique structural needs.

"Batteries are one of the biggest problems we’re running into at the Laboratory," says Raoul Ouedraogo, who is from Lincoln Laboratory’s Advanced Sensors and Techniques Group and is the project's principal investigator. "There is significant interest in highly miniaturized sensors going all the way down to the size of a human hair. We could make those types of sensors, but good luck finding a battery that small. Current batteries can be round like coin cells, shaped like a tube, or thin but on a centimeter scale. If we have the capability to lay our own batteries to any shape or geometry and in a cheap way, it opens doors to a whole lot of applications."

The battery gains its charge by interacting with water molecules present in the surrounding air. When a water molecule comes in contact with the reactive, outer metal section of the battery, it is split into its constituent parts — hydrogen and oxygen. The hydrogen becomes trapped inside the battery and can be stored until it is ready to be used. In this state, the battery is "charged." To release the charge, the reaction reverses: the hydrogen moves back through the reactive metal section of the battery and combines with oxygen in the surrounding air.
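In textbook terms, the net chemistry behind that charge-and-release cycle is ordinary water splitting and recombination, shown below. This is only the overall reaction; the article does not describe the specific electrode materials or intermediate steps in the Lincoln Laboratory devices.

```latex
% Net water-splitting chemistry (overall reaction only; device details not given in the article)
\underbrace{2\,\mathrm{H_2O} \;\longrightarrow\; 2\,\mathrm{H_2} + \mathrm{O_2}}_{\text{charging: hydrogen is stored}}
\qquad
\underbrace{2\,\mathrm{H_2} + \mathrm{O_2} \;\longrightarrow\; 2\,\mathrm{H_2O}}_{\text{discharging: hydrogen recombines, releasing energy}}
```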

So far, the researchers have built batteries that are 50 nanometers thick — roughly a thousand times thinner than a strand of human hair. They have also demonstrated that the area of the batteries can be scaled from as large as centimeters to as small as nanometers. This scaling ability allows the batteries to be easily integrated near transistors at the nanometer and micrometer scale, or near components and sensors at the millimeter and centimeter scale.

"A useful feature of this technology is that the oxide and metal layers can be patterned very easily into nanometer-scale custom geometries, making it straightforward to build intricate battery patterns for a particular application or to deposit them on flexible substrates," says Annie Weathers, a staff member of the laboratory’s Chemical, Microsystem, and Nanoscale Technologies Group, who is also involved in the project.

The batteries have also demonstrated a power density two orders of magnitude greater than that of most currently used batteries. A higher power density means more power output per unit volume of battery.

"What I think made this project work is the fact that none of us are battery people," says Ouedraogo. "Sometimes it takes somebody from the outside to see new things."

Currently, water-splitting techniques are used to generate hydrogen for large-scale industrial needs. This project will be the first to apply the technique for creating batteries, and at much smaller scales.

The project was funded via Lincoln Laboratory's Technology Office Energy Initiative and has entered into phase two of development, which includes optimizing the batteries further and integrating them with sensors.

Earliest life may have arisen in ponds, not oceans

Fri, 04/12/2019 - 9:59am

Primitive ponds may have provided a suitable environment for brewing up Earth’s first life forms, more so than oceans, a new MIT study finds.

Researchers report that shallow bodies of water, on the order of 10 centimeters deep, could have held high concentrations of what many scientists believe to be a key ingredient for jump-starting life on Earth: nitrogen.

In shallow ponds, nitrogen, in the form of nitrogenous oxides, would have had a good chance of accumulating enough to react with other compounds and give rise to the first living organisms. In much deeper oceans, nitrogen would have had a harder time establishing a significant, life-catalyzing presence, the researchers say.

“Our overall message is, if you think the origin of life required fixed nitrogen, as many people do, then it’s tough to have the origin of life happen in the ocean,” says lead author Sukrit Ranjan, a postdoc in MIT’s Department of Earth, Atmospheric and Planetary Sciences (EAPS). “It’s much easier to have that happen in a pond.”

Ranjan and his colleagues have published their results today in the journal Geochemistry, Geophysics, Geosystems. The paper’s co-authors are Andrew Babbin, the Doherty Assistant Professor in Ocean Utilization in EAPS, along with Zoe Todd and Dimitar Sasselov of Harvard University, and Paul Rimmer at Cambridge University.

Breaking a bond

If primitive life indeed sprang from a key reaction involving nitrogen, there are two ways in which scientists believe this could have happened. The first hypothesis involves the deep ocean, where nitrogen, in the form of nitrogenous oxides, could have reacted with carbon dioxide bubbling forth from hydrothermal vents, to form life’s first molecular building blocks.

The second nitrogen-based hypothesis for the origin of life involves RNA — ribonucleic acid, a molecule that today helps encode our genetic information. In its primitive form, RNA was likely a free-floating molecule. When in contact with nitrogenous oxides, some scientists believe, RNA could have been chemically induced to form the first molecular chains of life. This process of RNA formation could have occurred in either the oceans or in shallow lakes and ponds.

Nitrogenous oxides were likely deposited in bodies of water, including oceans and ponds, as remnants of the breakdown of nitrogen in Earth’s atmosphere. Atmospheric nitrogen gas consists of two nitrogen atoms linked by a strong triple bond, which can only be broken by an extremely energetic event — namely, lightning.

“Lightning is like a really intense bomb going off,” Ranjan says. “It produces enough energy that it breaks that triple bond in our atmospheric nitrogen gas, to produce nitrogenous oxides that can then rain down into water bodies.”

Scientists believe that there could have been enough lightning crackling through the early atmosphere to produce an abundance of nitrogenous oxides to fuel the origin of life in the ocean. Ranjan says scientists have assumed that this supply of lightning-generated nitrogenous oxides was relatively stable once the compounds entered the oceans.

However, in this new study, he identifies two significant “sinks,” or effects that could have destroyed a significant portion of nitrogenous oxides, particularly in the oceans. He and his colleagues looked through the scientific literature and found that nitrogenous oxides in water can be broken down via interactions with the sun’s ultraviolet light, and also with dissolved iron sloughed off from primitive oceanic rocks.

Ranjan says both ultraviolet light and dissolved iron could have destroyed a significant portion of nitrogenous oxides in the ocean, sending the compounds back into the atmosphere as gaseous nitrogen.

“We showed that if you include these two new sinks that people hadn’t thought about before, that suppresses the concentrations of nitrogenous oxides in the ocean by a factor of 1,000, relative to what people calculated before,” Ranjan says.

“Building a cathedral”

In the ocean, ultraviolet light and dissolved iron would have made nitrogenous oxides far less available for synthesizing living organisms. In shallow ponds, however, life would have had a better chance to take hold. That’s mainly because ponds have much less volume over which compounds can be diluted. As a result, nitrogenous oxides would have built up to much higher concentrations in ponds. Any “sinks,” such as UV light and dissolved iron, would have had less of an effect on the compound’s overall concentrations. 

Ranjan says the more shallow the pond, the greater the chance nitrogenous oxides would have had to interact with other molecules, and particularly RNA, to catalyze the first living organisms.

“These ponds could have been from 10 to 100 centimeters deep, with a surface area of tens of square meters or larger,” Ranjan says. “They would have been similar to Don Juan Pond in Antarctica today, which has a summer seasonal depth of about 10 centimeters.”
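A back-of-the-envelope model makes the depth argument concrete. The sketch below assumes a well-mixed water column that receives nitrogenous oxides at a fixed rate per unit of surface area and loses them at a fixed fractional rate; the flux and loss-rate numbers are placeholders, not values from the study, and the factor-of-1,000 suppression quoted earlier comes from the study's treatment of the UV and iron sinks, not from this toy calculation.

```python
# Steady-state concentration in a well-mixed water column of depth d:
# an areal supply flux F balanced against first-order loss at rate k,
# giving c_ss = F / (k * d). Placeholder numbers, for illustration only.

def steady_state_concentration(flux, loss_rate, depth_m):
    """Concentration once areal supply and first-order loss balance out."""
    return flux / (loss_rate * depth_m)

F = 1e-6   # hypothetical deposition flux, mol per m^2 per year
k = 0.1    # hypothetical first-order loss rate, per year

pond  = steady_state_concentration(F, k, depth_m=0.1)    # 10-centimeter pond
ocean = steady_state_concentration(F, k, depth_m=100.0)  # 100-meter mixed layer

print(f"pond / ocean concentration ratio: {pond / ocean:.0f}x")  # prints 1000x
```

The same areal supply concentrated into a thousandth of the depth yields a thousandfold higher steady-state concentration, which is the intuition behind favoring shallow ponds.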

That may not seem like a significant body of water, but he says that’s precisely the point: In environments any deeper or larger, nitrogenous oxides would simply have been too diluted, precluding any participation in origin-of-life chemistry. Other groups have estimated that, around 3.9 billion years ago, just before the first signs of life appeared on Earth, there may have been about 500 square kilometers of shallow ponds and lakes worldwide.

“That’s utterly tiny, compared to the amount of lake area we have today,” Ranjan says. “However, relative to the amount of surface area prebiotic chemists postulate is required to get life started, it’s quite adequate.”

The debate over whether life originated in ponds versus oceans is not quite resolved, but Ranjan says the new study provides one convincing piece of evidence for the former.

“This discipline is less like knocking over a row of dominos, and more like building a cathedral,” Ranjan says. “There’s no real ‘aha’ moment. It’s more like building up patiently one observation after another, and the picture that’s emerging is that overall, many prebiotic synthesis pathways seem to be chemically easier in ponds than oceans.”

This research was supported, in part, by the Simons Foundation and MIT.

Five from MIT win 2019 Paul and Daisy Soros Fellowships for New Americans

Thu, 04/11/2019 - 3:10pm

Two MIT alumnae and three current MIT doctoral students are among this year’s 30 recipients of The Paul and Daisy Soros Fellowships for New Americans. The five students — Joseph Maalouf, Indira Puri, Grace Zhang, Helen Zhou, and Jonathan Zong — will each receive up to $90,000 to fund their doctoral educations.

The newest fellows were selected from a pool of 1,767 applications based on their potential to make significant contributions to U.S. society, culture, or their academic fields. The P.D. Soros Fellowships are open to all American immigrants and children of immigrants, including DACA recipients, refugees, and asylum seekers. In the past nine years, 34 MIT students and alumni have been awarded this fellowship.

Founded by Hungarian immigrants Daisy M. Soros and her late husband Paul Soros (1926-2013), the program honors continuing generations of immigrant contributions to the United States. “Paul and Daisy Soros Fellows are all passionate about giving back to the country and remind us of the very best version of America,” says Craig Harwood, director of the fellowship program.

MIT students interested in applying to the P.D. Soros Fellowship should contact Kim Benard, assistant dean of distinguished fellowships. The application for the 2020 fellowship is now open and the national deadline is Nov. 1. 

Joseph Maalouf

Joseph Maalouf is a PhD student in chemical engineering at MIT. He received his bachelor’s degree with honors and distinction from Stanford University. Maalouf’s doctoral research focuses on developing novel electrocatalysts that will be able to take advantage of renewable electricity to directly synthesize both commodity and fine chemicals.

Maalouf was born and raised in Las Vegas, Nevada. His father emigrated from Lebanon as a teenager to escape the civil war occurring at the time and his mother emigrated from a small town in Mexico as a young adult.

After completing his studies at MIT, Maalouf hopes to develop his research into an electro-organic synthesis company that will transform the environmentally unfriendly way that chemicals are currently produced.

Indira Puri

Indira Puri is a PhD candidate in economics at MIT. She was born in New York to Indian immigrants. With bachelor's and master's degrees in mathematics, computer science, and economics from Stanford University, Puri draws on her multifaceted experience in approaching research.

Puri’s awards include Stanford’s Firestone Medal, a best thesis award; the J.E. Wallace Sterling Scholarship, for being one of the top 25 graduating students across Stanford’s School of Humanities and Sciences; being inducted into Phi Beta Kappa her junior year; chess and debate recognition at the national level; and being named a U.S. Presidential Scholar.

Puri served as president of Stanford’s chess organization, and graduate co-chair of Stanford Women in Computer Science.

Grace H. Zhang ’17

Grace Zhang graduated from MIT in 2017 with a BS in physics and received honors for both her research and community service through the Malcolm Cotton Brown Award, the Order of the Lepton Award, and the Joel Matthew Orloff Award. She was born in Tucson, Arizona, where her parents immigrated to pursue their graduate studies. At the age of 5, she moved to Shanghai to live with her grandparents before returning to the United States at age 10 to settle in East Brunswick, New Jersey, with her mother.

Zhang is currently a doctoral student in physics at Harvard University, studying theoretical soft condensed matter physics with a focus on emergent phenomena in materials and biological networks. She aspires to be a professor and, inspired by her own mentors, she also strives to make a difference toward the growth of a diverse and inclusive scientific community.

At MIT, Zhang explored research in a variety of topics in experimental and theoretical physics, publishing five papers — three as lead author — in the Physical Review journals, Review of Scientific Instruments, Nature Physics, and Science.

Helen Zhou ’17, MEng ’18

Helen Zhou received a BS in computer science and electrical engineering from MIT in 2017 and an MEng degree in 2018. Zhou was born in Winnipeg, Manitoba, after her parents left China to pursue studies in Canada. When she was 6, her family moved to Canton, Michigan.

As an undergraduate, Zhou conducted machine learning research at the MIT Media Lab. She also completed internships at Google and Amazon Search (A9.com). As a master’s student, she joined the MIT Clinical Machine Learning group, where her thesis on predicting antibiotic resistance from electronic medical records informed her current research interests.

As a machine learning PhD student at Carnegie Mellon University, Zhou is exploring problems at the intersection of machine learning and health care, such as personalization, interpretability for human-in-the-loop learning, and synthesizing heterogeneous data from multiple modalities. Throughout her academic career, she hopes to develop methods that will allow scientists to continually shine new light on aspects of health care and medicine that are not well understood.

Jonathan Zong

Jonathan Zong is pursuing a PhD in human-computer interaction in the MIT Department of Electrical Engineering and Computer Science and is a graduate researcher at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL). He is interested in visual interfaces that help people comprehend how technology governs human behavior. His goal is to serve the public interest by producing research that critically examines technologically mediated social relations.

Zong was born in Houston, Texas, after his parents emigrated from China to pursue graduate school. He completed his undergraduate education at Princeton University in the computer science and visual arts departments. His computer science thesis investigated empirical methods for studying internet research ethics, while his visual arts thesis was an exhibition exploring how his discomfort with authority and power — especially his own — shapes his identity.

At Princeton, Zong created research-based visual art that influenced discussions about technology in The New York Times and was exhibited at the Centre National du Graphisme in Chaumont, France. He has interned as a software engineer and graphic designer at companies including Coursera, Square, Linked by Air, and Google.

Climate expert emphasizes the fierce urgency of now

Thu, 04/11/2019 - 9:27am

Prominent economist and policymaker Lord Nicholas Stern delivered a strong warning about the dangers of climate change in a talk at MIT on Tuesday, calling the near future “defining” and urging a rapid overhaul of the economy to reach net zero carbon emissions.

“The next 20 years will be absolutely defining,” Stern told the audience, saying they “will shape what kind of future people your age will have.”

“Don’t underestimate the size of the challenge,” Stern added, while giving the MIT Undergraduate Economics Association’s annual public lecture.

To grasp the climate trouble we are already in, Stern noted, consider that the concentration of carbon dioxide in the atmosphere is now over 400 parts per million, a level the Earth has not experienced for about 3 million years, long before people were around. (The modern human lineage is estimated to be about 200,000 years old.)

Back then, sea levels were about 30 to 60 feet higher than they are now, Stern said. The recent rise in carbon dioxide concentrations has created rapidly increasing temperatures that could raise the ocean back to those prehuman levels — which would profoundly alter our civilization’s geography.

“It would be Oxford-by-the-Sea,” Stern said, referring to the English university seat that lies about 50 miles inland at present. “Bangladesh would be completely underwater.” Moreover, Stern noted, “Southern Europe would probably look like the Sahara Desert.”  

And with a 2 degree Celsius rise in average temperatures, Stern pointed out, the proportion of people on Earth exposed to extreme heat would jump from 14 percent to 37 percent.

“This is the kind of heat that can kill, in a big way,” Stern warned.

“Net zero is fundamental”

As dire as those scenarios seem, Stern also expressed some optimism, saying that policymakers are now much more likely to believe that we can combine continued economic growth with zero-emissions technology — a change from common views expressed at, say, the 2009 global climate summit in Copenhagen.

“What we’ve seen I think in the last five years or so is a change of understanding of the policy toward climate change,” Stern said, “from ‘How much growth do we have to give up to be more responsible and sustainable?’ to ‘How can we find a form of growth that’s different and sustainable?’”

However, he warned, a world with net-zero carbon dioxide emissions within a few decades will be absolutely necessary for society to maintain its current form.

“The net zero is fundamental,” Stern said. “That’s not some strange economist’s aspiration. The net zero is the science. If you want to stabilize temperatures, you’re going to have to stabilize concentrations. Stabilizing concentrations means net zero.”

Stern’s lecture, “Unlocking the Inclusive Growth Story of the 21st century: The Drive to the Zero-Carbon Economy,” was delivered to an audience of over 100 people in MIT’s Room 2-190, a lecture hall.

As part of his remarks, Stern contended that the overhaul of energy production and consumption could have leveling economic benefits globally. Indeed, a successful transformation of energy use would almost by definition have a broad impact, he said, since about 70 percent of energy involves infrastructure and 70 percent of growth in coming decades may be located in the developing world.

Multiplying those factors, Stern said, “Half of the story is infrastructure in developing countries and emerging markets.”

Among many specific urban climate measures, Stern suggested that, for instance, “if cities banned internal-combustion engine cars [from] coming into the [city] centers by some date, say, 2025, that would radically change the kinds of cars that come to market.” And he touted the ability of policymakers to effect change, citing the massive global switch to more efficient LED light bulbs as one case where lawmaking has driven major improvements in energy efficiency.

It’s not enough to talk

Stern is an accomplished economist who has studied development and growth extensively, and shifted his focus to include climate economics over the last two decades. He is professor of economics and government and chair of the Grantham Research Institute on Climate Change and the Environment at the London School of Economics. 

Stern may be best known to the public for his work as a minister in Britain’s Treasury, where he spearheaded a major report on climate and economics, released in 2006. He was made a life peer in Britain in 2007 and sits in the House of Lords — as a nonpartisan member, he reminded the audience on Tuesday. Stern was chief economist of the World Bank from 2000 to 2003, and president of the British Academy from 2013 to 2017.

As Stern remarked at the beginning of his talk, he also spent a year at MIT in the early 1970s, working with MIT economist Robert M. Solow. Stern said the Institute has “been my U.S. home” through the years.

At one point, Stern asked audience members to raise their hands if they were economists; a significant percentage of people in the room did so. 

“Those of you who are not economists,” Stern quipped, “it was your decision, and you have to live with it.”

Stern was introduced at the event by Paul Joskow, the Elizabeth and James Killian Professor of Economics, Emeritus, at MIT, and a faculty member at the Institute for over 45 years. Joskow also led off the question-and-answer session after Stern’s talk with a query about rural land use and its impact on climate. Stern responded that, although he had emphasized urban policy in his talk, rural policies such as reforestation should play a significant role in capturing excess carbon dioxide.

Stern fielded a wide variety of queries, including one about the economics profession from an audience member who asked: “As an economist working on an issue that affects the world in a relatively short time frame, is it enough, is it persuasive enough, to be doing research … and doing presentations like this?”

“No,” Stern responded instantly. “That’s why I spend a lot of time doing other things.” In recent years, Stern has worked with high-level government officials on climate policy matters in China, India, France, and for the U.N., among other projects.

As advice for economics students concerned about climate, Stern suggested: “Invest in your own skill.” And he left no doubt about his own view on the importance of the climate challenge.

“We have the biggest problem facing humankind,” Stern said.

MIT publishes draft self-study report for comment

Wed, 04/10/2019 - 11:59pm

Every 10 years, as required by the Institute’s accrediting agency, the New England Commission of Higher Education (NECHE), MIT undergoes a process of institutional review. The process begins with a comprehensive self-study, continues with an onsite visit by a team of peer evaluators, and ends with the commission’s assessment and decision about continuing the Institute’s accreditation.

Since December 2017, nine planning groups of MIT officers, faculty, staff, and students have considered MIT’s evolution since its last review in 2009 and its plans for the future in relation to NECHE’s standards for accreditation. The planning groups have prepared a draft report, which examines a wide range of Institute activities, including academic programming, student life, resources, and educational effectiveness. The MIT Steering Committee invites community feedback on the report by May 1.

The report surfaces three key themes. First, over the last decade, MIT’s educational model has become more interdisciplinary, more experiential, more digital, and more flexible. With a commitment to improve the first-year experience and increased attention on the General Institute Requirements, these qualities will likely become further ingrained in an MIT education.

Second, making a better world means making a better MIT. To achieve the impact the Institute envisions across the nation and around the world, MIT must take a critical look internally to strengthen its campus community and its place in Cambridge, according to the report. This work includes a revitalization of the Institute’s physical infrastructure, steps to make the community more inclusive and welcoming, and renewed attention on improving the student experience.

Finally, the report explores the forces that led to the creation of the MIT Stephen A. Schwarzman College of Computing. It also details the work underway to shape the college, and anticipates the college’s impact at MIT and beyond.

The steering committee will submit its final report to NECHE and a team of external evaluators this summer in preparation for the team’s onsite visit in September. During that visit, MIT will hold open forums for members of the community to share their experiences and perspectives with the evaluators as part of the reaccreditation process.

3 Questions: Provost Martin Schmidt on building a new college

Wed, 04/10/2019 - 11:59pm

In October 2018, MIT announced the MIT Stephen A. Schwarzman College of Computing, to address the rapid evolution of computing and artificial intelligence, and fuse computing with disciplines throughout the Institute. The timetable is an ambitious one; the college is set to open in September 2019.

Some important milestones have been reached: A new dean has been hired, a building site has been identified, and a wide-ranging celebration event sparked many conversations about the new college. Now, a task force of five working groups is meeting regularly to advance the planning of the college, with guidance from a steering committee that includes Dean of Engineering Anantha Chandrakasan and MIT Faculty Chair Susan Silbey and that is chaired by Provost Martin Schmidt. Centered around key themes, the working groups together comprise more than 100 professors, students, and staff. The themes and their co-chairs are:

  • Organizational Structure — Co-chairs: Asu Ozdaglar, Department of Electrical Engineering and Computer Science (EECS); Nelson Repenning, MIT Sloan School of Management
  • Faculty Appointments — Co-chairs: Eran Ben-Joseph, Department of Urban Studies and Planning; William Freeman, EECS
  • Curriculum and Degrees — Co-chairs: Srini Devadas, EECS; Troy Van Voorhis, Department of Chemistry
  • Social Implications and Responsibilities of Computing — Co-chairs: Melissa Nobles, School of Humanities, Arts, and Social Sciences; Julie Shah, Department of Aeronautics and Astronautics
  • College Infrastructure — Co-chairs: Benoit Forget, Department of Nuclear Science and Engineering; Nicholas Roy, Department of Aeronautics and Astronautics and the Computer Science and Artificial Intelligence Laboratory  

Schmidt has described the mission of the new college as having three parts: to advance research in computer science and computing; to bring advanced computational capabilities to disciplines across MIT and, conversely, to bring knowledge from those disciplines to bear on the development of next-generation algorithms by computer scientists; and to make an appreciation of the social implications of technology implicit throughout research and education at the Institute.

The task force is now considering how to erect a college that realizes this trifold mission. Members of the MIT community are encouraged to share input via a related idea bank, and at three community forums with the working group co-chairs on April 17 and 18. MIT News spoke with Schmidt about the path forward to the opening of the new college.

Q: What is the charge for each of the working groups you have convened?

A: The working groups are meeting throughout this semester, both separately and in discussion with each other, to think through the essential elements of the new college and develop ideas to help guide the administration.

One group is thinking about the college’s organizational structure. For example, is this a unit that should have departments, or do you want something that’s a little more agile as fields evolve?

We also have a working group on faculty, which is considering how to ensure that people can successfully work in one department and in the college. We have a lot of faculty who advance computing, but they don’t do it in computer science; how can they participate in and contribute to the college?

The education working group is thinking about what kinds of degrees we should have and how those degrees should be administered. If a department wants to offer a joint program with computer science, how do we make it so?

We have a group that’s thinking about the infrastructure elements; we have people clamoring for cloud computing, for example. So, how is that going to work? Is there a way to develop an infrastructure into a whole that’s greater than the sum of its parts?

The last working group is thinking about the social implications of technology, and how we make consideration of that implicit in everything we do. What does the curriculum look like for our students? What are the implications for our research agenda?

The most important thing I feel like we’re doing to advance the college right now is having more than 100 people vigorously debating these things. We’d love to have even more people in these groups, but we’ve worked hard to have a fairly broad representation.

Q: What is the intended outcome of these discussions? What will happen between now and the arrival of Dean Huttenlocher?

A: Each working group will produce a report that’s a summary of their deliberations, and those documents will be shared with the MIT community, which we hope will spark further input.

We’ve populated the working groups with people from across the Institute, and they are entering the process from very different perspectives. One of the challenges is that the college is built on three promises, and nearly everybody on the campus sees a promise that matters a lot to them. At the end of this process, at a minimum what we’ll have is 100 people who have congealed at some level, and we’ll be able to identify the key areas of divergence. For example, when we get somebody from EECS and somebody from political science and somebody from physics in the same room, and they all envision the college differently, if we can hash that out and understand those differences, we’ll have a strong foundation to build on.

I fully expect that as we go down this path we’re going to be making midcourse corrections. We’re going to realize that our path to achieving our end goal now looks a bit different. Having that flexibility to pivot is going to be really important.

On the hiring side, we’ve committed to 50 new faculty positions to be filled over the next five years, and I hope by September that we’ve identified the first 10. Half will be in computer science, so EECS has been asked to look for those first five. The other five will be bridge faculty, so I’ve been in conversation with each school dean to think about who they might bring forward as a bridge hire.

Q: Despite differing perspectives, are there also points of agreement? What are some of the common themes that have emerged from the working groups’ discussions so far?

A: There are two related things that people are really excited about and want to hold us accountable to. One is creating a culture and climate that make it the best college to realize the vision we’ve articulated. We need to be really intentional about this and start thinking about it now. That’s true for diversity and inclusion as well. What can we do that’s really deliberate, as we’re thinking about this from the beginning, to ensure that the college is as diverse and inclusive as we can possibly make it?

Cultivating collaboration and innovation between MIT and Denmark

Wed, 04/10/2019 - 1:30pm

“Denmark has some of the best working conditions in the world. I’m eager to learn more about the customs and culture, especially as it pertains to the workplace,” says sophomore Evie Mayner. Mayner will be one of the first students to travel to Denmark via the MIT International Science and Technology Initiatives (MISTI) program, thanks to the creation of a new program that connects the country with the MIT community.

The Confederation of Danish Industry (DI) has provided support to MIT to launch this new Denmark program, which sponsors students to work in Denmark on internship and research opportunities. This is part of an effort to attract international talent to Denmark in several key industries in which Denmark and Boston are mutually strong: life science, information technology (IT), and engineering.

The idea for a Denmark program first came about during a January 2018 visit from DI Chief Operating Officer Thomas Bustrup and his delegation. With support from Industriens Fond, DI was able to contribute three years of funding as a catalyst for the program. MIT-Denmark began in September 2018 and will officially launch over the summer when the first cohort of MIT students is sent overseas. “Ideally,” says Mayner, “I’ll be bringing back to the U.S. some of the energy, innovation, and healthy habits that characterize the Danish lifestyle.”

Due to overwhelming interest, the original objective of sending 10 students to Denmark this first year had to be abandoned. Over 150 students applied to the program, while approximately 40 Danish companies and universities wanted to host at least one student. To meet the demand from both directions, the program is now sending 20 students — still far fewer than what is possible, based on interest.

Denmark has gained attention in recent years as the idea of "hygge" — a Danish word that roughly means coziness — has become a trend. Similarly, Denmark has been named the second-happiest country by the World Happiness Report and has placed in the top three for seven years in a row. Another selling point is the brand of sustainability that Denmark has embraced. In 2018, Denmark ranked third out of 180 countries on the Environmental Performance Index, having ranked fourth the previous time the assessment was made. Beyond this, Denmark has a cultural legacy of respect and consideration for the environment.

One example of Danish commitment to sustainability is MIT-Denmark host Aquaporin A/S, a global water technology company dedicated to revolutionizing water purification through the use of industrial biotechnological techniques and thinking. “In Aquaporin, we believe that innovation is driven by gathering people with different scientific background, culture, and perspective,” says Mads Andersen, head of Aquaporin Academy, the company’s student program. “Collaboration between industry and academia is a perfect platform to exploit this, and we are therefore extremely happy and proud of being part of the MIT-Denmark program. The hope is also that the MIT students will be inspired by the Danish way of working with a focus on sustainability and work-life balance.”

Mayner is one of the first MIT interns spending the summer at Aquaporin A/S and is intrigued by what she has learned about Danish workplace culture. “I am especially excited by the idea of a high degree of worker autonomy with open communication, which I think will be valuable for my career and personal development.”

While buzzwords around Denmark — clean energy, work-life balance, and hygge — make it an easy sell for students, the program is also helping tackle a concern with the Danish workforce, as companies struggle to fill vacant positions. This problem stems from a combination of factors, including a shortage of candidates with backgrounds that match Denmark’s growing industries, primarily engineering, life science, and IT. Another aspect is a sheer numbers game — most people who want to work in Denmark are already working. “Danish businesses are thriving, and more and more companies find themselves in a position where they have to turn down orders due to lack of highly-skilled specialists,” says Linda Duncan Wendelboe, head of DI Global Talent. “We need more young talented people to consider Denmark as their next career destination. Thanks to a long tradition of collaboration between civil society, private and public sectors, academia and entrepreneurs, we are often ranked among the most innovative in the world. At the same time, sustainability is a fully incorporated part of the business strategy in many Danish companies.”

MIT-Denmark students will not only build relevant experience toward their academic and professional development but also get a taste of what it means to live in a modern welfare state, work in one of the best countries for business, and take new approaches to innovation and entrepreneurship the Danish way. Many host companies and universities view this as a way for students to dip their toes in Danish culture, hopeful that they will consider Denmark in their career paths not only for the strong industries in IT, life science, and sustainability, but also for the life-quality benefits. Other organizations hosting MIT students include 3Shape, Copenhagen Business School, COWI, Denmark’s Technical University (DTU), Grundfos, LEO Pharma, Maersk (the first company to make a match with an MIT student), SPACE10, University of Copenhagen, Visma, and several start-ups housed within BLOXHUB, among others.

Similar to other MISTI programs, the MIT-Denmark program will send students to Denmark on internship and research opportunities ranging from three to 12 months, all cost-neutral to ensure this opportunity can be accessible to every student at MIT.

The establishment of the MIT-Denmark program comes around the same time as greater involvement of Denmark in the Boston area. In January of this year, a new Danish Innovation Center (ICDK) — which will work to build stronger relationships between Denmark and the Boston health and life science community, on both the commercial and research sides — opened in Kendall Square. The Danish Minister of Science, Technology, Information, and Higher Education, Tommy Ahlers, visited Boston to officially open ICDK and to get a feel for the technology and innovation environment in the area.

The ICDK is a combined effort from the Danish Ministry of Foreign Affairs and Ministry of Education and Research currently being led by Science Attaché Torben Orla Nielsen. At the same time that the Danish government has a growing interest in the greater Boston area, so do Danish companies. A Danish investment firm is backing the offshore wind farm in New Bedford, while renewable energy powerhouses, such as Ørsted and Vestas, are making their mark on the implementation side. Furthermore, both larger companies and smaller Danish start-ups have opened up offices in Boston in the last couple of years. Danish household names such as Novo Nordisk, Danfoss, and LEO Pharma have a handful of staff at the Cambridge Innovation Center (CIC), with a smaller Danish biotech company, Medtrace, nearby.

MIT-Denmark Program Manager Sydney-Johanna Stevns has big dreams for the program. “There is such a wealth of opportunity and innovation in Denmark; connecting this to MIT has been naturally synergistic. In a few years, I expect we will be sending three times as many.” The Program’s Faculty Director, Kathleen Thelen, is excited about deepening MIT’s connections to Scandinavia: “This is such an incredible opportunity for MIT students to experience a different culture but in a way that is also firmly anchored in their chosen fields of study.”

MIT International Science and Technology Initiatives creates applied international learning opportunities for MIT students that increase their ability to understand and address real-world problems. MISTI collaborates with partners at MIT and beyond, serving as a vital nexus of international activity and bolstering the Institute’s research mission by promoting collaborations between MIT faculty members and their counterparts abroad. MISTI programs are made possible through the generosity of individuals, corporations, and foundations.

For more information, email misti@mit.edu or contact country program managers directly. MISTI is a program in the Center for International Studies within the School of Humanities, Arts, and Social Sciences.

Astronomers capture first image of a black hole

Wed, 04/10/2019 - 9:03am

The following press release was issued today by the European Southern Observatory.

This breakthrough was announced today in a series of six papers published in a special issue of The Astrophysical Journal Letters. The image reveals the black hole at the centre of Messier 87 [1], a massive galaxy in the nearby Virgo galaxy cluster. This black hole resides 55 million light-years from Earth and has a mass 6.5 billion times that of the Sun [2].

The EHT links telescopes around the globe to form an unprecedented Earth-sized virtual telescope [3]. The EHT offers scientists a new way to study the most extreme objects in the Universe predicted by Einstein’s general relativity during the centenary year of the historic experiment that first confirmed the theory [4].

"We have taken the first picture of a black hole," said EHT project director Sheperd S. Doeleman of the Center for Astrophysics | Harvard & Smithsonian. "This is an extraordinary scientific feat accomplished by a team of more than 200 researchers."

Black holes are extraordinary cosmic objects with enormous masses but extremely compact sizes. The presence of these objects affects their environment in extreme ways, warping spacetime and superheating any surrounding material.

"If immersed in a bright region, like a disc of glowing gas, we expect a black hole to create a dark region similar to a shadow — something predicted by Einstein’s general relativity that we’ve never seen before," explained chair of the EHT Science Council Heino Falcke of Radboud University, the Netherlands. "This shadow, caused by the gravitational bending and capture of light by the event horizon, reveals a lot about the nature of these fascinating objects and has allowed us to measure the enormous mass of M87’s black hole."

Multiple calibration and imaging methods have revealed a ring-like structure with a dark central region — the black hole’s shadow — that persisted over multiple independent EHT observations.

"Once we were sure we had imaged the shadow, we could compare our observations to extensive computer models that include the physics of warped space, superheated matter and strong magnetic fields. Many of the features of the observed image match our theoretical understanding surprisingly well," remarks Paul T.P. Ho, EHT Board member and Director of the East Asian Observatory [5]. "This makes us confident about the interpretation of our observations, including our estimation of the black hole’s mass."

"The confrontation of theory with observations is always a dramatic moment for a theorist. It was a relief and a source of pride to realise that the observations matched our predictions so well," elaborated EHT Board member Luciano Rezzolla of Goethe Universität, Germany.

Creating the EHT was a formidable challenge which required upgrading and connecting a worldwide network of eight pre-existing telescopes deployed at a variety of challenging high-altitude sites. These locations included volcanoes in Hawai`i and Mexico, mountains in Arizona and the Spanish Sierra Nevada, the Chilean Atacama Desert, and Antarctica.

The EHT observations use a technique called very-long-baseline interferometry (VLBI), which synchronises telescope facilities around the world and exploits the rotation of our planet to form one huge, Earth-size telescope observing at a wavelength of 1.3 mm. VLBI allows the EHT to achieve an angular resolution of 20 micro-arcseconds — enough to read a newspaper in New York from a café in Paris [6].
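That resolution can be checked with the standard interferometer relation, angular resolution ≈ wavelength divided by baseline. Taking the 1.3 mm observing wavelength and, as a rough stand-in for the longest baseline, the Earth's diameter of about 12,700 km (an approximation; the array's actual baselines are somewhat shorter):

```latex
% Rough resolution estimate; the Earth-diameter baseline is an approximation
\theta \;\approx\; \frac{\lambda}{D}
  \;=\; \frac{1.3\times10^{-3}\,\mathrm{m}}{1.27\times10^{7}\,\mathrm{m}}
  \;\approx\; 1.0\times10^{-10}\ \mathrm{rad}
  \;\approx\; 21\ \mu\text{arcseconds}
```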

The telescopes contributing to this result were ALMA, APEX, the IRAM 30-meter telescope, the James Clerk Maxwell Telescope, the Large Millimeter Telescope Alfonso Serrano, the Submillimeter Array, the Submillimeter Telescope, and the South Pole Telescope [7]. Petabytes of raw data from the telescopes were combined by highly specialised supercomputers hosted by the Max Planck Institute for Radio Astronomy and MIT Haystack Observatory.

European facilities and funding played a crucial role in this worldwide effort, with the participation of advanced European telescopes and the support from the European Research Council — particularly a €14 million grant for the BlackHoleCam project [8]. Support from ESO, IRAM and the Max Planck Society was also key. "This result builds on decades of European expertise in millimetre astronomy”, commented Karl Schuster, Director of IRAM and member of the EHT Board.

The construction of the EHT and the observations announced today represent the culmination of decades of observational, technical, and theoretical work. This example of global teamwork required close collaboration by researchers from around the world. Thirteen partner institutions worked together to create the EHT, using both pre-existing infrastructure and support from a variety of agencies. Key funding was provided by the US National Science Foundation (NSF), the EU's European Research Council (ERC), and funding agencies in East Asia.

“ESO is delighted to have significantly contributed to this result through its European leadership and pivotal role in two of the EHT’s component telescopes, located in Chile — ALMA and APEX,” commented ESO Director General Xavier Barcons. “ALMA is the most sensitive facility in the EHT, and its 66 high-precision antennas were critical in making the EHT a success.”

"We have achieved something presumed to be impossible just a generation ago," concluded Doeleman. "Breakthroughs in technology, connections between the world's best radio observatories, and innovative algorithms all came together to open an entirely new window on black holes and the event horizon.”

Notes

[1] The shadow of a black hole is the closest we can come to an image of the black hole itself, a completely dark object from which light cannot escape. The black hole’s boundary — the event horizon from which the EHT takes its name — is around 2.5 times smaller than the shadow it casts and measures just under 40 billion km across.
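As a rough consistency check, that size follows from the Schwarzschild radius formula applied to the 6.5-billion-solar-mass figure reported earlier in the release (a back-of-the-envelope estimate that neglects the black hole's spin):

```latex
% Event-horizon scale for M = 6.5 billion solar masses (spin neglected)
r_s \;=\; \frac{2GM}{c^{2}}
  \;=\; \frac{2\,(6.67\times10^{-11}\,\mathrm{m^3\,kg^{-1}\,s^{-2}})\,(6.5\times10^{9}\times1.99\times10^{30}\,\mathrm{kg})}{(3.0\times10^{8}\,\mathrm{m\,s^{-1}})^{2}}
  \;\approx\; 1.9\times10^{13}\,\mathrm{m},
\qquad
2\,r_s \;\approx\; 3.8\times10^{13}\,\mathrm{m} \;\approx\; 38\ \text{billion km}
```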

[2] Supermassive black holes are relatively tiny astronomical objects — which has made them impossible to directly observe until now. As the size of a black hole’s event horizon is proportional to its mass, the more massive a black hole, the larger the shadow. Thanks to its enormous mass and relative proximity, M87’s black hole was predicted to be one of the largest viewable from Earth — making it a perfect target for the EHT.

[3] Although the telescopes are not physically connected, they are able to synchronize their recorded data with atomic clocks — hydrogen masers — which precisely time their observations. These observations were collected at a wavelength of 1.3 mm during a 2017 global campaign. Each telescope of the EHT produced enormous amounts of data – roughly 350 terabytes per day – which was stored on high-performance helium-filled hard drives. These data were flown to highly specialised supercomputers — known as correlators — at the Max Planck Institute for Radio Astronomy and MIT Haystack Observatory to be combined. They were then painstakingly converted into an image using novel computational tools developed by the collaboration.
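Those figures are mutually consistent: multiplying the per-telescope rate quoted here by the eight stations gives petabyte-scale volumes per observing day (a rough product of the article's own numbers, not an official data-volume total):

```latex
% Rough data-volume estimate from the figures quoted in the article
8\ \text{telescopes} \times 350\ \mathrm{TB/day} \;\approx\; 2.8\ \mathrm{PB\ per\ observing\ day}
```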

[4] 100 years ago, two expeditions set out for Principe Island off the coast of Africa and Sobral in Brazil to observe the 1919 solar eclipse, with the goal of testing general relativity by seeing if starlight would be bent around the limb of the sun, as predicted by Einstein. In an echo of those observations, the EHT has sent team members to some of the world's highest and most isolated radio facilities to once again test our understanding of gravity.

[5] The East Asian Observatory (EAO) partner on the EHT project represents the participation of many regions in Asia, including China, Japan, Korea, Taiwan, Vietnam, Thailand, Malaysia, India and Indonesia.

[6] Future EHT observations will see substantially increased sensitivity with the participation of the IRAM NOEMA Observatory, the Greenland Telescope and the Kitt Peak Telescope.

[7] ALMA is a partnership of the European Southern Observatory (ESO; Europe, representing its member states), the U.S. National Science Foundation (NSF), and the National Institutes of Natural Sciences (NINS) of Japan, together with the National Research Council (Canada), the Ministry of Science and Technology (MOST; Taiwan), Academia Sinica Institute of Astronomy and Astrophysics (ASIAA; Taiwan), and Korea Astronomy and Space Science Institute (KASI; Republic of Korea), in cooperation with the Republic of Chile. APEX is operated by ESO, the 30-meter telescope is operated by IRAM (the IRAM Partner Organizations are MPG (Germany), CNRS (France) and IGN (Spain)), the James Clerk Maxwell Telescope is operated by the EAO, the Large Millimeter Telescope Alfonso Serrano is operated by INAOE and UMass, the Submillimeter Array is operated by SAO and ASIAA and the Submillimeter Telescope is operated by the Arizona Radio Observatory (ARO). The South Pole Telescope is operated by the University of Chicago with specialized EHT instrumentation provided by the University of Arizona.

[8] BlackHoleCam is an EU-funded project to image, measure and understand astrophysical black holes. The main goal of BlackHoleCam and the Event Horizon Telescope (EHT) is to make the first ever images of the billion-solar-mass black hole in the nearby galaxy M87 and of its smaller cousin, Sagittarius A*, the supermassive black hole at the centre of our Milky Way. This allows the determination of the deformation of spacetime caused by a black hole with extreme precision.
 
More information

This research was presented in a series of six papers published today in a special issue of The Astrophysical Journal Letters.

The EHT collaboration involves more than 200 researchers from Africa, Asia, Europe, North and South America. The international collaboration is working to capture the most detailed black hole images ever by creating a virtual Earth-sized telescope. Supported by considerable international investment, the EHT links existing telescopes using novel systems — creating a fundamentally new instrument with the highest angular resolving power that has yet been achieved.
The individual telescopes involved are: ALMA, APEX, the IRAM 30-meter Telescope, the IRAM NOEMA Observatory, the James Clerk Maxwell Telescope (JCMT), the Large Millimeter Telescope (LMT), the Submillimeter Array (SMA), the Submillimeter Telescope (SMT), the South Pole Telescope (SPT), the Kitt Peak Telescope, and the Greenland Telescope (GLT).

The EHT consortium consists of 13 stakeholder institutes: the Academia Sinica Institute of Astronomy and Astrophysics, the University of Arizona, the University of Chicago, the East Asian Observatory, Goethe-Universitaet Frankfurt, Institut de Radioastronomie Millimétrique, Large Millimeter Telescope, Max Planck Institute for Radio Astronomy, MIT Haystack Observatory, National Astronomical Observatory of Japan, Perimeter Institute for Theoretical Physics, Radboud University and the Smithsonian Astrophysical Observatory.

ESO is the foremost intergovernmental astronomy organisation in Europe and the world’s most productive ground-based astronomical observatory by far. It has 16 Member States: Austria, Belgium, the Czech Republic, Denmark, France, Finland, Germany, Ireland, Italy, the Netherlands, Poland, Portugal, Spain, Sweden, Switzerland and the United Kingdom, along with the host state of Chile and with Australia as a Strategic Partner. ESO carries out an ambitious programme focused on the design, construction and operation of powerful ground-based observing facilities enabling astronomers to make important scientific discoveries. ESO also plays a leading role in promoting and organising cooperation in astronomical research. ESO operates three unique world-class observing sites in Chile: La Silla, Paranal and Chajnantor. At Paranal, ESO operates the Very Large Telescope and its world-leading Very Large Telescope Interferometer as well as two survey telescopes, VISTA working in the infrared and the visible-light VLT Survey Telescope. Also at Paranal ESO will host and operate the Cherenkov Telescope Array South, the world’s largest and most sensitive gamma-ray observatory. ESO is also a major partner in two facilities on Chajnantor, APEX and ALMA, the largest astronomical project in existence. And on Cerro Armazones, close to Paranal, ESO is building the 39-metre Extremely Large Telescope, the ELT, which will become “the world’s biggest eye on the sky.”

Working together as a “virtual telescope,” observatories around the world produce first direct images of a black hole

Wed, 04/10/2019 - 9:00am

An international team of over 200 astronomers, including scientists from MIT’s Haystack Observatory, has captured the first direct images of a black hole. They accomplished this remarkable feat by coordinating the power of eight major radio observatories on four continents, to work together as a virtual, Earth-sized telescope.

In a series of papers published today in a special issue of Astrophysical Journal Letters, the team has revealed four images of the supermassive black hole at the heart of Messier 87, or M87, a galaxy within the Virgo galaxy cluster, 55 million light years from Earth.

All four images show a central dark region surrounded by a ring of light that appears lopsided — brighter on one side than the other.

Albert Einstein, in his theory of general relativity, predicted the existence of black holes, in the form of infinitely dense, compact regions in space, where gravity is so extreme that nothing, not even light, can escape from within. By definition, black holes are invisible. But if a black hole is surrounded by light-emitting material such as plasma, Einstein’s equations predict that some of this material should create a “shadow,” or an outline of the black hole and its boundary, also known as its event horizon.

Based on the new images of M87, the scientists believe they are seeing a black hole’s shadow for the first time, in the form of the dark region at the center of each image.

Relativity predicts that the immense gravitational field will cause light to bend around the black hole, forming a bright ring around its silhouette, and will also cause the surrounding material to orbit around the object at close to light speed. The bright, lopsided ring in the new images offers visual confirmation of these effects: The material heading toward our vantage point as it rotates appears brighter than the material on the other side.

From these images, theorists and modelers on the team have determined that the black hole is about 6.5 billion times as massive as our sun. Slight differences between each of the four images suggest that material is zipping around the black hole at close to light speed.

“This black hole is much bigger than the orbit of Neptune, and Neptune takes 200 years to go around the sun,” says Geoffrey Crew, a research scientist at Haystack Observatory. “With the M87 black hole being so massive, an orbiting planet would go around it within a week and be traveling at close to the speed of light.”
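The numbers in that comparison can be checked with a quick back-of-the-envelope calculation. The sketch below is illustrative only: it uses the article's 6.5-billion-solar-mass estimate plus standard reference values for the physical constants and Neptune's orbital radius, and it treats a light-speed trip around the event horizon as a stand-in for the fastest possible orbit.

```python
import math

G = 6.674e-11          # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8            # speed of light, m/s
M_sun = 1.989e30       # solar mass, kg

M_bh = 6.5e9 * M_sun   # reported mass of the M87 black hole
r_neptune = 4.5e12     # Neptune's orbital radius (~30 AU), m

# Schwarzschild radius of the event horizon: r_s = 2GM/c^2
r_s = 2 * G * M_bh / c**2

# Time for something moving at light speed to circle the horizon once
t_loop = 2 * math.pi * r_s / c

print(f"Event horizon radius: {r_s:.2e} m ({r_s / r_neptune:.1f}x Neptune's orbit)")
print(f"One light-speed loop around the horizon: {t_loop / 86400:.1f} days")
```

The result, an event horizon a few times wider than Neptune's orbit and a circuit time of roughly five days, is consistent with Crew's description.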

“People tend to view the sky as something static, that things don’t change in the heavens, or if they do, it’s on timescales that are longer than a human lifetime,” says Vincent Fish, a research scientist at Haystack Observatory. “But what we find for M87 is, at the very fine detail we have, objects change on the timescale of days. In the future, we can perhaps produce movies of these sources. Today we’re seeing the starting frames.”

“These remarkable new images of the M87 black hole prove that Einstein was right yet again,” says Maria Zuber, MIT’s vice president for research and the E.A. Griswold Professor of Geophysics in the Department of Earth, Atmospheric and Planetary Sciences. “The discovery was enabled by advances in digital systems at which Haystack engineers have long excelled.”

“Nature was kind”

The images were taken by the Event Horizon Telescope, or EHT, a planet-scale array comprising eight radio telescopes, each in a remote, high-altitude environment, including the mountaintops of Hawaii, Spain’s Sierra Nevada, the Chilean desert, and the Antarctic ice sheet.

On any given day, each telescope operates independently, observing astrophysical objects that emit faint radio waves. However, a black hole is far smaller and darker than any other radio source in the sky. To see it clearly, astronomers need to use very short wavelengths — in this case, 1.3 millimeters — that can cut through the clouds of material between a black hole and the Earth.

Making a picture of a black hole also requires a magnification, or “angular resolution,” equivalent to reading a text on a phone in New York from a sidewalk café in Paris. A telescope’s angular resolution increases with the size of its receiving dish. However, even the largest radio telescopes on Earth are nowhere near big enough to see a black hole.

But when multiple radio telescopes, separated by very large distances, are synchronized and focused on a single source in the sky, they can operate as one very large radio dish, through a technique known as very long baseline interferometry, or VLBI. As a result, their combined angular resolution is vastly improved.

For the EHT, the eight participating telescopes added up to a virtual radio dish as big as the Earth, with the ability to resolve an object down to 20 micro-arcseconds — about 3 million times sharper than 20/20 vision. By a happy coincidence, that’s about the precision required to view a black hole, according to Einstein’s equations.
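A rough sense of where the 20 micro-arcsecond figure comes from can be had from the standard diffraction-limit relation, angular resolution ≈ wavelength / dish diameter. The sketch below is illustrative only, using the 1.3-millimeter wavelength quoted in the article and the Earth's diameter as the size of the virtual dish.

```python
wavelength = 1.3e-3        # observing wavelength, m (1.3 mm)
dish_diameter = 1.2742e7   # Earth's diameter, m (the virtual dish)

theta_rad = wavelength / dish_diameter   # diffraction-limited resolution, radians
theta_uas = theta_rad * 206265 * 1e6     # radians -> micro-arcseconds

visual_acuity = 60.0                     # ~1 arcminute for 20/20 vision, in arcseconds
factor = visual_acuity / (theta_uas * 1e-6)

print(f"Angular resolution: about {theta_uas:.0f} micro-arcseconds")
print(f"Roughly {factor / 1e6:.0f} million times sharper than 20/20 vision")
```

Running this reproduces the figures quoted above: about 21 micro-arcseconds, roughly 3 million times sharper than 20/20 vision.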

“Nature was kind to us, and gave us something just big enough to see by using state-of-the-art equipment and techniques,” says Crew, co-leader of the EHT correlation working group and the ALMA Observatory VLBI team.

“Gobs of data”

On April 5, 2017, the EHT began observing M87. After consulting numerous weather forecasts, astronomers identified four nights that would produce clear conditions for all eight observatories — a rare opportunity, during which they could work as one collective dish to observe the black hole.

In radio astronomy, telescopes detect radio waves, registering incoming photons as a wave with an amplitude and phase that are measured as a voltage. As they observed M87, every telescope took in streams of data in the form of voltages, represented as digital numbers.

“We’re recording gobs of data — petabytes of data for each station,” Crew says.

In total, each telescope took in about one petabyte of data, equal to 1 million gigabytes. Each station recorded this enormous influx onto several Mark6 units — ultrafast data recorders that were originally developed at Haystack Observatory.

After the observing run ended, researchers at each station packed up the stack of hard drives and flew them via FedEx to Haystack Observatory, in Massachusetts, and Max Planck Institute for Radio Astronomy, in Germany. (Air transport was much faster than transmitting the data electronically.) At both locations, the data were played back into a highly specialized supercomputer called a correlator, which processed the data two streams at a time.

As each telescope occupies a different location on the EHT’s virtual radio dish, it has a slightly different view of the object of interest — in this case, M87. The data received by two separate telescopes may encode a similar signal of the black hole but also contain noise that’s specific to the respective telescopes.

The correlator lines up data from every possible pair of the EHT’s eight telescopes. From these comparisons, it mathematically weeds out the noise and picks out the black hole’s signal. High-precision atomic clocks installed at every telescope time-stamp incoming data, enabling analysts to match up data streams after the fact.
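The pairwise step can be pictured with a toy example. The sketch below is illustrative only and uses synthetic data: eight "stations" record the same weak signal buried in independent noise, and cross-correlating every possible pair (28 baselines for eight telescopes) picks out the shared signal. The real correlators at Haystack and Max Planck are, of course, far more sophisticated.

```python
import itertools
import numpy as np

rng = np.random.default_rng(0)
n = 4096
source = rng.normal(size=n)   # stand-in for the radio signal from the black hole

# Eight stations each record the signal plus their own, unrelated noise.
stations = {f"station{i}": source + 2.0 * rng.normal(size=n) for i in range(8)}

# Cross-correlate every possible pair of stations.
for a, b in itertools.combinations(stations, 2):
    corr = np.correlate(stations[a], stations[b], mode="full")
    lag = int(corr.argmax()) - (n - 1)   # delay at which the two streams best align
    print(f"{a} x {b}: correlation peaks at lag {lag}")
```

Because the stations are already aligned in this toy example, every pair peaks at a lag of zero; in practice, the atomic-clock timestamps are what make that alignment possible.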

“Precisely lining up the data streams and accounting for all kinds of subtle perturbations to the timing is one of the things that Haystack specializes in,” says Colin Lonsdale, Haystack director and vice chair of the EHT directing board.

Teams at both Haystack and Max Planck then began the painstaking process of “correlating” the data, identifying a range of problems at the different telescopes, fixing them, and rerunning the correlation, until the data could be rigorously verified. Only then were the data released to four separate teams around the world, each tasked with generating an image from the data using independent techniques.

“It was the second week of June, and I remember I didn’t sleep the night before the data was released, to be sure I was prepared,” says Kazunori Akiyama, co-leader of the EHT imaging group and a postdoc working at Haystack.

All four imaging teams previously tested their algorithms on other astrophysical objects, making sure that their techniques would produce an accurate visual representation of the radio data. When the files were released, Akiyama and his colleagues immediately ran the data through their respective algorithms. Importantly, each team did so independently of the others, to avoid any group bias in the results.

“The first image our group produced was slightly messy, but we saw this ring-like emission, and I was so excited at that moment,” Akiyama remembers. “But simultaneously I was worried that maybe I was the only person getting that black hole image.”

His concern was short-lived. Soon afterward all four teams met at the Black Hole Initiative at Harvard University to compare images, and found, with some relief, and much cheering and applause, that they all produced the same, lopsided, ring-like structure — the first direct images of a black hole.

“There have been ways to find signatures of black holes in astronomy, but this is the first time anyone’s ever taken a picture of one,” Crew says. “This is a watershed moment.”

“A new era”

The idea for the EHT was conceived in the early 2000s by Sheperd Doeleman PhD ’95, who was leading a pioneering VLBI program at Haystack Observatory and now directs the EHT project as an astronomer at the Harvard-Smithsonian Center for Astrophysics. At the time, Haystack engineers were developing the digital back-ends, recorders, and correlator that could process the enormous datastreams that an array of disparate telescopes would receive.

“The concept of imaging a black hole has been around for decades,” Lonsdale says. “But it was really the development of modern digital systems that got people thinking about radio astronomy as a way of actually doing it. More telescopes on mountaintops were being built, and the realization gradually came along that, hey, [imaging a black hole] isn’t absolutely crazy.”

In 2007, Doeleman’s team put the EHT concept to the test, installing Haystack’s recorders on three widely scattered radio telescopes and aiming them together at Sagittarius A*, the black hole at the center of our own galaxy.

“We didn’t have enough dishes to make an image,” recalls Fish, co-leader of the EHT science operations working group. “But we could see there was something there that’s about the right size.”

Today, the EHT has grown to an array of 11 observatories: ALMA, APEX, the Greenland Telescope, the IRAM 30-meter Telescope, the IRAM NOEMA Observatory, the Kitt Peak Telescope, the James Clerk Maxwell Telescope, the Large Millimeter Telescope Alfonso Serrano, the Submillimeter Array, the Submillimeter Telescope, and the South Pole Telescope. (Read further in the related press release.)

Coordinating observations and analysis has involved over 200 scientists from around the world who make up the EHT collaboration, with 13 main institutions, including Haystack Observatory. Key funding was provided by the National Science Foundation, the European Research Council, and funding agencies in East Asia, including the Japan Society for the Promotion of Science. The telescopes contributing to this result were ALMA, APEX, the IRAM 30-meter telescope, the James Clerk Maxwell Telescope, the Large Millimeter Telescope Alfonso Serrano, the Submillimeter Array, the Submillimeter Telescope, and the South Pole Telescope.

More observatories are scheduled to join the EHT array, to sharpen the image of M87 as well as attempt to see through the dense material that lies between Earth and the center of our own galaxy, to the heart of Sagittarius A*.

“We’ve demonstrated that the EHT is the observatory to see a black hole on an event horizon scale,” Akiyama says. “This is the dawn of a new era of black hole astrophysics.”

The Haystack EHT team includes John Barrett, Roger Cappallo, Joseph Crowley, Mark Derome, Kevin Dudevoir, Michael Hecht, Lynn Matthews, Kotaro Moriyama, Michael Poirier, Alan Rogers, Chester Ruszczyk, Jason SooHoo, Don Sousa, Michael Titus, and Alan Whitney. Additional contributors were MIT alumni Daniel Palumbo, Katie Bouman, Lindy Blackburn, Sera Markoff, and Bill Freeman, a professor in MIT’s Department of Electrical Engineering and Computer Science.

MIT Energy Conference explores changes to the grid in coming decades

Tue, 04/09/2019 - 2:07pm

The 14th annual student-run MIT Energy Conference, the nation’s largest such event, reflected an increasing maturity in the low-carbon and zero-carbon energy field. Many speakers at the event, held April 4 and 5 in Cambridge’s Kendall Square, made clear that a global push toward carbon-free energy is now seen as a given; the questions now center on how to make it happen more quickly, fairly, and universally.

While previous conferences emphasized promoting a rise in the use of renewable energy, the keynote speakers, panel discussions, and exhibitors at this year’s event increasingly focused on the details of policy proposals, ways of funding promising new startup ventures, and how the electric grid can be adapted to handle an ever-increasing proportion of small and intermittent power sources.

A new theme this year was the increasingly significant role seen for proposed fusion power plants, which for many years have been considered too futuristic and too uncertain to play a meaningful part in decarbonizing energy production. But several new, privately funded ventures are working on fast-track versions that are seen as far more realistic and feasible than previous efforts. Two of these companies were featured in the conference’s two opening talks.

Chris Mowry, CEO of General Fusion, laid out the urgent need for scalable, carbon-free power generation, and explained his company’s unusual approach to a fusion reactor, which relies on pistons surrounding a sphere containing molten metal that gets pushed inward, generating the high temperatures and pressures needed to produce fusion in a tiny pellet of fuel at the device’s center. “Fusion is the most powerful energy source in the universe, the ultimate clean tech,” Mowry said. And unlike wind, solar, or hydropower, fusion energy is potentially unaffected by geographical constraints or weather patterns, and could be sited anywhere. Its fuel, derived from ordinary water, is readily available.

Bob Mumgaard, CEO of Commonwealth Fusion Systems (CFS), a startup company launched in collaboration with MIT to develop an advanced tokamak (donut-shaped) fusion reactor using high-temperature superconducting magnets, described that company’s approach, which unlike many others is based on principles that have been well-established through decades of fusion research at MIT and elsewhere. “It’s an exciting time,” Mumgaard said. “There is a nascent fusion industry, an ecosystem, in this field, like what we saw with solar several years ago.” The various companies have already formed a fusion industry association, which will help to drive the industry forward.

The CFS design, being worked on through research efforts with MIT’s Plasma Science and Fusion Center, is now the largest research project at MIT, Mumgaard said, attesting to how seriously this field is now being taken. If successful, it could transform the world’s energy mix, potentially providing a significant fraction of the planet’s energy needs by this century’s end, he said.

Meanwhile, the solar and wind technologies that were struggling to find their place in the energy mix a decade ago have now become, in most places, the cheapest alternative for new power generation, beating out coal-fired plants even without considering subsidies for renewables or penalties for pollution-producing fossil plants. Now, new business models for spurring the further growth of these renewables have become a major focus.

For example, Audrey Lee, vice president of grid services at Sunrun, explained how her company is focusing on selling solar power as a service, rather than as something the customer has to buy. The company will install solar panels and a battery backup system on a customer’s house, and sell the power to the customer at a discounted price. Meanwhile, all such installations in an area function as a sort of minigrid, in which excess power from one house can be used by other houses in the area, creating what she calls a “virtual power plant” and dynamically sharing the resource.

Overall, the conference speakers were optimistic about the potential for deep reductions in carbon emissions, at least in terms of the technology and economics. Political and social factors may be harder to predict, but renewable energy systems exist or are in development that could meet the need for carbon-free power. In a live taping of “The Energy Gang” podcast held at the conference, podcast regular Katherine Hamilton, a former utility executive and regulator, said that “it’s pretty easy to get to 75 percent renewables by 2040” in electricity production worldwide. “I don’t see any barriers, and everybody wants it.”

John Farrell, director of the Energy Democracy Initiative at the Institute for Local Self-Reliance, said that while much emphasis has been put on large, grid-scale solar installations, “solar can be competitive at any scale.” And with renewable energy’s rapid growth in general, “everybody is going to win from decarbonization,” because it has benefits both to the individual consumer as well as globally by reducing pollution and greenhouse gases.

One of the key needs to enable greater penetration of renewable energy sources is an inexpensive and reliable way of storing the energy from intermittent sources, several speakers emphasized. Even battery or other systems that could supply a four-hour backup could make a big difference in reducing pollution, because that could eliminate the need for many fossil-fuel-powered peaking generators that are only used during periods of peak demand, typically on hot summer afternoons and early evenings. A system that could extend solar power generated at midday to be useful during these peak hours could drastically cut the need for such peaking generators, said Ravi Manghani, director of energy storage for Wood Mackenzie Power and Renewables.

Yet-Ming Chiang, the Kyocera Professor of Materials Science and Engineering at MIT and founder of several startup battery companies, explained how his company Form Energy is working to develop extremely low-cost, long-lasting batteries that could meet this kind of utility-scale storage need. He envisions large installations that would be very different from the way people think of batteries today. “It’s not something that would be manufactured and shipped,” like conventional batteries, “but more like a chemical plant,” a fixed facility with large tanks and pumps.

Greater investment in the new technologies will be necessary to take carbon-free energy the next step toward dominating the energy mix. In a keynote talk, Brian Deese, global head of sustainable investing at BlackRock, asked “what will it take to move the really big capital, the trillions of dollars, into carbon-free energy?” One thing that’s needed, he said, is more funding for the kinds of big trials that can help to prove the feasibility of a technology and “de-risk” it for investors.

Sectors such as the utility companies, municipalities, and commercial mortgage-backed securities could invest significant funds into research and development in the energy field, Deese suggested. Those three sectors have about $5 trillion among them to invest, so even if a very small fraction of that went into such research, it could have a dramatic impact, he said.

This year, for the first time, the MIT Energy Conference also incorporated the final round of MIT’s Clean Energy Prize, in which a student-led team can win up to $100,000 toward their energy startup venture. Four teams competed in the final round, including one called Reeddi, started by two students from Nigeria, which aims to provide rechargeable batteries in rural areas in Africa where grid power is either nonexistent or unreliable. Another team, Medley Thermal, proposes a way to produce steam for heating or industrial processes in a more efficient way, potentially shaving off 25 percent or more of the energy needed for such steam production. And Sun Co. Tracking aims to develop a passive system for solar panel tracking, to enable new or existing solar installations to capture more energy for a given area.

The grand prize of $100,000 went to a team called Aeroshield, led by MIT graduate student Elise Strobach, who developed a new kind of transparent, lightweight material to be sandwiched between glass panes, producing a window that is 50 percent more insulating than conventional double-pane windows and lasts five to 10 years longer, but can be manufactured at low cost on existing production lines with only minor changes. “$20 billion a year goes out through windows” in the form of wasted heat, she said — an amount of wasted energy that is “enough to power half the country.”

Shrinking the carbon footprint of a chemical in everyday objects

Tue, 04/09/2019 - 11:19am

The biggest source of global energy consumption is the industrial manufacturing of products such as plastics, iron, and steel. Not only does manufacturing these materials require huge amounts of energy, but many of the reactions also directly emit carbon dioxide as a byproduct.

In an effort to help reduce this energy use and the related emissions, MIT chemical engineers have devised an alternative approach to synthesizing epoxides, a type of chemical that is used to manufacture diverse products, including plastics, pharmaceuticals, and textiles. Their new approach, which uses electricity to run the reaction, can be done at room temperature and atmospheric pressure while eliminating carbon dioxide as a byproduct.

“What isn’t often realized is that industrial energy usage is far greater than transportation or residential usage. This is the elephant in the room, and there has been very little technical progress in terms of being able to reduce industrial energy consumption,” says Karthish Manthiram, an assistant professor of chemical engineering and the senior author of the new study.

The researchers have filed for a patent on their technique, and they are now working on improving the efficiency of the synthesis so that it could be adapted for large-scale, industrial use.

MIT postdoc Kyoungsuk Jin is the lead author of the paper, which appears online April 9 in the Journal of the American Chemical Society. Other authors include graduate students Joseph Maalouf, Nikifar Lazouski, and Nathan Corbin, and postdoc Dengtao Yang.

Ubiquitous chemicals

Epoxides, whose key chemical feature is a three-membered ring consisting of an oxygen atom bound to two carbon atoms, are used to manufacture products as varied as antifreeze, detergents, and polyester.

“It’s impossible to go for even a short period of one’s life without touching or feeling or wearing something that has at some point in its history involved an epoxide. They’re ubiquitous,” Manthiram says. “They’re in so many different places, but we tend not to think about the embedded energy and carbon dioxide footprint.”

Several epoxides are among the chemicals with the top carbon footprints. The production of one common epoxide, ethylene oxide, generates the fifth-largest carbon dioxide emissions of any chemical product.

Manufacturing epoxides requires many chemical steps, and most of them are very energy-intensive. For example, the reaction used to attach an atom of oxygen to ethylene, producing ethylene oxide, must be done at nearly 300 degrees Celsius and under pressures 20 times greater than atmospheric pressure. Furthermore, most of the energy used to power this kind of manufacturing comes from fossil fuels.

Adding to the carbon footprint, the reaction used to produce ethylene oxide also generates carbon dioxide as a side product, which is released into the atmosphere. Other epoxides are made using a more complicated approach involving hazardous peroxides, which can be explosive, and calcium hydroxide, which can cause skin irritation.

To come up with a more sustainable approach, the MIT team took inspiration from a reaction known as water oxidation, which uses electricity to split water into oxygen, protons, and electrons. They decided to try performing the water oxidation and then attaching the oxygen atom to an organic compound called an olefin, which is a precursor to epoxides.

This was a counterintuitive approach, Manthiram says, because olefins and water normally cannot react with each other. However, they can react with each other when an electric voltage is applied.

To take advantage of this, the MIT team designed a reactor with an anode where water is broken down into oxygen, hydrogen ions (protons), and electrons. Manganese oxide nanoparticles act as a catalyst to help this reaction along, and to incorporate the oxygen into an olefin to make an epoxide. Protons and electrons flow to the cathode, where they are converted into hydrogen gas.

Thermodynamically, this reaction only requires about 1 volt of electricity, less than the voltage of a standard AA battery. The reaction does not generate any carbon dioxide, and the researchers anticipate that they could further reduce the carbon footprint by using electricity from renewable sources such as solar or wind to power the epoxide conversion.

Scaling up

So far, the researchers have shown that they can use this process to create an epoxide called cyclooctene oxide, and they are now working on adapting it to other epoxides. They are also trying to make the conversion of olefins into epoxides more efficient — in this study, about 30 percent of the electrical current went into the conversion reaction, but they hope to double that.
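For a rough sense of scale, Faraday's law connects that current efficiency to the amount of product made. The sketch below is a hedged estimate, not a figure from the study: it assumes a two-electron oxidation per epoxide, which is not stated in the article, and it uses the roughly 1 volt and 30 percent current efficiency mentioned above, with cyclooctene oxide as the product.

```python
F = 96485.0                  # Faraday constant, C per mole of electrons
electrons_per_epoxide = 2    # assumed: one oxygen atom transferred per two electrons
faradaic_efficiency = 0.30   # ~30 percent of current drives epoxidation (reported)
cell_voltage = 1.0           # ~1 V thermodynamic requirement (reported)
molar_mass = 126.2           # g/mol for cyclooctene oxide (C8H14O)

charge = 3600.0              # one ampere-hour of charge, in coulombs
mol_epoxide = charge / F / electrons_per_epoxide * faradaic_efficiency
grams_per_amp_hour = mol_epoxide * molar_mass

joules_per_mol = electrons_per_epoxide * F * cell_voltage / faradaic_efficiency
kwh_per_kg = joules_per_mol / (molar_mass / 1000.0) / 3.6e6

print(f"Epoxide produced per ampere-hour: about {grams_per_amp_hour:.2f} g")
print(f"Electrical energy required: about {kwh_per_kg:.1f} kWh per kg of epoxide")
```

Under these assumptions, doubling the current efficiency, as the team hopes to do, would roughly double the first number and halve the second.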

They estimate that their process, if scaled up, could produce ethylene oxide at a cost of $900 per ton, compared to $1,500 per ton using current methods. That cost could be lowered further as the process becomes more efficient. Another factor that could contribute to the economic viability of this approach is that it also generates hydrogen as a byproduct, which is valuable in its own right to power fuel cells.

The researchers plan to continue developing the technology in hopes of eventually commercializing it for industrial use, and they are also working on using electricity to synthesize other kinds of chemicals.

“There are many processes that have enormous carbon dioxide footprints, and decarbonization can be driven by electrification,” Manthiram says. “One can eliminate temperature, eliminate pressure, and use voltage instead.”

The research was funded by MIT’s Department of Chemical Engineering and a National Science Foundation Graduate Research Fellowship.

MIT spinout seeks to transform food safety testing

Tue, 04/09/2019 - 11:00am

“This is a $10 billion market and everyone knows it.” Those are the words of Chris Hartshorn, CEO of a new MIT spinout — Xibus Systems — that is aiming to make a splash in the food industry with their new food safety sensor.

Hartshorn has considerable experience supporting innovation in agriculture and food technology. Prior to joining Xibus, he served as chief technology officer for Callaghan Innovation, a New Zealand government agency. A large portion of the country’s economy relies upon agriculture and food, so a significant portion of the innovation activity there is focused on those sectors.

While there, Hartshorn came in contact with a number of different food safety sensing technologies that were already on the market, aiming to meet the needs of New Zealand producers and others around the globe. Yet, “every time there was a pathogen-based food recall,” he says, “it shone a light on the fact that this problem has not yet been solved.”

He saw innovators across the world trying to develop a better food pathogen sensor, but when Xibus Systems approached Hartshorn with an invitation to join as CEO, he saw something unique in their approach, and decided to accept.

Novel liquid particles provide quick indication of food contamination

Xibus Systems was formed in the fall of 2018 to bring a fast, easy, and affordable food safety sensing technology to food industry users and everyday consumers. The development of the technology, based on MIT research, was supported by two commercialization grants through the MIT Abdul Latif Jameel Water and Food Systems Lab’s J-WAFS Solutions program. It is based on specialized droplets — called Janus emulsions — that can be used to detect bacterial contamination in food. The use of Janus droplets to detect bacteria was developed by a research team led by Tim Swager, the John D. MacArthur Professor of Chemistry, and Alexander Klibanov, the Novartis Professor of Biological Engineering and Chemistry.

Swager and researchers in his lab originally developed the method for making Janus emulsions in 2015. Their idea was to create a synthetic particle that has the same dynamic qualities as the surface of living cells. 

The liquid droplets consist of two hemispheres of equal size, one made of a blue-tinted fluorocarbon and one made of a red-tinted hydrocarbon. The hemispheres are of different densities, which affects how they align and how opaque or transparent they appear when viewed from different angles. They are, in effect, lenses. What makes these micro-lenses particularly distinctive, however, is their ability to bind to specific bacterial proteins. Their binding properties enable them to move, flipping from red to blue based on the presence or absence of a particular bacterium, such as Salmonella.

“We were thrilled by the design,” Swager says. “It is a completely new sensing method that could really transform the food safety sensing market. It showed faster results than anything currently available on the market, and could still be produced at very low cost.”

Janus emulsions respond exceptionally quickly to contaminants and provide quantifiable results that are visible to the naked eye or can be read via a smartphone sensor. 

“The technology is rooted in very interesting science,” Hartshorn says. “What we are doing is marrying this scientific discovery to an engineered product that meets a genuine need and that consumers will actually adopt.”

Having already secured nearly $1 million in seed funding from a variety of sources, and also being accepted into Sprout, a highly respected agri-food accelerator, they are off to a fast start.

Solving a billion-dollar industry challenge

Why does speed matter? In the field of food safety testing, the standard practice is to culture food samples to see if harmful bacterial colonies form. This process can take many days, and often can only be performed offsite in a specialized lab.

While more rapid techniques exist, they are expensive and require specialized instruments — which are not widely available — and still typically require 24 hours or more from start to finish. In instances where there is a long delay between food sampling and contaminant detection, food products could have already reached consumers’ hands — and upset their stomachs. While the instances of illness and death that can occur from food-borne illness are alarming enough, there are other costs as well. Food recalls result in tremendous waste, not only of the food products themselves but of the labor and resources involved in their growth, transportation, and processing. Food recalls also involve lost profit for the company. North America alone loses $5 billion annually in recalls, and that doesn’t count the indirect costs associated with the damage that occurs to particular brands, including market share losses that can last for years.

The food industry would benefit from a sensor that could provide fast and accurate readings of the presence and amount of bacterial contamination on-site. The Swager Group’s Janus emulsion technology has many of the elements required to meet this need and Xibus Systems is working to improve the speed, accuracy, and overall product design to ready the sensor for market.

Two other J-WAFS-funded researchers have helped improve the efficiency of early product designs. Mathias Kolle, assistant professor in the Department of Mechanical Engineering at MIT and recipient of a separate 2017 J-WAFS seed grant, is an expert on optical materials. In 2018, he and his graduate student Sara Nagelberg performed the calculations describing light’s interaction with the Janus particles so that Swager’s team could modify the design and improve performance. Kolle continues to be involved, serving with Swager on the technical advisory team for Xibus. 

This effort was a new direction for the Swager group. Says Swager: “The technology we originally developed was completely unprecedented. At the time that we applied for a J-WAFS Solutions grant, we were working in new territory and had minimal preliminary results. At that time, we would not have made it through, for example, government funding reviews, which can be conservative. J-WAFS sponsorship of our project at this early stage was critical to help us achieve the technology innovations that serve as the foundation of this new startup.”

Xibus co-founder Kent Harvey — also a member of the original MIT research team — is joined by Matthias Oberli and Yuri Malinkevich. Together with Hartshorn, they are working on a prototype for initial market entry. They are developing two different products: a smartphone sensor that is accessible to everyday consumers, and a portable handheld device that is more sensitive and would be suitable for industry. If they are able to build a successful platform that meets industry needs for affordability, accuracy, ease of use, and speed, they could apply that platform to any situation where a user would need to analyze organisms that live in water. This opens up many sectors in the life sciences, including water quality, soil sensing, and veterinary diagnostics, as well as fluid diagnostics for the broader healthcare sector.

The Xibus team wants to nail their product right off the bat.

“Since food safety sensing is a crowded field, you only get one shot to impress your potential customers,“ Hartshorn says. “If your first product is flawed or not interesting enough, it can be very hard to open the door with these customers again. So we need to be sure our prototype is a game-changer. That’s what’s keeping us awake at night.” 

MIT team places first in U.S. Air Force virtual reality competition

Tue, 04/09/2019 - 9:50am

When the United States Air Force put out a call for submissions for its first-ever Visionary Q-Prize competition in October 2018, a six-person team of MIT students and alumni took up the challenge. Last month, they emerged as a first-place winner for their prototype of a virtual reality tool they called CoSMIC (Command, Sensing, and Mapping Information Center).

The challenge was hosted by the Air Force Research Labs Space Vehicles Directorate and the Wright Brothers Institute to encourage nontraditional sources with innovative products and ideas to engage with military customers to develop solutions for safe and secure operations in space.

CoSMIC, a virtual reality visualization tool for satellite operators, placed first in the Augmented Reality/Virtual Reality category. “More than 23,000 objects — from satellites to debris to spent rocket bodies — are in orbit and being tracked,” says Eric Hinterman, a graduate student in MIT’s Department of Aeronautics and Astronautics and member of the winning team. “The challenge was to develop a user interface to help visualize these objects and predict if they’re going to collide, and what we can do to avoid that.”

The goal of CoSMIC is to enable satellite operators to process more data than they could using a standard 2-D screen. The technology minimizes mental workload and allows operators to more easily perform maneuvers and focus their attention on user-selected objects.

“Space is such a dynamic and complex environment that is becoming more and more congested and contested. We need to be able to display and interpret data faster and more accurately, so we can respond quickly and appropriately to any kind of threat, whether it’s adversarial, space debris, or satellites in close proximity,” says Gen. Jay Raymond, Air Force Space Command and Joint Forces Space Component commander. “The VQ-Prize challenge is a prime example of how we’re thinking and sourcing, outside the box, to get after rapid, agile onboarding of new technology that will make space operations safer for everyone.”

Hinterman and his team built their prototype from commercially available components, including an HTC VIVE Pro headset and a hand-tracking sensor. “You put on the headset, and it immerses you in the world of the satellites,” he explained. “You’re looking at the Earth, and satellites surround it as tiny pinpricks of light. Their orbital data are accurate, and you can zoom in on any of them.” The hand-tracking sensor allows operators to see their hands and to grasp and move objects as they would in the real world.

CoSMIC was developed in the studio of the VR/AR MIT, an organization for MIT students interested in virtual and augmented reality, of which two teammates are members. Hardware and other resources in the studio are available for student use, thanks to the generosity of corporate and individual donors who sponsor the student-run group.

In mid-March, the team spent a week at the United States Air Force Academy in Colorado Springs showcasing the CoSMIC prototype. Satellite operators in Colorado Springs shared with them their challenges and their current procedures and tools. “Based on that, we’re able to tweak our prototype and build other interesting concepts,” says team member Eswar Anandapadmanaban. Likewise, the Air Force development team had a chance to examine CoSMIC and consider ways to integrate it with their existing tools.

This month, the MIT team will attend the Space Symposium in Colorado Springs, an annual conference of professionals from the space community, ranging from the military to cybersecurity organizations to R&D facilities.

“We very intentionally sought out AR/VR enthusiasts and influencers at the outset of this challenge,” says Raymond. “And the solutions we received prove there’s a wealth of great ideas out there and that we need to continue to create avenues such as the VQ-Prize to connect innovative ideas to needs.”

“It’s been a fun challenge,” says Hinterman. “I think CoSMIC will be very relevant in the next couple of decades as the number of satellites being launched into orbit increases dramatically.”

The CoSMIC team includes MIT undergrads Eswar Anandapadmanaban, an electrical engineering and computer science major, and Alexander Laiman, a materials science major; grad student Eric Hinterman of aeronautics and astronautics; and alumni Barret Schlegelmilch SM ’18, MBA ’18, Steven Link SM ’18, MBA ’18, and Philip Ebben SM ’18, MBA ’18.

MET Fund launched to support MITdesignX

Tue, 04/09/2019 - 8:00am

MIT’s vibrant entrepreneurial ecosystem has inspired another first: a new seed fund established to support startups launched from School of Architecture and Planning (SA+P) innovation accelerator MITdesignX.

The Boston-based MET Fund, an independent legal entity unaffiliated with MIT, has been formed to provide support to qualified ventures that have completed the MITdesignX program.

This funding initiative is innovative in two important respects: It offers support at a key stage in the life of a startup, and it will give a portion of profits — 20 percent — to the accelerator itself.

“When MITdesignX was created, we envisioned the need for an independent fund to support the program and its entrepreneurial teams as they move from the academy into the real world,” says Dennis Frenchman, Class of 1922 Professor of Urban Design and Planning, and director of the MIT Center for Real Estate. Frenchman is also the founder and academic director of the accelerator.

“It’s gratifying to see friends and outside investors rally to the support of MITdesignX and its graduates by launching the MET Fund,” he says. “It is a testament to the lasting value and success of the MITdesignX program and SA+P.”

MITdesignX is a “venture design” accelerator based in the School of Architecture and Planning, as well as an interdisciplinary academic program operating at the intersection of design, science, and technology. It is dedicated to exploration and application of design for complex problem-solving and the discovery of high-impact solutions to address critical challenges facing the future of design, cities, and the human environment. It reflects a new approach to entrepreneurship education drawing on business theory, design thinking, and entrepreneurial practices.

The program is a launching pad for startups created by students, researchers, faculty, and staff at SA+P, and their collaborators across MIT and around the world. Its offerings include academic courses, financial support, workspace, a wide network of dedicated mentors, and business and institutional contacts. It is rare among accelerators in offering both academic credit for completion of rigorous coursework and startup grants.

“After just two years, we are excited to see the success of so many of our startups,” says Gilad Rosenzweig, executive director of MITdesignX. “I believe it is an indication that new ventures launched by people dedicated to scalable creative ideas will have an important impact in cities and the built environment around the world.”

The MET Fund was created to make seed funding available to qualifying MITdesignX teams after graduation — a critical period when the companies launch and take first steps such as testing prototypes, processes, and systems, and reaching out to customers and stakeholders.

An integral part of the MET Fund’s mission is to support the development of venture design education and entrepreneurship. To meet that goal, the fund will gift 20 percent of its profits to the operation of MITdesignX to further advance venture design education.

The MET Fund was founded by two entrepreneurs, Svafa Gronfeldt and Norbert Chang, who have vast experience with scaling companies from ideation to a global reach. Chang serves as the fund’s general partner. Investors include philanthropists, local angel and seed investors, and international investors interested in design-based startups and supporting innovative entrepreneurship education.

The fund’s portfolio companies are created by interdisciplinary teams from the MIT community and beyond, and include designers, architects, planners, artists, scientists, engineers, business majors, and computer scientists. The MET Fund will be advised by a non-executive, strategic advisory board with in-depth knowledge of the built environment, urban design, business, and entrepreneurship.

“I’m deeply grateful to Svafa, Norbert, and the entire MITdesignX team for taking design innovation to the next level,” says Hashim Sarkis, dean of the School of Architecture and Planning. “We are committed to training our students for the jobs of the future and enabling them to create those jobs themselves.”

Engineers develop concept for hybrid heavy-duty trucks

Tue, 04/09/2019 - 12:00am

Heavy-duty trucks, such as the 18-wheelers that transport many of the world’s goods from farm or factory to market, are virtually all powered by diesel engines. They account for a significant portion of worldwide greenhouse gas emissions, but little has been done so far to curb their climate-change-inducing exhaust.

Now, researchers at MIT have devised a new way of powering these trucks that could drastically curb pollution, increase efficiency, and reduce or even eliminate their net greenhouse gas emissions.

The concept involves using a plug-in hybrid engine system, in which the truck would be primarily powered by batteries, but with a spark ignition engine (instead of a diesel engine). That engine, which would allow the trucks to conveniently travel the same distances as today’s conventional diesel trucks, would be a flex-fuel model that could run on pure gasoline, pure alcohol, or blends of these fuels.

While the ultimate goal would be to power trucks entirely with batteries, the researchers say, this flex-fuel hybrid option could provide a way for such trucks to gain early entry into the marketplace by overcoming concerns about limited range, cost, or the need for excessive battery weight to achieve longer range.

The new concept was developed by MIT Energy Initiative and Plasma Science and Fusion Center research scientist Daniel Cohn and principal research engineer Leslie Bromberg, who are presenting it at the annual SAE International conference on April 11.

“We’ve been working for a number of years on ways to make engines for cars and trucks cleaner and more efficient, and we’ve been particularly interested in what you can do with spark ignition [as opposed to the compression ignition used in diesels], because it’s intrinsically much cleaner,” Cohn says. Compared to a diesel engine vehicle, a gasoline-powered vehicle produces only a tenth as much nitrogen oxide (NOx) pollution, a major component of air pollution.

In addition, by using a flex-fuel configuration that allows it to run on gasoline, ethanol, methanol, or blends of these, such engines have the potential to emit far less greenhouse gas than pure gasoline engines do, and the incremental cost for the fuel flexibility is very small, Cohn and Bromberg say. If run on pure methanol or ethanol derived from renewable sources such as agricultural waste or municipal trash, the net greenhouse gas emissions could even be zero. “It’s a way of making use of a low-greenhouse-gas fuel” when it’s available, “but always having the option of running it with gasoline” to ensure maximum flexibility, Cohn says.

While Tesla Motors has announced it will be producing an all-electric heavy-duty truck, Cohn says, “we think that’s going to be very challenging, because of the cost and weight of the batteries” needed to provide sufficient range. Meeting the expected driving range of conventional diesel trucks, Cohn and Bromberg estimate, would require somewhere between 10 and 15 tons of batteries. “That’s a significant fraction of the payload” such a truck could otherwise carry, Cohn says.
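A back-of-the-envelope version of that estimate is sketched below. All three inputs are illustrative assumptions rather than figures from the study; only the 10-to-15-ton conclusion comes from the researchers.

```python
# Illustrative assumptions, not numbers from Cohn and Bromberg's analysis.
range_miles = 900        # assumed long-haul range comparable to a diesel truck
kwh_per_mile = 2.0       # assumed energy use of a loaded heavy-duty electric truck
pack_wh_per_kg = 150.0   # assumed pack-level specific energy of current batteries

battery_kwh = range_miles * kwh_per_mile
battery_tons = battery_kwh * 1000.0 / pack_wh_per_kg / 1000.0   # kg -> metric tons

print(f"Battery pack: about {battery_kwh:.0f} kWh, roughly {battery_tons:.0f} tons")
```

With these assumed inputs the pack comes out to roughly 12 tons, in the same range the researchers cite.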

To get around that, “we think that the way to enable the use of electricity in these vehicles is with a plug-in hybrid,” he says. The engine they propose for such a hybrid is a version of one the two researchers have been working on for years, developing a highly efficient, flexible-fuel gasoline engine that would weigh far less, be more fuel-efficient, and produce a tenth as much air pollution as the best of today’s diesel-powered vehicles.

Cohn and Bromberg did a detailed analysis of both the engineering and the economics of what would be needed to develop such an engine to meet the needs of existing truck operators. In order to match the efficiency of diesels, a mix of alcohol with the gasoline, or even pure alcohol, can be used, and this can be processed using renewable energy sources, they found. Detailed computer modeling of a whole range of desired engine characteristics, combined with screening of the results using an artificial intelligence system, yielded clear indications of the most promising pathways and showed that such substitutions are indeed practically and financially feasible.

In both the present diesel and the proposed flex-fuel vehicles, the emissions are measured at the tailpipe, after a variety of emissions-control systems have done their work in both cases, so the comparison is a realistic measure of real-world emissions. The combination of a hybrid drive and flex-fuel engine is “a way to enable the introduction of electric drive into the heavy truck sector, by making it possible to meet range and cost requirements, and doing it in a way that’s clean,” Cohn says.

Bromberg says that gasoline engines have become much more efficient and clean over the years, and the relative cost of diesel fuel has gone up, so that the cost advantages that led to the near-universal adoption of diesels for heavy trucking no longer prevail. “Over time, gas engines have become more and more efficient, and they have an inherent advantage in producing less air pollution,” he says. And by using the engine in a hybrid system, it can always operate at its optimum speed, maximizing its efficiency.

Methane is an extremely potent greenhouse gas, so if it can be diverted to produce a useful fuel by converting it to methanol through a simple chemical process, “that’s one of the most attractive ways to make a clean fuel,” Bromberg says. “I think the alcohol fuels overall have a lot of promise.”

Already, he points out, California has plans for new regulations on truck emissions that are very difficult to meet with diesel engine vehicles. “We think there’s a significant rationale for trucking companies to go to gasoline or flexible fuel,” Cohn says. “The engines are cheaper, exhaust treatment systems are cheaper, and it’s a way to ensure that they can meet the expected regulations. And combining that with electric propulsion in a hybrid system, given an ever-cleaner electric grid, can further reduce emissions and pollution from the trucking sector.”

Pure electric propulsion for trucks is the ultimate goal, but today’s batteries don’t make that a realistic option yet, Cohn says: “Batteries are great, but let’s be realistic about what they can provide.”

And the combination they propose can address two major challenges at once, they say. “We don’t know which is going to be stronger, the desire to reduce greenhouse gases, or the desire to reduce air pollution.” In the U.S., climate change may be the bigger push, while in India and China air pollution may be more urgent, but “this technology has value for both challenges,” Cohn says.

The research was supported by the MIT Arthur Samberg Energy Innovation Fund.

Greener, more efficient natural gas filtration

Mon, 04/08/2019 - 6:18pm

Natural gas and biogas have become increasingly popular sources of energy throughout the world in recent years, thanks to their cleaner and more efficient combustion process when compared to coal and oil.

However, the presence of contaminants such as carbon dioxide within the gas means it must first be purified before it can be burnt as fuel.

Traditional processes to purify natural gas typically involve the use of toxic solvents and are extremely energy-intensive.

As a result, researchers have been investigating the use of membranes as a way to remove impurities from natural gas in a more cost-effective and environmentally friendly way, but finding a polymer material that can separate gases quickly and effectively has so far proven a challenge.

Now, in a paper published today in the journal Advanced Materials, researchers at MIT describe a new type of polymer membrane that can dramatically improve the efficiency of natural gas purification while reducing its environmental impact.

The membrane, which has been designed by an interdisciplinary research team at MIT, is capable of processing natural gas much more quickly than conventional materials, according to lead author Yuan He, a graduate student in the Department of Chemistry at MIT.

“Our design can process a lot more natural gas — removing a lot more carbon dioxide — in a shorter amount of time,” He says.

Existing membranes are typically made using linear strands of polymer, says Zachary Smith, the Joseph R. Mares Career Development Professor of Chemical Engineering at MIT, who led this research effort.

“These are long-chain polymers, which look like cooked spaghetti noodles at a molecular level,” he says. “You can make these cooked spaghetti noodles more rigid, and in so doing you create spaces between the noodles that change the packing structure and the spacing through which molecules can permeate.”

However, such materials are not sufficiently porous to allow carbon dioxide molecules to permeate through them at a fast enough rate to compete with existing purification processes.

Instead of using long chains of polymers, the researchers have designed membranes in which the strands look like hairbrushes, with tiny bristles on each strand. These bristles allow the polymers to separate gases much more effectively.

“We have a new design strategy, where we can tune the bristles on the hairbrush, which allows us to precisely and systematically tune the material,” Smith says. “In doing so, we can create precise subnanometer spacings, and enable the types of interactions that we need, to create selective and highly permeable membranes.”

In experiments, the membrane was able to withstand unprecedented carbon dioxide feed pressures of up to 51 bar without suffering plasticization, the researchers report. This compares to around 34 bar for the best-performing materials. The membrane is also 2,000 to 7,000 times more permeable than traditional membranes, according to the team.

Since the side-chains, or “bristles,” can be predesigned before being polymerized, it is much easier to incorporate a range of functions into the polymer, according to Francesco Benedetti, a visiting graduate student in Smith’s research lab in the Department of Chemical Engineering at MIT.

“The performance of the material can be tuned by making very subtle changes in the side-chains, or brushes, that we predesign,” Benedetti says. “That’s very important, because it means we can target very different applications, just by making very subtle changes.”

The research also included Timothy Swager, the John D. MacArthur Professor of Chemistry; Troy Van Voorhis, the Haslam and Dewey Professor of Chemistry; MIT graduate students Hong-Zhou Ye and Sharon Lin; M. Grazia DeAngelis at the University of Bologna; and Chao Liu and Yanchuan Zhao at the Chinese Academy of Sciences.

What’s more, the researchers have discovered that their hairbrush polymers are better able to withstand conditions that would cause other membranes to fail.

In existing membranes, the long-chain polymer strands overlap one another, sticking together to form solid-state films. But over time the polymer strands slide past one another, leading to physical and chemical instability.

In the new membrane design, in contrast, the polymer bristles are all connected by a long-chain strand, which acts as a backbone. As a result, the individual bristles are unable to move, creating a more stable membrane material.

This stability gives the material unprecedented resistance to a process known as plasticization, in which polymers swell in the presence of aggressive feedstocks such as carbon dioxide, Smith says.

“We’ve seen stability that we’ve never seen before in traditional polymers,” he says.

Using polymer membranes for gas separation offers high energy efficiency, minimal environmental impact, and simple, continuous operation, says Yan Xia, an assistant professor of chemistry at Stanford University, who was not involved in the research. But existing commercial materials have low permeance and moderate selectivity, which makes them less competitive than other, more energy-intensive processes.

“The membranes from these polymers exhibit very high permeance for several industrially important gases,” Xia says. “Further, these polymers exhibit little undesired plasticization as the gas pressure is increased, despite their relatively flexible backbone, making them desired materials for carbon dioxide-related separations.”

The researchers are now planning to carry out a systematic study of the chemistry and structure of the brushes, to investigate how this affects their performance, He says.

“We are looking for the most effective chemistry and structure for helping the separation process.”

The team is also hoping to investigate the use of its membrane designs in other applications, including carbon capture and storage, and even in separating liquids.

Paving ahead: Model aims for infrastructure preservation

Mon, 04/08/2019 - 1:00pm

In 2017, the American Society of Civil Engineers’ Infrastructure Report Card gave America’s infrastructure an overall grade of D+. Given that the report found the U.S. had been paying for just half of its infrastructure needs, the low grade unfortunately wasn't surprising.

To solve the crisis, researchers Fengdi Guo, Jeremy Gregory, and Randolph Kirchain at the MIT Concrete Sustainability Hub have proposed a new approach to long-term infrastructure preservation. The approach, outlined in the Journal of the Transportation Research Board, is called simulation optimization life-cycle cost analysis (LCCA).

Like other long-term pavement preservation strategies, this new MIT method takes a life-cycle cost analysis perspective, factoring the costs of future maintenance into the total cost of a project, in addition to the initial costs of construction.

But what sets MIT’s simulation optimization LCCA apart from the other approaches is its embrace of various uncertainties, particularly related to the timing and treatment methods used to repair and rehabilitate pavements — known as the treatment schedule.

Currently, traditional long-term strategies employ a rigid schedule for future road treatments, explains Guo. “One drawback of a rigid schedule,” he says, “is that it may overestimate the total life-cycle cost.”

Such strategies also assume that predetermined investments or decisions will result in a predictable outcome — for example, that a planned investment in a highway will produce a corresponding future improvement in its performance and quality.

The MIT approach, however, acknowledges that this is often not the case.

Conditions like construction costs, maintenance costs, and deterioration processes can change unpredictably over the course of a project’s lifetime. This means that a set investment may not produce a set result, and — if pavements deteriorate faster than expected — may lead to unbudgeted repairs or even unsafe conditions.

To manage these uncertainties, the MIT researchers assemble pricing, deterioration, and potential treatment schedule information, which they use to predict the many possible future prices of asphalt and concrete — two key paving materials.

The next part of the process is what gives simulation optimization its name — an algorithm simulates numerous potential scenarios in pricing and deterioration from year to year.

“We have simulated around 1,000 scenarios and, for each scenario, the future cost and deterioration rate are fixed,” Guo says.

After completing the simulations, optimization then comes into play. “For each simulated scenario we can find an optimal treatment schedule,” says Guo, “and based on this schedule we can then calculate its life cycle costs.”

The simulated and optimized results are then compiled to show the life-cycle cost distribution of different pavement design alternatives. Based on these distributions, the best design is selected.

Essentially, this new method considers the uncertainty of both treatment timing and treatment actions to reduce a project’s life-cycle cost. This results in different, more beneficial pavement designs.
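To make that workflow concrete, here is a minimal Python sketch of the simulate-then-optimize loop described above. The design alternatives, cost figures, deterioration model, and trigger-based treatment rule are simplified stand-ins invented for illustration; the Concrete Sustainability Hub's actual models are far more detailed.

import random

YEARS = 35
DESIGNS = {
    # Hypothetical alternatives: (initial cost per mile, treatment cost, deterioration range).
    "asphalt":  (1_000_000, 120_000, (3.0, 6.0)),
    "concrete": (1_400_000,  80_000, (1.5, 3.5)),
}

def life_cycle_cost(design, rng):
    """One simulated scenario: draw uncertain prices and a deterioration rate,
    then apply a simple condition-triggered treatment schedule (a stand-in for
    the true per-scenario optimization)."""
    initial, treatment_cost, (low, high) = DESIGNS[design]
    price_factor = rng.uniform(0.8, 1.5)   # uncertain future material prices
    rate = rng.uniform(low, high)          # uncertain deterioration per year
    condition, cost = 100.0, float(initial)
    for _ in range(YEARS):
        condition -= rate
        if condition < 60.0:               # treatment triggered by poor condition
            cost += treatment_cost * price_factor
            condition = 90.0
    return cost

rng = random.Random(0)
for design in DESIGNS:
    costs = sorted(life_cycle_cost(design, rng) for _ in range(1000))
    print(design, f"median life-cycle cost: ${costs[len(costs) // 2]:,.0f} per mile")

Each simulated scenario fixes a price factor and a deterioration rate, a treatment schedule is chosen within that scenario, and the resulting costs form the life-cycle cost distribution used to compare the design alternatives.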

And when compared to the costs of conventional methods, the advantages of simulation optimization become apparent.

In one case study of a mile-long stretch of road over a 35-year period, the simulation optimization model cost $150,000 less per mile than conventional methods when considering life-cycle cost.

The same is true of a road of similar length but with even more traffic flow. When the road saw nearly six times the truck traffic, the simulation optimization model cost $100,000 less per mile over its life cycle.

At a time when funding for infrastructure is scarce, these case studies demonstrate that a simulation optimization model will allow agencies to make better-informed paving decisions that prove more cost-effective over a pavement’s life cycle.

Concrete Sustainability Hub research is supported by the Portland Cement Association and the Ready Mixed Concrete Research and Education Foundation.

Welcome to The Deep

Mon, 04/08/2019 - 9:00am

Hidden away in the basement of Building 37, two small rooms are filled with band saws, hacksaws, 3-D printers, laser cutters, and a variety of other tools large and small. A painted mural of a squid holding power tools greets visitors at the entrance. Wall art of a word cloud depicts some of the space’s values: respect, responsibility, and collaboration.

Welcome to The Deep, a new makerspace open to the MIT community. It is the latest addition from Project Manus, an initiative to establish, update, and integrate makerspaces on campus.

During the week of March 11, The Deep hosted five open houses to draw attention to its 1,239-square-foot space, which started offering training to students in the summer of 2018. At the open houses, students were offered pizza, tours, and basic safety training amid examples of the creations they would be able to make themselves in the space: 3-D-printed frogs, an intricately carved wooden insect, and engraved boxes.

“What we've learned about these designing and building spaces is that it’s not the equipment or the space that makes them work well. It’s how they’re run,” says Marty Culpepper, a professor of mechanical engineering.

Culpepper is known as MIT’s “Maker Czar” and is the head of Project Manus under Provost Martin A. Schmidt and the MIT Innovation Initiative. “What makes The Deep unique is that the students have a lot of ownership, and the students help run the space,” Culpepper says.

Seth Avecilla MArch ’08, a maker technical specialist, helped coordinate and conduct the tours and safety training during the open house. He says The Deep will offer open hours, training classes, and student mentors. The space currently has 15 active mentors, and Avecilla would like to see the number double or triple to better accommodate demand among students.

“Part of our mission is to try different methods of training,” he says. “We’re about to get super busy — we hope.”

Robyn Goodner, a fellow maker technical specialist and organizer at The Deep, was pleased with turnout at the open houses. “They’ve been super packed. Monday to Wednesday we trained over 200 people,” she says.

The Deep is open to the entire Institute; students, staff, faculty, and alumni are all welcome to use the makerspace after completing training. It is connected via Project Manus to a broad network of makerspaces and maker programs around the MIT campus. In the spring of 2016, Culpepper and his team located and mapped out the 130,000 square feet of MIT space devoted to making, and made that information available through an app, Mobius. At the same time, Vice Chancellor Ian Waitz — who was then the dean of engineering — helped launch MakerBucks, a program that provides every first-year MIT student who undergoes introductory training with $50 for materials to use in the makerspace of their choosing.

With ties to the other makerspaces under Project Manus, The Deep is strengthening the connections between the already-existing makerspaces.

“We are positioning The Deep to be a central hub of a more closely networked group of makerspaces that basically make it possible for the students to traverse seamlessly through that network,” says Culpepper. “There's the formation of this larger MIT makerspace community that will revolve around The Deep, which will be good for the students in all kinds of different ways.”

Students who attended The Deep’s open houses expressed excitement about the makerspace.

“I have access to a lot of spaces, but I’m immensely impressed by the technical expertise that has accumulated with Project Manus in The Deep,” says Jacob Minke, a junior studying nuclear science and engineering and mechanical engineering. “It takes a deep level of understanding to do a hands-on project.”

Natasha Stark, a junior studying biological engineering who is interested in using The Deep’s laser cutting equipment, says she likes the space because it “is convenient and has better machines.”

“The part I’m most excited about is that it is explicitly for everybody,” she says. 

Roberto Bolli, a junior studying mechanical engineering, agrees.

“A space like this will be incredibly useful,” he says. He adds that he has felt limited at times by other, smaller makerspaces where “the most you can hope for is some woodworking tools.”

The name “The Deep” was chosen by the community involved with the creation of the makerspace. It plays off the fact that the makerspace is located underground, and is an analog reference to deep learning — not the kind involving artificial intelligence or computing, but the type of deep knowledge involved in refined manual techniques, design, and fabrication.

The Deep is itself a kind of experiment that Culpepper is using to develop ideas for the approximately 17,000-square-foot makerspace planned for the remodeled Metropolitan Storage Warehouse, which is estimated to be completed in 2022.

“The Deep is actually what we call an R&D makerspace,” Culpepper says. “We do making in there like normal, but we also run loads of experiments to figure out how to do making better. Through the research in The Deep, we're going to find out what is the best way to maximize the amount of benefit for the students while minimizing the amount of resources required to deliver that.”

Deep stimulation improves cognitive control by augmenting brain rhythms

Mon, 04/08/2019 - 8:00am

In a new study that could improve the therapeutic efficacy of deep-brain stimulation (DBS) for psychiatric disorders such as depression, a team of scientists shows that when DBS is applied to a specific brain region, it improves patients’ cognitive control over their behavior by increasing the power of a specific low-frequency brain rhythm in their prefrontal cortex.

The findings, published April 4 in Nature Communications, suggest that the increase in “theta” rhythms, readily detectable in EEG recordings, could provide neurosurgeons and psychiatrists with the reliable, objective, and rapid feedback they’ve needed to properly fine-tune the placement and “dosage” of DBS electrical stimulation. In Parkinson’s disease, where DBS has been most successful, that kind of feedback is available through a reduction in a patient’s tremors. But for depression or obsessive-compulsive disorder (OCD), symptoms can be more subtle, subjective, and slowly emergent.

“This is a major step forward for psychiatric brain stimulation,” says Alik Widge, the lead and corresponding author on the paper. Widge began the work while a Picower clinical fellow at the Picower Institute for Learning and Memory at MIT and a research fellow at Massachusetts General Hospital (MGH). He is now an assistant professor of psychiatry at the University of Minnesota Medical School. “This study shows us a specific mechanism of how DBS improves patients’ brain function, which should let us better identify who can benefit and how to optimize their individual treatment.”

Heading into the research, the team, also led by Earl Miller, Picower Professor of Neuroscience at MIT, and Darin Dougherty, associate professor of psychiatry at Harvard Medical School and director of the Division of Neurotherapeutics at MGH, knew that DBS applied to the brain’s ventral internal capsule and ventral striatum (VCVS) had shown mixed results in treating OCD and depression. A common feature of both conditions is a deficit of cognitive control, the function of controlling automatic or habitual behaviors through conscious will (for instance, overcoming recurring negative emotions that are a hallmark of depression). Cognitive control is performed in part by the prefrontal cortex, which is involved in circuits passing through the VCVS region. Moreover, theta rhythms are believed to be a means by which neurons in the prefrontal cortex could synchronize and drive the activity of neurons in other regions.

The team’s working hypothesis, therefore, was that DBS might help patients by increasing theta rhythms in these crucial cognitive control circuits linking prefrontal cortex to VCVS, thereby allowing the cortex to be more effective in controlling atypical emotions. If they could read out a patient’s theta rhythms and optimally amplify those with DBS, they reasoned, maybe they’d see an increase in cognitive control.

To find out, they worked with 14 volunteers at MGH, 12 of whom had previously received DBS treatment for depression and the other two for OCD. The researchers gave each participant a “conflict” task in which they had to identify the numeral in a sequence of three numbers that was different (like the “2” in “332”) despite the vivid and intentional background distraction of an emotionally evocative image (like adorable puppies or a vicious shark). An increase in cognitive control would mean a quicker reaction time in being able to identify the correct unique digit.

The researchers recorded brain waves of the subjects while they performed the task, once with DBS switched on and once with it off. What they found was that with DBS on, people indeed made their selection faster (overcoming the “interference,” or conflict of the emotional picture). There was no difference in accuracy, meaning that subjects were not sacrificing accuracy to gain more speed. Meanwhile, theta rhythms in the cortex increased markedly in association with both the stimulation in VCVS and the behavioral improvement of the faster reaction time.
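As a rough illustration of the two measurements involved, the following Python sketch estimates theta-band (roughly 4-8 Hz) power from an EEG channel and compares reaction times between DBS-on and DBS-off trials. It is not the authors' analysis pipeline; the sampling rate, the synthetic 6 Hz signal, and the reaction-time distributions are all simulated for demonstration.

import numpy as np
from scipy.signal import welch

fs = 1000                                   # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
rng = np.random.default_rng(0)
# Synthetic EEG: broadband noise plus a 6 Hz (theta) oscillation.
eeg = rng.normal(0.0, 1.0, t.size) + 0.8 * np.sin(2 * np.pi * 6 * t)

# Estimate the power spectrum and average it over the theta band.
freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)
theta = (freqs >= 4) & (freqs <= 8)
print(f"mean theta-band power: {psd[theta].mean():.3f}")

# Simulated reaction times (seconds) on the conflict task, DBS on vs. off.
rt_on = rng.normal(0.95, 0.15, 200)
rt_off = rng.normal(1.05, 0.15, 200)
print(f"mean reaction time, DBS on: {rt_on.mean():.2f} s, DBS off: {rt_off.mean():.2f} s")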

“This study demonstrates the value of closed-loop stimulation,” Miller says. “We read the brain's natural rhythms and then enhanced them by stimulation.  We augmented the rhythms that were already there. It suggests that brain rhythms play a role in cognition and that we can treat cognitive deficits by manipulating those rhythms.”

The authors acknowledged that the study was relatively small, and because all of the participants were receiving DBS as a treatment, the exact stimulation settings differed between individuals. Widge cautioned that a more standardized study would be important to verify the results. However, the authors said that with further research, theta rhythms could provide a biomarker to calibrate DBS treatments for psychiatric disorders where cognitive control is crucial. Individually tuning theta rhythms via DBS of the VCVS could even lead to new treatments for disorders where that control — and the flexibility of behavior that comes from exerting conscious intent over recurring emotions or compulsions — is central.

“The current study demonstrates that DBS at an FDA-approved target for psychiatric illness is shown to affect a specific symptom underlying multiple psychiatric illnesses, namely cognitive flexibility,” Dougherty says. “These findings suggest that looking at effects of DBS ‘underneath’ a diagnosis, at the symptom level, may lead to utility for other psychiatric illnesses in the short term and perhaps to more personalized medicine approaches to DBS in the longer term.”

In addition to Widge, Miller and Dougherty, the paper’s other authors are Samuel Zorowitz, Ishita Basu, Angelique C. Paulk, Sydney Cash, Emad Eskandar, and Thilo Deckersbach.

Several of the authors have applied for patents on technologies related to DBS and modulation of oscillations.

The study was funded by The Brain and Behavior Research Foundation, the Picower Family Foundation, the MIT Picower Institute Innovation Fund, the National Institutes of Health, and the Defense Advanced Research Projects Agency.
