MIT Latest News

MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.

Finding the brain’s compass

Mon, 08/12/2019 - 1:00pm

The world is constantly bombarding our senses with information, but the ways in which our brain extracts meaning from this information remain elusive. How do neurons transform raw visual input into a mental representation of an object — like a chair or a dog?

In work published in Nature Neuroscience, MIT neuroscientists have identified a brain circuit in mice that distills “high-dimensional” complex information about the environment into a simple abstract object in the brain.

“There are no degree markings in the external world; our current head direction has to be extracted, computed, and estimated by the brain,” explains Ila Fiete, an associate member of the McGovern Institute and senior author of the paper. “The approaches we used allowed us to demonstrate the emergence of a low-dimensional concept, essentially an abstract compass in the brain.”

This abstract compass, according to the researchers, is a one-dimensional ring that represents the current direction of the head relative to the external world.

Schooling fish

Trying to show that a data cloud has a simple shape, like a ring, is a bit like watching a school of fish. By tracking one or two sardines, you might not see a pattern. But if you could map every sardine, transforming the noisy dataset into points that capture the position of the whole school over time and where each fish sits relative to its neighbors, a pattern would emerge: a ring, a simple shape traced by the movement of hundreds of individual fish.

Fiete, who is also an associate professor in MIT’s Department of Brain and Cognitive Sciences, used a similar approach, called topological modeling, to transform the activity of large populations of noisy neurons into a data cloud in the shape of a ring.
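The intuition can be illustrated with a toy simulation (a hypothetical sketch, not the paper's data or method; the cell count, tuning curves, and noise level are assumptions): if each simulated neuron fires most strongly at its own preferred head direction, projecting the noisy population activity onto its top two principal components reveals a ring-shaped cloud.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy population of 50 head-direction cells recorded at 2,000 moments.
n_cells, n_samples = 50, 2000
preferred = np.linspace(0, 2 * np.pi, n_cells, endpoint=False)
head_dir = rng.uniform(0, 2 * np.pi, n_samples)

# Each row is the noisy population state at one moment in time.
rates = np.exp(2 * np.cos(head_dir[:, None] - preferred[None, :]))
rates += rng.normal(0, 0.5, rates.shape)

# Project the 50-dimensional cloud onto its top two principal components.
centered = rates - rates.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
embedded = centered @ vt[:2].T

# If the population encodes a single circular variable, the projected
# points trace a ring: their distances from the origin cluster tightly
# around one radius rather than filling a blob.
radii = np.linalg.norm(embedded, axis=1)
print(round(radii.std() / radii.mean(), 2))  # small relative spread -> ring
```

The study itself used topological methods, which can certify a ring shape however it is embedded, but the low-dimensional projection conveys the idea.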

Simple and persistent ring

Previous work in fly brains revealed a physical ellipsoid ring of neurons representing changes in the direction of the fly’s head, and researchers suspected that such a system might also exist in mammals.

In this new mouse study, Fiete and her colleagues measured hours of neural activity from scores of neurons in the anterodorsal thalamic nucleus (ADN) — a region believed to play a role in spatial navigation — as the animals moved freely around their environment. They mapped how the neurons in the ADN circuit fired as the animal’s head changed direction.

Together, these data points formed a cloud in the shape of a simple and persistent ring.

“This tells us a lot about how neural networks are organized in the brain,” explains Edvard Moser, director of the Kavli Institute of Systems Neuroscience in Norway, who was not involved in the study. “Past data have indirectly pointed towards such a ring-like organization, but only now has it been possible, with the right cell numbers and methods, to demonstrate it convincingly.”

Their method for characterizing the shape of the data cloud allowed Fiete and colleagues to determine which variable the circuit was devoted to representing, and to decode this variable over time, using only the neural responses.

“The animal’s doing really complicated stuff,” explains Fiete, “but this circuit is devoted to integrating the animal’s speed along a one-dimensional compass that encodes head direction. Without a manifold approach, which captures the whole state space, you wouldn’t know that this circuit of thousands of neurons is encoding only this one aspect of the complex behavior, and not encoding any other variables at the same time.”
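As a hypothetical illustration of reading a single variable out of neural responses (a textbook population-vector decoder, not necessarily the method the paper used, and with made-up tuning parameters), head direction can be estimated by averaging each cell's preferred angle weighted by how strongly it is firing:

```python
import numpy as np

rng = np.random.default_rng(1)

# 50 simulated head-direction cells with evenly spaced preferred angles.
n_cells = 50
preferred = np.linspace(0, 2 * np.pi, n_cells, endpoint=False)

true_dir = 1.2  # radians; hidden from the decoder
rates = np.exp(2 * np.cos(true_dir - preferred)) + rng.normal(0, 0.3, n_cells)

# Population vector: sum unit vectors at each preferred angle,
# weighted by that cell's firing rate, then take the resultant angle.
x = (rates * np.cos(preferred)).sum()
y = (rates * np.sin(preferred)).sum()
decoded = np.arctan2(y, x) % (2 * np.pi)
print(decoded)  # close to the true direction of 1.2 rad
```

Repeating this at every time step recovers the animal's head-direction trajectory from firing rates alone.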

Even during sleep, when the circuit is not being bombarded with external information, it robustly traces out the same one-dimensional ring, as if dreaming of past head-direction trajectories.

Further analysis revealed that the ring acts as an attractor. If neurons stray off trajectory, they are drawn back to it, quickly correcting the system. This attractor property means that the representation of head direction in abstract space is reliably stable over time, a key requirement for maintaining a stable sense of where our head is relative to the world around us.
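What attractor correction means can be sketched in a minimal two-dimensional caricature (not the paper's network model): the state's angle, which encodes head direction, is preserved, while any deviation off the ring decays away.

```python
import numpy as np

def relax_to_ring(state, dt=0.1, steps=200):
    """Pull a 2-D state radially back onto the unit ring, leaving its
    angle (the encoded head direction) unchanged."""
    for _ in range(steps):
        r = np.linalg.norm(state)
        # Radial correction only: r decays toward the attractor radius 1.
        state = state + dt * (1.0 - r) * state / r
    return state

# Perturb a point off the ring; the dynamics restore it.
perturbed = np.array([1.8 * np.cos(0.7), 1.8 * np.sin(0.7)])
corrected = relax_to_ring(perturbed)
print(np.linalg.norm(corrected))               # back to radius ~1.0
print(np.arctan2(corrected[1], corrected[0]))  # angle 0.7 preserved
```

Because every update is a scalar multiple of the state itself, the correction cannot rotate the encoded direction, only erase the drift.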

“In the absence of this ring,” Fiete explains, “we would be lost in the world.”

Shaping the future

Fiete’s work provides a first glimpse into how complex sensory information is distilled into a simple concept in the mind, and how that representation autonomously corrects errors, making it exquisitely stable.

But the implications of this study go beyond coding of head direction.

“Similar organization is probably present for other cognitive functions, so the paper is likely to inspire numerous new studies,” says Moser.

Fiete sees these analyses and related studies carried out by colleagues at the Norwegian University of Science and Technology, Princeton University, the Weizmann Institute, and elsewhere as fundamental to the future of neural decoding studies.

With this approach, she explains, it is possible to extract abstract representations of the mind from the brain, potentially even thoughts and dreams.

“We’ve found that the brain deconstructs and represents complex things in the world with simple shapes,” explains Fiete. “Manifold-level analysis can help us to find those shapes, and they almost certainly exist beyond head-direction circuits.”

Julian Picard: Chopping microwaves, sharpening instincts

Mon, 08/12/2019 - 12:50pm

“Looking through microscopes has never been my thing,” says Julian Picard.

As a graduate student in the Department of Physics, Picard works with the invisible world of particles and electromagnetic waves every day, yet he is motivated by the goal of creating something very visible, “something you can hold in your hand.” His study of the microwaves that speed from the megawatt gyrotron at MIT’s Plasma Science and Fusion Center (PSFC) could lead the way to smaller and more powerful particle accelerators, the kind of finished product Picard finds rewarding. 

Picard became interested in plasma as an undergraduate at the University of Washington in Seattle. His student research at its Advanced Propulsion Laboratory and Space Plasma Simulation Laboratory prepared him for an internship, and later a research engineer position, at Eagle Harbor Technologies. Working there on plasma generation and pulsed power supplies, he admired the way the most experienced scientists seemed to solve problems “intuitively.”

“That was inspiring to me,” he says. “One of the reasons I came back to grad school was to be steeped in something for a long time. After spending so long working hard on something, you start to develop a gut instinct.”

Picard notes it was difficult to find a graduate program that would provide him with a deep physics background, along with the opportunity to apply his understanding to a practical plasma project.

“That is what drives me,” Picard says. “I want to understand how something works well enough to apply it in a new way. To me, it feels vacuous to try to design something without understanding how it works. That’s why I wanted to find a program in physics: I wanted to continue developing my background in basic science, and then be able to apply it to a variety of things.”

He discovered what he wanted at the PSFC in the Plasma Science and Technology Group, headed by Richard Temkin, who introduced him to the center’s megawatt gyrotron, the source of microwaves for a new project to test particle accelerator cavities.

Particle accelerators, besides being essential tools for studying the universe, have practical applications including medical instrument sterilization, computer chip manufacture, material identification, and radioisotope production for cancer treatment. Accelerators typically run successfully at a low frequency (1 gigahertz), but researchers have long suspected that operating at higher frequencies would allow them to be made smaller and more efficient, improving convenience and possibly reducing expense.

Although the PSFC megawatt gyrotron is capable of producing microwaves at the higher frequency of 110 GHz, the length of the pulse would melt any accelerator cavity it passed through. Researchers needed to find a way to shorten that pulse.

In an article for Applied Physics Letters, Picard describes the experimental setup that allowed researchers to “chop” the pulse. The piece received the Outstanding Student Paper Award from the IEEE Nuclear and Plasma Sciences Society at the 2019 Pulsed Power and Plasma Science Conference in June.

To shorten the pulse, PSFC researchers strategically arranged a wafer of silicon in the path of the microwaves. Typically, microwaves would pass straight through this. However, a laser directed onto the wafer creates a type of plasma inside the silicon that will reflect the microwaves for as long as the laser is on. Those reflected high-frequency microwaves can be directed into the accelerator, and the pulse chopped to a manageable length (10 nanoseconds) simply by turning off the laser.
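The timing logic can be sketched numerically (the durations here are illustrative assumptions, not the experiment's actual pulse parameters): the wafer passes power to the accelerator only during the laser window, so the delivered pulse lasts exactly as long as the laser is on.

```python
import numpy as np

# Sample the timeline at 0.1-ns resolution.
t = np.arange(0.0, 3000.0, 0.1)           # time in nanoseconds
gyrotron_on = (t >= 0) & (t < 2000)       # a long (2-microsecond) gyrotron pulse
laser_on = (t >= 500) & (t < 510)         # a 10-ns laser window

# Microwaves reach the accelerator only while both the gyrotron pulse
# and the laser-driven plasma mirror in the silicon are active.
chopped = gyrotron_on & laser_on
duration_ns = chopped.sum() / 10          # samples are 0.1 ns apart
print(duration_ns)  # 10.0
```

Turning the laser off ends the delivered pulse, however long the gyrotron keeps emitting.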

The laser-targeted wafer does not reflect all the microwaves; about 30 percent are absorbed by or pass through the silicon. Picard’s study showed, however, that as the gyrotron power increased toward a megawatt, the wafer reflected more. Instead of reflecting 70 percent of the microwaves, it reflected closer to 80 or 85 percent.

“This effect had never been seen before because nobody could test at the higher power levels,” says Picard. “Reflection becomes more efficient at higher powers compared to lower powers. That means there is more power available, so we can test more interesting accelerator structures.”

The PSFC is working with a group from Stanford University that designs accelerator cavities, which can now be tested with the “Megawatt Microwave Pulse Chopper.” 

Picard is pleased with the experiment.

“What I’ve really liked about this project is that, at the end of the day, we have a device that makes a short pulse,” he says. “That’s a deliverable. It’s satisfying and motivating.”

New type of electrolyte could enhance supercapacitor performance

Mon, 08/12/2019 - 11:00am

Supercapacitors, electrical devices that store and release energy, need a layer of electrolyte — an electrically conductive material that can be solid, liquid, or somewhere in between. Now, researchers at MIT and several other institutions have developed a novel class of liquids that may open up new possibilities for improving the efficiency and stability of such devices while reducing their flammability.

“This proof-of-concept work represents a new paradigm for electrochemical energy storage,” the researchers say in their paper describing the finding, which appears today in the journal Nature Materials.

For decades, researchers have been aware of a class of materials known as ionic liquids — essentially, liquid salts — but this team has now added to these liquids a compound that is similar to a surfactant, like those used to disperse oil spills. With the addition of this material, the ionic liquids “have very new and strange properties,” including becoming highly viscous, says MIT postdoc Xianwen Mao PhD ’14, the lead author of the paper.

“It’s hard to imagine that this viscous liquid could be used for energy storage,” Mao says, “but what we find is that once we raise the temperature, it can store more energy, and more than many other electrolytes.”

That’s not entirely surprising, he says, since with other ionic liquids, as temperature increases, “the viscosity decreases and the energy-storage capacity increases.” But in this case, although the viscosity stays higher than that of other known electrolytes, the capacity increases very quickly with increasing temperature. That ends up giving the material an overall energy density — a measure of its ability to store electricity in a given volume — that exceeds those of many conventional electrolytes, and with greater stability and safety.

The key to its effectiveness is the way the molecules within the liquid automatically line themselves up, ending up in a layered configuration on the metal electrode surface. The molecules, which have a kind of tail on one end, line up with the heads facing outward toward the electrode or away from it, and the tails all cluster in the middle, forming a kind of sandwich. This is described as a self-assembled nanostructure.

“The reason why it’s behaving so differently” from conventional electrolytes is because of the way the molecules intrinsically assemble themselves into an ordered, layered structure when they come in contact with another material, such as the electrode inside a supercapacitor, says T. Alan Hatton, a professor of chemical engineering at MIT and the paper’s senior author. “It forms a very interesting, sandwich-like, double-layer structure.”

This highly ordered structure helps to prevent a phenomenon called “overscreening” that can occur with other ionic liquids, in which the first layer of ions (electrically charged atoms or molecules) that collect on an electrode surface contains more ions than there are corresponding charges on the surface. This can cause a more scattered distribution of ions, or a thicker ion multilayer, and thus a loss of efficiency in energy storage; “whereas with our case, because of the way everything is structured, charges are concentrated within the surface layer,” Hatton says.

The new class of materials, which the researchers call SAILs, for surface-active ionic liquids, could have a variety of applications for high-temperature energy storage, for example for use in hot environments such as in oil drilling or in chemical plants, according to Mao. “Our electrolyte is very safe at high temperatures, and even performs better,” he says. In contrast, some electrolytes used in lithium-ion batteries are quite flammable.

The material could help to improve performance of supercapacitors, Mao says. Such devices can be used to store electrical charge and are sometimes used to supplement battery systems in electric vehicles to provide an extra boost of power. Using the new material instead of a conventional electrolyte in a supercapacitor could increase its energy density by a factor of four or five, Mao says. Using the new electrolyte, future supercapacitors may even be able to store more energy than batteries, he says, potentially even replacing batteries in applications such as electric vehicles, personal electronics, or grid-level energy storage facilities.

The material could also be useful for a variety of emerging separation processes, Mao says. “A lot of newly developed separation processes require electrical control,” in various chemical processing and refining applications and in carbon dioxide capture, for example, as well as resource recovery from waste streams. These ionic liquids, being highly conductive, could be well-suited to many such applications, he says.

The material they initially developed is just an example of a variety of possible SAIL compounds. “The possibilities are almost unlimited,” Mao says. The team will continue to work on different variations and on optimizing its parameters for particular uses. “It might take a few months or years,” he says, “but working on a new class of materials is very exciting to do. There are many possibilities for further optimization.”

The research team included Paul Brown, Yinying Ren, Agilio Padua, and Margarida Costa Gomes at MIT; Ctirad Cervinka at École Normale Supérieure de Lyon, in France; Gavin Hazell and Julian Eastoe at the University of Bristol, in the U.K.; Hua Li and Rob Atkin at the University of Western Australia; and Isabelle Grillo at the Institut Max-von-Laue-Paul-Langevin in Grenoble, France. The researchers dedicate their paper to the memory of Grillo, who recently passed away.

“It is a very exciting result that surface-active ionic liquids (SAILs) with amphiphilic structures can self-assemble on electrode surfaces and enhance charge storage performance at electrified surfaces,” says Yi Cui, a professor of materials science and engineering at Stanford University, who was not associated with this research. “The authors have studied and understood the mechanism. The work here might have a great impact on the design of high energy density supercapacitors, and could also help improve battery performance,” he says.

Nicholas Abbott, a University Professor of Chemistry at Cornell University, who also was not involved in this work, says “The paper describes a very clever advance in interfacial charge storage, elegantly demonstrating how knowledge of molecular self-assembly at interfaces can be leveraged to address a contemporary technological challenge.”

The work was supported by the MIT Energy Initiative, an MIT Skoltech fellowship, and the Czech Science Foundation.

Tissue model reveals role of blood-brain barrier in Alzheimer’s

Mon, 08/12/2019 - 10:41am

Beta-amyloid plaques, the protein aggregates that form in the brains of Alzheimer’s patients, disrupt many brain functions and can kill neurons. They can also damage the blood-brain barrier — the normally tight border that prevents harmful molecules in the bloodstream from entering the brain.

MIT engineers have now developed a tissue model that mimics beta-amyloid’s effects on the blood-brain barrier, and used it to show that this damage can lead molecules such as thrombin, a clotting factor normally found in the bloodstream, to enter the brain and cause additional damage to Alzheimer’s neurons.

“We were able to show clearly in this model that the amyloid-beta secreted by Alzheimer’s disease cells can actually impair barrier function, and once that is impaired, factors are secreted into the brain tissue that can have adverse effects on neuron health,” says Roger Kamm, the Cecil and Ida Green Distinguished Professor of Mechanical and Biological Engineering at MIT.

The researchers also used the tissue model to show that a drug that restores the blood-brain barrier can slow down the cell death seen in Alzheimer’s neurons.

Kamm and Rudolph Tanzi, a professor of neurology at Harvard Medical School and Massachusetts General Hospital, are the senior authors of the study, which appears in the August 12 issue of the journal Advanced Science. MIT postdoc Yoojin Shin is the paper’s lead author.

Barrier breakdown

The blood vessel cells that make up the blood-brain barrier have many specialized proteins that help them to form tight junctions — cellular structures that act as a strong seal between cells.

Alzheimer’s patients often experience damage to brain blood vessels caused by beta-amyloid proteins, an effect known as cerebral amyloid angiopathy (CAA). It is believed that this damage allows harmful molecules to get into the brain more easily. Kamm decided to study this phenomenon, and its role in Alzheimer’s, by modeling brain and blood vessel tissue on a microfluidic chip.

“What we were trying to do from the start was generate a model that we could use to understand the interactions between Alzheimer’s disease neurons and the brain vasculature,” Kamm says. “Given the fact that there’s been so little success in developing therapeutics that are effective against Alzheimer’s, there has been increased attention paid to CAA over the last couple of years.”

His lab began working on this project several years ago, along with researchers at MGH who had engineered neurons to produce large amounts of beta-amyloid proteins, just like the brain cells of Alzheimer’s patients.

Led by Shin, the researchers devised a way to grow these cells in a microfluidic channel, where they produce and secrete beta-amyloid protein. On the same chip, in a parallel channel, the researchers grew brain endothelial cells, which are the cells that form the blood-brain barrier. An empty channel separated the two channels while each tissue type developed.

After 10 days of cell growth, the researchers added collagen to the central channel separating the two tissue types, which allowed molecules to diffuse from one channel to the other. They found that within three to six days, beta-amyloid proteins secreted by the neurons began to accumulate in the endothelial tissue, which led the cells to become leakier. These cells also showed a decline in proteins that form tight junctions, and an increase in enzymes that break down the extracellular matrix that normally surrounds and supports blood vessels.

As a result of this breakdown in the blood-brain barrier, thrombin was able to pass from blood flowing through the leaky vessels into the Alzheimer’s neurons. Excessive levels of thrombin can harm neurons and lead to cell death.

“We were able to demonstrate this bidirectional signaling between cell types and really solidify things that had been seen previously in animal experiments, but reproduce them in a model system that we can control with much more detail and better fidelity,” Kamm says.

Plugging the leaks

The researchers then decided to test two drugs that have previously been shown to solidify the blood-brain barrier in simpler models of endothelial tissue. Both of these drugs are FDA-approved to treat other conditions. The researchers found that one of these drugs, etodolac, worked very well, while the other, beclomethasone, had little effect on leakiness in their tissue model.

In tissue treated with etodolac, the blood-brain barrier became tighter, and neurons’ survival rates improved. The MIT and MGH team is now working with a drug discovery consortium to look for other drugs that might be able to restore the blood-brain barrier in Alzheimer’s patients.

“We’re starting to use this platform to screen for drugs that have come out of very simple single cell screens that we now need to validate in a more complex system,” Kamm says. “This approach could offer a new potential form of Alzheimer’s treatment, especially given the fact that so few treatments have been demonstrated to be effective.”

The research was funded by the Cure Alzheimer’s Fund and the JPB Foundation.

Beaver Works Summer Institute concludes its fourth year

Mon, 08/12/2019 - 9:00am

Nearly 1,000 students, instructors, and guests packed into MIT's Johnson Ice Rink on Aug. 4 to kick off the final event for the 2019 Beaver Works Summer Institute (BWSI). It was a full day of competitions and demonstrations — the culmination of four weeks of hard work and dedication from the students and staff. The event, held at various locations on the MIT campus, was a fitting end to what many of the students described as a transformational experience.

Now in its fourth year, the BWSI offers hands-on STEM learning to rising high school seniors, and now to middle school students, through project-based, workshop-style courses. The program is run jointly by MIT Lincoln Laboratory and the School of Engineering and this year admitted more than 250 students from 27 states and more than 130 schools. This year's BWSI featured 10 courses — Autonomous RACECAR Grand Prix, Autonomous Air Vehicle Racing, Autonomous Cognitive Assistant, Medlytics: Data Science for Health and Medicine, Build a CubeSat, Unmanned Air System–Synthetic Aperture Radar (UAS-SAR), Embedded Security and Hardware Hacking, Hacking a 3-D Printer, Remote Sensing for Crisis Response, and Assistive Technology — plus one middle school RACECAR class.

At the MIT Department of Aeronautics and Astronautics Building 31, teams of students from the UAS-SAR course were challenged to create an image of a covered space with a hidden pattern underneath. To do this, the teams each flew a small UAS around an enclosed room. The UAS was equipped with a radar that the students had built and tested during the course. Afterward, the teams answered visitors' questions and gave informal presentations about their radars.

"In addition to obtaining more hands-on experience, the people that I have met and bonded with made this program a moment in my life that I will never forget," says Swanyee Aung, a student in the UAS-SAR class from the Bronx High School of Science in New York.

Fiona McEvilly, a teaching assistant (TA) for the course, took the UAS-SAR class in 2018 and was excited to return and participate in a different way. "This year I was able to help BWSI grow and expand, and I'm still learning more as a TA this year," she says. “BWSI is such a great opportunity.”

Meanwhile, students from several of the courses displayed their work with posters and demonstrations in the MIT Stratton Student Center (Building W20).

Shuen Wu, a homeschool student from Minnesota who took the Medlytics course, explained his team's work, which was to design a prototype web application that would help physicians and patients identify disease from symptoms and then recommend treatment. The Medlytics course focused on the intersection of data science and medicine, allowing students to apply advanced machine learning and data mining to real-world medical challenges. "I really like the fact that we spent a lot of time actually working on projects," Wu says. "The best way to learn coding and statistics is to just do it."

One team of students from the Build a CubeSat course designed and constructed a prototype of a CubeSat called SLOOP that would inform people responding to oil spills about how to take action quickly and efficiently. "We learned a lot about how spacecraft and satellites are built and got to experience building something faithful to an actual spacecraft," says Kemal Pulungan, a student from Troy High School in New York.

Back in the Johnson Athletics Center, two floors above the ice rink, students from the Autonomous Air Vehicle Racing class completed an obstacle course race made of bridges and rings hanging at different heights in the air. Each team developed algorithms that allowed an Intel drone to autonomously navigate the race course. The winning team completed the course in one minute and 32 seconds.

In the afternoon, the BWSI students, staff, and guests gathered again in the Johnson Ice Rink to watch the RACECAR grand prix. RACECAR was the very first course offered in BWSI — and remains the largest course, with 57 students enrolled this year. For this event, the ice rink was converted into a racetrack with obstacles such as a graveyard, car wash, and giant windmill. Students programmed RACECARs (Rapid Autonomous Complex Environment Competing Ackermann-steering Robots), designed by MIT and Lincoln Laboratory, to navigate the track by using inertial sensors, lidar, and cameras.

This year was the first that middle school students were admitted to a modified version of the RACECAR course. Their course is based on the high school version, with students learning software coding and controls for programming their own RACECAR vehicles.

"This was a real opportunity for middle school students to be exposed to the basics of programming, computational thinking, computer vision, and robotics," says Sabina Chen, the middle school RACECAR instructor and an MIT graduate student. "Throughout the program, students were encouraged to think critically and work as a team to complete complex coding challenges. BWSI RACECAR Middle School may be one of the few programs that currently exist to teach not only computer programming, but also computer vision and autonomous driving, to students at this age."

In his opening remarks, Robert Shin, the director of Beaver Works and head of the Intelligence, Surveillance, and Reconnaissance and Tactical Systems Division at MIT Lincoln Laboratory, challenged the outgoing students to keep the ball rolling by becoming mentors to the next generation of engineers. One goal of BWSI is to continue expanding to bring the program to more and more students across the country and the world. In line with this goal, this year's program included teams from Mexico participating in the RACECAR and CogWorks courses and a team from Nauset High School on Cape Cod competing in RACECAR. In addition, the Ulsan National Institute of Science and Technology in South Korea provided TAs to several BWSI courses and plans to adopt the BWSI curriculum next year.

"I know it’s a cliché to say that BWSI was a transformational experience, but that's honestly the best description for my time here," says Emily Amspoker, a student in the Embedded Security and Hardware Hacking course from Kent Denver School in Colorado. "Before I started the online part of my course, I knew very little about the subject. Fortunately, throughout both the online and in-person class, I've learned more about embedded systems and cybersecurity than I could have fathomed just a couple of months ago. More importantly, I feel like I've grown to become a better teammate and person by collaborating and overcoming problems with students from across the country who are also passionate about STEM."

A single-photon source you can make at home

Fri, 08/09/2019 - 1:30pm

Quantum computing and quantum cryptography are expected to offer far greater capabilities than their classical counterparts. For example, the computational power of a quantum system may grow at a double-exponential rate, rather than a classical linear rate, because of the different nature of its basic unit, the qubit (quantum bit). Entangled particles enable unbreakable codes for secure communications. The importance of these technologies motivated the U.S. government to enact the National Quantum Initiative Act, which authorizes $1.2 billion over five years for developing quantum information science.

Single photons can be an essential qubit source for these applications. For practical use, the single photons should be in the telecom wavelengths, which range from 1,260 to 1,675 nanometers, and the device should function at room temperature. To date, only a single fluorescent quantum defect in carbon nanotubes possesses both features simultaneously. However, the precise creation of these single defects has been hampered by preparation methods that require special reactants, are difficult to control, proceed slowly, generate non-emissive defects, or are challenging to scale.

Now, research from Angela Belcher, head of the MIT Department of Biological Engineering, Koch Institute member, and the James Crafts Professor of Biological Engineering, and postdoc Ching-Wei Lin, published online in Nature Communications, describes a simple solution for creating carbon-nanotube-based single-photon emitters, known as fluorescent quantum defects.

“We can now quickly synthesize these fluorescent quantum defects within a minute, simply using household bleach and light,” Lin says. “And we can produce them at large scale easily.”

Belcher’s lab has demonstrated this remarkably simple method, which generates a minimum of non-fluorescent defects: carbon nanotubes are submerged in bleach and then irradiated with ultraviolet light for less than a minute to create the fluorescent quantum defects.

The availability of fluorescent quantum defects from this method greatly lowers the barrier to translating fundamental studies into practical applications. The nanotubes also become even brighter after the creation of these fluorescent defects. In addition, the excitation/emission of these defect carbon nanotubes is shifted into the so-called shortwave-infrared region (900-1,600 nm), an invisible optical window at slightly longer wavelengths than the regular near-infrared. Operating at longer wavelengths with brighter defect emitters allows researchers to see through tissue more clearly and deeply for optical imaging. As a result, optical probes based on defect carbon nanotubes (usually made by conjugating targeting materials to the nanotubes) should greatly improve imaging performance, enabling applications such as early cancer detection and image-guided surgery.

Cancer was the second-leading cause of death in the United States in 2017, accounting for roughly half a million deaths each year. The goal of the Belcher Lab is to develop very bright probes that work in the optimal optical window for spotting very small tumors, with a focus on ovarian and brain cancers. Statistically, the earlier doctors detect the disease, the higher the survival rate, and the new bright fluorescent quantum defects could be the right tool for upgrading current imaging systems to find even smaller tumors through the defect emission.

“We have demonstrated clear visualization of vasculature and lymphatic systems using 150 times less probe than the previous generation of imaging systems,” Belcher says. “This indicates that we have moved a step closer to early cancer detection.”

In collaboration with contributors from Rice University, the researchers identified for the first time the distribution of quantum defects along carbon nanotubes, using a novel spectroscopy method called variance spectroscopy. This method helped the researchers monitor the quality of the defect-containing nanotubes and find the correct synthetic parameters more easily.

Other co-authors at MIT include biological engineering graduate student Uyanga Tsedev, materials science and engineering graduate student Shengnan Huang, as well as Professor R. Bruce Weisman, Sergei Bachilo, and Zheng Yu of Rice University.

This work was supported by grants from the Marble Center for Cancer Nanomedicine, the Koch Institute Frontier Research Program, the National Science Foundation, and the Welch Foundation.

The music of the spheres

Fri, 08/09/2019 - 1:25pm

Space has long fascinated poets, physicists, astronomers, and science fiction writers. Musicians, too, have often found beauty and meaning in the skies above. At MIT’s Kresge Auditorium, a group of composers and musicians manifested their fascination with space in a concert titled “Songs from Extrasolar Spaces.” Featuring the Lorelei Ensemble — a Boston, Massachusetts-based women’s choir — the concert included premieres by MIT composers John Harbison and Elena Ruehr, along with compositions by Meredith Monk and Molly Herron. All the music was inspired by discoveries in astronomy.

“Songs from Extrasolar Spaces” was part of an MIT conference on TESS — the Transiting Exoplanet Survey Satellite, launched in April 2018. TESS is an MIT-led NASA mission that scans the skies for evidence of exoplanets: bodies ranging from dwarf planets to giant planets that orbit stars other than our sun. During its two-year mission, TESS and its four highly sensitive cameras will survey 85 percent of the sky, monitoring more than 200,000 stars for the temporary dips in brightness that might signal a transit — the passage of a planetary body across the face of a star.
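The transit signature described above — a temporary dip in a star's brightness — can be illustrated with a minimal sketch. This is purely a toy example, not TESS pipeline code; the threshold values and light curve are invented for illustration.

```python
# Toy illustration of transit detection: flag intervals where measured
# flux stays below the star's baseline brightness for several samples.
def find_dips(flux, baseline=1.0, depth=0.01, min_length=3):
    """Return (start, end) index pairs where flux stays below
    baseline - depth for at least min_length consecutive samples."""
    dips, start = [], None
    for i, f in enumerate(flux):
        if f < baseline - depth:
            if start is None:
                start = i  # a candidate dip begins
        else:
            if start is not None and i - start >= min_length:
                dips.append((start, i))
            start = None
    if start is not None and len(flux) - start >= min_length:
        dips.append((start, len(flux)))
    return dips

# A toy light curve: mostly flat, with one transit-like dip.
flux = [1.0] * 10 + [0.985] * 5 + [1.0] * 10
print(find_dips(flux))  # → [(10, 15)]
```

A real pipeline folds the light curve on candidate periods and fits a transit model, but the core observable is the same: a small, repeated drop in brightness.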

“There is a feeling you get when you look at these images from TESS,” says Ruehr, an award-winning MIT lecturer in the Music and Theater Arts Section and former Guggenheim Fellow. “A sense of vastness, of infinity. This is the sensation I tried to capture and transpose into vocal music.” 

Supported by the MIT Center for Art, Science and Technology’s Fay Chandler Creativity Grant; MIT Music and Theater Arts; and aerospace and technology giant Northrop Grumman, which also built the TESS satellite, the July 30 concert was conceived by MIT Research Associate Natalia Guerrero. Both the conference and concert marked the 50th anniversary of the Apollo 11 moon landing — another milestone in the quest to chart the universe and Earth’s place in it.

A 2014 MIT graduate, Guerrero manages the team finding planet candidates in the TESS images at the MIT Kavli Institute for Astrophysics and Space Research and is also the lead for the MIT branch of the mission’s communications team. “I wanted to include an event that could make the TESS mission accessible to people who aren’t astronomers or physicists,” says Guerrero. “But I also wanted that same event to inspire astronomers and physicists to look at their work in a new way.”

Guerrero majored in physics and creative writing at MIT, and after graduating she deejayed a radio show called “Voice Box” on the MIT radio station WMBR. The show showcased contemporary vocal music and exposed her to composers including Harbison and Ruehr. Last year, in early summer, Guerrero contacted Ruehr to gauge her interest in composing music for a still-hypothetical concert that might complement the 2019 TESS conference.

Ruehr was keen on the idea. She was also a perfect fit for the project. The composer had often drawn inspiration from visual images and other art forms for her music. “Sky Above Clouds,” an orchestral piece she composed in 1989, is inspired by the Georgia O’Keeffe paintings she viewed as a child at the Art Institute of Chicago. Ruehr had also created music inspired by David Mitchell’s visionary novel “Cloud Atlas” and Ann Patchett’s “Bel Canto.” “It’s a question of reinterpreting language, capturing its rhythms and volumes and channeling them into music,” says Ruehr. “The source language can be fiction, or painting, or in this case these dazzling images of the universe.”

In addition, Ruehr had long been fascinated by space and stars. “My father was a mathematician who studied fast Fourier transform analysis,” says Ruehr, who is currently composing an opera set in space. “As a young girl, I’d listen to him talking about infinity with his colleagues on the telephone. I would imagine my father existing in infinity, on the edge of space.”

Drawing inspiration from the images TESS beams back to Earth, Ruehr composed two pieces for “Songs from Extrasolar Spaces.” The first, titled “Not from the Stars,” takes its name and lyrics from a Shakespeare sonnet. For the second, “Exoplanets,” Ruehr used a text that Guerrero extrapolated from the titles of the first group of scientific papers published from TESS data. “I’m used to working from images,” explains Ruehr. “First, I study them. Then, I sit down at the piano and try to create a single sound that captures their essence and resonance. Then, I start playing with that sound.”

Ruehr was particularly pleased to compose music about space for the Lorelei Ensemble. “There’s a certain quality in a women’s choir, especially the Lorelei Ensemble, that is perfectly suited for this project,” says Ruehr. “They have an ethereal sound and wonderful harmonic structures that make us feel as if we’re perceiving a small dab of brightness in an envelope of darkness.”

At the 2019 MIT TESS conference, experts from across the globe shared results from the first year of observation in the sky above the Southern Hemisphere and discussed plans for the second-year trek above the Northern Hemisphere. George Ricker, TESS principal investigator; Sara Seager, TESS deputy director of science; and Guerrero presented a pre-concert lecture. The composers and musicians hope “Songs from Extrasolar Spaces” brought attention to the TESS mission, offered a new perspective on space exploration, and will perhaps spark further collaborations between scientists and artists. “Music has the power to generate incredibly powerful emotions,” says Ruehr. “So do these images from TESS. In many ways, they are more beautiful than any stars we might ever imagine.”

TESS is a NASA Astrophysics Explorer mission led and operated by MIT in Cambridge, Massachusetts, and managed by NASA’s Goddard Space Flight Center. Additional partners include Northrop Grumman, based in Falls Church, Virginia; NASA’s Ames Research Center in California’s Silicon Valley; the Harvard-Smithsonian Center for Astrophysics in Cambridge; MIT Lincoln Laboratory; and the Space Telescope Science Institute in Baltimore, Maryland. More than a dozen universities, research institutes, and observatories worldwide are participants in the mission.

Yearlong hackathon engages nano community around health issues

Fri, 08/09/2019 - 11:45am

A traditional hackathon focuses on computer science and programming, attracts coders in droves, and spans an entire weekend with three stages: problem definition, solution development, and business formation. 

Hacking Nanomedicine, however, recently brought together graduate and postgraduate students for a single morning of hands-on problem solving and innovation in health care while offering networking opportunities across departments and research interests. Moreover, the July hackathon was the first in a series of three half-day events structured to allow ideas to develop over time.

This deliberately deconstructed, yearlong process promotes necessary ebb and flow as teams shift in scope and recruit new members throughout each stage. “We believe this format is a powerful combination of intense, collaborative, multidisciplinary interactions, separated by restful research periods for reflecting on new ideas, allowing additional background research to take place and enabling additional people to be pulled into the fray as ideas take shape,” says Brian Anthony, associate director of MIT.nano and principal research scientist in MIT’s Institute for Medical Engineering and Science (IMES) and Department of Mechanical Engineering.

Organized by Marble Center for Cancer Nanomedicine Assistant Director Tarek Fadel, Foundation Medicine’s Michael Woonton, and MIT Hacking Medicine Co-Directors Freddy Nguyen and Kriti Subramanyam, the event was sponsored by IMES, the Koch Institute’s Marble Center for Cancer Nanomedicine, and MIT.nano, the new 200,000-square-foot nanoscale research center that launched at MIT last fall.

Sangeeta Bhatia, director of the Marble Center, emphasizes the importance of creating these communication channels between community members working in tangentially-related research spheres. "The goal of the event is to galvanize the nanotechnology community around Boston — including MIT.nano, the Marble Center, and IMES — to leverage the unique opportunities presented by miniaturization and to answer critical questions impacting health care,” says Bhatia, who is also the John J. and Dorothy Wilson Professor of Health Sciences and Technology at MIT.

At the kickoff session, organizers sought to create a smaller, workshop-based event that would introduce students, medical residents, and trainees to the world of hacking and disruptive problem solving. Representatives from MIT Hacking Medicine started the day with a brief overview and case study on PillPack, a successful internet pharmacy startup created from a previous hackathon event.

Participants then each had 30 seconds to develop and pitch problems highlighting critical health care industry shortcomings before forming into five teams based on shared interests. Groups pinpointed a wide array of timely topics, from the nation’s fight against obesity to minimizing vaccine pain. Each cohort had two hours to work through multifaceted, nanotechnology-based solutions. 

Mentors Cicely Fadel, a clinical researcher at the Wyss Institute for Biologically Inspired Engineering and neonatologist at Beth Israel Deaconess Medical Center, and David Chou, a hematopathologist at Massachusetts General Hospital and clinical fellow at the Wyss Institute, roamed the room during the solution phase, offering feedback on feasibility based on their own clinical experience.

At the conclusion of the problem-solving block, each of the five teams presented their solution to a panel of expert judges: Imran Babar, chief business officer of Cydan; Adama Marie Sesay, senior staff engineer of the Wyss Institute; Craig Mak, director of strategy at Arbor Bio; Jaideep Dudani, associate director of Relay Therapeutics; and Zen Chu, senior lecturer at the MIT Sloan School of Management and faculty director of MIT Hacking Medicine. 

Given the introductory nature of the event, judges opted to forego the traditional scoring rubric and instead paired with each team to offer individualized, qualitative feedback. Event sponsors note that the decision to steer away from a black-and-white, ranked-placing system encourages participants to continue thinking about the pain points of their problem in anticipation of the next hackathon in the series this fall.

During this second phase, participants will further develop their solution and explore the issue’s competitive landscape. Organizers plan to bring together local business and management stakeholders for a final event in the spring that will allow participants to pitch their project for acquisition or initial seed funding. 

Founded in 2011, MIT Hacking Medicine consists of both students and community members and aims to promote medical innovation to benefit the health care community. The group recognizes that technological advancement is often born out of collaboration rather than isolation. Monday’s event accordingly encouraged networking among students and postdocs not just from MIT but institutions all around Boston, creating lasting relationships rooted in a commitment to deliver crucial health care solutions.

Indeed, these events have proven successful in fostering connections and propelling innovation. According to MIT Hacking Medicine’s website, more than 50 companies with over $240 million in venture funding have been created since June 2018 thanks to their hackathons, workshops, and networking gatherings. The organization’s events across the globe have engaged nearly 22,000 hackers eager to disrupt the status quo and think critically about health systems in place.

This past weekend, MIT Hacking Medicine hosted its flagship Grand Hack event in Washington. Over the course of the weekend, like-minded students and professionals from a range of industries joined forces to tackle issues related to health care access, mental health and professional burnout, rare diseases, and more. Sponsors hope that Monday’s shorter, more intimate event will build enthusiasm for larger hackathons like this one and sustain communication among a diverse community of experts in their respective fields.

Optimus Ride’s autonomous system makes self-driving vehicles a reality

Fri, 08/09/2019 - 11:36am

Some of the biggest companies in the world are spending billions in the race to develop self-driving vehicles that can go anywhere. Meanwhile, Optimus Ride, a startup out of MIT, is already helping people get around by taking a different approach.

The company’s autonomous vehicles drive only in areas it comprehensively maps, or geofences. With today’s technology, self-driving vehicles can move safely through these areas at about 25 miles per hour.
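To make the geofencing idea concrete, a service area can be represented as a polygon and a vehicle's position checked against it with a standard ray-casting test. This is a generic sketch, not Optimus Ride's actual software, and the rectangular "campus" below is a made-up example.

```python
# Generic point-in-polygon (ray casting) check: does a vehicle's (x, y)
# position fall inside the mapped, geofenced service area?
def in_geofence(point, polygon):
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Count crossings of a horizontal ray extending right from the point;
        # an odd number of crossings means the point is inside.
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

campus = [(0, 0), (4, 0), (4, 3), (0, 3)]  # toy rectangular service area
print(in_geofence((2, 1), campus))  # → True
print(in_geofence((5, 1), campus))  # → False
```

In practice a deployment would layer much richer map data (lanes, intersections, speed limits) on top of a boundary check like this one.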

“It’s important to realize there are multiple approaches, and multiple markets, to self-driving,” says Optimus Ride CEO Ryan Chin MA ’00, SM ’04, PhD ’12. “There’s no monolithic George Jetson kind of self-driving vehicle. You have robot trucks, you have self-driving taxis, self-driving pizza delivery machines, and each of these will have different time frames of technological development and different markets.”

By partnering with developers, the Optimus team is currently focused on deploying its vehicles in communities with residential and commercial buildings, retirement communities, corporate and university campuses, airports, resorts, and smart cities. The founders estimate the combined value of transportation services in those markets to be over $600 billion.

“We believe this is an important, huge business, but we also believe this is the first addressable market in the sense that we believe the first autonomous vehicles that will generate profits and make business sense will appear in these environments, because you can build the tech much more quickly,” says Chin, who co-founded the company with Albert Huang SM ’05, PhD ’10, Jenny Larios Berlin MCP ’14, MBA ’15, Ramiro Almeida, and Class of 1948 Career Development Professor of Aeronautics and Astronautics Sertac Karaman.

Optimus Ride currently runs fleets of self-driving vehicles in the Seaport area of Boston, in a mixed-use development in South Weymouth, Massachusetts, and, as of this week, in the Brooklyn Navy Yard, a 300-acre industrial park that now hosts the first self-driving vehicle program in the state.

Later this year, the company will also deploy its autonomous vehicles in a private community in Fairfield, California, and in a mixed-use development in Reston, Virginia.

The early progress — and the valuable data that come with it — is the result of the company taking a holistic view of transportation. That perspective can be traced back to the founders’ diverse areas of focus at MIT.

A multidisciplinary team

Optimus Ride’s founders have worked across a wide array of departments, labs, and centers across MIT. The technical validation for the company began when Karaman participated in the Defense Advanced Research Projects Agency’s (DARPA) Urban Challenge with a team including Huang in 2007. Both researchers had also worked in the Computer Science and Artificial Intelligence Laboratory together.

For the event, DARPA challenged 89 teams with creating a fully autonomous vehicle that could traverse a 60-mile course in under six hours. The vehicle from MIT was one of only six to complete the journey.

Chin, who led a Media Lab project that developed a retractable electric vehicle in the Smart Cities group, met Karaman when both were PhD candidates in 2012. Almeida began working in the Media Lab as a visiting scholar a year later.

As members of the group combined their expertise on both self-driving technology and the way people move around communities, they realized they needed help developing business models around their unique approach to improving transportation. Jenny Larios Berlin was introduced to the founders in 2015 after earning joint degrees from the Department of Urban Studies and Planning and the Sloan School of Management. The team started Optimus Ride in August that year.

“The company is really a melting pot of ideas from all of these schools and departments,” Karaman says. “When we met each other, there was the technology angle, but we also realized there’s an important business angle, and there’s also an interesting urban planning/media arts and sciences angle around thinking of the system as a whole. So when we formed the company we thought, not just how can we build fully autonomous vehicles, but also how can we make transportation in general more affordable, sustainable, equitable, accessible, and so on.”

Karaman says the company’s approach could only have originated in a highly collaborative environment like MIT, and believes it gives the company a big advantage in the self-driving sector.

“I knew how to build autonomous systems, but in interacting with Ryan and Ramiro and Jenny, I really got a better understanding of what the systems would look like, what the smart cities that utilize the systems would look like, what some of the business models would look like,” Karaman says. “That has a feedback on the technology. It allows you to build the right kind of technology very efficiently in order to go to these markets.”

Optimus Ride's self-driving vehicles can travel on many public roads. Courtesy of Optimus Ride

First mover advantage

Optimus Ride’s vehicles have a suite of cameras, lasers, and sensors similar to what other companies use to help autonomous vehicles navigate their environments. But Karaman says the company’s key technical differentiators are its machine vision system, which rapidly identifies objects, and its ability to fuse all those data sources together to make predictions, such as where an object is going and when it will get there.

Optimus Ride's vehicles feature a range of cameras and sensors to help them navigate their environment. Courtesy of Optimus Ride
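The kind of prediction described above — where an object is going and when it will get there — can be sketched in its simplest form as constant-velocity extrapolation from fused position estimates. This is a deliberately minimal illustration, not the company's models, and the numbers are invented.

```python
# Minimal motion prediction: given two fused position estimates of an
# object taken dt seconds apart, extrapolate its future position
# assuming roughly constant velocity.
def predict(p0, p1, dt, horizon):
    """p0, p1: (x, y) positions dt seconds apart; return the
    extrapolated position 'horizon' seconds after p1."""
    vx = (p1[0] - p0[0]) / dt
    vy = (p1[1] - p0[1]) / dt
    return (p1[0] + vx * horizon, p1[1] + vy * horizon)

# A pedestrian seen at (0, 0) and, one second later, at (1, 0) is
# predicted to be near (4, 0) three seconds from now.
print(predict((0, 0), (1, 0), 1.0, 3.0))  # → (4.0, 0.0)
```

Production systems replace this with learned, uncertainty-aware models, but the prediction problem they solve is the one this toy function names.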

The strictly defined areas where the vehicles drive help them learn what Karaman calls the “culture of driving” on different roads. Human drivers might subconsciously take a little longer at certain intersections. Commuters might drive much faster than the speed limit. Those and other location-specific details, like the turn radius of the Silver Line bus in the Seaport, are learned by the system through experience.

“A lot of the well-funded autonomous driving projects out there try to capture everything at the same time and tackle every problem,” Karaman says. “But we operate the vehicle in places where it can learn very rapidly. If you go around, say, 10,000 miles in a small community, you end up seeing a certain intersection a hundred or a thousand times, so you learn the culture of driving through that intersection. But if you go 10,000 miles around the country, you’ll only see places once.”

Safety drivers are still required behind the wheel of autonomous vehicles in the states where Optimus Ride operates, but the founders hope to soon monitor fleets with fewer people, in a manner similar to air traffic control.

For now, though, they’re focused on scaling their current model. The contract in Reston, Virginia, is part of a strategic partnership with one of the largest real estate managers in the world, Brookfield Properties. Chin says Brookfield owns over 100 locations where Optimus Ride could deploy its system, and the company is aiming to operate 10 or more fleets by the end of 2020.

“Collectively, [the founders] probably have around three decades of experience in building self-driving vehicles, electric vehicles, shared vehicles, mobility transportation, on demand systems, and in looking at how you integrate new transportation systems into cities,” Chin says. “So that’s been the idea of the company: to marry together technical expertise with the right kind of policymaking, the right kind of business models, and to bring autonomy to the world as fast as possible.”

An emerging view of RNA transcription and splicing

Fri, 08/09/2019 - 11:30am

Cells often create compartments to control important biological functions. The nucleus is a prime example; surrounded by a membrane, it houses the genome. Yet cells also harbor enclosures that are not membrane-bound and more transient, like oil droplets in water. Over the past two years, these droplets (called “condensates”) have become increasingly recognized as major players in controlling genes. Now, a team led by Whitehead Institute scientists helps expand this emerging picture with the discovery that condensates play a role in splicing, an essential activity that ensures the genetic code is prepared to be translated into protein. The researchers also reveal how a critical piece of cellular machinery moves between different condensates. The team’s findings appear in the Aug. 7 online issue of Nature.

“Condensates represent a real paradigm shift in the way molecular biologists think about gene control,” says senior author Richard Young, a member of the Whitehead Institute and professor of biology at MIT. “Now, we’ve added a critical new layer to this thinking that enhances our understanding of splicing as well as the major transcriptional apparatus RNA polymerase II.”

Young’s lab has been at the forefront of studying how and when condensates form as well as their functions in gene regulation. In the current study, Young and his colleagues, including first authors Eric Guo and John Manteiga, focused their efforts on a key transition that happens when genes undergo transcription — an early step in gene activation whereby an RNA copy is created from the genes’ DNA template. First, all of the molecular machinery needed to make RNA, including a large protein complex known as RNA polymerase II, assembles at a given gene. Then, specific chemical modifications to RNA polymerase II allow it to begin transcribing DNA into RNA. This shift from so-called transcription initiation to active transcription also involves another important molecular transition: As RNA molecules begin to grow, the splicing apparatus must also move in and carry out its job.

“We wanted to step back and ask, ‘Do condensates play an important role in this switch, and if so, what mechanism might be responsible?’” explains Young.

For roughly three decades, it has been recognized that the factors required for splicing are stored in compartments called speckles. Yet whether these speckles play an active role in splicing, or are simply storage vessels, has remained unclear.

Using confocal microscopy, the Whitehead team discovered condensates filled with components of the splicing machinery in the vicinity of highly active genes. Notably, these structures exhibited similar liquid-like characteristics to those condensates described in prior studies from Young’s lab that are involved in transcription initiation. 

“These findings signaled to us that there are two types of condensates at work here: one involved in transcription initiation and the other in splicing and transcriptional elongation,” says Manteiga, a graduate student in Young’s lab.

With two different condensates at play, the researchers wondered: How does the critical transcriptional machinery, specifically RNA polymerase II, move from one condensate to the other?

Guo, Manteiga, and their colleagues found that chemical modification, specifically the addition of phosphate groups, serves as a kind of molecular switch that alters the protein complex’s affinity for a particular condensate. With fewer phosphate groups, it associates with the condensates for transcription initiation; when more phosphates are added, it enters the splicing condensates. Such phosphorylation occurs on one end of the protein complex, which contains a specialized region known as the C-terminal domain (CTD). Importantly, the CTD lacks a specific three-dimensional structure, and previous work has shown that such intrinsically disordered regions can influence how and when certain proteins are incorporated into condensates.

“It is well-documented that phosphorylation acts as a signal to help regulate the activity of RNA polymerase II,” says Guo, a postdoc in Young’s lab. “Now, we’ve shown that it also acts as a switch to alter the protein’s preference for different condensates.”

In light of their discoveries, the researchers propose a new view of splicing compartments, where speckles serve primarily as warehouses, storing the thousands of molecules required to support the splicing apparatus when they are not needed. But when splicing is active, the phosphorylated CTD of RNA Pol II serves as an attractant, drawing the necessary splicing materials toward the gene where they are needed and into the splicing condensate.

According to Young, this new outlook on gene control has emerged in part through a multidisciplinary approach, bringing together perspectives from biology and physics to learn how properties of matter predict some of the molecular behaviors he and his team have observed experimentally. “Working at the interface of these two fields is incredibly exciting,” says Young. “It is giving us a whole new way of looking at the world of regulatory biology.”

Support for this work was provided by the U.S. National Institutes of Health, National Science Foundation, Cancer Research Institute, Damon Runyon Cancer Research Foundation, Hope Funds for Cancer Research, Swedish Research Council, and German Research Foundation DFG.

Eight from MIT honored in 2019 Technology Review 35 Innovators Under 35

Fri, 08/09/2019 - 10:20am

Earlier this summer, MIT Technology Review released its annual list of 35 Innovators Under 35, and the 2019 roster has a strong MIT presence. At least eight MIT alumni and current or former postdocs were named to this year’s group.

According to MIT Technology Review, "35 Innovators Under 35," now in its 19th year, is a list of the most promising young innovators around the world whose accomplishments are poised to have a dramatic impact. The list is split into five categories: Inventors, Entrepreneurs, Visionaries, Humanitarians, and Pioneers.

Postdocs and alumni honored for 2019 are:

Anurag Bajpayee SM ’08, PhD ’12 (Entrepreneurs) The founder of Gradient, Bajpayee's approaches can treat dirty wastewater and can make desalination more efficient.

Cesar de la Fuente Nunez, 2015 postdoc (Pioneers) An assistant professor at the University of Pennsylvania, De la Fuente Nunez developed algorithms that follow Charles Darwin’s theory of evolution to create optimized artificial antibiotics.

Grace X. Gu SM ’14, PhD ’18 (Pioneers) Now at the University of California at Berkeley, Gu is using artificial intelligence to help dream up a new generation of lighter, stronger materials.

Qichao Hu ’07, 2012 postdoc (Entrepreneurs) Hu, founder and CEO of SolidEnergy Systems, is on the cusp of one of the most highly anticipated developments in industry: the next battery revolution.

Raluca Ada Popa ’10, MEng ’10, PhD ’14 (Visionaries) Now at the University of California at Berkeley, Popa's computer security method could protect data, even when attackers break in. 

Ritu Raman, postdoc (Inventors) A researcher at MIT's Koch Institute, Raman has developed inchworm-size robots made partly of biological tissue and muscle. 

Brandon Sorbom PhD ’17 (Inventors) Chief scientist at Commonwealth Fusion Systems, Sorbom's high-temperature superconductors could make fusion reactors much cheaper to build. 

Archana Venkataraman ’07, MEng ’07, PhD ’12 (Inventors) We still don’t know much about neurological disorders. Venkataraman, now at Johns Hopkins University, is using artificial intelligence to change that.

For more on the connection between the Institute and MIT Technology Review's Innovators Under 35, see Slice of MIT's lists from 2018, 2017, 2016, 2015, 2014, 2013, 2012, and 2010.

A version of this article originally appeared on the Slice of MIT blog.

Guided by AI, robotic platform automates molecule manufacture

Thu, 08/08/2019 - 2:04pm

Guided by artificial intelligence and powered by a robotic platform, a system developed by MIT researchers moves a step closer to automating the production of small molecules that could be used in medicine, solar energy, and polymer chemistry.

The system, described in the August 8 issue of Science, could free up bench chemists from a variety of routine and time-consuming tasks, and may suggest possibilities for how to make new molecular compounds, according to the study co-leaders Klavs F. Jensen, the Warren K. Lewis Professor of Chemical Engineering, and Timothy F. Jamison, the Robert R. Taylor Professor of Chemistry and associate provost at MIT.

The technology “has the promise to help people cut out all the tedious parts of molecule building,” including looking up potential reaction pathways and building the components of a molecular assembly line each time a new molecule is produced, says Jensen.

“And as a chemist, it may give you inspirations for new reactions that you hadn’t thought about before,” he adds.

Other MIT authors on the Science paper include Connor W. Coley, Dale A. Thomas III, Justin A. M. Lummiss, Jonathan N. Jaworski, Christopher P. Breen, Victor Schultz, Travis Hart, Joshua S. Fishman, Luke Rogers, Hanyu Gao, Robert W. Hicklin, Pieter P. Plehiers, Joshua Byington, John S. Piotti, William H. Green, and A. John Hart.

From inspiration to recipe to finished product

The new system combines three main steps. First, software guided by artificial intelligence suggests a route for synthesizing a molecule, then expert chemists review this route and refine it into a chemical “recipe,” and finally the recipe is sent to a robotic platform that automatically assembles the hardware and performs the reactions that build the molecule.

Coley and his colleagues have been working for more than three years to develop the open-source software suite that suggests and prioritizes possible synthesis routes. At the heart of the software are several neural network models, which the researchers trained on millions of previously published chemical reactions drawn from the Reaxys and U.S. Patent and Trademark Office databases. The software uses these data to identify the reaction transformations and conditions that it believes will be suitable for building a new compound.

“It helps make high-level decisions about what kinds of intermediates and starting materials to use, and then slightly more detailed analyses about what conditions you might want to use and if those reactions are likely to be successful,” says Coley.

“One of the primary motivations behind the design of the software is that it doesn’t just give you suggestions for molecules we know about or reactions we know about,” he notes. “It can generalize to new molecules that have never been made.”

Chemists then review the suggested synthesis routes produced by the software to build a more complete recipe for the target molecule. The chemists sometimes need to perform lab experiments or tinker with reagent concentrations and reaction temperatures, among other changes.

“They take some of the inspiration from the AI and convert that into an executable recipe file, largely because the chemical literature at present does not have enough information to move directly from inspiration to execution on an automated system,” Jamison says.

The final recipe is then loaded onto a platform where a robotic arm assembles modular reactors, separators, and other processing units into a continuous flow path, connecting pumps and lines that bring in the molecular ingredients.

“You load the recipe — that’s what controls the robotic platform — you load the reagents on, and press go, and that allows you to generate the molecule of interest,” says Thomas. “And then when it’s completed, it flushes the system and you can load the next set of reagents and recipe, and allow it to run.”

Unlike the continuous flow system the researchers presented last year, which had to be manually configured after each synthesis, the new system is entirely configured by the robotic platform.

“This gives us the ability to sequence one molecule after another, as well as generate a library of molecules on the system, autonomously,” says Jensen.

The design for the platform, which is about two cubic meters in size — slightly smaller than a standard chemical fume hood — resembles a telephone switchboard and operator system that moves connections between the modules on the platform.

“The robotic arm is what allowed us to manipulate the fluidic paths, which reduced the number of process modules and fluidic complexity of the system, and by reducing the fluidic complexity we can increase the molecular complexity,” says Thomas. “That allowed us to add additional reaction steps and expand the set of reactions that could be completed on the system within a relatively small footprint.”

Toward full automation

The researchers tested the full system by creating 15 different medicinal small molecules of varying synthesis complexity, with processes taking anywhere from two hours for the simplest creations to about 68 hours for manufacturing multiple compounds.

The team synthesized a variety of compounds: aspirin and the antibiotic secnidazole in back-to-back processes; the painkiller lidocaine and the antianxiety drug diazepam in back-to-back processes using a common feedstock of reagents; the blood thinner warfarin and the Parkinson’s disease drug safinamide, to show how the software could design compounds with similar molecular components but differing 3-D structures; and a family of five ACE inhibitor drugs and a family of four nonsteroidal anti-inflammatory drugs.

“I’m particularly proud of the diversity of the chemistry and the kinds of different chemical reactions,” says Jamison, who said the system handled about 30 different reactions compared to about 12 different reactions in the previous continuous flow system.

“We are really trying to close the gap between idea generation from these programs and what it takes to actually run a synthesis,” says Coley. “We hope that next-generation systems will further increase the fraction of time and effort that scientists can focus on creativity and design.”

The research was supported, in part, by the U.S. Defense Advanced Research Projects Agency (DARPA) Make-It program.

Characterizing tau aggregates in neurodegenerative diseases

Thu, 08/08/2019 - 12:00pm

The microtubule-binding protein tau in neurons of the central nervous system can misfold into filamentous aggregates under certain conditions. These filaments are found in many neurodegenerative diseases such as Alzheimer’s disease, chronic traumatic encephalopathy (CTE), and progressive supranuclear palsy. Understanding the molecular structure and dynamics of tau fibrils is important for designing anti-tau inhibitors to combat these diseases.

Cryo-electron microscopy studies have recently shown that tau fibrils derived from postmortem brains of Alzheimer’s patients adopt disease-specific molecular conformations. These conformations consist of long sheets, known as beta sheets, that are formed by thousands of protein molecules aligned in parallel. In contrast, recombinant tau fibrillized using the anionic polymer heparin was reported to exhibit polymorphic structures. However, the origin of this in vitro structural polymorphism as compared to the in vivo structural homogeneity is unknown.

Using solid-state nuclear magnetic resonance (SSNMR) spectroscopy, MIT Professor Mei Hong, in collaboration with Professor Bill DeGrado at the University of California at San Francisco, has shown in a paper, published July 29 in PNAS, that the beta sheet core of heparin-fibrillized tau in fact adopts a single molecular conformation. The tau protein they studied contains four microtubule-binding repeats, and the beta sheet fibril core spans the second and third repeats.

Clarifying biochemical studies of tau and its fibril formation

Previous research on this subject had reported four polymorphic structures of four-repeat (4R) tau fibrils, a polymorphism that led many labs to believe that in vitro tau fibrils were poor mimics of the in vivo patient-brain tau. However, through the use of their SSNMR spectra, which show only a single set of peaks for the protein, Hong and DeGrado discovered a crucial biochemical problem that led to the previous polymorphism.

Once this error was corrected, 4R tau was found to display only a single molecular structure. The revelation of this common biochemical problem, which is protease contamination in the heparin used to fibrillize tau, will significantly clarify and positively impact the field of tau research.

Preventing the formation of tau aggregates in Alzheimer’s disease and beyond

The three-dimensional fold of this four-repeat tau fibril core is distinct from the fibril core of the Alzheimer’s disease tau, which consists of a mixture of three- and four-repeat isoforms. “The tau isoform we studied is the same as that in diseases such as progressive supranuclear palsy, [so] the structural model we determined suggests what the patient brain tau from PSP may look like. Knowing this structure will be important for designing anti-tau inhibitors to either disrupt fibrils or prevent fibrils from forming in the first place,” explains Hong.

This SSNMR study also reported detailed characterizations of the mobilities of amino acid residues outside the rigid beta sheet core. These residues, which appear as a “fuzzy coat” in transmission electron micrographs, exhibit increasingly larger-amplitude motion towards the two ends of the polypeptide chain. Interestingly, the first and fourth microtubule-binding repeats, although excluded from the rigid core, display local β-strand conformations and are semi-rigid.

These structural and dynamical results suggest future medicinal interventions to disrupt or prevent the formation of tau aggregates in some neurodegenerative diseases.

Using recent gene flow to define microbe populations

Thu, 08/08/2019 - 10:59am

Identifying species among plants and animals has been a full-time occupation for some biologists, but the task is even more daunting for the myriad microbes that inhabit the planet. Now, MIT researchers have developed a simple measurement of gene flow that can define ecologically important populations among bacteria and archaea, including pinpointing populations associated with human diseases.

The gene flow metric separates co-existing microbes into genetically and ecologically distinct populations, Martin Polz, a professor of civil and environmental engineering at MIT, and colleagues write in the August 8 issue of Cell.

Polz and his colleagues also developed a method to identify parts of the genome in these populations that show different adaptations that can be mapped onto different environments. When they tested their approach on a gut bacterium, for instance, they were able to determine that different populations of the bacteria were associated with healthy individuals and patients with Crohn’s disease.

Biologists often call a group of plants or animals a species if the group is reproductively isolated from others — that is, individuals in the group can reproduce with each other, but they can’t reproduce with others. As a result, members of a species share a set of genes that differs from other species. Much of evolutionary theory centers on species and populations, the representatives of a species in a particular area.

But microbes “defy the classic species concept for plants and animals,” Polz explains. Microbes tend to reproduce asexually, simply splitting themselves in two rather than combining their genes with other individuals to produce offspring. Microbes are also notorious for “taking up DNA from environmental sources, such as viruses,” he says. “Viruses can transfer DNA into microbial cells and that DNA can be incorporated into their genomes.”

These processes make it difficult to sort coexisting microbes into distinct populations based on their genetic makeup. “If we can’t identify those populations in microbes, we can’t one-to-one apply all this rich ecological and evolutionary theory that has been developed for plants and animals to microbes,” says Polz.

If researchers want to measure an ecosystem’s resilience in the face of environmental change, for instance, they might look at how populations within species change over time. “If we don’t know what a species is, it’s very difficult to measure and assess these types of perturbations,” he adds.

Christopher Marx, a microbiologist at the University of Idaho who was not part of the Cell study, says he and his colleagues “will immediately apply” the MIT researchers’ approach to their own work. “We can use this to answer the question, ‘What should we define as an ecologically important unit?’”

A yardstick for gene flow

Polz and his colleagues decided to look for another way to define ecologically meaningful populations in microbes. Led by microbiology graduate student Philip Arevalo, the researchers developed a metric of gene flow that they called PopCOGenT (Populations as Clusters Of Gene Transfer).

PopCOGenT measures recent gene flow or gene transfer between closely related genomes. In general, microbial genomes that have exchanged DNA recently should share longer and more frequent stretches of identical DNA than if individuals were just reproducing by splitting their DNA in two. Without this sort of recent exchange, the researchers suggested, the length of these shared stretches of identical DNA would shorten as mutations insert new “letters” into the stretch.

Two microbial strains that are not genetically identical to each other but share sizable “chunks” of identical DNA are probably exchanging more genetic material with each other than with other strains. This gene flow measurement can define distinct microbial populations, as the researchers discovered in their tests of three different kinds of bacteria.
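
The intuition behind the metric can be sketched in a few lines of code. This toy illustration (not the actual PopCOGenT implementation; the sequences and the `identical_runs` helper are invented for the sketch) shows how recent transfer leaves long identical runs between two aligned genomes, while purely clonal divergence breaks runs up with scattered mutations:

```python
# Toy illustration of the idea behind PopCOGenT (not the actual tool):
# recent gene transfer leaves long runs of identical bases between two
# aligned genomes, while clonal divergence interrupts runs with mutations.

def identical_runs(seq_a, seq_b):
    """Return the lengths of maximal stretches where two aligned
    sequences are identical."""
    runs, current = [], 0
    for a, b in zip(seq_a, seq_b):
        if a == b:
            current += 1
        elif current:
            runs.append(current)
            current = 0
    if current:
        runs.append(current)
    return runs

# Clonal divergence: mutations scattered along the alignment.
clonal = identical_runs("ACGTACGTACGT", "ACGAACGTTCGT")
# Recent transfer: one long identical block dominates.
transferred = identical_runs("ACGTACGTACGT", "ACGTACGTACGA")

print(max(clonal), max(transferred))  # prints: 3 11
```

A pair of strains whose shared identical runs are much longer than mutation alone would allow is inferred to be exchanging DNA, and clustering strains by such exchange defines the populations.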

In Vibrio bacteria, for instance, closely related populations may share some core gene sequences, but they appear completely isolated from each other when viewed through this measurement of recent gene flow, Polz and colleagues found.

Polz says that the PopCOGenT method may work better at defining microbial populations than previous studies because it focuses on recent gene flow among closely related organisms, rather than including gene flow events that may have happened thousands of years in the past.

The method also suggests that while microbes are constantly taking in different DNA from their environment that might obscure patterns of gene flow, “it may be that this divergent DNA is really removed by selection from populations very quickly,” says Polz.

The reverse ecology approach

Microbiology graduate student David VanInsberghe then suggested a “reverse ecology” approach that could identify regions of the genome in these newly defined populations that show “selective sweeps” — places where DNA variation is reduced or eliminated, likely as a result of strong natural selection for a particular beneficial genetic variant.

By identifying specific sweeps within populations, and mapping the distribution of these populations, the method can reveal possible adaptations that drive microbes to inhabit a particular environment or host — without any prior knowledge of their environment. When the researchers tested this approach in the gut bacterium Ruminococcus gnavus, they uncovered separate populations of the microbe associated with healthy people and patients with Crohn’s disease.
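
A minimal sketch of a window-based sweep scan conveys the idea (the sequences, window size, and diversity measure here are invented simplifications, not the study’s actual method): compute genetic diversity in sliding windows across aligned strains of one population, and flag windows where variation has been nearly eliminated.

```python
from itertools import combinations

def window_diversity(alignment, window=4):
    """Mean pairwise differences per site in each window of an
    alignment (a list of equal-length sequences)."""
    n_sites = len(alignment[0])
    scores = []
    for start in range(0, n_sites, window):
        diffs = total = 0
        for s1, s2 in combinations(alignment, 2):
            for i in range(start, min(start + window, n_sites)):
                total += 1
                if s1[i] != s2[i]:
                    diffs += 1
        scores.append(diffs / total)
    return scores

# Three hypothetical strains from one population: the middle region
# varies, while the flanking regions are identical across strains.
strains = ["AAAACGTACCCC",
           "AAAACTTACCCC",
           "AAAAGGTTCCCC"]
div = window_diversity(strains)

# Windows with near-zero diversity are candidate selective sweeps.
sweeps = [w for w, d in enumerate(div) if d < 0.05]
print(sweeps)  # prints: [0, 2]
```

In real data the sweep regions, rather than being identical by chance, mark variants recently driven to fixation, and comparing which genes they contain across populations hints at the adaptations separating, say, healthy-gut and Crohn’s-associated populations.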

Polz says the reverse ecology method is likely to be applied in the near future to studying the full diversity of the bacteria that inhabit the human body. “There is a lot of interest in sequencing closely related organisms within the human microbiome and looking for health and disease associations, and the datasets are growing.”

He hopes to use the approach to examine the “flexible genome” of microbes. Strains of E. coli bacteria, for instance, share about 40 percent of their genes in a “core genome,” while the other 60 percent — the flexible part — varies between strains. “For me, it’s one of the biggest questions in microbiology: Why are these genomes so diverse in gene content?” Polz explains. “Once we can define populations as evolutionary units, we can interpret gene frequencies in these populations in light of evolutionary processes.”

Polz and colleagues’ findings could increase estimates of microbe diversity, says Marx. “What I think is really cool about this approach from Martin’s group is that they actually suggest that the complexity that we see is even more complex than we’re giving it credit for. There may be even more types that are ecologically important out there, things that if they were plants and animals we would be calling them species.”

Other MIT authors on the paper include Joseph Elsherbini and Jeff Gore. The research was supported, in part, by the National Science Foundation and the Simons Foundation.

Study furthers radically new view of gene control

Thu, 08/08/2019 - 10:59am

In recent years, MIT scientists have developed a new model for how key genes are controlled that suggests the cellular machinery that transcribes DNA into RNA forms specialized droplets called condensates. These droplets occur only at certain sites on the genome, helping to determine which genes are expressed in different types of cells.

In a new study that supports that model, researchers at MIT and the Whitehead Institute for Biomedical Research have discovered physical interactions between proteins and with DNA that help explain why these droplets, which stimulate the transcription of nearby genes, tend to cluster along specific stretches of DNA known as super enhancers. These enhancer regions do not encode proteins but instead regulate other genes.

“This study provides a fundamentally important new approach to deciphering how the ‘dark matter’ in our genome functions in gene control,” says Richard Young, an MIT professor of biology and member of the Whitehead Institute.

Young is one of the senior authors of the paper, along with Phillip Sharp, an MIT Institute Professor and member of MIT’s Koch Institute for Integrative Cancer Research; and Arup K. Chakraborty, the Robert T. Haslam Professor in Chemical Engineering, a professor of physics and chemistry, and a member of MIT’s Institute for Medical Engineering and Science and the Ragon Institute of MGH, MIT, and Harvard.

Graduate student Krishna Shrinivas and postdoc Benjamin Sabari are the lead authors of the paper, which appears in Molecular Cell on Aug. 8.

“A biochemical factory”

Every cell in an organism has an identical genome, but cells such as neurons or heart cells express different subsets of those genes, allowing them to carry out their specialized functions. Previous research has shown that many of these genes are located near super enhancers, which bind to proteins called transcription factors that stimulate the copying of nearby genes into RNA.

About three years ago, Sharp, Young, and Chakraborty joined forces to try to model the interactions that occur at enhancers. In a 2017 Cell paper, based on computational studies, they hypothesized that in these regions, transcription factors form droplets called phase-separated condensates. Similar to droplets of oil suspended in salad dressing, these condensates are collections of molecules that form distinct cellular compartments but have no membrane separating them from the rest of the cell.

In a 2018 Science paper, the researchers showed that these dynamic droplets do form at super enhancer locations. Made of clusters of transcription factors and other molecules, these droplets attract enzymes such as RNA polymerases that are needed to copy DNA into messenger RNA, keeping gene transcription active at specific sites.

“We had demonstrated that the transcription machinery forms liquid-like droplets at certain regulatory regions on our genome, however we didn't fully understand how or why these dewdrops of biological molecules only seemed to condense around specific points on our genome,” Shrinivas says.

As one possible explanation for that site specificity, the research team hypothesized that weak interactions between intrinsically disordered regions of transcription factors and other transcriptional molecules, along with specific interactions between transcription factors and particular DNA elements, might determine whether a condensate forms at a particular stretch of DNA. Biologists have traditionally focused on “lock-and-key” style interactions between rigidly structured protein segments to explain most cellular processes, but more recent evidence suggests that weak interactions between floppy protein regions also play an important role in cell activities.

In this study, computational modeling and experimentation revealed that the cumulative force of these weak interactions conspires with transcription factor-DNA interactions to determine whether a condensate of transcription factors will form at a particular site on the genome. Different cell types produce different transcription factors, which bind to different enhancers. When many transcription factors cluster around the same enhancers, weak interactions between the proteins are more likely to occur. Once a critical threshold concentration is reached, condensates form.
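
The threshold behavior can be caricatured numerically. In this sketch every quantity and the cooperative amplification rule are illustrative assumptions, not the study’s model: DNA binding sites recruit factors, weak protein-protein interactions amplify the local concentration, and a condensate nucleates only past a critical value.

```python
# Toy sketch of threshold-driven condensate formation. All numbers and
# the amplification rule are illustrative assumptions, not the study's model.

def local_concentration(n_binding_sites, weak_affinity, baseline=1.0):
    # Each binding site recruits transcription factors; weak interactions
    # among recruited factors amplify the local concentration cooperatively.
    return baseline * n_binding_sites * (1.0 + weak_affinity) ** n_binding_sites

CRITICAL = 50.0  # assumed threshold concentration for phase separation

# A typical enhancer (few sites) stays below threshold; a super enhancer
# (many clustered sites) crosses it and forms a condensate.
for sites in (1, 3, 8):
    c = local_concentration(sites, weak_affinity=0.3)
    print(sites, round(c, 1), c > CRITICAL)
```

The key qualitative point matches the text: condensation is switch-like, so clustering many binding sites together (as super enhancers do) matters far more than adding sites one at a time.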

“Creating these local high concentrations within the crowded environment of the cell enables the right material to be in the right place at the right time to carry out the multiple steps required to activate a gene,” Sabari says. “Our current study begins to tease apart how certain regions of the genome are capable of pulling off this trick.”

These droplets form on a timescale of seconds to minutes, and they blink in and out of existence depending on a cell’s needs.

“It’s an on-demand biochemical factory that cells can form and dissolve, as and when they need it,” Chakraborty says. “When certain signals happen at the right locus on a gene, the condensates form, which concentrates all of the transcription molecules. Transcription happens, and when the cells are done with that task, they get rid of them.”

A new view

Weak cooperative interactions between proteins may also play an important role in evolution, the researchers proposed in a 2018 Proceedings of the National Academy of Sciences paper. The sequences of intrinsically disordered regions of transcription factors need to change only a little to evolve new types of specific functionality. In contrast, evolving new specific functions via “lock-and-key” interactions requires much more significant changes.

“If you think about how biological systems have evolved, they have been able to respond to different conditions without creating new genes. We don’t have any more genes than a fruit fly, yet we’re much more complex in many of our functions,” Sharp says. “The incremental expanding and contracting of these intrinsically disordered domains could explain a large part of how that evolution happens.”

Similar condensates appear to play a variety of other roles in biological systems, offering a new way to look at how the interior of a cell is organized. Instead of floating through the cytoplasm and randomly bumping into other molecules, proteins involved in processes such as relaying molecular signals may transiently form droplets that help them interact with the right partners.

“This is a very exciting turn in the field of cell biology,” Sharp says. “It is a whole new way of looking at biological systems that is richer and more meaningful.”

Some of the MIT researchers, led by Young, have helped form a company called Dewpoint Therapeutics to develop potential treatments for a wide variety of diseases by exploiting cellular condensates. There is emerging evidence that cancer cells use condensates to control sets of genes that promote cancer, and condensates have also been linked to neurodegenerative disorders such as amyotrophic lateral sclerosis (ALS) and Huntington’s disease.

The research was funded by the National Science Foundation, the National Institutes of Health, and the Koch Institute Support (core) Grant from the National Cancer Institute.

The MIT Press releases a comprehensive report on open-source publishing software

Thu, 08/08/2019 - 9:00am

The MIT Press has announced the release of a comprehensive report on the current state of all available open-source software for publishing. “Mind the Gap,” funded by a grant from The Andrew W. Mellon Foundation, “shed[s] light on the development and deployment of open source publishing technologies in order to aid institutions' and individuals' decision-making and project planning,” according to its introduction. It will be an unparalleled resource for the scholarly publishing community and complements the recently released Mapping the Scholarly Communication Landscape census.

The report authors, led by John Maxwell, associate professor and director of the Publishing Program at Simon Fraser University, catalog 52 open-source online publishing platforms. These are defined as production and hosting systems for scholarly books and journals that meet the survey criteria, described in the report as “available, documented open-source software relevant to scholarly publishing,” as well as others in active development. This research provides the foundation for a thorough analysis of the open publishing ecosystem and the availability, affordances, and current limitations of these platforms and tools.

Open-source online publishing platforms have proliferated in the last decade, but the report finds that they are often too small, too siloed, and too niche to have much impact beyond their host organization or institution. This leaves them vulnerable to shifts in organizational priorities and to external funding sources that prioritize new projects over the maintenance and improvement of existing ones. This fractured ecosystem is difficult to navigate, and the report concludes that if open publishing is to become a durable alternative to complex and costly proprietary services, it must grapple with the dual challenges of siloed development and organization of the community-owned ecosystem itself.

“What are the forces — and organizations — that serve the larger community, that mediate between individual projects, between projects and use cases, and between projects and resources?” asks the report. “Neither a chaotic plurality of disparate projects nor an efficiency-driven, enforced standard is itself desirable, but mediating between these two will require broad agreement about high-level goals, governance, and funding priorities — and perhaps some agency for integration/mediation.”

“John Maxwell and his team have done a tremendous job collecting and analyzing data that confirm that open publishing is at a pivotal crossroads,” says Amy Brand, director of the MIT Press. “It is imperative that the scholarly publishing community come together to find new ways to fund and incentivize collaboration and adoption if we want these projects to succeed. I look forward to the discussions that will emerge from these findings.”

“We found that even though platform leaders and developers recognize that collaboration, standardization, and even common code layers can provide considerable benefit to project ambitions, functionality, and sustainability, the funding and infrastructure supporting open publishing projects discourages these activities,” explains Maxwell. “If the goal is to build a viable alternative to proprietary publishing models, then open publishing needs new infrastructure that incentivizes sustainability, cooperation, collaboration, and integration.”

Readers are invited to read, comment, and annotate “Mind the Gap” on the PubPub platform: mindthegap.pubpub.org

Does cable news shape your views?

Wed, 08/07/2019 - 11:59pm

It’s a classic question in contemporary politics: Does partisan news media coverage shape people’s ideologies? Or do people decide to consume political media that is already aligned with their beliefs?

A new study led by MIT political scientists tackles this issue head-on and arrives at a nuanced conclusion: While partisan media does indeed have “a strong persuasive impact” on political attitudes, as the researchers write in a newly published paper, news media exposure has a bigger impact on people without strongly held preferences for partisan media than it does for people who seek out partisan media outlets.

In short, certain kinds of political media affect a cross-section of viewers in varying manners, and to varying degrees — so while the influence of partisan news is real, it also has its limits.

“Different populations are going to respond to partisan media in different ways,” says Adam Berinsky, the Mitsui Professor of Political Science and director of the Political Experiments Research Lab (PERL) at MIT, and a co-author of the study.

“Political persuasion is hard,” Berinsky adds. “If it were easy, the world would already look a lot different.”

The paper, “Persuading the Enemy: Estimating the Persuasive Effects of Partisan Media with the Preference-Incorporating Choice and Assignment Design,” is now available in advance online form from the American Political Science Review.

In addition to Berinsky, the authors are Justin de Benedictis-Kessner PhD ’17, an assistant professor of political science at Boston University; Mathew A. Baum, a professor at the Harvard Kennedy School; and Teppei Yamamoto, an associate professor in MIT’s Department of Political Science.

Breaking down the problem

A substantial political science literature has debated the question of media influence; some scholars have contended that partisan media significantly shapes public opinion, but others have argued that “selective exposure,” in which people watch what they already agree with, is predominant. 

“It’s a really tricky problem,” Berinsky says. “How do you disentangle these things?”

The new research aims to do that, in part, by disaggregating the viewing public. The study consists of a series of experiments and surveys analyzing the responses of smaller subgroups, which were divided according to media consumption preferences, ideology, and more.

That allows the researchers to tease apart the cause-and-effect issues surrounding media consumption by looking more specifically at the impact of media on people with different ideologies and different levels of willingness to view media. The researchers call this approach the Preference-Incorporating Choice and Assignment design, or PICA.

For instance, one experiment within the study gave participants the option of reading web posts from either the conservative Fox News channel; MSNBC, which has several shows leaning in a significantly more liberal-left direction; or the Food Network. Other participants were assigned to watch one of the three.

By examining viewer responses to the content, the scholars found that people who elected to read materials from partisan news channels were less influenced by the content. By contrast, participants who gravitated to the Food Network but were assigned to watch cable news were more influenced by the content.

How big is the effect? Quantitatively, the researchers found, a single exposure to partisan media can change the views of relatively nonpolitical citizens by an amount equal to one-third of the average ideological gap that exists between partisans on the right and left sides of the political spectrum.

Thus, the influence of cable news depends on who it is reaching. “People do respond differently based on their preferences,” Berinsky says.

And while the impact of partisan cable news on people who elect to watch it is smaller, it does exist, the researchers found. For instance, in another of the study’s experiments, the researchers tested cable news’ effects on viewers’ beliefs about marijuana legislation. Even among regular cable-news viewers, partisan content influenced people’s views.

Overall, Yamamoto states, the PICA method is novel because it “allows us to make inferences about what is never [otherwise] directly observable,” that is, the impact of partisan media on people who would normally choose not to consume it.  

“Most people just don’t want news”

To put the findings in the context of daily news viewership in the U.S., consider the recent congressional hearings in which special counsel Robert Mueller testified about his presidential investigation. Fox News led the cable ratings with an average of 3 million viewers during most of the day, while MSNBC had an average of 2.4 million viewers. Overall, 13 million people watched. But the Super Bowl, for example, regularly pulls in around 100 million viewers.

“Most people just don’t want to be exposed to political news,” Berinsky notes. “These are not bad people or bad citizens. In theory, a democracy is working well when you can ignore politics.”

One implication of the larger lack of interest in politics, consequently, is that any audience gains that partisan media outlets experience can produce relatively greater influence — since that growth would apply to formerly irregular consumers of news, who may be more easily influenced. Again, though, such audience gains are likely to be limited, due to the reluctance of most Americans to consume partisan media.

“We only learned those people are persuadable because we made them watch the news,” Berinsky says.

Other scholars in the field say the paper is a valuable addition to the literature on media influence. Kevin Arceneaux, the Thomas J. Freaney, Jr. Professor of Political Science and director of the Behavioral Foundations Lab at Temple University, says the study “represents an important methodological leap forward in the study of media effects.”

Arceneaux says the researchers “convincingly demonstrate that partisan news media have the largest effects among individuals who tend to avoid consuming news,” and suggests some possible implications pertaining to the larger media landscape.

For people who do follow politics, he suggests, having many news options available may “blunt the persuasive and polarizing effects of partisan news media”; at the same time, social media could be “an important source of polarization” by introducing some people to news. Arceneaux also notes that further research on the effects of “counterattitudinal” partisan news — content that argues against the beliefs of consumers — would shed more light on the dynamics of media influence.

The study was supported by a National Science Foundation grant and the Political Experiments Research Lab at MIT; Berinsky’s contribution was partly supported by a Joan Shorenstein Fellowship.

Air travel in academia

Wed, 08/07/2019 - 12:45pm

Our planet’s warming climate presents an imminent and catastrophic challenge that will have far-reaching economic, social, and political ramifications. As residents of a wealthy, developed nation, we contribute more to climate change than the average global citizen. At MIT, as globally connected citizens with many opportunities for work- and research-related air travel, many community members contribute more to climate change than the average American.

For many individuals at the Media Lab, who travel around the world to collaborate on research projects, present at conferences, and lead workshops, research-related air travel represents a huge proportion of their annual greenhouse-gas emissions. For example, a single economy-class seat on a flight from Boston, Massachusetts, to Los Angeles, California, is responsible for the same carbon emissions as 110 days of driving a car. Several labbers wanted to do more to educate the Media Lab community about the impact of our collective air travel and improve the lab’s sustainability.

While the best way to reduce our carbon footprint would be to take fewer airplane flights, this solution isn’t always possible or desirable given the research opportunities that require air travel. Instead, research assistants Juliana Cherston, Natasha Jaques, and Caroline Jaffe decided to start a pilot program through which the Media Lab will buy high-quality carbon offsets to reduce the climate impact of the lab’s collective air travel. The program's website was designed and engineered by Craig Ferguson.

Though carbon-offset programs have been criticized in the past for giving people an excuse for irresponsible climate behavior, carbon-offset verification has improved drastically in the past decade. When it is infeasible to reduce overall air travel mileage, the purchase of high-quality, verified carbon offsets will fund projects that produce renewable energy and avoid future carbon emissions. As part of the pilot, the lab plans to buy carbon offsets through Gold Standard, a certified offset provider that verifies that its offset projects, like distributing clean cooking stoves, investing in wind power plants, and regenerating forests, both reduce carbon emissions and meet the United Nations' Sustainable Development Goals.

During the six-month pilot program, the project leaders are asking members of the Media Lab community to log their lab-related air miles through a simple web interface. At the end of each month they will tally the air miles traveled by the community, calculate the carbon emissions associated with those flights, and purchase offsets through Gold Standard to cover the impact of those flights. The team hopes the program will spark a discussion about climate behavior while contributing to a global model of sustainability.
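The monthly tally described above amounts to a simple conversion from logged miles to tonnes of CO2 and then to an offset purchase. The sketch below illustrates the arithmetic; the emissions factor and offset price are illustrative assumptions, not figures from the Media Lab program or Gold Standard.

```python
# Hypothetical sketch of the monthly offset tally. The per-mile
# emissions factor and offset price below are assumptions for
# illustration only.

KG_CO2_PER_MILE = 0.2   # assumed average per passenger-mile, economy class
USD_PER_TONNE = 10.0    # assumed price of one verified offset (1 tonne CO2)

def monthly_offset_cost(logged_miles):
    """Convert a month of logged air miles into tonnes of CO2 and an offset cost."""
    total_kg = sum(logged_miles) * KG_CO2_PER_MILE
    tonnes = total_kg / 1000.0
    return tonnes, tonnes * USD_PER_TONNE

# Example: three logged round trips, in miles
tonnes, cost = monthly_offset_cost([5236, 1700, 10800])
print(f"{tonnes:.2f} tonnes CO2 -> ${cost:.2f} in offsets")
```

Real programs would use distance- and class-dependent emissions factors (short-haul flights emit more per mile, and premium cabins take up more space per seat), which is one reason verified providers do this calculation themselves.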

While putting together the pilot program, the organizing team members ran into a few surprising data and design issues. First, they learned that gathering data — and knowing which data to collect — was trickier than expected. What exactly counts as “lab-related” travel, and is there some centralized system that tracks the lab’s air mileage? It turns out that no such system exists. While MIT maintains careful financial accounting, there hasn’t been a reason to specifically track mileage before, and the ability to do so is not built into the Institute’s accounting systems.

The team also wrestled with interesting questions around user participation. While they wanted to encourage as many people as possible to participate in order to collect the most accurate travel data, they also didn’t want to incentivize people to travel more than they do already. And they didn’t want people to abdicate a sense of responsibility by knowing their travel was being offset. In the process of putting together this pilot, the team learned of other groups at MIT and at other universities who are developing carbon-offset programs. In other cases, offset programs are top-down: Offsets are automatically purchased through finance or logistics channels. These programs don’t have to deal with user-participation challenges and likely have more accurate data totals, but they also miss the opportunity to engage the community in a substantive conversation around air travel emissions.

After thinking carefully about goals for the project, the team decided that soliciting travel data from the community would do the most to raise awareness about the issue — and it was also a cheap and easy way to kick off a pilot. After launching the pilot several weeks ago, the team has received a few dozen messages communicating enthusiasm, asking questions, and raising concerns. They are planning to send monthly update emails to the Media Lab community, and host several discussion groups at the end of the pilot to evaluate the program and figure out what to do next. Through this pilot, the team hopes to learn about what makes an effective carbon-offsets program and pass this knowledge on to groups at MIT and other schools who are trying to implement university-wide offset programs.

Read more at offset.media.mit.edu (and log your air miles if you’re at the Media Lab). When the pilot is complete, the team will publish a follow-up to share its findings.

A version of this article was previously published by the MIT Media Lab.

3Q: Jeremy Gregory on measuring the benefits of hazard resilience

Wed, 08/07/2019 - 12:30pm

According to the National Oceanic and Atmospheric Administration (NOAA), the combined cost of natural disasters in the United States was $91 billion in 2018. The year before, natural disasters inflicted even greater damage — $306.2 billion. Traditionally, investment in mitigating these damages has gone toward disaster response. While important, disaster response is only one part of disaster mitigation. By putting more resources into disaster readiness, communities can reduce the time it takes to recover from a disaster while decreasing loss of life and damage costs. Experts refer to this preemptive approach as resilience.

Resilience entails a variety of actions. In the case of individual buildings, it can be as straightforward as increasing the nail size in roof panels, using thicker windows, and increasing the resistance of roof shingles. On a broader scale, it involves predicting vulnerabilities in a community and preparing for surge pricing and other economic consequences associated with disasters.

MIT Concrete Sustainability Hub Executive Director Jeremy Gregory weighs in on why resilience hasn’t been widely adopted in the United States and what can be done to change that.

Q: What is resilience in the context of disaster mitigation?

A: Resilience is how one responds to a change, usually in the context of some type of disaster, whether natural or manmade. Resilience has three components: How significant is the damage from the disaster? How long does it take to recover? And what is the level of recovery after a certain amount of time?

It’s important to invest in resilience because we can mitigate significant expenses and loss of life before disasters occur. If we build more resiliently in the first place, then we don’t end up spending as much on the response to a disaster, and communities can become operational again more quickly.

Generally, building construction is not particularly resilient. That’s primarily because the incentives aren’t aligned for creating resilient construction. For example, the Federal Emergency Management Agency, which handles disaster response, invests significantly more in post-disaster mitigation efforts than it does in pre-disaster mitigation efforts — the funds are an order of magnitude greater for the former. Part of that could be that we’re relying on an agency that’s primarily focused on emergency response to help us prepare for avoiding an emergency response. But primarily, that’s because when buildings are purchased, we don’t have information on the resiliency of the building.

Q: What is needed to make resilience more widely adopted?

A: Essentially, we need a robust approach for quantifying the benefits of resilience for a diverse range of contexts. For a lot of buildings, the construction decisions are not made in consultation with the ultimate owner of the building. A developer has to make decisions based on what they think the owner will value. And right now, owners don’t communicate that they value resilience. I think a big part of that is that they don’t have enough quantitative information about why one building is more resilient than another.

So, for example, when it comes to the fuel economy of our automobiles, we now have a consistent way to measure that fuel economy and communicate fuel consumption costs over the life cycle of the vehicle. Or similarly, we have a way of measuring the energy consumption of appliances that we buy and quantifying those costs throughout the product life. We currently don’t have a robust system for quantifying the resilience of a building and how that will translate into costs associated with repairs due to hazards over the lifetime of the building.

Q: Is building resilient expensive?

A: Building resilient does not have to be significantly more expensive than conventional construction. Our research has shown that more resilient construction can cost less than 10 percent more than conventional construction. But those increased initial costs are offset by lower expenses associated with hazard repairs over the lifetime of the building. So, in some of the cases we looked at in residential construction, the payback periods for the more hazard-resistant construction were five years or less in areas prone to hurricane damage. Our other research on the break-even mitigation percentage has shown that, in some of the most hurricane-prone areas, you can spend nearly 20 percent more on the initial investment of the building and break even on your expenses over a 30-year period, including from the damages due to hazards, compared to a conventional building that will sustain more damage.
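The payback logic Gregory describes can be reduced to simple arithmetic: the upfront premium for resilient construction divided by the annual repair costs it avoids. The sketch below uses hypothetical dollar figures, not numbers from the Concrete Sustainability Hub studies.

```python
# Hypothetical payback calculation for hazard-resistant construction.
# All figures are illustrative assumptions, not study results.

def payback_years(base_cost, premium_pct, annual_avoided_repairs):
    """Years until avoided hazard-repair costs cover the upfront premium."""
    premium = base_cost * premium_pct / 100.0
    return premium / annual_avoided_repairs

# A $300,000 home built with a 10% resilience premium, avoiding an
# expected $7,500/year in hurricane-related repairs:
print(payback_years(300_000, 10, 7_500))  # 4.0 years
```

A fuller break-even analysis would discount future repair savings and model hazard damage probabilistically rather than as a fixed annual cost, but the basic tradeoff is the one shown here.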

It’s important for owners to know how significant these costs are and what the life-cycle benefits are for more hazard-resistant construction. Once developers know that homeowners value that information, that will create more market demand for hazard-resistant construction and ultimately lead to the development of safer and more resilient communities.

A similar shift has occurred in the demand for green buildings, and that’s primarily due to rating systems like LEED [Leadership in Energy and Environmental Design]: developers now construct buildings with green rating systems because they know there is a market premium for those buildings, since owners value them. We need to create a similar kind of demand for resilient construction.

There are several resilience rating systems already in place. The Insurance Institute for Business and Home Safety, for example, has developed the Fortified rating system, which informs homeowners and builders about hazard risks and ranks building designs according to certain levels of protection. The U.S. Resiliency Council’s Building Rating System is another model that offers four rating levels and currently focuses primarily on earthquakes. Additionally, there is the RELi rating by the U.S. Green Building Council — the same organization that runs the LEED ratings. These are all good efforts to communicate resilient construction, but there are also opportunities to incorporate more quantitative estimates of resilience into the rating systems.

The rise of these kinds of resilience rating systems is particularly timely since the annual cost of hazard-induced damage is expected to increase over the next century due to climate change and development in hazard-prone areas. But with new standards for quantifying resilience, we can motivate hazard-resistant construction that protects communities and mitigates the consequences of climate change.

Study measures how fast humans react to road hazards

Wed, 08/07/2019 - 11:28am

Imagine you’re sitting in the driver’s seat of an autonomous car, cruising along a highway and staring down at your smartphone. Suddenly, the car detects a moose charging out of the woods and alerts you to take the wheel. Once you look back at the road, how much time will you need to safely avoid the collision?

MIT researchers have found an answer in a new study that shows humans need about 390 to 600 milliseconds to detect and react to road hazards, given only a single glance at the road — with younger drivers detecting hazards nearly twice as fast as older drivers. The findings could help developers of autonomous cars ensure they are allowing people enough time to safely take the controls and steer clear of unexpected hazards.

Previous studies have examined hazard response times while people kept their eyes on the road and actively searched for hazards in videos. In this new study, recently published in the Journal of Experimental Psychology: General, the researchers examined how quickly drivers can recognize a road hazard if they’ve just looked back at the road. That’s a more realistic scenario for the coming age of semiautonomous cars that require human intervention and may unexpectedly hand over control to human drivers when facing an imminent hazard.

“You’re looking away from the road, and when you look back, you have no idea what’s going on around you at first glance,” says lead author Benjamin Wolfe, a postdoc in the Computer Science and Artificial Intelligence Laboratory (CSAIL). “We wanted to know how long it takes you to say, ‘A moose is walking into the road over there, and if I don’t do something about it, I’m going to take a moose to the face.’”

For their study, the researchers built a unique dataset that includes YouTube dashcam videos of drivers responding to road hazards — such as objects falling off truck beds, moose running into the road, 18-wheelers toppling over, and sheets of ice flying off car roofs — and other videos without road hazards. Participants were shown split-second snippets of the videos, in between blank screens. In one test, they indicated if they detected hazards in the videos. In another test, they indicated if they would react by turning left or right to avoid a hazard.

The results indicate that younger drivers are quicker at both tasks: Older drivers (55 to 69 years old) required 403 milliseconds to detect hazards in videos, and 605 milliseconds to choose how they would avoid the hazard. Younger drivers (20 to 25 years old) only needed 220 milliseconds to detect and 388 milliseconds to choose.

Those age results are important, Wolfe says. When autonomous vehicles are ready to hit the road, they’ll most likely be expensive. “And who is more likely to buy expensive vehicles? Older drivers,” he says. “If you build an autonomous vehicle system around the presumed capabilities of reaction times of young drivers, that doesn’t reflect the time older drivers need. In that case, you’ve made a system that’s unsafe for older drivers.”

Joining Wolfe on the paper are Bobbie Seppelt, Bruce Mehler, and Bryan Reimer of the MIT AgeLab, and Ruth Rosenholtz of the Department of Brain and Cognitive Sciences and CSAIL.

Playing “the worst video game ever”

In the study, 49 participants sat in front of a large screen that closely matched the visual angle and viewing distance for a driver, and watched 200 videos from the Road Hazard Stimuli dataset for each test. They were given a toy wheel, brake, and gas pedals to indicate their responses. “Think of it as the worst video game ever,” Wolfe says.

The dataset includes about 500 eight-second dashcam videos of a variety of road conditions and environments. About half of the videos contain events leading to collisions or near collisions. The other half try to closely match each of those driving conditions, but without any hazards. Each video is annotated at two critical points: the frame when a hazard becomes apparent, and the first frame of the driver’s response, such as braking or swerving.

Before each video, participants were shown a split-second white noise mask. When that mask disappeared, participants saw a snippet of a random video that did or did not contain an imminent hazard. After the video, another mask appeared. Directly following that, participants stepped on the brake if they saw a hazard or the gas if they didn’t. There was then another split-second pause on a black screen before the next mask popped up.

When participants started the experiment, the first video they saw was shown for 750 milliseconds. But the duration changed during each test, depending on the participants’ responses. If a participant responded incorrectly to one video, the next video’s duration would extend slightly. If they responded correctly, it would shorten. In the end, durations ranged from a single frame (33 milliseconds) up to one second. “If they got it wrong, we assumed they didn’t have enough information, so we made the next video longer. If they got it right, we assumed they could do with less information, so made it shorter,” Wolfe says.
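The duration rule Wolfe describes is an adaptive staircase procedure: each incorrect response lengthens the next clip, and each correct response shortens it. A minimal sketch of that logic follows; the one-frame step size is an assumption, since the article specifies only the starting duration (750 ms) and the bounds (33 ms to one second).

```python
import random

# Sketch of the adaptive (staircase) duration rule described above.
# The step size is an assumed one frame per trial; only the starting
# duration and the bounds come from the article.

FRAME_MS = 33    # one frame at ~30 fps, the minimum duration
MAX_MS = 1000    # maximum duration of one second
STEP_MS = 33     # assumed adjustment of one frame per trial

def next_duration(current_ms, was_correct):
    """Shorten the next clip after a correct response, lengthen it otherwise."""
    if was_correct:
        return max(FRAME_MS, current_ms - STEP_MS)
    return min(MAX_MS, current_ms + STEP_MS)

# Simulated run starting from the 750 ms first trial
duration = 750
for trial in range(10):
    correct = random.random() < 0.7   # placeholder for a real response
    duration = next_duration(duration, correct)
```

Staircase procedures like this converge on the shortest presentation at which a participant can still respond reliably, which is what lets the study estimate a detection threshold rather than just an accuracy rate.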

The second task used the same setup to record how quickly participants could choose a response to a hazard. For that, the researchers used a subset of videos where they knew the response was to turn left or right. The video stopped, and the mask appeared, on the first frame in which the driver began to react. Then, participants turned the wheel either left or right to indicate where they would steer.

“It’s not enough to say, ‘I know something fell into the road in my lane.’ You need to understand that there’s a shoulder to the right and a car in the next lane that I can’t accelerate into, because I’ll have a collision,” Wolfe says.

More time needed

The MIT study didn’t record how long it actually takes people to, say, physically look up from their phones or turn a wheel. Instead, it showed people need up to 600 milliseconds to just detect and react to a hazard, while having no context about the environment.

Wolfe thinks that’s concerning for autonomous vehicles, since they may not give humans adequate time to respond, especially under panic conditions. Other studies, for instance, have found that it takes people who are driving normally, with their eyes on the road, about 1.5 seconds to physically avoid road hazards, starting from initial detection.

Driverless cars will already require a couple hundred milliseconds to alert a driver to a hazard, Wolfe says. “That already bites into the 1.5 seconds,” he says. “If you look up from your phone, it may take an additional few hundred milliseconds to move your eyes and head. That doesn’t even get into time it’ll take to reassert control and brake or steer. Then, it starts to get really worrying.”

Next, the researchers are studying how well peripheral vision helps in detecting hazards. Participants will be asked to stare at a blank part of the screen — indicating where a smartphone may be mounted on a windshield — and similarly pump the brakes when they notice a road hazard.

The work is sponsored, in part, by the Toyota Research Institute.  
