MIT Latest News
In 2016, synthetic biologists reconstructed horsepox, a possibly extinct virus, using mail-order DNA for around $100,000. The experiment was strictly for research purposes, and the virus itself is harmless to humans. But the published results, including the methodology, raised concerns that a nefarious actor, given appropriate resources, could engineer a pandemic. In an op-ed published today in PLOS Pathogens, Media Lab Professor Kevin Esvelt, who develops and studies gene-editing techniques, argues for tighter biosecurity and greater research transparency to keep such “information hazards” — published information that could be used to cause harm — in check. Esvelt spoke with MIT News about his ideas.
Q: What are information hazards, and why are they an important topic in synthetic biology?
A: Our society is not at ease with this notion that some information is hazardous, but it unfortunately happens to be true. No one believes the blueprints for nuclear weapons should be public, but we do collectively believe that the genome sequences for viruses should be public. This was not a problem until DNA synthesis got really good. The current system for regulating dangerous biological agents is bypassed by DNA synthesis. DNA synthesis is becoming accessible to a wide variety of people, and the instructions for doing nasty things are freely available online.
In the horsepox study, for instance, the information hazard is partly in the paper and the methods they described. But it’s also in the media covering it and highlighting that something bad can be done. And this is worsened by the people who are alarmed, because we talk to journalists about the potential harm, and that just feeds into it. As critics of these things, we are spreading information hazards too.
Part of the solution is just acknowledging that openness of information has costs, and taking steps to minimize those. That means raising awareness that information hazards exist, and being a little more cautious about talking about, and especially citing, dangerous work. Information hazards are a “tragedy of the commons” problem. Everyone thinks that, if it’s already out there, one more citation isn’t going to hurt. But everyone thinks that way. It just keeps on building until it’s on Wikipedia.
Q: You say one issue with synthetic biology is screening DNA for potentially harmful sequences. How can cryptography help promote a market of “clean” DNA?
A: We really need to do something about the ease of DNA synthesis and the accessibility of potential pandemic pathogens. The obvious solution is to get some kind of screening implemented for all DNA synthesis. The International Gene Synthesis Consortium (IGSC) was set up by industry leaders in DNA synthesis in the wake of the anthrax attacks. To be a member, a company must demonstrate that it screens its orders, but member companies cover only 80 percent of the commercial market and none of the synthesis facilities within large firms. And there is no external way to verify that IGSC companies are actually doing the screening, or that they screen for the right things.
We need a more centralized system, where every DNA synthesis order in the world is automatically checked and approved for synthesis only if no harmful sequences are found in it. This is a cryptography problem.
On one hand, you have trade secrets, because firms making DNA don’t want others to know what they’re making. On the other hand, you have a database of hazards that must be useless if stolen. You want to encrypt orders, send them to a centralized database, and then learn whether they are safe or not. Then you need a system for letting people add things to the database, which can be done privately. This is totally achievable with modern cryptography. You can use what’s known as hashes [one-way functions that convert an input of letters and numbers into a fixed-length digest] or use a newer method, fully homomorphic encryption, which lets you do calculations on encrypted data without ever decrypting it.
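The hash-based idea can be sketched as follows. This is purely illustrative, not the system Esvelt describes: the function names, the 30-base window, and the exact-match rule are assumptions, and a real screener would also have to catch reverse complements, codon substitutions, and near-match variants, which plain exact hashing cannot. The key property it does capture is that the database stores only one-way digests, so stealing it does not reveal the hazardous sequences themselves.

```python
import hashlib

K = 30  # window length in bases (an assumed value for illustration)

def digest(seq: str) -> str:
    """One-way SHA-256 digest of a normalized DNA subsequence."""
    return hashlib.sha256(seq.upper().encode()).hexdigest()

def build_hazard_db(hazard_seqs):
    """Store only digests of every length-K window of each hazard
    sequence, so the database is useless for reconstructing them."""
    db = set()
    for s in hazard_seqs:
        for i in range(len(s) - K + 1):
            db.add(digest(s[i:i + K]))
    return db

def screen_order(order: str, db) -> bool:
    """Return True if the order is clean: no length-K window of the
    order matches any hazard digest in the database."""
    return all(digest(order[i:i + K]) not in db
               for i in range(len(order) - K + 1))
```

An authorized expert could contribute a new hazard by submitting only its window digests, matching the op-ed’s goal of blocking synthesis of a sequence without spreading knowledge of what the sequence is.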
We’re just beginning to work on this challenge now. A point of this PLOS Pathogens op-ed is to lay the groundwork for this system.
In the long term, authorized experts could add hazards to their own databases. That’s the ideal way to deal with information hazards. If I think of a sequence that I’m confident is very dangerous and that people shouldn’t make, ideally I would be able to contribute it to a database, possibly in conjunction with just one other authorized user who concurs. That could make sure nobody else makes that exact sequence, without unduly spreading the hazardous information of its identity and potential nature.
Q: You argue for peer review during earlier research stages. How would that help prevent information hazards?
A: The horsepox study was controversial with regard to whether the benefits outweighed the risks. It’s been said that one benefit was highlighting that viruses can be built from scratch. In oncolytic viral therapy, where you make viruses to kill cancer, [this information] could accelerate research. It’s also been postulated that horsepox might be used to make a better vaccine, but that the researchers couldn’t access a sample. Those may be true. It’s still a clear information hazard. Could that aspect have been avoided?
Ideally, the horsepox study would have been reviewed by other experts, including some who were concerned by its implications and could have pointed out, for example, that the researchers could have demonstrated the approach with a virus that has no harmful relatives — or made horsepox, used it for vaccine development, and then simply not specified that it was made from scratch. Then, you would have had all the research benefits of the study without creating the information hazard. That would have been possible if other experts had been given a chance to look at the research design before the experiments were done.
With the current process, peer review typically happens only at the end of the research. There’s no feedback at the research design phase, which is exactly when peer review would be most useful. This transition requires funders, journals, and governments getting together to change [the process] in small subfields. In fields clearly without information hazards, you might publicly preregister your research plans and invite feedback. In fields that present clear hazards, like synthetic mammalian virology, you’d want the research plans sent to a couple of peer reviewers in the field for evaluation, both for safety and for suggested improvements. A lot of the time there’s a better way to do the experiment than you initially imagined, and if they can point that out at the beginning, then great. I think both models will result in faster science, which we want too.
Universities could start by setting up a special process for early-stage peer review, internally, of gene drive [a genetic engineering technology] and mammalian virology experiments. As a scientist who works in both those fields, I would be happy to participate. The question is: How can we do [synthetic biology] in a way that continues or even accelerates beneficial discoveries while avoiding those with potentially catastrophic consequences?
MIT will follow its landmark 2014 Campus Attitudes on Sexual Assault Survey (CASA) by administering the Association of American Universities (AAU) 2019 Campus Climate Survey to undergraduate and graduate students during the upcoming spring 2019 semester, Chancellor Cynthia Barnhart SM ’85, PhD ’88 announced today.
This will allow the Institute to measure the progress made to combat sexual misconduct in the five years since the first survey; identify and respond to new issues the AAU survey may uncover; and put MIT’s results into the context of national AAU aggregate data. Earlier today, the AAU announced that 33 institutions will be participating in the AAU survey next spring.
“At MIT, we don’t shy away from tough problems, and we care deeply about creating a safe, welcoming, and respectful climate for every member of our community,” said Barnhart. “Our 2014 survey provided us with critical information about how sexual misconduct affects the MIT student community, and it helped us develop data-driven education and prevention programs and policies. With the 2019 survey, we will continue to employ self-evaluation and transparency so that we can enhance our understanding of, and response to, this complex issue; create community dialogue about solutions; and send a very clear message that sexual misconduct has no place on our campus.”
Looking back: CASA survey and MIT’s response
Responding to a request from MIT President L. Rafael Reif that Barnhart make sexual assault prevention a central priority when he appointed her chancellor in February 2014, Barnhart emailed the CASA survey to all MIT students on April 27, 2014 — two days before a White House task force issued guidance that all of the nation’s colleges and universities survey their students on these matters. The CASA survey results were released in October 2014 alongside a series of Institute-wide steps that began a process of comprehensively addressing the findings.
In the three academic years since the CASA survey’s administration, MIT has added new services, expanded education and community outreach initiatives, and updated policies and procedures in order to prevent and respond to sexual misconduct. This work is also aimed at positively influencing attitudes and behaviors so that real changes in culture take hold on MIT’s campus. Examples of different efforts include:
- The Title IX and Violence Prevention Response (VPR) offices have relied on education, prevention, community outreach, and investigatory specialists to educate more people about how to prevent sexual misconduct from happening, and about how to effectively respond when incidents occur.
- In response to a specific CASA survey finding — 63 percent of respondents who reported experiencing unwanted sexual behavior told someone about it; 90 percent of those students sought support from a friend — these offices have also focused on bolstering peer-to-peer support resources and programs.
- Important policy and procedural changes have also been implemented in recent years. In the 2017-18 academic year alone, the following enhancements were introduced:
- A new policy on consensual sexual or romantic relationships in the workplace or academic environment and online training for all faculty and staff went into effect (all new students and faculty and staff were already required to complete online training). These two initiatives were championed by the Institute Committee on Sexual Misconduct Prevention and Response (CSMPR), established by Chancellor Barnhart in 2015.
- The Title IX Office worked with students, faculty, and staff to update MIT’s sexual misconduct policy so that a consistent definition of sexual misconduct applies to all students, faculty, and staff. The new policy also clearly defines important terms and adds more examples and explanations of inappropriate behavior.
- The Title IX Office began to offer a new online reporting form to lower the barriers for seeking help and reporting.
Looking ahead: AAU 2019 Campus Climate Survey
In a November 2017 update to the MIT community, Barnhart cited preliminary indicators that the Institute’s prevention, response, and cultural change efforts are having a positive impact. While acknowledging that more work must be done, she pointed to the fact that more students have been coming forward to seek support or report unwanted sexual behavior; more community members are involved in the work to create a safer, more respectful, and inclusive campus climate; and 2017 survey data that show more students believe their peers treat one another with respect.
The AAU’s 2019 Campus Climate Survey will mark the first time MIT will be able to gauge current campus attitudes about, and experiences with, sexual misconduct as well as holistically evaluate its efforts since the 2014 CASA survey. The Office of the Chancellor is currently working with campus partners across MIT, the AAU, and other participating institutions on the design of the survey instrument. The team is also working alongside MIT student leaders to start thinking about strategies to achieve a high response rate next spring.
In 2015, AAU surveyed more than 150,000 students at 27 universities. According to the AAU, the survey was one of the largest ever conducted and provided participating institutions with insights into student perceptions of campus climate, and helped universities better understand student experiences with sexual assault and misconduct.
Two MIT professors, health care economist Amy Finkelstein and media studies scholar Lisa Parks, have each been awarded a prestigious MacArthur Fellowship.
The prominent award, colloquially known as the “genius grant,” comes in the form of a five-year $625,000 fellowship, which is unrestricted, meaning recipients can use the funding any way they wish. There are 25 such fellowships being awarded in 2018. Alumna Deborah Estrin SM ’83, PhD ’85, a computer scientist at Cornell Tech, is also a new MacArthur Fellow.
“I’m very honored,” says Finkelstein, the John and Jennie S. MacDonald Professor of Economics at MIT, adding that she was surprised when first notified by the MacArthur Foundation.
“I’m extremely grateful to MIT,” says Finkelstein, who has been both a doctoral student and faculty member at the Institute. “I’ve essentially spent my entire intellectual life here.”
Noting that the award is a sign of respect for her branch of economics generally, Finkelstein says she appreciates the “broader attention to the scientific work that health care economists are doing and recognition of the progress we have made as a science.”
Parks says her MacArthur award is “an incredible honor,” and that she is “thrilled to be receiving it as a humanities scholar.” She also notes that the grant will help support a new writing project as well as other research efforts.
“The fellowship will help me to write another book and will be a big boost for the Global Media Technologies and Cultures (GMTaC) Lab that I recently launched at MIT,” Parks says.
Parks, who joined the MIT faculty in 2016 after teaching at the University of California at Santa Barbara, credited the intellectual environment at both places, adding that she was “grateful to my colleagues and students at MIT and UC Santa Barbara, and share this honor with them. They supported me as I tried experimental approaches and ventured off the beaten path.”
Decoding medical costs
Finkelstein’s research has yielded major empirical findings about the cost, value, and use of health care in the U.S. Her studies are known for both their results and their rigorous methodological approach; Finkelstein often uses “natural experiments,” in which certain social policies create two otherwise similar groups of people who differ in, say, their access to medical care. This allows her to study the specific effects of policies and treatments of interest.
One of the best-known research projects of Finkelstein’s career focuses on Oregon’s use of a lottery to expand state access to Medicaid. In a series of papers, Finkelstein and her co-authors found that access to Medicaid helped the poor get more medical treatment and avoid some financial shocks, while actually increasing use of emergency rooms.
Earlier in her career, Finkelstein published an influential 2007 paper detailing the varied effects of the introduction of Medicare in the U.S. in the 1960s. The study showed that Medicare's launch was associated with increases in health care spending and the adoption of new medical technologies, while having positive financial effects on the program's recipients.
Finkelstein has trained her investigative lens on a wide variety of other issues, however. Earlier this year, she published multiple papers showing that serious medical problems subsequently reduce earnings and hurt employment, while increasing personal debt, but do not lead to outright bankruptcy as often as is sometimes claimed.
Finkelstein received her PhD from MIT in 2001 and joined the Institute faculty in 2005. In addition to her professorship in MIT’s Department of Economics, Finkelstein is co-scientific director of J-PAL North America, an MIT-based research center that encourages randomized evaluations of social science questions. In 2012, she received the John Bates Clark Medal, granted by the American Economic Association to the best economist under the age of 40.
Parks is an expert on the cultural effects of space-age technologies, especially satellites. She has written in close detail about the ways new technology has shaped our conception of things as diverse as war zones and the idea of a “global village.” As Parks has said, her work aims to get people “to think of the satellite not only as this technology that’s floating around out there in orbit, but as a machine that plays a structuring role in our everyday lives.”
Parks is the author of the influential 2005 book, “Cultures in Orbit,” and has co-edited five books of essays on technology and culture, including the 2017 volume “Life in the Age of Drone Warfare.”
Parks also has a keen interest in technology and economic inequality; her research has examined topics such as the video content accessible to Aboriginal Australians, who, starting in the 1980s, attempted to gain greater control of satellite television programming in rural Australia.
As the principal investigator for MIT’s Global Media Technologies and Cultures Lab, Parks and MIT graduate students in the lab conduct onsite research about media usage in a range of places, including rural Africa.
Parks received her PhD at the University of Wisconsin before joining the faculty at the University of California at Santa Barbara, and then moving to MIT.
Including Finkelstein and Parks, 23 MIT faculty members and three staff members have won the MacArthur fellowship.
MIT faculty who have won the award over the last decade include computer scientist Regina Barzilay (2017); economist Heidi Williams (2015); computer scientist Dina Katabi and astrophysicist Sara Seager (2013); writer Junot Díaz (2012); physicist Nergis Mavalvala (2010); economist Esther Duflo (2009); and architectural engineer John Ochsendorf and physicist Marin Soljačić (2008).
People sometimes mistakenly think of general anesthesia as just a really deep sleep, but in fact, anesthesia is really four brain states — unconsciousness, amnesia, immobility, and suppression of the body’s damage sensing response, or “nociception.” In a new paper in Anesthesia and Analgesia, MIT neuroscientist and statistician Emery N. Brown and his colleagues argue that by putting nociception at the top of the priority list and taking a principled neuroscientific approach to choosing which drugs to administer, anesthesiologists can use far less medication overall, producing substantial benefits for patients.
“We’ve come up with strategies that allow us to dose the same drugs that are generally used but in different proportions that allow us to achieve an anesthetic state that is much more desirable,” says Brown, the Edward Hood Taplin Professor of Computational Neuroscience and Health Sciences and Technology in the Picower Institute for Learning and Memory at MIT and a practicing anesthesiologist at Massachusetts General Hospital.
In the paper, Brown and co-authors lay out exactly how and where each major anesthetic drug affects the nociceptive circuits of the nervous system. Nociception is the body’s sensing of tissue damage. It is distinct from pain, which is the conscious perception of that damage.
Then the authors show how in four different surgical cases they were able to use neuroscience to guide their choice of a “multimodal” combination of drugs to target nociceptive circuits at several different points. That way they didn’t have to use much of any individual drug. Because reduced arousal is a byproduct of the strategy, they also didn’t have to administer much medicine to ensure unconsciousness, a state they scrupulously monitor in the operating room by watching brainwaves captured by electroencephalography (EEG).
“If you do it this way, you have better control of nociception and you can get the same amount of unconsciousness with less drug,” says Brown, who is also associate director of MIT’s Institute for Medical Engineering and Science and a professor in MIT’s Department of Brain and Cognitive Sciences and at Harvard Medical School. “Patients wake up quicker, and if you carry the multimodal strategy into the postoperative period you have taken into account controlling pain such that you can use few opioids or no opioids.”
Reducing opioids has become a major goal of anesthesiologists in recent years as an epidemic of overdoses has ravaged the United States.
Opioids, after all, are hardly the only option. In the study, Brown and co-authors Kara Pavone and Marusa Naranjo describe how several anesthetic drugs work. Opioids combat nociception in the spinal cord and brainstem by binding to opioid receptors, while decreasing arousal by blockading connections involving the neurotransmitter acetylcholine between the brainstem, thalamus, and cortex. Ketamine, by contrast, targets glutamate receptors both on peripheral nervous system neurons that connect to the spinal cord and on neurons in the brain’s cortex, thereby also decreasing arousal.
In all, they review the distinct antinociceptive actions of six classes of drugs: opioids, ketamine, dexmedetomidine, magnesium, lidocaine, and NSAIDs (like ibuprofen). They also discuss how anesthetic drugs that primarily impact arousal, such as propofol, or mobility, such as cisatracurium, work in the central and peripheral nervous systems.
Putting a principled, nociception-focused approach into practice requires formulating an explicit plan for each patient that will account for pain management before, during, and after surgery and at discharge from the hospital, the authors note. They lay out examples of how they implemented such plans for four patients including a 78-year-old man who had spinal surgery and a 58-year-old woman who had abdominal surgery.
The approaches contrast with more traditional approaches in that they feature a combination of drugs to affect nociception, Brown says, rather than a single drug that primarily affects consciousness.
“Because these [single drug approaches] primarily create unconsciousness and have less effect on nociception, the state of general anesthesia they create has an imbalance that means that patients are profoundly unconscious with less good antinociceptive control,” he says. “If the anesthesiologist is not monitoring the EEG to track unconsciousness and uses no principled way to distinguish antinociception from unconsciousness, patients will have prolonged wake ups. There will also be more intra-op hemodynamic (blood pressure and heart rate) fluctuations.
“Our framework lays the groundwork for two things: a clearer head post-operatively — manifest by quicker recoveries and the fact that less of the anesthetic required for unconsciousness is used — and better, more complete postoperative pain control.”
Ultimately, the authors write, it comes back to a principled focus on the nervous system and how to disrupt nociception comprehensively by intervening at multiple points.
“Understanding these systems can be used to formulate a rational strategy for multimodal general anesthesia management,” they write.
As engineers make strides in the design of wearable, electronically active, and responsive leg braces, arm supports, and full-body suits, collectively known as exoskeletons, researchers at MIT are raising an important question: While these Iron Man-like appendages may amp up a person’s strength, mobility, and endurance, what effect might they have on attention and decision making?
The question is far from trivial, as exoskeletons are currently being designed and tested for use on the battlefield, where U.S. soldiers are expected to perform focused tactical maneuvers while typically carrying 60 to 100 pounds of equipment. Exoskeletons such as electronically adaptive hip, knee, and leg braces could bear a significant portion of a soldier’s load, freeing them up to move faster and with more agility.
But could wearing such bionic add-ons, and adjusting to their movements, take away some of the attention needed for cognitive tasks, such as spotting an enemy, relaying a message, or following a squadron?
The answer, the MIT team found, is yes, at least in some scenarios. In a study that they are presenting this week at the Human Factors and Ergonomics Society Annual Meeting in Philadelphia, the researchers tested volunteers, who were either active-duty members of the military or participants in a Reserve Officer Training Corps (ROTC) unit, as they marched through an obstacle course while wearing a commercially available knee exoskeleton and carrying a backpack weighing up to 80 pounds. Seven of the 12 subjects had slower reaction times in a visual task when they completed the course with the exoskeleton on and powered, compared to when they finished it without the exoskeleton.
The researchers also found that the soldiers, when asked to follow a leader at a certain distance, were less able to keep a constant distance while wearing the exoskeleton.
The results, though preliminary, suggest that engineers designing exoskeletons for military and other uses may want to consider a device’s “cognitive fit” — how much of a user’s attention or decision making the device could potentially divert even while assisting them physically.
“In a military exoskeleton, soldiers are supposed to be scanning for enemies in the environment, making sure where other people in their squad are, monitoring a whole variety of things,” says Leia Stirling, an assistant professor in MIT’s Department of Aeronautics and Astronautics and a member of the Institute for Medical Engineering and Science. “You don’t want them to have to focus on how they’re stepping because of the exoskeleton. That’s why I was interested in how much attention these technologies require.”
Stirling’s co-authors on the paper include researchers at MIT, Draper, and the University of Massachusetts at Lowell.
Follow the leader
To investigate exoskeletons’ effect on a user’s attention, the team set up an obstacle course at UMass Lowell’s NERVE Center, a facility that normally tests and evaluates robots over various physical courses. Stirling and her colleagues modified an existing obstacle course to include cross-slopes and short walls to step over. Lights at both ends of the obstacle course were set up to intermittently blink on and off.
The team enlisted 12 male subjects and trained them over a period of three days. During the first day, they were each custom-fit with, and trained to use, a commercially available knee exoskeleton — a rigid, powered knee brace designed to help extend a user’s leg and increase endurance while, for example, in climbing over obstacles and walking over long distances.
Over the following two days, the subjects were instructed to navigate the obstacle course while following a researcher, posing as a squadron member. As they made their way through the course, the subjects performed several cognitive tasks. The first was a visual task, in which the subjects had to press a button on a mock rifle as soon as they perceived a light go on. The second was a pair of audio tasks, in which the subjects had to respond to a radio call check with a simple “Roger, over,” as well as a more complicated task, where they had to listen to three leaders reporting different numbers of enemies and then report the total number over the radio. The third was a follow-along task, where the subjects had to maintain a certain distance from the squadron leader as they navigated the course.
Overall, Stirling found that for the visual task, seven of the 12 subjects wearing the powered exoskeleton reacted significantly more slowly and tended to miss light signals completely, compared with their performance when not wearing the device. While wearing the powered knee-brace, the subjects also had a harder time maintaining the specified distance when following the leader.
Going forward, Stirling plans to investigate the importance of reaction times while wearing an exoskeleton in various contexts.
“For a military soldier, if they don’t detect an enemy over half a second, what does that mean? Does that put their life at risk, or is that OK?” Stirling says. “We need to better understand what these operationally relevant differences are. A reaction time of half a second for me walking down a sidewalk is probably not a big deal. But it could be a big deal in a military environment.”
Interestingly, the team identified a few users who were unfazed by the addition of an exoskeleton, and who performed just as well in the visual, audio, and follow-along tasks.
“In this study, we see some people have no deficit in their attention. But some people do, and we’re not sure why some people are good exoskeleton users and some have more difficulty,” Stirling says. “Now we’re starting to investigate what makes people good users versus less adept users. Is this driven from a motor pathway, or a perception pathway, or a cognitive pathway?”
Stirling’s group is working toward a better understanding of the way humans adapt and react to exoskeletons and other wearable technologies, such as next-generation spacesuits.
“We’re looking at the fluency between what the system is doing and what the human is doing,” Stirling says. “If the human wants to speed up or slow down, can this system be designed to appropriately move so the human is not fighting the system, and vice versa?”
Beyond military and space applications, Stirling says that if the connection between the human and the machine can be made more fluid, requiring less of a user’s immediate attention, then exoskeletons may find a much wider, commercial appeal.
“Maybe you want to be able to climb that mountain, or go on a longer hike, or you may be older and want to run around with your grandkids,” Stirling says.
“How can you design exoskeletons so people can reduce their own injury risk and extend their capability, their activities of daily living? These systems are really exciting. We just want to be cognizant of the different risks that occur when you bring something into a natural environment.”
This research was supported, in part, by Draper.
When moving through a crowd to reach some end goal, humans can usually navigate the space safely without thinking too much. They can learn from the behavior of others and note any obstacles to avoid. Robots, on the other hand, struggle with such navigational concepts.
MIT researchers have now devised a way to help robots navigate environments more like humans do. Their novel motion-planning model lets robots determine how to reach a goal by exploring the environment, observing other agents, and exploiting what they’ve learned before in similar situations. A paper describing the model was presented at this week’s IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
Popular motion-planning algorithms create a tree of possible decisions that branches out until it finds good paths for navigation. A robot that needs to cross a room to reach a door, for instance, will create a step-by-step search tree of possible movements and then execute the best path to the door, considering various constraints. One drawback, however, is that these algorithms rarely learn: Robots can’t leverage information about how they or other agents acted previously in similar environments.
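The tree-building planners described above can be sketched as a minimal rapidly-exploring random tree (RRT) in two dimensions. This is a generic illustration of sampling-based planning, not the authors’ model; the 10x10 workspace, step size, and goal bias are assumed values, and the `collision_free` check is left to the caller.

```python
import math
import random

def rrt(start, goal, collision_free, iters=2000, step=0.5, goal_tol=0.5):
    """Minimal RRT: grow a tree from `start` by random sampling until a
    node lands within `goal_tol` of `goal`. Returns a path or None."""
    nodes = [start]
    parent = {0: None}  # index of each node's parent in the tree
    for _ in range(iters):
        # with 10% probability, sample the goal itself (goal bias)
        if random.random() < 0.1:
            sample = goal
        else:
            sample = (random.uniform(0, 10), random.uniform(0, 10))
        # extend from the nearest tree node a short step toward the sample
        i = min(range(len(nodes)), key=lambda j: math.dist(nodes[j], sample))
        d = math.dist(nodes[i], sample)
        if d == 0:
            continue
        t = min(1.0, step / d)
        new = (nodes[i][0] + t * (sample[0] - nodes[i][0]),
               nodes[i][1] + t * (sample[1] - nodes[i][1]))
        if not collision_free(nodes[i], new):
            continue
        parent[len(nodes)] = i
        nodes.append(new)
        if math.dist(new, goal) < goal_tol:
            # walk back up the tree to recover the path
            path, k = [], len(nodes) - 1
            while k is not None:
                path.append(nodes[k])
                k = parent[k]
            return path[::-1]
    return None
```

Note that every call starts from an empty tree: nothing learned from one query carries over to the next, which is exactly the limitation the researchers’ neural-network guidance is meant to address.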
“Just like when playing chess, these decisions branch out until [the robots] find a good way to navigate. But unlike chess players, [the robots] explore what the future looks like without learning much about their environment and other agents,” says co-author Andrei Barbu, a researcher at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Center for Brains, Minds, and Machines (CBMM) within MIT’s McGovern Institute. “The thousandth time they go through the same crowd is as complicated as the first time. They’re always exploring, rarely observing, and never using what’s happened in the past.”
The researchers developed a model that combines a planning algorithm with a neural network that learns to recognize paths that could lead to the best outcome, and uses that knowledge to guide the robot’s movement in an environment.
In their paper, “Deep sequential models for sampling-based planning,” the researchers demonstrate the advantages of their model in two settings: navigating through challenging rooms with traps and narrow passages, and navigating areas while avoiding collisions with other agents. A promising real-world application is helping autonomous cars navigate intersections, where they have to quickly evaluate what others will do before merging into traffic. The researchers are currently pursuing such applications through the Toyota-CSAIL Joint Research Center.
“When humans interact with the world, we see an object we’ve interacted with before, or are in some location we’ve been to before, so we know how we’re going to act,” says Yen-Ling Kuo, a PhD student in CSAIL and first author on the paper. “The idea behind this work is to add to the search space a machine-learning model that knows from past experience how to make planning more efficient.”
Boris Katz, a principal research scientist and head of the InfoLab Group at CSAIL, is also a co-author on the paper.
Trading off exploration and exploitation
Traditional motion planners explore an environment by rapidly expanding a tree of decisions that eventually blankets an entire space. The robot then looks at the tree to find a way to reach the goal, such as a door. The researchers’ model, however, offers “a tradeoff between exploring the world and exploiting past knowledge,” Kuo says.
The learning process starts with a few examples. A robot using the model is trained on a few ways to navigate similar environments. The neural network learns what makes these examples succeed by interpreting the environment around the robot, such as the shape of the walls, the actions of other agents, and features of the goals. In short, the model “learns that when you’re stuck in an environment, and you see a doorway, it’s probably a good idea to go through the door to get out,” Barbu says.
The model combines the exploration behavior from earlier methods with this learned information. The underlying planner, called RRT*, was developed by MIT professors Sertac Karaman and Emilio Frazzoli. (It’s a variant of a widely used motion-planning algorithm known as Rapidly-exploring Random Trees, or RRT.) The planner creates a search tree while the neural network mirrors each step and makes probabilistic predictions about where the robot should go next. When the network makes a prediction with high confidence, based on learned information, it guides the robot on a new path. If the network doesn’t have high confidence, it lets the robot explore the environment instead, like a traditional planner.
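The confidence-based switch described above can be sketched as follows. This is a minimal illustration of the idea, not the paper's model: `guided_sample` and `toy_policy` are hypothetical names, and the toy policy simply stands in for the trained neural network:

```python
import random

def guided_sample(state, policy, sample_space, rng, threshold=0.8):
    """Choose the planner's next expansion target. `policy` stands in for
    the trained network: given the current state it returns
    (predicted_target, confidence). When confidence is high, the
    prediction guides the tree; otherwise the planner falls back to
    uniform random sampling, as a traditional RRT-style planner would."""
    target, confidence = policy(state)
    if confidence >= threshold:
        return target, "exploit"
    lo, hi = sample_space
    return (rng.uniform(lo, hi), rng.uniform(lo, hi)), "explore"

# Toy stand-in policy: confident only near a "doorway" at (5, 5).
def toy_policy(state):
    near_door = abs(state[0] - 5) < 2 and abs(state[1] - 5) < 2
    return (5.0, 5.0), (0.95 if near_door else 0.3)

rng = random.Random(0)
print(guided_sample((4.5, 5.5), toy_policy, (0.0, 10.0), rng))  # exploits
print(guided_sample((0.0, 0.0), toy_policy, (0.0, 10.0), rng))  # explores
```

The design mirrors the tradeoff Kuo describes: learned knowledge steers the search where it applies, and ordinary exploration takes over everywhere else.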
For example, the researchers demonstrated the model in a simulation known as a “bug trap,” where a 2-D robot must escape from an inner chamber through a central narrow channel and reach a location in a surrounding larger room. Blind alleys on either side of the channel can get robots stuck. In this simulation, the robot was trained on a few examples of how to escape different bug traps. When faced with a new trap, it recognizes features of the trap, escapes, and continues to search for its goal in the larger room. The neural network helps the robot find the exit to the trap, identify the dead ends, and gives the robot a sense of its surroundings so it can quickly find the goal.
Results in the paper are based on the chances that a path is found after some time, the total length of the path that reached a given goal, and how consistent the paths were. In both simulations, the researchers’ model plotted far shorter and more consistent paths than a traditional planner, and did so more quickly.
Working with multiple agents
In another experiment, the researchers trained and tested the model in navigating environments with multiple moving agents, which is a useful test for autonomous cars, especially for navigating intersections and roundabouts. In the simulation, several agents are circling an obstacle. A robot agent must successfully navigate around the other agents, avoid collisions, and reach a goal location, such as an exit on a roundabout.
“Situations like roundabouts are hard, because they require reasoning about how others will respond to your actions, how you will then respond to theirs, what they will do next, and so on,” Barbu says. “You eventually discover your first action was wrong, because later on it will lead to a likely accident. This problem gets exponentially worse the more cars you have to contend with.”
Results indicate that the researchers’ model can capture enough information about the future behavior of the other agents (cars) to cut off the process early, while still making good decisions in navigation. This makes planning more efficient. Moreover, they only needed to train the model on a few examples of roundabouts with only a few cars. “The plans the robots make take into account what the other cars are going to do, as any human would,” Barbu says.
Going through intersections or roundabouts is one of the most challenging scenarios facing autonomous cars. This work might one day let cars learn how humans behave and how to adapt to drivers in different environments, according to the researchers. This is the focus of the Toyota-CSAIL Joint Research Center work.
“Not everybody behaves the same way, but people are very stereotypical. There are people who are shy, people who are aggressive. The model recognizes that quickly and that’s why it can plan efficiently,” Barbu says.
More recently, the researchers have been applying this work to robots with manipulators that face similarly daunting challenges when reaching for objects in ever-changing environments.
Lately the fact-checking world has been in a bit of a crisis. Sites like Politifact and Snopes have traditionally focused on specific claims, which is admirable but tedious; by the time they’ve gotten through verifying or debunking a fact, there’s a good chance it’s already traveled across the globe and back again.
Social media companies have also had mixed results limiting the spread of propaganda and misinformation. Facebook plans to have 20,000 human moderators by the end of the year, and is putting significant resources into developing its own fake-news-detecting algorithms.
Researchers from MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) and the Qatar Computing Research Institute (QCRI) believe that the best approach is to focus not only on individual claims, but on the news sources themselves. Using this tack, they’ve demonstrated a new system that uses machine learning to determine if a source is accurate or politically biased.
“If a website has published fake news before, there’s a good chance they’ll do it again,” says postdoc Ramy Baly, the lead author on a new paper about the system. “By automatically scraping data about these sites, the hope is that our system can help figure out which ones are likely to do it in the first place.”
Baly says the system needs only about 150 articles to reliably detect if a news source can be trusted — meaning that an approach like theirs could be used to help stamp out new fake-news outlets before the stories spread too widely.
The system is a collaboration between computer scientists at MIT CSAIL and QCRI, which is part of the Hamad Bin Khalifa University in Qatar. Researchers first took data from Media Bias/Fact Check (MBFC), a website with human fact-checkers who analyze the accuracy and biases of more than 2,000 news sites; from MSNBC and Fox News; and from low-traffic content farms.
They then fed those data to a machine learning algorithm, and programmed it to classify news sites the same way as MBFC. When given a new news outlet, the system was then 65 percent accurate at detecting whether it has a high, low, or medium level of factuality, and roughly 70 percent accurate at detecting whether it is left-leaning, right-leaning, or moderate.
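The classification step can be sketched at its simplest. The researchers' actual system uses a richer model and many more features; here a hypothetical nearest-centroid classifier stands in, with made-up 2-D feature vectors purely for illustration:

```python
import math

def nearest_centroid(train, sample):
    """Tiny stand-in classifier: label a new outlet's feature vector by
    the closest class centroid. `train` maps label -> list of vectors."""
    centroids = {
        label: [sum(v[i] for v in vecs) / len(vecs) for i in range(len(vecs[0]))]
        for label, vecs in train.items()
    }
    return min(centroids, key=lambda lb: math.dist(centroids[lb], sample))

# Hypothetical 2-D features per outlet, e.g. (emotional-language rate, hedging rate).
train = {
    "high": [(0.1, 0.3), (0.2, 0.4)],
    "medium": [(0.5, 0.25), (0.4, 0.3)],
    "low": [(0.8, 0.1), (0.9, 0.2)],
}
print(nearest_centroid(train, (0.85, 0.15)))  # → low
```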
The team determined that the most reliable ways to detect both fake news and biased reporting were to look at the common linguistic features across the source’s stories, including sentiment, complexity, and structure.
For example, fake-news outlets were found to be more likely to use language that is hyperbolic, subjective, and emotional. In terms of bias, left-leaning outlets were more likely to have language that related to concepts of harm/care and fairness/reciprocity, compared to other qualities such as loyalty, authority, and sanctity. (These qualities represent a popular theory — that there are five major moral foundations — in social psychology.)
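Crude proxies for those style signals can be computed with a few lines of code. This is an illustrative sketch only, assuming a toy emotional-word lexicon; the real system draws on much richer sentiment and subjectivity resources:

```python
import re

# Toy lexicon; the actual system uses far richer linguistic resources.
EMOTIONAL = {"shocking", "outrageous", "unbelievable", "disaster", "amazing"}

def linguistic_features(text):
    """Rough proxies for the signals described above: hyperbole
    (exclamations, all-caps words), subjectivity/emotion (lexicon hits),
    and complexity (average sentence length)."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    n = max(len(words), 1)
    return {
        "exclamation_rate": text.count("!") / max(len(sentences), 1),
        "caps_rate": sum(w.isupper() and len(w) > 1 for w in words) / n,
        "emotional_rate": sum(w.lower() in EMOTIONAL for w in words) / n,
        "avg_sentence_len": n / max(len(sentences), 1),
    }

tabloid = "SHOCKING disaster! Unbelievable scenes! You won't believe it!"
sober = "The committee reviewed the report and scheduled a follow-up meeting."
```

On these two sample sentences, the hyperbolic text scores higher on every emotion-related feature, which is the kind of separation the classifier exploits across many articles from a source.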
Co-author Preslav Nakov, a senior scientist at QCRI, says that the system also found correlations with an outlet’s Wikipedia page, which it assessed for general length (longer pages are more credible) as well as for target words such as “extreme” or “conspiracy theory.” It even found correlations with the text structure of a source’s URLs: Those that had lots of special characters and complicated subdirectories, for example, were associated with less reliable sources.
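The URL cues are straightforward to extract. A minimal sketch, using hypothetical example URLs and an assumed set of "special" characters (the paper's exact feature set is not specified here):

```python
from urllib.parse import urlparse

def url_features(url):
    """Structural cues from a source URL: special-character density and
    subdirectory depth, which the researchers found correlate with
    source reliability."""
    parsed = urlparse(url)
    path = parsed.path
    specials = sum(c in "-_~%$" for c in parsed.netloc + path)
    depth = len([p for p in path.split("/") if p])
    return {"special_chars": specials, "path_depth": depth}

a = url_features("https://www.nytimes.com/section/politics")
b = url_features("http://real-true-news.info/wp/~usr/2018/09/shock%21story")
print(a)
print(b)
```

The second (fabricated) URL scores higher on both features, matching the pattern the article describes for less reliable sources.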
“Since it is much easier to obtain ground truth on sources [than on articles], this method is able to provide direct and accurate predictions regarding the type of content distributed by these sources,” says Sibel Adali, a professor of computer science at Rensselaer Polytechnic Institute who was not involved in the project.
Nakov is quick to caution that the system is still a work in progress, and that, even with improvements in accuracy, it would work best in conjunction with traditional fact-checkers.
“If outlets report differently on a particular topic, a site like Politifact could instantly look at our fake news scores for those outlets to determine how much validity to give to different perspectives,” says Nakov.
Baly and Nakov co-wrote the new paper with MIT Senior Research Scientist James Glass alongside graduate students Dimitar Alexandrov and Georgi Karadzhov of Sofia University. The team will present the work later this month at the 2018 Empirical Methods in Natural Language Processing (EMNLP) conference in Brussels, Belgium.
The researchers also created a new open-source dataset of more than 1,000 news sources, annotated with factuality and bias scores, that is the world’s largest database of its kind. As next steps, the team will explore whether the English-trained system can be adapted to other languages, and whether it can go beyond the traditional left/right divide to capture region-specific biases (like the Muslim world’s division between religious and secular).
“This direction of research can shed light on what untrustworthy websites look like and the kind of content they tend to share, which would be very useful for both web designers and the wider public,” says Andreas Vlachos, a senior lecturer at the University of Cambridge who was not involved in the project.
Nakov says that QCRI also has plans to roll out an app that helps users step out of their political bubbles, responding to specific news items by offering users a collection of articles that span the political spectrum.
“It’s interesting to think about new ways to present the news to people,” says Nakov. “Tools like this could help people give a bit more thought to issues and explore other perspectives that they might not have otherwise considered.”