MIT Latest News
In 2016, synthetic biologists reconstructed horsepox, a possibly extinct virus, using mail-order DNA for around $100,000. The experiment was strictly for research purposes, and the virus itself is harmless to humans. But the published results, including the methodology, raised concerns that a nefarious actor, given appropriate resources, could engineer a pandemic. In an op-ed published today in PLOS Pathogens, Media Lab Professor Kevin Esvelt, who develops and studies gene-editing techniques, argues for tighter biosecurity and greater research transparency to keep such “information hazards” — published information that could be used to cause harm — in check. Esvelt spoke with MIT News about his ideas.
Q: What are information hazards, and why are they an important topic in synthetic biology?
A: Our society is not at ease with this notion that some information is hazardous, but it unfortunately happens to be true. No one believes the blueprints for nuclear weapons should be public, but we do collectively believe that the genome sequences for viruses should be public. This was not a problem until DNA synthesis got really good. The current system for regulating dangerous biological agents is bypassed by DNA synthesis. DNA synthesis is becoming accessible to a wide variety of people, and the instructions for doing nasty things are freely available online.
In the horsepox study, for instance, the information hazard is partly in the paper and the methods they described. But it’s also in the media covering it and highlighting that something bad can be done. And this is worsened by the people who are alarmed, because we talk to journalists about the potential harm, and that just feeds into it. As critics of these things, we are spreading information hazards too.
Part of the solution is just acknowledging that openness of information has costs, and taking steps to minimize those. That means raising awareness that information hazards exist, and being a little more cautious about talking about, and especially citing, dangerous work. Information hazards are a “tragedy of the commons” problem. Everyone thinks that, if it’s already out there, one more citation isn’t going to hurt. But everyone thinks that way. It just keeps on building until it’s on Wikipedia.
Q: You say one issue with synthetic biology is screening DNA for potentially harmful sequences. How can cryptography help promote a market of “clean” DNA?
A: We really need to do something about the ease of DNA synthesis and the accessibility of potential pandemic pathogens. The obvious solution is to get some kind of screening implemented for all DNA synthesis. The International Gene Synthesis Consortium (IGSC) was set up by industry leaders in DNA synthesis in the wake of the anthrax attacks. To be a member, a company needs to demonstrate it screens its orders, but member companies cover only 80 percent of the commercial market and none of the synthesis facilities within large firms. And there is no external way to verify that IGSC companies are actually doing the screening, or that they screen for the right things.
We need a more centralized system, in which every DNA synthesis order in the world is automatically checked and approved for synthesis only if no harmful sequences are found in it. This is a cryptography problem.
On one hand, you have trade secrets, because firms making DNA don’t want others to know what they’re making. On the other hand, you have a database of hazards that must be useless if stolen. You want to encrypt orders, send them to a centralized database, and then learn whether they’re safe. Then you need a system for letting people add things to the database, which can be done privately. This is totally achievable with modern cryptography. You can use what’s known as hashes [functions that convert an input of letters and numbers into a fixed-length scrambled output], or use a newer method called fully homomorphic encryption, which lets you do calculations on encrypted data without ever decrypting it.
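To make the hashing idea concrete, here is a toy sketch: the hazard database stores only hashes of dangerous subsequences, so the database alone does not reveal what the hazards are, and an order is screened by hashing each window of the order and checking for membership. The sequences, window length, and exact-match rule are invented for illustration; a real screening system would also have to catch near-matches and reverse complements, and would need stronger cryptographic protections than plain hashing.

```python
import hashlib

# Toy hazard database: stores only SHA-256 hashes of known hazardous
# subsequences, so the database itself does not spell out the sequences.
HAZARD_HASHES = {
    hashlib.sha256(seq.encode()).hexdigest()
    for seq in ["ATGCGTACGTTAGC", "GGCCTTAAGGCCTA"]  # hypothetical hazards
}

WINDOW = 14  # length of the hazardous subsequences in this toy example


def screen_order(order: str) -> bool:
    """Return True if the order is clean (no window matches a hazard hash)."""
    for i in range(len(order) - WINDOW + 1):
        window_hash = hashlib.sha256(order[i:i + WINDOW].encode()).hexdigest()
        if window_hash in HAZARD_HASHES:
            return False
    return True


print(screen_order("TTTTATGCGTACGTTAGCTTTT"))  # contains a hazard -> False
print(screen_order("TTTTTTTTTTTTTTTTTTTTTT"))  # clean -> True
```

The design choice to store hashes rather than sequences is what keeps the database "useless if stolen" in the exact-match case; handling approximate matches without leaking the hazards is where the heavier cryptography mentioned above comes in.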
We’re just beginning to work on this challenge now. A point of this PLOS Pathogens op-ed is to lay the groundwork for this system.
In the long term, authorized experts can add hazards to their own databases. That’s the ideal way to deal with information hazards. If I think of a sequence that I’m confident is very dangerous and that people shouldn’t make, ideally I would be able to contribute it to a database, possibly in conjunction with just one other authorized user who concurs. That could ensure nobody else makes that exact sequence, without unduly spreading the hazardous information of its identity and potential nature.
Q: You argue for peer review during earlier research stages. How would that help prevent information hazards?
A: The horsepox study was controversial with regard to whether the benefits outweighed the risks. It’s been said that one benefit was highlighting that viruses can be built from scratch. In oncolytic viral therapy, where you make viruses to kill cancer, [this information] could accelerate that research. It’s also been postulated that horsepox might be used to make a better vaccine, but that the researchers couldn’t access a sample. Those may be true. It’s still a clear information hazard. Could that aspect have been avoided?
Ideally, the horsepox study would have been reviewed by other experts, including some who were concerned by its implications and could have pointed out, for example, that you could have used a virus without harmful relatives as the example — or made horsepox, used it for vaccine development, and then just not specified that you made it from scratch. Then, you would have had all the research benefits of the study, without creating the information hazard. That would have been possible insofar as other experts had been given a chance to look at the research design before experiments were done.
With the current process, peer review typically happens only at the end of the research. There’s no feedback at the research design phase at all — yet that is exactly when peer review would be most useful. This transition requires funders, journals, and governments getting together to change [the process] in small subfields. In fields clearly without information hazards, you might publicly preregister your research plans and invite feedback. In fields that present clear hazards, like synthetic mammalian virology, you’d want the research plans sent to a couple of peer reviewers in the field for evaluation, for safety and for suggested improvements. A lot of the time there’s a better way to do the experiment than you initially imagined, and if they can point that out at the beginning, then great. I think both models will result in faster science, which we want too.
Universities could start by setting up a special process for early-stage peer review, internally, of gene drive [a genetic engineering technology] and mammalian virology experiments. As a scientist who works in both those fields, I would be happy to participate. The question is: How can we do [synthetic biology] in a way that continues or even accelerates beneficial discoveries while avoiding those with potentially catastrophic consequences?
MIT will follow its landmark 2014 Campus Attitudes on Sexual Assault Survey (CASA) by administering the Association of American Universities (AAU) 2019 Campus Climate Survey to undergraduate and graduate students during the upcoming spring 2019 semester, Chancellor Cynthia Barnhart SM ’85, PhD ’88 announced today.
This will allow the Institute to measure the progress made to combat sexual misconduct in the five years since the first survey; identify and respond to new issues the AAU survey may uncover; and put MIT’s results into the context of national AAU aggregate data. Earlier today, the AAU announced that 33 institutions will be participating in the AAU survey next spring.
“At MIT, we don’t shy away from tough problems, and we care deeply about creating a safe, welcoming, and respectful climate for every member of our community,” said Barnhart. “Our 2014 survey provided us with critical information about how sexual misconduct affects the MIT student community, and it helped us develop data-driven education and prevention programs and policies. With the 2019 survey, we will continue to employ self-evaluation and transparency so that we can enhance our understanding of, and response to, this complex issue; create community dialogue about solutions; and send a very clear message that sexual misconduct has no place on our campus.”
Looking back: CASA survey and MIT’s response
Responding to a request from MIT President L. Rafael Reif that Barnhart make sexual assault prevention a central priority when he appointed her chancellor in February 2014, Barnhart emailed the CASA survey to all MIT students on April 27, 2014 — two days before a White House task force issued guidance that all of the nation’s colleges and universities survey their students on these matters. The CASA survey results were released in October 2014 alongside a series of Institute-wide steps that began a process of comprehensively addressing the findings.
In the three academic years since the CASA survey’s administration, MIT has added new services, expanded education and community outreach initiatives, and updated policies and procedures in order to prevent and respond to sexual misconduct. This work is also aimed at positively influencing attitudes and behaviors so that real changes in culture take hold on MIT’s campus. Examples of different efforts include:
- The Title IX and Violence Prevention Response (VPR) offices have relied on education, prevention, community outreach, and investigatory specialists to educate more people about how to prevent sexual misconduct from happening, and about how to effectively respond when incidents occur.
- In response to a specific CASA survey finding — 63 percent of respondents who reported experiencing unwanted sexual behavior told someone about it; 90 percent of those students sought support from a friend — these offices have also focused on bolstering peer-to-peer support resources and programs.
- Important policy and procedural changes have also been implemented in recent years. In the 2017-18 academic year alone, the following enhancements were introduced:
  - A new policy on consensual sexual or romantic relationships in the workplace or academic environment and online training for all faculty and staff went into effect (all new students and faculty and staff were already required to complete online training). These two initiatives were championed by the Institute Committee on Sexual Misconduct Prevention and Response (CSMPR), established by Chancellor Barnhart in 2015.
  - The Title IX Office worked with students, faculty, and staff to update MIT’s sexual misconduct policy so that a consistent definition of sexual misconduct applies to all students, faculty, and staff. The new policy also clearly defines important terms and adds more examples and explanations of inappropriate behavior.
  - The Title IX Office began to offer a new online reporting form to lower the barriers for seeking help and reporting.
Looking ahead: AAU 2019 Campus Climate Survey
In a November 2017 update to the MIT community, Barnhart cited preliminary indicators that the Institute’s prevention, response, and cultural change efforts are having a positive impact. While acknowledging that more work must be done, she pointed to the fact that more students have been coming forward to seek support or report unwanted sexual behavior; more community members are involved in the work to create a safer, more respectful, and inclusive campus climate; and 2017 survey data that show more students believe their peers treat one another with respect.
The AAU’s 2019 Campus Climate Survey will mark the first time MIT will be able to gauge current campus attitudes about, and experiences with, sexual misconduct as well as holistically evaluate its efforts since the 2014 CASA survey. The Office of the Chancellor is currently working with campus partners across MIT, the AAU, and other participating institutions on the design of the survey instrument. The team is also working alongside MIT student leaders to start thinking about strategies to achieve a high response rate next spring.
In 2015, AAU surveyed more than 150,000 students at 27 universities. According to the AAU, the survey was one of the largest ever conducted and provided participating institutions with insights into student perceptions of campus climate, and helped universities better understand student experiences with sexual assault and misconduct.
Two MIT professors, health care economist Amy Finkelstein and media studies scholar Lisa Parks, have each been awarded a prestigious MacArthur Fellowship.
The prominent award, colloquially known as the “genius grant,” comes in the form of a five-year $625,000 fellowship, which is unrestricted, meaning recipients can use the funding any way they wish. There are 25 such fellowships being awarded in 2018. Alumna Deborah Estrin SM ’83, PhD ’85, a computer scientist at Cornell Tech, is also a new MacArthur Fellow.
“I’m very honored,” says Finkelstein, the John and Jennie S. MacDonald Professor of Economics at MIT, adding that she was surprised when first notified by the MacArthur Foundation.
“I’m extremely grateful to MIT,” says Finkelstein, who has been both a doctoral student and faculty member at the Institute. “I’ve essentially spent my entire intellectual life here.”
Noting that the award is a sign of respect for her branch of economics generally, Finkelstein says she appreciates the “broader attention to the scientific work that health care economists are doing and recognition of the progress we have made as a science.”
Parks says her MacArthur award is “an incredible honor,” and that she is “thrilled to be receiving it as a humanities scholar.” She also notes that the grant will help support a new writing project as well as other research efforts.
“The fellowship will help me to write another book and will be a big boost for the Global Media Technologies and Cultures (GMTaC) Lab that I recently launched at MIT,” Parks says.
Parks, who joined the MIT faculty in 2016 after teaching at the University of California at Santa Barbara, credited the intellectual environment at both places, adding that she was “grateful to my colleagues and students at MIT and UC Santa Barbara, and share this honor with them. They supported me as I tried experimental approaches and ventured off the beaten path.”
Decoding medical costs
Finkelstein’s research has yielded major empirical findings about the cost, value, and use of health care in the U.S. Her studies are known for both their results and their rigorous methodological approach; Finkelstein often uses “natural experiments,” in which certain social policies create two otherwise similar groups of people who differ in, say, their access to medical care. This allows her to study the specific effects of policies and treatments of interest.
One of the best-known research projects of Finkelstein’s career focuses on Oregon’s use of a lottery to expand state access to Medicaid. In a series of papers, Finkelstein and her co-authors found that access to Medicaid helped the poor get more medical treatment and avoid some financial shocks, while actually increasing use of emergency rooms.
Earlier in her career, Finkelstein published an influential 2007 paper detailing the varied effects of the introduction of Medicare in the U.S. in the 1960s. The study showed that Medicare's launch was associated with increases in health care spending and the adoption of new medical technologies, while having positive financial effects on the program's recipients.
Finkelstein has trained her investigative lens on a wide variety of other issues, however. Earlier this year, she published multiple papers showing that serious medical problems subsequently reduce earnings and hurt employment, while increasing personal debt, but do not lead to outright bankruptcy as often as is sometimes claimed.
Finkelstein received her PhD from MIT in 2001 and joined the Institute faculty in 2005. In addition to her professorship in MIT’s Department of Economics, Finkelstein is co-scientific director of J-PAL North America, an MIT-based research center that encourages randomized evaluations of social science questions. In 2012, she received the John Bates Clark Medal, granted by the American Economic Association to the best economist under the age of 40.
Parks is an expert on the cultural effects of space-age technologies, especially satellites. She has written in close detail about the ways new technology has shaped our conception of things as diverse as war zones and the idea of a “global village.” As Parks has said, her work aims to get people “to think of the satellite not only as this technology that’s floating around out there in orbit, but as a machine that plays a structuring role in our everyday lives.”
Parks is the author of the influential 2005 book, “Cultures in Orbit,” and has co-edited five books of essays on technology and culture, including the 2017 volume “Life in the Age of Drone Warfare.”
Parks also has a keen interest in technology and economic inequality, and her research has also examined topics such as the video content accessible to Aboriginal Australians, who, starting in the 1980s, attempted to gain greater control of satellite television programming in rural Australia.
As the principal investigator for MIT’s Global Media Technologies and Cultures Lab, Parks and MIT graduate students in the lab conduct onsite research about media usage in a range of places, including rural Africa.
Parks received her PhD from the University of Wisconsin before joining the faculty at the University of California at Santa Barbara, and then moving to MIT.
Including Finkelstein and Parks, 23 MIT faculty members and three staff members have won the MacArthur fellowship.
MIT faculty who have won the award over the last decade include computer scientist Regina Barzilay (2017); economist Heidi Williams (2015); computer scientist Dina Katabi and astrophysicist Sara Seager (2013); writer Junot Díaz (2012); physicist Nergis Mavalvala (2010); economist Esther Duflo (2009); and architectural engineer John Ochsendorf and physicist Marin Soljačić (2008).
People sometimes mistakenly think of general anesthesia as just a really deep sleep, but in fact, anesthesia comprises four brain states — unconsciousness, amnesia, immobility, and suppression of the body’s damage-sensing response, or “nociception.” In a new paper in Anesthesia and Analgesia, MIT neuroscientist and statistician Emery N. Brown and his colleagues argue that by putting nociception at the top of the priority list, and by taking a principled neuroscientific approach to choosing which drugs to administer, anesthesiologists can use far less medication overall, producing substantial benefits for patients.
“We’ve come up with strategies that allow us to dose the same drugs that are generally used but in different proportions that allow us to achieve an anesthetic state that is much more desirable,” says Brown, the Edward Hood Taplin Professor of Computational Neuroscience and Health Sciences and Technology in the Picower Institute for Learning and Memory at MIT and a practicing anesthesiologist at Massachusetts General Hospital.
In the paper, Brown and co-authors lay out exactly how and where each major anesthetic drug affects the nociceptive circuits of the nervous system. Nociception is the body’s sensing of tissue damage. It is not pain, which is the conscious perception of that damage.
Then the authors show how in four different surgical cases they were able to use neuroscience to guide their choice of a “multimodal” combination of drugs to target nociceptive circuits at several different points. That way they didn’t have to use much of any individual drug. Because reduced arousal is a byproduct of the strategy, they also didn’t have to administer much medicine to ensure unconsciousness, a state they scrupulously monitor in the operating room by watching brainwaves captured by electroencephalography (EEG).
“If you do it this way, you have better control of nociception and you can get the same amount of unconsciousness with less drug,” says Brown, who is also associate director of MIT’s Institute for Medical Engineering and Science and a professor in MIT’s Department of Brain and Cognitive Sciences and at Harvard Medical School. “Patients wake up quicker, and if you carry the multimodal strategy into the postoperative period you have taken into account controlling pain such that you can use fewer opioids or no opioids.”
Reducing opioids has become a major goal of anesthesiologists in recent years as an epidemic of overdoses has ravaged the United States.
Opioids, after all, are hardly the only option. In the study, Brown and co-authors Kara Pavone and Marusa Naranjo describe how several anesthetic drugs work. Opioids combat nociception in the spinal cord and brainstem by binding to opioid receptors, while decreasing arousal by blocking connections involving the neurotransmitter acetylcholine between the brainstem, thalamus, and cortex. By contrast, ketamine targets glutamate receptors both on peripheral nervous system neurons that connect to the spinal cord and on neurons in the brain’s cortex, thereby also decreasing arousal.
In all, they review the distinct antinociceptive actions of six classes of drugs: opioids, ketamine, dexmedetomidine, magnesium, lidocaine, and NSAIDs (like ibuprofen). They also discuss how anesthetic drugs that primarily impact arousal, such as propofol, or mobility, such as cisatracurium, work in the central and peripheral nervous systems.
Putting a principled, nociception-focused approach into practice requires formulating an explicit plan for each patient that will account for pain management before, during, and after surgery and at discharge from the hospital, the authors note. They lay out examples of how they implemented such plans for four patients including a 78-year-old man who had spinal surgery and a 58-year-old woman who had abdominal surgery.
The approaches contrast with more traditional approaches in that they feature a combination of drugs to affect nociception, Brown says, rather than a single drug that primarily affects consciousness.
“Because these [single drug approaches] primarily create unconsciousness and have less effect on nociception, the state of general anesthesia they create has an imbalance that means that patients are profoundly unconscious with less good antinociceptive control,” he says. “If the anesthesiologist is not monitoring the EEG to track unconsciousness and uses no principled way to distinguish antinociception from unconsciousness, patients will have prolonged wake ups. There will also be more intra-op hemodynamic (blood pressure and heart rate) fluctuations.
“Our framework lays the groundwork for two things: a clearer head post-operatively — manifest by quicker recoveries and the fact that less of the anesthetic required for unconsciousness is used — and better, more complete postoperative pain control.”
Ultimately, the authors write, it comes back to a principled focus on the nervous system and how to disrupt nociception comprehensively by intervening at multiple points.
“Understanding these systems can be used to formulate a rational strategy for multimodal general anesthesia management,” they write.
As engineers make strides in the design of wearable, electronically active, and responsive leg braces, arm supports, and full-body suits, collectively known as exoskeletons, researchers at MIT are raising an important question: While these Iron Man-like appendages may amp up a person’s strength, mobility, and endurance, what effect might they have on attention and decision making?
The question is far from trivial, as exoskeletons are currently being designed and tested for use on the battlefield, where U.S. soldiers are expected to perform focused tactical maneuvers while typically carrying 60 to 100 pounds of equipment. Exoskeletons such as electronically adaptive hip, knee, and leg braces could bear a significant portion of a soldier’s load, freeing them up to move faster and with more agility.
But could wearing such bionic add-ons, and adjusting to their movements, take away some of the attention needed for cognitive tasks, such as spotting an enemy, relaying a message, or following a squadron?
The answer, the MIT team found, is yes, at least in some scenarios. In a study that they are presenting this week at the Human Factors and Ergonomics Society Annual Meeting in Philadelphia, the researchers tested volunteers, who were either active-duty members of the military or participants in a Reserve Officer Training Corps (ROTC) unit, as they marched through an obstacle course while wearing a commercially available knee exoskeleton and carrying a backpack weighing up to 80 pounds. Seven of the 12 subjects had slower reaction times in a visual task when they completed the course with the exoskeleton on and powered, compared to when they finished it without the exoskeleton.
The researchers also found that the soldiers, when asked to follow a leader at a certain distance, were less able to keep a constant distance while wearing the exoskeleton.
The results, though preliminary, suggest that engineers designing exoskeletons for military and other uses may want to consider a device’s “cognitive fit” — how much of a user’s attention or decision making the device could potentially divert even while assisting them physically.
“In a military exoskeleton, soldiers are supposed to be scanning for enemies in the environment, making sure where other people in their squad are, monitoring a whole variety of things,” says Leia Stirling, an assistant professor in MIT’s Department of Aeronautics and Astronautics and a member of the Institute for Medical Engineering and Science. “You don’t want them to have to focus on how they’re stepping because of the exoskeleton. That’s why I was interested in how much attention these technologies require.”
Stirling’s co-authors on the paper include researchers at MIT, Draper, and the University of Massachusetts at Lowell.
Follow the leader
To investigate exoskeletons’ effect on a user’s attention, the team set up an obstacle course at UMass Lowell’s NERVE Center, a facility that normally tests and evaluates robots over various physical courses. Stirling and her colleagues modified an existing obstacle course to include cross-slopes and short walls to step over. Lights at both ends of the obstacle course were set up to intermittently blink on and off.
The team enlisted 12 male subjects and trained them over a period of three days. During the first day, they were each custom-fit with, and trained to use, a commercially available knee exoskeleton — a rigid, powered knee brace designed to help extend a user’s leg and increase endurance while, for example, climbing over obstacles and walking long distances.
Over the following two days, the subjects were instructed to navigate the obstacle course while following a researcher, posing as a squadron member. As they made their way through the course, the subjects performed several cognitive tasks. The first was a visual task, in which the subjects had to press a button on a mock rifle as soon as they perceived a light go on. The second was a pair of audio tasks, in which the subjects had to respond to a radio call check with a simple “Roger, over,” as well as a more complicated task, where they had to listen to three leaders reporting different numbers of enemies and then report the total number over the radio. The third was a follow-along task, where the subjects had to maintain a certain distance from the squadron leader as they navigated the course.
Overall, Stirling found that for the visual task, seven of the 12 subjects wearing the powered exoskeleton reacted significantly more slowly and tended to miss light signals completely, compared with their performance when not wearing the device. While wearing the powered knee-brace, the subjects also had a harder time maintaining the specified distance when following the leader.
Going forward, Stirling plans to investigate the importance of reaction times while wearing an exoskeleton in various contexts.
“For a military soldier, if they don’t detect an enemy over half a second, what does that mean? Does that put their life at risk, or is that OK?” Stirling says. “We need to better understand what these operationally relevant differences are. A reaction time of half a second for me walking down a sidewalk is probably not a big deal. But it could be a big deal in a military environment.”
Interestingly, the team identified a few users who were unfazed by the addition of an exoskeleton, and who performed just as well in the visual, audio, and follow-along tasks.
“In this study, we see some people have no deficit in their attention. But some people do, and we’re not sure why some people are good exoskeleton users and some have more difficulty,” Stirling says. “Now we’re starting to investigate what makes people good users versus less adept users. Is this driven from a motor pathway, or a perception pathway, or a cognitive pathway?”
Stirling’s group is working toward a better understanding of the way humans adapt and react to exoskeletons and other wearable technologies, such as next-generation spacesuits.
“We’re looking at the fluency between what the system is doing and what the human is doing,” Stirling says. “If the human wants to speed up or slow down, can this system be designed to appropriately move so the human is not fighting the system, and vice versa?”
Beyond military and space applications, Stirling says that if the connection between the human and the machine can be made more fluid, requiring less of a user’s immediate attention, then exoskeletons may find a much wider, commercial appeal.
“Maybe you want to be able to climb that mountain, or go on a longer hike, or you may be older and want to run around with your grandkids,” Stirling says.
“How can you design exoskeletons so people can reduce their own injury risk and extend their capability, their activities of daily living? These systems are really exciting. We just want to be cognizant of the different risks that occur when you bring something into a natural environment.”
This research was supported, in part, by Draper.
When moving through a crowd to reach some end goal, humans can usually navigate the space safely without thinking too much. They can learn from the behavior of others and note any obstacles to avoid. Robots, on the other hand, struggle with such navigational concepts.
MIT researchers have now devised a way to help robots navigate environments more like humans do. Their novel motion-planning model lets robots determine how to reach a goal by exploring the environment, observing other agents, and exploiting what they’ve learned before in similar situations. A paper describing the model was presented at this week’s IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS).
Popular motion-planning algorithms create a tree of possible decisions that branches out until it finds good paths for navigation. A robot that needs to navigate a room to reach a door, for instance, will create a step-by-step search tree of possible movements and then execute the best path to the door, considering various constraints. One drawback, however, is that these algorithms rarely learn: Robots can’t leverage information about how they or other agents acted previously in similar environments.
“Just like when playing chess, these decisions branch out until [the robots] find a good way to navigate. But unlike chess players, [the robots] explore what the future looks like without learning much about their environment and other agents,” says co-author Andrei Barbu, a researcher at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Center for Brains, Minds, and Machines (CBMM) within MIT’s McGovern Institute. “The thousandth time they go through the same crowd is as complicated as the first time. They’re always exploring, rarely observing, and never using what’s happened in the past.”
The researchers developed a model that combines a planning algorithm with a neural network that learns to recognize paths that could lead to the best outcome, and uses that knowledge to guide the robot’s movement in an environment.
In their paper, “Deep sequential models for sampling-based planning,” the researchers demonstrate the advantages of their model in two settings: navigating through challenging rooms with traps and narrow passages, and navigating areas while avoiding collisions with other agents. A promising real-world application is helping autonomous cars navigate intersections, where they have to quickly evaluate what others will do before merging into traffic. The researchers are currently pursuing such applications through the Toyota-CSAIL Joint Research Center.
“When humans interact with the world, we see an object we’ve interacted with before, or are in some location we’ve been to before, so we know how we’re going to act,” says Yen-Ling Kuo, a PhD student in CSAIL and first author on the paper. “The idea behind this work is to add to the search space a machine-learning model that knows from past experience how to make planning more efficient.”
Boris Katz, a principal research scientist and head of the InfoLab Group at CSAIL, is also a co-author on the paper.
Trading off exploration and exploitation
Traditional motion planners explore an environment by rapidly expanding a tree of decisions that eventually blankets an entire space. The robot then looks at the tree to find a way to reach the goal, such as a door. The researchers’ model, however, offers “a tradeoff between exploring the world and exploiting past knowledge,” Kuo says.
The learning process starts with a few examples. A robot using the model is trained on a few ways to navigate similar environments. The neural network learns what makes these examples succeed by interpreting the environment around the robot, such as the shape of the walls, the actions of other agents, and features of the goals. In short, the model “learns that when you’re stuck in an environment, and you see a doorway, it’s probably a good idea to go through the door to get out,” Barbu says.
The model combines the exploration behavior from earlier methods with this learned information. The underlying planner, called RRT*, was developed by MIT professors Sertac Karaman and Emilio Frazzoli. (It’s a variant of a widely used motion-planning algorithm known as Rapidly-exploring Random Trees, or RRT.) The planner creates a search tree while the neural network mirrors each step and makes probabilistic predictions about where the robot should go next. When the network makes a prediction with high confidence, based on learned information, it guides the robot on a new path. If the network doesn’t have high confidence, it lets the robot explore the environment instead, like a traditional planner.
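The confidence-gated tradeoff described above can be illustrated in a few lines of Python. This is a minimal sketch under stated assumptions, not the authors' implementation: the real system couples RRT* with a trained sequential neural model, whereas the `doorway_predictor` here is a toy stand-in for a learned model, and all names are hypothetical.

```python
import random

def guided_sample(predictor, state, bounds, threshold=0.8):
    """Pick the next point to extend the search tree toward.

    If the learned predictor is confident, exploit its suggestion;
    otherwise fall back to uniform random sampling over the space,
    as a plain RRT-style planner would.
    """
    suggestion, confidence = predictor(state)
    if confidence >= threshold:
        return suggestion  # exploit past experience
    lo, hi = bounds
    return (random.uniform(lo, hi), random.uniform(lo, hi))  # explore

def doorway_predictor(state):
    """Toy stand-in for a trained network: it 'remembers' a doorway
    and is only confident when the robot is near it."""
    door = (9.0, 5.0)
    dist = ((state[0] - door[0]) ** 2 + (state[1] - door[1]) ** 2) ** 0.5
    confidence = 1.0 if dist < 5.0 else 0.1
    return door, confidence
```

Near the remembered doorway the planner follows the prediction; far from it, where the toy predictor reports low confidence, sampling reverts to uniform exploration, mirroring how the model defers to the traditional planner when the network is unsure.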
For example, the researchers demonstrated the model in a simulation known as a “bug trap,” where a 2-D robot must escape from an inner chamber through a central narrow channel and reach a location in a surrounding larger room. Blind alleys on either side of the channel can get robots stuck. In this simulation, the robot was trained on a few examples of how to escape different bug traps. When faced with a new trap, it recognized features of the trap, escaped, and continued to search for its goal in the larger room. The neural network helped the robot find the exit to the trap, identify the dead ends, and get a sense of its surroundings so it could quickly find the goal.
Results in the paper are based on the chances that a path is found after some time, the total length of the path that reached a given goal, and how consistent the paths were. In both simulations, the researchers’ model more quickly plotted far shorter and more consistent paths than a traditional planner.
Working with multiple agents
In another experiment, the researchers trained and tested the model in navigating environments with multiple moving agents, which is a useful test for autonomous cars, especially navigating intersections and roundabouts. In the simulation, several agents are circling an obstacle. A robot agent must successfully navigate around the other agents, avoid collisions, and reach a goal location, such as an exit on a roundabout.
“Situations like roundabouts are hard, because they require reasoning about how others will respond to your actions, how you will then respond to theirs, what they will do next, and so on,” Barbu says. “You eventually discover your first action was wrong, because later on it will lead to a likely accident. This problem gets exponentially worse the more cars you have to contend with.”
Results indicate that the researchers’ model can capture enough information about the future behavior of the other agents (cars) to cut off the process early, while still making good decisions in navigation. This makes planning more efficient. Moreover, they only needed to train the model on a few examples of roundabouts with only a few cars. “The plans the robots make take into account what the other cars are going to do, as any human would,” Barbu says.
Going through intersections or roundabouts is one of the most challenging scenarios facing autonomous cars. This work might one day let cars learn how humans behave and how to adapt to drivers in different environments, according to the researchers. This is the focus of the Toyota-CSAIL Joint Research Center work.
“Not everybody behaves the same way, but people are very stereotypical. There are people who are shy, people who are aggressive. The model recognizes that quickly and that’s why it can plan efficiently,” Barbu says.
More recently, the researchers have been applying this work to robots with manipulators that face similarly daunting challenges when reaching for objects in ever-changing environments.
Lately the fact-checking world has been in a bit of a crisis. Sites like Politifact and Snopes have traditionally focused on specific claims, which is admirable but tedious; by the time they’ve gotten through verifying or debunking a fact, there’s a good chance it’s already traveled across the globe and back again.
Social media companies have also had mixed results limiting the spread of propaganda and misinformation. Facebook plans to have 20,000 human moderators by the end of the year, and is putting significant resources into developing its own fake-news-detecting algorithms.
Researchers from MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) and the Qatar Computing Research Institute (QCRI) believe that the best approach is to focus not only on individual claims, but on the news sources themselves. Using this tack, they’ve demonstrated a new system that uses machine learning to determine if a source is accurate or politically biased.
“If a website has published fake news before, there’s a good chance they’ll do it again,” says postdoc Ramy Baly, the lead author on a new paper about the system. “By automatically scraping data about these sites, the hope is that our system can help figure out which ones are likely to do it in the first place.”
Baly says the system needs only about 150 articles to reliably detect if a news source can be trusted — meaning that an approach like theirs could be used to help stamp out new fake-news outlets before the stories spread too widely.
The system is a collaboration between computer scientists at MIT CSAIL and QCRI, which is part of the Hamad Bin Khalifa University in Qatar. Researchers first took data from Media Bias/Fact Check (MBFC), a website with human fact-checkers who analyze the accuracy and biases of more than 2,000 news sites; from MSNBC and Fox News; and from low-traffic content farms.
They then fed those data to a machine learning algorithm, and programmed it to classify news sites the same way as MBFC. When given a new news outlet, the system was then 65 percent accurate at detecting whether it has a high, low, or medium level of factuality, and roughly 70 percent accurate at detecting whether it is left-leaning, right-leaning, or moderate.
The team determined that the most reliable ways to detect both fake news and biased reporting were to look at the common linguistic features across the source’s stories, including sentiment, complexity, and structure.
For example, fake-news outlets were found to be more likely to use language that is hyperbolic, subjective, and emotional. In terms of bias, left-leaning outlets were more likely to have language that related to concepts of harm/care and fairness/reciprocity, compared to other qualities such as loyalty, authority, and sanctity. (These qualities represent a popular theory — that there are five major moral foundations — in social psychology.)
Co-author Preslav Nakov, a senior scientist at QCRI, says that the system also found correlations with an outlet’s Wikipedia page, which it assessed for general length (longer pages tend to indicate more credible sources) as well as for target words such as “extreme” or “conspiracy theory.” It even found correlations with the text structure of a source’s URLs: Those that had lots of special characters and complicated subdirectories, for example, were associated with less reliable sources.
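The surface features described above can be sketched in plain Python. This is a hypothetical illustration, not the researchers' actual pipeline: the real system uses a much richer feature set and a trained classifier, and the word list and feature names here are invented for the example.

```python
import re

# Tiny illustrative lexicon of hyperbolic language (hypothetical;
# the real system uses far richer sentiment and subjectivity features).
HYPERBOLE = {"shocking", "unbelievable", "outrageous", "incredible"}

def article_features(text, url):
    """Extract a few source-level surface features of the kind the
    classifier draws on: emotional language, structural complexity,
    and URL structure."""
    words = re.findall(r"[a-z']+", text.lower())
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    return {
        # rate of hyperbolic / emotional words
        "hyperbole_rate": sum(w in HYPERBOLE for w in words) / max(len(words), 1),
        # structural complexity: average sentence length in words
        "avg_sentence_len": len(words) / max(len(sentences), 1),
        # URL structure: special characters and subdirectory depth
        "url_specials": sum(c in "-_~%" for c in url),
        "url_depth": url.count("/") - 2,
    }
```

Features like these would be averaged over a source's articles (around 150, per the researchers) and fed to a standard classifier; the point of the sketch is only that the signals are cheap, surface-level properties of the text and URLs rather than claim-by-claim fact checks.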
“Since it is much easier to obtain ground truth on sources [than on articles], this method is able to provide direct and accurate predictions regarding the type of content distributed by these sources,” says Sibel Adali, a professor of computer science at Rensselaer Polytechnic Institute who was not involved in the project.
Nakov is quick to caution that the system is still a work in progress, and that, even with improvements in accuracy, it would work best in conjunction with traditional fact-checkers.
“If outlets report differently on a particular topic, a site like Politifact could instantly look at our fake news scores for those outlets to determine how much validity to give to different perspectives,” says Nakov.
Baly and Nakov co-wrote the new paper with MIT Senior Research Scientist James Glass alongside graduate students Dimitar Alexandrov and Georgi Karadzhov of Sofia University. The team will present the work later this month at the 2018 Empirical Methods in Natural Language Processing (EMNLP) conference in Brussels, Belgium.
The researchers also created a new open-source dataset of more than 1,000 news sources, annotated with factuality and bias scores, which is the world’s largest database of its kind. As next steps, the team will explore whether the English-trained system can be adapted to other languages, and whether it can go beyond the traditional left/right divide to capture region-specific biases (like the division in the Muslim world between religious and secular).
“This direction of research can shed light on what untrustworthy websites look like and the kind of content they tend to share, which would be very useful for both web designers and the wider public,” says Andreas Vlachos, a senior lecturer at the University of Cambridge who was not involved in the project.
Nakov says that QCRI also has plans to roll out an app that helps users step out of their political bubbles, responding to specific news items by offering users a collection of articles that span the political spectrum.
“It’s interesting to think about new ways to present the news to people,” says Nakov. “Tools like this could help people give a bit more thought to issues and explore other perspectives that they might not have otherwise considered."
“Oh come on, how many of you think Barack Obama wiretapped the Trump Tower? All their hands went up. Almost unanimous,” said retired four-star general Michael Hayden.
Hayden, a former director of the U.S. Central Intelligence Agency (2006-2009) and the National Security Agency (1999-2005), was retelling an incident from his recently released book, “The Assault on Intelligence: American National Security in an Age of Lies,” in front of an audience that filled MIT’s 425-seat Huntington Hall (Room 10-250). Hayden was describing a scene at a local bar in his native Pittsburgh, where he met with people he might have known growing up or been related to, but who now hold sharply divergent views.
“I used to run the NSA. I kinda know how this works. Number one, they wouldn’t do it. Number two, the plumbing doesn’t work that way. They almost certainly couldn’t do it,” he said. When asked, “What evidence do you have?” the bargoers said, simply, “Obama.” When asked, “Where do you get your news?” the answer was invariably, “Facebook.”
The anecdote aptly explains the dilemmas Hayden attempts to tackle in his book, which deals with the ways in which the basic adherence to truth and facts has been eroded since Donald Trump announced he was running for president, and what the consequences are for what he calls “fact-based institutions.” The judiciary, the media, the intelligence community, and others are suffering in an era when everyone’s version of the truth is up for grabs, Hayden explains — especially when the intelligence community he knows so well is being attacked. While he does not predict societal collapse or civil war in North America, he said he is worried about the “assault on truth” that is currently taking place.
“The veneer of civilization is something that is quite thin,” he said. “It has to be protected and nurtured.”
While younger generations, including some at MIT, might not be aware of this, he said “civilization as we know it” is not a given.
While this phenomenon can be witnessed all over the world, Hayden stressed that it presents a particular problem for U.S. society.
“America was a concept under which we built a nation. If you remove the concept you remove the basic fundamental character.”
The United States was formed on the basis of the ideas of the Enlightenment, with adaptations and improvements being made as societies and “civilization” developed. Since then, those who rejected these ideas represented the negative phenomena in society and were often overpowered by the progressive or forward-thinking mainstream. Now, those who represent the negative segments of society are threatening to become mainstream.
“This is a rejection of the way of thinking that developed in the 16th and 17th centuries in the Enlightenment. I don’t want to overemphasize this but the Western man, after that period, was generally pragmatic. Our definition of truth was the best working theory we could develop at the moment of objective reality. That dynamic is what I think is under threat,” Hayden explained.
Hayden described the predicament that our society currently finds itself in as a three-layer cake, with each layer representing one of the major “players.”
“The basic layer and therefore the most important one is us,” he said. “It is the American population where our political culture is moving in the direction of a post-truth reality.”
This is where the group from the anecdote fits in, as well as everyone else. Although Hayden places all of American society in this biggest layer at the bottom, he says the people who compose it are very different. “The winds of globalization have been at my back for 50 years. The people I grew up with, the winds of globalization have largely been in their face and the uneven effects of globalization have created grievances,” he said.
The grievances are more cultural than economic, but they create the conditions for people with seemingly “simple” answers to appeal to those grievances and “tribe loyalties” and make their case. Like the group in the opening anecdote, this group also relies on social media outlets for its news — as well as its facts.
“Social media knows you as well as you know yourself. The business model for social media is to keep you there, keep you on the site, so it gives you something that’s pleasing to you,” Hayden continued. “But the longer you’re there the more you want [it]. The core algorithm keeps giving you [that]. Which in this version are more extreme versions of the views you had when you entered the enterprise in the first place.”
He also emphasized that working-class communities are particularly susceptible to these divisions.
“It is the elites of the world who are uniting,” he said. “And it’s the workers of the world who are reaching for their national flags.”
The second layer of the cake is the Trump administration. “Objective reality is not the distinctive departure point for what Trump says or does,” he said.
He offered another anecdote: a conversation he had with a retired PDB briefer — someone who delivers the president’s daily highly confidential briefings. They compared notes on what sort of president Trump was.
“He said, Mike we have had presidents who have argued with us — that was my experience with George W. Bush. We’ve had presidents who simply lie; the Nixonian image comes to mind. They don’t argue about objective reality, they just lie about it. He offered the view that Trump isn’t either of those.”
Trump, the retired briefer argued, was someone who “fully believed his version of events,” Hayden said.
“Does the thought process there make a distinction between the past I need and the past that happened? And the answer is maybe not, which is a little bit different than lying.”
According to the briefer, the only proof of veracity that the president seems to need is “a lot of people saying they agree with him and if he can make it trending.”
The third layer is the Russians, but according to Hayden they are “the least of our problems.” Unlike layers one and two, which actively participate in questioning the truth, those in Russia who want to affect U.S. society base their interference on “existing divisions” — divisions that were created by layers one and two. He doesn’t doubt that the Russians were involved in trying to influence the 2016 elections, but whether anyone in the U.S. was involved depends on the results of the ongoing Mueller probe. Whether or not they influenced the votes is “unknowable and unmeasurable,” he said.
“What the Russians did we would call a covert influence campaign. The specifics of a covert influence campaign are clear: You never create a division in a society,” he said. “You identify pre-existing divisions and you exploit and worsen the pre-existing divisions.”
Their motives, he said, were to “mess with our heads,” punish Hillary Clinton, and delegitimize her as the inevitable winner, as well as hope to push votes in Trump’s direction.
The event was sponsored by the MIT Center for International Studies as part of its flagship public event series, the MIT Starr Forum. The forum brings to campus leading authorities to discuss pressing issues in the world of international relations and U.S. foreign policy.
MIT has long been a global leader in STEAM — science, technology, engineering, arts, and mathematics — education, driving innovation not only in the fields themselves but also in the ways that education is designed and delivered. Learning, of course, doesn’t just happen on campus or in a classroom setting. This summer, the MIT pK-12 Action Group, in collaboration with the Chinese International School (CIS) in Hong Kong, held its second annual MIT STEAM camp, bringing faculty, staff, and students together with just over 200 middle-school-aged students and 30 teachers from the area for two weeks of hands-on learning.
According to pK-12 Action Group co-chair Professor Eric Klopfer, the group’s goal is “to create programs that engage growing numbers of diverse learners and educators through design, research, and implementation of educational innovations.” The project-based MIT STEAM camp aimed to do just that. Embodying the Institute’s “mens-et-manus” (mind-and-hand) learning approach, students in Hong Kong learned from various hands-on activities and explored the use of digital technologies and tools (such as Scratch, among others) to promote creativity, invention, and collaboration.
The theme of this summer’s MIT STEAM Camp was “Into The Water,” drawing inspiration from the UN Sustainable Development Goals. The six learning modules used in the program were developed by groups across MIT’s campus. The Edgerton Center, for instance, helped develop “Engineering with Water,” while an alumna of the MIT-Woods Hole Oceanographic Institution Joint Program and MIT employee developed “Algae in our backyard: An ArtScience Exploration.” Other modules included observing small ocean crustaceans, understanding the effects of ocean acidification, and programming an EEG-driven “boat” (with Lego WeDo motors inside) that navigated around giant beach balls. Students also experimented and played with Scratch and other peripherals as an introduction to playful learning.
Participating campers then took these learning modules a step further, working collaboratively and applying the knowledge and skills they acquired to designing and building innovative projects. By the end of the two-week camp, students built projects ranging from board games to portable microscope projectors to two-tier structures that generated electricity through a dam.
Teachers learn, too
There was also a strong professional development component to the MIT STEAM camp. As part of the pK-12 Action Group’s mission, teachers participated in educator workshops parallel to the student campers. Led by the MIT team, local educators were exposed to the pedagogies and practices that drove the development of the camp’s modules, including approaches to have students experiment, build working models, make mistakes, and learn through iteration.
Joe Diaz '10, pK-12 Action Group program coordinator, explained that the teachers “observe how project-based learning can be used and are hopefully inspired to create and implement their own STEAM projects in their own classrooms during the school year.”
The MIT STEAM Camp succeeded in building important new capabilities among various stakeholders. “I saw this when students who had never used a hand drill picked one up and learned to drive a screw into a piece of wood,” explained Diaz. “I saw this when teachers realized that they could build working circuits that could be used to supplement their classroom activities. I saw this when our MIT student facilitators realized that they had empowered their students to make their ideas a reality, even in their short time together.”
New approaches to learning
Local students and teachers, as well as the MIT team, “came away from the MIT STEAM camp with a new way to look at how learning happens, one that combines learning while doing and connects to students’ interests and passions,” said Claudia Urrea PhD '07, associate director of pK-12 at the Abdul Latif Jameel World Education Lab (J-WEL) and founder of the STEAM Camp. “This is possible when students from different ages and schools get together around projects they care about and propose their own solutions to issues related to their local context.”
The eleven MIT student facilitators who traveled to run the modules in the classrooms were also invaluable. Their unique styles of teaching directly impacted the way the modules took shape. They worked directly with the Hong Kong students, bringing the modules to life.
Going from classroom learning about water to actually building a model of a hydroelectric dam, as STEAM camp students and teachers did, was much more than an achievement of design. It holds the promise of becoming a template for how camp participants learn and teach for today and tomorrow, spreading MIT’s “mens-et-manus” approach to learning to different parts of the globe.
Regina Barzilay, James Collins, and Phil Sharp join leadership of new effort on machine learning in health
Regina Barzilay and James Collins have been named the faculty co-leads of the Abdul Latif Jameel Clinic for Machine Learning in Health, or J-Clinic, effective immediately, announced Anantha Chandrakasan, dean of the School of Engineering and chair of J-Clinic. Institute Professor Philip Sharp will also serve as the chair of J-Clinic’s advisory board.
Launched on Sept. 17, J-Clinic is the fourth major collaborative effort between MIT and Community Jameel, the social enterprise organization founded and chaired by Mohammed Abdul Latif Jameel ’78. A key part of the MIT Quest for Intelligence, J-Clinic will focus on developing machine learning technologies to revolutionize the prevention, detection, and treatment of disease. It will concentrate on creating and commercializing high-precision, affordable, and scalable machine learning technologies in areas of health care ranging from diagnostics to pharmaceuticals.
“J-Clinic will make a difference in patients’ lives everywhere from major hospitals to villages in the developing world. It will draw on MIT’s longstanding strengths in biomedical fields, on our decades of collaboration with the concentration of world-class teaching hospitals in Boston, and on our proximity to the world’s major biotech companies in Kendall Square,” says Chandrakasan.
Barzilay is the Delta Electronics Professor of Electrical Engineering and Computer Science at MIT and an investigator at the Computer Science and Artificial Intelligence Laboratory (CSAIL). She also co-directs a Machine Learning for Pharmaceutical Discovery and Synthesis Consortium that aims to develop AI algorithms for automation of drug design. Barzilay is a recipient of a MacArthur Fellowship, the National Science Foundation CAREER award, the MIT Technology Review TR35 Award, and a Microsoft Faculty Fellowship. She was also elected an Association of Computational Linguistics Fellow and an Association for the Advancement of Artificial Intelligence Fellow. Barzilay received her BS and MS from Ben-Gurion University of the Negev, Israel. She earned a PhD in computer science from Columbia University and did her postdoctoral work at Cornell University.
"Today almost every aspect of our life is driven by machine learning predictions — be it travel, banking or entertainment. The only area where we do not benefit from this powerful technology is the one which impacts us the most, our health care,” says Barzilay. “The goal of the center is to change it. We aim to bring the best of AI technology we develop in our labs at MIT to hospitals and clinics in the U.S. and around the world.”
Collins is the Termeer Professor of Medical Engineering and Science, a professor of biological engineering at MIT, and a member of the Harvard-MIT Health Sciences and Technology faculty. He is also a core founding faculty member of the Wyss Institute for Biologically Inspired Engineering at Harvard University and an Institute Member of the Broad Institute of MIT and Harvard. Collins's numerous honors include a Rhodes Scholarship, a MacArthur Fellowship, and an NIH Director's Pioneer Award. He is an elected member of all three national academies: the National Academy of Sciences, the National Academy of Engineering, and the National Academy of Medicine. He is also a member of the American Academy of Arts and Sciences, the National Academy of Inventors, and the World Academy of Sciences. Collins earned a BA in physics from the College of the Holy Cross and a PhD in medical engineering from the University of Oxford.
"Machine learning is the defining technology of this decade, though its impact on health care thus far has been meager. Through J-Clinic, we plan to train the next generation of scientists and engineers at the interface of machine learning and biomedicine, so as to enable the development of innovative AI-based technologies that can be used to improve lives of patients around the world,” says Collins. “I am honored to have the opportunity to work with Regina, Phil, and Anantha on this exciting new venture.”
Sharp is an Institute Professor at MIT in the Koch Institute for Integrative Cancer Research. In 1993 he shared the Nobel Prize in physiology or medicine for the discovery of split genes and in 2004 was awarded the National Medal of Science. He co-founded Biogen and served on its board for 29 years. In 2002, he co-founded Alnylam Pharmaceuticals and continues to serve on its board. He is chair of the scientific advisory committee of Stand Up to Cancer and a proponent of Convergence, the engagement of engineering, computational and physical sciences in biomedical science.
"The J-Clinic is an exciting opportunity for MIT scientists to bring machine learning to health care. I look forward to chairing its advisory group to accelerate its growth and impact,” says Sharp.
J-Clinic’s efforts will be global and multifaceted, says Chandrakasan. “J-Clinic’s remarkable leadership team will bring the world many exciting new healthcare solutions,” he says.
Milk may seem as wholesome a drink as there is, but it was not always so.
Consider the U.S. in the late 19th century. At the time, producers of milk — especially milk sold in U.S. cities — frequently watered it down. The resulting liquid was blended with chalk or plaster of Paris to make it appear whiter. And that wasn’t the half of it: Milk often contained formaldehyde and a cleaning product called borax. Thousands of people, including children, died from drinking milk.
Eventually, the federal government got around to cleaning up milk production, after landmark legislation in 1906. The safe milk we drink today is a result of that law. But getting to that point required a decades-long struggle by outraged advocates — among them Chicago activist Jane Addams, writer Upton Sinclair, and, not least, a crusading scientist named Harvey Washington Wiley, the chief chemist of the U.S. Department of Agriculture.
Wiley’s odyssey as a reformer and government official is at the center of a new book by MIT’s Deborah Blum, “The Poison Squad,” just published by Penguin Press. In it, Blum details the nascent 19th-century science and politics of food regulation, from the early efforts to figure out what dangers lay in food, to the torturous struggle to push regulations through the political system.
Wiley made headlines at the time through his efforts to reform food production and make eating safer for all Americans. Today, his legacy is little-known, which is a major reason why Blum wanted to reestablish his importance to us.
“I’m really interested in scientists who drive paradigm shifts,” says Blum, the director of the Knight Science Journalism program at MIT and author of several books on science research and the history of science. Wiley, she adds, was a “catalyst” without whom daily life would have been worse for tens of millions of people.
“I don’t mean the whole world was suddenly convulsed," Blum added. "But he changed the way people think. And that's a reminder that any of us can change the national or global conversation in a way that does good.”
A “really crazy experiment”
Wiley was an Indiana-bred, Harvard University-educated chemist who became an early faculty member at Purdue University. From there, after making his name through studies of multiple kinds of flawed food, he took his post at the U.S. Department of Agriculture.
As Blum makes clear, there was plenty for food reformers to study. As food production became more industrialized in an urbanizing country, food fakery abounded. “Honey” was often corn syrup, and “vanilla” was often alcohol and food coloring. Coffee could contain sawdust, and brown sugar was notoriously spiked with crushed insects at times.
The book’s title stems from one project Wiley undertook, in which he recruited men in their 20s — “the Poison Squad” — to eat three free meals a day. Some of those men were consuming food laced with chemicals of uncertain effect, or other dubious ingredients.
“It’s this really crazy experiment you could never do today, in which a government scientist persuades government employees to dine really dangerously,” Blum notes.
Eventually Wiley — and others — amassed plenty of evidence showing that food regulations were necessary. Getting such legislation passed was a saga of its own, and as Blum details in the book, Wiley had a strained relationship with Theodore Roosevelt, the president who ultimately signed the laws Wiley had been fighting for. In theory, these two reformers might have seemed natural allies. In practice, relations between them were fraught.
“Here you have two men who are both progressive and who both want to change the country for better, and who both believe that the current system of poorly regulated industry and business is doing a disservice to the country,” Blum observes. “So you would expect them to be on the same page, but in fact they clashed from the beginning. … I think basically Roosevelt didn’t like him. And that worked against him [Wiley]. He charmed a lot of people, but he never charmed Teddy Roosevelt.”
One reason for this, Blum suggests, is that while Roosevelt was famous for breaking up monopolies, he was in fact fairly comfortable with big business, under the right conditions. But Wiley was, long before it became more fashionable, a relentless consumer advocate, above all. His unyielding consumer focus did not correspond to Roosevelt’s priorities as much as the president’s reputation might suggest.
“I had to fight my way back to liking Teddy Roosevelt after doing the book,” Blum says.
An education in history
The book, Blum’s eighth, has received praise from experts. Melinda Cep, senior director of policy for the World Wildlife Fund’s U.S. Markets and Food team, has called it “a timely tale about how scientists and citizens can work together on meaningful consumer protections.”
For her part, Blum says she hopes readers come away from “The Poison Squad” thinking that it has “all kinds of lessons for today” about both the discovery of adulterated food products and the enforcement issues that arise once laws are passed. A significant part of Blum’s book, for that matter, scrutinizes the enforcement challenges that arose after the food safety legislation was passed.
To be sure, cases of deception in food production are still with us — many kinds of seafood, for example, are not what they appear on the label. Then there are grey areas, where older laws “are completely inadequate for the 21st century,” as Blum puts it, due to changes in the way food is produced.
Many kinds of drinks do not fully list their ingredients, for instance, due to policy compromises between regulators and industry that allow manufacturers to keep ingredients as proprietary information.
“Natural flavorings,” says Blum. “What are they? You’re never going to know.”
Researching and writing the book, Blum says, was thus instructive to her about the progress of science, as well as the tensions and conflicts that have existed among corporations, consumers, and government — and still do.
“It was an education for me,” Blum says. “An education in American history.”
When Takian Fakhrul was a young girl, her father, then a graduate student in materials science at the University of Manchester, would bring her along to his lab. During these visits, she would peek at structures under the microscopes or watch him polish newly synthesized materials. And she just couldn’t seem to stay silent.
“I used to ask a lot of questions,” says Fakhrul, who is now a fourth-year PhD student in MIT's Department of Materials Science and Engineering. “My dad tells me that I was a super-curious child.”
Fakhrul’s curiosity blossomed further when she was an undergraduate at Bangladesh University of Engineering and Technology in her hometown of Dhaka, Bangladesh. Conversations with her father, who was then a materials science professor there, figured heavily into her decision to major in the same field. They talked about pressing scientific problems, like the limits of existing materials and breakthroughs in materials science that could “really affect the future of technology,” Fakhrul recalls.
Now, working in the lab of Caroline Ross, the Toyota Professor of Materials Science and Engineering, Fakhrul researches how garnets can solve problems in photonics, the study of the technical applications of light.
After finishing her PhD, she plans to return to Bangladesh to teach materials science and mentor students who want to pursue graduate studies. She hopes to help advance the field in her home country, drawing from some of the ingenuity she’s observed at MIT and at the Indian Institute of Technology Delhi, where her colleagues have found creative ways to conduct their materials science research despite having far fewer resources. “I’ll take back my expertise, the connections I make here, and hopefully I’ll be able to create a bridge between MIT and Bangladesh,” she says.
Breaking speed limits
Within computers, data moves between and within chips electronically through small copper wires. In an increasingly technology-dominated world, “computers need to work faster and faster,” Fakhrul says. In order to do that, scientists must design chips and connections that allow faster data transfers and lower power consumption.
“The problem is that there’s a limit to the speed of electrons passing through metal wires,” Fakhrul says. “What’s faster than electrons in metal? The answer to that is light.” And, unlike metal wires, which can carry only a single electronic data stream, optical fibers can carry multiple wavelengths of light — and thus multiple data streams and more bandwidth — without interference. Optical fibers are already being used in networking and storage area networks; the key to advanced optical communication, and maybe even computing with light, is to design fast and energy-efficient optical fiber interconnects that function well on silicon chips.
Fakhrul researches materials for optical isolators — components of lasers used in silicon photonics that provide a one-way path through which light can travel. “It lets light pass in the forward direction, but not backward. And that’s extremely important,” Fakhrul says. “Back reflections going into the laser destabilize the laser, reducing its performance.”
“If you really want to integrate silicon photonics onto a chip, then you definitely need to have this optical isolator integrated as well,” Fakhrul says.
Fakhrul focuses on iron garnets, which readily accommodate chemical substitution — a trait that gives materials scientists the opportunity to design new variations of the material. The transparent nature of garnet allows light to pass through without interference. Iron garnets are also magnetic, so they can rotate the plane of polarization of light as it travels through. “When light passes through garnet, it acts differently in one direction than in the other direction,” Fakhrul says. By manipulating the garnets to design one-way streets for light, she hopes to demonstrate that garnets are the ideal materials for integrated optical isolators. But they also come with a catch.
“[Garnets are] actually very difficult to integrate on silicon,” Fakhrul says. “So that’s something that materials scientists have to deal with and figure out.”
Fakhrul is also interested in how garnets could be used to improve information processing. In an emerging field known as magnonics, information is transferred via the collective precession of spin waves — disturbances that propagate through magnetic materials. In garnets, spin waves “travel for long distances without relaxing,” Fakhrul says, due to their low damping constants.
“You can have this one class of materials, but then it has these unique properties that make it interesting for these versatile applications,” Fakhrul says.
After earning her undergraduate degree, Fakhrul was hired as a lecturer at Bangladesh University of Engineering and Technology and began to teach other materials science students while she completed her master’s in materials science.
During her master’s program, she also married her colleague, Nadim Chowdhury, who was a lecturer in electrical engineering and, like Fakhrul, planning to pursue a PhD.
A month into their marriage, Fakhrul recalls, “I got my acceptance letter from MIT, and he got his acceptance letter from Princeton. We were really thinking about long-distance. But a week later, he got into MIT as well! It was so amazing — it was like a miracle,” Fakhrul says.
Together, Fakhrul and her husband moved to Cambridge, and they began their studies at MIT. To stay in touch with her cultural heritage, Fakhrul became involved in the Bangladeshi Students Association at MIT, which hosts events with national, cultural, and religious significance throughout the year. This year, Fakhrul will be a co-president of the student group after working as a secretary for two years and an organization chair for one.
Fakhrul ultimately plans to return to Dhaka and continue teaching materials science. But she also has an additional goal: to help advance materials science research back home.
During her third year at MIT, Fakhrul attended the IEEE Magnetics Society Summer School — a school for approximately 85 graduate students from around the world studying magnetism, held in Spain. A winning entry in a group project competition gave Fakhrul the opportunity to travel to the Indian Institute of Technology Delhi to visit some of their research labs.
“What I really took back from my visit to IIT Delhi is how creative people can be when they have such limited resources,” Fakhrul says. She cites the availability of instrumentation for materials science research: What may be readily available and easy to order at MIT might be challenging to acquire in Dhaka or Delhi.
“[At IIT Delhi], they had a lot of parts made from local suppliers, and then they imported other parts, and then made a whole thin-film deposition system. It was at least three to four times cheaper. I thought that was really incredible because that’s something I would want to do once I go back to Bangladesh,” Fakhrul says.
Fakhrul actively works with students at her alma mater as a mentor, and hopes to continue that after returning home as well.
“Whenever I get some time off from work, I also like helping students in Bangladesh apply [to graduate programs in the United States],” she says. In total, Fakhrul mentors 10 students, six of whom are currently pursuing PhDs in the United States. At the start of the 2018-19 academic year, one of her mentees began his graduate studies at MIT.
“The feeling was so incredible, because I was the first person from materials science from my school to come here,” Fakhrul says. “It’s really nice when you get to help other people make their dreams come true as well.”
Rapid, sweeping changes in modern life are imposing new challenges upon society — and creating new opportunities as well, said noted columnist Thomas L. Friedman while delivering the fall 2018 Compton Lecture at MIT on Monday.
“We’re in the middle of three giant accelerations,” Friedman said. Changes involving markets, the Earth’s climate, and technology are reshaping social and economic life in powerful ways and putting a premium on “learning faster, and governing and operating smarter,” across the globe, he said.
“Technology is now accelerating at a pace the average human cannot keep up with,” Friedman added, emphasizing a key theme of his talk.
Friedman discussed the year 2007, in particular, as a moment full of innovations and new technologies being brought to market — a moment which “may be understood in time as one of the greatest technological inflection points” in recent history. However, the global recession that soon followed created even more stress, leading to civic repercussions we are confronting today.
“A lot of people got completely dislocated,” Friedman said.
A longtime reporter and columnist for The New York Times, Friedman gave his talk, “Thank You for Being Late: An Optimist’s Guide to Thriving in the Age of Accelerations,” before a large audience in MIT’s Kresge Auditorium. And while Friedman’s remarks warned of the dangers facing society, he also stressed the opportunities open to people around the world.
After all, Friedman noted, changes in communications platforms mean that “anyone can participate in the global conversation” occurring online today.
Shaping the national conversation
The Karl Taylor Compton Lecture Series, which began in 1957, is among MIT's most prominent lecture events. It honors the memory of Karl Taylor Compton, who served as MIT’s president from 1930 to 1948 and chair of the MIT Corporation from 1948 to 1954.
As MIT President L. Rafael Reif stated in his introductory remarks, Compton “guided MIT through the Great Depression and World War II. In the process, he helped the Institute transform itself from an outstanding technical school … to a great global university.”
Moreover, Reif added, Compton, who was himself a physicist, “brought a new focus on fundamental scientific research, and he made science an equal partner with engineering at MIT.”
Recent Compton lectures have been delivered by cellist Yo-Yo Ma, former U.S. Energy Secretary (and former head of the MIT Energy Initiative) Ernest Moniz, and Christine Lagarde, managing director of the International Monetary Fund.
In his remarks Reif also hailed Friedman, a three-time winner of the Pulitzer Prize, saying, “Tom is a global citizen and advocate for creative solutions to complex problems.” Friedman’s writing, Reif noted, “has helped shape the national conversation on the most important issues of our time.”
“Later will be too late”
During much of his talk, Friedman discussed the nature of the transformations in markets, climate, and technology, stating that they were “actually melding together into one giant change” in certain ways.
Changes in the nature of globalization, he said, from the expansion of global commerce to the development of global communication, are one reason why “we’re going from a world that is interconnected to interdependent.”
In his writings, Friedman has long warned of the dangers of climate change, and he underscored the seriousness of the issue in his lecture.
“Later will be too late,” said Friedman, regarding the need for serious climate action.
Meanwhile, Friedman observed, people have never had to adapt to so many significant technological innovations in any previous historical epoch.
“There was no bow and arrow 2.0 in the 13th century,” Friedman added, referring to the more languid pace of technological change in earlier times.
At a time of flux, however, our new technologies may be creating circumstances in which determined individuals can make an impact on the world in ways that might not have been possible before. To do so, he emphasized, often requires creativity.
“Never think in the box,” Friedman said. “Never think outside the box. Think without a box.”
At the end of his talk, Friedman answered audience questions presented by Reif. Among other things, Friedman decried the current state of U.S. politics, saying the political culture has “moved from partisan to tribal,” and warning that the U.S. could be facing civil strife reminiscent of the turmoil he covered in Lebanon at the start of his career, in the late 1970s and early 1980s.
On the other hand, Friedman added, he is “still a huge believer in America,” based mostly on the efforts of everyday citizens to “meld together” in an inclusive society of opportunity.
“If you want to be an optimist today about America, stand on your head,” Friedman said. “It looks so much better from the bottom up.”
Many microbes have an enzyme that can convert carbon dioxide to carbon monoxide. This reaction is critical for building carbon compounds and generating energy, particularly for bacteria that live in oxygen-free environments.
This enzyme is also of great interest to researchers who want to find new ways to remove greenhouse gases from the atmosphere and turn them into useful carbon-containing compounds. Current industrial methods for transforming carbon dioxide are very energy-intensive.
“There are industrial processes that do these reactions at high temperatures and high pressures, and then there’s this enzyme that can do the same thing at room temperature,” says Catherine Drennan, an MIT professor of chemistry and biology and a Howard Hughes Medical Institute Investigator. “For a long time, people have been interested in understanding how nature performs this challenging chemistry with this assembly of metals.”
Drennan and her colleagues at MIT, Brandeis University, and Aix-Marseille University in France have now discovered a unique aspect of the structure of the “C-cluster” — the collection of metal and sulfur atoms that forms the heart of the enzyme carbon monoxide dehydrogenase (CODH). Instead of forming a rigid scaffold, as had been expected, the cluster can actually change its configuration.
“It was not what we expected to see,” says Elizabeth Wittenborn, a recent MIT PhD recipient and the lead author of the study, which appears in the Oct. 2 issue of the journal eLife.
A molecular cartwheel
Metal-containing clusters such as the C-cluster perform many other critical reactions in microbes, including splitting nitrogen gas, that are difficult to replicate industrially.
Drennan began studying the structure of carbon monoxide dehydrogenase and the C-cluster about 20 years ago, soon after she started her lab at MIT. She and another research group each came up with a structure for the enzyme using X-ray crystallography, but the structures weren’t quite the same. The differences were eventually resolved and the structure of CODH was thought to be well-established.
Wittenborn took up the project a few years ago, in hopes of nailing down why the enzyme is so sensitive to inactivation by oxygen and determining how the C-cluster gets put together.
To the researchers’ surprise, their analysis revealed two distinct structures for the C-cluster. The first was an arrangement they had expected to see — a cube consisting of four sulfur atoms, three iron atoms, and a nickel atom, with a fourth iron atom connected to the cube.
In the second structure, however, the nickel atom is removed from the cube-like structure and takes the place of the fourth iron atom. The displaced iron atom binds to a nearby amino acid, cysteine, which holds it in its new location. One of the sulfur atoms also moves out of the cube. All of these movements appear to occur in unison, in a movement the researchers describe as a “molecular cartwheel.”
“The sulfur, the iron, and the nickel all move to new locations,” Drennan says. “We were really shocked. We thought we understood this enzyme, but we found it is doing this unbelievably dramatic movement that we never anticipated. Then we came up with more evidence that this is actually something that’s relevant and important — it’s not just a fluke thing but part of the design of this cluster.”
The researchers believe that this movement, which occurs upon oxygen exposure, helps to protect the cluster from completely and irreversibly falling apart in response to oxygen.
“It seems like this is a safety net, allowing the metals to get moved to locations where they’re more secure on the protein,” Drennan says.
Douglas Rees, a professor of chemistry at Caltech, described the paper as “a beautiful study of a fascinating cluster conversion process.”
“These clusters have mineral-like features and it might be thought they would be ‘as stable as a rock,’” says Rees, who was not involved in the research. “Instead, the clusters can be dynamic, which confers upon them properties that are critical to their function in a biological setting.”
Not a rigid scaffold
This is the largest metal shift that has ever been seen in any enzyme cluster, but smaller-scale rearrangements have been seen in some others, including a metal cluster found in the enzyme nitrogenase, which converts nitrogen gas to ammonia.
“In the past, people thought of these clusters as really being these rigid scaffolds, but just within the last few years there’s more and more evidence coming up that they’re not really rigid,” Drennan says.
The researchers are now trying to figure out how cells assemble these clusters. Learning more about how these clusters work, how they are assembled, and how they are affected by oxygen could help scientists who are trying to copy their action for industrial use, Drennan says. There is a great deal of interest in coming up with ways to combat greenhouse gas accumulation by, for example, converting carbon dioxide to carbon monoxide and then to acetate, which can be used as a building block for many kinds of useful carbon-containing compounds.
“It’s more complicated than people thought. If we understand it, then we have a much better chance of really mimicking the biological system,” Drennan says.
The research was funded by the National Institutes of Health and the French National Research Agency.
From Oct. 8 to 14, MIT will once again host an array of innovation-based events welcoming the public to HUBweek, the Greater Boston area’s annual “festival of the future.”
Kendall Square will serve as a key stage throughout HUBweek, now in its fourth year of convening innovators and celebrating and showcasing their work. Many of HUBweek’s 225-plus events will involve MIT faculty, students, alumni, and affiliates, ranging from an interactive “Innovation Playground” in the heart of Kendall to a policy “hackathon” hosted by MIT at the HUB, the main venue in Boston’s City Hall Plaza.
MIT was one of the founding partners of HUBweek in 2015, along with Harvard University, The Boston Globe, and Massachusetts General Hospital. An estimated 50,000 people attended last year’s festival, including participants from 49 countries. Building on that success, HUBweek organizers have added an extra day to the proceedings and several new events this year.
“Each year the number of organizations that are part of creating and building HUBweek continues to grow,” says Linda Pizzuti Henry, co-founder and chair of HUBweek. “We remain lucky to have the support of great partners like MIT, to continue to offer the majority of HUBweek events for free in order to eliminate traditional barriers to these kinds of experiences. MIT continues to be a heavily engaged leading force in the evolution and growth of HUBweek.”
MIT will help kick off HUBweek’s first day with the “Policy Hackathon,” unfolding across two days, Oct. 8 and 9 at City Hall Plaza. Hosted by the students of MIT’s Institute for Data Systems and Society (IDSS), and open to the public, the event will bring together students, data scientists, policy and urban planning researchers, and concerned citizens to tackle big societal challenges. Working from a set of challenges proposed by the city of Boston, interdisciplinary teams will roll up their sleeves and devise creative, data-driven solutions in partnership with city agencies.
“It’s been marvelous how HUBWeek has been gaining momentum,” says Kathleen Kennedy, director of special projects at MIT and co-founder and vice chair of HUBweek. “We have numerous MIT partners contributing events. HUBweek is a platform for us to open our doors and showcase the exciting work that’s happening across campus and Kendall Square. People can see, feel, and touch what’s actually happening.”
Both the present vitality and future visions of Kendall Square will be on display throughout the day on Oct. 9, during several events that are free and open to the public.
That morning, from 7:30 to 11:30 a.m. at MIT’s Wong Auditorium, attendees can learn about MIT’s role in the local innovation ecosystem at “Inside the Dome: MIT and the Future of Kendall Square.” Hosted by the MIT Club of Boston, the event will feature presentations and panel discussions with key MIT leaders, including Israel Ruiz, executive vice president and treasurer of MIT, and Elisabeth Reynolds, executive director of the MIT Industrial Performance Center.
From there, participants can head to 292 Main St., where the “Innovation Playground” will offer a lively, interactive scene from 12 to 8 p.m. All are welcome to participate in a range of exhibits and activities, from teleporting to tech hubs around the world, to mixing music on the turntables with a local DJ, to drawing with laser graffiti and coloring in a life-sized coloring book in a project led by the Cambridge-based Community Art Center.
That afternoon, Cambridge residents and visitors alike are also welcome at an open house hosted by the Martin Trust Center for MIT Entrepreneurship — where MIT students learn how to launch ventures — from 1 to 2 p.m., and at an MIT List Visual Arts Center Public Art Tour highlighting art and architecture across campus, from 2 to 3 p.m.
The day will be capped by a ground-breaking ceremony at 5:30 p.m. for 314 Main St., the future home of the MIT Museum and a range of commercial tenants including the Boeing Aerospace and Autonomy Center. The event will feature remarks by MIT Provost Marty Schmidt, Cambridge Mayor Marc McGovern, Boeing Chief Technology Officer Greg Hyslop, MIT Museum Advisory Board Chair Phillip Sharp, and Community Art Center Executive Director Eryn Johnson. The public is invited to bring a small object to add to the building's time capsule, and to stay for a reception immediately following the ceremony.
“These free and open-to-the-public Kendall Square events are intended to bring the community together in a celebration of art, science, and technology,” says Sarah Gallop, co-director of government and community relations. “The inclusive nature of the activities represents one of MIT’s primary goals in Kendall — to help create a fun, welcoming, and inviting environment for all.”
Friday, Oct. 12 will be another busy day of MIT-related events. That morning, in the Ideas Dome in City Hall Plaza, the winners of the 2017-2018 competition hosted by MIT’s Climate CoLab — an open problem-solving platform from the Center for Collective Intelligence — will present their proposals. Later, at 5 p.m. at the HUB, the winning outcomes of the IDSS Policy Hackathon will be presented, laying out policy proposals related to the future of cities, health, and work. And nearby that afternoon, on the HUB Center Stage, New York Times columnist Maureen Dowd will moderate a discussion on the future of cities with three luminaries from the MIT Media Lab: Neri Oxman, architect and associate professor of media arts and sciences; Pattie Maes, professor of media technology; and Rosalind Picard, professor of media arts and sciences and founder and director of the Affective Computing Research Group. (This event is part of the Hub Forum, which requires registration and a fee to attend.)
On Oct. 13, Demo Day, a yearly HUBweek fixture, will bring together hundreds of Boston-area startups and veteran and aspiring entrepreneurs. More than two dozen MIT-affiliated companies will be on hand, participating in sessions full of advice on launching and leading new ventures, business showcases, and the pitch competition. The day will culminate with the six competition finalists presenting their ideas to expert judges, and the selection of a grand prize winner — the climax of three months and four rounds of judging. (MIT has the largest number of affiliated companies and entrepreneurs participating in Demo Day of any HUBweek partner. The winner of last year’s competition, a venture using robotics to find leaks in water distribution pipes, was founded by MIT alumnus You Wu PhD ’18.)
A new addition to HUBweek this year is the Change Maker Conference, which will bring together more than 200 artists, activists, researchers, entrepreneurs, and problem-solvers for a focused, two-day multidisciplinary experience. Joi Ito, director of the MIT Media Lab, will deliver the keynote address, and several other MIT speakers will be featured, including Julie Newman, director of sustainability; David Kong, director of the Community Biotechnology Initiative; and Rashin Fahandej, an artist, filmmaker, and MIT research fellow. The winners of innovation competitions held by the MIT Enterprise Forum in Greece and Poland will also travel to Boston to attend the conference; several are MIT alumni.
Even as HUBweek shines a roving light on the many MIT players in the Greater Boston innovation ecosystem, it also offers an opportunity for the broader MIT community to explore and learn, through events such as a family-friendly Robot Block Party on Oct. 14, featuring 20 robots developed by Boston-area and other companies.
“HUBweek is a unique showcase of the region’s role as an innovative powerhouse,” says Jessie Schlosser Smith, MIT’s director of open space programming. “MIT’s events share a signature story of curiosity, collaboration, and creativity, and aim to bring people from diverse business sectors and neighborhoods together.”
This year's HUBweek theme — “We the Future” — emphasizes inclusivity. “MIT is part of a larger community in Cambridge and beyond, and with these programs, we express a warm invitation to all — to participate, learn, and play,” says Smith.
MIT Senior Associate Director of Athletics John Benedick has announced his retirement, effective at the end of the 2018-19 academic year.
“It is difficult to put into a few words the experience of a lifetime,” says Benedick. “I have been incredibly fortunate to have had the privilege to work with so many talented and giving individuals at MIT. I have learned many valuable lessons from our student-athletes and my colleagues that I will never forget and will always appreciate. MIT and the people who make MIT have given me a great gift by allowing me to be part of this amazing community. I look forward to the continued success of DAPER and our student-athletes.”
He is currently in his 44th year as a member of the MIT Athletics staff and is the senior associate director of athletics for sports administration. In his role, Benedick oversees the development of operational and budget processes and procedures for the intercollegiate athletics program. He is also the director of NCAA Compliance for the Institute’s 33 intercollegiate teams, and he played a key role in the design of the swimming and diving complex in the Zesiger Center.
“John has been the quintessential professional throughout his career at MIT,” says Julie Soriero, MIT director of athletics. “From his time on the pool deck as a coach to his attention to detail with compliance as an athletic administrator, everyone on our staff has respected his dedication, knowledge and service to MIT as well as our department. He has served as a strong and thoughtful leader who always led with integrity and professionalism. I speak for many when I say thank you and wish him the best as he steps into retirement.”
Prior to moving into administration, Benedick spent 22 years coaching the men’s and women’s swimming and diving teams, as well as serving as the head coach of the men’s water polo team.
As a coach, Benedick had over 50 of his student-athletes earn All-America honors, along with numerous conference champions and three national champions. He was twice named Coach of the Year by the New England Intercollegiate Swimming Association, and he served as president of both the Collegiate Water Polo Association and the New England Intercollegiate Swimming Association.
Benedick was also the secretary-treasurer for the New England Water Polo Association, served on the NCAA Rules Committee for Water Polo, and was a member of the NCAA’s Women and Minorities Strategic Alliance Grant Committee. He has also served on numerous athletic department review committees for NCAA institutions.
Benedick graduated from California State University at Hayward with a bachelor of science degree in physical education. He then went on to earn his master’s degree in motor learning and sport sociology from the University of California at Berkeley.
As a swimmer and water polo player at Cal State Hayward, Benedick was an All-American in three individual events and two relays. He was also the team captain and Far Western Conference champion in the 50-yard freestyle. In 2012, he was inducted into the Collegiate Water Polo Association Hall of Fame.
The MITx MicroMasters program recently added more pathway institutions, offering learners from around the globe enhanced access to “blended” master’s programs. Learners who pass an integrated set of MITx graduate-level courses on edX.org, and one or more proctored exams, will earn a MicroMasters credential from MITx, and can then apply to enter an accelerated, on-campus master’s degree program at MIT or other top universities that participate in the growing pathways program.
“We are proud to be driving increased access to higher education and career advancement through MicroMasters credentials and blended master’s programs for learners around the globe,” says MIT Dean for Digital Learning Krishna Rajagopal, “allowing them to more effectively balance their professional and personal lives with learning. The growing number of pathway institutions offer great on-campus experiences and enable accelerated, convenient, and more affordable access to a master’s degree.”
The recently added pathway institutions are:
Instituto Tecnológico Autónomo de México (ITAM, in Mexico) for DEDP;
Hong Kong Polytechnic University (Hong Kong) for SCM;
Sasin Graduate Institute of Business Administration of Chulalongkorn University (Thailand) for SCM, DEDP, and the programs in manufacturing principles and statistics and data science; and
Duale Hochschule Baden-Württemberg (Germany) for SCM.
To date, 19 pathway institutions in 11 countries offer 58 different pathways to a master’s degree.
Benefits of the pathways for global learners
The pathway network enables MicroMasters credential holders, who are typically working professionals, to obtain a master’s degree from MIT or from a growing number of pathway institutions whose campuses may be geographically convenient to them.
Learners from around the globe, especially those who are well into their professional lives, may find it impossible to commit to a full-time one-year or two-year on-campus master’s program. It’s not merely a problem of time and money, but also of making sacrifices in their professional and family lives as they invest in higher education.
With its growing network of pathway institutions, the MicroMasters program changes the calculus for global learners. They can begin with flexible, cost-effective online MicroMasters courses that let them keep working while earning the MicroMasters credential, a valuable professional and academic achievement in and of itself. Many credential recipients see immediate professional benefits and advance their careers. MITx credential recipients can also pursue a master’s degree by entering an on-campus degree program and receiving credit for their MicroMasters courses, shortening the residential requirement.
To shed light on the benefits of this pathway to a master’s degree, three learners who recently completed their master’s degrees in supply chain management shared their insights on the program.
Dan Covert was already working as a supply chain professional for global retailer Ahold Delhaize (the Dutch owner of the Stop & Shop supermarket chain) when he realized he “didn’t understand the core fundamentals of running a supply chain for a global company.” Covert had another problem: “I didn’t see a path forward for a master’s degree. I just wasn’t willing to leave my job, commit two years to a master’s program, and take on those financial burdens.” He signed up for cost-effective online MicroMasters courses in SCM, learning at night and on weekends: “It was the perfect way for me to keep working full-time while dipping my toe into higher education,” he says.
Like Dan Covert, Ramon Paulino took online SCM courses and eventually earned his master’s in June, coming onto the MIT campus for one semester. While taking his final online course, Paulino decided to pursue the on-campus portion. “I really liked what I was learning, and had this appetite to keep the momentum going after I’d tested the waters.” Paulino also mentions the low financial investment and accelerated, accessible nature of the blended master’s: “I don’t think I could have committed to even a full-year, on-campus program,” he says, “because of the amount of money and the burdens of fitting education into my professional and private life.” For example, Paulino remembers “taking an online test at the airport while traveling for my consulting company.”
Paulino says his wife was crucial in helping him balance work, life, and learning: “she kept working while I spent the few months on-campus at MIT. It made things so much easier because I only had to spend a short time on campus, versus an expensive one-year commitment while not working. Being able to compress that time made it all possible,” he says.
Rafaela Nunes, now working in São Paulo, Brazil, emphasizes three points when asked to explain the benefits of the online courses: “Accessibility, meaning I could study from anywhere; flexibility, meaning I could learn around my work schedule; and affordability of cost.” All three SCM learners interviewed said the online courses prepared them well for the accelerated on-campus learning experience. And all three maintain that the collaborative, face-to-face nature of the on-campus experience was critical for their learning. Nunes describes her overall experience in the SCM blended master’s program as “intense, unforgettable and of immeasurable value to my future.”
Benefit of the pathways for MIT, institutions, and companies
Pathways don’t just benefit global learners by offering a crucial on-campus learning experience; they also benefit the pathway institutions and companies looking for cutting-edge talent. For instance, some pathway institutions are already using the MITx MicroMasters curriculum as a model and integrating parts of it into their own programs. Moreover, being part of the growing MITx MicroMasters pathway network gives these institutions access to talented, well-prepared students who have demonstrated a commitment to learning by mastering graduate-level MITx coursework to earn the credential, and who may not otherwise have considered completing a master's degree.
Tracy Tan, director of the MicroMasters Program, adds that the pathways program “helps advance MIT’s educational mission of promoting access to world-class learning, and allows MIT to make a greater global impact with its world-renowned educational content.” By making learning more accessible for working professionals around the globe, geographically accessible pathways even help global companies access more talent in an array of professional areas.
From symbol classification in the brain to understanding built-in versus learned knowledge in children, the research ideas associated with the MIT Quest for Intelligence are pushing the boundaries of the field.
In a packed auditorium on a recent afternoon, leaders of The Quest hosted a workshop to discuss their mission and initial progress, as well as to brainstorm new project ideas with members of the MIT community. The workshop aimed to leverage the collective brainpower of MIT’s network to spark innovative and ambitious ideas that could advance research in human and machine intelligence.
“The goal of The Quest is to do fundamental research in our understanding of what is intelligence, and then use those discoveries to build applications that will solve real problems,” said Antonio Torralba, director of The Quest, as he kicked off the workshop. “One of the key aspects for this enterprise to be successful is that we need involvement from the whole community of MIT.”
Launched last spring, The Quest is a multidisciplinary effort to bolster MIT’s longstanding leadership in studying and engineering intelligence. Torralba, a professor of electrical engineering and computer science who also heads the MIT-IBM Watson AI Lab, and other MIT colleagues discussed The Quest’s mission, which is to address two crucial questions in the field: Can we reverse-engineer intelligence? How can we use our understanding of intelligence to create tools and strategies that make a difference in society?
“These are simultaneously the greatest open questions in the natural sciences and the most important engineering challenges of our time,” said James DiCarlo, director of The Quest Core and head of the Department of Brain and Cognitive Sciences. “These two great challenges are really at an intersection right now.”
Noting that the study of intelligence goes far beyond brain science and computer science, Torralba, DiCarlo, and their collaborators encouraged researchers from across the Institute's five schools to reach out if their work could in any way integrate with the research and education missions of The Quest. Many speakers noted that MIT is particularly well-suited to these collaborative efforts due to the range of expertise found on campus and in the broader Institute community.
Nicholas Roy, director of The Quest Bridge and a professor of aeronautics and astronautics, noted that one important goal of The Quest is to reduce the barrier to entry for tools related to intelligence. Roy described his team’s plans to develop a campus-wide data management platform, bring together hardware and software resources, and provide software engineers to benefit activities in classrooms, research laboratories, and across campus.
“We want to do all of this in the context of a community,” said Roy. “It’s important for us to recognize that as we put together a platform of tools and services for AI for the campus and the broader world, we do it in a way that is ethical, transparent, and unbiased. We need help doing all of these things.”
Leaders within The Quest each shared brief updates on their current work in intelligence and opened the floor to discussion, questions, and idea sharing from workshop attendees.
Leslie Kaelbling, co-scientific director of The Quest Core and a professor of computer science and engineering, described her work to develop physical robots to test and understand fundamental questions in artificial intelligence. She noted that while robots present unique challenges, they allow researchers to integrate multiple aspects of intelligence at once and garner concrete data about how we learn and interact with the world around us.
“Every single time I’ve sat in my office and imagined what that data is going to be like, I’ve been wrong,” said Kaelbling. “I don’t trust myself or really anyone else to imagine what the sensorimotor experience of a robot is going to be like. I think you can do a lot in simulation, but you should eventually touch the world.”
Looking beyond the workshop, Aude Oliva, executive director of The Quest, noted that the team had put out a call for white papers outlining ideas for ambitious projects. She reiterated that individuals from any of the five schools are welcome to build on the momentum of the brainstorming workshop and submit white papers or email them to firstname.lastname@example.org by Nov. 1.
The team also shared that thanks to several generous sponsors — including former Alphabet executive chairman Eric Schmidt and his wife Wendy; the MIT-IBM Watson AI Lab; and the MIT-SenseTime Alliance on Artificial Intelligence — The Quest will fund up to 100 undergraduate students to conduct innovative Quest-associated research through the MIT Undergraduate Research Opportunities Program.
By involving as much of the MIT community as possible, The Quest team said they aim to identify and grow the ideas that are most likely to expand our understanding of intelligence and to enable application of this knowledge to real-world problem solving.
The team plans to host several more workshops and events to engage with interested members of the community throughout the fall and spring.
When Juan Ruiz Ruiz arrived at MIT, he was not planning a career in fusion research. A visiting student from École Polytechnique in Paris, he was finishing his master’s degree project and joining the Department of Aeronautics and Astronautics, ready to pursue further studies at MIT’s Space Propulsion Lab.
But something about fusion energy sparked his interest and offered a new path.
Introduced to MIT’s Plasma Science and Fusion Center (PSFC) through the weekly seminar series, Ruiz changed course, finishing his MIT master's under the direction of Nuclear Science and Engineering (NSE) professor Anne White, and eventually joining her department to work on his PhD at the PSFC.
“The potential of fusion energy is immense,” he explains. “I just wanted to contribute to this effort.”
MIT has been at the forefront of fusion research for decades, most significantly via the PSFC’s series of “Alcator” tokamaks, known for their compact size and high magnetic fields. Tokamaks create fusion in donut-shaped vacuum chambers, confining the hot, chaotic plasma fuel with magnets that wrap around the chamber. The soup of electrons and ions that comprise the plasma naturally follows the magnetic field lines, staying confined and away from the machine walls.
Ruiz arrived at MIT in time to witness the final years of experiments on the Center’s Alcator C-Mod tokamak, which ended its run in September 2016 after two decades. His focus now is on a PSFC collaboration with Princeton Plasma Physics Laboratory (PPPL) and the National Spherical Torus Experiment (NSTX).
“A regular tokamak is like a donut,” explains Ruiz. “The spherical tokamak at NSTX is like a donut that you compress, making the contour almost a complete sphere, with a central hole that is much smaller, almost the width of a thin wire.”
Ruiz is researching how to keep the plasma in a tokamak hot enough for fusion to take place. This is challenging because the hottest particles in the plasma, found in the core, leak towards the cooler areas at the edges, creating a plasma that will not be hot enough to sustain fusion. One of the factors pushing the heat to cooler areas is the turbulence of the plasma.
Ruiz explains that historically, the spherical tokamak was developed as a way of reducing that turbulence. In the 1980s, researchers realized through modeling that designing a tokamak that was more compact and more spherical would reduce the heat loss. NSTX was built at PPPL in part to explore this issue. Experiments performed on the device showed that, under certain conditions, large-scale (or ion-scale) turbulence was indeed suppressed in the spherical tokamak, as expected. What researchers did not predict is that small-scale fluctuations, at the electron scale, suddenly gained more importance, making the overall heat loss worse than in standard tokamaks.
“You would not expect small scale fluctuations to be that important,” Ruiz says. “But it turns out that, in conditions where ion scale turbulence is suppressed, as observed in spherical tokamaks, the electron-scale turbulence becomes super important.”
To study this phenomenon, NSTX researchers built a high-k scattering diagnostic, which for the first time could measure these small-scale fluctuations in the plasma.
“We send a beam of microwaves into the tokamak and propagate it through the plasma,” Ruiz says. “Then we detect how the plasma interacts with — or scatters — these microwaves, which provides us with information about the turbulence in the plasma.”
The system works a bit like the radar speed guns police use to track cars: the scattered waves are shifted in frequency by the motion of the electrons.
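The radar-gun analogy can be made concrete with a back-of-the-envelope calculation. The sketch below computes the Doppler shift of a backscattered beam; the probe frequency and drift speed are illustrative assumptions, not actual NSTX parameters.

```python
# Toy illustration of the Doppler-shift principle behind the scattering
# diagnostic. Numbers are illustrative, not NSTX parameters.

C = 3.0e8  # speed of light, m/s

def backscatter_doppler_shift(f0_hz, v_mps):
    """Frequency shift of radiation backscattered from scatterers moving
    at speed v_mps toward the source (non-relativistic approximation)."""
    return 2.0 * v_mps / C * f0_hz

# A 300 GHz microwave beam scattered by fluctuations drifting at 100 km/s:
shift_hz = backscatter_doppler_shift(300e9, 1.0e5)
print(f"Doppler shift: {shift_hz / 1e6:.0f} MHz")  # -> 200 MHz
```

Even a tiny fraction of the speed of light produces a shift of hundreds of megahertz, which is why the frequency of the scattered beam carries usable information about the turbulent motion.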
While this diagnostic was useful for probing the turbulent state of the plasma, it still could not provide answers about how heat was leaking out. For that, researchers needed to run turbulence simulations, requiring state-of-the-art supercomputers.
Ruiz has developed a “synthetic diagnostic,” a computer code that models how the experiment works. His goal is to replicate the results of the high-k scattering experiments so that he can be confident his simulation provides an accurate picture of the turbulence in the NSTX plasma, and of where the heat is going.
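The core idea of a synthetic diagnostic is to filter the simulation output through the instrument's sensitivity, so the code predicts what the hardware would actually measure. Here is a minimal sketch of that idea; the spectrum shape, probed wavenumber, and Gaussian response are all invented for illustration and are not the real NSTX high-k diagnostic response.

```python
import numpy as np

# Minimal sketch of a synthetic diagnostic: weight a simulated turbulence
# spectrum by the instrument's response so the simulation predicts the
# measured signal. All shapes and parameters are illustrative.

k = np.linspace(0.1, 30.0, 300)                        # normalized wavenumber grid
simulated_spectrum = np.exp(-((k - 12.0) / 4.0) ** 2)  # toy electron-scale peak

def synthetic_signal(spectrum, k, k_probe, width):
    """Integrate the spectrum against a Gaussian instrument response:
    the diagnostic only 'sees' fluctuations near its probed wavenumber."""
    response = np.exp(-((k - k_probe) / width) ** 2)
    dk = k[1] - k[0]
    return float(np.sum(spectrum * response) * dk)

# Predicted scattered power at the probed wavenumber, to be compared
# against the measured signal at that same point in the plasma:
predicted = synthetic_signal(simulated_spectrum, k, k_probe=12.0, width=1.0)
```

If the predicted and measured signals agree at the probed wavenumber, that builds confidence that the simulation captures the turbulence elsewhere too, which is exactly the comparison Ruiz describes below.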
Ruiz is intrigued by the difficulty of evaluating turbulence in fusion plasmas.
“If you have a neutral fluid, such as water, and add a drop of dye to it, you can see the turbulence in the fluid, and how it is everywhere,” he explains. “In our experiments we have plasmas at millions of degrees, and we are only measuring one point in the plasma. It is like being completely blind. You ask: ‘From a measurement of one point, how can I extrapolate anything? How can I determine what is going on elsewhere in the plasma?’”
Ruiz is comparing that one point in the high-k scattering experiment to the same point in his simulation to see how well they agree, projecting that if they agree at one point, he will have reason to believe they could agree at other places throughout the plasma. In this way he can obtain a complete picture of where the heat is going in the spherical tokamak.
“It is still a big leap of faith,” he acknowledges. “But it’s as good as we have so far.”
His research is relevant not only to spherical tokamaks, but to high-performance plasmas in standard tokamaks, including those that will be generated in ITER, the next generation fusion device being built in France, and SPARC, the PSFC’s planned high-field, net fusion energy experiment.
Ruiz is energized not only by the challenges of fusion research, but by participating in MIT activities. He is Treasurer of MIT’s Soccer and Judo Clubs, and Vice President of the Spanish Association. He speaks passionately about the MIT Global Start-Up Workshop, which organizes a yearly conference around the theme of entrepreneurship, start-ups and ecosystems, always in a different area of the world.
But he is driven most by the complex questions that arise doing fusion research.
“When you get into it you realize how much physics is involved, and how rich it is,” he says. “People have been working on solving the fusion puzzle for fifty years. I just want to contribute to fusion with my unique piece of the overall picture.”
Biopharmaceuticals, a class of drugs comprising proteins such as antibodies and hormones, represent a fast-growing sector of the pharmaceutical industry. They’re increasingly important for “precision medicine” — drugs tailored toward the genetic or molecular profiles of particular groups of patients.
Such drugs are normally manufactured at large facilities dedicated to a single product, using processes that are difficult to reconfigure. This rigidity means that manufacturers tend to focus on drugs needed by many patients, while drugs that could help smaller populations of patients may not be made.
To help make more of these drugs available, MIT researchers have developed a new way to rapidly manufacture biopharmaceuticals on demand. Their system can be easily reconfigured to produce different drugs, enabling flexible switching between products as they are needed.
“Traditional biomanufacturing relies on unique processes for each new molecule that is produced,” says J. Christopher Love, a professor of chemical engineering at MIT and a member of MIT’s Koch Institute for Integrative Cancer Research. “We’ve demonstrated a single hardware configuration that can produce different recombinant proteins in a fully automated, hands-free manner.”
The researchers have used this manufacturing system, which can fit on a lab benchtop, to produce three different biopharmaceuticals, and showed that they are of comparable quality to commercially available versions.
Love is the senior author of the study, which appears in the journal Nature Biotechnology. The paper’s lead authors are graduate students Laura Crowell and Amos Lu, and research scientist Kerry Routenberg Love.
A streamlined process
Biopharmaceuticals, which usually have to be injected, are often used to treat cancer, as well as other diseases including cardiovascular disease and autoimmune disorders. Most of these drugs are produced in “bioreactors” where bacteria, yeast, or mammalian cells churn out large quantities of a single drug. These drugs must be purified before use, so the entire production process can include dozens of steps, many of which require human intervention. As a result, it can take weeks to months to produce a single batch of a drug.
The MIT team wanted to come up with a more agile system that could be easily reprogrammed to rapidly produce a variety of different drugs on demand. They also wanted to create a system that would require very little human oversight while maintaining the high quality of protein required for use in patients.
“Our goal was to make the entire process automated, so once you set up our system, you press ‘go’ and then you come back a few days later and there’s purified, formulated drug waiting for you,” Crowell says.
One key element of the new system is that the researchers used a different type of cell in their bioreactors — a strain of yeast called Pichia pastoris. Yeast can begin producing proteins much faster than mammalian cells, and they can grow to higher population densities. Additionally, Pichia pastoris secretes only about 150 to 200 proteins of its own, compared to about 2,000 for Chinese hamster ovary (CHO) cells, which are often used for biopharmaceutical production. This makes the purification process for drugs produced by Pichia pastoris much simpler.
The researchers also greatly reduced the size of the manufacturing system, with the ultimate goal of making it portable. Their system consists of three connected modules: the bioreactor, where yeast produce the desired protein; a purification module, where the drug molecule is separated from other proteins using chromatography; and a module in which the protein drug is suspended in a buffer that preserves it until it reaches the patient.
In this study, the researchers used their new technology to produce three different drugs: human growth hormone; interferon alpha 2b, which is used to treat cancer; and granulocyte colony-stimulating factor (GCSF), which is used to boost the immune systems of patients receiving chemotherapy.
They found that for all three molecules, the drugs produced with the new process had the same biochemical and biophysical traits as the commercially manufactured versions. The GCSF product behaved comparably to a licensed product from Amgen when tested in animals.
Reconfiguring the system to produce a different drug requires simply giving the yeast the genetic sequence for the new protein and replacing certain modules for purification. With colleagues at Rensselaer Polytechnic Institute, the researchers also designed software that helps to come up with a new purification process for each drug they want to produce. Using this approach, they can come up with a new procedure and begin manufacturing a new drug within about three months. In contrast, developing a new industrial manufacturing process can take 18 to 24 months.
The ease with which the system switches between production of different drugs could enable many different applications. For one, it could be useful for producing drugs to treat rare diseases. Currently, such diseases have few treatments available, because it’s not worthwhile for drug companies to devote an entire factory to producing a drug that is not widely needed. With the new MIT technology, small-scale production of such drugs could be easily achieved, and the same machine could be used to produce a wide variety of such drugs.
Another potential use is producing small quantities of drugs needed for “precision medicine,” which involves giving patients with cancer or other diseases drugs that are specific to a genetic mutation or other feature of their particular disease. Many of these drugs are also needed in only small quantities.
“This paper is an important breakthrough in the possibility to produce and develop biotherapeutics at the point of care, and makes personalized medicine a reality,” says Huub Schellekens, a professor of medical biotechnology at Utrecht University in the Netherlands, who was not involved in the research.
These machines could also be deployed to regions of the world that do not have large-scale drug manufacturing facilities.
“Instead of centralized manufacturing, you can move to decentralized manufacturing, so you can have a couple of systems in Africa, and then it’s easier to get those drugs to those patients rather than making everything in North America, shipping it there, and trying to keep it cold,” Crowell says.
This type of system could also be used to rapidly produce drugs needed to respond to an outbreak such as Ebola.
The researchers are now working on making their device more modular and portable, as well as experimenting with producing other therapies, including vaccines. The system could also be deployed to speed up the process of developing and testing new drugs, the researchers say.
“You could be prototyping many different molecules because you can really build processes that are simple and fast to deploy. We could be looking in the clinic at a lot of different assets and making decisions about which ones perform the best clinically at an early stage, since we could potentially achieve the quality and quantity necessary for those studies,” Routenberg Love says.
The research was funded by the Defense Advanced Research Projects Agency, SPAWAR Systems Center Pacific, and the Koch Institute Support (core) Grant from the National Cancer Institute.