MIT Latest News
In the Iron Man movies, Tony Stark uses a holographic computer to project 3-D data into thin air, manipulate them with his hands, and find fixes to his superhero troubles. In the same vein, researchers from MIT and Brown University have now developed a system for interactive data analytics that runs on touchscreens and lets everyone — not just billionaire tech geniuses — tackle real-world issues.
For years, the researchers have been developing an interactive data-science system called Northstar, which runs in the cloud but has an interface that supports any touchscreen device, including smartphones and large interactive whiteboards. Users feed the system datasets, and manipulate, combine, and extract features on a user-friendly interface, using their fingers or a digital pen, to uncover trends and patterns.
In a paper being presented at the ACM SIGMOD conference, the researchers detail a new component of Northstar, called VDS for “virtual data scientist,” that instantly generates machine-learning models to run prediction tasks on a user’s dataset. Doctors, for instance, can use the system to help predict which patients are more likely to have certain diseases, while business owners might want to forecast sales. On an interactive whiteboard, everyone can also collaborate in real time.
The aim is to democratize data science by making it easy to do complex analytics, quickly and accurately.
“Even a coffee shop owner who doesn’t know data science should be able to predict their sales over the next few weeks to figure out how much coffee to buy,” says co-author and long-time Northstar project lead Tim Kraska, an associate professor of electrical engineering and computer science at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and founding co-director of the new Data Systems and AI Lab (DSAIL). “In companies that have data scientists, there’s a lot of back and forth between data scientists and nonexperts, so we can also bring them into one room to do analytics together.”
VDS is based on an increasingly popular technique in artificial intelligence called automated machine-learning (AutoML), which lets people with limited data-science know-how train AI models to make predictions based on their datasets. Currently, the tool leads the DARPA D3M Automatic Machine Learning competition, which every six months decides on the best-performing AutoML tool.
Joining Kraska on the paper are: first author Zeyuan Shang, a graduate student, and Emanuel Zgraggen, a postdoc and main contributor of Northstar, both of EECS, CSAIL, and DSAIL; Benedetto Buratti, Yeounoh Chung, Philipp Eichmann, and Eli Upfal, all of Brown; and Carsten Binnig, who recently moved from Brown to the Technical University of Darmstadt in Germany.
An “unbounded canvas” for analytics
The new work builds on years of collaboration on Northstar between researchers at MIT and Brown. Over four years, the researchers have published numerous papers detailing components of Northstar, including the interactive interface, operations on multiple platforms, accelerating results, and studies on user behavior.
Northstar starts as a blank, white interface. Users upload datasets into the system, which appear in a “datasets” box on the left. Any data labels will automatically populate a separate “attributes” box below. There’s also an “operators” box that contains various algorithms, as well as the new AutoML tool. All data are stored and analyzed in the cloud.
The researchers like to demonstrate the system on a public dataset that contains information on intensive care unit patients. Consider medical researchers who want to examine co-occurrences of certain diseases in certain age groups. They drag and drop into the middle of the interface a pattern-checking algorithm, which at first appears as a blank box. As input, they move into the box disease features labeled, say, “blood,” “infectious,” and “metabolic.” Percentages of those diseases in the dataset appear in the box. Then, they drag the “age” feature into the interface, which displays a bar chart of the patients’ age distribution. Drawing a line between the two boxes links them together. When users circle an age range, the algorithm immediately computes the co-occurrence of the three diseases within it.
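The kind of query this drag-and-link interaction triggers can be sketched in a few lines. This is only an illustration: the patient records, field names, and disease labels below are invented, and Northstar’s actual implementation runs in the cloud, not as a local function.

```python
# Toy sketch of a co-occurrence query like the one described above.
# Records and labels are made up for illustration.

def co_occurrence(patients, diseases, age_min, age_max):
    """Fraction of patients in [age_min, age_max] who have all the given diseases."""
    in_range = [p for p in patients if age_min <= p["age"] <= age_max]
    if not in_range:
        return 0.0
    hits = [p for p in in_range if all(d in p["diseases"] for d in diseases)]
    return len(hits) / len(in_range)

patients = [
    {"age": 67, "diseases": {"blood", "infectious", "metabolic"}},
    {"age": 71, "diseases": {"blood", "metabolic"}},
    {"age": 34, "diseases": {"infectious"}},
    {"age": 69, "diseases": {"blood", "infectious", "metabolic"}},
]

# "Circling" the 60-80 age range: 2 of the 3 patients in range have all three.
print(co_occurrence(patients, {"blood", "infectious", "metabolic"}, 60, 80))
```

In the interface, circling a different age range simply re-runs this computation with new bounds, which is why the result updates immediately.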
“It’s like a big, unbounded canvas where you can lay out how you want everything,” says Zgraggen, who is the key inventor of Northstar’s interactive interface. “Then, you can link things together to create more complex questions about your data.”
With VDS, users can now also run predictive analytics on that data by getting models custom-fit to their tasks, such as data prediction, image classification, or analyzing complex graph structures.
Using the above example, say the medical researchers want to predict which patients may have blood disease based on all features in the dataset. They drag and drop “AutoML” from the list of algorithms. It’ll first produce a blank box, but with a “target” tab, under which they’d drop the “blood” feature. The system will automatically find the best-performing machine-learning pipelines, presented as tabs with constantly updated accuracy percentages. Users can stop the process at any time, refine the search, and examine each model’s error rates, structure, and computations.
According to the researchers, VDS is the fastest interactive AutoML tool to date, thanks, in part, to their custom “estimation engine.” The engine sits between the interface and the cloud storage, and automatically creates several representative samples of a dataset that can be progressively processed to produce high-quality results in seconds.
“Together with my co-authors I spent two years designing VDS to mimic how a data scientist thinks,” Shang says, meaning it instantly identifies which models and preprocessing steps it should or shouldn’t run on certain tasks, based on various encoded rules. It first chooses from a large list of those possible machine-learning pipelines and runs simulations on the sample set. In doing so, it remembers results and refines its selection. After delivering fast approximated results, the system refines the results in the back end. But the final numbers are usually very close to the first approximation.
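The progressive-sampling idea can be illustrated with a toy selection loop. This is a sketch of the general technique, not the actual VDS engine or its rule-based pipeline selection; the pipeline names and accuracy scores are invented.

```python
# Toy sketch of progressive sampling for pipeline selection: score candidates
# on a small sample first, prune the weakest, then re-score the survivors on
# larger samples so an approximate answer arrives quickly and sharpens over time.

def progressive_select(pipelines, score, data, sample_sizes, keep=0.5):
    """score(pipeline, sample) -> accuracy estimate; higher is better."""
    survivors = list(pipelines)
    for n in sample_sizes:
        sample = data[:n]  # representative sample of growing size
        ranked = sorted(survivors, key=lambda p: score(p, sample), reverse=True)
        survivors = ranked[:max(1, int(len(ranked) * keep))]  # prune the worst
    return survivors[0]  # best pipeline after full refinement

# Made-up "pipelines" with fixed true accuracies; a real engine would estimate
# these from the sample, with larger samples giving tighter estimates.
accuracies = {"knn": 0.71, "tree": 0.78, "logreg": 0.84}

def score(pipeline, sample):
    return accuracies[pipeline]

best = progressive_select(accuracies.keys(), score, list(range(1000)),
                          sample_sizes=[50, 200, 1000])
print(best)
```

Because weak candidates are discarded after cheap small-sample passes, most of the compute goes to the few pipelines that are still in contention, which is what makes early approximate rankings affordable.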
“For using a predictor, you don’t want to wait four hours to get your first results back. You want to already see what’s going on and, if you detect a mistake, you can immediately correct it. That’s normally not possible in any other system,” Kraska says. The researchers’ previous user studies, in fact, show that “the moment you delay giving users results, they start to lose engagement with the system.”
The researchers evaluated the tool on 300 real-world datasets. Compared with other state-of-the-art AutoML systems, VDS’s approximations were just as accurate, but they were generated within seconds rather than the minutes to hours other tools require.
Next, the researchers are looking to add a feature that alerts users to potential data bias or errors. For instance, to protect patient privacy, sometimes researchers will label medical datasets with patients aged 0 (if they do not know the age) and 200 (if a patient is over 95 years old). But novices may not recognize such errors, which could completely throw off their analytics.
“If you’re a new user, you may get results and think they’re great,” Kraska says. “But we can warn people that there, in fact, may be some outliers in the dataset that may indicate a problem.”
Cytokines, small proteins released by immune cells to communicate with each other, have for some time been investigated as a potential cancer treatment.
However, despite their known potency and potential for use alongside other immunotherapies, cytokines have yet to be successfully developed into an effective cancer therapy.
That is because the proteins are highly toxic to both healthy tissue and tumors alike, making them unsuitable for use in treatments administered to the entire body.
Injecting the cytokine treatment directly into the tumor itself could provide a method of confining its benefits to the tumor and sparing healthy tissue, but previous attempts to do this have resulted in the proteins leaking out of the cancerous tissue and into the body’s circulation within minutes.
Now researchers at the Koch Institute for Integrative Cancer Research at MIT have developed a technique to prevent cytokines from escaping once they have been injected into the tumor, by adding a Velcro-like protein that attaches itself to the tissue.
In this way the researchers, led by Dane Wittrup, the Carbon P. Dubbs Professor in Chemical Engineering and Biological Engineering and a member of the Koch Institute, hope to limit the harm caused to healthy tissue, while prolonging the treatment’s ability to attack the tumor.
To develop their technique, which they describe in a paper published today in the journal Science Translational Medicine, the researchers first investigated the different proteins found in tumors, to find one that could be used as a target for the cytokine treatment. They chose collagen, which is expressed abundantly in solid tumors.
They then undertook an extensive literature search to find proteins that bind effectively to collagen. They discovered a collagen-binding protein called lumican, which they then attached to the cytokines.
“When we inject [a collagen-anchoring cytokine treatment] intratumorally, we don’t have to worry about collagen found elsewhere in the body; we just have to make sure we have a protein that binds to collagen very tightly,” says lead author Noor Momin, a graduate student in the Wittrup Lab at MIT.
To test the treatment, the researchers used two cytokines known to stimulate and expand immune cell responses. The cytokines, interleukin-2 (IL-2) and interleukin-12 (IL-12), are also known to combine well with other immunotherapies.
Although IL-2 already has FDA approval, its severe side effects have so far prevented its clinical use. Meanwhile, IL-12 therapies have not yet reached phase 3 clinical trials due to their severe toxicity.
The researchers tested the treatment by injecting the two different cytokines into tumors in mice. To make the test more challenging, they chose a type of melanoma that contains relatively low amounts of collagen, compared to other tumor types.
They then compared the effects of administering the cytokines alone and of injecting cytokines attached to the collagen-binding lumican.
“In addition, all of the cytokine therapies were given alongside a form of systemic therapy, such as a tumor-targeting antibody, a vaccine, a checkpoint blockade, or chimeric antigen receptor (CAR)-T cell therapy, as we wanted to show the potential of combining cytokines with many different immunotherapy modalities,” Momin says.
They found that when any of the treatments were administered individually, the mice did not survive. Combining the treatments improved survival rates slightly, but when the cytokine was administered with the lumican to bind to the collagen, the researchers found that over 90 percent of the mice survived with some combinations.
“So we were able to show that these combinations are synergistic, they work really well together, and that cytokines attached to lumican really helped reap the full benefits of the combination,” Momin says.
What’s more, attaching the lumican eliminated the problem of toxicity associated with cytokine treatments alone.
The paper attempts to address a major obstacle in the oncology field, that of how to target potent therapeutics to the tumor microenvironment to enable their local action, according to Shannon Turley, a staff scientist and specialist in cancer immunology at Genentech, who was not involved in the research.
“This is important because many of the most promising cancer drugs can have unwanted side effects in tissues beyond the tumor,” Turley says. “The team’s approach relies on two principles that together make for a novel approach: injection of the drug directly into the tumor site, and engineering of the drug to contain a ‘Velcro’ that attaches the drug to the tumor to keep it from leaking into circulation and acting all over the body.”
The researchers now plan to carry out further work to improve the technique, and to explore other treatments that could benefit from being combined with collagen-binding lumican, Momin says.
Ultimately, they hope the work will encourage other researchers to consider the use of collagen binding for cancer treatments, Momin says.
“We’re hoping the paper seeds the idea that collagen anchoring could be really advantageous for a lot of different therapies across all solid tumors.”
Forensic investigators arrive at the scene of a crime to search for clues. There are no known suspects, and every second that passes means more time for the trail to run cold. A DNA sample is discovered, collected, and then sent to a nearby forensics laboratory. There, it is sequenced and fed into a program that compares its genetic contents to DNA profiles stored in the FBI’s National DNA Index System (NDIS) — a database containing profiles of 18 million people who have passed through the criminal justice system. The hope is that the crime scene sample will match a profile from the database, pointing the way to a suspect. The sample can also be used for kinship analysis through which the sample is linked to blood relatives, as was done last April to catch the infamous Golden State Killer.
DNA forensics is a powerful tool, yet it presents a computational scaling problem when it is improved and expanded for complex samples (those containing DNA from more than one individual) and kinship analysis. Consider the volume of data that the FBI must handle for the nation. “If you think of all the police stations across the country, all operating each week, it’s a lot of data to keep track of and organize,” says Darrell Ricke from the Bioengineering Systems and Technologies Group. To put this into perspective, if each state compares 2,000 crime scene samples weekly, that’s 100,000 samples to compare against 18 million profiles per week.
Ricke is part of a team at the laboratory that developed an integrated web-based platform called IdPrism that provides expanded comparison capabilities without compromising speed or functionality. IdPrism allows identification of more than 10 individuals in a complex DNA sample, along with extended kinship results. At its heart are two algorithms that Ricke developed, FastID and TachysSTR, which encode genetic markers as bits (0 or 1) and operate quickly and smoothly. These algorithms recently won a 2018 R&D 100 Award, which is given annually by R&D Magazine to the 100 most significant inventions of the year.
These markers are two types of variations in DNA called short tandem repeats (STR) and single nucleotide polymorphisms (SNP). They are considered to be a kind of DNA fingerprint that can be used to identify individuals as well as their relatives. Each person has a unique combination of SNP or STR variations — one person’s combination presents in a specific pattern, while another person’s presents in a different pattern. When analysts run a crime-scene DNA sample against a profile in the NDIS database, finding a matching combination of these STRs shows a high chance that the DNA belongs to the same person.
The FBI currently uses software algorithms that must pass through a complex set of calculations to reveal if a sample matches a profile. Ricke’s algorithms assign a bit value to normal (0) or rare (1) versions of SNPs, or a bit for each different STR marker. The normal label indicates that the SNP or STR is common in many people and is thus not a unique marker that can be used to identify an individual. With this digital DNA encoding for both identity comparisons and complex mixtures, analysis can be done with just three hardware bit instructions: exclusive OR, logical AND, and population count.
An exclusive OR instruction allows for a comparison of whether two DNA profiles are the same or different. For the forensic comparisons, this instruction will output a 0 when an SNP or STR in a sample matches that in a profile, and it will output a 1 when they don’t match. This technique works well when the crime scene sample contains DNA from only one individual, but if there are more contributors, a matching result could be hidden among mismatches from the other people in the same sample. This issue is addressed by adding a logical AND with the database profile to the results of the exclusive OR. This step, in a sense, gets rid of the mismatch noise to reveal whether the database profile has matched against an individual in the sample. The final step is population count, which sums up all of the 1s. In the end, a match is represented by mostly 0s and a mismatch will have a high number of 1s.
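The three-step comparison above can be sketched with Python integers standing in for bit vectors. The profiles below are invented for illustration; FastID’s actual encoding spans thousands of SNPs and runs on hardware bit instructions rather than Python objects.

```python
# Sketch of the XOR / AND / popcount comparison described above, with one bit
# per SNP (0 = common variant, 1 = rare variant). Profiles are made up.

def mismatch_count(sample_bits, profile_bits):
    """Count the reference profile's rare SNPs that are missing from the sample."""
    diff = sample_bits ^ profile_bits  # exclusive OR: 1 wherever they differ
    masked = diff & profile_bits       # logical AND: ignore noise from other contributors
    return bin(masked).count("1")      # population count: sum the surviving 1s

suspect  = 0b10110  # reference profile: rare SNPs at these positions
mixture  = 0b10111  # crime-scene sample containing the suspect plus others
stranger = 0b01001  # unrelated reference profile

print(mismatch_count(mixture, suspect))   # 0 -> every rare SNP in the profile is present
print(mismatch_count(mixture, stranger))  # 1 -> a rare SNP in the profile is missing
```

A match therefore shows up as a popcount near zero, while an unrelated profile accumulates 1s; the same three instructions work unchanged on mixtures because the AND step discards positions the reference profile never claimed.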
Using these three hardware bit instructions, the FastID algorithm can compare 5,000 SNPs in a crime scene DNA sample against 20 million reference profiles in under 12 seconds. Alternative methods would take hours to do so on this scale. Similarly, TachysSTR can compare STRs in 1 million samples in 1.8 seconds, whereas current algorithms take 10 minutes to do the same.
The results are displayed inside the IdPrism system in which investigators can run, view, query, and store their DNA comparison data. In addition to being fast and convenient, the system has improved the accuracy of forensics by including a panel of 2,650 SNP markers that are used for complex sample and kinship analysis.
Last November, the system was transitioned to users outside of the laboratory. "Although getting IdPrism to a transition-ready product was challenging, it is awesome to think that our technology is being used," says Philip Fremont-Smith, who is also from the Bioengineering Systems and Technologies Group and was involved in the bioinformatics side of the project.
“When Hollywood finds out about this, they’re going to change their scripts,” Ricke says. “The capabilities are so different from what’s out there.”
A team of MIT researchers is making it easier for novices to get their feet wet with artificial intelligence, while also helping experts advance the field.
In a paper presented at the Programming Language Design and Implementation conference this week, the researchers describe a novel probabilistic-programming system named “Gen.” Users write models and algorithms from multiple fields where AI techniques are applied — such as computer vision, robotics, and statistics — without having to deal with equations or manually write high-performance code. Gen also lets expert researchers write sophisticated models and inference algorithms — used for prediction tasks — that were previously infeasible.
In their paper, for instance, the researchers demonstrate that a short Gen program can infer 3-D body poses, a difficult computer-vision inference task that has applications in autonomous systems, human-machine interactions, and augmented reality. Behind the scenes, this program includes components that perform graphics rendering, deep learning, and types of probability simulations. The combination of these diverse techniques leads to better accuracy and speed on this task than earlier systems developed by some of the researchers.
Due to its simplicity — and, in some use cases, automation — the researchers say Gen can be used easily by anyone, from novices to experts. “One motivation of this work is to make automated AI more accessible to people with less expertise in computer science or math,” says first author Marco Cusumano-Towner, a PhD student in the Department of Electrical Engineering and Computer Science. “We also want to increase productivity, which means making it easier for experts to rapidly iterate and prototype their AI systems.”
The researchers also demonstrated Gen’s ability to simplify data analytics by using another Gen program that automatically generates sophisticated statistical models typically used by experts to analyze, interpret, and predict underlying patterns in data. That builds on the researchers’ previous work that let users write a few lines of code to uncover insights into financial trends, air travel, voting patterns, and the spread of disease, among other areas. This is different from earlier systems, which required a lot of hand coding for accurate predictions.
“Gen is the first system that’s flexible, automated, and efficient enough to cover those very different types of examples in computer vision and data science and give state-of-the-art performance,” says Vikash K. Mansinghka ’05, MEng ’09, PhD ’09, a researcher in the Department of Brain and Cognitive Sciences who runs the Probabilistic Computing Project.
Joining Cusumano-Towner and Mansinghka on the paper are Feras Saad and Alexander K. Lew, both CSAIL graduate students and members of the Probabilistic Computing Project.
Best of all worlds
In 2015, Google released TensorFlow, an open-source library of application programming interfaces (APIs) that helps beginners and experts automatically generate machine-learning systems without doing much math. Now widely used, the platform is helping democratize some aspects of AI. But, although it’s automated and efficient, it’s narrowly focused on deep-learning models, which are both costly and limited compared to the broader promise of AI in general.
But there are plenty of other AI techniques available today, such as statistical and probabilistic models, and simulation engines. Some other probabilistic programming systems are flexible enough to cover several kinds of AI techniques, but they run inefficiently.
The researchers sought to combine the best of all worlds — automation, flexibility, and speed — into one. “If we do that, maybe we can help democratize this much broader collection of modeling and inference algorithms, like TensorFlow did for deep learning,” Mansinghka says.
In probabilistic AI, inference algorithms perform operations on data and continuously readjust probabilities based on new data to make predictions. Doing so eventually produces a model that describes how to make predictions on new data.
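A minimal example of this continual readjustment is a Bayesian update loop over competing hypotheses. This toy is not Gen (which builds on Julia and supports far richer models and inference strategies); it only shows the underlying idea of probabilities shifting as data arrives.

```python
# Toy Bayesian inference: maintain a probability over hypotheses and
# readjust it with each new observation.

def update(prior, likelihood, observation):
    """One Bayesian step: reweight each hypothesis by how well it explains
    the observation, then renormalize so probabilities sum to 1."""
    posterior = {h: p * likelihood(h, observation) for h, p in prior.items()}
    total = sum(posterior.values())
    return {h: p / total for h, p in posterior.items()}

# Two hypotheses about a coin: fair, or biased 80 percent toward heads.
belief = {"fair": 0.5, "biased": 0.5}
likelihood = lambda h, flip: (0.8 if flip == "H" else 0.2) if h == "biased" else 0.5

for flip in ["H", "H", "H", "T"]:  # data arrives one observation at a time
    belief = update(belief, likelihood, flip)

print(belief)  # belief has shifted toward "biased" after mostly heads
```

Probabilistic programming systems like Gen automate this pattern for models far too complex to update by hand, choosing and tuning the inference algorithm rather than leaving it to the user.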
Building off concepts used in their earlier probabilistic-programming system, Church, the researchers incorporate several custom modeling languages into Julia, a general-purpose programming language that was also developed at MIT. Each modeling language is optimized for a different type of AI modeling approach, making the overall system more general-purpose. Gen also provides high-level infrastructure for inference tasks, using diverse approaches such as optimization, variational inference, certain probabilistic methods, and deep learning. On top of that, the researchers added some tweaks to make the implementations run efficiently.
Beyond the lab
External users are already finding ways to leverage Gen for their AI research. For example, Intel is collaborating with MIT to use Gen for 3-D pose estimation from its depth-sense cameras used in robotics and augmented-reality systems. MIT Lincoln Laboratory is also collaborating on applications for Gen in aerial robotics for humanitarian relief and disaster response.
Gen is beginning to be used on ambitious AI projects under the MIT Quest for Intelligence. For example, Gen is central to an MIT-IBM Watson AI Lab project and to the U.S. Defense Advanced Research Projects Agency’s ongoing Machine Common Sense project, which aims to model human common sense at the level of an 18-month-old child. Mansinghka is one of the principal investigators on this project.
“With Gen, for the first time, it is easy for a researcher to integrate a bunch of different AI techniques. It’s going to be interesting to see what people discover is possible now,” Mansinghka says.
Zoubin Ghahramani, chief scientist and vice president of AI at Uber and a professor at Cambridge University, who was not involved in the research, says, "Probabilistic programming is one of the most promising areas at the frontier of AI since the advent of deep learning. Gen represents a significant advance in this field and will contribute to scalable and practical implementations of AI systems based on probabilistic reasoning.”
Peter Norvig, director of research at Google, who also was not involved in this research, praised the work as well. “[Gen] allows a problem-solver to use probabilistic programming, and thus have a more principled approach to the problem, but not be limited by the choices made by the designers of the probabilistic programming system,” he says. “General-purpose programming languages … have been successful because they … make the task easier for a programmer, but also make it possible for a programmer to create something brand new to efficiently solve a new problem. Gen does the same for probabilistic programming.”
Gen’s source code is publicly available and is being presented at upcoming open-source developer conferences, including Strange Loop and JuliaCon. The work is supported, in part, by DARPA.
Want to create a brand new type of protein that might have useful properties? No problem. Just hum a few bars.
In a surprising marriage of science and art, researchers at MIT have developed a system for converting the molecular structures of proteins, the basic building blocks of all living beings, into audible sound that resembles musical passages. Then, reversing the process, they can introduce some variations into the music and convert it back into new proteins never before seen in nature.
Although it’s not quite as simple as humming a new protein into existence, the new system comes close. It provides a systematic way of translating a protein’s sequence of amino acids into a musical sequence, using the physical properties of the molecules to determine the sounds. Although the sounds are transposed in order to bring them within the audible range for humans, the tones and their relationships are based on the actual vibrational frequencies of each amino acid molecule itself, computed using theories from quantum chemistry.
The system was developed by Markus Buehler, the McAfee Professor of Engineering and head of the Department of Civil and Environmental Engineering at MIT, along with postdoc Chi Hua Yu and two others. As described today in the journal ACS Nano, the system translates the 20 types of amino acids, the building blocks that join together in chains to form all proteins, into a 20-tone scale. Any protein’s long sequence of amino acids then becomes a sequence of notes.
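The amino-acid-to-tone mapping can be sketched in a few lines. To be clear about what is assumed here: the real system derives each tone from quantum-chemistry calculations of the amino acid’s vibrational frequencies, whereas this toy simply divides an octave into 20 equal steps, and the peptide sequence shown is made up.

```python
# Toy sonification sketch: map each of the 20 amino acids to a pitch in a
# 20-tone scale. Frequencies are an equal-division toy scale, NOT the
# vibration-derived values used in the actual work.

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"  # one-letter codes for the 20 amino acids

def tone_scale(base_hz=220.0):
    """Assign each amino acid a pitch from 20 equal divisions of one octave."""
    return {aa: base_hz * 2 ** (i / 20) for i, aa in enumerate(AMINO_ACIDS)}

def protein_to_notes(sequence, scale=None):
    """Translate an amino acid sequence into a sequence of pitches (Hz)."""
    scale = scale or tone_scale()
    return [round(scale[aa], 1) for aa in sequence]

# A short, made-up peptide becomes a playable sequence of notes:
print(protein_to_notes("GAVLI"))
```

Because the mapping is one-to-one and invertible, a melody edited in pitch space can be mapped straight back to an amino acid sequence, which is the property the protein-design step described below relies on.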
While such a scale sounds unfamiliar to people accustomed to Western musical traditions, listeners can readily recognize the relationships and differences after familiarizing themselves with the sounds. Buehler says that after listening to the resulting melodies, he is now able to distinguish certain amino acid sequences that correspond to proteins with specific structural functions. “That’s a beta sheet,” he might say, or “that’s an alpha helix.”
Learning the language of proteins
The whole concept, Buehler explains, is to get a better handle on understanding proteins and their vast array of variations. Proteins make up the structural material of skin, bone, and muscle, but are also enzymes, signaling chemicals, molecular switches, and a host of other functional materials that make up the machinery of all living things. But their structures, including the way they fold themselves into the shapes that often determine their functions, are exceedingly complicated. “They have their own language, and we don’t know how it works,” he says. “We don’t know what makes a silk protein a silk protein or what patterns reflect the functions found in an enzyme. We don’t know the code.”
By translating that language into a different form that humans are particularly well-attuned to, and that allows different aspects of the information to be encoded in different dimensions — pitch, volume, and duration — Buehler and his team hope to glean new insights into the relationships and differences between different families of proteins and their variations, and use this as a way of exploring the many possible tweaks and modifications of their structure and function. As with music, the structure of proteins is hierarchical, with different levels of structure at different scales of length or time.
The new method translates an amino acid sequence of proteins into this sequence of percussive and rhythmic sounds. Courtesy of Markus Buehler.
The team then used an artificial intelligence system to study the catalog of melodies produced by a wide variety of different proteins. They had the AI system introduce slight changes in the musical sequence or create completely new sequences, and then translated the sounds back into proteins that correspond to the modified or newly designed versions. With this process they were able to create variations of existing proteins — for example of one found in spider silk, one of nature’s strongest materials — thus making new proteins unlike any produced by evolution.
The percussive, rhythmic, and musical sounds heard here are generated entirely from amino acid sequences. Courtesy of Markus Buehler.
Although the researchers themselves may not know the underlying rules, “the AI has learned the language of how proteins are designed,” and it can encode it to create variations of existing versions, or completely new protein designs, Buehler says. Given that there are “trillions and trillions” of potential combinations, he says, when it comes to creating new proteins “you wouldn’t be able to do it from scratch, but that’s what the AI can do.”
“Composing” new proteins
Training the AI system with a set of data for a particular class of proteins might take a few days, Buehler says, but the trained system can then produce a design for a new variant within microseconds. “No other method comes close,” he says. “The shortcoming is the model doesn’t tell us what’s really going on inside. We just know it works.”
This way of encoding structure into music does reflect a deeper reality. “When you look at a molecule in a textbook, it’s static,” Buehler says. “But it’s not static at all. It’s moving and vibrating. Every bit of matter is a set of vibrations. And we can use this concept as a way of describing matter.”
The method does not yet allow for any kind of directed modifications — any changes in properties such as mechanical strength, elasticity, or chemical reactivity will be essentially random. “You still need to do the experiment,” he says. When a new protein variant is produced, “there’s no way to predict what it will do.”
The team also created musical compositions developed from the sounds of amino acids, which define this new 20-tone musical scale. The art pieces they constructed consist entirely of the sounds generated from amino acids. “There are no synthetic or natural instruments used, showing how this new source of sounds can be utilized as a creative platform,” Buehler says. Musical motifs derived from both naturally existing proteins and AI-generated proteins are used throughout the examples, and all the sounds, including some that resemble bass or snare drums, are also generated from the sounds of amino acids.
The researchers have created a free Android smartphone app, called Amino Acid Synthesizer, to play the sounds of amino acids and record protein sequences as musical compositions.
“Markus Buehler has been gifted with a most creative soul, and his explorations into the inner workings of biomolecules are advancing our understanding of the mechanical response of biological materials in a most significant manner,” says Marc Meyers, a professor of materials science at the University of California at San Diego, who was not involved in this work.
Meyers adds, “The focusing of this imagination to music is a novel and intriguing direction. This is experimental music at its best. The rhythms of life, including the pulsations of our heart, were the initial sources of repetitive sounds that engendered the marvelous world of music. Markus has descended into the nanospace to extract the rhythms of the amino acids, the building blocks of life.”
“Protein sequences are complex, as are comparisons between protein sequences,” says Anthony Weiss, a professor of biochemistry and molecular biotechnology at the University of Sydney, Australia, who also was not connected to this work. The MIT team “provides an impressive, entertaining and unusual approach to accessing and interpreting this complexity. ... The approach benefits from our innate ability to hear complex musical patterns. Through harmony and discord, we now have an entertaining and useful tool to compare and contrast amino acid sequences.”
The team also included research scientists Zhao Qin and Francisco Martin-Martinez at MIT. The work was supported by the U.S. Office of Naval Research and the National Institutes of Health.
A new study demonstrates, for the first time, that “social robots” used in support sessions held in pediatric units at hospitals can lead to more positive emotions in sick children.
Many hospitals host interventions in pediatric units, where child life specialists provide clinical interventions to hospitalized children for developmental and coping support. This involves play, preparation, education, and behavioral distraction for routine medical care, as well as before, during, and after difficult procedures. Traditional interventions include therapeutic medical play and normalizing the environment through activities such as arts and crafts, games, and celebrations.
For the study, published today in the journal Pediatrics, researchers from the MIT Media Lab, Boston Children’s Hospital, and Northeastern University deployed a robotic teddy bear, “Huggable,” across several pediatric units at Boston Children’s Hospital. More than 50 hospitalized children were randomly split into three groups of interventions that involved Huggable, a tablet-based virtual Huggable, or a traditional plush teddy bear. In general, Huggable improved various patient outcomes over those other two options.
The study primarily demonstrated the feasibility of integrating Huggable into the interventions. But results also indicated that children playing with Huggable experienced more positive emotions overall. They also got out of bed and moved around more, and emotionally connected with the robot, asking it personal questions and inviting it to come back later to meet their families. “Such improved emotional, physical, and verbal outcomes are all positive factors that could contribute to better and faster recovery in hospitalized children,” the researchers write in their study.
Although it is a small study, it is the first to explore social robotics in a real-world inpatient pediatric setting with ill children, the researchers say. Other studies have been conducted in labs, have studied very few children, or were conducted in public settings without any patient identification.
But Huggable is designed only to assist health care specialists — not replace them, the researchers stress. “It’s a companion,” says co-author Cynthia Breazeal, an associate professor of media arts and sciences and founding director of the Personal Robots group. “Our group designs technologies with the mindset that they’re teammates. We don’t just look at the child-robot interaction. It’s about [helping] specialists and parents, because we want technology to support everyone who’s invested in the quality care of a child.”
“Child life staff provide a lot of human interaction to help normalize the hospital experience, but they can’t be with every kid, all the time. Social robots create a more consistent presence throughout the day,” adds first author Deirdre Logan, a pediatric psychologist at Boston Children’s Hospital. “There may also be kids who don’t always want to talk to people, and respond better to having a robotic stuffed animal with them. It’s exciting knowing what types of support we can provide kids who may feel isolated or scared about what they’re going through.”
Joining Breazeal and Logan on the paper are: Sooyeon Jeong, a PhD student in the Personal Robots group; Brianna O’Connell, Duncan Smith-Freedman, and Peter Weinstock, all of Boston Children’s Hospital; and Matthew Goodwin and James Heathers, both of Northeastern University.
First prototyped in 2006, Huggable is a plush teddy bear with a screen depicting animated eyes. While the eventual goal is to make the robot fully autonomous, it is currently operated remotely by a specialist in the hall outside a child’s room. Through custom software, a specialist can control the robot’s facial expressions and body actions, and direct its gaze. The specialist can also talk through a speaker — with their voice automatically shifted to a higher pitch to sound more childlike — and monitor the participants via camera feed. The tablet-based avatar of the bear had identical gestures and was also remotely operated.
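The voice-shifting step can be illustrated with a naive resampling sketch. A production system would more likely use a duration-preserving technique such as a phase vocoder or PSOLA; the version below, which simply plays the signal back faster, raises pitch but also shortens the clip.

```python
def shift_pitch(samples, ratio):
    """Naive pitch shift by linear-interpolation resampling.

    ratio > 1.0 raises the pitch (and shortens the clip); a real
    teleoperation pipeline would also preserve the original duration.
    """
    n = int(len(samples) / ratio)
    out = []
    for i in range(n):
        pos = i * ratio            # fractional position in the source signal
        lo = int(pos)
        hi = min(lo + 1, len(samples) - 1)
        frac = pos - lo
        out.append(samples[lo] * (1 - frac) + samples[hi] * frac)
    return out

# Reading every other sample of a ramp doubles the perceived frequency.
higher = shift_pitch([0.0, 1.0, 2.0, 3.0], 2.0)
```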
During the interventions involving Huggable, which included kids ages 3 to 10, a specialist would sing nursery rhymes to younger children through the robot and move its arms during the song. Older kids would play the I Spy game, in which they guess an object in the room described by the specialist through Huggable.
Through self-reports and questionnaires, the researchers recorded how much the patients and families liked interacting with Huggable. Additional questionnaires assessed patients’ positive moods, as well as anxiety and perceived pain levels. The researchers also used cameras mounted in the child’s room to capture and analyze speech patterns, using software to characterize them as joyful or sad.
A greater percentage of children and their parents reported that the children enjoyed playing with Huggable more than with the avatar or traditional teddy bear. Speech analysis backed up that result, detecting significantly more joyful expressions among the children during robotic interventions. Additionally, parents noted lower levels of perceived pain among their children.
The researchers noted that 93 percent of patients completed the Huggable-based interventions, and found few barriers to practical implementation, as determined by comments from the specialists.
A previous paper based on the same study found that the robot also seemed to facilitate greater family involvement in the interventions, compared to the other two methods, which improved the intervention overall. “Those are findings we didn’t necessarily expect in the beginning,” says Jeong, also a co-author on the previous paper. “We didn’t tell family to join any of the play sessions — it just happened naturally. When the robot came in, the child and robot and parents all interacted more, playing games or introducing the robot.”
An automated, take-home bot
The study also generated valuable insights for developing a fully autonomous Huggable robot, which is the researchers’ ultimate goal. They were able to determine which physical gestures are used most and least often, and which features specialists may want for future iterations. Huggable, for instance, could introduce doctors before they enter a child’s room or learn a child’s interests and share that information with specialists. The researchers may also equip the robot with computer vision, so it can detect certain objects in a room to talk about those with children.
“In these early studies, we capture data … to wrap our heads around an authentic use-case scenario where, if the bear was automated, what does it need to do to provide high-quality standard of care,” Breazeal says.
In the future, that automated robot could be used to improve continuity of care. A child would take home a robot after a hospital visit to further support engagement, adherence to care regimens, and monitoring well-being.
“We want to continue thinking about how robots can become part of the whole clinical team and help everyone,” Jeong says. “When the robot goes home, we want to see the robot monitor a child’s progress. … If there’s something clinicians need to know earlier, the robot can let the clinicians know, so [they’re not] surprised at the next appointment that the child hasn’t been doing well.”
Next, the researchers are hoping to zero in on which specific patient populations may benefit the most from the Huggable interventions. “We want to find the sweet spot for the children who need this type of extra support,” Logan says.
Catherine Drennan says nothing in her job thrills her more than the process of discovery. But Drennan, a professor of biology and chemistry, is not referring to her landmark research on protein structures that could play a major role in reducing the world’s waste carbons.
“Really the most exciting thing for me is watching my students ask good questions, problem-solve, and then do something spectacular with what they’ve learned,” she says.
For Drennan, research and teaching are complementary passions, both flowing from a deep sense of “moral responsibility.” Everyone, she says, “should do something, based on their skill set, to make some kind of contribution.”
Drennan’s own research portfolio attests to this sense of mission. Since her arrival at MIT 20 years ago, she has focused on characterizing and harnessing metal-containing enzymes that catalyze complex chemical reactions, including those that break down carbon compounds.
She got her start in the field as a graduate student at the University of Michigan, where she became captivated by vitamin B12. This very large vitamin contains cobalt and is vital for amino acid metabolism, the proper formation of the spinal cord, and prevention of certain kinds of anemia. Bound to proteins in food, B12 is released during digestion.
“Back then, people were suggesting how B12-dependent enzymatic reactions worked, and I wondered how they could be right if they didn’t know what B12-dependent enzymes looked like,” she recalls. “I realized I needed to figure out how B12 is bound to protein to really understand what was going on.”
Drennan seized on X-ray crystallography as a way to visualize molecular structures. Using this technique, which involves bouncing X-ray beams off a crystallized sample of a protein of interest, she figured out how vitamin B12 is bound to a protein molecule.
“No one had previously been successful using this method to obtain a B12-bound protein structure, which turned out to be gorgeous, with a protein fold surrounding a novel configuration of the cofactor,” says Drennan.
Carbon-loving microbes show the way
These studies of B12 led directly to Drennan’s one-carbon work. “Metallocofactors such as B12 are important not just medically, but in environmental processes,” she says. “Many microbes that live on carbon monoxide, carbon dioxide, or methane — eating carbon waste or transforming carbon — use metal-containing enzymes in their metabolic pathways, and it seemed like a natural extension to investigate them.”
Some of Drennan’s earliest work in this area, dating from the early 2000s, revealed a cluster of iron, nickel, and sulfur atoms at the center of the enzyme carbon monoxide dehydrogenase (CODH). This so-called C-cluster serves hungry microbes, allowing them to “eat” carbon monoxide and carbon dioxide.
Recent experiments by Drennan analyzing the structure of the C-cluster-containing enzyme CODH showed that, in response to oxygen, the cluster can change configuration, with its sulfur, iron, and nickel atoms cartwheeling into different positions. Scientists looking for new avenues to reduce greenhouse gases took note of this discovery. CODH, suggested Drennan, might prove an effective tool for converting waste carbon dioxide into a less environmentally destructive compound, such as acetate, which might also be used for industrial purposes.
Drennan has also been investigating the biochemical pathways by which microbes break down hydrocarbon byproducts of crude oil production, such as toluene, an environmental pollutant.
“It’s really hard chemistry, but we’d like to put together a family of enzymes to work on all kinds of hydrocarbons, which would give us a lot of potential for cleaning up a range of oil spills,” she says.
The threat of climate change has increasingly galvanized Drennan’s research, propelling her toward new targets. A 2017 study she co-authored in Science detailed a previously unknown enzyme pathway in ocean microbes that leads to the production of methane, a formidable greenhouse gas: “I’m worried the ocean will make a lot more methane as the world warms,” she says.
Drennan hopes her work may soon help to reduce the planet’s greenhouse gas burden. Commercial firms have begun using the enzyme pathways that she studies, in one instance employing a proprietary microbe to capture carbon dioxide produced during steel production — before it is released into the atmosphere — and convert it into ethanol.
“Reengineering microbes so that enzymes take not just a little, but a lot of carbon dioxide out of the environment — this is an area I’m very excited about,” says Drennan.
Creating a meaningful life in the sciences
At MIT, she has found an increasingly warm welcome for her efforts to address the climate challenge.
“There’s been a shift in the past decade or so, with more students focused on research that allows us to fuel the planet without destroying it,” she says.
In Drennan’s lab, a postdoc, Mary Andorfer, and a rising junior, Phoebe Li, are currently working to inhibit an enzyme present in an oil-consuming microbe whose unfortunate residence in refinery pipes leads to corrosion and spills. “They are really excited about this research from the environmental perspective and even made a video about their microorganism,” says Drennan.
Drennan delights in this kind of enthusiasm for science. In high school, she thought chemistry was dry and dull, with no relevance to real-world problems. It wasn’t until college that she “saw chemistry as cool.”
The deeper she delved into the properties and processes of biological organisms, the more possibilities she found. X-ray crystallography offered a perfect platform for exploration. “Oh, what fun to tell the story about a three-dimensional structure — why it is interesting, what it does based on its form,” says Drennan.
The elements that excite Drennan about research in structural biology — capturing stunning images, discerning connections among biological systems, and telling stories — come into play in her teaching. In 2006, she received a $1 million grant from the Howard Hughes Medical Institute (HHMI) for her educational initiatives that use inventive visual tools to engage undergraduates in chemistry and biology. She is both an HHMI investigator and an HHMI professor, recognition of her parallel accomplishments in research and teaching, as well as a 2015 MacVicar Faculty Fellow for her sustained contribution to the education of undergraduates at MIT.
Drennan attempts to reach MIT students early. She taught introductory chemistry classes from 1999 to 2014, and in fall 2018 taught her first introductory biology class.
“I see a lot of undergraduates majoring in computer science, and I want to convince them of the value of these disciplines,” she says. “I tell them they will need chemistry and biology fundamentals to solve important problems someday.”
Drennan happily migrates among many disciplines, learning as she goes. It’s a lesson she hopes her students will absorb. “I want them to visualize the world of science and show what they can do,” she says. “Research takes you in different directions, and we need to bring the way we teach more in line with our research.”
She has high expectations for her students. “They’ll go out in the world as great teachers and researchers,” Drennan says. “But it’s most important that they be good human beings, taking care of other people, asking what they can do to make the world a better place.”
It doesn’t get any better than this — at least not at MIT. There’s the roar of raucous laughter as students play games or test products that they themselves have designed and built. There’s the chatter of questions asked and answered, all to the effect of “How did you do that?” and “Here’s what I did.”
To top it off, there’s the welcoming smell of pizza, slices being pulled from rapidly cooling boxes by a group of students and teaching assistants from the four sections of 6.08 (Introduction to EECS via Interconnected Embedded Systems). They have gathered for a special occasion during the last week of spring term: to show off their class final projects.
“This is the best class I've taken here,” says Mussie Demisse, a sophomore in EECS, dressed in a hoodie with a square contraption on his back that could have fallen off Iron Man. He and his team have designed a “Smart Suit” that analyzes and assesses a user’s pushup form.
“The class has given me the opportunity to do research on my own,” Demisse says. “It’s introduced us to many things and it now falls on us to pursue the things we like.”
The course introduces students to working with multiple platforms, servers, databases, and microcontrollers. For the final project, four-person teams design, program, build, and demonstrate their own cloud-connected, handheld, or wearable Internet of Things systems. The result: about 85 projects ranging from a Frisbee that analyzes velocity and acceleration to a “better” GPS system for tracking the location of the MIT shuttle.
“Don’t hit the red apple! Noooo,” yells first-year student Bradley Albright as Joe Steinmeyer, EECS lecturer and 6.08 instructor, hits the wrong target while playing “Vegetable Assassins.” The object of the game is to slice the vegetables scrolling by on a computer screen, but Steinmeyer, using an internet-connected foam sword, has managed to hit an apple instead.
Albright had the idea for a “Fruit Ninja”-style game during his first days at MIT, when he envisioned the visceral experience of slicing the air with a katana, or Japanese sword, and hitting a virtual target. Then, he and his team of Johnny Bui and Eesam Hourani, both sophomores in EECS, and Tingyu Li, a junior in management, were able to, as they put it, “take on the true villains of the food pyramid: vegetables.” They built a server-client model in which data from the sword is sent to a browser via a server connection. The server facilitates communication between all components through multiple WebSocket connections.
“It took a lot of work. Coming down to the last night, we had some problems that we had to spend a whole night finishing but I think we are all incredibly happy with the work we put into it,” Albright says.
Steinmeyer teaches 6.08 with two EECS colleagues: Max Shulaker, the Emmanuel E. Landsman (1958) Career Development Assistant Professor, and Stefanie Mueller, the X-Window Consortium Career Development Assistant Professor. The course was co-created by Steinmeyer and Joel Voldman, an EECS professor and associate department head.
Mueller, for one, is impressed with the students’ collaborative efforts as they developed their projects in just four weeks: “They really had to pull together to work,” she says.
Even projects that don’t quite work as expected are learning experiences, Steinmeyer notes. “I’m a big fan of having people do work early on and then go and do it again later. That’s how I learned the best. I always had to learn a dumb way first.”
Demisse and his team — Amadou Bah and Stephanie Yoon, both sophomores in EECS, and Sneha Ramachandran, a junior in EECS — confronted a few setbacks in developing their Smart Suit. “We wanted something to force ourselves to play around with electronics and hardware,” he explains. “During our brainstorming session, we thought of things that would monitor your heart rate.”
Initially, they considered something that runners might use to track their form. “But running’s pretty hard. [We thought,] ‘Let’s take a step back,’” Demisse recalls. “It was a natural evolution from that to pushups.”
They designed a zip-up hoodie with inertial measurement unit sensors on an elbow, the upper back, and the lower back to measure the acceleration of each body part as the user does pushups for 10 seconds. Those data are then analyzed and compared to the measurements of what is considered the “ideal” pushup form.
A particular challenge: getting the data from various sources analyzed in a reasonable amount of time. The system uses a multiplex approach, but just “listens” to one input at a time. “That makes it easier to record data at a faster rate,” Demisse says.
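That listen-to-one-at-a-time scheme amounts to round-robin polling. A minimal sketch, with channel names and the read function invented for illustration:

```python
# Hypothetical sketch of the multiplexed read loop: rather than reading
# all three IMUs simultaneously, select one channel per tick and cycle.
SENSORS = ["elbow", "upper_back", "lower_back"]

def poll_round_robin(read_fn, ticks):
    """Read one sensor per tick, cycling through the channels in order."""
    samples = []
    for t in range(ticks):
        channel = SENSORS[t % len(SENSORS)]
        samples.append((channel, read_fn(channel)))
    return samples

# With a dummy reader, six ticks visit each sensor exactly twice.
log = poll_round_robin(lambda ch: 0.0, 6)
```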
Another team developed a fishing game in which users cast a handheld pole and pick up “fish” viewed on a nearby screen. First-year Rafael Olivera-Cintron demonstrates by casting; a soft noise accompanies the movement. “Do you hear that ambient sound? That’s lake sounds, the sounds of water and mosquitos,” he says. He casts again and waits. And waits. “Yes, it’s a lot like fishing. A lot of waiting,” he says. “That’s my favorite part.” His teammates included EECS juniors Mohamadou Bella Bah and Chad Wood and EECS sophomores Julian Espada and Veronica Muriga.
Several teams’ projects involve music. Diana Voronin, Julia Moseyko, and Terryn Brunelle, all first-year students, are happy to show off “DJam,” an interconnected spin on Guitar Hero. Rather than pushing buttons that correspond to imaginary guitar chords, users spin a turntable to different positions — all to the beat of a song playing in the background.
“We just knew we wanted to do something with music because it would be fun,” Moseyko says. “We also wanted to work with something that turned. From a technical point of view, it was interesting to use that kind of sensor.”
Music from the Middle Ages inspired the team of Shahir Rahman and Patrick Kao, both sophomores in EECS, and Adam Potter and Lilia Luong, both first-years. Using a plywood version of a medieval instrument called a hurdy-gurdy, they created “Hurdy-Gurdy Hero,” which uses a built-in microphone to capture and save favorite songs to a database that processes the audio into a playable game.
“The idea is to give joy, to be able to play an actual instrument but not necessarily just for those who [already] know to play,” Rahman says. He cranks the machine and slightly squeaky but oddly harmonic notes emerge. Other students are clearly impressed by what they’re hearing. Olivera-Cintron sums up in just three words: “That is awesome.”
The MIT Machine Intelligence Community (MIC) began with a few friends meeting over pizza to discuss landmark papers in machine learning. Three years later, the undergraduate club boasts 500 members, an active Slack channel, and an impressive lineup of student-led reading groups and workshops meant to demystify machine learning and artificial intelligence (AI) generally. This year, MIC and MIT Quest for Intelligence joined forces to advance their common cause of making AI tools accessible to all.
Starting last fall, the MIT Quest opened its offices to MIC members and extended access to IBM and Google-donated cloud credits, providing a boost of computing power to students previously limited to running their AI models on desktop machines loaded with extra graphics processors. The MIT Quest and MIC are now collaborating on a host of projects, independently and through MIT’s Undergraduate Research Opportunities Program (UROP).
“We heard about their mission to spread machine learning to all undergrads and thought, ‘That’s what we’re trying to do — let’s do it together!” says Joshua Joseph, chief software engineer with the MIT Quest Bridge.
A makerspace for AI
U.S. Army ROTC students Ian Miller and Rishi Shah came to MIC for the free cloud credits, but stayed for the workshop on neural computing sticks. A compute stick allows mobile devices to do image processing on the fly, and when the cadets learned what one could do, they knew their idea for a portable computer vision system would work.
“Without that, we’d have to send images to a central place to do all this computing,” says Miller, a rising junior. “It would have been a logistical headache.”
Built in two months, for $200, their wallet-sized device is designed to plug into a tablet strapped to an Army soldier’s chest and scan the surrounding area for cars and people. With more training, they say, it could learn to spot cellphones and guns. In May, the cadets demo'd their device at MIT’s Soldier Design Competition and were invited by an Army sergeant to visit Fort Devens to continue working on it.
Rose Wang, a rising senior majoring in computer science, was also drawn to MIC by the free cloud credits, and a chance to work on projects with the Quest for Intelligence and with other students. This spring, she used IBM cloud credits to run a reinforcement learning model that’s part of her research with MIT Professor Jonathan How, training robot agents to cooperate on tasks that involve limited communication and information. She recently presented her results at a workshop at the International Conference on Machine Learning.
“It helped me try out different techniques without worrying about the compute bottleneck and running out of resources,” she says.
Improving AI access at MIT
The MIC has launched several AI projects of its own. The most ambitious is Monkey, a container-based, cloud-native service that would allow MIT undergraduates to log in and train an AI model from anywhere, tracking the training as it progresses and managing the credits allotted to each student. On a Friday afternoon in April, the team gathered in a Quest for Intelligence conference room as Michael Silver, a rising senior, sketched out the modules Monkey would need.
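Monkey's code is not shown in the article, but the credit-management idea it describes can be sketched. The class, numbers, and field names below are all hypothetical.

```python
# Hypothetical sketch of per-student cloud-credit accounting: submitting a
# training job deducts its estimated cost from the student's allotment.
class CreditLedger:
    def __init__(self, allotment):
        self.balance = dict(allotment)  # student -> remaining credits

    def submit_job(self, student, estimated_cost):
        """Queue a training job if the student has enough credits."""
        if self.balance.get(student, 0) < estimated_cost:
            raise RuntimeError("insufficient credits")
        self.balance[student] -= estimated_cost
        return {"student": student, "status": "queued"}

ledger = CreditLedger({"alice": 100})
job = ledger.submit_job("alice", 30)  # leaves 70 credits
```

In the real service this bookkeeping would sit in front of the container build-and-run pipeline, rejecting jobs before any cloud resources are spent.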
As Silver scrawled the words "Docker Image Build Service" on the board, the student assigned to research the module apologized. “I didn’t make much progress on it because I had three midterms!” he said.
The planning continued, with Steven Shriver, a software engineer with the Quest Bridge, interjecting bits of advice. The students had assumed the container service they planned to use, Docker, would be secure. It isn’t.
“Well, I guess we have another task here,” said Silver, adding the word “security” to the white board.
Later, the sketch would be turned into a design document and shared with the two UROP students helping to execute Monkey. The team hopes to launch sometime next year.
“The coding isn’t the difficult part,” says UROP student Amanda Li, a member of MIC Dev-Ops. “It’s exploring the server side of machine learning — Docker, Google Cloud, and the API. The most important thing I’ve learned is how to efficiently design and pipeline a project as big as this.”
Silver knew he wanted to be an AI engineer in 2016, when the computer program AlphaGo defeated the world’s reigning Go champion. As a senior at Boston University Academy, Silver worked on natural language processing in the lab of MIT Professor Boris Katz, and has continued to work with Katz since coming to MIT. Seeking more coding experience, he left HackMIT, where he had been co-director, to join MIC Dev-Ops.
“A lot of students read about machine learning models, but have no idea how to train one,” he says. “Even if you know how to train one, you’d need to save up a few thousand dollars to buy the GPUs to do it. MIC lets students interested in machine learning reach that next level.”
Conceived by MIC members, a second project is focused on making AI research papers posted on arXiv easier to explore. Nearly 14,000 academic papers are uploaded each month to the site, and although papers are tagged by field, drilling into subtopics can be overwhelming.
Wang, for one, grew frustrated while doing a basic literature search on reinforcement learning. “You have a ton of data and no effective way of representing it to the user,” she says. “It would have been useful to see the papers in a larger context, and to explore by number of citations or their relevance to each other.”
A third MIC project focuses on crawling MIT’s hundreds of listservs for AI-related talks and events to populate a Google calendar. The tool will be closely patterned after an app Silver helped build during MIT’s Independent Activities Period in January. Called Dormsp.am, the app classifies listserv emails sent to MIT undergraduates and plugs them into a calendar-email client. Students can then search for events by day or by a color-coded topic, such as tech, food, or jobs. Once Dormsp.am launches, Silver will adapt it to search for and post AI-related events at MIT to an MIC calendar.
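A keyword-matching sketch conveys the kind of topic tagging described; the real Dormsp.am categories and classifier are not specified in the article, so everything below is an assumption.

```python
# Illustrative keyword classifier in the spirit of Dormsp.am's topic tags.
# Naive substring matching like this can misfire on short keywords inside
# longer words; a real classifier would match whole words or learn from data.
TOPIC_KEYWORDS = {
    "tech": ["machine learning", "hackathon", "talk"],
    "food": ["pizza", "snacks", "dinner"],
    "jobs": ["internship", "hiring", "career"],
}

def classify_email(subject):
    """Return the first topic whose keywords appear in the subject line."""
    text = subject.lower()
    for topic, keywords in TOPIC_KEYWORDS.items():
        if any(k in text for k in keywords):
            return topic
    return "other"
```

Each classified email would then be plugged into the calendar under its color-coded topic.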
Silver says the team spent extra time on the user interface, taking a page from MIT Professor Daniel Jackson’s Software Studio class. “This is an app that can live or die on its usability, so the front end is really important,” he says.
Wang is now collaborating with Moin Nadeem, MIC’s outgoing president, to build the visualization tool. It’s exactly the kind of hands-on experience MIC was intended to provide, says Nadeem, a rising senior. “Students learn fundamental concepts in class but don’t know how to implement them,” he says. “I’m trying to build what freshman me would have liked to have had: a community of people excited to do interesting stuff with machine learning.”
For the past 20 years, officials from the U.S. Navy and leaders in the shipbuilding industry have convened on MIT’s campus each spring for the MIT Ship Design and Technology Symposium. The daylong event is a platform to update industry and military leaders on the latest groundbreaking research in naval construction and engineering being conducted at MIT.
The main event of the symposium was the design project presentations given by Course 2N (Naval Construction and Engineering) graduate students. These projects serve as a capstone of their three-year curriculum.
This year, recent graduate Andrew Freeman MEng '19, SM '19, who was advised by Dick K. P. Yue, the Philip J. Solondz Professor of Engineering, and William Taft MEng '19, SM '19, who works with James Kirtley, professor of electrical engineering and computer science, presented their current research. Rear Admiral Ronald A. Boxall, director of surface warfare at the U.S. Navy, served as keynote speaker at the event, which took place in May.
“The Ship Design and Technology Symposium gives students in the 2N program the opportunity to present ship and submarine design and conversions, as well as thesis research, to the leaders of the U.S. Navy and design teams from industry,” explains Joe Harbour, professor of the practice of naval construction at MIT. “Through the formal presentation and poster sessions, the naval and industrial leaders can better understand opportunities to improve designs and design processes.”
Since 1901, the Course 2N program has been educating active-duty officers in the Navy and U.S. Coast Guard, in addition to foreign naval officers. This year, eight groups of 2N students presented design or conversion project briefs to an audience of experts in the Samberg Conference Center.
The following three projects exemplify the ways in which these students are adapting existing naval designs and creating novel designs that can help increase the capabilities and efficiency of naval vessels.
The next generation of hospital ships
The Navy has a fleet of hospital ships ready for any major combat situations that might arise. These floating hospitals allow doctors to care for large numbers of casualties, perform operations, stabilize patients, and help transfer patients to other medical facilities.
Lately, these ships have been instrumental in response efforts during major disasters — such as the recent hurricanes in the Caribbean. The ships also provide an opportunity for doctors to train local medical professionals in developing countries.
The Navy's current fleet of hospital ships is aging. Designed in the 1980s, these ships require an update to complement the way naval operations are conducted in modern times. As such, the U.S. Navy is looking to launch the next fleet of hospital ships in 2035.
A team of Course 2N students including Aaron Sponseller, Travis Rapp, and Robert Carelli was tasked with assessing current hospital ship designs and proposing a design for the next generation of hospital ships.
“We looked at several different hull form sizes that could achieve the goals of our sponsors, and assigned scores to rank their attributes and determine which one could best achieve their intended mission,” explains Carelli.
In addition to visiting the USNS Mercy, one of the Navy's current hospital ships, the team toured nearby Tufts Medical Center to get a sense of what a state-of-the-art medical facility looked like. One thing that immediately struck the team was how different the electrical needs of a modern-day medical facility are from those of nearly 40 years ago, when the current hospital ships were first being designed.
“Part of the problem with the current ships is they scaled their electrical capacity with older equipment from the 1980s in mind,” adds Rapp. This capacity doesn’t account for the increased electrical burden of digital CT scans, high-tech medical devices, and communication suites.
The current ships have a separate propulsion plant and electrical generation plant. The team found that combining the two would increase the ship’s electrical capacity, especially while "on station" — a term used when a ship maintains its position in the water.
“These ships spend a lot of time on station while doctors operate on patients,” explains Carelli. “By using the same system for propelling and electrical generation, you have a lot more capacity for these medical operations when it’s on station and for speed when the ship is moving.”
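The trade-off Carelli describes can be illustrated with the common propeller-law rule of thumb that propulsion power grows roughly with the cube of speed, so an integrated plant that sits on station has almost its entire generating capacity free for the hospital. All numbers in the sketch below are hypothetical, chosen only to show the shape of the trade-off, not drawn from the students' design.

```python
# Illustrative sketch (all numbers hypothetical): with an integrated power
# plant, any generation not used for propulsion is available to the hospital.
# Rule of thumb: propulsion power scales roughly with the cube of speed.

TOTAL_GENERATION_MW = 30.0    # hypothetical installed generating capacity
PROP_POWER_AT_MAX_MW = 25.0   # hypothetical propulsion demand at max speed
MAX_SPEED_KT = 17.5           # hypothetical maximum speed

def hospital_capacity_mw(speed_kt):
    """Power left over for medical loads at a given speed (cube-law estimate)."""
    propulsion = PROP_POWER_AT_MAX_MW * (speed_kt / MAX_SPEED_KT) ** 3
    return TOTAL_GENERATION_MW - propulsion

print(f"On station (0 kt):   {hospital_capacity_mw(0):.1f} MW for the hospital")
print(f"Transit (17.5 kt):   {hospital_capacity_mw(17.5):.1f} MW for the hospital")
```

Because propulsion demand falls off with the cube of speed, even modest slowdowns free large amounts of power, which is why a combined plant pays off most while the ship holds position for surgery.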
The team also recommended that the ship be downsized and tailored to treat intensive care cases rather than devoting so much space to wards for stable patients. “We trimmed the fat, so to speak, and are moving the ship toward what really delivers value — intensive care capability for combat operations,” says Rapp.
The team hopes their project will inform the decisions the Navy makes when they do replace large hospital ships in 2035. “The Navy goes through multiple iterations of defining how they want their next ship to be designed and we are one small step in that process,” adds Sponseller.
Autonomous fishing vessels
Over the past few decades, advances in artificial intelligence and sensory hardware have led to increasingly sophisticated unmanned vehicles in the water. Sleek autonomous underwater vehicles operate below the water’s surface. Rather than work on these complex and often expensive machines, Course 2N students Jason Barker, David Baxter, and Brian Stanfield assessed the possibility of using something far more commonplace for their design project: fishing vessels.
“We were charged with looking at the possibility of going into a port, acquiring a low-end vessel like a fishing boat, and making that boat an autonomous machine for various missions,” explains Barker.
With such a broad scope, Barker and his teammates set some parameters to guide their research. They homed in on one fishing boat in particular: a 44-foot drum seiner.
The next step was determining how such a vessel could be outfitted with sensors to carry out a range of missions including measuring marine life, monitoring marine traffic in a given area, carrying out intelligence, surveillance and reconnaissance (ISR) missions, and, perhaps most importantly, conducting search and rescue operations.
The team estimated that the cost of transforming an everyday fishing boat into an autonomous vehicle would be roughly $2 million — substantially lower than building a new autonomous vehicle. The relatively low cost could make this an appealing exercise in areas where piracy is a potential concern. “Because the price of entry is so low, it’s not as risky as using a capital asset in these areas,” Barker explains.
The low price could also lead to a number of such autonomous vehicles in a given area. “You could put out a lot of these vessels,” adds Barker. “With the advances of swarm technologies you could create a network or grid of autonomous boats.”
Increasing endurance and efficiency in Freedom-class ships
For Course 2N student Charles Hasenbank, working on a conversion project for the engineering plant of Freedom-class ships was a natural fit. As a lieutenant in the U.S. Navy, Hasenbank served on the USS Freedom.
Freedom-class ships can reach upwards of 40 knots, 10 knots faster than most combat ships. “To get those extra knots requires a substantial amount of power,” explains Hasenbank. This power is generated by two diesel engines and two gas turbines, marine derivatives of the engines that power large aircraft like the Dreamliner.
For their new frigate program, the Navy is looking to achieve a maximum speed of 30 knots, making the extra power provided by these engines unnecessary. The endurance range of these new frigates, however, would be higher than what the current Freedom-class ships allow. As such, Hasenbank and his fellow students Tikhon Ruggles and Cody White were tasked with exploring alternate forms of propulsion.
The team had five driving criteria in determining how to best convert the ships’ power system — minimize weight changes, increase efficiency, maintain or decrease acquisition costs, increase simplicity, and improve fleet commonality.
“The current design is a very capable platform, but the efficiencies aren’t there because speed was a driving factor,” explains Hasenbank.
When redesigning the engineering plant, the team landed on a four-propeller arrangement, which would maintain the ships' current draft. To accommodate this change, the structure of the stern would need to be altered.
By removing a step currently in the stern design, the team made an unexpected discovery. Above 12 knots, their stern design would decrease hull resistance. “Something we didn’t initially expect was we improved efficiency and gained endurance through decreasing the hull resistance,” adds Hasenbank. “That was a nice surprise along the way.”
The team’s new design would meet the 30-knot speed requirement of the new frigate program while adding between 500 and 1,000 nautical miles to the ship’s endurance.
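A back-of-the-envelope calculation shows how lower hull resistance buys endurance: at a fixed transit speed, fuel burn per mile is roughly proportional to required power, so a fractional drop in resistance stretches range by the inverse factor. The baseline range and resistance reduction below are hypothetical placeholders, not figures from the team's study.

```python
# Hypothetical illustration: translating a hull-resistance reduction into range.
BASELINE_RANGE_NM = 3500.0    # hypothetical endurance range at transit speed
RESISTANCE_REDUCTION = 0.20   # hypothetical 20% drop in hull resistance

# At constant speed, fuel burned per mile scales with resistance, so range
# scales with the inverse of the remaining resistance fraction.
new_range = BASELINE_RANGE_NM / (1.0 - RESISTANCE_REDUCTION)
print(f"Added endurance: {new_range - BASELINE_RANGE_NM:.0f} nautical miles")
```

With these placeholder numbers, a 20 percent resistance reduction adds 875 nautical miles, in the same ballpark as the 500 to 1,000 miles the team reported.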
Along with the other design projects presented at the MIT Ship Design and Technology Symposium, the work conducted by Hasenbank and his team could inform important decisions the U.S. Navy has to make in the coming years as it looks to update and modernize its fleet.
The following email was sent today to the MIT community by President L. Rafael Reif.
To the members of the MIT community,
MIT has flourished, like the United States itself, because it has been a magnet for the world’s finest talent, a global laboratory where people from every culture and background inspire each other and invent the future, together.
Today, I feel compelled to share my dismay about some circumstances painfully relevant to our fellow MIT community members of Chinese descent. And I believe that because we treasure them as friends and colleagues, their situation and its larger national context should concern us all.
As the US and China have struggled with rising tensions, the US government has raised serious concerns about incidents of alleged academic espionage conducted by individuals through what is widely understood as a systematic effort of the Chinese government to acquire high-tech IP.
As head of an institute that includes MIT Lincoln Laboratory, I could not take national security more seriously. I am well aware of the risks of academic espionage, and MIT has established prudent policies to protect against such breaches.
But in managing these risks, we must take great care not to create a toxic atmosphere of unfounded suspicion and fear. Looking at cases across the nation, small numbers of researchers of Chinese background may indeed have acted in bad faith, but they are the exception and very far from the rule. Yet faculty members, post-docs, research staff and students tell me that, in their dealings with government agencies, they now feel unfairly scrutinized, stigmatized and on edge – because of their Chinese ethnicity alone.
Nothing could be further from – or more corrosive to – our community’s collaborative strength and open-hearted ideals. To hear such reports from Chinese and Chinese-American colleagues is heartbreaking. As scholars, teachers, mentors, inventors and entrepreneurs, they have been not only exemplary members of our community but exceptional contributors to American society. I am deeply troubled that they feel themselves repaid with generalized mistrust and disrespect.
The signal to the world
For those of us who know firsthand the immense value of MIT’s global community and of the free flow of scientific ideas, it is important to understand the distress of these colleagues as part of an increasingly loud signal the US is sending to the world.
Protracted visa delays. Harsh rhetoric against most immigrants and a range of other groups, because of religion, race, ethnicity or national origin. Together, such actions and policies have turned the volume all the way up on the message that the US is closing the door – that we no longer seek to be a magnet for the world’s most driven and creative individuals. I believe this message is not consistent with how America has succeeded. I am certain it is not how the Institute has succeeded. And we should expect it to have serious long-term costs for the nation and for MIT.
For the record, let me say with warmth and enthusiasm to every member of MIT’s intensely global community: We are glad, proud and fortunate to have you with us! To our alumni around the world: We remain one community, united by our shared values and ideals! And to all the rising talent out there: If you are passionate about making a better world, and if you dream of joining our community, we welcome your creativity, we welcome your unstoppable energy and aspiration – and we hope you can find a way to join us.
* * *
In May, the world lost a brilliant creative force: architect I.M. Pei, MIT Class of 1940. Raised in Shanghai and Hong Kong, he came to the United States at 17 to seek an education. He left a legacy of iconic buildings from Boston to Paris and China to Washington, DC, as well as on our own campus. By his own account, he consciously stayed alive to his Chinese roots all his life. Yet, when he died at the age of 102, the Boston Globe described him as “the most prominent American architect of his generation.”
Thanks to the inspired American system that also made room for me as an immigrant, all of those facts can be true at the same time.
As I have discovered through 40 years in academia, the hidden strength of a university is that every fall, it is refreshed by a new tide of students. I am equally convinced that part of the genius of America is that it is continually refreshed by immigration – by the passionate energy, audacity, ingenuity and drive of people hungry for a better life.
There is certainly room for a wide range of serious positions on the actions necessary to ensure our national security and to manage and improve our nation’s immigration system. But above the noise of the current moment, the signal I believe we should be sending, loud and clear, is that the story of American immigration is essential to understanding how the US became, and remains, optimistic, open-minded, innovative and prosperous – a story of never-ending renewal.
In a nation like ours, immigration is a kind of oxygen, each fresh wave reenergizing the body as a whole. As a society, when we offer immigrants the gift of opportunity, we receive in return vital fuel for our shared future. I trust that this wisdom will always guide us in the life and work of MIT. And I hope it can continue to guide our nation.
L. Rafael Reif
Greentown Labs is the largest clean technology incubator in North America, a fact that’s easy to accept when you walk inside. The massive, open entrance of Greentown’s Somerville, Massachusetts, headquarters gives visitors the impression they’ve entered the office of one of Greater Boston’s most successful tech companies.
Beyond the modern entryway are smaller working spaces — some cluttered with startup prototypes, others lined with orderly lab equipment — to enable foundational, company-building experiments.
In addition to the space and equipment, Greentown offers startups equity-free legal, information technology, marketing, and sales support, and a coveted network of corporations and industry investors.
But what many entrepreneurs say they like most about Greentown is the people.
“Greentown offers a lot of different things, but first and foremost among them is a community of entrepreneurs who are striving to solve big challenges in climate, energy, and the environment,” says Greentown Labs CEO Emily Reichert MBA ’12.
Greentown is full of stories of peers bumping into each other in the kitchen only to find they’re struggling with similar problems or, even better, that one of them already grappled with the problem and found a solution.
MIT has played a pivotal role in Greentown’s success since its inception. Reichert estimates about 60 percent of Greentown’s more than 90 current startups were founded by MIT alumni.
The current version of Greentown looks like the result of some well-funded, grand vision set forth long ago. But Greentown’s rise was every bit as spontaneous — and tenuous — as the early days of any startup.
A space for building
In 2010, Sorin Grama SM ’07 and Sam White were looking for office space to work on a new chiller design for their startup, Promethean Power Systems, which still develops off-grid refrigeration systems in India. They needed a place to build the big, leaky refrigeration prototypes they’d thought up. It also needed to be close to MIT, where the company founders connected with advisors and interns.
Eventually, White found “a dilapidated warehouse” on Charles Street in Cambridge for the right price. What the space lacked in beauty it made up for in size, so the founders decided to use an MIT email list to see if other founders would like to join them. Some founders building an app were first to respond; their first reaction was to ask White and Grama to clean up a bit, and the app builders were politely shown the door.
Without exactly intending to, Grama and White had made their warehouse a builder space. Over the next week, a few more founders came in, including Jason Hanna, the co-founder of building efficiency company Embue; Jeremy Pitts SM ’10, MBA ’10, who was creating more efficient compressor systems for the oil and gas industry as the founder of Oscomp Systems; and Adam Rein MBA ’10 and Ben Glass ’07 SM ’10, whose company Altaeros was building airborne wind turbines. The warehouse looked perfect to them.
“What we all had in common was we just needed a space to prototype and build stuff, where we could spill stuff, make noise, and share tools,” Grama says. “Pretty quickly it became a nice band of startups that appreciated the same thing.”
The winter of 2010-2011 was a freezing one in the warehouse, made worse by icy cement floors, but the founders couldn’t help but notice the benefits of working together. Any time an intern or investor came to see one company, they were introduced to the others. Founders with expertise in areas like grant writing or funding rounds would give lunchtime presentations to help the others.
Rein remembers thinking he was in the perfect environment to succeed despite the sometimes comical dysfunction of the space. One day an official with the United States Agency for International Development (USAID) stopped by to evaluate one of the startups for a grant. The visit went well enough — until she got locked in the bathroom. The founders eventually got her out, but they didn’t think the incident boded well for their chances of getting that grant.
When the landlord kicked them out of Charles Street, they found a similar space in South Boston, recruiting friends and employees to help strip wires, scrape walls, and paint over the course of a week. Rein recalls his regular duties included ordering toilet paper for the building.
The space was also twice as large as the one in Cambridge, so as Greentown’s reputation spread throughout 2011, five startups became 15, then 20.
“It really took on a life of its own,” Grama says.
Among the curious MIT students who journeyed to Greentown that year was Reichert. Having worked as a chemist for 10 years in spotless, safety-certified labs before coming to MIT, she was shocked to see the condition of Greentown.
“The first time I walked in I had two gut reactions,” Reichert says. “The first was I felt this amazing energy and passion, and kind of a buzzing. If you walk into Greentown today you still feel those things. The second was, ‘Oh my god, this place is a death trap.’”
After earning her MBA, Reichert initially helped out as a consultant at Greentown. In February 2013, she joined Greentown to run it full time. It was a critical time for the growing co-op: White and Grama were getting ready to move to India to work on Promethean, and Hanna, who had primarily led Greentown to that point, was expecting the birth of his first child.
At the same time, real estate prices in South Boston were skyrocketing, and Greentown was again being forced to move.
Reichert, who worked as CEO without a salary for more than a year, remembers those first six months on the job as the most stressful of her life. With no money to put toward a new space, she was able to partner with the City of Somerville to secure some funding and find a new location. Reichert signed a construction contract to renovate the Somerville space before she knew where the money would come from, and began lobbying state and corporate officials for sponsorships.
She still remembers the day Greentown was to be evicted from South Boston, with everyone scrambling to clean out the cluttered warehouse and a few determined founders running one last experiment until 7 p.m. before throwing the last of the equipment in a U-Haul truck and beginning the next phase of Greentown’s journey.
Within 15 months of the move to Somerville, Greentown’s 40,000 square feet were completely filled and Reichert began the process of expanding the headquarters.
Today, Greentown’s three buildings make up more than 100,000 square feet of prototyping, office, and event space and feature a wet lab, electronics lab, and machine shop.
Since its inception, Greentown has supported more than 200 startups that have created around 2,800 jobs, many in the Boston area.
The original founders still serve on Greentown’s board of directors, ensuring every dollar Greentown makes goes toward supporting startups.
Of the founding companies, only Promethean and Altaeros are still housed in Greentown, although they’re all still operating in some form.
“We probably should’ve moved out, but it’s important to work in a place you really enjoy,” Rein says of Altaeros.
Grama, meanwhile, has come full circle. After ceding the reins of Promethean and returning from India, last year he started another company, Transaera, that’s developing efficient, environmentally friendly cooling systems based on research from MIT.
This time, it took him a lot less time to find office space.
On June 6, the MIT AgeLab, in partnership with AARP, presented the fourth annual OMEGA scholarship awards to three accomplished young adults from New England. Sidonie Brown from Brookline High School in Brookline, Massachusetts, Brook Masse from Mount Greylock Regional High School in Williamstown, Massachusetts, and Jay Park from Newton South High School in Newton, Massachusetts, were each awarded a 2019 OMEGA scholarship. OMEGA scholarships recognize young people who are leading efforts in their schools to foster intergenerational connections within their communities.
The three winners are developers and leaders of programs that support older adults’ needs, utilize their experience and wisdom, and furnish social connections across generations. Brown has led an ongoing Brookline High School program called Brookline SHOP (Students Helping Older People), which recruits students to assist independent-living older adults with grocery shopping, technology use, and other instrumental activities. Masse started a student initiative with a local retirement community in which students converse, play games, garden, and create art with the residents. Park supported a program called Spanish Immersion Jamaica Plain and Brookline, which engages Spanish-speaking older adults as conversation partners with high school students to improve students’ mastery of the Spanish language.
The OMEGA awards were presented at the MIT AgeLab before the recipients’ families, members of the MIT AgeLab’s Lifestyle Leaders Panel, Michael Festa, the director of AARP Massachusetts, AgeLab researchers, and leaders of community organizations serving older adults that collaborated in the recipients’ projects. The OMEGA scholarships will provide $1,000 toward each recipient’s college tuition and an additional $1,000 to each recipient’s school or community partner to continue their outstanding intergenerational efforts.
OMEGA, which stands for Opportunities for Multigenerational Engagement, Growth, and Action, was developed to support the development and growth of student-led programs and clubs that connect high school students with older adults. The MIT AgeLab is a multidisciplinary research organization that works with business, government, and non-governmental organizations to improve the quality of life of older adults and those who care for them.
Patagonia, the outdoor apparel and gear company, organizes an annual case competition as a platform for graduate students across the country to solve pressing challenges in environmental sustainability. This year, teams were asked to propose environmentally benign alternatives to single-use plastic packaging for apparel and food products that can be implemented at scale by 2025. The pervasive use of single-use plastics, which constitute a significant portion of the 330 million metric tons of plastics produced annually, has become of increasing concern as the material has been found to pollute marine environments and to take centuries to degrade.
A group of six MIT PhD and MBA students collaborated to develop and hone novel innovations fulfilling the Patagonia Case Competition prompt. Team NourishMIT collectively represented five different programs across the Institute: Audrey Bazerghi, an MBA candidate and master's student in civil and environmental engineering; Cristina Bleicher, an MBA candidate; Ty Christoff-Tempesta, a PhD candidate in materials science and engineering; Cherry Gao, a PhD candidate in biological engineering; Ellena Kim, an MBA candidate; and Jordan Landis, an MBA candidate and master's student in mechanical engineering. The team started working in October 2018 to ultimately devise the winning proposal that focused on cost-effective and timely biodegradation of apparel polybags and everyday food packaging.
One hundred twenty-four teams from across the world entered the competition with written proposals, and the top 10 finalists were invited to pitch their solutions to a panel of judges at the Haas School of Business at the University of California at Berkeley in April. Teams competed for cash prizes totaling $22,500, and the top two teams were also invited to travel to Patagonia’s headquarters in Ventura, California, to advance implementation of the proposed solutions and to surf with Patagonia’s employees. The 2019 competition marks the first time that an MIT team has won first place in the Patagonia Case Competition since its inception in 2016.
Team NourishMIT received financial support from the Parsons Laboratory for Environmental Science and Engineering, as well as from the MIT Sloan Sustainability Initiative.
Patients with type 1 diabetes have to regularly inject themselves with insulin, a hormone that helps their cells absorb glucose from the bloodstream. Another hormone called glucagon, which has the opposite effect, is given to diabetic patients to revive them if they become unconscious due to severe hypoglycemia.
The form of glucagon given to patients is powdered and has to be dissolved in liquid immediately before being injected, because if stored as a liquid, the protein tends to form clumps, also called amyloid fibrils. A new study from MIT reveals the structure of these glucagon fibrils and suggests possible strategies for altering the amino acid sequence so that the protein is less likely to become clumped.
“Insulin in solution is stable for many weeks, and the goal is to achieve the same solution stability with glucagon,” says Mei Hong, an MIT professor of chemistry and one of the senior authors of the study. “Peptide fibrillization is a problem that the pharmaceutical industry has been working for many years to solve.”
Using nuclear magnetic resonance (NMR) spectroscopy, the researchers found that the structure of glucagon fibrils is unlike any other amyloid fibrils whose structures are known.
Yongchao Su, an associate principal scientist at Merck and Co., is also a senior author of the study, which appears in the June 24 issue of Nature Structural and Molecular Biology. MIT graduate student Martin Gelenter is the lead author of the paper.
Amyloid fibrils form when proteins fold into a shape that allows them to clump together. These proteins are often associated with disease. For example, the amyloid beta protein forms plaques associated with Alzheimer’s disease, and alpha synuclein forms Lewy bodies in the neurons of Parkinson’s disease patients.
Hong has previously studied the structures of other amyloid peptides, including one that binds to metals such as zinc. After giving a talk on her research at Merck, she teamed up with scientists there to figure out the structure of the fibrillized form of glucagon.
Inside the human body, glucagon exists as an “alpha helix” that binds tightly with a receptor found on liver cells, setting off a cascade of reactions that releases glucose into the bloodstream. However, when glucagon is dissolved in a solution at high concentrations, it begins transforming into a fibril within hours, which is why it has to be stored as a powder and mixed with liquid just before injecting it.
The MIT team used NMR, a technique that analyzes the magnetic properties of atomic nuclei to reveal the structures of the molecules containing those nuclei, to determine the structure of the glucagon fibrils. They found that the glucagon fibril consists of many layers of flat sheets known as beta sheets stacked on top of one another. Each sheet is made up of rows of identical peptides. However, the researchers discovered that, unlike any other amyloid fibril whose structure is known, the peptides run antiparallel to each other. That is, each strand runs in the opposite direction from the two on either side of it.
“All thermodynamically stable amyloid fibrils known so far are parallel packed beta sheets,” Hong says. “A stable antiparallel beta strand amyloid structure has never been seen before.”
In addition, the researchers found that the glucagon beta strand has no disordered segments. Each of the tens of thousands of peptide strands that make up the fibril is held tight in the antiparallel beta sheet conformation. This allows each peptide to form a 10-nanometer-long beta strand.
“This is an extremely stable strand, and is the longest beta strand known so far among any proteins,” Hong says.
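The 10-nanometer figure is consistent with glucagon's size: the hormone is a 29-residue peptide, and an extended beta strand rises roughly 0.34 to 0.35 nanometers per residue. The quick check below uses that textbook rise-per-residue approximation, which is not a value taken from the paper itself.

```python
# Sanity check: length of a fully extended 29-residue beta strand.
RESIDUES = 29                 # glucagon is a 29-amino-acid peptide
RISE_PER_RESIDUE_NM = 0.347   # approximate axial rise per residue (textbook value)

length_nm = RESIDUES * RISE_PER_RESIDUE_NM
print(f"Estimated strand length: {length_nm:.1f} nm")
```

The estimate comes out almost exactly 10 nanometers, matching the strand length the researchers observed.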
One major reason that glucagon fibrils are so stable is that side chains extending from the amino acids making up the glucagon peptides interact strongly with side chains of the peptides above and below them, creating very secure attachment points, also called steric zippers, that help to maintain the overall structure.
Courtesy of the researchers.
While all previously studied amyloid fibrils have a fixed set of residues that form the steric zippers, in glucagon fibrils, even-numbered residues from one strand and odd-numbered residues from the neighboring strand alternately form the steric zipper interface between two beta sheet layers. This conformational duality is another novel feature of the glucagon fibril structure.
“We can see from this structure why the fibril is so stable, and why it’s so hard to prevent it from forming,” Hong says. “To block it, you really have to change the identity of the amino acid residues. I’m now working with a colleague here to come up with ways to modify the sequence and break those stabilizing interactions, so that the peptide won’t self-assemble to form this fibril.”
Such alternative peptide sequences could remain shelf-stable for a longer period of time in solution, eliminating the need to mix glucagon with liquid before using it.
“Considering the crucial physiological role of glucagon, it is encouraging that new structural data on this polypeptide hormone continue to be collected,” says Kurt Wüthrich, a professor of biophysics at ETH Zurich, who was not involved in the research. “Although the structural data reported here characterize an ‘unwanted’ form of glucagon, the authors point out that it promises to provide novel leads for engineering glucagon analogs which would have improved physico-chemical properties for its administration as a drug, specifically a reduced tendency to form amyloid fibers.”
The research was funded by Merck Sharp and Dohme Corp., a subsidiary of Merck and Co., and the National Institutes of Health.
When medical devices are implanted in the body, the immune system often attacks them, producing scar tissue around the device. This buildup of tissue, known as fibrosis, can interfere with the device’s function.
MIT researchers have now come up with a novel way to prevent fibrosis from occurring, by incorporating a crystallized immunosuppressant drug into devices. After implantation, the drug is slowly secreted to dampen the immune response in the area immediately surrounding the device.
“We developed a crystallized drug formulation that can target the key players involved in the implant rejection, suppressing them locally and allowing the device to function for more than a year,” says Shady Farah, an MIT and Boston Children’s Hospital postdoc and co-first author of the study, who is soon starting a new position as an assistant professor of the Wolfson Faculty of Chemical Engineering and the Russell Berrie Nanotechnology Institute at Technion-Israel Institute of Technology.
The researchers showed that these crystals could dramatically improve the performance of encapsulated islet cells, which they are developing as a possible treatment for patients with type 1 diabetes. Such crystals could also be applied to a variety of other implantable medical devices, such as pacemakers, stents, or sensors.
Former MIT postdoc Joshua Doloff, now an assistant professor of Biomedical and Materials Science Engineering and member of the Translational Tissue Engineering Center at Johns Hopkins University School of Medicine, is also a lead author of the paper, which appears in the June 24 issue of Nature Materials. Daniel Anderson, an associate professor in MIT’s Department of Chemical Engineering and a member of MIT’s Koch Institute for Integrative Cancer Research and Institute for Medical Engineering and Science (IMES), is the senior author of the paper.
Anderson’s lab is one of many research groups working on ways to encapsulate islet cells and transplant them into diabetic patients, in hopes that such cells could replace the patients’ nonfunctioning pancreatic cells and eliminate the need for daily insulin injections.
Fibrosis is a major obstacle to this approach, because scar tissue can block the islet cells’ access to oxygen and nutrients. In a 2017 study, Anderson and his colleagues showed that systemic administration of a drug that blocks cell receptors for a protein called CSF-1 can prevent fibrosis by suppressing the immune response to implanted devices. This drug targets immune cells called macrophages, which are the primary cells responsible for initiating the inflammation that leads to fibrosis.
“That work was focused on identifying next-generation drug targets, namely which cell and cytokine players were essential for fibrotic response,” says Doloff, who was the lead author on that study, which also involved Farah. He adds, “After knowing what we had to target to block fibrosis, and screening drug candidates needed to do so, we still had to find a sophisticated way of achieving local delivery and release for as long as possible.”
In the new study, the researchers set out to find a way to load the drug directly into an implantable device, to avoid giving patients drugs that would suppress their entire immune system.
“If you have a small device implanted in your body, you don’t want to have your whole body exposed to drugs that are affecting the immune system, and that’s why we’ve been interested in creating ways to release drugs from the device itself,” Anderson says.
To achieve that, the researchers decided to try crystallizing the drugs and then incorporating them into the device. Crystallization packs the drug molecules very tightly, so the drug-releasing device can be miniaturized. Crystals also take a long time to dissolve, enabling long-term drug delivery. Not every drug can be easily crystallized, but the researchers found that the CSF-1 receptor inhibitor they were using can form crystals, and that they could control the size and shape of the crystals, which determine how long the drug takes to break down once in the body.
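The link between crystal size and release duration can be illustrated with a standard shrinking-sphere, diffusion-limited dissolution model (a textbook approximation, not the model used in the study): dissolution time grows with the square of the initial crystal radius, so larger crystals last disproportionately longer. All parameter values below are hypothetical, chosen only to show the scaling.

```python
def dissolution_time(r0, D, c_s, rho):
    """Diffusion-limited shrinking-sphere model:
    dr/dt = -D * c_s / (rho * r)  ->  t = rho * r0**2 / (2 * D * c_s).
    r0 in meters; returns time in seconds."""
    return rho * r0**2 / (2 * D * c_s)

# hypothetical parameters (illustrative only, not from the paper)
D = 5e-10      # drug diffusivity in the surrounding medium, m^2/s
c_s = 0.1      # drug solubility, kg/m^3
rho = 1300.0   # crystal density, kg/m^3

for r0_um in (1.0, 10.0, 100.0):
    r0 = r0_um * 1e-6  # convert micrometers to meters
    t = dissolution_time(r0, D, c_s, rho)
    print(f"r0 = {r0_um:6.1f} um -> dissolution time ~ {t:.3g} s")
```

Because time scales as the radius squared, a tenfold increase in crystal size yields a hundredfold longer release under this model, which is the qualitative behavior the researchers exploit by controlling crystal size and shape.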
“We showed that the drugs released very slowly and in a controlled fashion,” says Farah. “We took those crystals and put them in different types of devices and showed that with the help of those crystals, we can allow the medical device to be protected for a long time, allowing the device to keep functioning.”
Encapsulated islet cells
To test whether these crystalline drug formulations could boost the effectiveness of encapsulated islet cells, the researchers incorporated the drug crystals into 0.5-millimeter-diameter spheres of alginate, which they used to encapsulate the cells. When these spheres were transplanted into the abdomen or under the skin of diabetic mice, they remained fibrosis-free for more than a year. During this time, the mice did not need any insulin injections, as the islet cells were able to control their blood sugar levels just as the pancreas normally would.
“In the past three-plus years, our team has published seven papers in Nature journals — this being the seventh — elucidating the mechanisms of biocompatibility,” says Robert Langer, the David H. Koch Institute Professor at MIT and an author of the paper. “These include an understanding of the key cells and receptors involved, optimal implant geometries and physical locations in the body, and now, in this paper, specific molecules that can confer biocompatibility. Taken together, we hope these papers will open the door to a new generation of biomedical implants to treat diabetes and other diseases.”
The researchers believe that it should be possible to create crystals that last longer than those they studied in these experiments, by altering the structure and composition of the drug crystals. Such formulations could also be used to prevent fibrosis of other types of implantable devices. In this study, the researchers showed that the crystalline drug could be incorporated into PDMS, a polymer frequently used for medical devices, and could also be used to coat components of a glucose sensor and an electrical muscle stimulation device, which include materials such as plastic and metal.
“It wasn’t just useful for our islet cell therapy, but could also be useful to help get a number of different devices to work long-term,” Anderson says.
The research was funded by JDRF, the National Institutes of Health, the Leona M. and Harry B. Helmsley Charitable Trust Foundation, and the Tayebati Family Foundation.
Other authors of the paper include MIT Principal Research Scientist Peter Muller; MIT grad students Atieh Sadraei and Malia McAvoy; MIT research affiliate Hye Jung Han; former MIT postdoc Katy Olafson; MIT technical associate Keval Vyas; former MIT grad student Hok Hei Tam; MIT postdoc Piotr Kowalski; former MIT undergraduates Marissa Griffin and Ashley Meng; Jennifer Hollister-Locke and Gordon Weir of the Joslin Diabetes Center; Adam Graham of Harvard University; James McGarrigle and Jose Oberholzer of the University of Illinois at Chicago; and Dale Greiner of the University of Massachusetts Medical School.
Mason Grimshaw grew up on the Rosebud Sioux Indian Reservation in South Dakota but moved to Rapid City during high school to pursue a better education. When it came time to apply to college, he hopped online, typed “best engineering schools” into Google, and applied to two places: MIT and his father’s alma mater, the South Dakota School of Mines and Technology. He was admitted to both, but when he got into the Institute, his father insisted that he go.
It wasn’t an easy decision, however. Grimshaw felt guilty about leaving his community, where he says that everyone helps each other get by. The move to Rapid City had been difficult enough for him, given that 90 percent of his family lived back at the reservation. Coming to Cambridge was an even bigger step, but his family encouraged him to take the opportunity.
“I didn’t really want to leave home, because that is such a strong community for me. I thought if I did leave, it was only going to be worth it if I could get the best education possible,” he says.
Now a graduate student at the MIT Sloan School of Management working toward a Master of Business Analytics (MBAn) degree, Grimshaw hopes to eventually bring the skills and knowledge he acquires at MIT back home to the reservation.
Looking at the big picture, Grimshaw has aspirations to bring programming to Rosebud. The ultimate dream would be to open a software or web development consulting firm where he could teach community members computer science skills that they could, in turn, teach others. He hopes that through this business, he can equip people in the community with enough technical skills to be able to sustain the company on their own without his help. It’s a long-term goal, but Grimshaw aims high.
After earning his bachelor’s in business analytics at MIT, Grimshaw saw the MBAn as a natural next step. The program teaches students to apply the techniques of data science, programming, machine learning, and optimization to come up with business solutions.
“Because I did it as an undergrad, I thought this stuff was so cool. You can kind of predict the future and help anyone make a better decision. If I was going to be that person to help people make decisions that are important and change people’s lives, I wanted to make sure that I was as prepared as possible,” Grimshaw says.
Surprisingly, Grimshaw had not touched a line of code before coming to MIT. In fact, he entered college intending to study mechanical engineering. But in his first year, a friend was struggling with an assignment for a computer science class, and Grimshaw decided to help him take a crack at the problem.
The work was fun, Grimshaw says, and coding came naturally for him. Eventually, he dropped his mechanical engineering pursuits and started studying computer science. He later switched majors and applied his computer science education to business analytics.
As a part of his MBAn program, he must complete an analytics capstone project, in which students work with a sponsor organization to create data-driven solutions to specific problems. Grimshaw, along with his program partner Amal Rar, will be working with the Massachusetts Bay Transportation Authority (MBTA) this summer to make The Ride, MBTA’s door-to-door paratransit service, more efficient.
Bringing business to invisible places
Grimshaw is also currently assisting MIT Sloan Senior Lecturer Anjali Sastry in writing a case study for South African nonprofit RLabs. RLabs seeks to inspire hope by providing business training and consulting to underprivileged South African communities. Grimshaw liked the organization’s mission, and he hopes that working on the RLabs case could give him some ideas about how to bring hope and innovation to his own community back home.
The nonprofit has, in part, inspired some of Grimshaw’s future aspirations for Rosebud. It has also gotten him thinking about alternative ways of investing in or giving back to communities, ways that don’t necessarily center on money. Some people, he says, need a place to stay or food more immediately than they need money.
Evaluating those circumstances and developing business models that address those more immediate needs as a form of payment can be a unique alternative to traditional compensation. Grimshaw stresses that monetary compensation is still important, but that being responsive to the specific areas of need within a community also has value.
“There’s a fine line. You can’t just say, ‘These people have nothing so they should just be happy to have a roof over their heads.’ I’m certainly not trying to do that, but there’s a difference in values and in what people place value on. Using that to make your business a little more sustainable is interesting,” Grimshaw says.
The reservation that Grimshaw is from lies within Todd County, an area that was previously listed as one of the poorest in America. He hopes to demonstrate to businesses that it is possible and worthwhile to invest in overlooked areas. He says that a lot of case studies in his field don’t feature stories from the emerging world or rural areas. He wants to show that through creative thinking and problem-solving, companies can work in these places, create jobs, and help lift people out of poverty.
Outside of his studies, Grimshaw mostly spends time with his wife and 5-month-old son, Augustine. His face lights up as he speaks about them.
His wife, Julia, also has a passion for helping people and works as the assistant activities director at Hale House, an assisted senior living facility in Boston. The two of them grew up together and hope to move their family closer to home after Grimshaw finishes his MBAn. For now, their favorite things to do in Boston are going to the Public Garden (Augustine loves the grass, Grimshaw says), getting a bite at Tasty Burger in Fenway, and watching the “Great British Bake Off” at home.
He also continues to participate in the American Indian Science and Engineering Society (AISES), which he joined as an undergraduate. There were very few members when he arrived at MIT in 2014, and while the number is still small, Grimshaw is enthusiastic about its growth.
“It was pretty cool because when I came here there were four, and on a good day five, of us. I still go to meetings. As I go now, there’s always 10 people, sometimes up to 12 or 15, and it’s awesome to see how much it’s growing,” he says.
While most people going into his field may opt for Silicon Valley or somewhere else on the coasts, Grimshaw would rather take his skill set closer to home. He won’t necessarily move back to Rosebud itself; somewhere within a reasonable driving distance is more likely. He’s thinking about Denver, with its up-and-coming tech scene, but nothing is set in stone. Wherever he ends up, if a company is interested in helping others through data, Mason Grimshaw is here to help.
It might be stretching it a bit to call it “MIT Medical-Killian Court,” but MIT Medical’s once-a-year, tent-based “satellite facility” stands ready to provide an amazing range of medical services during each year’s MIT Commencement. This year was no different, as the tent went up in the southeast corner of Killian Court during the first week of June, and MIT Medical clinicians prepared to care for the Institute’s 2,454 graduates and more than 10,000 family members and guests. In addition, the change in venue this year for Thursday’s doctoral hooding ceremony, from the Johnson Athletics Center Ice Rink to Killian Court, meant two days of staffing the medical tent, rather than the usual one.
While the tent might not be a full-fledged medical facility, much thought goes into equipping it with everything from basic first-aid items to medical supplies that might be needed to respond to more serious emergencies, explains Colleen Collins, chief of MIT Medical’s Urgent Care Service. The tent has a stainless-steel sink with its own water supply, multiple cots with privacy screens, and two dedicated porta-potties, including — new this year — one that is wheelchair-accessible.
“After every Commencement, we make note of additional supplies we might stock or things we could do differently,” Collins says. “This year, we were very cognizant of the fact that our PhD grads often have young children, so, before the hooding ceremony on Thursday, we held a briefing that focused on some of the medical emergencies young children might face.” Collins adds that she made a conscious effort to staff the tent both days with nurses and physicians who have experience with children, including some who are certified in pediatric advanced life support.
The 3,000-plus attendees at Thursday’s hooding ceremony enjoyed comfortable temperatures with overcast skies. It was a relatively quiet day in the tent for Collins and nurse Anne Marcoux — “mostly Band-Aid requests from women with new shoes and blisters,” notes Collins.
On Friday morning, the sun came out, the temperature rose, and the number of people in Killian Court swelled to more than 10,000. Marcoux was back for a second day, accompanied by sports medicine physician Angie Elliott in the morning, Associate Medical Director for Primary Care Patrick Egan in the afternoon, and family physician Jen Nohrden, who staffed the tent all day. Chief of Student Health Shawn Ferullo accompanied the long line of graduating seniors from their point of assembly at Rockwell Cage to Killian Court and then joined his colleagues in the medical tent. Also on hand were paramedics from the Cambridge Fire Department and Pro-EMS, an advanced life-support ambulance service, along with a large contingent of emergency medical technicians (EMTs) from MIT’s student-run Emergency Medical Services (MIT EMS), who were stationed throughout Killian Court, enabling them to respond promptly to any medical need.
Along with bandaging new-shoe-related blisters and responding to requests for sunscreen and ibuprofen, a few people came in with heat-related symptoms, sunburns, or symptoms of dehydration. But while clinicians in the tent were often busy, the only serious medical problem involved a guest who was transported to the hospital with symptoms of stroke.
“It was great to see students that we have worked with and assisted during their years at MIT realize their goal on such a beautiful sunny day,” says Ferullo.
Elliott and Nohrden, working the medical tent at Commencement for the first time, echo Ferullo’s sentiments. “It was exciting to see the happiness that graduation brings forth in the graduates and their family members,” Elliott says. “The teamwork of all the campus departments is another remarkable aspect of the day.”
Nohrden also came away from the experience with a new respect for the teamwork involved in creating MIT’s biggest day of the year. “Having attended a graduation as a visitor, I can say that I ‘took for granted’ all the work and prep that goes into making it a successful experience,” she says.
“I now realize what an accomplishment it is for people to ‘take it for granted,’” she continues. “For if everything goes off without a hitch, and there are no hiccups, and people only notice the stage and graduates, that is the ultimate sign of success. Onward to 2020!”
On a pedestal stands a pale yellow bud, which reveals a mosaic of bright colors when opened.
“I thought it would be meaningful to build a piece that represents how it's OK to show what's on the inside, even when it’s confusing or all over the place. It's good to let people know who you are as a person,” says Stephanie Chou ’19, who built a kinetic art sculpture to encourage members of the MIT community to express themselves and find support in challenging times.
Her piece consists of a bud fitted with a hand crank that, when turned, opens the yellow petals to reveal a rainbow-colored glass interior. She calls it a "Mess of Gold."
“Yellow is always a happy and positive color. It’s the color of the sun and smiley faces, and it brightens you,” Chou says when asked about her choice of the color yellow.
But this is not the only reason she chose yellow as her sculpture’s central hue. One of Chou’s best friends, Katherine Hunter, loved yellow roses. Unfortunately, Hunter, an MIT student, passed away after a brief illness in 2017. Dealing with her friend’s passing and the pressure of schoolwork affected Chou’s well-being.
“I had never really dealt with grief or depression before. After losing Katherine, I struggled with how to cope with my feelings, while also completing schoolwork and trying to re-understand the world in a new way,” says Chou.
During this challenging time, Chou sought help from friends and support resources on campus. She also approached her UROP advisor, professor of mechanical engineering Maria Yang, with an idea to build an engineering-intensive artwork that would honor the memory of her friend and offer comfort to others who interact with it.
“I think the sculpture is a deeply felt expression of hope and renewal, manifested in a uniquely MIT way,” says Yang. “It’s clear that Stephanie drew on her background as an engineer, but also as someone who is creative, playful, and caring to create the piece.”
With a mission in her head and a blueprint in her hand, Chou applied to the MindHandHeart Innovation Fund and was awarded funding to design the kinetic art sculpture.
“MindHandHeart gave me a lot of different resources and introduced me to professors who were interested in kinetic art and mental health in general. After speaking with them I was able to apply for the grant and come up with my designs,” says Chou.
Grateful for the support she received, Chou intends her mechanical artwork as a way of letting MIT community members know they are not alone in their struggles, and that there are support networks available; one only needs to reach out to them. The sculpture is on display on the third floor of the Stratton Student Center, and includes a box for passersby to drop notes expressing their thoughts and emotions.
Chou graduated this spring, and is heading to the West Coast to join a cybersecurity company. She hopes to use the experience and skills she gained at MIT to help build communities where everyone feels they belong.
“MIT poses many challenges, but it also teaches you how to tackle them, how to stay motivated, and how to just keep going,” Chou reflects. “I'm grateful to know that I can do almost anything I want to do. It's a great feeling.”
MIT chemical engineers have devised a new way to create nanoemulsions: mixtures of very tiny droplets of one liquid suspended within another liquid. Such emulsions are similar to the mixture that forms when you shake an oil-and-vinegar salad dressing, but with much smaller droplets. Their tiny size allows them to remain stable for relatively long periods of time.
The researchers also found a way to easily convert the liquid nanoemulsions to a gel when they reach body temperature (37 degrees Celsius), which could be useful for developing materials that can deliver medication when rubbed on the skin or injected into the body.
“The pharmaceutical industry is hugely interested in nanoemulsions as a way of delivering small molecule therapeutics. That could be topically, through ingestion, or by spraying into the nose, because once you start getting into the size range of hundreds of nanometers you can permeate much more effectively into the skin,” says Patrick Doyle, the Robert T. Haslam Professor of Chemical Engineering and the senior author of the study.
In their new study, which appears in the June 21 issue of Nature Communications, the researchers created nanoemulsions that were stable for more than a year. To demonstrate the emulsions’ potential usefulness for delivering drugs, the researchers showed that they could incorporate ibuprofen into the droplets.
Seyed Meysam Hashemnejad, a former MIT postdoc, is the first author of the study. Other authors include former postdoc Abu Zayed Badruddoza, L’Oréal senior scientist Brady Zarket, and former MIT summer research intern Carlos Ricardo Castaneda.
One of the easiest ways to create an emulsion is to add energy — by shaking your salad dressing, for example, or using a homogenizer to break down fat globules in milk. The more energy that goes in, the smaller the droplets, and the more stable they are.
Nanoemulsions, which contain droplets with a diameter of 200 nanometers or smaller, are desirable not only because they are more stable, but also because they have a higher ratio of surface area to volume, which allows them to carry larger payloads of active ingredients such as drugs or sunscreens.
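The surface-area-to-volume advantage follows from simple geometry: for a sphere of diameter d, the ratio works out to 6/d, so halving the droplet diameter doubles the specific surface. A quick sketch, using illustrative diameters rather than values from the study:

```python
import math

def sa_to_vol(d_nm):
    """Surface-area-to-volume ratio of a sphere of diameter d_nm (nanometers).
    SA = 4*pi*r^2, V = (4/3)*pi*r^3, so SA/V = 3/r = 6/d."""
    r = d_nm / 2
    area = 4 * math.pi * r**2
    volume = (4 / 3) * math.pi * r**3
    return area / volume

# compare a micron-scale droplet with nanoemulsion-scale droplets
for d in (1000.0, 200.0, 50.0):
    print(f"d = {d:6.1f} nm -> SA/V = {sa_to_vol(d):.4f} per nm")
```

Shrinking droplets from 1 micrometer down to 50 nanometers thus raises the specific surface twentyfold, which is why smaller droplets can present proportionally more payload at the droplet interface.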
Over the past few years, Doyle’s lab has been working on lower-energy strategies for making nanoemulsions, which could make the process easier to adapt for large-scale industrial manufacturing.
Detergent-like chemicals called surfactants can speed up the formation of emulsions, but many of the surfactants that have previously been used for creating nanoemulsions are not FDA-approved for use in humans. Doyle and his students chose two surfactants that are uncharged, which makes them less likely to irritate the skin, and are already FDA-approved as food or cosmetic additives. They also added a small amount of polyethylene glycol (PEG), a biocompatible polymer used for drug delivery that helps the solution to form even smaller droplets, down to about 50 nanometers in diameter.
“With this approach, you don’t have to put in much energy at all,” Doyle says. “In fact, a slow stirring bar almost spontaneously creates these super small emulsions.”
Active ingredients can be mixed into the oil phase before the emulsion is formed, so they end up loaded into the droplets of the emulsion.
Once they had developed a low-energy way to create nanoemulsions, using nontoxic ingredients, the researchers added a step that would allow the emulsions to be easily converted to gels when they reach body temperature. They achieved this by incorporating heat-sensitive polymers called poloxamers, or Pluronics, which are already FDA-approved and used in some drugs and cosmetics.
Pluronics contain three “blocks” of polymers: The outer two regions are hydrophilic, while the middle region is slightly hydrophobic. At room temperature, these molecules dissolve in water but do not interact much with the droplets that form the emulsion. However, when heated, the hydrophobic regions attach to the droplets, forcing them to pack together more tightly and creating a jelly-like solid. This process happens within seconds of heating the emulsion to the necessary temperature.
Image caption: The liquid nanoemulsions convert to solid gels (red) almost instantaneously when drops of the emulsion enter warm water.
The researchers found that they could tune the properties of the gels, including the temperature at which the material becomes a gel, by changing the size of the emulsion droplets and the concentration and structure of the Pluronics that they added to the emulsion. They can also alter traits such as elasticity and yield stress, which is a measure of how much force is needed to spread the gel.
Doyle is now exploring ways to incorporate a variety of active pharmaceutical ingredients into this type of gel. Such products could be useful for delivering topical medications to help heal burns or other types of injuries, or could be injected to form a “drug depot” that would solidify inside the body and release drugs over an extended period of time. These droplets could also be made small enough that they could be used in nasal sprays for delivering inhalable drugs, Doyle says.
For cosmetic applications, this approach could be used to create moisturizers or other products that are more shelf-stable and feel smoother on the skin.
The research was funded by L’Oréal.