MIT Latest News

MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.

Exceptional individuals receive 2019 MIT Excellence Awards and Collier Medal

Mon, 03/25/2019 - 12:25pm

Eleven individuals and three teams were award recipients at the 2019 MIT Excellence Awards and Collier Medal ceremony on Friday, March 22.

Among the highest honors awarded to staff at the Institute, the Excellence Awards acknowledge our community’s extraordinary dedication to MIT’s goals, values, and mission, and recognize colleagues who excel in service to us all.

The Collier Medal honors the memory of Officer Sean Collier, who gave his life protecting and serving the MIT community nearly six years ago, and celebrates an individual or group whose actions demonstrate the importance of community.

The 2019 MIT Excellence Award recipients are:

  • Mercedes Balcells-Camps, William H. Kindred, and the MIT Press Diversity and Inclusion Working Group Team in the category of Advancing Inclusion and Global Perspectives;
  • Elizabeth DeRienzo and the International Students Office Team in the category of Bringing out the Best;
  • Debby Carr in the category of Innovative Solutions;
  • Chris Budny, Emily Gallagher, Shikha Sharma, and Claire Walsh in the category of Outstanding Contributor;
  • Gerry O'Toole and Donyatta Small in the category of Serving the Client; and
  • The Summit Farms Solar Power Purchase Agreement Team in the category of Sustaining MIT.

The 2019 Collier Medal recipient is Arman Rezaee, PhD candidate in the Department of Electrical Engineering and Computer Science in the School of Engineering.

Visit the MIT HR website for more information about the award categories, selection process, and recipients.

New approach could boost energy capacity of lithium batteries

Mon, 03/25/2019 - 11:59am

Researchers around the globe have been on a quest for batteries that pack a punch but are smaller and lighter than today’s versions, potentially enabling electric cars to travel further or portable electronics to run for longer without recharging. Now, researchers at MIT and in China say they’ve made a major advance in this area, with a new version of a key component for lithium batteries, the cathode.

The team describes their concept as a “hybrid” cathode, because it combines aspects of two different approaches that have been used before, one to increase the energy output per pound (gravimetric energy density), the other for the energy per liter (volumetric energy density). The synergistic combination, they say, produces a version that provides the benefits of both, and more.

The work is described today in the journal Nature Energy, in a paper by Ju Li, an MIT professor of nuclear science and engineering and of materials science and engineering; Weijiang Xue, an MIT postdoc; and 13 others.

Today’s lithium-ion batteries tend to use cathodes (one of the two electrodes in a battery) made of a transition metal oxide, but batteries with cathodes made of sulfur are considered a promising alternative to reduce weight. Today, the designers of lithium-sulfur batteries face a tradeoff.

The cathodes of such batteries are usually made in one of two ways, known as intercalation types or conversion types. Intercalation types, which use compounds such as lithium cobalt oxide, provide a high volumetric energy density — packing a lot of punch per volume because of their high densities. These cathodes can maintain their structure and dimensions while incorporating lithium atoms into their crystalline structure.

The other cathode approach, called the conversion type, uses sulfur that gets transformed structurally and is even temporarily dissolved in the electrolyte. “Theoretically, these [batteries] have very good gravimetric energy density,” Li says. “But the volumetric density is low,” partly because they tend to require a lot of extra materials, including an excess of electrolyte and carbon, used to provide conductivity.

In their new hybrid system, the researchers have managed to combine the two approaches into a new cathode that incorporates both a type of molybdenum sulfide called Chevrel-phase, and pure sulfur, which together appear to provide the best aspects of both. They used particles of the two materials and compressed them to make the solid cathode. “It is like the primer and TNT in an explosive, one fast-acting, and one with higher energy per weight,” Li says.

Among other advantages, the electrical conductivity of the combined material is relatively high, thus reducing the need for carbon and lowering the overall volume, Li says. Typical sulfur cathodes are made up of 20 to 30 percent carbon, he says, but the new version needs only 10 percent carbon.

The net effect of using the new material is substantial. Today’s commercial lithium-ion batteries can have energy densities of about 250 watt-hours per kilogram and 700 watt-hours per liter, whereas lithium-sulfur batteries top out at about 400 watt-hours per kilogram but only 400 watt-hours per liter. The new hybrid, even before any optimization, already reaches more than 360 watt-hours per kilogram and 581 watt-hours per liter, Li says. It can beat both lithium-ion and lithium-sulfur batteries in terms of the combination of these energy densities.
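The tradeoff can be seen directly in those figures: neither incumbent chemistry beats the hybrid on both densities at once. A minimal sketch in Python, using only the round numbers quoted in the article (for illustration, not a performance claim):

```python
# Energy densities quoted in the article (round figures, illustration only).
cells = {
    "lithium-ion":    {"wh_per_kg": 250, "wh_per_l": 700},
    "lithium-sulfur": {"wh_per_kg": 400, "wh_per_l": 400},
    "hybrid cathode": {"wh_per_kg": 360, "wh_per_l": 581},
}

def dominates(a, b):
    """True if cell a matches or beats cell b on both densities, strictly on one."""
    ge = a["wh_per_kg"] >= b["wh_per_kg"] and a["wh_per_l"] >= b["wh_per_l"]
    gt = a["wh_per_kg"] > b["wh_per_kg"] or a["wh_per_l"] > b["wh_per_l"]
    return ge and gt

hybrid = cells["hybrid cathode"]
for name, cell in cells.items():
    if name != "hybrid cathode":
        print(name, "dominates hybrid:", dominates(cell, hybrid))
# Lithium-ion wins on volume, lithium-sulfur on weight, but neither wins on both,
# which is the hybrid's selling point.
```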

With further work, he says, “we think we can get to 400 watt-hours per kilogram and 700 watt-hours per liter,” with that latter figure equaling that of lithium-ion. Already, the team has gone a step further than many laboratory experiments aimed at developing a large-scale battery prototype: Instead of testing small coin cells with capacities of only several milliamp-hours, they have produced a three-layer pouch cell (a standard subunit in batteries for products such as electric vehicles) with a capacity of more than 1,000 milliamp-hours. This is comparable to some commercial batteries, indicating that the new device does match its predicted characteristics.

So far, the new cell can’t quite live up to the longevity of lithium-ion batteries in terms of the number of charge-discharge cycles it can go through before losing too much power to be useful. But that limitation is “not the cathode’s problem”; it has to do with the overall cell design, and “we’re working on that,” Li says. Even in its present early form, he says, “this may be useful for some niche applications, like a drone with long range,” where both weight and volume matter more than longevity.

“I think this is a new arena for research,” Li says.

The work was supported by the Samsung Advanced Institute of Technology, the National Key Technologies R&D Program of China, the National Science Foundation of China, and MIT’s Department of Materials Science and Engineering. The team also included professor Jing Kong and others at MIT, as well as researchers at the Chinese Academy of Sciences in Beijing, the Songshan Lake Materials Laboratory in Guangdong, China, the Samsung Advanced Institute of Technology America in Burlington, Massachusetts, and Tongji University in Shanghai.

3 Questions: Why are student-athletes amateurs?

Sun, 03/24/2019 - 11:59pm

Debate about the unpaid status of NCAA athletes has surged in the last decade — and did so again last month when the best player in men’s college basketball, Zion Williamson, got injured in a high-profile game. Meanwhile, graduate student unionization drives frequently raise the same question: Aren’t some students also workers creating value for universities? And how did we come to regard student-athletes, say, as amateurs in the first place?

Jennifer Light, the Bern Dibner Professor in the History of Science and Technology and a professor of urban studies and planning, has just published an article in the Harvard Educational Review on the history of this idea that students are not part of the labor force. She places its origins in the 1890-1930 movement to expand public schooling, which promoted schools as alternatives to child labor and put them forth as “protected” places for young people to focus on future-oriented training. MIT News talked to Light about her research. This interview has been edited for length.

Q: How did you become interested in the topic of value-producing students, and the question of whether or not they’re fairly compensated?

A: Previously, I taught at Northwestern University, where I encountered many student-athletes because two of my classes covered some sports history. On more than one occasion, someone raised the question, “Why are we not getting compensated when we bring in so much money [for the university]?” Or I’d hear anecdotally about how a video game company came to scan their bodies for a game that was “cool,” but also not something they got paid for.

That got me thinking about unpaid labor. Of course, student-athletes receive scholarships, but those are quite limited when compared with the compensation they’d get playing outside of school. So I went looking for the origins of the idea that students are not part of the labor force. As it turns out, this idea dates to the emergence of mass schooling in the United States. As it also turns out, there is a long history of schools profiting from student activities — and not just sports.

Q: The “alternative history” of students you describe primarily occurs from about 1890 to 1930, with the movement to make public education available for everyone. What happened in this time period that was so important, in this regard?

A: Public education became popular, along with compulsory-schooling legislation, largely because of the industrial economy. The spread of schools was part of a national effort to train children for future industrial jobs and reduce child labor. And at some level, yes, when kids went to school, they were protected from going into factories or coal mines.

On the other hand, because so many public schools needed to get off the ground at the same time and local governments did not have adequate resources, educators assigned pupils to build and operate their schools: making desks and lockers, building playground equipment and gyms, keeping financial records, running the lunch room, everything from ordering supplies to cooking and serving the meal. Kids repaired school plumbing and heating systems, did health inspections, and tracked down truants.

This was celebrated as the cutting-edge curriculum, the "new education" for the industrial age. John Dewey said when you bring the school close to life, that motivates students’ learning. Because these things were done for educational purposes and no money exchanged hands, they were not considered "work." Of course, if kids did the same tasks in the “real world,” they would be paid. We still use this language of school versus the real world today. So this mindset originated in public schools and only later carried over into education for older adolescents, which was less common, particularly before World War II.

Q: How and when did the idea of the “protected” student make the leap from public schools to universities?

A: In the American mindset, until about 1930, you were a kid until you were between about 14 and 16, because in most places high school was not compulsory; education was compulsory until the eighth grade. And people were fighting to change this, but there were plenty of late teenagers in the work force.

Mass unemployment during the Great Depression was a catalyst for extending this protective period to older adolescents, through their early 20s. [The thinking was] that adults should be top priority for available jobs. So although increasingly specialized jobs were a contributing factor, the desire not to compete with adults was a major force behind the expansion of training for this age group — in high schools, community colleges, universities, and specialized programs such as those sponsored by the National Youth Administration. As with the curriculum for younger pupils, institutional maintenance was a feature of these programs. 

In recent years this sort of routine economic activity inside schools has declined but not disappeared. Today's controversies around student-athletes and teaching assistants stem in part from the century-old assumption that students by definition are cultivating their human capital and deferring economic participation until they graduate to the “real world.” What I’m trying to show is, that’s always been a fantasy. Of course when students go to school, they get an education. But they could also be producing value for their institutions.

Developing tech for, and with, people with disabilities

Fri, 03/22/2019 - 1:50pm

Lora Brugnaro says to think of her like a Weeble toy that constantly wobbles then falls down. She has cerebral palsy, which severely impacts her balance, and for years she has used a walker to help her stay upright while moving around. Unfortunately, she has found that walkers available on the market are cheap, unstable, and prone to flipping on rough surfaces, leaving her sprawled out on the floor of an MBTA station or in the middle of the street. She had even started considering using a wheelchair to avoid such situations.

"I have felt for a very long time that the daily choice I made between safety and living with the freedom to move was an unnecessary choice predicated on poor design," she says.

Brugnaro was one of the co-designers at the sixth annual Assistive Technologies Hackathon, or ATHack. The event pairs teams of students, most but not all of whom study at MIT, with people from the Boston and Cambridge area living with disabilities.

Team Lora spent the hackathon working to create a more stable walker. Team member Zoe Levitt walked with Brugnaro through a typical journey she takes from home to work and back again. Levitt observed Brugnaro's challenges and took videos to share with the rest of the team, supplementing Brugnaro's feedback.

"The collaboration was the most meaningful part of the event for me," Brugnaro says. "I’ve spoken to many people about my struggles; nobody ever offered solutions or suggested there might be any. It’s depressing and demeaning when companies ignore your feedback, and medical professionals seem satisfied with the standard options."

For her part, Levitt found Brugnaro's engaged collaboration instrumental in designing a product that worked. "Lora is super-awesome; she had a really clear idea of a problem she needed help with and was really eager to hear our ideas," she says. "Actually, after the hackathon … [Brugnaro] took the walker home, and on Monday I got this email with about 15 bullet points about what worked and what didn't work, and what could be improved!"

ATHack was founded in 2014 by MIT undergraduates Jaya Narain, Ishwarya Ananthabhotla, and Abigail Klein. They had previously worked on technology projects that involved the eventual users of the technology at the design stage, and they had found the experience meaningful and valuable. They thought that other MIT students could benefit from a similar experience, and so created ATHack to bring together community members living with disabilities and student designers.

"We wanted to create a way for a lot of people at MIT to become exposed to assistive technology through the co-design model," Narain says.

Unlike much of the research and development that takes place at MIT, the solutions created by the ATHack designers are often very straightforward and even simple. To Ananthabhotla, that simplicity is a marker of success.

"Unlike most other hackathons, this event is really not about building the best, 'coolest' tech," she explains. "It is about building a relationship with a community member and building the design skills needed to identify a need, understand a context, and ideate iteratively." 

Narain and Ananthabhotla organized MIT ATHack 2019 with Hosea Siu ’14, SM ’15, PhD ’18 and student volunteers Tareq El Dandachi, Imane Bouzit, Sally Beiruti, and Samuel Mendez. Together, they set up a Meet the Co-Designers dinner on Feb. 11, where groups of students were paired with co-designers on the basis of skills and interests.

The students had a few weeks to collaborate with their co-designer, brainstorm, and request specialized materials before the hackathon at the MIT Lincoln Laboratory Beaver Works Center on March 2. Teams were also allowed to start building before the hackathon, if they chose to do so.

Alex Rosenberg, another ATHack co-designer, uses a wheelchair, has limited arm strength, and is unable to use his fingers.

"Having just become disabled, the most difficult thing is to find time for the problem solving of everyday life problems," Rosenberg says. "The hospitals, the therapists, and family and friends take care of the major things, but picking up a cup or throwing a ball to your kids are deemed by everybody else as not important enough to spend precious time figuring out how to make easier. The hackathon is so valuable because it makes space for that time."

Team Alex created a system to launch and catch a ball so that Alex could play with his two sons. They were particularly focused on making it simple enough for Alex to use it without the aid of another adult. The team came in second in the co-designer collaboration category, but more importantly, Alex has already been able to make use of the technology.

"The product we created at the hackathon as a team has already brought so much joy to my relationship with my sons," he says.

Sara Falcone was part of Team Reese. After being introduced to Reese at the Meet the Co-Designers dinner, Falcone and her teammates at ATHack started designing a robust neck support brace. Reese, a teenager with cerebral palsy, has difficulty controlling his head motion, making it hard for him to use eye-tracking-based communication tools, watch TV, see the board during classes, and drive his chair. Team Reese created a pneumatic brace, which uses pressurized air to keep his head stable and lets him and his family control how much support he has at any given time.

"Reese was an awesome co-designer and a great person to meet; I learned a lot from him," Falcone says. "Since he is mostly nonverbal, I was initially concerned we wouldn't be able to make something that really worked for him, but he was able to fully communicate with few words. Reese has a contagious smile and laughed at all our failed first attempts, which really made us feel comfortable trying a bunch of different, goofy, often terrible ideas with him, and made for a fun day."

Winning teams were selected in four categories: usability, co-designer collaboration, technical innovation, and documentation. Team Reese came in first in the technical innovation category for their pneumatic neck brace. But Falcone isn't satisfied.

"I'm excited to keep working on our solution for Reese to make it a polished, durable tool for him," she says. "We ended the hackathon with a pretty solid direction, but the prototype was made in less than 12 hours, so there's a number of things we'd like to improve to make it better for Reese."

The winner in the usability category was Team Sara, who developed a portable bidet that would allow their co-designer to use the bathroom at work and in other public places. They published an Instructable on their product, so that other people can build one too.

Team Phil won the documentation category with a low-cost modular stander to aid people with limited lower-body control. They published an entire website, with Ikea-like assembly instructions for the modular stander they named "PhilGood."

Finally, the co-designer collaboration category was won by Team Lora. Brugnaro used her improved walker on the way home from the hackathon, and has continued to use it every day since. She says she hopes to eventually bring the design to other people with mobility issues.

"I no longer feel like a second-class citizen with a crappily [designed], cheap rollator that breaks apart every six months," she says. "I finally have my own personal vehicle, with a brake system that actually brakes and a wheel system that provides 100 percent stability."

Cameron Taylor, one of the designers in Team Sara, sums up why ATHack is so important.

"Technologies are designed for those who fit standard parameters, leaving people who have unique physiologies technologically behind," he says. "From this economic perspective, having a unique physiology doesn’t make you disabled, it makes your technology disabled. Because the designers at ATHack do not come to the hack driven by economics, the hackathon breaks this cycle and allows technologies to be developed for people on an individual level."

Brugnaro puts it even more simply: "Without this hackathon, it would have been impossible to work with people who understood the science behind my problem and proved that there is a solution. Two weeks, four bright students, and one user changed my life."

Model learns how individual amino acids determine protein function

Fri, 03/22/2019 - 1:46pm

A machine-learning model from MIT researchers computationally breaks down how segments of amino acid chains determine a protein’s function, which could help researchers design and test new proteins for drug development or biological research. 

Proteins are linear chains of amino acids, connected by peptide bonds, that fold into exceedingly complex three-dimensional structures, depending on the sequence and physical interactions within the chain. That structure, in turn, determines the protein’s biological function. Knowing a protein’s 3-D structure, therefore, is valuable for, say, predicting how proteins may respond to certain drugs.

However, despite decades of research and the development of multiple imaging techniques, we know only a very small fraction of possible protein structures — tens of thousands out of millions. Researchers are beginning to use machine-learning models to predict protein structures based on their amino acid sequences, which could enable the discovery of new protein structures. But this is challenging, as diverse amino acid sequences can form very similar structures. And there aren’t many structures on which to train the models.

In a paper being presented at the International Conference on Learning Representations in May, the MIT researchers develop a method for “learning” easily computable representations of each amino acid position in a protein sequence, initially using 3-D protein structure as a training guide. Researchers can then use those representations as inputs that help machine-learning models predict the functions of individual amino acid segments — without ever again needing any data on the protein’s structure.

In the future, the model could be used for improved protein engineering, by giving researchers a chance to better zero in on and modify specific amino acid segments. The model might even steer researchers away from protein structure prediction altogether.

“I want to marginalize structure,” says first author Tristan Bepler, a graduate student in the Computation and Biology group in the Computer Science and Artificial Intelligence Laboratory (CSAIL). “We want to know what proteins do, and knowing structure is important for that. But can we predict the function of a protein given only its amino acid sequence? The motivation is to move away from specifically predicting structures, and move toward [finding] how amino acid sequences relate to function.”

Joining Bepler is co-author Bonnie Berger, the Simons Professor of Mathematics at MIT with a joint faculty position in the Department of Electrical Engineering and Computer Science, and head of the Computation and Biology group.

Learning from structure

Rather than predicting structure directly — as traditional models attempt — the researchers encoded predicted protein structural information directly into representations. To do so, they use known structural similarities of proteins to supervise their model, as the model learns the functions of specific amino acids.

They trained their model on about 22,000 proteins from the Structural Classification of Proteins (SCOP) database, which contains thousands of proteins organized into classes by similarities of structures and amino acid sequences. For each pair of proteins, they calculated a real similarity score, meaning how close they are in structure, based on their SCOP class.

The researchers then fed their model random pairs of protein structures and their amino acid sequences, which were converted into numerical representations called embeddings by an encoder. In natural language processing, embeddings are essentially arrays of several hundred numbers that together represent a letter or word in a sentence. The more similar two embeddings are, the more likely the letters or words are to appear together in a sentence.

In the researchers’ work, each embedding in the pair contains information about how similar each amino acid sequence is to the other. The model aligns the two embeddings and calculates a similarity score to then predict how similar their 3-D structures will be. Then, the model compares its predicted similarity score with the real SCOP similarity score for their structure, and sends a feedback signal to the encoder.
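The similarity-prediction step can be sketched abstractly. The snippet below is not the authors' architecture — it is a simplified illustration of the idea, with all function names hypothetical: pool each sequence's per-position embeddings into a single vector, score the pair by cosine similarity, and use the gap between that prediction and the known SCOP score as the training signal.

```python
import math

def mean_pool(embeddings):
    """Collapse per-position embeddings (a list of equal-length vectors)
    into one vector by averaging each dimension."""
    n = len(embeddings)
    return [sum(vec[i] for vec in embeddings) / n
            for i in range(len(embeddings[0]))]

def cosine(u, v):
    """Cosine similarity between two vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    return dot / (math.hypot(*u) * math.hypot(*v))

def similarity_loss(emb_a, emb_b, scop_score):
    """Squared error between the predicted structural similarity of two
    embedded sequences and the 'real' similarity derived from SCOP classes."""
    pred = cosine(mean_pool(emb_a), mean_pool(emb_b))
    return (pred - scop_score) ** 2
```

In training, this loss would be backpropagated through the encoder so that sequences with similar structures end up with similar embeddings.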

Simultaneously, the model predicts a “contact map” for each embedding, which basically says how far away each amino acid is from all the others in the protein’s predicted 3-D structure — essentially, do they make contact or not? The model also compares its predicted contact map with the known contact map from SCOP, and sends a feedback signal to the encoder. This helps the model learn exactly where each amino acid falls in a protein’s structure, further refining each amino acid’s learned representation.
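Given residue coordinates, computing the "true" contact map is straightforward. A minimal sketch — the 8-angstrom cutoff between residue positions is a common convention in the field, not a detail taken from this paper:

```python
import math

def contact_map(coords, cutoff=8.0):
    """Binary contact map: entry (i, j) is 1 if residues i and j lie within
    `cutoff` angstroms of each other, 0 otherwise (diagonal left as 0)."""
    n = len(coords)
    return [[1 if i != j and math.dist(coords[i], coords[j]) < cutoff else 0
             for j in range(n)]
            for i in range(n)]

# Three residues along a line: only the first two are close enough to touch.
cmap = contact_map([(0.0, 0.0, 0.0), (3.0, 0.0, 0.0), (20.0, 0.0, 0.0)])
print(cmap)  # [[0, 1, 0], [1, 0, 0], [0, 0, 0]]
```

The model's predicted map is compared against this kind of ground-truth matrix to generate the feedback signal.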

Basically, the researchers train their model by asking it to predict if paired sequence embeddings will or won’t share a similar SCOP protein structure. If the model’s predicted score is close to the real score, it knows it’s on the right track; if not, it adjusts.

Protein design

In the end, for one inputted amino acid chain, the model will produce one numerical representation, or embedding, for each amino acid position in a 3-D structure. Machine-learning models can then use those sequence embeddings to accurately predict each amino acid’s function based on its predicted 3-D structural “context” — its position and contact with other amino acids.

For instance, the researchers used the model to predict which segments, if any, pass through the cell membrane. Given only an amino acid sequence, the researchers’ model predicted all transmembrane and non-transmembrane segments more accurately than state-of-the-art models.

“The work by Bepler and Berger is a significant advance in representing the local structural properties of a protein sequence,” says Serafim Batzoglou, a professor of computer science at Stanford University. “The representation is learned using state-of-the-art deep learning methods, which have made major strides in protein structure prediction in systems such as RaptorX and AlphaFold. This work has ultimate application in human health and pharmacogenomics, as it facilitates detection of deleterious mutations that disrupt protein structures.”

Next, the researchers aim to apply the model to more prediction tasks, such as figuring out which sequence segments bind to small molecules, which is critical for drug development. They’re also working on using the model for protein design. Using their sequence embeddings, they can predict, say, at what color wavelengths a protein will fluoresce.

“Our model allows us to transfer information from known protein structures to sequences with unknown structure. Using our embeddings as features, we can better predict function and enable more efficient data-driven protein design,” Bepler says. “At a high level, that type of protein engineering is the goal.”

Berger adds: “Our machine learning models thus enable us to learn the ‘language’ of protein folding — one of the original ‘Holy Grail’ problems — from a relatively small number of known structures.”

The chemist and the stage

Fri, 03/22/2019 - 1:20pm

Audrey Pillsbury has many different identities on campus: She is a musician and composer, she rows for the women's openweight crew, and she studies chemistry (Course 5). Now she is exploring her identity as a second-generation Asian-American through her first collaborative musical, "The Jade Bracelet."

Encouraged by a group of friends and professors, Pillsbury channeled her lifelong passion for music and dance to tell a story about cultures, family dynamics, and interracial relationships that are part of Pillsbury’s reality, being half Chinese and half Caucasian.

"The Jade Bracelet" is about members of the Wong family, who immigrate to America to escape China’s one-child policy. Later, the Wong sisters, Jaden and Amy, are shown in high school dealing with stereotypes from both Asian and American cultures, navigating interracial dating conflicts, and trying to balance different, and sometimes conflicting, identities. Pillsbury was able to connect with students from Harvard University, Wellesley College, the University of Massachusetts at Boston, Berklee College of Music, and other area schools who had experienced similar issues growing up in multicultural families.

“I want to feel close to my Asian roots, but what does that mean?” says Pillsbury. “I’ve never been to Asia. I love Chinese food, but what does that mean? I think those are the moments I had with my own mother, trying to figure out her past and to see things from her perspective.”

As an MIT Burchard Scholar, Pillsbury discovered many options for sharing her experiences through the humanities, arts, and social sciences, which led her to write "The Jade Bracelet" libretto. “Being at MIT has given me access to a lot of resources. I have this platform where people will sort of care about what I’ve written or what I’ve done. Fellow students want to see what their fellow students came up with,” says Pillsbury.

"The Jade Bracelet" is more than just songs and dialogue for Pillsbury. “It’s really about putting all of these people’s experiences together, the process we’ve had in making it come together, and this journey,” she says. “Being Asian is a really important part of who I am. No one should be color blind. We should all see each other for the cultures and backgrounds that are important to us.”

Being a woman in a STEM field, Pillsbury sees art as a release, and she encourages MIT students to explore more within the arts field by getting creative on campus and telling their own stories.

“We have so many creative people here. I know it’s hard because at MIT we have to carve time out of our day,” she says. “You have to make the time to do it but we just have so many creative people and you have all the resources here.”

Pillsbury wants to continue writing music while she begins her full-time position at Raytheon Space and Airborne Systems this fall. In the meantime, the MIT Theater Guild put on two staged readings of "The Jade Bracelet" this month in Kresge Auditorium’s Little Theater.

What's in a face?

Fri, 03/22/2019 - 1:00pm

Our brains are incredibly good at processing faces, and even have specific regions specialized for this function. But what face dimensions are we observing? Do we observe general properties first, then look at the details? Or are dimensions such as gender or other identity details decoded interdependently? In a study published in Nature Communications, neuroscientists at the McGovern Institute for Brain Research measured the response of the brain to faces in real-time, and found that the brain first decodes properties such as gender and age before drilling down to the specific identity of the face itself.

While functional magnetic resonance imaging (fMRI) has revealed an incredible level of detail about which regions of the brain respond to faces, the technology is less effective at telling us when these brain regions become activated. This is because fMRI measures brain activity by detecting changes in blood flow; when neurons become active, local blood flow to those brain regions increases. However, fMRI works too slowly to keep up with the brain’s millisecond-by-millisecond dynamics. Enter magnetoencephalography (MEG), a technique developed by MIT physicist David Cohen that detects the minuscule fluctuations in magnetic fields that occur with the electrical activity of neurons. This allows better temporal resolution of neural activity.

McGovern Institute investigator Nancy Kanwisher, the Walter A. Rosenblith Professor in the MIT Department of Brain and Cognitive Sciences, and postdoc Katharina Dobs, along with their co-authors Leyla Isik and Dimitrios Pantazis, selected this temporally precise approach to measure the time it takes for the brain to respond to different dimensional features of faces.

“From a brief glimpse of a face, we quickly extract all this rich multidimensional information about a person, such as their sex, age, and identity,” explains Dobs. “I wanted to understand how the brain accomplishes this impressive feat, and what the neural mechanisms are that underlie this effect, but no one had measured the time scales of responses to these features in the same study.”

Previous studies have shown that people with prosopagnosia, a condition characterized by the inability to identify familiar faces, have no trouble determining gender, suggesting these features may be independent. “But examining when the brain recognizes gender and identity, and whether these are interdependent features, is less clear,” explains Dobs.

By recording the brain activity of subjects in the MEG machine, Dobs and her co-authors found that the brain responds to coarse features, such as the gender of a face, much faster than the identity of the face itself. Their data showed that, in as little as 60-70 milliseconds, the brain begins to decode the age and gender of a person. Roughly 30 milliseconds later — at around 90 milliseconds — the brain begins processing the identity of the face.
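The timing analysis described above amounts to training a decoder at each millisecond of the recording and noting when it first beats chance. A minimal sketch of that sliding-decoder idea, on synthetic data (the nearest-centroid classifier, all numbers, and the simulated "MEG" signal are illustrative stand-ins, not the study's actual methods or results):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for MEG data: trials x sensors x timepoints,
# with a "gender" signal appearing on some sensors from ~65 ms on.
n_trials, n_sensors, n_times = 200, 32, 150   # 1 sample per ms
X = rng.normal(size=(n_trials, n_sensors, n_times))
gender = rng.integers(0, 2, n_trials)
X[:, :8, 65:] += gender[:, None, None] * 0.8  # signal starts at 65 ms

def decode_onset(X, y, threshold=0.75):
    """Return the first timepoint at which a held-out nearest-centroid
    classifier decodes label y above `threshold` accuracy."""
    half = len(y) // 2
    train, test = slice(0, half), slice(half, None)
    for t in range(X.shape[2]):
        # Class centroids at this timepoint, from the training half
        c0 = X[train][y[train] == 0, :, t].mean(axis=0)
        c1 = X[train][y[train] == 1, :, t].mean(axis=0)
        # Classify held-out trials by nearest centroid
        d0 = np.linalg.norm(X[test][:, :, t] - c0, axis=1)
        d1 = np.linalg.norm(X[test][:, :, t] - c1, axis=1)
        acc = np.mean((d1 < d0) == y[test].astype(bool))
        if acc > threshold:
            return t
    return None

print("gender decodable from ~", decode_onset(X, gender), "ms")
```

Running the same procedure with identity labels, and comparing onsets, is the logic behind the 60-70 ms versus 90 ms comparison in the study.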

After establishing a paradigm for measuring responses to these face dimensions, the authors then decided to test the effect of familiarity. It’s generally understood that the brain processes information about “familiar faces” more robustly than unfamiliar faces. For example, our brains are adept at recognizing actress Scarlett Johansson across multiple photographs, even if her hairstyle is different in each picture. Our brains have a much harder time, however, recognizing two images of the same person if the face is unfamiliar.

“Actually, for unfamiliar faces the brain is easily fooled,” Dobs explains. “Variations in images, shadows, changes in hair color or style, quickly lead us to think we are looking at a different person. Conversely, we have no problem if a familiar face is in shadow, or a friend changes their hair style. But we didn’t know why familiar face perception is much more robust, whether this is due to better feed-forward processing, or based on later memory retrieval.”

To test the effect of familiarity, the authors measured brain responses while the subjects viewed familiar faces (American celebrities) and unfamiliar faces (German celebrities) in the MEG. Surprisingly, they found that subjects recognize gender more quickly in familiar faces than in unfamiliar faces. For example, our brains decode that actress Scarlett Johansson presents as female before we even realize she is Scarlett Johansson. For the less familiar German actress Karoline Herfurth, our brains unpack the same information less well.

Dobs and co-authors argue that the better gender and identity recognition is not “top-down” for familiar faces, meaning that improved responses to familiar faces are not about retrieval of information from memory, but rather reflect a feed-forward mechanism. They found that the brain responds to facial familiarity at a much slower time scale (400 milliseconds) than it responds to gender, suggesting that the brain may be remembering associations related to the face (such as placing Johansson in the "Lost in Translation" movie) in that longer timeframe.

This is good news for artificial intelligence. “We are interested in whether feed-forward deep learning systems can learn faces using similar mechanisms,” explains Dobs, “and help us to understand how the brain can process faces it has seen before in the absence of pulling on memory.”

When it comes to immediate next steps, Dobs would like to explore where in the brain these facial dimensions are extracted, how prior experience affects the general processing of objects, and whether computational models of face processing can capture these complex human characteristics.

Energy monitor can find electrical failures before they happen

Thu, 03/21/2019 - 11:59pm

A new system devised by researchers at MIT can monitor the behavior of all electric devices within a building, ship, or factory, determining which ones are in use at any given time and whether any are showing signs of an imminent failure. When tested on a Coast Guard cutter, the system pinpointed a motor with burnt-out wiring that could have led to a serious onboard fire.

The new sensor, whose readings can be monitored on an easy-to-use graphic display called a NILM (non-intrusive load monitoring) dashboard, is described in the March issue of IEEE Transactions on Industrial Informatics, in a paper by MIT professor of electrical engineering Steven Leeb, recent graduate Andre Aboulian MS ’18, and seven others at MIT, the U.S. Coast Guard, and the U.S. Naval Academy. A second paper will appear in the April issue of Marine Technology, the publication of the Society of Naval Architects and Marine Engineers.

The system uses a sensor that simply is attached to the outside of an electrical wire at a single point, without requiring any cutting or splicing of wires. From that single point, it can sense the flow of current in the adjacent wire, and detect the distinctive “signatures” of each motor, pump, or piece of equipment in the circuit by analyzing tiny, unique fluctuations in the voltage and current whenever a device switches on or off. The system can also be used to monitor energy usage, to identify possible efficiency improvements and determine when and where devices are in use or sitting idle.
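The signature-matching idea can be sketched in a few lines: watch the current level, and when it jumps by roughly a known device's draw, log an on/off event. This toy version operates on a simulated current envelope; the device table, thresholds, and step-detection logic are all hypothetical stand-ins, not the MIT team's actual algorithm.

```python
import numpy as np

# Hypothetical per-device current draws, in amps
SIGNATURES = {"pump": 4.0, "heater": 9.0, "fan": 1.5}

def rms(window):
    return float(np.sqrt(np.mean(np.square(window))))

def detect_events(current, fs=1000, win=100, tol=0.5):
    """Slide a window over the current waveform; report (time_s, device,
    on/off) whenever the RMS level jumps by roughly a known draw."""
    levels = [rms(current[i:i + win]) for i in range(0, len(current) - win, win)]
    events = []
    for k in range(1, len(levels)):
        delta = levels[k] - levels[k - 1]
        for device, amps in SIGNATURES.items():
            if abs(abs(delta) - amps) < tol:
                events.append((k * win / fs, device, "on" if delta > 0 else "off"))
    return events

# Simulated envelope of aggregate current draw (amps), sampled at 1 kHz:
# a fan running throughout, and a heater switching on at t = 0.5 s.
t = np.arange(0, 1.0, 1 / 1000)
current = np.full_like(t, 1.5) + 0.05 * np.random.default_rng(1).normal(size=t.size)
current[t >= 0.5] += 9.0
events = detect_events(current)
print(events)
```

The real system works on the raw voltage and current waveforms and uses far richer per-device signatures than a single step size, but the core pattern is the same: attribute changes in the aggregate signal to individual loads.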

The technology is especially well-suited for relatively small, contained electrical systems such as those serving a small ship, building, or factory with a limited number of devices to monitor. In a series of tests on a Coast Guard cutter based in Boston, the system provided a dramatic demonstration last year.

About 20 different motors and devices were being tracked by a single dashboard, connected to two different sensors, on the cutter USCGC Spencer. The sensors, which in this case had a hard-wired connection, showed that an anomalous amount of power was being drawn by a component of the ship’s main diesel engines called a jacket water heater. At that point, Leeb says, crewmembers were skeptical about the reading but went to check it anyway. The heaters are hidden under protective metal covers, but as soon as the cover was removed from the suspect device, smoke came pouring out, and severe corrosion and broken insulation were clearly revealed.

“The ship is complicated,” Leeb says. “It’s magnificently run and maintained, but nobody is going to be able to spot everything.”

Lt. Col. Nicholas Galanti, engineer officer on the cutter, says “the advance warning from NILM enabled Spencer to procure and replace these heaters during our in-port maintenance period, and deploy with a fully mission-capable jacket water system. Furthermore, NILM detected a serious shock hazard and may have prevented a class Charlie [electrical] fire in our engine room.”

The system is designed to be easy to use with little training. The computer dashboard features dials for each device being monitored, with needles that will stay in the green zone when things are normal, but swing into the yellow or red zone when a problem is spotted.

Detecting anomalies before they become serious hazards is the dashboard’s primary task, but Leeb points out that it can also perform other useful functions. By constantly monitoring which devices are being used at what times, it could enable energy audits to find devices that were turned on unnecessarily when nobody was using them, or spot less-efficient motors that are drawing more current than their similar counterparts. It could also help ensure that proper maintenance and inspection procedures are being followed, by showing whether or not a device has been activated as scheduled for a given test.

“It’s a three-legged stool,” Leeb says. The system allows for “energy scorekeeping, activity tracking, and condition-based monitoring.” But it’s that last capability that could be crucial, “especially for people with mission-critical systems,” he says. In addition to the Coast Guard and the Navy, he says, that includes companies such as oil producers or chemical manufacturers, who need to monitor factories and field sites that include flammable and hazardous materials and thus require wide safety margins in their operation.

One important characteristic of the system that is attractive for both military and industrial applications, Leeb says, is that all of its computation and analysis can be done locally, within the system itself, and does not require an internet connection at all, so the system can be physically and electronically isolated and thus highly resistant to any outside tampering or data theft.

Although for testing purposes the team has installed both hard-wired and noncontact versions of the monitoring system — both types were installed in different parts of the Coast Guard cutter — the tests have shown that the noncontact version could likely produce sufficient information, making the installation process much simpler. While the anomaly they found on that cutter came from the wired version, Leeb says, “if the noncontact version was installed” in that part of the ship, “we would see almost the same thing.”

The research team also included graduate students Daisy Green, Jennifer Switzer, Thomas Kane, and Peer Lindahl at MIT; Gregory Bredariol of the U.S. Coast Guard; and John Donnal of the U.S. Naval Academy in Annapolis, Maryland. The research was funded by the U.S. Navy’s Office of Naval Research NEPTUNE project, through the MIT Energy Initiative.

Locally grown

Thu, 03/21/2019 - 11:59pm

Dasjon Jordan’s classmates call him the mayor of New Orleans.

And Jordan, a second-year master’s student in the Department of Urban Studies and Planning (DUSP), has indeed brought deep NOLA roots to MIT. His family has lived in the same few city blocks, on America Street in eastern New Orleans, for generations; Jordan even attended the same neighborhood elementary school that his mother did — and that his grandmother did before her.

That all changed in 2005, when Jordan was 12 years old: His family lost their home due to flooding from Hurricane Katrina. Leaving their multigenerational urban neighborhood, his immediate family joined a vast, post-Katrina suburban migration, ultimately settling about 45 minutes outside of the city in LaPlace, Louisiana.

One of the starkest changes that Jordan recalls was the separation of commerce and community in the suburbs. Instead of locally owned corner stores providing the pulse of the neighborhood, the commercial activity was fragmented along the nearby highway, and local culture was hard to find. As the years went on, Jordan felt this loss of place more acutely with repeated visits back to his grandparents’ home. Now, nearly 15 years later, his family elementary school is a plot of grass.

Perhaps due to this uprooting, Jordan approaches urban planning with an affinity for local solutions: “We want to have place-based solutions to economic disparities and racial divisions,” he says. “There’s an importance to physical space: connecting with local environments, buildings, cultures, and hearing the stories of people in their neighborhoods.”

His early experiences also solidified his conviction that local businesses and cultural spaces provide essential context for empowering underserved communities. “There’s evidence to say that when you support small business owners, they hire locally, invest in their communities, and promote a sense of cultural identity,” he says.

Responding to the call

In October 2015, as a senior architecture student at Louisiana State University, Jordan attended the Black in Design Conference in Cambridge, Massachusetts, the event that first placed him on the path to MIT. “Before I got off the plane, I could feel that I was going to spend a major part of my life here,” Jordan says. “The city of Boston called to me.”

At that conference, he met DUSP alumni who encouraged him to apply to MIT, and he felt sure that urban planning was right for him. However, that spring he learned that he hadn’t been accepted into MIT — or any of the other programs he applied to.

With no confirmed job after graduating, Jordan initially thought his plans were derailed. But a month later, an MIT alum he met at the design conference connected him with another DUSP graduate who started a community development nonprofit organization in the heart of New Orleans.

The nonprofit, Broad Community Connections, and many of the businesses it supports stand on and around Bayou Road, the oldest road in New Orleans. Since its settlement, the area has been a trading post for Native Americans, free people of color, and local business owners. Uncovering these layers of history, culture, and place reconnected Jordan to his first home.

In his new position, Jordan was able to provide holistic support for small business owners, and he began to see what a critical nexus small businesses are for revitalizing a community: “[Small business owners] are the most engaged citizens. They’re local employers, community leaders, mentors, and culture-bearers. Their personalities and knowledge of neighborhood history give their cities distinct character. If small business owners aren’t supported, a city feels generic.”

After working for a year, Jordan reapplied to graduate school with a sharpened sense of purpose. This time, he was admitted to MIT and began his program in September 2017.

Building bridges

After several core classes in planning theory, microeconomics, and geographic information system mapping, DUSP students are encouraged to pursue electives in their area of interest. For Jordan, this has meant taking courses focused on economic development in urban communities, in the Comparative Media Studies program, and even a few courses at Harvard University.

Jordan’s most transformative classes, 11.437 (Financing Economic Development) and 11.360 (Community Growth and Land Use Planning), took him into Boston-area communities like Lynn and Lawrence, “gateway” cities with large immigrant populations. These areas, often former mill cities suffering from white flight and governmental disinvestment, are considered potential commercial opportunity zones, but working closely with them requires sensitivity to local history and culture. To that end, Jordan considers the primary work of a planner in community development to be partnering with those on the margins of society.

“At its core, the most important city planning questions are: Are we taking the time to observe and listen to the challenges communities are facing? Are we willing to be a bridge and make connections in spaces where we have power to? And how do we create platforms for people’s voices to be heard without being translators? The people most affected by an issue have a better understanding of how to solve it; they sometimes just need our partnership to fix it.”

For Jordan, it’s work that calls back to his nonprofit experience on Bayou Road.

His thesis will focus on how to support entrepreneurs of color in improving their businesses by leveraging their cultural authenticity, innovative marketing strategies, and curated experiences. It’s a place-based solution that Jordan believes can help to address economic disparities and social divisions.

In tandem, he has also launched a startup, Roux, which was awarded funding from DUSP’s startup accelerator DesignX. Roux’s digital platforms aim to connect millennials to the heart of culture in cities across the country. Rather than a standard review-based system, like Yelp, the company will emphasize curated experiences that showcase the intersection of culture, community, and commerce.

Leading by example

Since his arrival in 2017, Jordan has found his own sense of place in Cambridge. Two classmates in particular, Hannah Diaz and Morgan Augillard, have given Jordan a feeling of connectedness here. They often have dinner at Suya Joint in Dudley Square to joke and debrief after class, or go to Muqueca Restaurant for Brazilian feijoada, a bean dish that reminds him of the red beans and rice of his hometown.

Jordan, Diaz, and Augillard are also all members of DUSP’s Students of Color Committee (SCC), which provides a safe space for students of color and, critically, connects prospective students with current students and alumni during the application process through the Peer Application Support Service (PASS) program. Jordan joined SCC his first year and currently sits on its board.

That experience has given him keen perspectives on diversity initiatives at MIT.

“There’s a lot of conversation about diversity and inclusion across MIT, but many students and myself would like to see more of a focus on social justice and equity in the student experience,” Jordan says. “When I saw the MIT and Slavery Initiative, I thought it was great, like, we’re grappling with hard truths. But what’s being done to move forward on the current issues? Because the challenges experienced by the first students of color admitted to MIT are the same issues happening now. We can’t get stuck in a mentality of gradualism.”

In a complementary vein, Jordan also wants students to better understand their power in an institutional setting. “I wish students knew how powerful their individual and collective voices are,” he says. He himself has recently taken steps to actualize this idea, as a recently appointed Office of Graduate Education Fellow focusing on graduate diversity initiatives.

Jordan’s leadership is also being recognized institutionally; he was chosen as this year’s graduate speaker at MIT’s annual MLK celebration in February.

His advice to other students is to embrace MIT — and their power as students within it. “Students come here to a world-leading institution — they shouldn’t forget that they come here as leaders.”

Apptimize helps companies create seamless digital experiences

Thu, 03/21/2019 - 12:04pm

Nancy Hua discovered the power of numbers early on in life. Her aptitude for math earned her a scholarship to MIT and later brought her success as an algorithmic trader, first in Chicago, then New York City.

When her mother was diagnosed with cancer, Hua says she used numbers to avoid her own emotions, finding comfort in long work days that made her feel productive and in control. But when her mother passed away, the finance world she’d committed herself to began to feel hollow.

She began to think deeply about what would make her happy in life, and after much self-reflection, she decided to leave finance and refocus herself around priorities such as building relationships, making an impact, and learning.

The new perspective led Hua to start Apptimize, a company that helps product teams experiment with and test digital features across their customer-facing platforms. The service allows companies to make quick, data-driven decisions about customer engagement to create the best possible digital experience.

Today, the impact of Apptimize can be measured in the hundreds of millions: Companies using Apptimize have around 300 million monthly active users on their apps. Hua says Apptimize is the top-selling testing solution for mobile apps, and now the company is working to help its customers optimize every digital channel.

Charting a path

Hua was born in China and moved to America when she was 4. Hua’s father became a science professor, while her mother worked as a waitress. Both worked long hours to make ends meet as Hua received free lunches at school and came home to an empty house at night.

Hua repaid her parents for their sacrifices by devoting herself to school, using her analytical thinking and affinity for math and science to earn a scholarship to MIT. After graduating in 2007, Hua took a lucrative job as an algorithmic trader in Chicago.

“I just wanted to make as much money as possible, because we never had any when I was growing up,” Hua says.

She excelled at the job and was having fun competing with the other quantitative traders in the financial markets. Indeed, by the numbers, Hua seemed to be living the American dream. Then both of her parents got sick.

Hua’s mother went to the doctor for back pain and was diagnosed with stage 4 lung cancer. Around the same time, her father found out he had colon cancer. Hua’s father would eventually enter remission, but when her mother’s condition worsened, Hua moved her team to New York City to be closer to her. Even as she spent more time with her mother, Hua says she used her work to avoid the shock and grief she felt. When her mother ultimately passed away, it caused Hua to reevaluate her life pursuits.

“[After my mother’s death], I tried to think about what needed to happen for me to feel like I was living a fulfilling life, and I think my notion of that expanded,” Hua says. “So I thought about how much impact I wanted to have and the importance of connecting with people. Starting a company just seemed like the next step.”

Hua left the finance world in 2012. During her time off, she visited some friends from MIT in San Francisco and found inspiration in the community feeling of the startups where they worked. Hua understood that starting a company brought major risks and would require a broader set of skills than anything she’d experienced in finance. Ultimately, she says it was a risk she had to take.

A new chapter

When Hua founded Apptimize in 2013, she says the process for updating mobile apps reminded her of the early 2000s, when software updates were mailed on CDs. While it was common for companies to experiment with different features on their websites, the process of testing and optimizing mobile apps was far more laborious and imprecise.

This resulted in poor user experiences and lost revenue, so Hua partnered with software engineer and Apptimize co-founder Jeremy Orlow to build a tool that gives product teams the ability to perform mobile experiments with the same speed and control as website testing tools.

From the company’s founding in 2013 until October of last year, Hua served as its CEO, often leaning heavily on her MIT network.

“My first executive [hire] was one of my MIT dormmates,” Hua recalls. “MIT kids are just really straightforward and honest and hardworking, so that’s affected our culture and hiring profile a lot. I’m always recruiting from MIT.”

Hua also thinks her academic independence at MIT has improved her confidence.

“At MIT, you get a lot of room to do what you want,” Hua says. “They’re really supportive and ready to give you resources for anything you want to pursue. That’s led to the culture and values we have at Apptimize, and also how I live my life. I like initiating things; I never think anything is impossible. I just think if we work harder and learn more we’ll be able to figure it out.”

That attitude has helped Hua’s team build a seamless testing experience for customers. To get started, companies install Apptimize’s software development kit and use tags and visual editors on Apptimize’s web interface to change variables — no programming required. Companies can also use Apptimize to experiment with different customer flows or test different features during a rollout. Apptimize integrates with popular tracking tools like Google Analytics and offers its own analytics platform.

In October, Apptimize announced an expansion of its platform that gives companies the ability to test and track user activity across mobile, web, streaming, and even in-store channels.

Hua recently moved to chairman of the board at Apptimize, but she’s still actively involved in company operations. Notably, she’s found happiness working on the human side of the company: making new hires, planning company outings, and navigating multiple office moves in search of the right space for her team.

Success in those tasks is more subjective than anything she worked on in finance, but Hua prefers it that way.

“My mother’s death caused me to think about my own death in a way, and to consider how I could live differently,” Hua says. “Then it took me years to make that shift. I realized life is not all about work and money. You have to have other things that make you feel like you’re doing something meaningful.”

Kicking neural network automation into high gear

Thu, 03/21/2019 - 11:48am

A new area in artificial intelligence involves using algorithms to automatically design machine-learning systems known as neural networks, which are more accurate and efficient than those developed by human engineers. But this so-called neural architecture search (NAS) technique is computationally expensive.

A state-of-the-art NAS algorithm recently developed by Google to run on a squad of graphics processing units (GPUs) took 48,000 GPU hours to produce a single convolutional neural network, which is used for image classification and detection tasks. Google has the wherewithal to run hundreds of GPUs and other specialized hardware in parallel, but that’s out of reach for many others.

In a paper being presented at the International Conference on Learning Representations in May, MIT researchers describe an NAS algorithm that can directly learn specialized convolutional neural networks (CNNs) for target hardware platforms — when run on a massive image dataset — in only 200 GPU hours, which could enable far broader use of these types of algorithms.

Resource-strapped researchers and companies could benefit from the time- and cost-saving algorithm, the researchers say. The broad goal is “to democratize AI,” says co-author Song Han, an assistant professor of electrical engineering and computer science and a researcher in the Microsystems Technology Laboratories at MIT. “We want to enable both AI experts and nonexperts to efficiently design neural network architectures with a push-button solution that runs fast on a specific hardware.”

Han adds that such NAS algorithms will never replace human engineers. “The aim is to offload the repetitive and tedious work that comes with designing and refining neural network architectures,” says Han, who is joined on the paper by two researchers in his group, Han Cai and Ligeng Zhu.

“Path-level” binarization and pruning

In their work, the researchers developed ways to delete unnecessary neural network design components, to cut computing times and use only a fraction of hardware memory to run a NAS algorithm. An additional innovation ensures each outputted CNN runs more efficiently on specific hardware platforms — CPUs, GPUs, and mobile devices — than those designed by traditional approaches. In tests, the researchers’ CNNs, measured on a mobile phone, ran 1.8 times faster than traditional gold-standard models of similar accuracy.

A CNN’s architecture consists of layers of computation with adjustable parameters, called “filters,” and the possible connections between those filters. Filters process image pixels in grids of squares — such as 3x3, 5x5, or 7x7 — with each filter covering one square. The filters essentially move across the image and combine all the colors of their covered grid of pixels into a single pixel. Different layers may have different-sized filters, and connect to share data in different ways. The output is a condensed image — from the combined information from all the filters — that can be more easily analyzed by a computer.
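The filter operation described above is an ordinary 2D convolution. A minimal version, using a hypothetical 3x3 averaging filter on a tiny 5x5 "image," might look like:

```python
import numpy as np

def conv2d(image, kernel):
    """Slide `kernel` over `image` (stride 1, no padding); each output
    pixel condenses the patch the filter covers into a single value."""
    kh, kw = kernel.shape
    oh, ow = image.shape[0] - kh + 1, image.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

image = np.arange(25, dtype=float).reshape(5, 5)   # toy 5x5 "image"
kernel = np.full((3, 3), 1 / 9)                    # 3x3 averaging filter
print(conv2d(image, kernel))                       # condensed 3x3 output
```

A real CNN layer learns the kernel values during training and applies many such filters across many channels, but each filter does exactly this sliding-window combination.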

Because the number of possible architectures to choose from — called the “search space” — is so large, applying NAS to create a neural network on massive image datasets is computationally prohibitive. Engineers typically run NAS on smaller proxy datasets and transfer their learned CNN architectures to the target task. This generalization method reduces the model’s accuracy, however. Moreover, the same outputted architecture also is applied to all hardware platforms, which leads to efficiency issues.

The researchers trained and tested their new NAS algorithm on an image classification task directly on the ImageNet dataset, which contains millions of images in a thousand classes. They first created a search space that contains all possible candidate CNN “paths” — meaning how the layers and filters connect to process the data. This gives the NAS algorithm free rein to find an optimal architecture.

This would typically mean all possible paths must be stored in memory, which would exceed GPU memory limits. To address this, the researchers leverage a technique called “path-level binarization,” which stores only one sampled path at a time and saves an order of magnitude in memory consumption. They combine this binarization with “path-level pruning,” a technique that traditionally learns which “neurons” in a neural network can be deleted without affecting the output. Instead of discarding neurons, however, the researchers’ NAS algorithm prunes entire paths, which completely changes the neural network’s architecture.

In training, all paths are initially given the same probability for selection. The algorithm then traces the paths — storing only one at a time — to note the accuracy and loss (a numerical penalty assigned for incorrect predictions) of their outputs. It then adjusts the probabilities of the paths to optimize both accuracy and efficiency. In the end, the algorithm prunes away all the low-probability paths and keeps only the path with the highest probability — which is the final CNN architecture.
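The sample-score-reinforce-prune loop above can be sketched with a toy stand-in (the candidate paths, reward values, and update rule here are all hypothetical, not the authors' implementation):

```python
import math
import random

random.seed(0)

# Candidate "paths" through the search space, each with a selection logit.
# Only the one sampled path is ever "in memory" at a time.
paths = ["3x3_conv", "5x5_conv", "7x7_conv", "skip"]
logits = {p: 0.0 for p in paths}

def toy_score(path):
    """Hypothetical accuracy-minus-latency reward for a sampled path."""
    return {"3x3_conv": 0.6, "5x5_conv": 0.5, "7x7_conv": 0.8, "skip": 0.2}[path]

def probabilities():
    z = sum(math.exp(v) for v in logits.values())
    return {p: math.exp(v) / z for p, v in logits.items()}

for step in range(500):
    probs = probabilities()
    path = random.choices(paths, weights=[probs[p] for p in paths])[0]
    reward = toy_score(path)              # stand-in for measured accuracy/loss
    logits[path] += 0.1 * (reward - 0.5)  # nudge probability toward good paths

final = probabilities()
best = max(final, key=final.get)          # prune everything but the winner
print("final architecture keeps:", best)
```

The real algorithm scores sampled paths by training them on actual data and folds measured hardware latency into the reward, but the shape of the loop is the same: sample one path, score it, adjust probabilities, and finally keep only the highest-probability path.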


Another key innovation was making the NAS algorithm “hardware-aware,” Han says, meaning it uses the latency on each hardware platform as a feedback signal to optimize the architecture. To measure this latency on mobile devices, for instance, big companies such as Google will employ a “farm” of mobile devices, which is very expensive. The researchers instead built a model that predicts the latency using only a single mobile phone.

For each chosen layer of the network, the algorithm samples the architecture on that latency-prediction model. It then uses that information to design an architecture that runs as quickly as possible, while achieving high accuracy. In experiments, the researchers’ CNN ran nearly twice as fast as a gold-standard model on mobile devices.

One interesting result, Han says, was that their NAS algorithm designed CNN architectures that were long dismissed as being too inefficient — but, in the researchers’ tests, they were actually optimized for certain hardware. For instance, engineers have essentially stopped using 7x7 filters, because they’re computationally more expensive than multiple, smaller filters. Yet, the researchers’ NAS algorithm found architectures with some layers of 7x7 filters ran optimally on GPUs. That’s because GPUs have high parallelization — meaning they compute many calculations simultaneously — so can process a single large filter at once more efficiently than processing multiple small filters one at a time.
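The cost argument behind the conventional wisdom is easy to check with standard CNN arithmetic (independent of the MIT paper): three stacked 3x3 layers see the same 7x7 region of the input as one 7x7 filter, yet use fewer weights per channel pair.

```python
def receptive_field(num_layers, k):
    """Receptive field of `num_layers` stacked k x k convolutions (stride 1)."""
    rf = 1
    for _ in range(num_layers):
        rf += k - 1
    return rf

weights_7x7 = 7 * 7              # 49 weights per channel pair
weights_3x3_stack = 3 * (3 * 3)  # 27 weights for the same receptive field

# One 7x7 layer and three stacked 3x3 layers cover the same 7x7 region
assert receptive_field(1, 7) == receptive_field(3, 3) == 7
print(weights_7x7, "vs", weights_3x3_stack)
```

By the weight count alone the 3x3 stack wins, which is why 7x7 filters fell out of favor; the researchers' point is that on a highly parallel GPU, one large filter launched at once can still beat several small filters applied sequentially.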

“This goes against previous human thinking,” Han says. “The larger the search space, the more unknown things you can find. You don’t know if something will be better than the past human experience. Let the AI figure it out.”

The work was supported, in part, by the MIT Quest for Intelligence, the MIT-IBM Watson AI lab, SenseTime, and Xilinx.

Undergraduate financial aid remains strong

Thu, 03/21/2019 - 10:00am

MIT will further boost its undergraduate financial aid budget for the 2019-20 academic year. The 4.9 percent increase in aid will more than counterbalance a 3.75 percent increase in tuition and fees.

The Institute will commit $136.3 million for financial aid next year. The net cost for an average MIT student receiving need-based aid will be $22,500 in 2019, a 29 percent reduction from 2000, when the net cost, converted to current-year dollars, was $31,860 (based on the CPI-U index for June 2000 to June 2018).
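The reduction figure quoted above can be checked directly from the two dollar amounts:

```python
# Verify the reported ~29 percent reduction in net cost, using the
# article's figures: $22,500 in 2019 vs. $31,860 (2000, in current dollars).
net_2019 = 22_500
net_2000_in_current_dollars = 31_860

reduction = (net_2000_in_current_dollars - net_2019) / net_2000_in_current_dollars
print(round(reduction * 100))  # → 29 (percent)
```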

“We cannot say it enough: Our students are the most important ingredient in making MIT the special place it is,” says Vice Chancellor Ian A. Waitz. “We work tirelessly to find ways to keep the doors of MIT open to extraordinary, imaginative, and ambitious thinkers and doers — independent of their financial circumstances. Once they are here, our goal is to make sure they can focus on exploring their academic and personal passions.”

The estimated average MIT scholarship for students receiving financial aid next year is $51,500. More than 30 percent of MIT undergraduates receive aid sufficient to allow them to attend the Institute tuition-free. Financial aid packages cover the cost of living in a double room in a Tier 1 residence hall and the cost of the most extensive meal plan, which provides the most meals per week.

For undergraduates who do not receive need-based financial aid, tuition and fees will be $53,790 next year. With average housing and dining costs included, students not receiving financial aid will pay $70,180.

MIT is one of only five American colleges and universities that admit all undergraduate students without regard to their financial circumstances; that award all financial aid based on need; and that meet the full demonstrated financial need of all admitted students.

For students with family incomes under $90,000 a year and typical assets, MIT guarantees that scholarship funding from all sources will allow them to attend the Institute tuition-free. While the Institute’s financial aid program primarily supports students from lower- and middle-income households, even families earning more than $250,000 may qualify for need-based financial aid based on their family circumstances, such as if two or more children are in college at the same time.

About 58 percent of MIT’s 4,550 undergraduates receive need-based financial aid from the Institute and 18 percent receive Federal Pell Grants, which generally go to U.S. students with family incomes below $60,000.

MIT treats the Pell Grant in a unique way to further support low-income students. Unlike most other colleges and universities, MIT allows students to use the Pell Grant to offset what they are expected to contribute through work-study during the semester and the summer. MIT also changed its financial aid policies this year to provide more support for U.S. veterans.

In 2018, 72 percent of MIT seniors graduated with no debt; of those who did assume debt to finance their education, the median indebtedness at graduation was $14,840.

Academic institutions grant commercial license for CRISPR-based SHERLOCK diagnostic technology in developed world

Thu, 03/21/2019 - 9:43am

The following press release was issued today by the Broad Institute of MIT and Harvard.

A group of academic institutions has granted a license for SHERLOCK™, the highly sensitive, low-cost CRISPR-based diagnostic, for commercial uses in the developed world, while reserving rights to enable its broad use by organizations to serve developing nations as well as unmet public health needs in the developed world.

First unveiled in 2017, SHERLOCK lifts a barrier to rapid deployment of diagnostics in outbreak zones. The system, whose name stands for Specific High-sensitivity Enzymatic Reporter unLOCKing, allows clinicians to quickly and inexpensively diagnose disease and track epidemics, such as Ebola and Zika, without the need for extensive specialized equipment. SHERLOCK can detect the presence of viruses with an unmatched degree of sensitivity in clinical samples such as blood or saliva.

Under an agreement announced today, the institutions — Broad Institute of MIT and Harvard, Massachusetts Institute of Technology, Harvard University, Massachusetts General Hospital (MGH), Rutgers, The State University of New Jersey, Skolkovo Institute of Science and Technology (Skoltech), Wageningen, and University of Tokyo — have granted a license to Sherlock Biosciences Inc., a biotechnology company.

The license provides a limited exclusive right, under the Broad Institute’s inclusive innovation model, to deploy SHERLOCK diagnostic tools for commercial applications in the developed world.

“Because SHERLOCK is simple and inexpensive, it holds impressive potential for transforming how we detect disease,” said Issi Rozen, chief business officer at the Broad Institute. “It is therefore important to ensure creative commercial innovation while at the same time protecting access to new diagnostic tools in the developing world, and for public health applications in the developed world, where they are desperately needed. We designed our licensing strategy to accomplish this.” (Rozen will serve as an academic representative on the board of directors of Sherlock Biosciences, but will receive no personal compensation.)

The licensing agreement announced today does not cover SHERLOCK’s use in the developing world. In addition, the license is not exclusive for certain public health applications in the developed world — for example, the licensing structure is designed to make SHERLOCK available to help health care professionals quickly diagnose a host of circulating bacterial and viral infections such as malaria, tuberculosis, Zika, and rotavirus, among others. For such purposes, the academic coalition will ensure SHERLOCK is made widely available. In addition, SHERLOCK tools, knowledge, and methods will continue to be made freely available for academic research worldwide.

The technology was developed by a team of scientists from the Broad Institute, the McGovern Institute for Brain Research at MIT, the Institute for Medical Engineering & Science at MIT, the Wyss Institute for Biologically Inspired Engineering at Harvard University, MGH, Rutgers, and Skolkovo Institute of Science and Technology. It is a rapid, inexpensive, highly sensitive diagnostic tool with the potential for a transformative effect on research and global public health.

“Skoltech is proud to be working with Broad Institute and the international academic community to address important medical challenges facing humanity, save lives and improve health and well-being for the world’s citizens,” said Professor Alexander Kuleshov, President of Skoltech, and Member of the Russian National Academy of Sciences.

Broad Institute of MIT and Harvard was launched in 2004 to empower this generation of creative scientists to transform medicine. The Broad Institute seeks to describe all the molecular components of life and their connections; discover the molecular basis of major human diseases; develop effective new approaches to diagnostics and therapeutics; and disseminate discoveries, tools, methods, and data openly to the entire scientific community.

Founded by MIT, Harvard, Harvard-affiliated hospitals, and the visionary Los Angeles philanthropists Eli and Edythe L. Broad, the Broad Institute includes faculty, professional staff, and students from throughout the MIT and Harvard biomedical research communities and beyond, with collaborations spanning over a hundred private and public institutions in more than 40 countries worldwide. For further information about the Broad Institute, go to

“Particle robot” works as a cluster of simple units

Wed, 03/20/2019 - 1:59pm

Taking a cue from biological cells, researchers from MIT, Columbia University, and elsewhere have developed computationally simple robots that connect in large groups to move around, transport objects, and complete other tasks.

This so-called “particle robotics” system — based on a project by MIT, Columbia Engineering, Cornell University, and Harvard University researchers — comprises many individual disc-shaped units, which the researchers call “particles.” The particles are loosely connected by magnets around their perimeters, and each unit can only do two things: expand and contract. (Each particle is about 6 inches in its contracted state and about 9 inches when expanded.) That motion, when carefully timed, allows the individual particles to push and pull one another in coordinated movement. On-board sensors enable the cluster to gravitate toward light sources.

In a Nature paper published today, the researchers demonstrate a cluster of two dozen real robotic particles and a virtual simulation of up to 100,000 particles moving through obstacles toward a light bulb. They also show that a particle robot can transport objects placed in its midst.

Particle robots can form into many configurations and fluidly navigate around obstacles and squeeze through tight gaps. Notably, none of the particles directly communicate with or rely on one another to function, so particles can be added or subtracted without any impact on the group. In their paper, the researchers show particle robotic systems can complete tasks even when many units malfunction.

The paper represents a new way to think about robots, which are traditionally designed for one purpose, comprise many complex parts, and stop working when any part malfunctions. Robots made up of these simplistic components, the researchers say, could enable more scalable, flexible, and robust systems.

“We have small robot cells that are not so capable as individuals but can accomplish a lot as a group,” says Daniela Rus, director of the Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Andrew and Erna Viterbi Professor of Electrical Engineering and Computer Science. “The robot by itself is static, but when it connects with other robot particles, all of a sudden the robot collective can explore the world and control more complex actions. With these ‘universal cells,’ the robot particles can achieve different shapes, global transformation, global motion, global behavior, and, as we have shown in our experiments, follow gradients of light. This is very powerful.”

Joining Rus on the paper are: first author Shuguang Li, a CSAIL postdoc; co-first author Richa Batra and corresponding author Hod Lipson, both of Columbia Engineering; David Brown, Hyun-Dong Chang, and Nikhil Ranganathan of Cornell; and Chuck Hoberman of Harvard.

At MIT, Rus has been working on modular, connected robots for nearly 20 years, including an expanding and contracting cube robot that could connect to others to move around. But the square shape limited the robots’ group movement and configurations.

In collaboration with Lipson’s lab, where Li was a postdoc until coming to MIT in 2014, the researchers went for disc-shaped mechanisms that can rotate around one another. They can also connect and disconnect from each other, and form into many configurations.

Each unit of a particle robot has a cylindrical base, which houses a battery, a small motor, sensors that detect light intensity, a microcontroller, and a communication component that sends out and receives signals. Mounted on top is a children’s toy called a Hoberman Flight Ring — its inventor is one of the paper’s co-authors — which consists of small panels connected in a circular formation that can be pulled to expand and pushed back to contract. Two small magnets are installed in each panel.

The trick was programming the robotic particles to expand and contract in an exact sequence to push and pull the whole group toward a destination light source. To do so, the researchers equipped each particle with an algorithm that analyzes broadcasted information about light intensity from every other particle, without the need for direct particle-to-particle communication.

The sensors of a particle detect the intensity of light from a light source; the closer the particle is to the light source, the greater the intensity. Each particle constantly broadcasts a signal that shares its perceived intensity level with all other particles. Say a particle robotic system measures light intensity on a scale of levels 1 to 10: Particles closest to the light register a level 10 and those furthest will register level 1. The intensity level, in turn, corresponds to a specific time that the particle must expand. Particles experiencing the highest intensity — level 10 — expand first. As those particles contract, the next particles in order, level 9, then expand. That timed expanding and contracting motion happens at each subsequent level.

“This creates a mechanical expansion-contraction wave, a coordinated pushing and dragging motion, that moves a big cluster toward or away from environmental stimuli,” Li says. The key component, Li adds, is the precise timing from a shared synchronized clock among the particles that enables movement as efficiently as possible: “If you mess up the synchronized clock, the system will work less efficiently.”
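The timed wave Li describes can be illustrated with a deliberately simplified one-dimensional toy (not the authors' simulator): each particle computes a light-intensity level from its distance to the light, and a shared clock steps through levels from brightest (10) to dimmest (1). Here a particle's expand/contract cycle is modeled as a small net shift toward the light; in the real system the motion comes from particles pushing and pulling their magnetically linked neighbors.

```python
LIGHT_POS = 0.0
positions = [5.0, 6.0, 7.0, 8.0, 9.0]  # particle centers; light source at 0
STEP = 0.1                              # net shift per expansion (arbitrary units)

def intensity_level(pos, num_levels=10):
    # Closer to the light = higher broadcast level (10 nearest, 1 farthest).
    return max(1, num_levels - int(abs(pos - LIGHT_POS)))

for tick in range(300):
    phase = 10 - (tick % 10)  # shared synchronized clock: level 10 expands first
    # Particles whose broadcast level matches the current phase "expand",
    # modeled here as a small displacement toward the light.
    positions = [p - STEP if intensity_level(p) == phase else p
                 for p in positions]

print([round(p, 1) for p in positions])  # → [2.0, 3.0, 4.0, 5.0, 6.0]
```

Over 30 clock cycles every particle's level comes up exactly once per cycle, so the whole cluster drifts toward the light together, and desynchronizing the clock (as Li notes) would make the wave, and thus the motion, less efficient.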

In videos, the researchers demonstrate a particle robotic system comprising real particles moving and changing direction toward different light bulbs as they're flicked on, and working its way through a gap between obstacles. In their paper, the researchers also show that simulated clusters of up to 10,000 particles maintain locomotion, at half their speed, even when up to 20 percent of units fail.

“It’s a bit like the proverbial ‘gray goo,’” says Lipson, a professor of mechanical engineering at Columbia Engineering, referencing the science-fiction concept of a self-replicating robot that comprises billions of nanobots. “The key novelty here is that you have a new kind of robot that has no centralized control, no single point of failure, no fixed shape, and its components have no unique identity.”

The next step, Lipson adds, is miniaturizing the components to make a robot composed of millions of microscopic particles.

Machine learning identifies links between world’s oceans

Wed, 03/20/2019 - 11:20am

Oceanographers studying the physics of the global ocean have long found themselves facing a conundrum: Fluid dynamical balances can vary greatly from point to point, rendering it difficult to make global generalizations.

Factors like the wind, local topography, and meteorological exchanges make it difficult to compare one area to another. To add to the complexity, one would have to analyze billions of data points for numerous parameters — temperature, salinity, velocity, how things change with depth, whether there is a trend present — to pinpoint what physics are most dominant in a given region.

“You would have to look at an overwhelming number of different global maps and mentally match them up to figure out what matters most where,” says Maike Sonnewald, a postdoc working in the MIT Department of Earth, Atmospheric and Planetary Sciences (EAPS) and a member of the EAPS Program in Atmospheres, Oceans and Climate (PAOC). “It’s beyond what any human could decipher.”

Sonnewald, who has a background in physical oceanography and data science, uses computers to reveal connections and patterns in the ocean that would otherwise be beyond human capability. Recently, she applied a machine learning algorithm that sifted through vast amounts of data to identify patterns in the ocean that have similar physics, showing that there are five global dynamically consistent regions that make up the global ocean.

“It is amazing because it's so simple,” says Sonnewald. “It takes the really complicated world ocean and distills it down to a few important patterns. We use these to infer what's going on and to highlight areas that are more complicated.”

Sonnewald and co-authors Carl Wunsch, EAPS professor emeritus of physical oceanography and PAOC member, and Patrick Heimbach, an EAPS research affiliate and former senior research scientist, now at the University of Texas at Austin, published their findings in a special issue on “Geoscience Papers of the Future” in Earth and Space Science.

For data on what is happening in the ocean, Sonnewald used the Estimating the Circulation and Climate of the Ocean (ECCO) state estimate. ECCO is a 20-year estimate of ocean climate and circulation based on billions of points of observational data. Sonnewald then applied K-means clustering, an algorithm common in fields ranging from pharmaceutical research to engineering, which identifies robust patterns in data, to determine what the dominant physics in the ocean are and where they apply.
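The clustering idea can be sketched with a minimal pure-Python K-means (not the study's code): each ocean grid point is represented by a feature vector of dynamical-balance terms, and K-means groups points whose balances look alike. The 2-D data below are synthetic, fabricated to form two obvious regimes, standing in for the study's real balance vectors.

```python
import random

random.seed(1)

# Two made-up regimes, e.g. "wind-stress-dominated" vs. "torque-dominated".
data = ([(random.gauss(0.0, 0.1), random.gauss(1.0, 0.1)) for _ in range(50)] +
        [(random.gauss(1.0, 0.1), random.gauss(0.0, 0.1)) for _ in range(50)])

def kmeans(points, k, iters=20):
    centers = random.sample(points, k)
    for _ in range(iters):
        # Assignment step: each point joins its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            j = min(range(k), key=lambda c: (p[0] - centers[c][0]) ** 2 +
                                            (p[1] - centers[c][1]) ** 2)
            clusters[j].append(p)
        # Update step: move each center to the mean of its cluster.
        centers = [(sum(p[0] for p in cl) / len(cl),
                    sum(p[1] for p in cl) / len(cl)) if cl else centers[j]
                   for j, cl in enumerate(clusters)]
    return centers, clusters

centers, clusters = kmeans(data, k=2)
print(sorted(len(cl) for cl in clusters))
```

In the study itself, the five-regime result and the 93.7 percent coverage come from running this kind of clustering on the full ECCO state estimate, not from a toy like this; the sketch only shows how grid points with similar physics end up grouped together.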

The results show that there are five clusters that compose 93.7 percent of the global ocean. For example, in the largest cluster, accounting for 43 percent of the global ocean, the most dominant physical attribute is that wind stress on the surface of the ocean is balanced by bottom torques. Areas where this is found: a thin ribbon in the Southern Ocean, large areas of the Arctic seas, zonal streaks in the tropics, and subtropical and subpolar gyres in the Northern Hemisphere.

The other four clusters similarly describe the dominant physical force and the part of the global ocean where it can be found. The algorithm also identified the remaining 6.3 percent of the ocean as areas too complicated to be pinned down to a simple set of physical properties. This finding is also helpful, says Sonnewald, because it tells researchers where the outliers lie.

“I think that it really will ease a lot of the analysis and help us focus our research in the right places,” says Sonnewald.

Wunsch says one exciting implication of the research is that it may help oceanography look more like geology in that researchers who focus on specific regions of the ocean can collaborate and compare notes. A scientist working in one region could compare that region to one that behaves similarly.

“In a way, it’s a better way to use our tools,” says Wunsch.

What it can’t tell you, says Wunsch, is why regions behave differently. “That still takes a human being to go in and to try to understand what is going on in places where the machine identified to look,” he says.

As a next step, Sonnewald is running the same method with higher resolution data to pin down the complicated remaining 6.3 percent. The focus will be on the overturning and gyre circulation, which are both sensitive to a changing climate.

Sonnewald hopes these early findings offer compelling evidence for oceanographers to work more with data scientists to reveal more patterns present in the global ocean. Prior to coming to MIT, Sonnewald received a master’s degree in complex system simulation at The Institute for Complex Systems Simulation at the University of Southampton and a PhD in physical oceanography and complex systems simulation based at the National Oceanography Center in Southampton, England. Since then, she has focused on applying data science to physical oceanography as a postdoc at MIT and Harvard University.

Both fields have seen dramatic advancements in recent decades, says Sonnewald. But there still remains a gap between the "black-box" computing power of artificial intelligence and the deep trove of observational data that make efforts like ECCO possible.  

“Because we're kind of guiding the machine learning algorithm using ocean physics and verifying the results by the canonical regimes that we know should be there, we're able to close that gap,” says Sonnewald. “It's like building a bridge between machine learning and oceanography, and hopefully other people are going to cross that bridge.”

This work was funded by the U.S. NASA Sea Level Change Team and through ECCO Consortium funding via the Jet Propulsion Laboratory.

Alana gift to MIT launches Down syndrome research center, technology program for disabilities

Wed, 03/20/2019 - 10:30am

As part of its continued mission to help build a better world, MIT is establishing the Alana Down Syndrome Center, an innovative new research endeavor, technology development initiative, and fellowship program launched with a $28.6 million gift from Alana Foundation, a nonprofit organization started by Ana Lucia Villela of São Paulo, Brazil.

In addition to multidisciplinary research across neuroscience, biology, engineering, and computer science labs, the gift will fund a four-year program with MIT’s Deshpande Center for Technological Innovation called “Technology to Improve Ability,” in which creative minds around the Institute will be encouraged and supported in designing and developing technologies that can improve life for people with different intellectual abilities or other challenges.

The Alana Down Syndrome Center, based out of MIT’s Picower Institute for Learning and Memory, will engage the expertise of scientists and engineers in a research effort to increase understanding of the biology and neuroscience of Down syndrome. The center will also provide new training and educational opportunities for early career scientists and students to become involved in Down syndrome research. Together, the center and technology program will work to accelerate the generation, development, and clinical testing of novel interventions and technologies to improve the quality of life for people with Down syndrome.

“At MIT, we value frontier research, particularly when it is aimed at making a better world,” says MIT President L. Rafael Reif. “The Alana Foundation’s inspiring gift will position MIT’s researchers to investigate new pathways to enhance and extend the lives of those with Down syndrome. We are grateful to the foundation’s leadership — President Ana Lucia Villela and Co-President Marcos Nisti — for entrusting our community with this critical challenge.”

With a $1.7 million gift to MIT in 2015, Alana funded studies to create new laboratory models of Down syndrome and to improve understanding of the mechanisms of the disorder and potential therapies. In creating the new center, MIT and the Alana Foundation officials say they are building on that partnership to promote discovery and technology development aimed at helping people with different abilities gain greater social and practical skills to enhance their participation in the educational system, in the workforce, and in community life.

“We couldn’t be happier and more hopeful as to the size of the impact this center can generate,” Villela says. “It’s an innovative approach that doesn’t focus on the disability but, instead, focuses on the barriers that can prevent people with Down syndrome from thriving in life in their own way.”

Marcos Nisti, co-president of Alana, adds, “This grant represents all the trust we have in MIT especially because the values our family hold are so aligned with MIT’s own values and its mission.”

Villela and Nisti have two daughters, one with Down syndrome. MIT Executive Vice President and Treasurer Israel Ruiz has had a personal connection to the foundation.

“It is an extraordinary day,” Ruiz says. “It has been a pleasure getting to know Ana Lucia, Marcos and their family over the past few years. Their work to advance the needs of the Down syndrome community is truly exemplary, and I look forward to future collaborations. Today, MIT celebrates their generosity in recognizing all abilities and working to provide opportunities to all.”

Down syndrome, also known as trisomy 21, is characterized by extra genetic material from some or all of chromosome 21 in many or all of an individual’s cells and occurs in one out of every 700 babies in the United States. Though the chromosomal hallmark of Down syndrome has been well known for decades, and advances in research, health care and social services have doubled lifespans over the past 25 years, significant challenges remain for individuals with different abilities and their families because the underlying neurobiology of the disorder is complex.

The center will be co-directed by Angelika Amon, the Kathleen and Curtis Marble Professor in Cancer Research, and Li-Huei Tsai, the Picower Professor of Neuroscience. Amon is an expert in understanding the health impacts of chromosomal instability and aneuploidy, the presence of an abnormal chromosome number, while Tsai is renowned for her work in the field of neurodegenerative disorders, including Alzheimer’s disease, which shares important underlying similarities with Down syndrome.

In the first four years, the new center will employ cutting-edge techniques to study Down syndrome in the brain with two main focuses: systems and circuits as well as genes and cells.

With the support of the previous Alana Foundation gift, Hiruy Meharena, senior fellow in Tsai’s neuroscience lab, has already been deeply engaged in studying Down syndrome’s impact in the brain at the cellular and genomic level, examining key differences in gene expression in cultures of neurons and glia created from patient-derived induced pluripotent stem cells.

At the molecular and cellular level, Professor Manolis Kellis, director of MIT’s Computational Biology Group and a leader in big-data integration and analysis of genomic, epigenomic, and gene expression data, will collaborate with Tsai for single-cell profiling of brain samples to understand the genes, molecular pathways, and cellular states that play causal roles in cognitive differences in Down syndrome.

At the systems and circuits level, Ed Boyden, the Y. Eva Tan Professor in Neurotechnology, will lead efforts to conduct high-resolution 3-D brain mapping and will collaborate with Tsai to examine the potential of using her emerging non-invasive, sensory-based therapy for Alzheimer’s in Down syndrome.

Amon’s lab will bring its deep expertise from their study of cancer to the new center. Researchers there have made important discoveries about how aneuploidy may undermine overall health, for instance by causing stresses within cells. It is their hope that identifying genetic alterations that suppress the stresses associated with trisomy 21 could lead to the development of therapeutics that improve cell function in individuals with Down syndrome.

To further support these research endeavors and to increase the long-term global pipeline of scientists trained in the study of Down syndrome, the Alana Down Syndrome Center will fund postdoctoral Alana Fellowships and graduate fellowships.

The Alana Center will also convene an annual symposium on Down syndrome research, the first of which is tentatively scheduled for this fall.

The Alana Foundation gift supports the MIT Campaign for a Better World, which was publicly launched in 2016 with a mission to advance MIT’s work in education, research, and innovation to address humanity’s urgent challenges. A joint statement guiding the gift’s purpose is available at

Women in Data Science conference unites global community of researchers and practitioners

Wed, 03/20/2019 - 10:25am

The MIT Institute for Data, Systems, and Society (IDSS) convened professional data scientists, academic researchers, and students from a variety of disciplines for the third annual daylong Women in Data Science (WiDS) conference in Cambridge. WiDS Cambridge is one of many global satellite events of the WiDS conference at Stanford University, where attendees join a global community of data science researchers and practitioners. The conference is open to anyone interested in data science, but strives especially to create opportunities for women in the field to showcase their work and network with each other.

“I think WiDS is a great opportunity to bring together women at all professional levels — students, postdocs, faculty, and professionals in industry — who are working in data science, building community, and learning from a wide variety of perspectives,” said Stefanie Jegelka, an IDSS affiliate faculty member with the Department of Electrical Engineering and Computer Science (EECS). Jegelka is an MIT WiDS planning committee member who also gave a talk exploring the properties of neural networks, focusing on ResNet architecture and neural networks for graphs.

Topics at this year’s WiDS Cambridge included artificial intelligence, bias in algorithms, prediction and forecasting, and developing a better understanding of machine learning and neural network properties. WiDS speakers work in a variety of fields, including health care, criminal justice, and business administration. Esther Duflo, an IDSS affiliate with the MIT Department of Economics, talked about using machine learning in poverty alleviation. Machine learning, she argued, has more use in development and economics than prediction. “Machine learning and randomized control trials can be useful complements,” she said.

The wide array of applications for data science was also on display during the conference’s poster session, where almost 30 students presented their research. This session provides newer practitioners the chance to hone their communication skills, get feedback on their work, and form useful connections for their career. Marie Charpignon, a student with the IDSS social and engineering systems doctoral program, offered an example of this kind of connection. “During WiDS, someone stopped by my poster on the modeling of Ebola spread and offered to connect me with New England Complex Systems — it is right across the street from MIT, but I simply did not know,” she said.

WiDS attendees from industry similarly have the opportunity to learn from cutting-edge academic research and strengthen industry connections with academia. “It was great to see the WiDS conference showcasing data scientists using their skills to improve the world,” said Michael DeAddio, president and chief operating officer of WorldQuant, a sponsor of WiDS and industry partner of IDSS. “From supporting disaster recovery efforts, to improving predictions for health outcomes and developing more accurate models for recidivism — this is a great platform to highlight such efforts and inspire women interested in data science.”

Adds Charpignon: “Meeting with people who inspire us and that we can inspire back is what WiDS is all about.”

WiDS Cambridge was co-hosted by Harvard University’s Institute for Applied Computational Science at the Microsoft Research New England NERD Center.

How tumors behave on acid

Wed, 03/20/2019 - 12:00am

Scientists have long known that tumors have many pockets of high acidity, usually found deep within the tumor where little oxygen is available. However, a new study from MIT researchers has found that tumor surfaces are also highly acidic, and that this acidity helps tumors to become more invasive and metastatic.

The study found that the acidic environment helps tumor cells to produce proteins that make them more aggressive. The researchers also showed that they could reverse this process in mice by making the tumor environment less acidic.

“Our findings reinforce the view that tumor acidification is an important driver of aggressive tumor phenotypes, and it indicates that methods that target this acidity could be of value therapeutically,” says Frank Gertler, an MIT professor of biology, a member of MIT’s Koch Institute for Integrative Cancer Research, and the senior author of the study.

Former MIT postdoc Nazanin Rohani is the lead author of the study, which appears in the journal Cancer Research.

Mapping acidity

Scientists usually attribute a tumor’s high acidity to the lack of oxygen, or hypoxia, that often occurs in tumors because they don’t have an adequate blood supply. However, until now, it has been difficult to precisely map tumor acidity and determine whether it overlaps with hypoxic regions.

In this study, the MIT team used a probe called pH (Low) Insertion Peptide (pHLIP), originally developed by researchers at the University of Rhode Island, to map the acidic regions of breast tumors in mice. This peptide is floppy at normal pH but becomes more stable at low, acidic pH. When this happens, the peptide can insert itself into cell membranes. This allows the researchers to determine which cells have been exposed to acidic conditions, by identifying cells that have been tagged with the peptide.

To their surprise, the researchers found that not only were cells in the oxygen-deprived interior of the tumor acidic, but there were also acidic regions at the boundary between the tumor and the structural tissue that surrounds it, known as the stroma.

“There was a great deal of tumor tissue that did not have any hallmarks of hypoxia that was quite clearly exposed to acidosis,” Gertler says. “We started looking at that, and we realized hypoxia probably wouldn’t explain the majority of regions of the tumor that were acidic.”

Further investigation revealed that many of the cells at the tumor surface had shifted to a type of cell metabolism known as aerobic glycolysis. This process generates lactic acid as a byproduct, which could account for the high acidity, Gertler says. The researchers also discovered that in these acidic regions, cells had turned on gene expression programs associated with invasion and metastasis. Nearly 3,000 genes showed pH-dependent changes in activity, and close to 300 displayed changes in how their RNA transcripts are assembled, or spliced.

“Tumor acidosis gives rise to the expression of molecules involved in cell invasion and migration. This reprogramming, which is an intracellular response to a drop in extracellular pH, gives the cancer cells the ability to survive under low-pH conditions and proliferate,” Rohani says.

Those activated genes include Mena, which codes for a protein that normally plays a key role in embryonic development. Gertler’s lab had previously discovered that in some tumors, Mena is spliced differently, producing an alternative form of the protein known as MenaINV (invasive). This protein helps cells to migrate into blood vessels and spread through the body.

Another key protein that undergoes alternative splicing in acidic conditions is CD44, which also helps tumor cells to become more aggressive and break through the extracellular tissues that normally surround them. This study marks the first time that acidity has been shown to trigger alternative splicing for these two genes.

Reducing acidity

The researchers then decided to study how these genes would respond to decreasing the acidity of the tumor microenvironment. To do that, they added sodium bicarbonate to the mice’s drinking water. This treatment reduced tumor acidity and shifted gene expression closer to the normal state. In other studies, sodium bicarbonate has also been shown to reduce metastasis in mouse models.

Sodium bicarbonate would not be a feasible cancer treatment because it is not well-tolerated by humans, but other approaches that lower acidity could be worth exploring, Gertler says. Because the alternatively spliced genes expressed in response to the tumor’s acidic microenvironment help cells survive, reversing those expression programs could perturb tumor growth and potentially metastasis.

“Other methods that would more focally target acidification could be of great value,” he says.

The research was funded by the Koch Institute Support (core) Grant from the National Cancer Institute, the Howard Hughes Medical Institute, the National Institutes of Health, the KI Quinquennial Cancer Research Fellowship, and MIT’s Undergraduate Research Opportunities Program.

Other authors of the paper include Liangliang Hao, a former MIT postdoc; Maria Alexis and Konstantin Krismer, MIT graduate students; Brian Joughin, a lead research modeler at the Koch Institute; Mira Moufarrej, a recent graduate of MIT; Anthony Soltis, a recent MIT PhD recipient; Douglas Lauffenburger, head of MIT’s Department of Biological Engineering; Michael Yaffe, a David H. Koch Professor of Science; Christopher Burge, an MIT professor of biology; and Sangeeta Bhatia, the John and Dorothy Wilson Professor of Health Sciences and Technology and Electrical Engineering and Computer Science.

3Qs: Sarah Williams on mapping urban transport

Tue, 03/19/2019 - 12:20pm

Imagine that you’re a city planner who needs to make decisions about where to place public housing, amenities, or critical services, but you don’t have a complete picture of how people move throughout the city. You simply don’t have the data needed to make these decisions. That is the case for 92 percent of the world’s largest low- and middle-income cities faced with transportation data deficits. Add informal transit into the picture — matatus in Nairobi, colectivos in Mexico City, jeepneys in Manila — and the situation gets even more complex since these modes operate outside of formal public transportation and their routes and schedules tend to be irregular. Not every city has the means of creating or collecting data on informal transit to get that full picture of the network. Sarah Williams is combining her skills as a geographer, architect, data scientist, and city planner to address such deficiencies in developing cities. Her goal is to create data for civic change. 

Q: What is your new initiative, and what do you hope to accomplish?

A: We’re creating an open platform for anyone who is interested in accessing tools for mapping urban informal transit in Latin American and Caribbean cities. Transportation data is essential for economic development, and the goal is to make creating and collecting transportation data easier.

Our resource center will link people to the right resources and tools to create transportation data that can influence policy outcomes. We’re linking city transit operators, local governments, nonprofit and civic organizations, startups, and researchers to open access data collection and analysis tools, tutorials, case studies, and a global knowledge network on policy, data, and mobility. Overall, the resource center’s efforts contribute to the United Nations Sustainable Development Goal 11 to “make cities inclusive, safe, resilient and sustainable” and to target 11.2, which calls for “safe, affordable, accessible, and sustainable transport systems for all.”

The MIT Civic Data Design Lab’s main partners for this project are the Inter-American Development Bank and Mastercard Center for Inclusive Growth, and it will be led by World Resources Institute Mexico, the MIT International Policy Lab, and Columbia University’s Earth Institute.

Q: What are the main challenges to collecting urban data in this region and how are you addressing those challenges?

A: When it comes to developing cities, one major challenge is that data is scarce. This is the case across many sectors but especially urban transportation. Another challenge is that governments, NGOs, transit operators, and other actors don’t know how to access funds to pay for data collection, and there is lack of knowledge about the tools that are available for accomplishing this. On top of everything, transportation networks in developing cities are rarely unified. There are hundreds of operators across public transit and informal transit that are not necessarily coordinated with each other in terms of who goes where and who serves whom. This presents challenges to urban planning, reaching sustainable development targets, and providing accessibility to public transit and amenities in cities. 

To address these challenges, we coordinate the right stakeholders to be part of transit mapping initiatives, help connect them to funding sources, train people to develop transit data in a standardized format, show people how to use transit data as an analysis tool, and connect people to the local tech community to build new products with the transit data.

Q: How did you become interested in urban transportation?

A: I wasn't always interested in transportation, but when I saw how severe congestion in Nairobi could bring the city to a standstill, I knew I needed to get involved and use my skills to address critical transportation problems. I quickly learned how the crippling problems I saw in Nairobi also afflict other developing cities. 

The resource center that we’ve launched is largely inspired by the Civic Data Design Lab’s Digital Matatus project in Nairobi. Launched in 2012, Digital Matatus began as a collaboration between MIT, Columbia University, and the University of Nairobi. The project captured transportation data for Nairobi’s informal matatu network and resulted in the development of mobile routing applications and a new transit map for the city. The data, maps, and apps are now free and available to the public, transforming the way residents of Nairobi navigate and think about their transportation system.

Sorority leaders recognized for creating leadership, support, service, and scholarship opportunities

Tue, 03/19/2019 - 11:25am

MIT’s Panhellenic Association (Panhel) has been honored with three awards from the 2019 Northeast Greek Leadership Association (NGLA). MIT Panhel governs seven MIT sorority chapters. With over 750 total members, the sorority community fosters friendships and provides opportunities for members to explore academics, careers, and leadership, and get involved with community service.

The awards were announced at the annual NGLA Conference in Hartford, Connecticut, earlier this month. The conference brings together student leaders from fraternities and sororities across the Northeast to learn from one another. The conference’s mission is to empower student leaders and their communities to align their actions and values. Additionally, the NGLA honors Greek organizations for academic achievement, chapter development and leadership, membership recruitment and intake, multicultural initiatives and programming, civic engagement, public relations, and risk management.

This year, MIT Panhel took home three major awards. The Josette Kaufman Award recognized the STAR Program, a series of six education sessions covering women’s health and well-being topics ranging from bystander training to substance abuse. The two Amy Vojta Impact Awards recognized Panhel’s risk-management efforts on campus, as well as its pre-recruitment inclusivity program and multicultural initiatives. In addition, MIT’s Zeta Delta chapter of Delta Phi Epsilon was recognized for its academic achievement.

“The conference went really well! It was a great experience to interact and exchange ideas with interfraternity councils and panhellenic councils from around the northeast region,” says Vanessa Wong, executive vice president of MIT Panhel.

“Being part of a sorority and the Panhel community is something to be really proud of,” says Alice Zhou, president of the MIT Panhellenic Association. “The members we have here in sororities are incredibly amazing and inspiring.”

Moving forward, MIT’s Panhellenic Association hopes to continue to grow, improve, and celebrate diversity and inclusivity on campus. “We are so appreciative and thankful for the leadership of MIT’s Panhellenic Association and the positive impact they have on our campus community,” says Suzy Nelson, vice president and dean for student life.