MIT Latest News

MIT News is dedicated to communicating to the media and the public the news and achievements of the students, faculty, staff and the greater MIT community.

Researchers tune material’s color and thermal properties separately

Tue, 04/02/2019 - 1:30pm

The color of a material can often tell you something about how it handles heat. Think of wearing a black shirt on a sweltering summer’s day — the darker the pigment, the warmer you’re likely to feel. Likewise, the more transparent a glass window, the more heat it can let through. A material’s responses to visible and infrared radiation are often naturally linked.

Now MIT engineers have made samples of a strong, tissue-like polymer material whose color and heat properties can be tailored independently of each other. For instance, they have fabricated samples of very thin black film designed to reflect heat and stay cool. They’ve also made films in a rainbow of other colors, each made to reflect or absorb infrared radiation regardless of how it responds to visible light.

The researchers can specifically tune the color and heat properties of this new material to fit the requirements for a host of wide-ranging applications, including colorful, heat-reflecting building facades, windows, and roofs; light-absorbing, heat-dissipating covers for solar panels; and lightweight fabric for clothing, outerwear, tents, and backpacks — all designed to either trap or reflect heat, depending on the environments in which they would be used.

“With this material, everything could look more colorful, because then you wouldn’t be concerned with what color does to the thermal balance of, say, a building, or a window, or your clothing,” says Svetlana Boriskina, a research scientist in MIT’s Department of Mechanical Engineering.

Boriskina is the author of a study that appears today in the journal Optical Materials Express, outlining the new material-engineering technique. Her MIT co-authors are Luis Marcelo Lozano, Seongdon Hong, Yi Huang, Hadi Zandavi, Yoichiro Tsurimaki, Jiawei Zhou, Yanfei Xu, and Gang Chen, the Carl Richard Soderberg Professor of Power Engineering, along with Yassine Ait El Aoud and Richard Osgood III, both of the Combat Capabilities Development Command Soldier Center, in Natick, Massachusetts.

Polymer conductors

For this work, Boriskina was inspired by the vibrant colors in stained-glass windows, which for centuries have been made by adding particles of metals and other natural pigments to glass.

“However, despite providing excellent visual transparency, glass has many limitations as a material,” Boriskina notes. “It is bulky, inflexible, fragile, does not spread heat well, and is obviously not suitable for wearable applications.”

She says that while it’s relatively simple to tailor the color of glass, the material’s response to heat is difficult to tune. For instance, glass panels reflect room-temperature heat and trap it inside the room. Furthermore, if colored glass is exposed to incoming sunlight from a particular direction, the heat from the sun can create a hotspot, which is difficult to dissipate in glass. If a material like glass can’t conduct or dissipate heat well, that heat could damage the material.

The same can be said for most plastics, which can be engineered in any color but for the most part are thermal absorbers and insulators, concentrating and trapping heat rather than reflecting it away.

For the past several years, Chen’s lab has been looking into ways to manipulate flexible, lightweight polymer materials to conduct, rather than insulate, heat, mostly for applications in electronics. In previous work, the researchers found that by carefully stretching polymers like polyethylene, they could change the material’s internal structure in a way that also changed its heat-conducting properties.

Boriskina thought this technique might be useful not just for fabricating polymer-based electronics, but also in architecture and apparel. She adapted this polymer-fabrication technique, adding a twist of color.

“It’s very hard to develop a new material with all these different properties in it,” she says. “Usually if you tune one property, the other gets destroyed. Here, we started with one property that was discovered in this group, and then we added a new property creatively. All together it works as a multifunctional material.”

Hotspots stretched away

To fabricate the colorful films, the team started with a mixture of polyethylene powder and a chemical solvent, to which they added nanoparticles chosen to give each film a desired color. For instance, to make black film, they added particles of silicon; red, blue, green, and yellow films were made by adding various commercial dyes.

The team then attached each nanoparticle-embedded film to a roll-to-roll apparatus and heated it to soften the film, making it more pliable as they carefully stretched the material.

As they stretched each film, they found, unsurprisingly, that the material became more transparent. They also observed that polyethylene’s microscopic structure changed as it stretched. Where normally the material’s polymer chains resemble a disorganized tangle, similar to cooked spaghetti, when stretched these chains straighten out, forming parallel fibers.

When the researchers placed each sample under a solar simulator — a lamp that mimics the visible and thermal radiation of the sun — they found the more stretched out a film, the more heat it was able to dissipate. The long, parallel polymer chains essentially provided a direct route along which heat could travel. Along these chains, heat, in the form of phonons, could then shoot away from its source, in a “ballistic” fashion, avoiding the formation of hotspots.

The researchers also found that the less they stretched the material, the more insulating it was, trapping heat and forming hotspots within the polymer tangles.

By controlling the degree to which the material is stretched, Boriskina could control polyethylene’s heat-conducting properties, regardless of the material’s color. She also carefully chose the nanoparticles, not just by their visual color, but also by their interactions with invisible radiative heat. She says researchers can potentially use this technique to produce thin, flexible, colorful polymer films that can conduct or insulate heat, depending on the application.

Going forward, she plans to launch a website that offers algorithms to calculate a material’s color and thermal properties, based on its dimensions and internal structure.

In addition to films, her group is now working on fabricating nanoparticle-embedded polyethylene thread, which can be stitched together to form lightweight apparel designed to be either insulating or cooling.

“This is in film form now, but we’re working it into fibers and fabrics,” Boriskina says. “Polyethylene is produced by the billions of tons and could be recycled, too. I don’t see any significant impediments to large-scale production.”

This research was supported, in part, by the Combat Capabilities Development Command Soldier Center.

Teaching machines to reason about what they see

Tue, 04/02/2019 - 11:15am

A child who has never seen a pink elephant can still describe one — unlike a computer. “The computer learns from data,” says Jiajun Wu, a PhD student at MIT. “The ability to generalize and recognize something you’ve never seen before — a pink elephant — is very hard for machines.”

Deep learning systems interpret the world by picking out statistical patterns in data. This form of machine learning is now everywhere, automatically tagging friends on Facebook, narrating Alexa’s latest weather forecast, and delivering fun facts via Google search. But statistical learning has its limits. It requires tons of data, has trouble explaining its decisions, and is terrible at applying past knowledge to new situations; it can’t comprehend an elephant that’s pink instead of gray.

To give computers the ability to reason more like us, artificial intelligence (AI) researchers are returning to abstract, or symbolic, programming. Popular in the 1950s and 1960s, symbolic AI wires in the rules and logic that allow machines to make comparisons and interpret how objects and entities relate. Symbolic AI uses less data, records the chain of steps it takes to reach a decision, and, when combined with the brute processing power of statistical neural networks, can even beat humans in a complicated image comprehension test.

A new study by a team of researchers at MIT, the MIT-IBM Watson AI Lab, and DeepMind shows the promise of merging statistical and symbolic AI. Led by Wu and Joshua Tenenbaum, a professor in MIT’s Department of Brain and Cognitive Sciences and the Computer Science and Artificial Intelligence Laboratory, the team shows that its hybrid model can learn object-related concepts like color and shape, and leverage that knowledge to interpret complex object relationships in a scene. With minimal training data and no explicit programming, their model could transfer concepts to larger scenes and answer increasingly tricky questions as well as or better than its state-of-the-art peers. The team presents its results at the International Conference on Learning Representations in May.

“One way children learn concepts is by connecting words with images,” says the study’s lead author Jiayuan Mao, an undergraduate at Tsinghua University who worked on the project as a visiting fellow at MIT. “A machine that can learn the same way needs much less data, and is better able to transfer its knowledge to new scenarios.”

The study is a strong argument for moving back toward abstract-program approaches, says Jacob Andreas, a recent graduate of the University of California at Berkeley, who starts at MIT as an assistant professor this fall and was not involved in the work. “The trick, it turns out, is to add more symbolic structure, and to feed the neural networks a representation of the world that’s divided into objects and properties rather than feeding it raw images,” he says. “This work gives us insight into what machines need to understand before language learning is possible.”

The team trained their model on images paired with related questions and answers, part of the CLEVR image comprehension test developed at Stanford University. As the model learns, the questions grow progressively harder, from, “What’s the color of the object?” to “How many objects are both right of the green cylinder and have the same material as the small blue ball?” Once object-level concepts are mastered, the model advances to learning how to relate objects and their properties to each other.

Like other hybrid AI models, MIT’s works by splitting up the task. A perception module of neural networks crunches the pixels in each image and maps the objects. A language module, also made of neural nets, extracts a meaning from the words in each sentence and creates symbolic programs, or instructions, that tell the machine how to answer the question. A third reasoning module runs the symbolic programs on the scene and gives an answer, updating the model when it makes mistakes.
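To make this division of labor concrete, here is a minimal sketch in Python of the final step: executing a hand-written symbolic program over an object-based scene table of the kind a perception module might produce. The scene data, attribute names, and primitive operations below are illustrative assumptions, not the team’s learned modules.

```python
# Toy sketch: answering a CLEVR-style question by running a symbolic
# program over an object-based scene representation. The scene data,
# attribute names, and primitives are hypothetical, not the model's
# learned components.

# What a perception module might output: one record per detected object.
scene = [
    {"color": "green",  "shape": "cylinder", "material": "metal",  "size": "large", "x": 1.0},
    {"color": "blue",   "shape": "sphere",   "material": "rubber", "size": "small", "x": 2.5},
    {"color": "red",    "shape": "cube",     "material": "rubber", "size": "large", "x": 3.0},
    {"color": "yellow", "shape": "cylinder", "material": "metal",  "size": "small", "x": 4.0},
]

# Primitive operations a reasoning module could execute.
def filter_attr(objects, key, value):
    return [o for o in objects if o[key] == value]

def right_of(objects, anchor):
    return [o for o in objects if o["x"] > anchor["x"]]

def same_material(objects, anchor):
    return [o for o in objects if o is not anchor and o["material"] == anchor["material"]]

# "How many objects are both right of the green cylinder and have the
# same material as the small blue ball?" expressed as a composed program.
green_cylinder = filter_attr(filter_attr(scene, "color", "green"), "shape", "cylinder")[0]
blue_ball = filter_attr(filter_attr(scene, "color", "blue"), "size", "small")[0]
candidates = [o for o in right_of(scene, green_cylinder) if o in same_material(scene, blue_ball)]
print(len(candidates))  # -> 1 (the red rubber cube)
```

In the actual system, the language module generates such a program automatically from the question, and the perception module builds the object table from raw pixels rather than by hand.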

Key to the team’s approach is a perception module that translates the image into an object-based representation, making the programs easier to execute. Also unique is what they call curriculum learning, or selectively training the model on concepts and scenes that grow progressively more difficult. It turns out that feeding the machine data in a logical way, rather than haphazardly, helps the model learn faster while improving accuracy.
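Curriculum learning itself is a simple scheduling idea that can be sketched in a few lines. The sketch below assumes, purely for illustration, that questions compiled into longer programs are harder; the difficulty proxy, stage count, and example data are hypothetical and not the study’s actual setup.

```python
# Minimal sketch of curriculum learning: present training examples in
# stages of increasing difficulty rather than all at once. The difficulty
# proxy (program length) and stage count are illustrative assumptions.

def difficulty(example):
    # Hypothetical proxy: questions compiled into longer programs are harder.
    return len(example["program"])

def curriculum(examples, n_stages=3):
    """Yield progressively larger pools of examples, easiest first."""
    ordered = sorted(examples, key=difficulty)
    for stage in range(1, n_stages + 1):
        cutoff = len(ordered) if stage == n_stages else stage * len(ordered) // n_stages
        yield ordered[:cutoff]

# Tiny demonstration with made-up examples (programs of varying length).
examples = [
    {"question": "What color is the cube?",             "program": ["filter_shape", "query_color"]},
    {"question": "How many spheres are right of it?",   "program": ["filter_shape", "relate", "filter_shape", "count"]},
    {"question": "What shape is the big yellow thing?", "program": ["filter_size", "filter_color", "query_shape"]},
]

for stage, pool in enumerate(curriculum(examples), start=1):
    print(f"stage {stage}: training on {len(pool)} example(s)")
    # In a real pipeline, the model would be trained on `pool` at this point.
```

Note that in this sketch the pool grows at each stage rather than being swapped out, so earlier, simpler concepts stay in play while harder ones are added.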

Once the model has a solid foundation, it can interpret new scenes and concepts, and increasingly difficult questions, almost perfectly. Asked to answer an unfamiliar question like, “What’s the shape of the big yellow thing?” it outperformed its peers at Stanford and nearby MIT Lincoln Laboratory with a fraction of the data. 

While other models trained on the full CLEVR dataset of 70,000 images and 700,000 questions, the MIT-IBM model used 5,000 images and 100,000 questions. As the model built on previously learned concepts, it absorbed the programs underlying each question, speeding up the training process. 
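That works out to roughly 7 percent of the images and about 14 percent of the questions.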

Though statistical, deep learning models are now embedded in daily life, much of their decision process remains hidden from view. This lack of transparency makes it difficult to anticipate where the system is susceptible to manipulation, error, or bias. Adding a symbolic layer can open the black box, explaining the growing interest in hybrid AI systems.

“Splitting the task up and letting programs do some of the work is the key to building interpretability into deep learning models,” says Lincoln Laboratory researcher David Mascharka, whose hybrid model, Transparency by Design Network, is benchmarked in the MIT-IBM study.      

The MIT-IBM team is now working to improve the model’s performance on real-world photos and extending it to video understanding and robotic manipulation. Other authors of the study are Chuang Gan and Pushmeet Kohli, researchers at the MIT-IBM Watson AI Lab and DeepMind, respectively.

School of Science announces 2019 Infinite Mile Awards

Tue, 04/02/2019 - 10:12am

The MIT School of Science has announced the winners of the 2019 Infinite Mile Award, which is presented annually to staff members within the school who demonstrate exemplary dedication to making MIT a better place.

Nominated by their colleagues, these winners are notable for their unrelenting and extraordinary hard work in their positions, which can include mentoring fellow community members, innovating new solutions to problems big and small, building their communities, or going far above and beyond their job descriptions to support the goals of their home departments, labs, and research centers.

The 2019 Infinite Mile Award winners are:

Christine Brooks, an administrative assistant in the Department of Chemistry, nominated by Mircea Dincă and several members of the Dincă, Schrock, and Cummins groups;

Annie Cardinaux, a research specialist in the Department of Brain and Cognitive Sciences, nominated by Pawan Sinha;

Kimberli DeMayo, a human resources consultant in the Department of Mathematics, nominated by Nan Lin, Dennis Porche, and Paul Seidel, with support from several other faculty members;

Arek Hamalian, a technical associate at the Picower Institute for Learning and Memory, nominated by Susumu Tonegawa;

Jonathan Harmon, an administrative assistant in the Department of Mathematics, nominated by Pavel Etingof and Kimberli DeMayo, with support from several other faculty members;

Tanya Khovanova, a lecturer in the Department of Mathematics, nominated by Pavel Etingof, David Jerison, and Slava Gerovitch;

Kelley Mahoney, an SRS financial staff member in the Kavli Institute for Astrophysics and Space Research, nominated by Sarah Brady, Michael McDonald, Anna Frebel, Jacqueline Hewitt, Jack Defandorf, and Stacey Sullaway;

Walter Massefski, the director of the instrumentation facility in the Department of Chemistry, nominated by Timothy Jamison and Richard Wilk;

Raleigh McElvery, a communications coordinator in the Department of Biology, nominated by Vivian Siegel with support from Amy Keating, Julia Keller, and Erika Reinfeld; and

Kate White, an administrative officer in the Department of Brain and Cognitive Sciences, nominated by Jim DiCarlo, Michale Fee, Sara Cody-Larnard, Rachel Donahue, Federico Chiavazza, Matthew Regan, Gayle Lutchen, and William Lawson.

The recipients will receive a monetary award and will be honored this month at a celebratory reception, along with their peers, family and friends, and the recipients of the 2019 Infinite Kilometer Award.

Finding common ground

Mon, 04/01/2019 - 5:00pm

One evening last November, something unexpected came out of a microfluidics and nanofluidics lab at MIT.

As postdocs David Cheng and Rozzeta Dolah ran an experiment, their conversation drifted from the work at hand to their future plans. “It’s a somewhat inevitable question that postdocs hate ... yet just cannot escape from discussing,” says Cheng. Both wished they had more role models, particularly past postdocs, whom they could reach out to for advice.

Then an idea emerged. What about holding an event to enable current and former postdocs to network and share experiences?

Cheng and Dolah — officers of the MIT Postdoctoral Association (PDA) — ended up doing just that, hosting the first-ever MIT PDA Homecoming on March 15. With support from the Office of the Vice President for Research (VPR) and the MIT Alumni Association, the PDA organized a panel discussion and reception that drew more than 120 attendees, including 21 past postdocs and a number of current students.

Five former postdocs with a range of academic and industry experience participated in the panel: Brent Grocholski, a physical science associate editor for Science; Sisir Karumanchi, a robotics technologist in mobility and robotics systems at NASA’s Jet Propulsion Laboratory; Lamia Youseff, a research scientist in scalable machine learning at Stanford University and a Sloan Fellow at Stanford’s business school; Rajesh Jugulum, an informatics director at CIGNA and adjunct professor at Northeastern University; and Virginia Burger, a senior scientist and director of scientific collaborations at XtalPi, Inc.

The event kicked off with virtual remarks from Vice President for Research Maria Zuber, whose office provides oversight of postdoctoral affairs, and professor of physics Edmund Bertschinger, who serves as faculty mentor to the PDA. Zuber noted recent enhancements, made in collaboration with the PDA, to support the more than 1,500 postdocs at MIT. They include an increased focus on mentoring, career guidance, and professional development; a partnership with the Alumni Association to offer postdocs access to Infinite Connection accounts and the job board; and new postdoc representation on Institute committees.

“While we have more to achieve in the coming years, let us take pride and celebrate the success of our past and current postdocs,” Zuber said. “May the bonds of MIT and the spirit of 'mens et manus' ['mind and hand'] guide us as we strive to create new knowledge and transform society for the better, wherever we are.”

Both Zuber and Bertschinger offered special recognition to Dana Bresee Keeth, director of postdoctoral services in VPR. Keeth plans to retire this month after eight years of service in that role.

Alex Albanese, a postdoc in the Institute for Medical Engineering and Science, moderated the panel discussion. His “full-time hobby” is producing a podcast called GLiMPSE, which offers a window into the work of postdocs and scientists across MIT. “The diversity of cultures, perspectives, and scientific backgrounds never ceases to amaze me,” he said in his opening remarks. Albanese peppered the panelists with questions about their postdoc experiences; making the transition to a career outside MIT; current responsibilities and challenges; and general advice for success at MIT and beyond.

Throughout the discussion, panelists described how their current work differs from their postdoc experience. “You get good at a specific problem as a postdoc,” Grocholski said, adding that now he doesn’t have the “bandwidth” to understand all the technical aspects of the papers he reviews. “I need to look at the 30,000-foot view. That’s something that requires you to stretch yourself and try to see things a bit more broadly.”

“In academia, you look for perfection in a particular topic or concept, and time is not a factor,” Jugulum said. In industry, perfection is not that important, “but time is a factor … you have to understand the importance of time.”

Being nimble and adapting to change is key, Youseff said. “The ability to be flexible, and move across technologies, understand the different foundations of the technology” but not be tied to any particular one, has served her well as her career has progressed.

Effective communication was also a common theme. “All the skill sets [you have] in writing proposals or papers will come in very handy” no matter what you do, Karumanchi told the audience. Conveying technical concepts to a broader audience, however, has often proved challenging for the panelists.

Understanding your audience is important, Jugulum said. Talking to a business leader is a different skill for those with a technical background. A colleague recommended using a storytelling approach. “Tell the story first. What is the result? How did you achieve these results? Then go to the technique,” he said.

When asked what skill they wished they had developed as postdocs, several panelists said they should have networked and explored more. “I urge all of you to take advantage of the opportunities that are here to the fullest extent,” Grocholski said.

Burger recalled that when she was just a few months shy of the end of her postdoc appointment, her plan to stay in academia changed radically. Having submitted her faculty applications, she had more time to participate in activities beyond her field. She discovered a passion for entrepreneurship that changed the trajectory of her career. In retrospect, she said, it would have been good to make an effort to explore “something outside of my interest” every semester.

The panel resonated with attendees Jose Ruiperez, a postdoc in Open Learning, and Christina Tringides, a graduate student in health sciences and technology. “For me, the postdoc is an important point when you have to decide whether you’re going to remain in academia or move into industry,” Ruiperez said. After going back and forth, he’s decided to stay in academia, “but it’s nice to see why other people decided to move to industry or to environments other than the typical academic environment.”

“The event was really nicely done,” Tringides said. “Everyone had such different backgrounds, but they still said a lot of the same themes.” She appreciated getting a sense of career options for postdocs at this stage of her training.

In closing the program, Albanese gave the audience an assignment.

“I know a lot of panelists regretted not networking. We heard it’s a really important skill," he said. "So talk to one new person tonight.”
