MIT Latest News
Self-assembling materials called block copolymers, which are known to form a variety of predictable, regular patterns, can now be made into much more complex patterns that may open up new areas of materials design, a team of MIT researchers says.
The new findings appear in the journal Nature Communications, in a paper by postdoc Yi Ding, professors of materials science and engineering Alfredo Alexander-Katz and Caroline Ross, and three others.
“This is a discovery that was in some sense fortuitous,” says Alexander-Katz. “Everyone thought this was not possible,” he says, describing the team’s discovery of a phenomenon that allows the polymers to self-assemble in patterns that deviate from regular symmetrical arrays.
Self-assembling block copolymers are materials whose chain-like molecules, which are initially disordered, will spontaneously arrange themselves into periodic structures. Researchers had found that if there was a repeating pattern of lines or pillars created on a substrate, and then a thin film of the block copolymer was formed on that surface, the patterns from the substrate would be duplicated in the self-assembled material. But this method could only produce simple patterns such as grids of dots or lines.
In the new method, there are two different, mismatched patterns. One is from a set of posts or lines etched on a substrate material, and the other is an inherent pattern that is created by the self-assembling copolymer. For example, there may be a rectangular pattern on the substrate and a hexagonal grid that the copolymer forms by itself. One would expect the resulting block copolymer arrangement to be poorly ordered, but that’s not what the team found. Instead, “it was forming something much more unexpected and complicated,” Ross says.
There turned out to be a subtle but complex kind of order — interlocking areas that formed slightly different but regular patterns, of a type similar to quasicrystals, which don’t quite repeat the way normal crystals do. In this case, the patterns do repeat, but over longer distances than in ordinary crystals. “We’re taking advantage of molecular processes to create these patterns on the surface” with the block copolymer material, Ross says.
This potentially opens the door to new ways of making devices with tailored characteristics for optical systems or for “plasmonic devices” in which electromagnetic radiation resonates with electrons in precisely tuned ways, the researchers say. Such devices require very exact positioning and symmetry of patterns with nanoscale dimensions, something this new method can achieve.
Katherine Mizrahi Rodriguez, who worked on the project as an undergraduate, explains that the team prepared many of these block copolymer samples and studied them under a scanning electron microscope. Yi Ding, who worked on this for his doctoral thesis, “started looking over and over to see if any interesting patterns came up,” she says. “That’s when all of these new findings sort of evolved.”
The resulting odd patterns are “a result of the frustration between the pattern the polymer would like to form, and the template,” explains Alexander-Katz. That frustration leads to a breaking of the original symmetries and the creation of new subregions with different kinds of symmetries within them, he says. “That’s the solution nature comes up with. Trying to fit in the relationship between these two patterns, it comes up with a third thing that breaks the patterns of both of them.” They describe the new patterns as a “superlattice.”
Having created these novel structures, the team went on to develop models to explain the process. Co-author Karim Gadelrab PhD ’19 says, “The modeling work showed that the emergent patterns are in fact thermodynamically stable, and revealed the conditions under which the new patterns would form.”
Ding says, “We understand the system fully in terms of the thermodynamics,” and the self-assembling process “allows us to create fine patterns and to access some new symmetries that are otherwise hard to fabricate.”
He says this removes some existing limitations in the design of optical and plasmonic materials, and thus “creates a new path” for materials design.
So far, the work the team has done has been confined to two-dimensional surfaces, but in ongoing work they are hoping to extend the process into the third dimension, says Ross. “Three dimensional fabrication would be a game changer,” she says. Current fabrication techniques for microdevices build them up one layer at a time, she says, but “if you can build up entire objects in 3-D in one go,” that would potentially make the process much more efficient.
These findings “open new pathways to generate templates for nanofabrication with symmetries not achievable from the copolymer alone,” says Thomas P. Russell, the Silvio O. Conte Distinguished Professor of Polymer Science and Engineering at the University of Massachusetts, Amherst, who was not involved in this work. He adds that it “opens the possibility of exploring a large parameter space for uncovering other symmetries than those discussed in the manuscript.”
Russell says, “The work is of the highest quality,” adding, “The pairing of theory and experiment is quite powerful and, as can be seen in the text, the agreement between the two is remarkably good.”
The research was funded by the Office of General Sciences of the U.S. Department of Energy. The team also included graduate student Hejin Huang.
Omer Tanovic, a PhD candidate in the Department of Electrical Engineering and Computer Science, joined the Laboratory for Information and Decision Systems (LIDS) because he loves studying theory and turning research questions into solvable math problems. But Omer says that his engineering background — before coming to MIT he received undergraduate and master’s degrees in electrical engineering and computer science at the University of Sarajevo in Bosnia-Herzegovina — has taught him never to lose sight of the intended applications of his work, or the practical parameters for implementation.
“I love thinking about things on the abstract math level, but it’s also important to me that the work we are doing will help to solve real-world problems,” Omer says. “Instead of building circuits, I am creating algorithms that will help make better circuits.”
One real-world problem that captured Omer’s attention during his PhD is power efficiency in wireless operations. The success of wireless communications has led to massive infrastructure expansion in the United States and around the world. This has included many new cell towers and base stations. As these networks and the volume of information they handle grow, they consume an increasingly hefty amount of power, some of which goes to powering the system as it’s supposed to, but much of which is lost as heat due to energy inefficiency. This is a problem both for companies such as mobile network operators, which have to pay large utility bills to cover their operational costs, and for society at large, as the sector’s greenhouse gas emissions rise.
These concerns are what motivate Omer in his research. Most of the projects that he has worked on at MIT seek to design signal processing systems, optimized to different measures, that will increase power efficiency while ensuring that the output signal (what you hear when talking to someone on the phone, for instance) is true to the original input (what was said by the person on the other end of the call).
His latest project seeks to address the power efficiency problem by decreasing the peak-to-average power ratio (PAPR) of wireless communication signals. In the broadest sense, PAPR is an indirect indicator of how much power is required to send and receive a clear signal across a network; the lower the ratio, the more energy-efficient the transmission.

Much of the power consumed in cellular networks is dedicated to power amplifiers, which take low-power electronic input and convert it to a higher-power output, such as picking up a weak radio signal generated inside a cell phone and amplifying it so that, when emitted by an antenna, it is strong enough to reach a cell tower. This ensures that the signal is robust enough to maintain an adequate signal-to-noise ratio over the communication link. Power amplifiers are at their most efficient when operating near their saturation level, at maximum output power. However, because cellular network technology has evolved to accommodate a huge volume and variety of information across the network — resulting in far less uniform signals than in the past — modern communication standards require signals with large peak-to-average power ratios. A radio frequency transmitter must therefore be designed so that the underlying power amplifier can handle peaks much higher than the average power being transmitted, which means that, most of the time, the power amplifier is working inefficiently, far from its saturation level.
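PAPR itself has a simple definition: the ratio of a signal's peak instantaneous power to its average power, usually quoted in decibels. The sketch below is our illustration, not the researchers' code; the random-phase multicarrier waveform is a generic stand-in for a real communication signal.

```python
import numpy as np

# Build a multicarrier waveform: 64 sinusoids with random phases summed
# together, a rough stand-in for a modern high-PAPR communication signal.
rng = np.random.default_rng(0)
n_carriers = 64
t = np.linspace(0, 1, 4096, endpoint=False)
phases = rng.uniform(0, 2 * np.pi, n_carriers)
signal = sum(np.cos(2 * np.pi * (k + 1) * t + p)
             for k, p in enumerate(phases))

def papr_db(x):
    """Peak-to-average power ratio of a real signal, in decibels."""
    power = x ** 2
    return 10 * np.log10(power.max() / power.mean())

print(f"PAPR of the multicarrier waveform: {papr_db(signal):.1f} dB")
```

For reference, a constant signal has a PAPR of 0 dB and a single sinusoid about 3 dB; multicarrier signals like the one above land far higher, which is exactly the problem for the power amplifier.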
“Every cell tower has to have some kind of PAPR reduction algorithm in place in order to operate. But the algorithms they use are developed with little or no guarantees on improving system performance,” Omer says. “A common conception is that optimal algorithms, which would certainly improve system performance, are either too expensive to implement — in terms of power or computational capacity — or cannot be implemented at all.”
Omer, who is supervised by LIDS Professor Alexandre Megretski, designed an algorithm that decreases the PAPR of a modern communication signal, allowing the power amplifier to operate closer to its maximum efficiency and reducing the amount of energy lost in the process. He began by formulating the design as an optimization problem, whose exact solution turns out not to be implementable: it would require infinite latency, meaning an infinite delay before transmitting the signal. However, Omer showed that the optimal system, despite its infinite latency, has a desirable fading-memory property, which meant he could approximate it with a system of finite latency, an acceptable lag time. From this, he developed a way to best approximate the optimal system. The resulting approximation is implementable and allows tradeoffs between precision and latency, so that real-time realizations of the algorithm can improve power efficiency without adding too much transmission delay or distortion to the signal. Applying this system to standardized test signals for 4G communication, Omer found that, on average, he could achieve around a 50 percent reduction in peak-to-average power ratio while satisfying standard quality measures for digital communication signals.
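For contrast, the crudest possible PAPR reducer, hard clipping, illustrates the tradeoff that a careful algorithm must manage: clipping lowers the ratio but distorts the signal. This baseline sketch is ours; it is not the algorithm described above.

```python
import numpy as np

def papr_db(x):
    """Peak-to-average power ratio in decibels."""
    power = x ** 2
    return 10 * np.log10(power.max() / power.mean())

def hard_clip(x, threshold):
    """Naive PAPR reduction: saturate the waveform at +/- threshold."""
    return np.clip(x, -threshold, threshold)

# A Gaussian sequence serves as a stand-in for a noisy, high-PAPR signal.
rng = np.random.default_rng(0)
x = rng.normal(size=8192)
clipped = hard_clip(x, 2.0 * x.std())

# Clipping buys PAPR at the cost of in-band distortion (relative MSE).
distortion = np.mean((x - clipped) ** 2) / np.mean(x ** 2)
print(f"PAPR before: {papr_db(x):.1f} dB, after: {papr_db(clipped):.1f} dB")
print(f"relative distortion introduced: {distortion:.4f}")
```

Practical algorithms like the one in this article aim for the same PAPR reduction while keeping such distortion within the limits set by communication standards.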
Omer’s algorithm, along with improving power efficiency, is also computationally efficient. “This is important in order to ensure that the algorithm is not just theoretically implementable, but also practically implementable,” Omer says, once again stressing that abstract mathematical solutions are only valuable if they cohere to real-world parameters. Microchip real estate in communications is a limited commodity, so the algorithm cannot take up much space, and its mathematical operations have to be executed quickly, as latency is a critical factor in wireless communications. Omer believes that the algorithm could be adapted to solve other engineering problems with similar frameworks, including envelope tracking and model predictive control.
While he has been working on this project, Omer has made a home for himself at MIT. Two of his three sons were born here in Cambridge — in fact, the youngest was born on campus, in the stairwell of Omer and his wife’s graduate housing building. “The neighbors slept right through it,” Omer says with a laugh.
Omer quickly became an active member of the LIDS community when he arrived at MIT. Most notably, he was part of the LIDS student conference and student social committees, where, in addition to helping run the annual LIDS Student Conference, a signature lab event now in its 25th year, he also helped to organize monthly lunches, gatherings, and gaming competitions, including a semester-long challenge dubbed the OLIDSpics (an homage to the Olympic Games). He says that being on the committees was a great way to engage with and contribute to the LIDS community, a group for which he is grateful.
“At MIT, and especially at LIDS, you can learn something new from everyone you speak to. I’ve been in many places, and this is the only place where I’ve experienced a community like that,” Omer says.
As Omer’s time at LIDS draws to an end, he is still debating what to do next. On one hand, his love of solving real-world problems is drawing him toward industry. He spent four summers during his PhD interning at companies including the Mitsubishi Electric Research Lab. He enjoyed the fast pace of industry, being able to see his solutions implemented relatively quickly.
On the other hand, Omer is not sure he could ever leave academia for long; he loves research and is also truly passionate about teaching. Omer, who grew up in Bosnia-Herzegovina, began teaching in his first year of high school, at a math camp for younger children. He has been teaching in one form or another ever since.
At MIT, Omer has taught both undergraduate- and graduate-level courses, including as an instructor-G, an appointment only given to advanced students who have demonstrated teaching expertise. He has won two teaching awards, the MIT School of Engineering Graduate Student Extraordinary Teaching and Mentoring Award in 2018 and the MIT EECS Carlton E. Tucker Teaching Award in 2017.
The magnitude of Omer’s love for teaching is clear when he speaks about working with students: “That moment when you explain something to a student and you see them really understand the concept is priceless. No matter how much energy you have to spend to make that happen, it’s worth it,” Omer says.
In communications, power efficiency is key, but when it comes to research and teaching, there’s no limit to Omer’s energy.
To design buildings that can withstand the largest of storms, Kostas Keremidis, a PhD candidate at the MIT Concrete Sustainability Hub, is using research at the smallest scale — that of the atom.
His approach, which derives partially from materials science, models a building as a collection of points that interact through forces like those found at the atomic scale.
“When you look at a building, it is actually a series of connections between columns, windows, doors, and so on,” says Keremidis. “Our new framework looks at how different building components connect together to form a building like atoms form a molecule — similar forces hold them together, both at the atomic and building scale.” The framework is called molecular dynamics-based structural modeling.
Eventually, Keremidis hopes it will provide developers and builders with a new way to readily predict building damage from disasters like hurricanes and earthquakes.
But before he can predict building damage, Keremidis must first assemble a model.
He begins by taking a building and dividing its respective elements into nodes, or "atoms." This is a standard procedure called "discretization," whereby a building is divided into different points. Then he gives each "atom" different properties according to its material. For example, the weight of each "atom" may depend on if it’s part of a floor, a door, a window, and so on. After modeling them, he defines their bonds.
The first type of bond between points in a building model is called an axial bond. These describe how elements deform under a load in the direction of their span — in other words, they model how a column shrinks and then rebounds under a load, like a spring.
The second type of connection is that of the angular bonds, which represent how elements like a beam bend in the lateral direction. Keremidis uses these vertical and lateral interactions to model the deformation and breaking of different building elements. Breaking occurs when these bonds deform too much, just like in real structures.
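The two bond types can be made concrete with a toy sketch. All constants below (stiffnesses, rest geometry, breaking strain) are invented for illustration; the article does not give the model's actual parameters.

```python
def axial_force(k, length, rest_length):
    """Hooke's-law spring along the element's span: the bond pushes
    back toward its rest length, like a column under load."""
    return -k * (length - rest_length)

def angular_energy(k_theta, theta, rest_theta):
    """Harmonic penalty for bending away from the rest angle, modeling
    lateral deformation of elements such as beams."""
    return 0.5 * k_theta * (theta - rest_theta) ** 2

def bond_broken(length, rest_length, max_strain=0.05):
    """A bond fails once its strain exceeds a threshold, mimicking
    fracture in a real structural element (threshold is hypothetical)."""
    return abs(length - rest_length) / rest_length > max_strain

# A stiff axial bond stretched past its rest length pulls back...
print(axial_force(k=100.0, length=1.1, rest_length=1.0))
# ...and at 10 percent strain, this toy bond would count as broken.
print(bond_broken(length=1.1, rest_length=1.0))
```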
To see how one of his buildings will fare under conditions like storms or earthquakes, Keremidis must thoroughly test these assembled atoms and their bonds under numerous simulations.
“Once I have my model and my building, I then run around 10,000 simulations,” explains Keremidis. “I can assign 10,000 different loads to one element or building, or I can also assign that element 10,000 different properties.”
For him to assess the results of these simulated conditions or properties, Keremidis returns to the bonds. “When they deform during a simulation, these bonds will try to bring the building back to its original position,” he notes. “But they may also get damaged, too. This is how we model damage — we count how many bonds are destroyed and where.”
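The counting step can be sketched in the same toy spirit: apply many simulated loads to a set of bonds with varied strengths and tally how many break in each run. Every number here (bond count, strength distribution, load range) is invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_bonds = 200
# Each bond gets a slightly different breaking threshold, standing in
# for material variability across a building's elements.
strengths = rng.normal(1.0, 0.1, n_bonds)

def broken_count(load):
    """Count bonds whose demand exceeds their strength under one load.
    The random factor stands in for how geometry distributes the load."""
    demands = load * rng.uniform(0.5, 1.5, n_bonds)
    return int((demands > strengths).sum())

# 10,000 simulated load cases, echoing the scale described above.
damage = [broken_count(load) for load in rng.uniform(0.2, 1.6, 10_000)]
print(f"mean broken bonds per simulation: {np.mean(damage):.1f}")
```

The histogram of such counts, mapped back to where the broken bonds sit in the structure, is what turns raw simulations into a damage estimate.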
The damage is in the details
The model’s innovations actually lie in its damage prediction.
Traditionally, engineers have used a method called finite element analysis to model building damage. Like MIT’s approach, it also breaks down a building into component parts. But it is generally a time-consuming technique, and it is built around the elasticity of elements. This means that it can model only small deformations in a building, rather than large-scale inelastic deformations, like fracture, that frequently occur under hurricane loads.
An added benefit of his molecular dynamics model is that Keremidis can explore “different materials, different structural properties, and different building geometries” by playing with the layout and nature of atoms and their bonds. This means that molecular dynamics can potentially model any element of a building, and more quickly, too.
By scaling this approach beyond individual buildings, molecular dynamics could also better inform city, state, and even federal hazard-mitigation efforts.
For hazard mitigation, cities currently rely on a model by the Federal Emergency Management Agency (FEMA) called HAZUS. It takes historical weather data and a dozen standard building models to predict the damage that a community might experience during a hazard.
While useful, HAZUS is not ideal. It offers only around a dozen standardized building types and provides qualitative, rather than quantitative, results.
The MIT model, however, will allow stakeholders to go into finer detail. “With FEMA’s HAZUS, the current level of categorization is too coarse. Instead, we should have 50 or 60 building types,” says Keremidis. “Our model will allow us to collect and model this wider range of building types.”
Since it measures damage by counting the broken bonds between atoms, a molecular dynamics approach will also more easily quantify the damage that hazards like windstorms or earthquakes can inflict on a community. Such a quantifiable understanding of hazard damage should lead to more accurate estimations of mitigation costs and recovery.
According to the U.S. Congressional Budget Office, windstorms currently cause $28 billion in damage annually; by 2075, that figure is projected to reach $38 billion due to climate change and coastal development.
With a molecular dynamics approach, developers and government agencies will have one more tool to predict and mitigate these damages.
The MIT Concrete Sustainability Hub is a team of researchers from several departments across MIT working on concrete and infrastructure science, engineering, and economics. Its research is supported by the Portland Cement Association and the Ready Mixed Concrete Research and Education Foundation.
In any conventional silicon-based solar cell, there is an absolute limit on overall efficiency, based partly on the fact that each photon of light can only knock loose a single electron, even if that photon carried twice the energy needed to do so. But now, researchers have demonstrated a method for getting high-energy photons striking silicon to kick out two electrons instead of one, opening the door for a new kind of solar cell with greater efficiency than was thought possible.
While conventional silicon cells have an absolute theoretical maximum efficiency of about 29.1 percent conversion of solar energy, the new approach, developed over the last several years by researchers at MIT and elsewhere, could bust through that limit, potentially adding several percentage points to that maximum output. The results are described today in the journal Nature, in a paper by graduate student Markus Einzinger, professor of chemistry Moungi Bawendi, professor of electrical engineering and computer science Marc Baldo, and eight others at MIT and at Princeton University.
The basic concept behind this new technology has been known for decades, and the first demonstration that the principle could work was carried out by some members of this team six years ago. But actually translating the method into a full, operational silicon solar cell took years of hard work, Baldo says.
That initial demonstration “was a good test platform” to show that the idea could work, explains Daniel Congreve PhD ’15, an alumnus now at the Rowland Institute at Harvard, who was the lead author in that prior report and is a co-author of the new paper. Now, with the new results, “we’ve done what we set out to do” in that project, he says.
The original study demonstrated the production of two electrons from one photon, but it did so in an organic photovoltaic cell, which is less efficient than a silicon solar cell. It turned out that transferring the two electrons from a top collecting layer made of tetracene into the silicon cell “was not straightforward,” Baldo says. Troy Van Voorhis, a professor of chemistry at MIT who was part of that original team, points out that the concept was first proposed back in the 1970s, and says wryly that turning that idea into a practical device “only took 40 years.”
The key to splitting the energy of one photon into two electrons lies in a class of materials that possess “excited states” called excitons, Baldo says: In these excitonic materials, “these packets of energy propagate around like the electrons in a circuit,” but with quite different properties than electrons. “You can use them to change energy — you can cut them in half, you can combine them.” In this case, they were going through a process called singlet exciton fission, which is how the light’s energy gets split into two separate, independently moving packets of energy. The material first absorbs a photon, forming an exciton that rapidly undergoes fission into two excited states, each with half the energy of the original state.
But the tricky part was then coupling that energy over into the silicon, a material that is not excitonic. This coupling had never been accomplished before.
As an intermediate step, the team tried coupling the energy from the excitonic layer into a material called quantum dots. “They’re still excitonic, but they’re inorganic,” Baldo says. “That worked; it worked like a charm,” he says. By understanding the mechanism taking place in that material, he says, “we had no reason to think that silicon wouldn’t work.”
What that work showed, Van Voorhis says, is that the key to these energy transfers lies in the very surface of the material, not in its bulk. “So it was clear that the surface chemistry on silicon was going to be important. That was what was going to determine what kinds of surface states there were.” That focus on the surface chemistry may have been what allowed this team to succeed where others had not, he suggests.
The key was in a thin intermediate layer. “It turns out this tiny, tiny strip of material at the interface between these two systems [the silicon solar cell and the tetracene layer with its excitonic properties] ended up defining everything. It’s why other researchers couldn’t get this process to work, and why we finally did.” It was Einzinger “who finally cracked that nut,” he says, by using a layer of a material called hafnium oxynitride.
The layer is only a few atoms thick, or just 8 angstroms (ten-billionths of a meter), but it acted as a “nice bridge” for the excited states, Baldo says. That finally made it possible for the single high-energy photons to trigger the release of two electrons inside the silicon cell. That produces a doubling of the amount of energy produced by a given amount of sunlight in the blue and green part of the spectrum. Overall, that could produce an increase in the power produced by the solar cell — from a theoretical maximum of 29.1 percent, up to a maximum of about 35 percent.
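A back-of-the-envelope check (ours, not the paper's) shows why the gain appears in the blue and green part of the spectrum: a photon can yield two electrons only if it carries at least twice silicon's band gap of roughly 1.1 electron volts.

```python
# Photon energy in eV for a wavelength in nm: E = h*c / wavelength,
# with h*c approximately 1239.8 eV*nm.
h_c = 1239.8          # eV * nm
si_band_gap = 1.1     # eV, approximate band gap of silicon

for name, wavelength_nm in [("blue", 450), ("green", 530), ("red", 650)]:
    energy_ev = h_c / wavelength_nm
    feasible = energy_ev >= 2 * si_band_gap
    print(f"{name} ({wavelength_nm} nm): {energy_ev:.2f} eV ->",
          "two electrons possible" if feasible else "below the threshold")
```

Blue and green photons clear the two-band-gap threshold while red photons do not, which is why the doubling shows up only at the high-energy end of the visible spectrum.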
Actual silicon cells are not yet at their maximum, and neither is the new material, so more development needs to be done, but the crucial step of coupling the two materials efficiently has now been proven. “We still need to optimize the silicon cells for this process,” Baldo says. For one thing, with the new system those cells can be thinner than current versions. Work also needs to be done on stabilizing the materials for durability. Overall, commercial applications are probably still a few years off, the team says.
Other approaches to improving the efficiency of solar cells tend to involve adding another kind of cell, such as a perovskite layer, over the silicon. “They’re building one cell on top of another,” Baldo says. “Fundamentally, we’re making one cell — we’re kind of turbocharging the silicon cell. We’re adding more current into the silicon, as opposed to making two cells.”
The researchers have measured one special property of hafnium oxynitride that helps it transfer the excitonic energy. “We know that hafnium oxynitride generates additional charge at the interface, which reduces losses by a process called electric field passivation. If we can establish better control over this phenomenon, efficiencies may climb even higher,” Einzinger says. So far, no other material they’ve tested can match its properties.
The research was supported as part of the MIT Center for Excitonics, funded by the U.S. Department of Energy.
All ecosystems around the globe are impacted by the interplay between herbivores and their gut microbes. Strict herbivores such as grazers are dependent on the enzymes produced by their gut microbes to digest the complex plant fibers that constitute their diet. These animals form a symbiotic relationship with their microbes, one that affects ecosystems around the globe because it allows for energy to be transferred from plants to animals.
One of the most remarkable examples of this symbiotic relationship is found in the Galapagos Islands, where marine iguanas have evolved to graze exclusively on fast-growing algae found on the shores of the archipelago’s islands. Unfortunately, specialization comes at a cost: Due to their strict dependency on just one type of algae, these iguanas are highly susceptible to environmental fluctuations that change the type of algae available on the islands. In the past, El Niño events — whose intensity and frequency are exacerbated by climate change — have led to a shift in the algal species, causing up to a 90 percent loss of the iguana population.
Associate Professor Otto Cordero of the Department of Civil and Environmental Engineering recently teamed up with researchers from the Universidad San Francisco de Quito and with Professor Itzhak Mizrahi from Ben Gurion University of the Negev. The group hypothesized that the susceptibility of the marine iguanas is caused by a loss of functional diversity in their microbiomes — in other words, that generations of a specialized diet have led to a shift in the iguana gut microbiome, favoring microorganisms that can only digest one type of algae.
To test this idea, the team visited the islands and collected samples from various iguana colonies around the archipelago. The group plans to identify the enzymes and the microbes responsible for the algal breakdown, and to study potential microbiome interventions that could expand the iguana diet and enable them to consume other forms of algae. If successful, this would represent a novel strategy for conservation based on microbiome engineering.
Submitted by: MIT Department of Civil and Environmental Engineering | Video by: Wild Hope Collective | 5 min, 33 sec
Gita Manaktala, editorial director of the MIT Press, was named the 2019 Association of University Presses (AUPresses) Constituency Award honoree at this year's AUPresses Annual Meeting in Detroit, Michigan. The award was introduced by Larin McLaughlin, editor-in-chief of University of Washington Press, during the opening banquet.
"Her letters of nomination for this award illustrate how much so many of us cherish Gita's contributions to our work," McLaughlin remarked. "One points out that 'her knowledge, her charisma, her humor, her charm are all generously bestowed on our membership.' Another describes Gita as 'a strong ambassador for the cooperative and collaborative spirit that defines the AUPresses.'"
Manaktala's leadership of diversity and inclusion initiatives was seen as a signal achievement by many of her nominators. She has been one of the principal mentors involved in the Mellon University Press Diversity Fellowship since its inception in 2016, a program that creates opportunities in university press acquisitions departments for talented scholars from diverse communities. The convener of a Diversity and Inclusion Working Group at her own press, she was also a founding member and co-chair, with McLaughlin, of the association's Diversity and Inclusion Task Force, created in 2017. The AUPresses task force will become a full committee for equity, justice, and inclusion, with Manaktala continuing as a co-chair, this fall.
Manaktala's nearly 30-year career at MIT Press has encompassed marketing as well as editorial areas of expertise. As marketing director at the press from 2004 to 2008, she led global sales, marketing, publicity, and electronic product development efforts. As its editorial director since 2009, she has guided a large and complex acquisitions program and currently oversees the work of 14 acquiring editors.
Her additional volunteer service to the association has been equally varied:
- She chaired the 2011 Annual Meeting Program Committee, constructing a conference that is well-remembered for its vibrant and community-building offerings.
- As part of the 2015-16 Acquisitions Editorial Committee, she helped create the Best Practices for Peer Review handbook, bringing together insights from dozens of acquisitions editors to produce this guiding document.
- She has served on the Faculty Outreach, Digital Publishing, and Nominating committees, and as a member of the association's board of directors since 2017.
The association's award recognizes Manaktala as a multi-talented and collaborative leader and thanks her for her many contributions to the work of university presses and to a rich and inclusive publishing culture.
Created in 1991, the AUPresses Constituency Award recognizes staff at member presses who have demonstrated active leadership and service to the association and the university press community. Coincidentally, last year's award winner, Colleen Lanick, was also an MIT Press staffer; she is currently publicity director of Harvard University Press.
Before she even set foot on the MIT campus, Ankita Reddy ’19 was exploring questions of medicine, public health, and social inequities. During high school, she produced a documentary about Henrietta Lacks, an African-American woman whose cell line has proved invaluable to medical research — but who never gave consent for its use in this way. And while interning in a lab at the National Institutes of Health, Reddy found her focus shifting to a societal picture, as federal budget reductions squeezed scientists:
"I was curious about the impacts of cuts on physicians and researchers who were struggling to sustain work on human diseases," she recalls. "I realized I was increasingly interested in finding meaningful intersections of science and the humanities."
Intersections of science and the humanities
At MIT, Reddy swiftly identified a path for pursuing discipline-spanning studies. As a double major in anthropology and biology, she put her full range of interests to work. Her senior thesis involved hybrid research in these two areas: Reddy helped develop a rapid, inexpensive diagnostic for mosquito-borne disease, and she also performed field research and analysis looking at the potential deployment of this and other diagnostic devices in developing countries.
The efficacy and success of medical advances must always be evaluated within a larger social context, Reddy says. "Infectious diseases impact communities unequally — often hitting hardest those without resources," she notes. "In order to do the most good for individual patients and to slow the spread of disease, our interventions need to take into consideration the public health capacity of communities, as well as local ideas of health and sickness."
A foundation in anthropology
Reddy credits foundational anthropology coursework for her commitment to this kind of public health approach. As a first-year student, she took 21A.331[J] (Infections and Inequalities: Interdisciplinary Perspectives on Global Health), which was taught by three professors: chemical engineer Arup Chakraborty; biologist and physician Dennis Kim; and medical anthropologist Erica Caple James, who became Reddy's advisor.
"It is fascinating to view infectious diseases through multiple lenses," says Reddy. "The goal is finding the synergy of anthropological and medical thinking to make interventions more tailored and culturally sensitive, so they can be deployed to effect widespread change."
Combining anthropology with biology for a public health mission
As she honed her skills in Boston-area wet labs and pursued a developmental health clinical internship at a Johannesburg, South Africa, hospital, Reddy sought opportunities to realize her interdisciplinary ambitions. James sent Reddy to the lab of Lee Gehrke, the Hermann L.F. von Helmholtz Professor in the Institute for Medical Engineering and Science at MIT. There, she was recruited by senior researcher Irene Bosch to help design an inexpensive paper-based diagnostic for such diseases as Zika, dengue, and Chikungunya. This proved the ideal venue for Reddy to pursue her fusion of anthropological and scientific interests within a public health mission.
Starting in 2017, Reddy helped tweak the diagnostics in the lab, and spent some frenzied months field-testing the devices internationally — all while carrying a full course load.
"In junior year, I took the devices to Brazil for a long weekend, and it was really challenging," she recounts. "I had to take 10 flights round-trip, which really tested my motivation and resilience."
These trips helped spark the idea behind her senior thesis in anthropology. While evaluating the efficacy of the diagnostic in the field, Reddy was also wondering how such tests "could be meaningfully deployed in resource-poor areas."
Experience and intuition
So in 2018, with the help of an Eloranta Summer Research Fellowship, Reddy spent several months in Hyderabad and Bangalore, India, interviewing and observing physicians and medical students during rounds at infectious disease hospitals catering largely to poor populations. She hoped to learn whether mosquito-borne diseases posed a major issue for these hospitals; what kind of improvements in treatment, public health infrastructure, or diagnosis physicians might seek; and how they used technology or other methods to relieve the suffering of patients in their daily practice.
Drawing on ethnographic expertise garnered from such classes as 21A.802 (Seminar in Ethnography and Fieldwork), Reddy was able to tease out some central themes from interview transcripts and field notes.
"Experience and intuition play a huge role in medical expertise in these hospitals," says Reddy. "Everyone had a story about a physician who had a sixth sense, who knew from a glance — without using any technology — what disease a patient suffered from." Any attempt to bring new technologies into these hospital environments, she says, "must acknowledge the existing structures that are based on medical improvisation and intuition."
From diagnostics to doctor
These insights will prove useful as Reddy launches her post-graduation life as a researcher at E25Bio, a startup spun out of the Gehrke Lab. With her grasp of cultural context, Reddy hopes to help craft a realistic business model to attract funding and speed the dissemination of her team's diagnostic technology. She particularly looks forward to the project's next phase, where uploaded data from globally deployed diagnostic devices could provide a detailed picture of the spread or containment of mosquito-borne illnesses around the world.
But even as she helps advance this pathbreaking biotechnology, Reddy is intent on pursuing a more direct way of contributing to public health: She is applying to medical schools.
"I aspire to be a physician-anthropologist, because I don't think I can choose one or the other," she says. "I'd like to use the power of the white coat to listen to what people have to say, take care of them in a collaborative way, and maybe, while doing this, contribute a new perspective to both the medical and anthropology fields."
Story prepared by MIT Anthropology and MIT SHASS Communications
Communications Director: Emily Hiestand
Liaison: Irene Hartford
Writer: Leda Zimmerman
Seven MIT educators have received awards this year for their significant digital learning innovations and their contributions to teaching and learning at MIT and around the world.
Polina Anikeeva, Martin Bazant, and Jessica Sandland shared the third annual MITx Prize for Teaching and Learning in MOOCs — an award given to educators who have developed massive open online courses (MOOCs) that share the best of MIT knowledge and perspectives with learners around the world. Additionally, John Belcher, Amy Carleton, Jared Curhan, and Erik Demaine received Teaching with Digital Technology Awards, nominated by MIT students for their innovative use of digital technology to improve their teaching at MIT.
The MITx Prize for Teaching and Learning in MOOCs
This year’s MITx prize winners were honored at an MIT Open Learning event in May. Professor Polina Anikeeva of the Department of Materials Science and Engineering and Digital Learning Lab Scientist Jessica Sandland received the award for teaching 3.024x (Electronic, Optical and Magnetic Properties of Materials). The course was praised for not only its global impact, but also for the way in which it enhanced the residential experience. Increased flexibility from integrating the online content allowed for the addition of design reviews, which give MIT students firsthand experience working on complicated engineering problems.
3.024x is fast-paced and challenging. To bring some levity to the subject, the instructors designed problem sets around a series of superhero-themed comic strips that integrated the science and engineering concepts that students learned in class.
Martin Bazant, of the departments of Chemical Engineering and Mathematics, received the MITx prize for his course, 10.50.1x (Analysis of Transport Phenomena: Mathematical Methods). Most problems in the course involve long calculations, which can be tricky to demonstrate online.
To solve this challenge, Bazant broke up problems into smaller parts that included tips and tutorials to help learners solve the problem while maintaining the rigorous intellectual challenge. Course participants included a diverse group of college students, industry professionals, and faculty from other universities in many science and engineering disciplines across the globe.
Teaching with Digital Technology Awards
Co-sponsored by MIT Open Learning and the Office of the Vice Chancellor, the Teaching with Digital Technology Awards are student-nominated awards for faculty and instructors who have improved teaching and learning at MIT with digital technology. MIT students nominated 117 faculty and instructors for this award this year, more than in any previous year. The winners were celebrated at an awards luncheon in early June. John Belcher, Erik Demaine, and Jared Curhan attended the awards luncheon, and — in the spirit of an award reception for digital innovation — Amy Carleton joined the event virtually, through video chat.
John Belcher was honored for his physics courses on electricity and magnetism. Students appreciated the way that Belcher incorporated videos with his lectures to help provide a physical representation of an abstract subject. He created the animated videos to show visualizations of fundamental physics concepts such as energy transfer and magnetic fields. Students remarked that the videos helped them learn about everything from solar flares and the solar cycle to the fundamentally relativistic nature of electromagnetism.
Erik Demaine of the Computer Science and Artificial Intelligence Lab received the award for his course 6.892 (Fun with Hardness Proofs). The course flipped the traditional classroom model. Instead of lecturing in person, all lectures were posted online and problems were done in class. This allowed the students to spend class time working together on collaborative problem solving through an online application that Demaine created, called Coauthor.
Jared Curhan received the award for his negotiation courses at the MIT Sloan School of Management, including 15.672 (Negotiation Analysis), which he designed for students across the Institute. Curhan used digital technology to provide feedback while students practiced their negotiating skills in class. A platform called iDecisionGames helped simulate negotiation exercises between students, and after each exercise it provided data about how each participant performed, both objectively and subjectively.
Amy Carleton received the award for her course on science writing and new media. During the course, students learned how to write about scientific and technical topics for a general audience. They put their skills to work by writing Wikipedia articles, where they used advanced editing techniques and wrote mathematical expressions in LaTeX. They also used Google Docs during class to edit articles in small groups, and developed PowerPoint presentations where they learned to incorporate sound and graphics to emphasize their ideas.
Both awards celebrate instructors who are using technology in innovative ways to help teach challenging courses to both traditional students and online learners.
“At MIT, there is no shortage of digital learning innovation, and this year’s winners reflect the Institute’s strong commitment to transforming teaching and learning at MIT and around the globe,” says MIT Professor Krishna Rajagopal, dean for digital learning. “They have set new standards for online and blended learning.”
MIT professors Angelika Amon and Dina Katabi have been named to the Carnegie Corporation of New York’s 2019 list of Great Immigrants, Great Americans. These 38 naturalized U.S. citizens are noted as individuals who “strengthen America’s economy, enrich our culture and communities, and invigorate our democracy through their lives, their work, and their examples.”
Angelika Amon, who hails from Austria, is a molecular and cell biologist who studies cell growth and division and how errors in this process — specifically abnormal numbers of chromosomes — contribute to cancer, aging, and birth defects.
Amon arrived in Cambridge, Massachusetts, from Vienna in 1994 to complete a two-year postdoctoral fellowship at the Whitehead Institute for Biomedical Research; she was subsequently named a Whitehead Fellow for three years. Amon then joined the MIT Center for Cancer Research, now the Koch Institute for Integrative Cancer Research at MIT, and MIT’s Department of Biology in 1999. She became a full professor in 2007 and is currently the Kathleen and Curtis Marble Professor in Cancer Research, a Howard Hughes Medical Institute investigator, the co-associate director of the Glenn Center for Science of Aging Research at MIT, and the inaugural director of the Alana Down Syndrome Center at MIT. Her most recent awards include the 2019 Vilcek Prize in Biomedical Science and the 2019 Breakthrough Prize in Life Sciences.
Dina Katabi, who was born in Syria, is an engineer who works to improve the speed, reliability, and security of wireless networks. She is especially known for her work on a wireless system that can track human movement even through walls — a technology that has great potential for medical use.
Katabi joined the Department of Electrical Engineering and Computer Science faculty in 2003. She is a principal investigator in the Computer Science and Artificial Intelligence Laboratory (CSAIL), as well as director of the Networks at MIT research group and co-director of the MIT Center for Wireless Networks and Mobile Computing, both in CSAIL. Among other honors, Katabi has received a MacArthur Fellowship (sometimes called a “genius grant”), the Association for Computing Machinery (ACM) Prize in Computing, the ACM Grace Murray Hopper Award, a Test of Time Award from the ACM’s Special Interest Group on Data Communications, a National Science Foundation CAREER Award, and a Sloan Research Fellowship. She is an ACM Fellow and was elected to the National Academy of Engineering. She earned a bachelor’s degree from Damascus University and master’s and PhD degrees from MIT.
The Carnegie Corporation celebrates its Great Immigrants every Fourth of July as a way to honor exemplary naturalized U.S. citizens. The organization has named nearly 600 individuals to its list since 2006. Past MIT honorees include Professor Daron Acemoglu (Turkey), Professor Nergis Mavalvala (Pakistan), President L. Rafael Reif (Venezuela), Professor Emeritus Rainer Weiss (Germany), and Professor Feng Zhang (China).
Years ago, MIT Professor Neil Gershenfeld had an audacious thought. Struck by the fact that all the world’s living things are built out of combinations of just 20 amino acids, he wondered: Might it be possible to create a kit of just 20 fundamental parts that could be used to assemble all of the different technological products in the world?
Gershenfeld and his students have been making steady progress in that direction ever since. Their latest achievement, presented this week at an international robotics conference, consists of a set of five tiny fundamental parts that can be assembled into a wide variety of functional devices, including a tiny “walking” motor that can move back and forth across a surface or turn the gears of a machine.
Previously, Gershenfeld and his students showed that structures assembled from many small, identical subunits can have numerous mechanical properties. Next, they demonstrated that a combination of rigid and flexible part types can be used to create morphing airplane wings, a longstanding goal in aerospace engineering. Their latest work adds components for movement and logic, and will be presented at the International Conference on Manipulation, Automation and Robotics at Small Scales (MARSS) in Helsinki, Finland, in a paper by Gershenfeld and MIT graduate student Will Langford.
Their work offers an alternative to today’s approaches to constructing robots, which largely fall into one of two types: custom machines that work well but are relatively expensive and inflexible, and reconfigurable ones that sacrifice performance for versatility. In the new approach, Langford came up with a set of five millimeter-scale components, all of which can be attached to each other by a standard connector. These parts include the previous rigid and flexible types, along with electromagnetic parts: a coil and a magnet. In the future, the team plans to make these out of still smaller basic part types.
From this simple kit of tiny parts, Langford assembled a novel kind of motor that moves an appendage in discrete mechanical steps, which can be used to turn a gear wheel, and a mobile form of the motor that turns those steps into locomotion, allowing it to “walk” across a surface in a way that is reminiscent of the molecular motors that move muscles. These parts could also be assembled into hands for gripping, or legs for walking, as needed for a particular task, and then later reassembled as those needs change. Gershenfeld refers to them as “digital materials,” discrete parts that can be reversibly joined, forming a kind of functional micro-LEGO.
The new system is a significant step toward creating a standardized kit of parts that could be used to assemble robots with specific capabilities adapted to a particular task or set of tasks. Such purpose-built robots could then be disassembled and reassembled as needed in a variety of forms, without the need to design and manufacture new robots from scratch for each application.
Langford's initial motor has an ant-like ability to lift seven times its own weight. But if greater forces are required, many of these parts can be added to provide more oomph. Or if the robot needs to move in more complex ways, these parts could be distributed throughout the structure. The size of the building blocks can be chosen to match their application; the team has made nanometer-sized parts to make nanorobots, and meter-sized parts to make megarobots. Previously, specialized techniques were needed at each of these length scale extremes.
“One emerging application is to make tiny robots that can work in confined spaces,” Gershenfeld says. Some of the devices assembled in this project, for example, are smaller than a penny yet can carry out useful tasks.
To build in the “brains,” Langford has added part types that contain millimeter-sized integrated circuits, along with a few other part types to take care of connecting electrical signals in three dimensions.
The simplicity and regularity of these structures makes it relatively easy for their assembly to be automated. To do that, Langford has developed a novel machine that's like a cross between a 3-D printer and the pick-and-place machines that manufacture electronic circuits, but unlike either of those, this one can produce complete robotic systems directly from digital designs. Gershenfeld says this machine is a first step toward the project's ultimate goal of “making an assembler that can assemble itself out of the parts that it's assembling.”
“Standardization is an extremely important issue in microrobotics, to reduce the production costs and, as a result, to improve acceptance of this technology to the level of regular industrial robots,” says Sergej Fatikow, head of the Division of Microrobotics and Control Engineering, at the University of Oldenburg, Germany, who was not associated with this research. The new work “addresses assembling of sophisticated microrobotic systems from a small set of standard building blocks, which may revolutionize the field of microrobotics and open up numerous applications at small scales,” he says.
A newly developed material that is so perfectly transparent you can barely see it could unlock many new uses for solar heat. It generates much higher temperatures than conventional solar collectors do — enough to be used for home heating or for industrial processes that require heat of more than 200 degrees Celsius (392 degrees Fahrenheit).
The key to the process is a new kind of aerogel, a lightweight material that consists mostly of air, with a structure made of silica (which is also used to make glass). The material lets sunlight pass through easily but blocks solar heat from escaping. The findings are described in the journal ACS Nano, in a paper by Lin Zhao, an MIT graduate student; Evelyn Wang, professor and head of the Department of Mechanical Engineering; Gang Chen, the Carl Richard Soderberg Professor in Power Engineering; and five others.
The key to efficient collection of solar heat, Wang explains, is being able to keep something hot internally while remaining cold on the outside. One way of doing that is using a vacuum between a layer of glass and a dark, heat-absorbing material, which is the method used in many concentrating solar collectors but is relatively expensive to install and maintain. There has been great interest in finding a less expensive, passive system for collecting solar heat at the higher temperature levels needed for space heating, food processing, or many industrial processes.
Aerogels, a kind of foam-like material made of silica particles, have been developed for years as highly efficient and lightweight insulating materials, but they have generally had limited transparency to visible light, with around a 70 percent transmission level. Wang says developing a way of making aerogels that are transparent enough to work for solar heat collection was a long and difficult process involving several researchers for about four years. But the result is an aerogel that lets through over 95 percent of incoming sunlight while maintaining its highly insulating properties.
The key to making it work was in the precise ratios of the different materials used to create the aerogel, which is made by mixing a catalyst with grains of a silica-containing compound in a liquid solution, forming a kind of gel, and then drying it to get all the liquid out, leaving a matrix that is mostly air but retains the original mixture’s strength. Producing a mix that dries out much faster than those in conventional aerogels, they found, produced a gel with smaller pore spaces between its grains, and that therefore scattered the light much less.
In tests on a rooftop on the MIT campus, a passive device consisting of a heat-absorbing dark material covered with a layer of the new aerogel was able to reach and maintain a temperature of 220 degrees Celsius, in the middle of a Cambridge winter when the outside air was below 0 degrees Celsius.
Such high temperatures have previously only been practical by using concentrating systems, with mirrors to focus sunlight onto a central line or point, but this system requires no concentration, making it simpler and less costly. That could potentially make it useful for a wide variety of applications that require higher levels of heat.
For example, simple flat rooftop collectors are often used for domestic hot water, producing temperatures of around 80 degrees Celsius. But the higher temperatures enabled by the aerogel system could make such simple systems usable for home heating as well, and even for powering an air conditioning system. Large-scale versions could be used to provide heat for a wide variety of applications in chemical, food production, and manufacturing processes.
Zhao describes the basic function of the aerogel layer as “like a greenhouse effect. The material we use to increase the temperature acts like the Earth’s atmosphere does to provide insulation, but this is an extreme example of it.”
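The insulating effect Zhao describes can be sketched with a simple steady-state energy balance: the absorber heats up until the solar flux it absorbs through the transparent cover equals the heat leaking back out. The transmittance figure below follows the article's "over 95 percent"; the solar flux, absorptance, and loss coefficients are illustrative assumptions, not values from the paper.

```python
# Steady-state stagnation temperature of a dark solar absorber under a
# transparent, insulating cover. At equilibrium:
#   q_solar * tau * alpha = U * (T_abs - T_ambient)
# so the absorber settles at T_ambient plus absorbed flux over the loss
# coefficient U (W per square meter per kelvin).

q_solar = 1000.0   # incident solar flux, W/m^2 (typical clear-sky value; assumed)
tau = 0.95         # cover transmittance (article: aerogel passes over 95 percent)
alpha = 0.95       # absorptance of the dark absorber (assumed)
t_ambient = 0.0    # outside air temperature, deg C (a cold winter day)

def stagnation_temperature(u_loss):
    """Absorber temperature (deg C) at which heat loss balances absorbed sunlight."""
    return t_ambient + q_solar * tau * alpha / u_loss

# A highly insulating aerogel cover (small U) versus a leaky single glazing:
print(round(stagnation_temperature(4.0)))    # strongly insulated: well above 200 C
print(round(stagnation_temperature(30.0)))   # poorly insulated: only ~30 C
```

With a loss coefficient of a few watts per square meter per kelvin, the balance lands above 200 degrees Celsius even at freezing ambient temperatures, consistent with the rooftop result reported above; with an order of magnitude more leakage, the same sunlight yields only modest warming.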
For most purposes, the passive heat collection system would be connected to pipes containing a liquid that could circulate to transfer the heat to wherever it’s needed. Alternatively, Wang suggests, for some uses the system could be connected to heat pipes, devices that can transfer heat over a distance without requiring pumps or any moving parts.
Because the principle is essentially the same, an aerogel-based solar heat collector could directly replace the vacuum-based collectors used in some existing applications, providing a lower-cost option. The materials used to make the aerogel are all abundant and inexpensive; the only costly part of the process is the drying, which requires a specialized device called a critical point dryer to allow for a very precise drying process that extracts the solvents from the gel while preserving its nanoscale structure.
Because that is a batch process rather than a continuous one that could be used in roll-to-roll manufacturing, it could limit the rate of production if the system is scaled up to industrial production levels. “The key to scaleup is how we can reduce the cost of that process,” Wang says. But even now, a preliminary economic analysis shows that the system can be economically viable for some uses, especially in comparison with vacuum-based systems.
The research team included research scientist Bikram Bhatia, postdoc Sungwoo Yang, graduate student Elise Strobach, instructor Lee Weinstein and postdoc Thomas Cooper. The work was primarily funded by the U.S. Department of Energy’s ARPA-E program.
The MIT Energy Initiative (MITEI) recently awarded seven grants totaling approximately $1 million through its Seed Fund Program, which supports early-stage innovative energy research at MIT through an annual competitive process.
“Supporting basic research has always been a core component of MITEI’s mission to transform and decarbonize global energy systems,” says MITEI Director Robert C. Armstrong, the Chevron Professor of Chemical Engineering. “This year’s funded projects highlight just a few examples of the many ways that people working across the energy field are researching vital topics to create a better world.”
The newly awarded projects will address topics such as developing efficient strategies for recycling plastics, improving the stability of high-energy metal-halogen flow batteries, and increasing the potential efficiency of silicon solar cells to accelerate the adoption of photovoltaics. Awardees include established energy faculty members and others who are new to the energy field, from disciplines including applied economics, chemical engineering, biology, and other areas.
Demand-response policies and incentives for energy efficiency adoption
Most of today’s energy growth is occurring in developing countries. Assistant Professor Namrata Kala and Professor Christopher Knittel, both of whom focus on applied economics at the MIT Sloan School of Management, will use their grant to examine key policy levers for meeting electricity demand and renewable energy growth without jeopardizing system reliability in the developing world.
Kala and Knittel plan to design and run a randomized control trial in New Delhi, India, in collaboration with a large Indian power company. “We will estimate the willingness of firms to enroll in services that reduce peak consumption, and also promote energy efficiency,” says Kala, the W. Maurice Young (1961) Career Development Professor of Management. “Estimating the costs and benefits of such services, and their allocation across customers and electricity providers, can inform policies that promote energy efficiency in a cost-effective manner.”
Efficient conversion of methane to methanol
Methane, the primary component of natural gas, has become an increasingly important part of the global energy portfolio. However, the chemical inertness of methane and the lack of efficient methods to convert this gaseous carbon feedstock into liquid fuels has significantly limited its application. Yang Shao-Horn, the W.M. Keck Professor of Energy in the departments of Mechanical Engineering and Materials Science and Engineering, seeks to address this problem using her seed fund grant. Shao-Horn and Shuai Yuan, a postdoc in the Research Laboratory of Electronics, will focus on achieving efficient, cost-effective gas-to-liquid conversion using metal-organic frameworks (MOFs) as electrocatalysts.
Current methane activation and conversion processes are usually accomplished by costly and energy-intensive steam reforming at elevated temperature and high pressure. Shao-Horn and Yuan’s goal is to design efficient MOF-based electrocatalysts that will permit the methane-to-methanol conversion process to proceed at ambient temperature and pressure.
“If successful, this electrochemical gas-to-liquid concept could lead to a modular, efficient, and cost-effective solution that can be deployed in both large-scale industrial plants and remotely located oil fields to increase the utility of geographically isolated gas reserves,” says Shao-Horn.
Using machine learning to solve the “zeolite conundrum”
The energy field is replete with opportunities for machine learning to expedite progress toward a variety of innovative energy solutions. Rafael Gómez-Bombarelli, the Toyota Assistant Professor in Materials Processing in the Department of Materials Science and Engineering, received a grant for a project that will combine machine learning and simulation to accelerate the discovery cycle of zeolites.
Zeolites are materials with wide-ranging industrial applications as catalysts and molecular sieves because of their high stability and selective nanopores that can confine small molecules. Despite decades of abundant research, only 248 zeolite frameworks have been realized out of the millions of possible structures that have been proposed using computers — the so-called zeolite conundrum.
The problem, notes Gómez-Bombarelli, is that discovery of these new frameworks has relied mostly on trial-and-error in the lab — an approach that is both slow and labor-intensive.
In his seed grant work, Gómez-Bombarelli and his team will be using theory to speed up that process. “Using machine learning and first-principles simulations, we’ll design small molecules to dock on specific pores and direct the formation of targeted structures,” says Gómez-Bombarelli. “This computational approach will drive new synthetic outcomes in zeolites faster.”
Effective recycling of plastics
Professor Anthony Sinskey of the Department of Biology, Professor Gregory Stephanopoulos of the Department of Chemical Engineering, and graduate student Linda Zhong of biology have joined forces to address the environmental and economic problems posed by polyethylene terephthalate (PET). One of the most synthesized plastics, PET exhibits an extremely low degradation rate and its production is highly dependent on petroleum feedstocks.
“Due to the huge negative impacts of PET products, efficient recycling strategies need to be designed to decrease economic loss and adverse environmental impacts associated with single-use practices,” says Sinskey.
“PET is essentially an organic polymer of terephthalic acid and ethylene glycol, both of which can be metabolized by bacteria as energy and nutrients. These capacities exist in nature, though not together,” says Zhong. “Our goal is to engineer these metabolic pathways into E. coli to allow the bacterium to grow on PET. Using genetic engineering, we will introduce the PET-degrading enzymes into E. coli and ultimately transfer them into bioremediation organisms.”
The long-term goal of the project is to prototype a bioprocess for closed-loop PET recycling, which will decrease the volume of discarded PET products as well as the consumption of petroleum and energy for PET synthesis.
The researchers’ primary motivation in pursuing this project echoes MITEI’s overarching goal for the seed fund program: to push the boundaries of research and innovation to solve global energy and climate challenges. Zhong says, “We see a dire need for this research because our world is inundated in plastic trash. We’re only attempting to solve a tiny piece of the global problem, but we must try when much of what we hold dear depends on it.”
The MITEI Seed Fund Program has awarded new grants each year since it was established in 2008. Funding for the grants comes chiefly from MITEI’s founding and sustaining members, supplemented by gifts from generous donors. To date, MITEI has supported 177 projects with grants totaling approximately $23.6 million.
Recipients of MITEI Seed Fund grants for 2019 are:
- "Development and prototyping of stable, safe, metal‐halogen flow batteries with high energy and power densities" — Martin Bazant of the departments of Chemical Engineering and Mathematics and T. Alan Hatton of the Department of Chemical Engineering;
- "Silicon solar cells sensitized by exciton fission" — Marc Baldo of the Department of Electrical Engineering and Computer Science;
- "Automatic design of structure‐directing agents for novel realizable zeolites" — Rafael Gómez‐Bombarelli of the Department of Materials Science and Engineering;
- "Demand response, energy efficiency, and firm decisions" — Namrata Kala and Christopher Knittel of the Sloan School of Management;
- "Direct conversion of methane to methanol by MOF‐based electrocatalysts" — Yang Shao‐Horn of the departments of Mechanical Engineering and Materials Science and Engineering;
- "Biodegradation of plastics for efficient recycling and bioremediation" — Anthony Sinskey of the Department of Biology and Gregory Stephanopoulos of the Department of Chemical Engineering; and
- "Asymmetric chemical doping for photocatalytic CO2 reduction" — Michael Strano of the Department of Chemical Engineering.
One Saturday this spring, toy-sized cars were zipping along the classroom floor of the Roxbury Innovation Center. However, no one was following them around with remote controls. Instead, each car used code to react autonomously to obstacles — code written by a classroom full of middle school students.
Andrew Fishberg, a staff member in the Advanced Capabilities and Systems Group at Lincoln Laboratory, had seen how students engaged with the Rapid Autonomous Complex-Environment Competing Ackermann-steering Robot (RACECAR) during the workshop at the Beaver Works Summer Institute (BWSI). However, BWSI is aimed at high school seniors who already excel in science, technology, engineering, and math, and Fishberg was worried that the program was reaching students too late in their educations to have maximum impact. "It only gets harder [to learn coding] the later you get to the students," he says. "I think the future of these things is at the middle school age."
So, with the help of a handful of volunteers including Eyassu Shimelis from the Advanced Concepts Technologies Group and several high school volunteers — almost all of whom were coincidentally named Dan — and the Timothy Smith Network, Fishberg designed a four-week program to introduce middle schoolers to coding by programming race cars.
The Timothy Smith Network is named for a wealthy merchant who spent most of his life in Roxbury, Massachusetts, and upon his death in 1918 bequeathed his estate to improve the welfare of Roxbury residents. Since 1996, that trust has been used to bring the benefits of computer technology to residents via dozens of public technology centers and educational programs. Collaboration between the Timothy Smith Network and Lincoln Laboratory could help the RACECAR program reach middle schoolers who might not otherwise have the opportunity to learn to code. "Our motto is inclusiveness," says Khalid Mustafa, the IT director of the Timothy Smith Network. "Too often we have all these rules that filter people out. How do we invite people in?"
Both groups wanted to stress accessibility. Although coding is becoming a fundamental part of a modern education, schools in communities with limited educational budgets are significantly less likely to offer computer science classes. Holding the workshop at the Roxbury Innovation Center, instead of at the Beaver Works Center in Cambridge, made it more accessible for lower-income students and students of color.
The workshop was stretched out over four hours each Saturday for a month. Between one and two dozen students attended each time. The first three workshops focused on coding basics, such as Boolean data (data that has one of two possible values, often true or false) and the difference between "or" and "exclusive or" ("exclusive or" is true if only one value is true, whereas "or" is true if at least one value is true).
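The or/exclusive-or distinction the students learned is easy to make concrete in a few lines of Python (a sketch for illustration; the workshop's own exercises may have differed):

```python
# "or" is true when at least one operand is true;
# "exclusive or" is true only when exactly one is.
def inclusive_or(a: bool, b: bool) -> bool:
    return a or b

def exclusive_or(a: bool, b: bool) -> bool:
    return a != b  # true only when the two values differ

# Print the truth table for both operators side by side
for a in (False, True):
    for b in (False, True):
        print(a, b, inclusive_or(a, b), exclusive_or(a, b))
```

The one case where the two disagree is when both inputs are true: "or" accepts it, "exclusive or" rejects it.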
"We don't want to lock anybody out because they haven't had a chance to program before, so we had to start from square one," Fishberg explains. He would teach a principle and demonstrate the code on screen at the front of the classroom, then have students call out answers to build a program together. More often than not, the code the students created wouldn't run the way they wanted it to. At that point, Fishberg would walk the students through the code, explaining the logic with which computers approach problems and allowing the students to find the bugs themselves. Usually, the code spat out numbers as expected within minutes.
On the fourth Saturday, Fishberg brought out the race cars. Slightly larger than a phone book, the cars have the wheels and body of an off-the-shelf remote-controlled car, upon which is mounted a piece of cardboard that holds a spinning lidar, a small processor, and a battery pack. The cars cost around $500 each to build and made their debut at this RACECAR middle school event as a cost-effective alternative to the race cars used at BWSI. The excitement — from the volunteers, students, and observers — was palpable. "Basically everything they've been learning … turns into the logic to drive the car," Fishberg said. "That application really drives home the learning objectives."
The race cars "see" by using lidar: each car's lidar system shoots out a pulse of laser light and measures how long it takes to bounce back. By aiming laser beams in 720 directions all around the car, the lidar system can map the distance between it and the nearest obstacles. The students began by calibrating their race cars, finding which direction marked zero degrees in the lidar's measurements by circling the car with pieces of cardstock and looking for changes in the lidar's readouts. They also calibrated the car to drive straight forward when prompted, and then moved on to harder coding challenges, such as having the car stop itself when it sensed an obstacle. The students were exhilarated by their successes and suggested designs for complex obstacle courses the cars could navigate. Fishberg used this as an opportunity to impart another fundamental principle of coding: KISS, or Keep It Simple, Silly.
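The stop-on-obstacle exercise can be sketched in a few lines of Python. This is a hypothetical illustration, not the workshop's actual code; the sector indices and safety threshold are assumptions chosen for the example:

```python
# A lidar scan modeled as 720 distance readings in meters
# (one per half degree), index 0 at the zero-degree calibration mark.
SAFETY_DISTANCE_M = 0.5  # illustrative threshold: stop if anything ahead is closer

def should_stop(scan, forward_start=350, forward_end=370):
    """Return True if any reading in the forward-facing sector is too close."""
    forward = scan[forward_start:forward_end]
    return min(forward) < SAFETY_DISTANCE_M

# Simulated scan: clear everywhere except directly ahead
scan = [10.0] * 720
scan[360] = 0.3  # a piece of cardstock 0.3 m in front of the car
print(should_stop(scan))  # → True
```

The calibration step the students performed amounts to finding which index corresponds to "directly ahead" so the forward sector is sliced correctly.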
Both Fishberg and Mustafa hope to make this year's workshop the first of many. Coding is becoming a vital skill, and introducing students to it at a young age opens doors in their education — such as to the BWSI, which is aimed at high school seniors who perform well in the STEM fields. "We're looking to expand," Mustafa says. "Our goal is to really establish this [class] as a model."
Mustafa thought that the most powerful part of the workshop and the collaboration between the Timothy Smith Network and Lincoln Laboratory was the way it brought people together who wouldn't otherwise be able to share their skills. "How do we give each other access to each other?" Mustafa asks. "That's really the key."
The MIT Media Lab has added 11 members to the diverse group of visionary innovators and leaders it calls the Director’s Fellows.
Now in its seventh year, the Director’s Fellows program links a vast array of creators, advocates, artists, scientists, educators, philosophers, and others to the lab. The goal of the program is for the fellows to get involved in the lab’s work, bringing new perspectives, ideas, and knowledge to projects and initiatives.
Conversely, the fellows spread insights, knowledge, and work of the lab out into the world, giving it exposure in spaces as varied as fashion, human rights, and sports.
“My intention was to bring a wide range of voices into the Media Lab that we might not otherwise hear, because I firmly believe that technology and engineering alone cannot address the complexity of the challenges we face in today’s world,” says Joi Ito, director of the Media Lab. “Addressing an issue as complex as climate change or public health requires solutions involving philosophy and politics and anthropology — a range of knowledge, skills, and talents that we don’t necessarily have at the lab.”
With the addition of this year’s fellows, the Director’s Fellows network will be roughly 70 people strong. The fellows may collaborate on projects with students and faculty, serve as advisers, or bring their own project ideas into the lab. Those living abroad may participate in Media Lab workshops and other offsite events.
Fellows have a formal affiliation with the lab for two years, but the hope is that the network continues to flourish after that period ends. “Our intention is to keep them as close as possible, both to each other and to the lab,” says Claudia Robaina, the program’s director. “They are great resources for us and for each other, a huge network of collaborators.”
The fellows this year are as diverse as ever, although Robaina says there is perhaps a greater diversity of age than in the typical class. Among them are a career police officer, a freestyle skateboarder, and a physician.
The Media Lab’s 2019 Director’s Fellows are listed below.
Jaylen Brown, an NBA basketball player with the Boston Celtics, has a wide range of interests, including history, finance, technology, and meditation. Considered an innovator by his peers, he entered the NBA draft in 2016 without an agent, and a year later created a stir by pulling together a networking event for rookie players at the NBA Summer League, which was followed by a “Tech Hustle” event at the NBA All-Star Weekend that attracted venture capitalists, rap stars, and corporate chieftains to help players understand investment.
Jan Fuller, a former senior digital forensics investigator for the Redmond Police Department in Washington state, began conducting forensic investigations of electronic devices in 2003, when 1 gigabyte was a lot of data. Currently, she’s pursuing projects aimed at improving law enforcement capabilities deployed against digital crimes and coaching and mentoring students interested in careers in digital forensics.
Kathy Jetñil-Kijiner, a poet of Marshall Islands ancestry, achieved international acclaim with her performance at the opening of the United Nations Climate Summit in New York in 2014. She has published a collection of poetry, Iep Jāltok: Poems from a Marshallese Daughter, and she directs a Marshall Islands-based nonprofit dedicated to empowering Marshallese youth to seek solutions to the environmental challenges their homeland faces.
Ayana Elizabeth Johnson, founder and chief executive of Ocean Collectiv, a consulting firm for conservation solutions, is a marine biologist and policy expert. She founded the Urban Ocean Lab, a think tank focused on coastal cities, and has worked on ocean policy at the U.S. Environmental Protection Agency and the National Oceanic and Atmospheric Administration.
Lehua Kamalu, an apprentice navigator and the voyaging director at the Polynesian Voyaging Society, researched and devised the sail plan for Hōkūleʻa, a double-hulled canoe, as it circumnavigated the Earth from 2014 to 2018 on a voyage named “Malama Honua — to care for the Earth.” She sees the practice of deep-sea voyaging as a means to challenge the depth and quality of our individual relationships to the ocean, nature, and one another.
AiLun Ku, president and chief operating officer at The Opportunity Network, works to create spaces for first-generation high school and college students of color to enhance and improve their postsecondary and career readiness education. She trains partners to integrate culturally balanced, student-centered curriculum design with rigorous data-driven practices with the goal of influencing systems that have traditionally excluded young people of color from college and career opportunities.
Nonabah Lane, a member of the Navajo Nation, is a sustainability specialist and entrepreneur in environmental and culturally conscious business development, energy education, and tribal community commitment. She is a co-founder of Navajo Ethno-Agriculture, a farm that teaches Navajo culture through traditional farming and bilingual education and is active in promoting and developing tribal sustainable energy strategies.
Kate McCall-Kiley, co-founder and director at xD, an emerging technology lab within the U.S. government, works to create new environments and mechanisms for behavior change while experimenting with different ways to productively challenge convention. She served as a White House Presidential Innovation Fellow for the Obama administration, where she worked on projects including vote.gov, The Opportunity Project, worker.gov, BroadbandUSA, and Vice President Joe Biden's Cancer Moonshot.
Rodney Mullen, co-founder of one of the most dominant skateboarding companies in America, invented many of the tricks in use in skateboarding today and holds two patents related to the sport’s equipment. He has pivoted to work in the open source community, where he finds many parallels between the creativity of skateboarders and hackers. He still skates two hours a day.
Elizabeth Pettit, executive director of Clínica Integral Almas in Álamos, Mexico, which works with remote indigenous communities, is a physician. Medicine and work in rural public health is a second act: Pettit previously was a designer, creating specialty materials for art and architecture and for the film and entertainment industry.
Michael Tubbs, mayor of Stockton, California, has received national attention for his ambitious progressive agenda, which includes securing $20 million to finance scholarships to triple the number of the city’s students entering and graduating from college, and launching the country’s first universal basic income pilot project. He is the youngest mayor in the history of the country to represent a city with more than 100,000 residents and is Stockton’s first African-American mayor.
Learn more about all of the fellows from all seven cohorts at directorsfellows.media.mit.edu.
M. Taylor Fravel, the Arthur and Ruth Sloan Professor of Political Science, has been named director of the MIT Security Studies Program (SSP). Barry Posen, Ford International Professor of Political Science and director of SSP since 2006, announced the leadership transition to the SSP community at its recent gala dinner.
Fravel takes over as director today. Posen will continue his research and teaching responsibilities at MIT. As a member of SSP, he will continue leading the Grand Strategy, Security, and Statecraft Fellows Program.
“SSP is a community of scholars dedicated to the proposition that the problem of international and internal war merits sustained study. I have every confidence that Taylor will bring an infusion of new ideas, and energy to attempt new initiatives, that come with a new leader,” says Posen.
SSP is widely recognized as a leader in its field, generating research on international security issues and training graduate students for careers in academia, government, business, and civil society organizations. The MIT Center for International Studies (CIS) provides the intellectual home and the administrative infrastructure for SSP.
Fravel is an expert on international security, with a focus on China’s foreign and security policies. He joined MIT in 2004 as assistant professor of political science and member of SSP. He currently serves on the editorial boards of Security Studies, Journal of Strategic Studies, and the China Quarterly, and is a member of the board of directors for the National Committee on United States-China Relations.
Fravel’s most recent book, "Active Defense: China's Military Strategy Since 1949," was published by Princeton University Press earlier this year. It has been praised as “the first book to provide a comprehensive history of China’s military doctrine as it has evolved since the founding of the People’s Republic.”
“SSP is one of the country’s preeminent university-based programs for the study of international security,” Fravel says. “For more than 40 years, the faculty, fellows, and students of SSP have been conducting policy-relevant and rigorous research on questions of war and peace, both among states and within them. I am honored to be given this opportunity to serve as director and look forward to working with my SSP and MIT colleagues in this new role.”
Fravel is a graduate of Middlebury College and Stanford University, where he received his PhD in political science. He has been a postdoc at the Olin Institute for Strategic Studies at Harvard University, a predoctoral fellow at the Center for International Security and Cooperation at Stanford University, a fellow with the Princeton-Harvard China and the World Program, and a visiting scholar at the American Academy of Arts and Sciences. He also has graduate degrees from the London School of Economics and Oxford University, where he was a Rhodes scholar.
Today’s smartphones often use artificial intelligence (AI) to help make the photos we take crisper and clearer. But what if these AI tools could be used to create entire scenes from scratch?
A team from MIT and IBM has now done exactly that with “GANpaint Studio,” a system that can automatically generate realistic photographic images and edit objects inside them. In addition to helping artists and designers make quick adjustments to visuals, the researchers say the work may help computer scientists identify “fake” images.
David Bau, a PhD student at MIT’s Computer Science and Artificial Intelligence Lab (CSAIL), describes the project as one of the first times computer scientists have been able to actually “paint with the neurons” of a neural network — specifically, a popular type of network called a generative adversarial network (GAN).
Available online as an interactive demo, GANpaint Studio allows a user to upload an image of their choosing and modify multiple aspects of its appearance, from changing the size of objects to adding completely new items like trees and buildings.
Boon for designers
Spearheaded by MIT professor Antonio Torralba as part of the MIT-IBM Watson AI Lab he directs, the project has vast potential applications. Designers and artists could use it to make quicker tweaks to their visuals. Adapting the system to video clips would enable computer-graphics editors to quickly compose specific arrangements of objects needed for a particular shot. (Imagine, for example, if a director filmed a full scene with actors but forgot to include an object in the background that’s important to the plot.)
GANpaint Studio could also be used to improve and debug other GANs that are being developed, by analyzing them for “artifact” units that need to be removed. In a world where opaque AI tools have made image manipulation easier than ever, it could help researchers better understand neural networks and their underlying structures.
“Right now, machine learning systems are these black boxes that we don’t always know how to improve, kind of like those old TV sets that you have to fix by hitting them on the side,” says Bau, lead author on a related paper about the system with a team overseen by Torralba. “This research suggests that, while it might be scary to open up the TV and take a look at all the wires, there’s going to be a lot of meaningful information in there.”
One unexpected discovery is that the system actually seems to have learned some simple rules about the relationships between objects. It somehow knows not to put something somewhere it doesn’t belong, like a window in the sky, and it also creates different visuals in different contexts. For example, if there are two different buildings in an image and the system is asked to add doors to both, it doesn’t simply add identical doors — they may ultimately look quite different from each other.
“All drawing apps will follow user instructions, but ours might decide not to draw anything if the user commands to put an object in an impossible location,” says Torralba. “It’s a drawing tool with a strong personality, and it opens a window that allows us to understand how GANs learn to represent the visual world.”
GANs are sets of neural networks developed to compete against each other. In this case, one network is a generator focused on creating realistic images, and the second is a discriminator whose goal is to not be fooled by the generator. Every time the discriminator ‘catches’ the generator, the feedback from that decision flows back to the generator, which allows it to continuously get better.
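The adversarial dynamic can be sketched in miniature. The toy below is an illustration only, not the GANpaint networks: a one-parameter generator competes with a logistic discriminator on one-dimensional data, and the generator's samples drift toward the real data as the two trade hand-derived gradient updates:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1 / (1 + math.exp(-x))

REAL_MEAN = 4.0    # "real" data are noisy samples near 4
b = 0.0            # generator G(z) = z + b, starts far from the data
wd, bd = 0.0, 0.0  # discriminator D(x) = sigmoid(wd*x + bd)
lr = 0.05

for step in range(2000):
    x_real = random.gauss(REAL_MEAN, 0.5)
    x_fake = random.gauss(0, 1) + b

    # Discriminator update: push D(real) toward 1 and D(fake) toward 0
    d_real = sigmoid(wd * x_real + bd)
    d_fake = sigmoid(wd * x_fake + bd)
    wd += lr * ((1 - d_real) * x_real - d_fake * x_fake)
    bd += lr * ((1 - d_real) - d_fake)

    # Generator update: move fakes toward where the discriminator
    # currently says "real" (non-saturating GAN loss, gradients by hand)
    d_fake = sigmoid(wd * x_fake + bd)
    b += lr * (1 - d_fake) * wd

print(round(b, 2))  # b has drifted from 0 toward REAL_MEAN
```

The discriminator's "catch" is exactly the `(1 - d_fake)` term: the more confidently a fake is rejected, the larger the corrective push on the generator.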
“It’s truly mind-blowing to see how this work enables us to directly see that GANs actually learn something that’s beginning to look a bit like common sense,” says Jaakko Lehtinen, an associate professor at Finland’s Aalto University who was not involved in the project. “I see this ability as a crucial steppingstone to having autonomous systems that can actually function in the human world, which is infinite, complex and ever-changing.”
Stamping out unwanted “fake” images
The team’s goal has been to give people more control over GAN networks. But they recognize that with increased power comes the potential for abuse, like using such technologies to doctor photos. Co-author Jun-Yan Zhu says that he believes that better understanding GANs — and the kinds of mistakes they make — will help researchers be able to better stamp out fakery.
“You need to know your opponent before you can defend against it,” says Zhu, a postdoc at CSAIL. “This understanding may potentially help us detect fake images more easily.”
To develop the system, the team first identified units inside the GAN that correlate with particular types of objects, like trees. It then tested these units individually to see if getting rid of them would cause certain objects to disappear or appear. Importantly, they also identified the units that cause visual errors (artifacts) and worked to remove them to increase the overall quality of the image.
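The causal test described above — zero out a unit, see what disappears — can be illustrated with a deliberately toy model. This is not the GANpaint code; the unit names and the dictionary "layer" are invented for the example:

```python
# Toy layer: each "unit" contributes one object to the rendered scene.
LAYER = {                 # unit name -> (activation, object it draws)
    "unit_7":  (0.9, "tree"),
    "unit_12": (0.8, "door"),
    "unit_31": (0.7, "artifact"),   # a unit that only produces visual errors
}

def render(layer):
    """Objects that appear in the image: those with nonzero activation."""
    return {obj for act, obj in layer.values() if act > 0}

def ablate(layer, unit):
    """Zero one unit's activation, mimicking the paper's causal test."""
    act, obj = layer[unit]
    return {**layer, unit: (0.0, obj)}

print(render(LAYER))                     # all three objects present
print(render(ablate(LAYER, "unit_31")))  # "artifact" gone; image is cleaner
```

In the real system the correspondence between units and objects is discovered by correlating activations with segmentation maps, and ablation acts on feature maps rather than single scalars, but the appear/disappear logic is the same.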
“Whenever GANs generate terribly unrealistic images, the cause of these mistakes has previously been a mystery,” says co-author Hendrik Strobelt, a research scientist at IBM. “We found that these mistakes are triggered by specific sets of neurons that we can silence to improve the quality of the image.”
Bau, Strobelt, Torralba and Zhu co-wrote the paper with former CSAIL PhD student Bolei Zhou, postdoctoral associate Jonas Wulff, and undergraduate student William Peebles. They will present it next month at the SIGGRAPH conference in Los Angeles. “This system opens a door into a better understanding of GAN models, and that’s going to help us do whatever kind of research we need to do with GANs,” says Lehtinen.
In a moment more reminiscent of a Comic-Con event than a typical MIT symposium, Shawn Robinson, senior research associate at the University of Wisconsin at Madison, helped kick off the first-ever MIT Science of Reading event dressed in full superhero attire as Doctor Dyslexia Dude — the star of a graphic novel series he co-created to engage and encourage young readers, rooted in his own experiences as a student with dyslexia.
The event, co-sponsored by the MIT Integrated Learning Initiative (MITili) and the McGovern Institute for Brain Research at MIT, took place earlier this month and brought together researchers, educators, administrators, parents, and students to explore how scientific research can better inform educational practices and policies — equipping teachers with scientifically-based strategies that may lead to better outcomes for students.
Professor John Gabrieli, MITili director, explained the great need to focus the collective efforts of educators and researchers on literacy.
“Reading is critical to all learning and all areas of knowledge. It is the first great educational experience for all children, and can shape a child’s first sense of self,” he said. “If reading is a challenge or a burden, it affects children’s social and emotional core.”
A great divide
Reading is also a particularly important area to address because so many American students struggle with this fundamental skill. More than six out of every 10 fourth graders in the United States are not proficient readers, and reading scores for fourth and eighth graders have improved only slightly since 1992, according to the National Assessment of Educational Progress.
Gabrieli explained that, just as with biomedical research, where there can be a “valley of death” between basic research and clinical application, the same seems to apply to education. Although there is substantial current research aiming to better understand why students might have difficulty reading in the ways they are currently taught, the research often does not necessarily shape the practices of teachers — or how the teachers themselves are trained to teach.
This divide between the research and practical applications in the classroom might stem from a variety of factors. One issue might be that research publications are often not freely accessible to all — along with the general need for scientific findings to be communicated in a clear, accessible, engaging way that can lead to actual implementation. Another challenge is the stark difference in pacing between scientific research and classroom teaching. While research can take years to complete and publish, teachers have classrooms full of students — all with different strengths and challenges — who urgently need to learn in real time.
Natalie Wexler, author of "The Knowledge Gap," described some of the obstacles to getting the findings of cognitive science integrated into the classroom as matters of “head, heart, and habit.” Teacher education programs tend to focus more on some of the outdated psychological models, like Piaget’s theory of cognitive development, and less on recent cognitive science research. Teachers also have to face the emotional realities of working with their students, and might be concerned that a new approach would cause students to feel bored or frustrated. In terms of habit, some new, evidence-based approaches may be, in a practical sense, difficult for teachers to incorporate into the classroom.
“Teaching is an incredibly complex activity,” noted Wexler.
From labs to classrooms
Throughout the day, speakers and panelists highlighted some key insights gained from literacy research, along with some of the implications these might have on education.
Mark Seidenberg, professor of psychology at the University of Wisconsin at Madison and author of "Language at the Speed of Sight," discussed studies indicating the strong connection between spoken and printed language.
“Reading depends on speech,” said Seidenberg. “Writing systems are codes for expressing spoken language … Spoken language deficits have an enormous impact on children’s reading.”
The integration of speech and reading in the brain increases with reading skill. For skilled readers, the patterns of brain activity (measured using functional magnetic resonance imaging) while comprehending spoken and written language are very similar. Becoming literate affects the neural representation of speech, and knowledge of speech affects the representation of print — thus the two become deeply intertwined.
In addition, researchers have found that the language of books, even books for young children, includes words and expressions that are rarely encountered in speech to children. Therefore, reading aloud to children exposes them to a broader range of linguistic expressions — including more complex ones that are usually only taught much later. Thus reading to children can be especially important, as research indicates that better knowledge of spoken language facilitates learning to read.
Although behavior and performance on tests are often used as indicators of how well a student can read, neuroscience data can now provide additional information. Neuroimaging of children and young adults identifies brain regions that are critical for integrating speech and print, and can spot differences in the brain activity of a child who might be especially at-risk for reading difficulties. Brain imaging can also show how readers’ brains respond to certain reading and comprehension tasks, and how they adapt to different circumstances and challenges.
“Brain measures can be more sensitive than behavioral measures in identifying true risk,” said Ola Ozernov-Palchik, a postdoc at the McGovern Institute.
Ozernov-Palchik hopes to apply what her team is learning in their current studies to predict reading outcomes for other children, as well as continue to investigate individual differences in dyslexia and dyslexia-risk using behavior and neuroimaging methods.
Identifying certain differences early on can be tremendously helpful in providing much-needed early interventions and tailored solutions. Many speakers noted the problem with the current “wait-to-fail” model of noticing that a child has a difficult time reading in second or third grade, and then intervening. Research suggests that earlier intervention could help the child succeed much more than later intervention.
Speakers and panelists spoke about current efforts, including Reach Every Reader (a collaboration between MITili, the Harvard Graduate School of Education, and the Florida Center for Reading Research), that seek to provide support to students by bringing together education practitioners and scientists.
“We have a lot of information, but we have the challenge of how to enact it in the real world,” said Gabrieli, noting that he is optimistic about the potential for the additional conversations and collaborations that might grow out of the discussions of the Science of Reading event. “We know a lot of things can be better and will require partnerships, but there is a path forward.”
In many situations, engineers want to minimize the contact of droplets of water or other liquids with surfaces they fall onto. Whether the goal is keeping ice from building up on an airplane wing or a wind turbine blade, or preventing heat loss from a surface during rainfall, or preventing salt buildup on surfaces exposed to ocean spray, making droplets bounce away as fast as possible and minimizing the amount of contact with the surface can be key to keeping systems functioning properly.
Now, a study by researchers at MIT demonstrates a new approach to minimizing the contact between droplets and surfaces. While previous attempts, including by members of the same team, have focused on minimizing the amount of time the droplet spends in contact with the surface, the new method instead focuses on the spatial extent of the contact, trying to minimize how far a droplet spreads out before bouncing away.
The new findings are described in the journal ACS Nano in a paper by MIT graduate student Henri-Louis Girard, postdoc Dan Soto, and professor of mechanical engineering Kripa Varanasi. The key to the process, they explain, is creating a series of raised ring shapes on the material’s surface, which cause the falling droplet to splash upward in a bowl-shaped pattern instead of flowing out flat across the surface.
The work is a follow-up to an earlier project by Varanasi and his team, in which they were able to reduce the contact time of droplets on a surface by creating raised ridges on the surface, which disrupted the spreading pattern of impacting droplets. But the new work takes this further, achieving a much greater reduction in the combination of contact time and contact area of a droplet.
In order to prevent icing on an airplane wing, for example, it is essential to get the droplets of impacting water to bounce away in less time than it takes for the water to freeze. The earlier ridged surface did succeed in reducing the contact time, but Varanasi says “since then, we found there’s another thing at play here,” which is how far the drop spreads out before rebounding and bouncing off. “Reducing the contact area of the impacting droplet should also have a dramatic impact on transfer properties of the interaction,” Varanasi says.
The team initiated a series of experiments that demonstrated that raised rings of just the right size, covering the surface, would cause the water spreading out from an impacting droplet to splash upward instead, forming a bowl-shaped splash, and that the angle of that upward splash could be controlled by adjusting the height and profile of those rings. If the rings are too large or too small compared to the size of the droplets, the system becomes less effective or doesn’t work at all, but when the size is right, the effect is dramatic.
It turns out that reducing the contact time alone is not sufficient to achieve the greatest reduction in contact; it's the combination of the time and area of contact that's critical. In a graph of the time of contact on one axis and the area of contact on the other, what really matters is the total area under the curve: the product of the time and the extent of contact. The area of the spreading "was another axis that no one has touched" in previous research, Girard says. "When we started doing so, we saw a drastic reaction," reducing the total time-and-area contact of the droplet by 90 percent. "The idea of reducing contact area by forming 'waterbowls' has far greater effect on reducing the overall interaction than by reducing contact time alone," Varanasi says.
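The figure of merit described above, the area under the contact-area-versus-time curve, can be sketched numerically. The contact-area histories below are fabricated for illustration; only the idea of comparing the time-integrated contact of a flat surface against a ring-patterned one comes from the article.

```python
def contact_integral(times, areas):
    """Trapezoidal estimate of the time-integrated contact area,
    i.e. the area under the A(t) curve described in the article."""
    return sum((a0 + a1) / 2.0 * (t1 - t0)
               for (t0, a0), (t1, a1)
               in zip(zip(times, areas), zip(times[1:], areas[1:])))

# Hypothetical contact-area histories (mm^2) sampled every 0.1 ms:
# a flat surface lets the droplet spread wide and long, while a
# ring-patterned surface keeps the contact smaller and shorter.
# These particular curves are illustrative, not measured data.
times = [0.1 * i for i in range(101)]                    # 0 to 10 ms
flat = [max(0.0, t * (10.0 - t)) for t in times]         # wide, slow rebound
rings = [max(0.0, 0.25 * t * (4.0 - t)) for t in times]  # "waterbowl" case

total_flat = contact_integral(times, flat)
total_rings = contact_integral(times, rings)
reduction = 1.0 - total_rings / total_flat
print(f"reduction in time-integrated contact: {reduction:.0%}")
```

The point of the sketch is that shrinking either the duration or the spread of contact shrinks the integral, but shrinking both at once, as the rings do, compounds the effect.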
As the droplet starts to spread out within the raised circle, as soon as it hits the circle’s edge it begins to deflect. “Its momentum is redirected upward,” Girard says, and although it ends up spreading outward about as far as it would have otherwise, it is no longer on the surface, and therefore not cooling the surface off, or leading to icing, or blocking the pores on a “waterproof” fabric.
Credit: Henri-Louis Girard, Dan Soto, and Kripa Varanasi
The rings themselves can be made in different ways and from different materials, the researchers say; it's just the size and spacing that matter. For some tests, they used rings 3-D printed on a substrate, and for others they used a surface with a pattern created through an etching process similar to that used in microchip manufacturing. Other rings were made through computer-controlled milling of plastic.
While higher-velocity droplet impacts generally can be more damaging to a surface, with this system the higher velocities actually improve the effectiveness of the redirection, clearing even more of the liquid than at slower speeds. That’s good news for practical applications, for example in dealing with rain, which has relatively high velocity, Girard says. “It actually works better the faster you go,” he says.
In addition to keeping ice off airplane wings, the new system could have a wide variety of applications, the researchers say. For example, “waterproof” fabrics can become saturated and begin to leak when water fills up the spaces between the fibers, but when treated with the surface rings, fabrics kept their ability to shed water for longer, and performed better overall, Girard says. “There was a 50 percent improvement by using the ring structures,” he says.
The research was supported by MIT’s Deshpande Center for Technological Innovation.
In the Iron Man movies, Tony Stark uses a holographic computer to project 3-D data into thin air, manipulate them with his hands, and find fixes to his superhero troubles. In the same vein, researchers from MIT and Brown University have now developed a system for interactive data analytics that runs on touchscreens and lets everyone — not just billionaire tech geniuses — tackle real-world issues.
For years, the researchers have been developing an interactive data-science system called Northstar, which runs in the cloud but has an interface that supports any touchscreen device, including smartphones and large interactive whiteboards. Users feed the system datasets, and manipulate, combine, and extract features on a user-friendly interface, using their fingers or a digital pen, to uncover trends and patterns.
In a paper being presented at the ACM SIGMOD conference, the researchers detail a new component of Northstar, called VDS for "virtual data scientist," that instantly generates machine-learning models to run prediction tasks on their datasets. Doctors, for instance, can use the system to help predict which patients are more likely to have certain diseases, while business owners might want to forecast sales. If using an interactive whiteboard, everyone can also collaborate in real time.
The aim is to democratize data science by making it easy to do complex analytics, quickly and accurately.
“Even a coffee shop owner who doesn’t know data science should be able to predict their sales over the next few weeks to figure out how much coffee to buy,” says co-author and long-time Northstar project lead Tim Kraska, an associate professor of electrical engineering and computer science at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and founding co-director of the new Data Systems and AI Lab (DSAIL). “In companies that have data scientists, there’s a lot of back and forth between data scientists and nonexperts, so we can also bring them into one room to do analytics together.”
VDS is based on an increasingly popular technique in artificial intelligence called automated machine learning (AutoML), which lets people with limited data-science know-how train AI models to make predictions based on their datasets. Currently, the tool leads the DARPA D3M Automatic Machine Learning competition, which ranks the best-performing AutoML tools every six months.
Joining Kraska on the paper are: first author Zeyuan Shang, a graduate student, and Emanuel Zgraggen, a postdoc and main contributor to Northstar, both of EECS, CSAIL, and DSAIL; Benedetto Buratti, Yeounoh Chung, Philipp Eichmann, and Eli Upfal, all of Brown; and Carsten Binnig, who recently moved from Brown to the Technical University of Darmstadt in Germany.
An “unbounded canvas” for analytics
The new work builds on years of collaboration on Northstar between researchers at MIT and Brown. Over four years, the researchers have published numerous papers detailing components of Northstar, including the interactive interface, operations on multiple platforms, accelerating results, and studies on user behavior.
Northstar starts as a blank, white interface. Users upload datasets into the system, which appear in a “datasets” box on the left. Any data labels will automatically populate a separate “attributes” box below. There’s also an “operators” box that contains various algorithms, as well as the new AutoML tool. All data are stored and analyzed in the cloud.
The researchers like to demonstrate the system on a public dataset that contains information on intensive care unit patients. Consider medical researchers who want to examine co-occurrences of certain diseases in certain age groups. They drag and drop into the middle of the interface a pattern-checking algorithm, which at first appears as a blank box. As input, they move into the box disease features labeled, say, “blood,” “infectious,” and “metabolic.” Percentages of those diseases in the dataset appear in the box. Then, they drag the “age” feature into the interface, which displays a bar chart of the patients’ age distribution. Drawing a line between the two boxes links them together. When they circle an age range, the algorithm immediately computes the co-occurrence of the three diseases within that range.
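The co-occurrence query in that walkthrough boils down to filtering on an age range and counting the patients who carry all of the selected disease labels. A minimal sketch, with a handful of fabricated records standing in for the ICU dataset:

```python
# Fabricated patient records; the field names mirror the article's
# example, but the data are invented for illustration.
patients = [
    {"age": 67, "blood": True,  "infectious": True,  "metabolic": True},
    {"age": 45, "blood": False, "infectious": True,  "metabolic": False},
    {"age": 72, "blood": True,  "infectious": True,  "metabolic": True},
    {"age": 30, "blood": False, "infectious": False, "metabolic": True},
    {"age": 70, "blood": True,  "infectious": False, "metabolic": True},
]

def co_occurrence(records, diseases, age_range):
    """Fraction of patients in age_range who have every listed disease."""
    lo, hi = age_range
    cohort = [r for r in records if lo <= r["age"] <= hi]
    if not cohort:
        return 0.0
    hits = sum(all(r[d] for d in diseases) for r in cohort)
    return hits / len(cohort)

rate = co_occurrence(patients, ["blood", "infectious", "metabolic"], (60, 80))
print(f"co-occurrence in ages 60-80: {rate:.0%}")
```

In Northstar the same computation is assembled visually, by linking the pattern-checking box to the circled region of the age chart, rather than written as code.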
“It’s like a big, unbounded canvas where you can lay out how you want everything,” says Zgraggen, who is the key inventor of Northstar’s interactive interface. “Then, you can link things together to create more complex questions about your data.”
With VDS, users can now also run predictive analytics on that data by getting models custom-fit to their tasks, such as data prediction, image classification, or analyzing complex graph structures.
Using the above example, say the medical researchers want to predict which patients may have blood disease based on all features in the dataset. They drag and drop “AutoML” from the list of algorithms. It’ll first produce a blank box, but with a “target” tab, under which they’d drop the “blood” feature. The system will automatically find best-performing machine-learning pipelines, presented as tabs with constantly updated accuracy percentages. Users can stop the process at any time, refine the search, and examine each model’s error rates, structure, computations, and other details.
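Under the hood, a search of this kind enumerates candidate pipelines, scores each one, and keeps the user informed of the best so far. The sketch below is a deliberately tiny stand-in for that loop: fixed threshold rules on synthetic data, not the learned preprocessing-and-model pipelines a real AutoML system explores.

```python
import random

random.seed(0)

# Toy binary task standing in for "predict the blood-disease label":
# one numeric feature, two well-separated classes.
data = ([(random.gauss(2.0, 1.0), 1) for _ in range(50)] +
        [(random.gauss(-2.0, 1.0), 0) for _ in range(50)])
random.shuffle(data)
train, held_out = data[:70], data[70:]

def make_pipeline(threshold):
    # A "pipeline" here is just a threshold rule, for illustration.
    return lambda x: 1 if x > threshold else 0

candidates = {f"threshold@{t}": make_pipeline(t)
              for t in (-2.0, -1.0, 0.0, 1.0, 2.0)}

def accuracy(model, rows):
    return sum(model(x) == y for x, y in rows) / len(rows)

best_name, best_acc = None, 0.0
for name, model in candidates.items():
    acc = accuracy(model, train)
    if acc > best_acc:  # the "constantly updated" best-so-far tab
        best_name, best_acc = name, acc
        print(f"new best pipeline: {name} ({acc:.0%} train accuracy)")

print(f"held-out accuracy of {best_name}: "
      f"{accuracy(candidates[best_name], held_out):.0%}")
```

Reporting each new best as it is found is what lets an interface like Northstar show usable results long before the search finishes.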
According to the researchers, VDS is the fastest interactive AutoML tool to date, thanks in part to their custom “estimation engine,” which sits between the interface and the cloud storage. The engine automatically creates several representative samples of a dataset that can be progressively processed to produce high-quality results in seconds.
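The progressive-sampling idea behind such an engine can be illustrated with a simple statistic: compute it on ever-larger random samples, returning a usable approximation at every step. The population and the statistic here are stand-ins; the real engine works on the user's dataset and queries.

```python
import random

random.seed(1)
# Stand-in "dataset": 100,000 numeric values with true mean near 50.
population = [random.gauss(50.0, 10.0) for _ in range(100_000)]

def progressive_estimates(data, sample_sizes):
    """Yield (sample size, estimate) pairs of increasing quality,
    so an interface can show a rough answer immediately and refine it."""
    for n in sample_sizes:
        sample = random.sample(data, n)
        yield n, sum(sample) / n

for n, est in progressive_estimates(population, [100, 1_000, 10_000]):
    print(f"n={n:>6}: estimated mean = {est:.2f}")
```

Each pass costs roughly ten times the previous one, which is why the first approximation can appear in seconds while the back end keeps refining.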
“Together with my co-authors I spent two years designing VDS to mimic how a data scientist thinks,” Shang says, meaning it instantly identifies which models and preprocessing steps it should or shouldn’t run on certain tasks, based on various encoded rules. It first chooses from a large list of those possible machine-learning pipelines and runs simulations on the sample set. In doing so, it remembers results and refines its selection. After delivering fast approximated results, the system refines the results in the back end. But the final numbers are usually very close to the first approximation.
“For using a predictor, you don’t want to wait four hours to get your first results back. You want to already see what’s going on and, if you detect a mistake, you can immediately correct it. That’s normally not possible in any other system,” Kraska says. The researchers’ previous user study, in fact, showed that “the moment you delay giving users results, they start to lose engagement with the system.”
The researchers evaluated the tool on 300 real-world datasets. Compared with other state-of-the-art AutoML systems, VDS’s approximations were just as accurate but were generated within seconds, rather than the minutes to hours those tools require.
Next, the researchers are looking to add a feature that alerts users to potential data bias or errors. For instance, to protect patient privacy, sometimes researchers will label medical datasets with patients aged 0 (if they do not know the age) and 200 (if a patient is over 95 years old). But novices may not recognize such errors, which could completely throw off their analytics.
“If you’re a new user, you may get results and think they’re great,” Kraska says. “But we can warn people that there, in fact, may be some outliers in the dataset that may indicate a problem.”
Cytokines, small proteins released by immune cells to communicate with each other, have for some time been investigated as a potential cancer treatment.
However, despite their known potency and potential for use alongside other immunotherapies, cytokines have yet to be successfully developed into an effective cancer therapy.
That is because the proteins are highly toxic to both healthy tissue and tumors alike, making them unsuitable for use in treatments administered to the entire body.
Injecting the cytokine treatment directly into the tumor itself could provide a method of confining its benefits to the tumor and sparing healthy tissue, but previous attempts to do this have resulted in the proteins leaking out of the cancerous tissue and into the body’s circulation within minutes.
Now researchers at the Koch Institute for Integrative Cancer Research at MIT have developed a technique to prevent cytokines from escaping once they have been injected into the tumor, by adding a Velcro-like protein that attaches itself to the tissue.
In this way the researchers, led by Dane Wittrup, the Carbon P. Dubbs Professor in Chemical Engineering and Biological Engineering and a member of the Koch Institute, hope to limit the harm caused to healthy tissue, while prolonging the treatment’s ability to attack the tumor.
To develop their technique, which they describe in a paper published today in the journal Science Translational Medicine, the researchers first investigated the different proteins found in tumors, to find one that could be used as a target for the cytokine treatment. They chose collagen, which is expressed abundantly in solid tumors.
They then undertook an extensive literature search to find proteins that bind effectively to collagen. They discovered a collagen-binding protein called lumican, which they then attached to the cytokines.
“When we inject (a collagen-anchoring cytokine treatment) intratumorally, we don’t have to worry about collagen found elsewhere in the body; we just have to make sure we have a protein that binds to collagen very tightly,” says lead author Noor Momin, a graduate student in the Wittrup Lab at MIT.
To test the treatment, the researchers used two cytokines known to stimulate and expand immune cell responses. The cytokines, interleukin-2 (IL-2) and interleukin-12 (IL-12), are also known to combine well with other immunotherapies.
Although IL-2 already has FDA approval, its severe side-effects have so far prevented its clinical use. Meanwhile IL-12 therapies have not yet reached phase 3 clinical trials due to their severe toxicity.
The researchers tested the treatment by injecting the two different cytokines into tumors in mice. To make the test more challenging, they chose a type of melanoma that contains relatively low amounts of collagen, compared to other tumor types.
They then compared the effects of administering the cytokines alone and of injecting cytokines attached to the collagen-binding lumican.
“In addition, all of the cytokine therapies were given alongside a form of systemic therapy, such as a tumor-targeting antibody, a vaccine, a checkpoint blockade, or chimeric antigen receptor (CAR)-T cell therapy, as we wanted to show the potential of combining cytokines with many different immunotherapy modalities,” Momin says.
They found that when any of the treatments were administered individually, the mice did not survive. Combining the treatments improved survival rates slightly, but when the cytokine was administered with the lumican to bind to the collagen, the researchers found that over 90 percent of the mice survived with some combinations.
“So we were able to show that these combinations are synergistic, they work really well together, and that cytokines attached to lumican really helped reap the full benefits of the combination,” Momin says.
What’s more, attaching the lumican eliminated the problem of toxicity associated with cytokine treatments alone.
The paper attempts to address a major obstacle in the oncology field, that of how to target potent therapeutics to the tumor microenvironment to enable their local action, according to Shannon Turley, a staff scientist and specialist in cancer immunology at Genentech, who was not involved in the research.
“This is important because many of the most promising cancer drugs can have unwanted side effects in tissues beyond the tumor,” Turley says. “The team’s approach relies on two principles that together make for a novel approach: injection of the drug directly into the tumor site, and engineering of the drug to contain a ‘Velcro’ that attaches the drug to the tumor to keep it from leaking into circulation and acting all over the body.”
The researchers now plan to carry out further work to improve the technique, and to explore other treatments that could benefit from being combined with collagen-binding lumican, Momin says.
Ultimately, they hope the work will encourage other researchers to consider the use of collagen binding for cancer treatments, Momin says.
“We’re hoping the paper seeds the idea that collagen anchoring could be really advantageous for a lot of different therapies across all solid tumors.”